Stochastic Controls
Hamiltonian Systems and HJB Equations
Paperback by Xun Yu Zhou (et al.)
Language: English

192,59 €*

incl. VAT

Free shipping via post / DHL

Currently unavailable

Description
As is well known, Pontryagin's maximum principle and Bellman's dynamic programming are the two principal and most commonly used approaches in solving stochastic optimal control problems. An interesting phenomenon one can observe from the literature is that these two approaches have been developed separately and independently. Since both methods are used to investigate the same problems, a natural question to ask is the following: (Q) What is the relationship between the maximum principle and dynamic programming in stochastic optimal controls? Some research on this relationship did exist prior to the 1980s; nevertheless, the results were usually stated in heuristic terms and proved under rather restrictive assumptions that are not satisfied in most cases.

In the statement of a Pontryagin-type maximum principle there is an adjoint equation, which is an ordinary differential equation (ODE) in the (finite-dimensional) deterministic case and a stochastic differential equation (SDE) in the stochastic case. The system consisting of the adjoint equation, the original state equation, and the maximum condition is referred to as an (extended) Hamiltonian system. In Bellman's dynamic programming, on the other hand, there is a partial differential equation (PDE), of first order in the (finite-dimensional) deterministic case and of second order in the stochastic case. This is known as a Hamilton-Jacobi-Bellman (HJB) equation.
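For orientation, here is a minimal sketch of the objects named above for a controlled diffusion, in a common notation for a minimization problem; the symbols and sign conventions are assumptions of this sketch, not a quotation from the book.

\[
\begin{aligned}
&dX(t) = b\big(t,X(t),u(t)\big)\,dt + \sigma\big(t,X(t),u(t)\big)\,dW(t), \qquad X(0)=x_0,\\
&J\big(u(\cdot)\big) = \mathbb{E}\Big[\int_0^T f\big(t,X(t),u(t)\big)\,dt + h\big(X(T)\big)\Big] \;\to\; \min.
\end{aligned}
\]

Dynamic programming leads to the second-order HJB equation for the value function \(V\):
\[
-V_t(t,x) = \inf_{u\in U}\Big\{ \tfrac{1}{2}\,\mathrm{tr}\big[\sigma\sigma^{\top}(t,x,u)\,V_{xx}(t,x)\big]
+ \big\langle b(t,x,u),\, V_x(t,x)\big\rangle + f(t,x,u) \Big\}, \qquad V(T,x)=h(x).
\]

The maximum principle instead introduces the Hamiltonian
\(H(t,x,u,p,q) = \langle p,\, b(t,x,u)\rangle + \mathrm{tr}\big[q^{\top}\sigma(t,x,u)\big] - f(t,x,u)\)
and the adjoint equation, a backward SDE,
\[
dp(t) = -H_x\big(t,X(t),u(t),p(t),q(t)\big)\,dt + q(t)\,dW(t), \qquad p(T) = -h_x\big(X(T)\big),
\]
together with the maximum condition \(H\big(t,X(t),u(t),p(t),q(t)\big) = \max_{v\in U} H\big(t,X(t),v,p(t),q(t)\big)\). The state equation, the adjoint equation, and this condition together form the (extended) stochastic Hamiltonian system referred to above.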
Summary
This monograph unifies the two key approaches to solving optimal control problems. The book will be of interest to researchers and graduate students in applied probability, control engineering, and econometrics.
Table of Contents
1. Basic Stochastic Calculus: Probability; Stochastic Processes; Stopping Times; Martingales; Itô's Integral; Stochastic Differential Equations.
2. Stochastic Optimal Control Problems: Introduction; Deterministic Cases Revisited; Examples of Stochastic Control Problems; Formulations of Stochastic Optimal Control Problems; Existence of Optimal Controls; Reachable Sets of Stochastic Control Systems; Other Stochastic Control Models; Historical Remarks.
3. Maximum Principle and Stochastic Hamiltonian Systems: Introduction; The Deterministic Case Revisited; Statement of the Stochastic Maximum Principle; A Proof of the Maximum Principle; Sufficient Conditions of Optimality; Problems with State Constraints; Historical Remarks.
4. Dynamic Programming and HJB Equations: Introduction; The Deterministic Case Revisited; The Stochastic Principle of Optimality and the HJB Equation; Other Properties of the Value Function; Viscosity Solutions; Uniqueness of Viscosity Solutions; Historical Remarks.
5. The Relationship Between the Maximum Principle and Dynamic Programming: Introduction; Classical Hamilton-Jacobi Theory; Relationship for Deterministic Systems; Relationship for Stochastic Systems; Stochastic Verification Theorems; Optimal Feedback Controls; Historical Remarks.
6. Linear Quadratic Optimal Control Problems: Introduction; The Deterministic LQ Problems Revisited; Formulation of Stochastic LQ Problems; Finiteness and Solvability; A Necessary Condition and a Hamiltonian System; Stochastic Riccati Equations; Global Solvability of Stochastic Riccati Equations; A Mean-Variance Portfolio Selection Problem; Historical Remarks.
7. Backward Stochastic Differential Equations: Introduction; Linear Backward Stochastic Differential Equations; Nonlinear Backward Stochastic Differential Equations; Feynman-Kac-Type Formulae; Forward-Backward Stochastic Differential Equations; Option Pricing Problems; Historical Remarks.
References.
Details
Year of publication: 2012
Subject area: Probability theory
Genre: Mathematics
Category: Science & Technology
Format: Paperback
Series: Stochastic Modelling and Applied Probability
Contents: xxii, 439 pp.
ISBN-13: 9781461271543
ISBN-10: 1461271541
Language: English
Features / extras: Paperback
Binding: Softcover
Authors: Zhou, Xun Yu; Yong, Jiongmin
Edition: Softcover reprint of the original 1st ed. 1999
Publisher: Springer New York; Springer US, New York, N.Y.; Stochastic Modelling and Applied Probability
Dimensions: 235 x 155 x 25 mm
By: Xun Yu Zhou (et al.)
Publication date: 27.09.2012
Weight: 0.698 kg
Item ID: 105721229