Abstract Dynamic Programming (2nd Edition), by Dimitri P. Bertsekas (USA), ISBN 978730259
1 Introduction
1.1 Structure of Dynamic Programming Problems
1.2 Abstract Dynamic Programming Models
1.2.1 Problem Formulation
1.2.2 Monotonicity and Contraction Properties
1.2.3 Some Examples
1.2.4 Approximation Models-Projected and Aggregation Bellman Equations
1.2.5 Multistep Models-Temporal Difference and Proximal Algorithms
1.3 Organization of the Book
1.4 Notes, Sources, and Exercises
2 Contractive Models
2.1 Bellman's Equation and Optimality Conditions
2.2 Limited Lookahead Policies
2.3 Value Iteration
2.4 Policy Iteration
2.4.1 Approximate Policy Iteration
2.4.2 Approximate Policy Iteration Where Policies Converge
2.5 Optimistic Policy Iteration and λ-Policy Iteration
2.5.1 Convergence of Optimistic Policy Iteration
2.5.2 Approximate Optimistic Policy Iteration
2.5.3 Randomized Optimistic Policy Iteration
2.6 Asynchronous Algorithms
2.6.1 Asynchronous Value Iteration
2.6.2 Asynchronous Policy Iteration
2.6.3 Optimistic Asynchronous Policy Iteration with a Uniform Fixed Point
2.7 Notes, Sources, and Exercises
3 Semicontractive Models
3.1 Pathologies of Noncontractive DP Models
3.1.1 Deterministic Shortest Path Problems
3.1.2 Stochastic Shortest Path Problems
3.1.3 The Blackmailer's Dilemma
3.1.4 Linear-Quadratic Problems
3.1.5 An Intuitive View of Semicontractive Analysis
3.2 Semicontractive Models and Regular Policies
3.2.1 S-Regular Policies
3.2.2 Restricted Optimization over S-Regular Policies
3.2.3 Policy Iteration Analysis of Bellman's Equation
3.2.4 Optimistic Policy Iteration and λ-Policy Iteration
3.2.5 A Mathematical Programming Approach
3.3 Irregular Policies/Infinite Cost Case
3.4 Irregular Policies/Finite Cost Case-A Perturbation Approach
3.5 Applications in Shortest Path and Other Contexts
3.5.1 Stochastic Shortest Path Problems
3.5.2 Affine Monotonic Problems
3.5.3 Robust Shortest Path Planning
3.5.4 Linear-Quadratic Optimal Control
3.5.5 Continuous-State Deterministic Optimal Control
3.6 Algorithms
3.6.1 Asynchronous Value Iteration
3.6.2 Asynchronous Policy Iteration
3.7 Notes, Sources, and Exercises
4 Noncontractive Models
4.1 Noncontractive Models-Problem Formulation
4.2 Finite Horizon Problems
4.3 Infinite Horizon Problems
4.3.1 Fixed Point Properties and Optimality Conditions
4.3.2 Value Iteration
4.3.3 Exact and Optimistic Policy Iteration-λ-Policy Iteration
4.4 Regularity and Nonstationary Policies
4.4.1 Regularity and Monotone Increasing Models
4.4.2 Nonnegative Cost Stochastic Optimal Control
4.4.3 Discounted Stochastic Optimal Control
4.4.4 Convergent Models
4.5 Stable Policies for Deterministic Optimal Control
4.5.1 Forcing Functions and p-Stable Policies
4.5.2 Restricted Optimization over Stable Policies
4.5.3 Policy Iteration Methods
4.6 Infinite-Spaces Stochastic Shortest Path Problems
4.6.1 The Multiplicity of Solutions of Bellman's Equation
4.6.2 The Case of Bounded Cost per Stage
4.7 Notes, Sources, and Exercises
Appendix A: Notation and Mathematical Conventions
A.1 Set Notation and Conventions
A.2 Functions
Appendix B: Contraction Mappings
B.1 Contraction Mapping Fixed Point Theorems
B.2 Weighted Sup-Norm Contractions
References
Index
Dimitri P. Bertsekas is a tenured professor at MIT, a member of the US National Academy of Engineering, and a visiting professor at the Center for Complex and Networked Systems at Tsinghua University. He is an internationally renowned author in electrical engineering and computer science, with more than a dozen widely used textbooks and monographs, including Nonlinear Programming, Network Optimization, and Convex Optimization.
Built on dynamic programming, this book uses the monotonicity of abstract mappings and contraction mapping theory to study a number of typical problems in dynamic programming and approximate dynamic programming. Its main characteristics are that it does not rely on the stochastic nature of the problems discussed, nor on certain special features of particular classes of dynamic programming problems. The theory and methods presented are at the frontier of stochastic operations research and stochastic control; their rigorous analysis and techniques are of significant theoretical value and have broad application prospects at the intersection of mathematics and artificial intelligence.
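As a rough illustration of the contraction-based viewpoint described above (not code from the book), the following Python sketch applies value iteration to a small discounted Markov decision problem: the Bellman operator is a sup-norm contraction with modulus equal to the discount factor, so the iterates converge to its unique fixed point. The MDP data, the discount factor, and the helper name bellman_operator are all made up for this example.

```python
import numpy as np

# A tiny discounted MDP, invented for illustration: 3 states, 2 actions.
# P[a][i, j] = probability of moving from state i to state j under action a;
# g[a][i]    = expected one-stage cost at state i under action a.
P = [np.array([[0.8, 0.2, 0.0],
               [0.1, 0.7, 0.2],
               [0.0, 0.3, 0.7]]),
     np.array([[0.5, 0.5, 0.0],
               [0.0, 0.6, 0.4],
               [0.2, 0.2, 0.6]])]
g = [np.array([2.0, 1.0, 3.0]),
     np.array([1.5, 2.5, 0.5])]
alpha = 0.9  # discount factor; also the sup-norm contraction modulus of T

def bellman_operator(J):
    """Apply T to J: (TJ)(i) = min_a [ g_a(i) + alpha * sum_j P_a(i, j) J(j) ]."""
    return np.min([g[a] + alpha * P[a] @ J for a in range(len(P))], axis=0)

# Value iteration J_{k+1} = T J_k converges to the unique fixed point J* = T J*,
# since T is a contraction of modulus alpha in the sup-norm.
J = np.zeros(3)
for k in range(200):
    J_next = bellman_operator(J)
    if np.max(np.abs(J_next - J)) < 1e-10:
        break
    J = J_next

print("Approximate fixed point J*:", J_next)
```

The same fixed-point argument is what the abstract framework isolates: monotonicity and the contraction property of the mapping are all that the convergence proof uses, independently of the probabilistic structure of any particular model.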
The main purpose of the second edition is to expand the coverage of semicontractive models in Chapters 3 and 4 of the first edition (2013), supplemented with results the author has published in journals and reports since then. The mathematical content is elegant and rigorous, relying on the power of abstraction to focus on fundamentals. The book offers the first comprehensive, unified treatment of the field while also presenting much new research, some of it related to currently very active areas such as approximate dynamic programming. Numerous examples are interspersed throughout, unified by rigorous theory and applied to specific classes of problems such as discounted, stochastic shortest path, semi-Markov, minimax, sequential game, multiplicative, and risk-sensitive models. The book also includes exercises (with complete solutions) that supplement the text with examples, counterexamples, and theoretical extensions. Like several of Bertsekas's other books, it is well written and well suited to self-study, and it can serve as a supplement for a graduate course on dynamic programming.