  • Zuiran Books: Reinforcement Learning and Optimal Control, 9787302540328
  • Genuine and brand new
    • Author: Dimitri P. Bertsekas (US)
    • Publisher: Tsinghua University Press
    • Publication date: 2020-06-01

    Product parameters
    • Author: Dimitri P. Bertsekas (US)
    • Publisher: Tsinghua University Press
    • Publication date: 2020-06-01
    • Edition: 1
    • Printing: 1
    • Word count: 411,000
    • Pages: 373
    • Format: 32mo
    • ISBN: 9787302540328
    • Rights provided by: Tsinghua University Press
    • Binding: Paperback
    • List price: 149.00 CNY
    • Printing date: not specified
    • Language: not specified
    • External item number: 1202094838
    • Finished size: not specified

    1. Exact Dynamic Programming

    1.1. Deterministic Dynamic Programming

    1.1.1. Deterministic Problems

    1.1.2. The Dynamic Programming Algorithm

    1.1.3. Approximation in Value Space

    1.2. Stochastic Dynamic Programming

    1.3. Examples, Variations, and Simplifications

    1.3.1. Deterministic Shortest Path Problems

    1.3.2. Discrete Deterministic Optimization

    1.3.3. Problems with a Termination State

    1.3.4. Forecasts

    1.3.5. Problems with Uncontrollable State Components

    1.3.6. Partial State Information and Belief States

    1.3.7. Linear Quadratic Optimal Control

    1.3.8. Systems with Unknown Parameters - Adaptive Control

    1.4. Reinforcement Learning and Optimal Control - Some Terminology

    1.5. Notes and Sources

    2. Approximation in Value Space

    2.1. Approximation Approaches in Reinforcement Learning

    2.1.1. General Issues of Approximation in Value Space

    2.1.2. Off-Line and On-Line Methods

    2.1.3. Model-Based Simplification of the Lookahead Minimization

    2.1.4. Model-Free Off-Line Q-Factor Approximation

    2.1.5. Approximation in Policy Space on Top of Approximation in Value Space

    2.1.6. When is Approximation in Value Space Effective?

    2.2. Multistep Lookahead

    2.2.1. Multistep Lookahead and Rolling Horizon

    2.2.2. Multistep Lookahead and Deterministic Problems

    2.3. Problem Approximation

    2.3.1. Enforced Decomposition

    2.3.2. Probabilistic Approximation - Certainty Equivalent Control

    2.4. Rollout and the Policy Improvement Principle

    2.4.1. On-Line Rollout for Deterministic Discrete Optimization

    2.4.2. Stochastic Rollout and Monte Carlo Tree Search

    2.4.3. Rollout with an Expert

    2.5. On-Line Rollout for Deterministic Infinite-Spaces Problems - Optimization Heuristics

    2.5.1. Model Predictive Control

    2.5.2. Target Tubes and the Constrained Controllability Condition

    2.5.3. Variants of Model Predictive Control

    2.6. Notes and Sources

    3. Parametric Approximation

    3.1. Approximation Architectures

    3.1.1. Linear and Nonlinear Feature-Based Architectures

    3.1.2. Training of Linear and Nonlinear Architectures

    3.1.3. Incremental Gradient and Newton Methods

    3.2. Neural Networks

    3.2.1. Training of Neural Networks

    3.2.2. Multilayer and Deep Neural Networks

    3.3. Sequential Dynamic Programming Approximation

    3.4. Q-Factor Parametric Approximation

    3.5. Parametric Approximation in Policy Space by Classification

    3.6. Notes and Sources

    4. Infinite Horizon Dynamic Programming

    4.1. An Overview of Infinite Horizon Problems

    4.2. Stochastic Shortest Path Problems

    4.3. Discounted Problems

    4.4. Semi-Markov Discounted Problems

    4.5. Asynchronous Distributed Value Iteration

    4.6. Policy Iteration

    4.6.1. Exact Policy Iteration

    4.6.2. Optimistic and Multistep Lookahead Policy Iteration

    4.6.3. Policy Iteration for Q-Factors

    4.7. Notes and Sources

    4.8. Appendix: Mathematical Analysis

    4.8.1. Proofs for Stochastic Shortest Path Problems

    4.8.2. Proofs for Discounted Problems

    4.8.3. Convergence of Exact and Optimistic Policy Iteration

    5. Infinite Horizon Reinforcement Learning

    5.1. Approximation in Value Space - Performance Bounds

    5.1.1. Limited Lookahead

    5.1.2. Rollout and Approximate Policy Improvement

    5.1.3. Approximate Policy Iteration

    5.2. Fitted Value Iteration

    5.3. Simulation-Based Policy Iteration with Parametric Approximation

    5.3.1. Self-Learning and Actor-Critic Methods

    5.3.2. Model-Based Variant of a Critic-Only Method

    5.3.3. Model-Free Variant of a Critic-Only Method

    5.3.4. Implementation Issues of Parametric Policy Iteration

    5.3.5. Convergence Issues of Parametric Policy Iteration - Oscillations

    5.4. Q-Learning

    5.4.1. Optimistic Policy Iteration with Parametric Q-Factor Approximation - SARSA and DQN

    5.5. Additional Methods - Temporal Differences

    ……

    Dimitri P. Bertsekas is a tenured professor at MIT, a member of the US National Academy of Engineering, and a visiting professor at the Center for Complex and Networked Systems at Tsinghua University. A leading author in electrical engineering and computer science, he has written more than a dozen textbooks and monographs, including Nonlinear Programming, Network Optimization, and Convex Optimization.

    "Dimitri P. Bertseka,美国MIT终身教授,美国工程院院士,清华大学复杂与网络化系统研究中心客座教授,电气工程与计算机科学领域靠前知名作者,著有《非线规划》《网络优化》《凸优化》等十几本教材和专著。本书的目的是考虑大型且具有挑战的多阶段决策问题,这些问题原则上可以通过动态规划和很优控制来解决,但它们的准确解决方案在计算上是难以处理的。本书讨论依赖于近似的解决方法,以产生具有足够能的次优策略。这些方法统称为学习,也可以叫做近似动态规划和神经动态规划等。
    本书的主题产生于很优控制和人工智能思想的相互作用。本书的目的之一是探索这两个领域之间的共同边界,并架设一座具有任一领域背景的专业人士都可以访问的桥梁。
    "
