
  • [Genuine new copy] 强化学习的数学原理(英文版) (Mathematical Foundations of Reinforcement Learning, English Edition), 赵世钰, 清华大学出版社 (Tsinghua University Press)
    • Author: 赵世钰
    • Publisher: 清华大学出版社

    Product parameters
    • Title: 强化学习的数学原理(英文版)
    • Author: 赵世钰
    • Publisher: 清华大学出版社 (Tsinghua University Press)
    • Publication date: 2024
    • Format: 16开 (16mo)
    • ISBN: 9787302658528
    • Copyright provided by: 清华大学出版社

    This book starts from the most basic concepts of reinforcement learning and introduces the fundamental analysis tools, including the Bellman equation and the Bellman optimality equation. It then extends to model-based and model-free reinforcement learning algorithms, and finally to reinforcement learning methods based on function approximation. The emphasis is on introducing concepts, analyzing problems, and analyzing algorithms from a mathematical point of view, rather than on the programming implementation of the algorithms. No prior background in reinforcement learning is required; readers only need a basic knowledge of probability theory and linear algebra. Readers who already have a foundation in reinforcement learning will find that the book helps them understand certain problems more deeply and offers new perspectives.

    The book is intended for undergraduate and graduate students, researchers, and practitioners in industry or research institutes who are interested in reinforcement learning.
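    For reference, the two analysis tools mentioned above are commonly written as follows. This is standard textbook notation, not an excerpt from this edition, so the book's own symbols may differ. The Bellman equation relates the state values of a given policy π, and the Bellman optimality equation characterizes the optimal state values:

    v_\pi(s) = \sum_{a} \pi(a \mid s) \Big[ \sum_{r} p(r \mid s, a)\, r + \gamma \sum_{s'} p(s' \mid s, a)\, v_\pi(s') \Big]

    v_*(s) = \max_{a} \Big[ \sum_{r} p(r \mid s, a)\, r + \gamma \sum_{s'} p(s' \mid s, a)\, v_*(s') \Big]

    Model-based algorithms solve these equations using the known probabilities p, whereas model-free algorithms estimate the same quantities from sampled experience.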

    · Builds understanding from scratch to a thorough grasp: know not only what works, but also why it works;
    · The book has earned 2,000+ stars on GitHub;
    · The accompanying course videos have been viewed more than 800,000 times online;
    · Enthusiastically received by readers in China and abroad;
    · Textbook, videos, and lecture slides form an integrated package.

    Contents


    Overview of this Book 1

    Chapter 1 Basic Concepts  6

    1.1 A grid world example  7

    1.2 State and action 8

    1.3 State transition  9

    1.4 Policy  11

    1.5 Reward 13

    1.6 Trajectories, returns, and episodes  15

    1.7 Markov decision processes 18

    1.8 Summary 20

    1.9 Q&A 20

    Chapter 2 State Values and the Bellman Equation  21

    2.1 Motivating example 1: Why are returns important? 23

    2.2 Motivating example 2: How to calculate returns?  24

    2.3 State values 26

    2.4 The Bellman equation  27

    2.5 Examples for illustrating the Bellman equation  30

    2.6 Matrix-vector form of the Bellman equation 33

    2.7 Solving state values from the Bellman equation 35

    2.7.1 Closed-form solution  35

    2.7.2 Iterative solution 35

    2.7.3 Illustrative examples  36

    2.8 From state value to action value 38

    2.8.1 Illustrative examples  39

    2.8.2 The Bellman equation in terms of action values 40

    2.9 Summary 41

    2.10 Q&A  42

    Chapter 3 Optimal State Values and the Bellman Optimality Equation 43

    3.1 Motivating example: How to improve policies?  45

    3.2 Optimal state values and optimal policies 46

    3.3 The Bellman optimality equation 47

    3.3.1 Maximization of the right-hand side of the BOE  48

    3.3.2 Matrix-vector form of the BOE 49

    3.3.3 Contraction mapping theorem  50

    3.3.4 Contraction property of the right-hand side of the BOE  53

    3.4 Solving an optimal policy from the BOE  55

    3.5 Factors that influence optimal policies 58

    3.6 Summary 63

    3.7 Q&A 63

    Chapter 4 Value Iteration and Policy Iteration 66

    4.1 Value iteration  68

    4.1.1 Elementwise form and implementation  68

    4.1.2 Illustrative examples  70

    4.2 Policy iteration 72

    4.2.1 Algorithm analysis 73

    4.2.2 Elementwise form and implementation  76

    4.2.3 Illustrative examples  77

    4.3 Truncated policy iteration 81

    4.3.1 Comparing value iteration and policy iteration  81

    4.3.2 Truncated policy iteration algorithm  83

    4.4 Summary 85

    4.5 Q&A 86

    Chapter 5 Monte Carlo Methods 89

    5.1 Motivating example: Mean estimation 91

    5.2 MC Basic: The simplest MC-based algorithm 93

    5.2.1 Converting policy iteration to be model-free 93

    5.2.2 The MC Basic algorithm 94

    5.2.3 Illustrative examples  96

    5.3 MC Exploring Starts  99

    5.3.1 Utilizing samples more efficiently  100

    5.3.2 Updating policies more efficiently  101

    5.3.3 Algorithm description 101

    5.4 MC ε-Greedy: Learning without exploring starts 102

    5.4.1 ε-greedy policies 103

    5.4.2 Algorithm description 103

    5.4.3 Illustrative examples 105

    5.5 Exploration and exploitation of ε-greedy policies 106

    5.6 Summary  111

    5.7 Q&A  111

    Chapter 6 Stochastic Approximation 114

    6.1 Motivating example: Mean estimation 116

    6.2 Robbins-Monro algorithm  117

    6.2.1 Convergence properties  119

    6.2.2 Application to mean estimation  123

    6.3 Dvoretzky's convergence theorem  124

    6.3.1 Proof of Dvoretzky's theorem  125

    6.3.2 Application to mean estimation 126

    6.3.3 Application to the Robbins-Monro theorem  127

    6.3.4 An extension of Dvoretzky's theorem  127

    6.4 Stochastic gradient descent  128

    6.4.1 Application to mean estimation 130

    6.4.2 Convergence pattern of SGD 131

    6.4.3 A deterministic formulation of SGD 133

    6.4.4 BGD, SGD, and mini-batch GD 134

    6.4.5 Convergence of SGD 136

    6.5 Summary  138

    6.6 Q&A  138

    Chapter 7 Temporal-Difference Methods 140

    7.1 TD learning of state values 142

    7.1.1 Algorithm description 142

    7.1.2 Property analysis  144

    7.1.3 Convergence analysis  146

    7.2 TD learning of action values: Sarsa  149

    7.2.1 Algorithm description 149

    7.2.2 Optimal policy learning via Sarsa  151

    7.3 TD learning of action values: n-step Sarsa 154

    7.4 TD learning of optimal action values: Q-learning 156

    7.4.1 Algorithm description 156

    7.4.2 Off-policy vs. on-policy  158

    7.4.3 Implementation 160

    7.4.4 Illustrative examples 161

    7.5 A unified viewpoint  165

    7.6 Summary  165

    7.7 Q&A  166

    Chapter 8 Value Function Approximation 168

    8.1 Value representation: From table to function 170

    8.2 TD learning of state values with function approximation 174

    8.2.1 Objective function 174

    8.2.2 Optimization algorithms 180

    8.2.3 Selection of function approximators  182

    8.2.4 Illustrative examples 183

    8.2.5 Theoretical analysis 187

    8.3 TD learning of action values with function approximation  198

    8.3.1 Sarsa with function approximation 198

    8.3.2 Q-learning with function approximation 200

    8.4 Deep Q-learning 201

    8.4.1 Algorithm description 202

    8.4.2 Illustrative examples 204

    8.5 Summary  207

    8.6 Q&A  207

    Chapter 9 Policy Gradient Methods 211

    9.1 Policy representation: From table to function  213

    9.2 Metrics for defining optimal policies  214

    9.3 Gradients of the metrics 219

    9.3.1 Derivation of the gradients in the discounted case  221

    9.3.2 Derivation of the gradients in the undiscounted case 226

    9.4 Monte Carlo policy gradient (REINFORCE) 232

    9.5 Summary  235

    9.6 Q&A  235

    Chapter 10 Actor-Critic Methods  237

    10.1 The simplest actor-critic algorithm (QAC)  239

    10.2 Advantage actor-critic (A2C) 240

    10.2.1 Baseline invariance 240

    10.2.2 Algorithm description  243

    10.3 Off-policy actor-critic 244

    10.3.1 Importance sampling 245

    10.3.2 The off-policy policy gradient theorem  247

    10.3.3 Algorithm description  249

    10.4 Deterministic actor-critic 251

    10.4.1 The deterministic policy gradient theorem  251

    10.4.2 Algorithm description  258

    10.5 Summary 259

    10.6 Q&A 260

    Appendix A Preliminaries for Probability Theory 262

    Appendix B Measure-Theoretic Probability Theory 268

    Appendix C Convergence of Sequences  276

    C.1 Convergence of deterministic sequences  277

    C.2 Convergence of stochastic sequences 280

    Appendix D Preliminaries for Gradient Descent 284

    Bibliography  290

    Symbols 297

    Index  299



     


     
