Python Deep Learning Algorithms in Practice (English-language reprint edition), by Sudharsan Ravi (India)
Preface
Section 1: Getting Started with Deep Learning
Chapter 1: Introduction to Deep Learning
What is deep learning?
Biological and artificial neurons
ANN and its layers
Input layer
Hidden layer
Output layer
Exploring activation functions
The sigmoid function
The tanh function
The Rectified Linear Unit function
The leaky ReLU function
The Exponential linear unit function
The Swish function
The softmax function
Forward propagation in ANN
How does ANN learn?
Debugging gradient descent with gradient checking
Putting it all together
Building a neural network from scratch
Summary
Questions
Further reading
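Chapter 1's topics, the common activation functions and forward propagation through an ANN's input, hidden, and output layers, can be illustrated with a short NumPy sketch (function names and layer sizes are my own choices for illustration, not code from the book):

```python
import numpy as np

# Activation functions covered in Chapter 1.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, a=0.01):
    return np.where(x > 0, x, a * x)

def elu(x, a=1.0):
    return np.where(x > 0, x, a * (np.exp(x) - 1))

def swish(x, b=1.0):
    return x * sigmoid(b * x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=-1, keepdims=True)

# Forward propagation through a small ANN: input -> hidden -> output.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 8)), np.zeros(8)   # input layer -> hidden layer
W2, b2 = rng.standard_normal((8, 3)), np.zeros(3)   # hidden layer -> output layer

def forward(x):
    h = relu(x @ W1 + b1)        # hidden-layer activations
    return softmax(h @ W2 + b2)  # output layer: class probabilities

probs = forward(rng.standard_normal((5, 4)))
print(probs.shape)  # (5, 3): one probability row per input sample
```

Because the output layer is a softmax, each row of `probs` sums to one, which is what makes the network's output interpretable as class probabilities.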
Chapter 2: Getting to Know TensorFlow
What is TensorFlow?
Understanding computational graphs and sessions
Sessions
Variables, constants, and placeholders
Variables
Constants
Placeholders and feed dictionaries
Introducing TensorBoard
Creating a name scope
Handwritten digit classification using TensorFlow
Importing the required libraries
Loading the dataset
Defining the number of neurons in each layer
Defining placeholders
Forward propagation
Computing loss and backpropagation
Computing accuracy
Creating summary
Training the model
Visualizing graphs in TensorBoard
Introducing eager execution
Math operations in TensorFlow
TensorFlow 2.0 and Keras
Bonjour Keras
Defining the model
Defining a sequential model
Defining a functional model
Compiling the model
Training the model
Evaluating the model
MNIST digit classification using TensorFlow 2.0
Should we use Keras or TensorFlow?
Summary
Questions
Further reading
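Chapter 2 contrasts Keras's two model-building styles under TensorFlow 2.0: the sequential API and the functional API. A minimal sketch of both (layer sizes and hyperparameters are arbitrary illustrations, not taken from the book):

```python
import numpy as np
import tensorflow as tf

# Sequential API: a plain stack of layers; the input shape is
# inferred from the first batch the model sees.
seq_model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Functional API: the same network, built by wiring tensors explicitly,
# which also allows non-linear topologies (branches, multiple inputs).
inputs = tf.keras.Input(shape=(784,))
hidden = tf.keras.layers.Dense(128, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(10, activation="softmax")(hidden)
fn_model = tf.keras.Model(inputs, outputs)

# Compiling attaches an optimizer, a loss, and metrics; after this the
# model is ready for model.fit / model.evaluate on data such as MNIST.
seq_model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

probs = seq_model.predict(np.random.rand(2, 784).astype("float32"), verbose=0)
print(probs.shape)  # (2, 10): a digit-probability row per sample
```

The sequential form is shorter for simple stacks; the functional form pays off once a model needs shared layers or multiple inputs and outputs.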
Section 2: Fundamental Deep Learning Algorithms
Chapter 3: Gradient Descent and Its Variants
Demystifying gradient descent
Performing gradient descent in regression
Importing the libraries
Preparing the dataset
Defining the loss function
Computing the gradients of the loss function
Updating the model parameters
Gradient descent versus stochastic gradient descent
Momentum-based gradient descent
Gradient descent with momentum
Nesterov accelerated gradient
Adaptive methods of gradient descent
Setting a learning rate adaptively using Adagrad
Doing away with the learning rate using Adadelta
Overcoming the limitations of Adagrad using RMSProp
Adaptive moment estimation
Adamax - Adam based on infinity-norm
Adaptive moment estimation with AMSGrad
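The optimizer variants listed for Chapter 3 differ only in their parameter-update rule. A minimal NumPy sketch of three of them (vanilla gradient descent, momentum, and Adam) minimizing a one-dimensional quadratic; the hyperparameters and step counts here are my own illustrative choices:

```python
import numpy as np

# Minimize f(theta) = (theta - 3)^2, whose gradient is 2 * (theta - 3).
def grad(theta):
    return 2.0 * (theta - 3.0)

def sgd(theta, lr=0.1, steps=100):
    # Vanilla gradient descent: step against the gradient.
    for _ in range(steps):
        theta -= lr * grad(theta)
    return theta

def momentum(theta, lr=0.1, gamma=0.9, steps=150):
    # Accumulate an exponentially decaying velocity, then step by it.
    v = 0.0
    for _ in range(steps):
        v = gamma * v + lr * grad(theta)
        theta -= v
    return theta

def adam(theta, lr=0.1, b1=0.9, b2=0.999, eps=1e-8, steps=500):
    # Adaptive moment estimation: per-parameter step sizes from
    # bias-corrected first and second moments of the gradient.
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(theta)
        m = b1 * m + (1 - b1) * g        # first moment (mean of gradients)
        v = b2 * v + (1 - b2) * g * g    # second moment (uncentered variance)
        m_hat = m / (1 - b1 ** t)        # bias correction for the warm-up phase
        v_hat = v / (1 - b2 ** t)
        theta -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta

for opt in (sgd, momentum, adam):
    print(opt.__name__, round(opt(0.0), 4))  # all end up near the minimum at 3.0
```

Adagrad, Adadelta, RMSProp, Adamax, and AMSGrad follow the same pattern, varying only in how the per-parameter step size is accumulated and normalized.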
……
Section 3: Advanced Deep Learning Algorithms
This book dissects the principles and techniques of deep learning in an accessible way. Using Python and starting from basic mathematics, it leads the reader to build a classic deep-learning network from scratch, developing an understanding of deep learning step by step along the way. Beyond foundational concepts such as what deep learning and neural networks are and what characterizes them, it treats error backpropagation and convolutional neural networks in depth, and also covers practical deep-learning techniques, applications such as autonomous driving, image generation, and reinforcement learning, and thorny questions such as why adding more layers can improve recognition accuracy.