Python深度学习算法实践 (Python Deep Learning Algorithms in Practice, English edition), ISBN 9787564189693
Preface
Section 1" Getting Started with Deep Learning
Chapter 1: Introduction to Deep Learning
What is deep learning?
Biological and artificial neurons
ANN and its layers
Input layer
Hidden layer
Output layer
Exploring activation functions
The sigmoid function
The tanh function
The Rectified Linear Unit function
The leaky ReLU function
The Exponential linear unit function
The Swish function
The softmax function
Forward propagation in ANN
How does ANN learn?
Debugging gradient descent with gradient checking
Putting it all together
Building a neural network from scratch
Summary
Questions
Further reading
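Chapter 1 builds a neural network from scratch, covering activation functions and forward propagation. As a taste of that material, here is a minimal sketch of forward propagation through one hidden layer in plain NumPy; the layer sizes, weights, and variable names are illustrative assumptions, not the book's own code.

import numpy as np

def sigmoid(z):
    # The sigmoid activation squashes inputs into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    # Forward propagation: input -> hidden layer -> output layer.
    a1 = sigmoid(x @ W1 + b1)      # hidden layer activations
    y_hat = sigmoid(a1 @ W2 + b2)  # output layer prediction
    return y_hat

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 2))                    # 4 samples, 2 input features
W1, b1 = rng.standard_normal((2, 3)), np.zeros(3)  # input -> hidden (3 units)
W2, b2 = rng.standard_normal((3, 1)), np.zeros(1)  # hidden -> output (1 unit)
print(forward(x, W1, b1, W2, b2))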
Chapter 2: Getting to Know TensorFlow
What is TensorFlow?
Understanding computational graphs and sessions
Sessions
Variables, constants, and placeholders
Variables
Constants
Placeholders and feed dictionaries
Introducing TensorBoard
Creating a name scope
Handwritten digit classification using TensorFlow
Importing the required libraries
Loading the dataset
Defining the number of neurons in each layer
Defining placeholders
Forward propagation
Computing loss and backpropagation
Computing accuracy
Creating summary
Training the model
Visualizing graphs in TensorBoard
Introducing eager execution
Math operations in TensorFlow
TensorFlow 2.0 and Keras
Bonjour Keras
Defining the model
Defining a sequential model
Defining a functional model
Compiling the model
Training the model
Evaluating the model
MNIST digit classification using TensorFlow 2.0
Should we use Keras or TensorFlow?
Summary
Questions
Further reading
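Chapter 2 walks through handwritten digit classification and the Keras workflow in TensorFlow 2.0: defining a sequential model, compiling it, then training and evaluating. The sketch below shows that workflow on MNIST; the layer sizes, optimizer, and epoch count are assumptions for illustration, not the book's exact configuration.

import tensorflow as tf

# Load MNIST and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A sequential model: flatten 28x28 images, one hidden ReLU layer, softmax output.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Compile with an optimizer, loss, and metric, then train and evaluate.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)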
Section 2: Fundamental Deep Learning Algorithms
Chapter 3: Gradient Descent and Its Variants
Demystifying gradient descent
Performing gradient descent in regression
Importing the libraries
Preparing the dataset
Defining the loss function
Computing the gradients of the loss function
Updating the model parameters
Gradient descent versus stochastic gradient descent
Momentum-based gradient descent
Gradient descent with momentum
Nesterov accelerated gradient
Adaptive methods of gradient descent
Setting a learning rate adaptively using Adagrad
Doing away with the learning rate using Adadelta
Overcoming the limitations of Adagrad using RMSProp
Adaptive moment estimation
Adamax - Adam based on infinity-norm
Adaptive moment estimation with AMSGrad
……
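Chapter 3 contrasts vanilla gradient descent with momentum-based and adaptive variants. Below is a minimal sketch comparing the plain update with the momentum update on a toy linear-regression loss; the data, learning rate, and momentum coefficient are illustrative assumptions, not figures from the book.

import numpy as np

# Toy regression data: y = 2x + 1 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 2 * x + 1 + 0.1 * rng.standard_normal(100)

def grads(m, b):
    # Gradients of the mean squared error with respect to slope m and intercept b.
    err = (m * x + b) - y
    return 2 * np.mean(err * x), 2 * np.mean(err)

# Vanilla gradient descent: step directly along the negative gradient.
m, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    gm, gb = grads(m, b)
    m -= lr * gm
    b -= lr * gb

# Gradient descent with momentum: a velocity term accumulates past gradients.
m2, b2, vm, vb, gamma = 0.0, 0.0, 0.0, 0.0, 0.9
for _ in range(500):
    gm, gb = grads(m2, b2)
    vm = gamma * vm + lr * gm
    vb = gamma * vb + lr * gb
    m2 -= vm
    b2 -= vb

print(m, b)    # vanilla estimates of slope and intercept
print(m2, b2)  # momentum estimates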
Section 3: Advanced Deep Learning Algorithms