Applied Deep Learning with TensorFlow 2
Learn to Implement Advanced Deep Learning Techniques with Python
Paperback by Umberto Michelucci
Language: English

€73.80*

incl. VAT

Free shipping via Post / DHL

Delivery time: 1-2 weeks

Description
Understand how neural networks work and learn how to implement them using TensorFlow 2.0 and Keras. This new edition focuses on the fundamental concepts as well as on the practical aspects of implementing neural networks and deep learning for your research projects.

This book is designed so that you can focus on the parts you are interested in. You will explore topics such as regularization, optimizers, optimization, metric analysis, and hyper-parameter tuning. In addition, you will learn the fundamental ideas behind autoencoders and generative adversarial networks.
All the code presented in the book is available in the form of Jupyter notebooks, which allow you to try out all the examples and extend them in interesting ways. A companion online book is available with the complete code for all the examples discussed in the book and additional material more specific to TensorFlow and Keras. All the code is available in Jupyter notebook format and can be opened directly in Google Colab (no need to install anything locally) or downloaded to your own machine and tested locally.
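To give a concrete feel for what the notebook code looks like, here is a minimal, illustrative sketch (not taken from the book, with made-up data and parameter values) of one of the early hands-on topics, linear regression with a single neuron in Keras:

import numpy as np
import tensorflow as tf

# Synthetic data for illustration only: y = 2x + 1 plus a little noise
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(256, 1)).astype("float32")
y = 2.0 * x + 1.0 + rng.normal(scale=0.05, size=(256, 1)).astype("float32")

# A single Dense unit with no activation function is one linear neuron
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss="mse")
model.fit(x, y, epochs=50, batch_size=32, verbose=0)

# The learned weight and bias should end up close to 2 and 1
print(model.get_weights())

A cell like this runs unchanged in Google Colab or in a local Jupyter installation with TensorFlow 2 installed.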
You will:
• Understand the fundamental concepts of how neural networks work
• Learn the fundamental ideas behind autoencoders and generative adversarial networks
• Be able to try all the examples with complete code that you can expand for your own projects
• Have available a complete online companion book with examples and tutorials

This book is for:
Readers with an intermediate understanding of machine learning, linear algebra, calculus, and basic Python programming.
About the Author
Umberto Michelucci is the founder and chief AI scientist of TOELT - Advanced AI LAB LLC. He is an expert in numerical simulation, statistics, data science, and machine learning, with 15 years of practical experience in data warehousing, data science, and machine learning. His first book, Applied Deep Learning: A Case-Based Approach to Understanding Deep Neural Networks, was published in 2018. His second book, Convolutional and Recurrent Neural Networks Theory and Applications, was published in 2019. He publishes his research regularly and gives lectures on machine learning and statistics at various universities. He holds a PhD in machine learning, and he is also a Google Developer Expert in Machine Learning based in Switzerland.
Summary

Covers debugging and optimization of deep learning techniques with TensorFlow 2.0 and Python

Covers recent advances in autoencoders and multitask learning

Explains how to build models and deploy them on edge devices such as the Raspberry Pi using TensorFlow Lite
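As a rough illustration of that deployment step (this is not code from the book; the tiny untrained model below is a stand-in for whatever network you have trained), converting a Keras model to TensorFlow Lite typically takes only a few lines:

import tensorflow as tf

# Stand-in model for illustration; in practice this would be a trained network
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Convert the Keras model to a TensorFlow Lite flat buffer
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# The resulting .tflite file can be copied to a device such as a Raspberry Pi
# and executed there with the TensorFlow Lite interpreter
with open("model.tflite", "wb") as f:
    f.write(tflite_model)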

Table of Contents
Chapter 1: Optimization and Neural Networks
Subtopics:
How to read the book
Introduction to the book
Overview of optimization
A definition of learning
Constrained vs. unconstrained optimization
Absolute and local minima
Optimization algorithms with a focus on gradient descent
Variations of gradient descent (mini-batch and stochastic)
How to choose the right mini-batch size
Chapter 2: Hands-on with a Single Neuron
Subtopics:
A short introduction to matrix algebra
Activation functions (identity, sigmoid, tanh, swish, etc.)
Implementation of one neuron in Keras
Linear regression with one neuron
Logistic regression with one neuron
Chapter 3: Feed-Forward Neural Networks
Subtopics:
Matrix formalism
Softmax activation function
Overfitting and the bias-variance trade-off
How to implement a fully connected network with Keras
Multiclass classification with the Zalando dataset in Keras
Gradient descent variations in practice with a real dataset
Weight initialization
How to compare the complexity of neural networks
How to estimate the memory used by neural networks in Keras
Chapter 4: Regularization
Subtopics:
An introduction to regularization
The l_p norm
l_2 regularization
Weight decay when using regularization
Dropout
Early stopping
Chapter 5: Advanced Optimizers
Subtopics:
Exponentially weighted averages
Momentum
RMSProp
Adam
Comparison of optimizers
Chapter 6: Hyper-Parameter Tuning
Subtopics:
Introduction to hyper-parameter tuning
Black-box optimization
Grid search
Random search
Coarse-to-fine optimization
Sampling on a logarithmic scale
Bayesian optimization
Chapter 7: Convolutional Neural Networks
Subtopics:
Theory of convolution
Pooling and padding
Building blocks of a CNN
Implementation of a CNN with Keras
Chapter 8: Brief Introduction to Recurrent Neural Networks
Subtopics:
Introduction to recurrent neural networks
Implementation of an RNN with Keras
Chapter 9: Autoencoders
Subtopics:
Feed-forward autoencoders
Loss functions in autoencoders
Reconstruction error
Application of autoencoders: dimensionality reduction
Application of autoencoders: classification with latent features
Curse of dimensionality
Denoising autoencoders
Autoencoders with CNNs
Chapter 10: Metric Analysis
Subtopics:
Human-level performance and Bayes error
Bias
Metric analysis diagram
Training set overfitting
How to split your dataset
Unbalanced datasets: what can happen
K-fold cross-validation
Manual metric analysis: an example
Chapter 11: Generative Adversarial Networks (GANs)
Subtopics:
Introduction to GANs
The building blocks of GANs
An example implementation of GANs in Keras
APPENDIX 1: Introduction to Keras
Subtopics:
Sequential model
Keras layers
Functional APIs
Specifying loss functions
Putting it all together and training a model
Callback functions
Saving and loading models
APPENDIX 2: Customizing Keras
Subtopics:
Custom callback functions
Custom training loops
Custom loss functions
APPENDIX 3: Symbols and Abbreviations
Details
Year of publication: 2022
Genre: Computer science
Category: Science & technology
Medium: Paperback
Contents: xxviii, 380 pages, 148 illustrations (117 b/w, 31 in color)
ISBN-13: 9781484280195
ISBN-10: 1484280199
Language: English
Features: Paperback
Binding: Softcover
Author: Michelucci, Umberto
Edition: 2nd ed.
Publisher: Apress (Apress L.P.)
Dimensions: 254 x 178 x 23 mm
By: Umberto Michelucci
Publication date: 29 March 2022
Weight: 0.764 kg
Item ID: 120927798