44,15 €*
Free shipping via post / DHL
Delivery time: 1-2 weeks
Understand deep learning fundamentals such as feed-forward networks, convolutional neural networks, recurrent neural networks, automatic differentiation, and stochastic gradient descent.
Apply in-depth linear algebra with PyTorch
Explore PyTorch fundamentals and its building blocks
Work with tuning and optimizing models
Who This Book Is For
Offers a sound theoretical/mathematical foundation and practical programming techniques using PyTorch
Covers deep learning with multiple GPUs and optimizing deep learning models
Reviews best practices of taking deep learning models to production with PyTorch
Chapter 1 - Introduction to Deep Learning
A brief introduction to machine learning and deep learning. We explore foundational topics that serve as the building blocks for the rest of the book.
Chapter 2 - Introduction to PyTorch
A quick-start guide to PyTorch and a comprehensive introduction to tensors, linear algebra, and mathematical operations on tensors. The chapter provides the PyTorch foundations readers need to implement practical deep learning solutions throughout the book. Advanced PyTorch topics are explored as they are touched upon in the exercises of later chapters.
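To give a flavor of the kind of tensor operations this chapter covers, here is a minimal sketch (not taken from the book; values are illustrative) of tensor creation and basic linear algebra in PyTorch:

```python
import torch

# Create tensors and perform basic linear algebra operations
a = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
b = torch.randn(2, 2)            # random 2x2 matrix

c = a @ b                        # matrix multiplication
d = a * b                        # element-wise multiplication
e = torch.linalg.inv(a)          # matrix inverse

print(c.shape, d.shape)
print(e)
```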
Chapter 3 - Feed-Forward Networks
In this chapter, we explore the building blocks of a neural network and build intuition for training and evaluating networks. We briefly cover the loss functions, activation functions, optimizers, and backpropagation used during training. Finally, we stitch these smaller components together into a full-fledged feed-forward neural network with PyTorch.
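To make these components concrete, the following minimal sketch (not the book's example; layer sizes and data are made up) wires an activation, a loss function, an optimizer, and backpropagation into a tiny feed-forward network:

```python
import torch
from torch import nn

# A small feed-forward network: linear layers, a ReLU activation, and a sigmoid output
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
    nn.Sigmoid(),
)

loss_fn = nn.BCELoss()                                     # binary cross-entropy loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # stochastic gradient descent

x = torch.randn(64, 10)                      # dummy batch of 64 examples, 10 features each
y = torch.randint(0, 2, (64, 1)).float()     # dummy binary labels

for _ in range(5):                           # a few training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                          # backpropagation
    optimizer.step()
```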
Chapter 4 - Automatic Differentiation in Deep Learning
In this chapter, we open the black box behind backpropagation that enables the training of neural networks: automatic differentiation. We briefly cover the history of alternative techniques that were ruled out in favor of automatic differentiation, study the topic with a practical example, and implement it using PyTorch's Autograd module.
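As an illustration of what Autograd does (a minimal sketch, not the book's worked example), PyTorch records the operations applied to a tensor and replays them in reverse to compute gradients:

```python
import torch

# Autograd tracks operations on tensors with requires_grad=True
x = torch.tensor(2.0, requires_grad=True)
y = x ** 3 + 4 * x               # y = x^3 + 4x

y.backward()                     # compute dy/dx via reverse-mode automatic differentiation
print(x.grad)                    # dy/dx = 3x^2 + 4 = 16 at x = 2
```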
Chapter 5 - Training Deep Neural Networks
In this chapter we explore a few additional important topics around deep learning and apply them in a practical example. We delve into the specifics of model performance and study overfitting and underfitting, hyperparameter tuning, and regularization in detail. Finally, we leverage a real dataset and combine everything covered so far into a practical example using PyTorch.
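Two regularization techniques of the kind this chapter discusses, dropout and L2 weight decay, take only a few lines in PyTorch (a hedged sketch; the specific sizes and values are made up):

```python
import torch
from torch import nn

# Dropout inside the model and L2 weight decay in the optimizer
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),           # randomly zero activations during training
    nn.Linear(64, 1),
)

optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-3,
    weight_decay=1e-4,           # L2 penalty on the weights
)

model.train()                    # enables dropout for training
model.eval()                     # disables dropout for evaluation
```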
Chapter 6 - Convolutional Neural Networks
An introduction to convolutional neural networks for computer vision. We explore the core components of CNNs with examples to understand the internals of the network, build intuition around automated feature extraction and parameter sharing, and thus understand the overall process of training CNNs from incremental building blocks. Hands-on exercises study a practical CNN implementation for a simple dataset, MNIST (classification of handwritten digits), and later extend the exercise to a binary classification use case with the popular cats-and-dogs dataset.
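To give a sense of what such a network looks like in code (a minimal sketch, not the book's exact model; layer sizes are illustrative), here is a compact MNIST-style CNN in PyTorch:

```python
import torch
from torch import nn

# A small CNN of the kind typically used for MNIST (28x28 grayscale digits)
class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # shared filters slide over the image
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, 10)      # 10 digit classes

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallCNN()
logits = model(torch.randn(8, 1, 28, 28))                # dummy batch of 8 images
print(logits.shape)                                      # torch.Size([8, 10])
```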
Chapter 7 - Recurrent Neural Networks
An introduction to recurrent neural networks and their variants (bidirectional RNNs and LSTMs). We explore the construction of a recurrent unit, study the mathematical background, and build intuition around how RNNs are trained by walking through a simple four-step unrolled network. We then work through hands-on exercises in natural language processing that use vanilla RNNs and later improve their performance with bidirectional RNNs combined with LSTM layers.
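As a rough sketch of the kind of layers these exercises rely on (the dimensions and data here are made up, not the book's), a bidirectional LSTM classifier in PyTorch looks like this:

```python
import torch
from torch import nn

# A bidirectional LSTM over a batch of token sequences (illustrative sizes)
vocab_size, embed_dim, hidden_dim, num_classes = 1000, 64, 128, 2

embedding = nn.Embedding(vocab_size, embed_dim)
lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
classifier = nn.Linear(2 * hidden_dim, num_classes)       # 2x because of both directions

tokens = torch.randint(0, vocab_size, (4, 12))            # batch of 4 sequences, length 12
output, (h_n, c_n) = lstm(embedding(tokens))
logits = classifier(output[:, -1, :])                     # classify from the last time step
print(logits.shape)                                       # torch.Size([4, 2])
```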
Chapter 8 - Recent Advances in Deep Learning
A brief note on cutting-edge advancements in the field. We explore important inventions without implementation details, focusing instead on applications and the path forward.
Year of publication: | 2021 |
---|---|
Subject area: | Programming languages |
Genre: | Computer science |
Category: | Natural sciences & technology |
Medium: | Paperback |
Contents: | xvii, 306 pages, 82 b/w illustrations |
ISBN-13: | 9781484253632 |
ISBN-10: | 1484253639 |
Language: | English |
Features / extras: | Paperback |
Binding: | Softcover / paperback |
Authors: | Moolayil, Jojo; Ketkar, Nikhil |
Edition: | 2nd ed. |
Publisher: | Apress (Apress L.P.) |
Dimensions: | 235 x 155 x 18 mm |
By: | Jojo Moolayil (et al.) |
Publication date: | 10.04.2021 |
Weight: | 0.493 kg |