Deep Learning for Physical Scientists
Accelerating Research with Machine Learning
Book by Edward O. Pyzer-Knapp (et al.)
Language: English

75,90 €*

incl. VAT

Free shipping via Post / DHL

Currently unavailable

Description
Discover the power of machine learning in the physical sciences with this one-stop resource from a leading voice in the field

Deep Learning for Physical Scientists: Accelerating Research with Machine Learning delivers an insightful analysis of the transformative techniques being used in deep learning within the physical sciences. The book offers readers the ability to understand, select, and apply the best deep learning techniques for their individual research problem and interpret the outcome.

Designed to teach researchers to think in useful new ways about how to achieve results in their research, the book provides scientists with new avenues to attack problems and avoid common pitfalls. Practical case studies and problems are presented, giving readers an opportunity to put what they have learned into practice, with exemplar coding approaches provided to assist the reader.

From modelling basics to feed-forward networks, the book offers a broad cross-section of machine learning techniques to improve physical science research. Readers will also enjoy:
* A thorough introduction to basic classification and regression with perceptrons (a minimal sketch follows this list)
* An exploration of training algorithms, including back propagation, stochastic gradient descent, and the parallelization of training
* An examination of multi-layer perceptrons for learning from descriptors and de-noising data
* Discussions of recurrent neural networks for learning from sequences and convolutional neural networks for learning from images
* A treatment of Bayesian optimization for tuning deep learning architectures
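
As a flavour of the material, here is a minimal sketch of the first item above: a single perceptron trained for binary classification with a cross-entropy loss and stochastic gradient descent, using the TensorFlow/Keras stack the table of contents indicates. The toy data, layer sizes, and hyperparameters are illustrative assumptions, not code from the book.

```python
# Minimal sketch (illustrative, not from the book): a single perceptron
# for binary classification, trained with SGD on a cross-entropy loss.
import numpy as np
import tensorflow as tf

# Toy descriptors: two 2-D Gaussian blobs labelled 0 and 1 (assumed data).
rng = np.random.default_rng(seed=0)
X = np.vstack([rng.normal(-1.0, 0.5, size=(100, 2)),
               rng.normal(+1.0, 0.5, size=(100, 2))]).astype("float32")
y = np.array([0.0] * 100 + [1.0] * 100, dtype="float32")

# A single perceptron: one dense unit with a sigmoid activation.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Binary cross-entropy loss optimised by stochastic gradient descent.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
              loss="binary_crossentropy",
              metrics=["accuracy"])

model.fit(X, y, epochs=20, batch_size=16, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```

Swapping the single sigmoid unit for a stack of Dense layers would turn this into the multi-layer perceptron of Chapter 4; the loss, optimiser, and evaluation workflow stay the same.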

Perfect for academic and industrial research professionals in the physical sciences, Deep Learning for Physical Scientists: Accelerating Research with Machine Learning will also earn a place in the libraries of industrial researchers who have access to large amounts of data but have yet to learn the techniques to fully exploit that access.

Each of these areas has direct application to physical science research, and by the end of the book the reader should feel comfortable selecting the methodology that is best for their situation, and be able to implement and interpret the outcome of the deep learning model. The book is designed to teach researchers to think in new ways, providing them with new avenues to attack problems and avoid roadblocks within their research. This is achieved through case-study-like problems at the end of each chapter, which give the reader a chance to practice what they have just learnt in a close-to-real-world setting, with example 'solutions' provided through an online resource.
Table of Contents
About the Authors xi

Acknowledgements xii

1 Prefix - Learning to "Think Deep" 1

1.1 So What Do I Mean by Changing the Way You Think? 2

2 Setting Up a Python Environment for Deep Learning Projects 5

2.1 Python Overview 5

2.2 Why Use Python for Data Science? 6

2.3 Anaconda Python 7

2.3.1 Why Use Anaconda? 7

2.3.2 Downloading and Installing Anaconda Python 7

2.3.2.1 Installing TensorFlow 9

2.4 Jupyter Notebooks 10

2.4.1 Why Use a Notebook? 10

2.4.2 Starting a Jupyter Notebook Server 11

2.4.3 Adding Markdown to Notebooks 12

2.4.4 A Simple Plotting Example 14

2.4.5 Summary 16

3 Modelling Basics 17

3.1 Introduction 17

3.2 Start Where You Mean to Go On - Input Definition and Creation 17

3.3 Loss Functions 18

3.3.1 Classification and Regression 19

3.3.2 Regression Loss Functions 19

3.3.2.1 Mean Absolute Error 19

3.3.2.2 Root Mean Squared Error 19

3.3.3 Classification Loss Functions 20

3.3.3.1 Precision 21

3.3.3.2 Recall 21

3.3.3.3 F1 Score 22

3.3.3.4 Confusion Matrix 22

3.3.3.5 (Area Under) Receiver Operator Curve (AU-ROC) 23

3.3.3.6 Cross Entropy 25

3.4 Overfitting and Underfitting 28

3.4.1 Bias-Variance Trade-Off 29

3.5 Regularisation 31

3.5.1 Ridge Regression 31

3.5.2 LASSO Regularisation 33

3.5.3 Elastic Net 34

3.5.4 Bagging and Model Averaging 34

3.6 Evaluating a Model 35

3.6.1 Holdout Testing 35

3.6.2 Cross Validation 36

3.7 The Curse of Dimensionality 37

3.7.1 Normalising Inputs and Targets 37

3.8 Summary 39

Notes 39

4 Feedforward Networks and Multilayered Perceptrons 41

4.1 Introduction 41

4.2 The Single Perceptron 41

4.2.1 Training a Perceptron 41

4.2.2 Activation Functions 42

4.2.3 Back Propagation 43

4.2.3.1 Weight Initialisation 45

4.2.3.2 Learning Rate 46

4.2.4 Key Assumptions 46

4.2.5 Putting It All Together in TensorFlow 47

4.3 Moving to a Deep Network 49

4.4 Vanishing Gradients and Other "Deep" Problems 53

4.4.1 Gradient Clipping 54

4.4.2 Non-saturating Activation Functions 54

4.4.2.1 ReLU 54

4.4.2.2 Leaky ReLU 56

4.4.2.3 ELU 57

4.4.3 More Complex Initialisation Schemes 57

4.4.3.1 Xavier 58

4.4.3.2 He 58

4.4.4 Mini Batching 59

4.5 Improving the Optimisation 60

4.5.1 Bias 60

4.5.2 Momentum 63

4.5.3 Nesterov Momentum 63

4.5.4 (Adaptive) Learning Rates 63

4.5.5 AdaGrad 64

4.5.6 RMSProp 65

4.5.7 Adam 65

4.5.8 Regularisation 66

4.5.9 Early Stopping 66

4.5.10 Dropout 68

4.6 Parallelisation of Learning 69

4.6.1 Hogwild! 69

4.7 High and Low-level TensorFlow APIs 70

4.8 Architecture Implementations 72

4.9 Summary 73

4.10 Papers to Read 73

5 Recurrent Neural Networks 77

5.1 Introduction 77

5.2 Basic Recurrent Neural Networks 77

5.2.1 Training a Basic RNN 78

5.2.2 Putting It All Together in TensorFlow 79

5.2.3 The Problem with Vanilla RNNs 81

5.3 Long Short-Term Memory (LSTM) Networks 82

5.3.1 Forget Gate 82

5.3.2 Input Gate 84

5.3.3 Output Gate
Details
Year of publication: 2021
Genre: Chemistry
Category: Science & Technology
Medium: Book
Pages: 208
Contents: 208 pp.
ISBN-13: 9781119408338
ISBN-10: 1119408334
Language: English
Manufacturer number: 1W119408330
Authors: Pyzer-Knapp, Edward O.; Benatan, Matthew
Edition: 1st edition
Publisher: Wiley / Wiley & Sons
Dimensions: 239 x 151 x 17 mm
By: Edward O. Pyzer-Knapp (et al.)
Publication date: 21.10.2021
Weight: 0.438 kg
preigu-id: 117870582