The Principles of Deep Learning Theory
Book by Daniel A. Roberts (et al.)
Language: English

82,30 €*

incl. VAT

Free shipping via Post / DHL

Delivery time: 1-2 weeks

Description
"This textbook establishes a theoretical framework for understanding deep learning models of practical relevance. With an approach that borrows from theoretical physics, Roberts and Yaida provide clear and pedagogical explanations of how realistic deep neural networks actually work. To make results from the theoretical forefront accessible, the authors eschew the subject's traditional emphasis on intimidating formality without sacrificing accuracy. Straightforward and approachable, this volume balances detailed first-principle derivations of novel results with insight and intuition for theorists and practitioners alike. This self-contained textbook is ideal for students and researchers interested in artificial intelligence with minimal prerequisites of linear algebra, calculus, and informal probability theory, and it can easily fill a semester-long course on deep learning theory. For the first time, the exciting practical advances in modern artificial intelligence capabilities can be matched with a set of effective principles, providing a timeless blueprint for theoretical research in deep learning"--
"This textbook establishes a theoretical framework for understanding deep learning models of practical relevance. With an approach that borrows from theoretical physics, Roberts and Yaida provide clear and pedagogical explanations of how realistic deep neural networks actually work. To make results from the theoretical forefront accessible, the authors eschew the subject's traditional emphasis on intimidating formality without sacrificing accuracy. Straightforward and approachable, this volume balances detailed first-principle derivations of novel results with insight and intuition for theorists and practitioners alike. This self-contained textbook is ideal for students and researchers interested in artificial intelligence with minimal prerequisites of linear algebra, calculus, and informal probability theory, and it can easily fill a semester-long course on deep learning theory. For the first time, the exciting practical advances in modern artificial intelligence capabilities can be matched with a set of effective principles, providing a timeless blueprint for theoretical research in deep learning"--
About the Author
Daniel A. Roberts was cofounder and CTO of Diffeo, an AI company acquired by Salesforce; a research scientist at Facebook AI Research; and a member of the School of Natural Sciences at the Institute for Advanced Study in Princeton, NJ. He was a Hertz Fellow, earning a PhD from MIT in theoretical physics, and was also a Marshall Scholar at Cambridge and Oxford Universities.
Table of Contents
Preface; 0. Initialization; 1. Pretraining; 2. Neural networks; 3. Effective theory of deep linear networks at initialization; 4. RG flow of preactivations; 5. Effective theory of preactivations at initialization; 6. Bayesian learning; 7. Gradient-based learning; 8. RG flow of the neural tangent kernel; 9. Effective theory of the NTK at initialization; 10. Kernel learning; 11. Representation learning; ∞. The end of training; ε. Epilogue; A. Information in deep learning; B. Residual learning; References; Index.
Details
Year of publication: 2022
Genre: Physics
Category: Science & Technology
Medium: Book
Pages: 472
Contents: Hardbound
ISBN-13: 9781316519332
ISBN-10: 1316519333
Language: English
Finish / extras: hardcover, straight laminated spine
Binding: Hardcover
Authors: Roberts, Daniel A.
Yaida, Sho
Hanin, Boris
Special feature: Our rising titles
Publisher: Cambridge University Press
Dimensions: 260 x 183 x 30 mm
By: Daniel A. Roberts (et al.)
Publication date: 15 April 2022
Weight: 1.077 kg
preigu-id: 120952167