Optimization for Data Analysis
Book by Stephen J. Wright
Language: English

55.10 €*

incl. VAT

Free shipping via Post / DHL

Delivery time: 1-2 weeks

Categories:
Description
Optimization techniques are at the core of data science, including data analysis and machine learning. An understanding of basic optimization techniques and their fundamental properties provides important grounding for students, researchers, and practitioners in these areas. This text covers the fundamentals of optimization algorithms in a compact, self-contained way, focusing on the techniques most relevant to data science. An introductory chapter demonstrates that many standard problems in data science can be formulated as optimization problems. Next, many fundamental methods in optimization are described and analyzed, including: gradient and accelerated gradient methods for unconstrained optimization of smooth (especially convex) functions; the stochastic gradient method, a workhorse algorithm in machine learning; the coordinate descent approach; several key algorithms for constrained optimization problems; algorithms for minimizing nonsmooth functions arising in data science; foundations of the analysis of nonsmooth functions and optimization duality; and the back-propagation approach, relevant to neural networks.
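As a flavor of the book's opening topic, the basic gradient method it analyzes can be sketched in a few lines. This is a minimal illustration, not code from the text; the quadratic objective, step length, and iteration count below are chosen only for the example:

```python
import numpy as np

# Gradient descent on a smooth convex quadratic f(x) = 0.5 x'Ax - b'x.
# Its gradient is Ax - b, so the minimizer satisfies Ax = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])  # symmetric positive definite
b = np.array([1.0, 1.0])

x = np.zeros(2)
alpha = 0.2  # fixed step length, below 2/L for this A (L = largest eigenvalue ≈ 3.62)
for _ in range(200):
    x = x - alpha * (A @ x - b)  # step in the negative gradient direction

# The iterates converge to the solution of Ax = b.
print(np.allclose(x, np.linalg.solve(A, b), atol=1e-6))  # → True
```

With a fixed step length below 2/L (where L is the gradient's Lipschitz constant), gradient descent on a strongly convex quadratic converges linearly, which is the kind of guarantee the book's early chapters establish.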
About the Author
Stephen J. Wright holds the George B. Dantzig Professorship, the Sheldon Lubar Chair, and the Amar and Balinder Sohi Professorship of Computer Sciences at the University of Wisconsin-Madison, and is a Discovery Fellow in the Wisconsin Institute for Discovery. He works in computational optimization and its applications to data science and many other areas of science and engineering. He is a Fellow of SIAM and a recipient of the 2014 W. R. G. Baker Award from IEEE for most outstanding paper, the 2020 Khachiyan Prize of the INFORMS Optimization Society for lifetime achievements in optimization, and the 2020 NeurIPS Test of Time Award. Professor Wright is the author or co-author of widely used textbooks and reference books in optimization, including Primal-Dual Interior-Point Methods (1997) and Numerical Optimization (2006).
Table of Contents
1. Introduction; 2. Foundations of smooth optimization; 3. Descent methods; 4. Gradient methods using momentum; 5. Stochastic gradient; 6. Coordinate descent; 7. First-order methods for constrained optimization; 8. Nonsmooth functions and subgradients; 9. Nonsmooth optimization methods; 10. Duality and algorithms; 11. Differentiation and adjoints.
Details
Year of publication: 2022
Subject area: Communication Studies
Genre: Media Studies
Section: Sciences
Medium: Book
Pages: 238
Contents: Hardbound
ISBN-13: 9781316518984
ISBN-10: 1316518981
Language: English
Features / Extras: Hardcover, straight laminated spine
Binding: Hardcover
Author: Wright, Stephen J.
Publisher: Cambridge University Press
Dimensions: 235 x 157 x 17 mm
By: Stephen J. Wright
Publication date: 30.03.2022
Weight: 0.503 kg
preigu-id: 119698491