Data Orchestration in Deep Learning Accelerators
Paperback by Tushar Krishna (et al.)
Language: English

€64.19*

incl. VAT

Free shipping via Post / DHL

Currently unavailable

Description
This Synthesis Lecture focuses on techniques for efficient data orchestration within DNN accelerators. The end of Moore's Law, coupled with the rapid growth of deep learning and other AI applications, has led to the emergence of custom Deep Neural Network (DNN) accelerators for energy-efficient inference on edge devices. Modern DNNs have millions of parameters and involve billions of computations, which necessitates extensive data movement from memory to on-chip processing engines. Because the cost of data movement today surpasses the cost of the computation itself, DNN accelerators require careful orchestration of data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM. The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with data orchestration challenges for compressed and sparse DNNs and with future trends. The target audience is students, engineers, and researchers interested in designing high-performance and low-energy accelerators for DNN inference.
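To make the data-reuse argument concrete, the toy Python sketch below (not an excerpt from the book; matrix sizes, tile sizes, and function names are illustrative assumptions) counts DRAM accesses for a naive versus a tiled matrix multiplication. Tiling stages blocks of the operands in an on-chip buffer so each DRAM word is reused many times, which is the kind of dataflow and buffer-hierarchy trade-off the book formalizes.

# Toy illustration (not from the book): DRAM-access counts for a naive
# vs. tiled matrix multiplication C = A @ B. Staging tiles in an on-chip
# buffer lets each fetched word be reused, cutting external DRAM traffic.

def naive_dram_accesses(M, N, K):
    # Every multiply-accumulate reads A[m][k] and B[k][n] and updates
    # C[m][n] directly in DRAM: 3 accesses per MAC.
    return 3 * M * N * K

def tiled_dram_accesses(M, N, K, Tm, Tn, Tk):
    # A (Tm x Tk) tile of A, a (Tk x Tn) tile of B, and a (Tm x Tn) tile
    # of C are staged in the on-chip buffer once per tile iteration.
    tiles_m, tiles_n, tiles_k = M // Tm, N // Tn, K // Tk
    a_reads = tiles_m * tiles_n * tiles_k * (Tm * Tk)
    b_reads = tiles_m * tiles_n * tiles_k * (Tk * Tn)
    c_rw = tiles_m * tiles_n * 2 * (Tm * Tn)  # read + write C once per output tile
    return a_reads + b_reads + c_rw

if __name__ == "__main__":
    M = N = K = 512
    print("naive:", naive_dram_accesses(M, N, K))               # 402,653,184
    print("tiled:", tiled_dram_accesses(M, N, K, 64, 64, 64))   # 4,718,592

With 512 x 512 matrices and 64 x 64 tiles, the tiled loop nest needs roughly 85x fewer DRAM accesses (4,718,592 versus 402,653,184), illustrating why data orchestration dominates accelerator efficiency and energy.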
About the Authors
Tushar Krishna is an Assistant Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. He received a Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2014. Prior to that, he received an M.S.E. in Electrical Engineering from Princeton University in 2009 and a B.Tech. in Electrical Engineering from the Indian Institute of Technology (IIT), Delhi in 2007. Before joining Georgia Tech in 2015, he worked as a researcher in the VSSAD Group at Intel in Massachusetts. Dr. Krishna's research spans computer architecture, interconnection networks, networks-on-chip (NoC), and deep learning accelerators, with a focus on optimizing data movement in modern computing systems. Three of his papers have been selected for IEEE Micro's Top Picks from Computer Architecture, one more received an honorable mention, and three have won best paper awards. He received the National Science Foundation (NSF) CRII award in 2018 and both a Google Faculty Award and a Facebook Faculty Award in 2019.
Hyoukjun Kwon is a research scientist at Facebook AR/VR. He received his Ph.D. in Computer Science from the Georgia Institute of Technology in 2020, advised by Dr. Tushar Krishna. He received B.S. degrees in Environmental Materials Science and in Computer Science and Engineering from Seoul National University in 2015. His research interests include communication-centric DNN accelerator designs, modeling of DNN accelerator architecture and mapping, NoCs for accelerators, and co-optimization of DNN model, mapping, and accelerator architecture. He is actively leading the development of multiple open-source tools and RTL designs in the DNN accelerator domain, including MAESTRO, MAERI, Microswitch NoC, and OpenSMART. One of his papers was selected for IEEE Micro's Top Picks from computer architecture in 2019, one received an honorable mention in 2018, and another won the best paper award at HPCA 2020.
Angshuman Parashar is a Senior Research Scientist at NVIDIA. His research interests are in building, evaluating, and programming spatial and data-parallel architectures, with a present focus on automated mapping of machine learning algorithms onto architectures based on explicit decoupled data orchestration. Prior to NVIDIA, he was a member of the VSSAD group at Intel, where he worked with a small team of experts in architecture, languages, workloads, and implementation to design and evaluate a new spatial architecture. Dr. Parashar received his Ph.D. in Computer Science and Engineering from the Pennsylvania State University in 2007, and his B.Tech. in Computer Science and Engineering from the Indian Institute of Technology, Delhi in 2002.
Michael Pellauer is a Senior Research Scientist at NVIDIA. His research interest is building domain-specific accelerators, with a special emphasis on deep learning and sparse tensor algebra. Prior to NVIDIA, he was a member of the VSSAD group at Intel, where he performed research and advanced development on customized spatial accelerators. Dr. Pellauer holds a Ph.D. from the Massachusetts Institute of Technology in Cambridge, Massachusetts (2010), a Master's from Chalmers University of Technology in Gothenburg, Sweden (2003), and a Bachelor's from Brown University in Providence, Rhode Island (1999).
Ananda Samajdar is a Ph.D. student in the School of Electrical and Computer Engineering (ECE) at the Georgia Institute of Technology. He completed his B.Tech. (Hons.) in Electronics and Communication Engineering (ECE) at the Indian Institute of Information Technology, Allahabad, India (IIIT-A) in 2013. Before joining Georgia Tech, Anand worked as a VLSI design engineer at Qualcomm Bangalore for three years. Anand's research interests include designing custom architectures for efficient deep learning systems. He has authored a number of papers in top-tier computer architecture conferences. Two of his papers received honorable mentions in IEEE Micro's Top Picks 2019, and one won the best paper award at HPCA 2020. He is also the recipient of the silver medal in the ACM Student Research Competition at ASPLOS 2019.
Table of Contents
Preface.- Acknowledgments.- Introduction to Data Orchestration.- Dataflow and Data Reuse.- Buffer Hierarchies.- Networks-on-Chip.- Putting it Together: Architecting a DNN Accelerator.- Modeling Accelerator Design Space.- Orchestrating Compressed-Sparse Data.- Conclusions.- Bibliography.- Authors' Biographies.
Details
Publication year: 2020
Subject area: Communications engineering
Genre: Technology
Category: Science & Technology
Medium: Paperback
Series: Synthesis Lectures on Computer Architecture
Contents: xvii, 146 pages
ISBN-13: 9783031006395
ISBN-10: 3031006399
Language: English
Format / Extras: Paperback
Binding: Softcover (paperback)
Authors: Krishna, Tushar
Kwon, Hyoukjun
Samajdar, Ananda
Pellauer, Michael
Parashar, Angshuman
Publisher: Springer International Publishing AG
Dimensions: 235 x 191 x 10 mm
By: Tushar Krishna (et al.)
Publication date: August 18, 2020
Weight: 0.327 kg
Article ID: 121975119