Language:
English
€58.25*
Free shipping via Post / DHL
Delivery time: 1-2 weeks
Categories:
Description
Build, train, and deploy large machine learning models at scale in various domains such as computational fluid dynamics, genomics, autonomous vehicles, and numerical optimization using Amazon SageMaker
Key Features:
Understand the need for high-performance computing (HPC)
Build, train, and deploy large ML models with billions of parameters using Amazon SageMaker
Learn best practices and architectures for implementing ML at scale using HPC
Book Description:
Machine learning (ML) and high-performance computing (HPC) on AWS run compute-intensive workloads across industries and emerging applications. Their use cases span verticals such as computational fluid dynamics (CFD), genomics, and autonomous vehicles.
This book provides end-to-end guidance, starting with HPC concepts for storage and networking. It then progresses to working examples of how to process large datasets using SageMaker Studio and EMR. Next, you'll learn how to build, train, and deploy large models using distributed training. Later chapters guide you through deploying models to edge devices using SageMaker and IoT Greengrass, as well as performance optimization of ML models for low-latency use cases.
By the end of this book, you'll be able to build, train, and deploy your own large-scale ML application using HPC on AWS, following industry best practices and addressing the key pain points encountered in the application life cycle.
What You Will Learn:
Explore data management, storage, and fast networking for HPC applications
Focus on the analysis and visualization of a large volume of data using Spark
Train vision transformer models using SageMaker distributed training (a brief sketch follows this list)
Deploy and manage ML models at scale on the cloud and at the edge
Get to grips with performance optimization of ML models for low-latency workloads
Apply HPC to industry domains such as CFD, genomics, autonomous vehicles (AV), and numerical optimization
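To give a flavor of the distributed training workflow referenced above, here is a minimal sketch of launching a SageMaker training job with the SageMaker Python SDK's PyTorch estimator and the SageMaker data parallel library. The script name, IAM role ARN, S3 paths, instance type and count, and framework versions are illustrative assumptions, not examples taken from the book.

```python
# Minimal sketch: distributed training of a vision transformer on SageMaker.
# Script name, role ARN, and S3 paths below are hypothetical placeholders.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train_vit.py",  # hypothetical training script
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # placeholder
    instance_type="ml.p4d.24xlarge",  # GPU instances suited to data parallelism
    instance_count=2,                 # scale out across two nodes
    framework_version="1.12",
    py_version="py38",
    # Enable the SageMaker distributed data parallel library
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
    hyperparameters={"epochs": 10, "batch_size": 256},
)

# Start training against a dataset staged in S3 (placeholder bucket/prefix)
estimator.fit({"training": "s3://example-bucket/datasets/imagenet/train"})
```

The same estimator pattern extends to model parallelism and to larger clusters by adjusting the distribution configuration and instance count.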
Who this book is for:
The book begins with HPC concepts; however, it expects you to have prior machine learning knowledge. This book is for ML engineers and data scientists interested in advanced topics such as training large models on large datasets using distributed training on AWS, deploying models at scale, and performance optimization for low-latency use cases. Practitioners in fields such as numerical optimization, computational fluid dynamics, autonomous vehicles, and genomics, who require HPC to apply ML models to applications at scale, will also find the book useful.
About the Author
Mani Khanuja is a seasoned IT professional with over 17 years of software engineering experience. She has successfully led machine learning and artificial intelligence projects in various domains, such as forecasting, computer vision, and natural language processing. At AWS, she helps customers to build, train, and deploy large machine learning models at scale. She also specializes in data preparation, distributed model training, performance optimization, machine learning at the edge, and automating the complete machine learning life cycle to build repeatable and scalable applications.
Details
| Year of publication: | 2022 |
|---|---|
| Genre: | Computer science |
| Category: | Science & technology |
| Medium: | Paperback |
| ISBN-13: | 9781803237015 |
| ISBN-10: | 1803237015 |
| Language: | English |
| Format: | Paperback |
| Binding: | Softcover |
| Authors: | Khanuja, Mani; Sabir, Farooq; Subramanian, Shreyas |
| Publisher: | Packt Publishing |
| Dimensions: | 235 x 191 x 21 mm |
| By: | Mani Khanuja (et al.) |
| Publication date: | December 30, 2022 |
| Weight: | 0.712 kg |