Language: English
59,25 €*
Free shipping via Post / DHL
Delivery time: 1-2 weeks
Description
Explore how Delta brings reliability, performance, and governance to your data lake and all the AI and BI use cases built on top of it
Key Features:
Learn Delta's core concepts and features as well as what makes it a perfect match for data engineering and analysis
Solve business challenges of different industry verticals using a scenario-based approach
Make optimal choices by understanding the various tradeoffs provided by Delta
Book Description:
Delta helps you generate reliable insights at scale and simplifies architecture around data pipelines, allowing you to focus primarily on refining the use cases being worked on. This is especially important when you consider that existing architecture is frequently reused for new use cases.
In this book, you'll learn about the principles of distributed computing, data modeling techniques, and big data design patterns and templates that help solve end-to-end data flow problems for common scenarios and are reusable across use cases and industry verticals. You'll also learn how to recover from errors and the best practices around handling structured, semi-structured, and unstructured data using Delta. After that, you'll get to grips with features such as ACID transactions on big data, disciplined schema evolution, time travel to help rewind a dataset to a different time or version, and unified batch and streaming capabilities that will help you build agile and robust data products.
By the end of this Delta book, you'll be able to use Delta as the foundational block for creating analytics-ready data that fuels all AI/BI use cases.
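To make the features mentioned above (ACID writes, schema evolution, and time travel) concrete, here is a minimal, illustrative PySpark sketch. It is not an excerpt from the book: it assumes a local Spark session with the delta-spark package on the classpath, and the table path and column names are hypothetical examples.

```python
# Illustrative sketch only (assumes the delta-spark package is installed and on
# the Spark classpath); the table path and column names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("delta-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

path = "/tmp/demo_delta"  # hypothetical table location

# Write version 0 of a small Delta table.
spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"]) \
    .write.format("delta").mode("overwrite").save(path)

# Schema evolution: append a DataFrame that carries an extra column.
spark.createDataFrame([(3, "c", "note")], ["id", "value", "comment"]) \
    .write.format("delta").mode("append") \
    .option("mergeSchema", "true").save(path)

# Time travel: read the table exactly as it was at version 0.
spark.read.format("delta").option("versionAsOf", 0).load(path).show()
```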
What You Will Learn:
Explore the key challenges of traditional data lakes
Appreciate the unique features of Delta that come out of the box
Address reliability, performance, and governance concerns using Delta
Analyze the open data format for an extensible and pluggable architecture
Handle multiple use cases to support BI, AI, streaming, and data discovery
Discover how common data and machine learning design patterns are executed on Delta
Build and deploy data and machine learning pipelines at scale using Delta
Who this book is for:
Data engineers, data scientists, ML practitioners, BI analysts, or anyone in the data domain working with big data will be able to put their knowledge to work with this practical guide to executing pipelines and supporting diverse use cases using the Delta protocol. Basic knowledge of SQL, Python programming, and Spark is required to get the most out of this book.
About the Author
Anindita Mahapatra is a Solutions Architect at Databricks in the data and AI space, helping clients across all industry verticals reap value from their data infrastructure [...] teaches a data engineering and analytics course at Harvard University as part of its extension school [...] has extensive big data and Hadoop consulting experience from Thinkbig/Teradata, prior to which she managed the development of algorithmic app discovery and promotion for both Nokia and Microsoft [...] holds a master's degree in Liberal Arts and Management from Harvard Extension School, a master's in Computer Science from Boston University, and a bachelor's in Computer Science from BITS Pilani, India.
Details
Year of publication: 2022
Genre: Computer science
Section: Natural Sciences & Technology
Medium: Paperback
ISBN-13: 9781801814867
ISBN-10: 1801814864
Language: English
Format / supplement: Paperback
Binding: Softcover
Author: Mahapatra, Anindita
Publisher: Packt Publishing
Dimensions: 235 x 191 x 18 mm
By: Anindita Mahapatra
Publication date: 29.07.2022
Weight: 0.625 kg