Language: English
67,15 €*
Free shipping via Post / DHL
Delivery time: 1-2 weeks
Description
Build, monitor, and manage real-time data pipelines to create data engineering infrastructure efficiently using open-source Apache projects
Key features:
Become well-versed in data architectures, data preparation, and data optimization with the help of practical examples
Design data models and learn how to extract, transform, and load (ETL) data using Python (a minimal sketch follows this list)
Schedule, automate, and monitor complex data pipelines in production
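For a sense of what the ETL work described above looks like in practice, here is a minimal extract-transform-load sketch. The file name, column names, and the choice of pandas with SQLite are assumptions made for this illustration; the book's own examples and tooling may differ.

```python
# Minimal ETL sketch in Python. Illustrative only: the file name, column names,
# and the pandas + SQLite tooling are assumptions for this example, not the
# book's own stack.
import sqlite3

import pandas as pd


def extract(path: str) -> pd.DataFrame:
    # Extract: read raw records from a CSV file.
    return pd.read_csv(path)


def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Transform: drop duplicates, remove rows missing a price, add a derived column.
    df = df.drop_duplicates().dropna(subset=["price"])
    df["price_rounded"] = df["price"].round(2)
    return df


def load(df: pd.DataFrame, db_path: str = "warehouse.db") -> None:
    # Load: write the cleaned records into a SQLite table.
    with sqlite3.connect(db_path) as conn:
        df.to_sql("events", conn, if_exists="replace", index=False)


if __name__ == "__main__":
    load(transform(extract("events.csv")))
```

The same three-function shape (extract, transform, load) is what an orchestrator can later schedule as separate tasks.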
Book Description
Data engineering provides the foundation for data science and analytics and forms an important part of any business. This book will help you explore the tools and methods used throughout the data engineering process, with Python as the working language.
The book will show you how to tackle challenges commonly faced in different aspects of data engineering. You'll start with an introduction to the basics of data engineering, along with the technologies and frameworks required to build data pipelines to work with large datasets. You'll learn how to transform and clean data and perform analytics to get the most out of your data. As you advance, you'll discover how to work with big data of varying complexity and production databases, and build data pipelines. Using real-world examples, you'll build architectures on which you'll learn how to deploy data pipelines.
By the end of this Python book, you'll have gained a clear understanding of data modeling techniques, and will be able to confidently build data engineering pipelines for tracking data, running quality checks, and making necessary changes in production.
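The quality checks mentioned here can be as lightweight as a handful of assertions run against staged data before it is loaded any further. The sketch below is illustrative only; the column names and the 1% missing-value threshold are hypothetical, not taken from the book.

```python
# A sketch of a simple pre-load quality check on staged data. The column names
# and thresholds are hypothetical; a real pipeline would use whatever checks
# the warehouse requires.
import pandas as pd


def quality_checks(df: pd.DataFrame) -> list[str]:
    """Return a list of failed checks for a staged DataFrame."""
    failures = []
    if df.empty:
        failures.append("staged dataset is empty")
    if df["id"].duplicated().any():
        failures.append("duplicate ids found")
    if (df["amount"] < 0).any():
        failures.append("negative amounts found")
    if df["customer"].isna().mean() > 0.01:
        failures.append("more than 1% of customer values are missing")
    return failures


staged = pd.DataFrame(
    {"id": [1, 2, 2], "amount": [10.0, -5.0, 7.5], "customer": ["a", None, "c"]}
)
problems = quality_checks(staged)
if problems:
    # In a pipeline this is where the load would be aborted or an alert raised.
    print("validation failed:", problems)
```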
What you will learn
Understand how data engineering supports data science workflows
Discover how to extract data from files and databases and then clean, transform, and enrich it
Configure processors for handling different file formats as well as both relational and NoSQL databases
Find out how to implement a data pipeline and dashboard to visualize results
Use staging and validation to check data before landing in the warehouse
Build real-time pipelines with staging areas that perform validation and handle failures
Get to grips with deploying pipelines in the production environment (a scheduling sketch follows this list)
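Scheduling, automating, and monitoring pipelines in production is usually the job of an orchestrator such as Apache Airflow, one of the open-source Apache projects the tagline refers to. The following DAG is a minimal, hypothetical sketch; the dag_id, task names, and daily schedule are assumptions for illustration, not examples from the book.

```python
# A minimal, hypothetical Apache Airflow DAG. The dag_id, task names, schedule,
# and callables are placeholders; they are not taken from the book.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from the source")


def transform():
    print("clean and enrich the data")


def load():
    print("write the data to the warehouse")


with DAG(
    dag_id="example_etl",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run the three steps in order; Airflow handles scheduling, retries, and monitoring.
    extract_task >> transform_task >> load_task
```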
Who this book is for
This book is for data analysts, ETL developers, and anyone who wants to get started in data engineering, transition into the field, or refresh their knowledge of data engineering with Python. It will also be useful for students planning a career in data engineering and for IT professionals preparing for such a transition. No previous knowledge of data engineering is required.
About the Author
Paul Crickard is the author of [...] Essentials, co-author of Mastering Geospatial Analysis with Python, and the Chief Information Officer at the Second Judicial District Attorney's Office in Albuquerque, New Mexico.
With a Master's degree in Political Science and a background in Community and Regional Planning, he brings rigorous social science theory and techniques to technology projects. He has presented at the New Mexico Big Data and Analytics Summit and the ExperienceIT NM Conference, and has given talks on data to the New Mexico Big Data Working Group, Sandia National Labs, and the New Mexico Geographic Information Council.
Details
Year of publication: 2020
Genre: Computer science
Category: Science & technology
Medium: Paperback
ISBN-13: 9781839214189
ISBN-10: 183921418X
Language: English
Finish / supplement: Paperback
Binding: Softcover / paperback
Author: Crickard, Paul
Publisher: Packt Publishing
Dimensions: 235 x 191 x 20 mm
By: Paul Crickard
Publication date: 23 October 2020
Weight: 0.665 kg