Large Language Models: A Deep Dive
Bridging Theory and Practice
Book by Uday Kamath et al.
Language: English

80,24 €*

incl. VAT

Free shipping via Post / DHL

Currently unavailable

Description
Large Language Models (LLMs) have emerged as a cornerstone technology, transforming how we interact with information and redefining the boundaries of artificial intelligence. LLMs offer an unprecedented ability to understand, generate, and interact with human language in an intuitive and insightful manner, leading to transformative applications across domains like content creation, chatbots, search engines, and research tools. While fascinating, the complex workings of LLMs (their intricate architecture, underlying algorithms, and ethical considerations) require thorough exploration, creating a need for a comprehensive book on this subject.

This book provides an authoritative exploration of the design, training, evolution, and application of LLMs. It begins with an overview of pre-trained language models and Transformer architectures, laying the groundwork for understanding prompt-based learning techniques. Next, it dives into methods for fine-tuning LLMs, integrating reinforcement learning for value alignment, and the convergence of LLMs with computer vision, robotics, and speech processing. The book strongly emphasizes practical applications, detailing real-world use cases such as conversational chatbots, retrieval-augmented generation (RAG), and code generation. These examples are carefully chosen to illustrate the diverse and impactful ways LLMs are being applied in various industries and scenarios.
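
Since RAG features prominently among these use cases, a toy sketch of the pattern may help orient readers. The example below is illustrative only and not drawn from the book: it retrieves context with a naive bag-of-words overlap score (the DOCS list and the score and build_prompt helpers are hypothetical names) and stops short of the actual LLM call a real pipeline would make.

```python
# Illustrative sketch of the retrieval-augmented generation (RAG) pattern:
# retrieve the most relevant documents for a query, then prepend them to
# the prompt. The scoring here is a toy bag-of-words overlap; a real system
# would use dense embeddings and an actual LLM call (both assumed away here).
from collections import Counter

DOCS = [
    "LoRA adds small trainable matrices to frozen model weights.",
    "RLHF aligns model outputs with human preferences via a reward model.",
    "RAG grounds generation in documents retrieved at query time.",
]

def score(query: str, doc: str) -> int:
    """Count shared lowercase tokens between the query and a document."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def build_prompt(query: str, k: int = 2) -> str:
    """Retrieve the top-k documents and assemble a grounded prompt."""
    top = sorted(DOCS, key=lambda doc: score(query, doc), reverse=True)[:k]
    context = "\n".join(f"- {doc}" for doc in top)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How does RAG ground generation in retrieved documents?"))
```

A production pipeline would swap the overlap score for dense embeddings and send the assembled prompt to a generation model; the book's RAG chapter treats these components in depth.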

Readers will gain insights into operationalizing and deploying LLMs, from implementing modern tools and libraries to addressing challenges like bias and ethical implications. The book also introduces the cutting-edge realm of multimodal LLMs that can process audio, images, video, and robotic inputs. With hands-on tutorials for applying LLMs to natural language tasks, this thorough guide equips readers with both theoretical knowledge and practical skills for leveraging the full potential of large language models.

This comprehensive resource is appropriate for a wide audience: students, researchers and academics in AI or NLP, practicing data scientists, and anyone looking to grasp the essence and intricacies of LLMs.

Key Features:

Over 100 techniques and state-of-the-art methods, including pre-training, prompt-based tuning, instruction tuning, parameter-efficient and compute-efficient fine-tuning (see the sketch after this list), end-user prompt engineering, and building and optimizing retrieval-augmented generation (RAG) systems, along with strategies for aligning LLMs with human values using reinforcement learning

Over 200 datasets compiled in one place, covering everything from pre-training to multimodal tuning, providing a robust foundation for diverse LLM applications

Over 50 strategies to address key ethical issues such as hallucination, toxicity, bias, fairness, and privacy, with comprehensive methods for measuring, evaluating, and mitigating these challenges to ensure responsible LLM deployment

Over 200 benchmarks covering LLM performance across various tasks, ethical considerations, and multimodal applications, plus more than 50 evaluation metrics for the LLM lifecycle

Nine detailed tutorials that guide readers through pre-training, fine-tuning, alignment tuning, bias mitigation, multimodal training, and deploying large language models using tools and libraries compatible with Google Colab, ensuring practical application of theoretical concepts

Over 100 practical tips for data scientists and practitioners, offering implementation details, tricks, and tools to successfully navigate the LLM lifecycle and accomplish tasks efficiently
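
To make "parameter-efficient fine-tuning" concrete, here is a minimal LoRA-style layer in plain PyTorch. It is a generic sketch of the technique, not code from the book: the pretrained weight is frozen, and only two small low-rank matrices (A and B, hypothetical names here) are trained, scaled by alpha/r.

```python
# Minimal sketch of a LoRA-style adapter (a parameter-efficient fine-tuning
# technique): the pretrained weight stays frozen, and only the small
# low-rank factors A and B are trained, so y = base(x) + (x A^T) B^T * (alpha/r).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim, bias=False)
        self.base.weight.requires_grad_(False)                 # freeze pretrained weight
        self.A = nn.Parameter(torch.randn(r, in_dim) * 0.01)   # low-rank factor A
        self.B = nn.Parameter(torch.zeros(out_dim, r))         # B starts at zero
        self.scale = alpha / r                                 # LoRA scaling

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the trainable low-rank update.
        return self.base(x) + (x @ self.A.T) @ self.B.T * self.scale

layer = LoRALinear(768, 768)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} of {total}")
```

With r = 8 on a 768 x 768 layer, roughly 12k of about 600k parameters remain trainable, which is precisely the saving that makes such methods attractive.
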
About the Authors

Uday Kamath has 25 years of experience in analytical development and a Ph.D. in scalable machine learning. His significant contributions span numerous journals, conferences, books, and patents. Notable books include Applied Causal Inference, Explainable Artificial Intelligence, Transformers for Machine Learning, Deep Learning for NLP and Speech Recognition, Mastering Java Machine Learning, and Machine Learning: End-to-End Guide for Java Developers. As Chief Analytics Officer for Smarsh, he spearheads data science and research in communication AI. He is also an active member of the boards of advisors of several organizations, including commercial companies such as Falkonry and academic institutions such as the Center for Human-Machine Partnership at GMU.

Kevin Keenan, Ph.D., has more than 15 years of experience in the application of statistics, data analytics, and machine learning to real-world data across academia, cybersecurity, and financial services. Within these domains, he has specialized in the rigorous application of the scientific method, especially within scrappy commercial environments, where data quality and completeness are never ideal but from which immense value and insight can still be derived. With 8+ years of experience using NLP to surface human-mediated corporate, legal, and regulatory risk from communications and deep-packet network traffic data, Kevin has successfully delivered machine learning applied to unstructured data at huge scale. He is the author of four published scientific papers in the academic field of evolutionary genetics, with over 1,400 citations, and is the author and maintainer of the open-source "diveRsity" project for population genetics research in the R statistical programming language.

Sarah Sorenson has spent over 15 years working in the software industry. She is a polyglot programmer, having done full-stack development in Python, Java, C#, and JavaScript at various times. She has spent the past ten years building machine learning capabilities and putting them into operation, primarily in the financial services domain. She has extensive experience in the application of machine learning to fraud detection and, most recently, has specialized in the development and deployment of NLP models for regulatory compliance on large-scale communications data at some of the world's top banks.

Garrett Somers has been doing data-intensive research for over 10 years. Trained as an astrophysicist, he began his career studying X-ray emissions from distant black holes, before authoring his dissertation on numerical models of the evolving structure, spin, and magnetic fields of stars. He is the first author of eight peer-reviewed astrophysics articles totaling over 400 citations and the contributing author of an additional twenty-seven (over 4,000 citations in total). In 2019, he began a career in data science, specializing in applications of natural language processing to behavioral analysis in large communication corpora.

Table of Contents

1. Large Language Models: An Introduction
2. Pre-trained Models
3. Prompt-based Learning
4. LLM Adaptation and Utilization
5. Tuning for LLM Alignment
6. LLM Challenges and Solutions
7. Retrieval-Augmented Generation
8. LLMs in Production
9. Multimodal LLMs
10. LLMs: Evolution and New Frontiers
Appendix

Details
Year of publication: 2024
Genre: Computer Science
Category: Science & Technology
Medium: Book
Contents: xxxiv, 472 pages
ISBN-13: 9783031656460
ISBN-10: 3031656466
Language: English
Format / Finish: Hardcover, rounded laminated spine
Binding: Hardcover
Authors: Kamath, Uday; Sorenson, Sarah; Somers, Garrett; Keenan, Kevin
Publisher: Springer Nature Switzerland; Springer International Publishing; Springer International Publishing AG
Dimensions: 260 x 183 x 33 mm
By: Uday Kamath et al.
Publication date: August 21, 2024
Weight: 1.143 kg
Item ID: 129529477