Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs
Paperback by Sinan Ozdemir
Language: English

39,10 €*

incl. VAT

Free shipping via Post / DHL

In stock, delivery time 1-2 working days

Description
The Practical, Step-by-Step Guide to Using LLMs at Scale in Projects and Products

Large Language Models (LLMs) like ChatGPT are demonstrating breathtaking capabilities, but their size and complexity have deterred many practitioners from applying them. In Quick Start Guide to Large Language Models, pioneering data scientist and AI entrepreneur Sinan Ozdemir clears away those obstacles and provides a guide to working with, integrating, and deploying LLMs to solve practical problems. Ozdemir brings together all you need to get started, even if you have no direct experience with LLMs: step-by-step instructions, best practices, real-world case studies, hands-on exercises, and more. Along the way, he shares insights into LLMs' inner workings to help you optimize model choice, data formats, parameters, and performance. You'll find even more resources on the companion website, including sample datasets and code for working with open- and closed-source LLMs such as those from OpenAI (GPT-4 and ChatGPT), Google (BERT, T5, and Bard), EleutherAI (GPT-J and GPT-Neo), Cohere (the Command family), and Meta (BART and the LLaMA family).

- Learn key concepts: pre-training, transfer learning, fine-tuning, attention, embeddings, tokenization, and more
- Use APIs and Python to fine-tune and customize LLMs for your requirements
- Build a complete neural/semantic information retrieval system and attach it to conversational LLMs for retrieval-augmented generation (a minimal sketch of this pattern follows the description)
- Master advanced prompt engineering techniques like output structuring, chain-of-thought, and semantic few-shot prompting
- Customize LLM embeddings to build a complete recommendation engine from scratch with user data
- Construct and fine-tune multimodal Transformer architectures using open-source LLMs
- Align LLMs using Reinforcement Learning from Human and AI Feedback (RLHF/RLAIF)
- Deploy prompts and custom fine-tuned LLMs to the cloud with scalability and evaluation pipelines in mind

"By balancing the potential of both open- and closed-source models, Quick Start Guide to Large Language Models stands as a comprehensive guide to understanding and using LLMs, bridging the gap between theoretical concepts and practical application." --Giada Pistilli, Principal Ethicist at HuggingFace

"A refreshing and inspiring resource. Jam-packed with practical guidance and clear explanations that leave you smarter about this incredible new field." --Pete Huang, author of The Neuron

Register your book for convenient access to downloads, updates, and/or corrections as they become available. See inside the book for details.
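For readers who want a feel for the retrieval-augmented generation pattern mentioned in the description, here is a minimal, illustrative Python sketch. It is not code from the book or its companion website; the embed() function below is a hypothetical placeholder standing in for a real embedding model (such as the OpenAI or open-source options the book covers).

```python
# Minimal sketch of retrieval-augmented generation (RAG):
# retrieve the most relevant documents, then build a grounded prompt.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: hash-seeded random unit vector (illustration only).
    A real system would call an embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

documents = [
    "Our refund policy allows returns within 30 days.",
    "Shipping is free for orders over 50 euros.",
    "Support is available Monday through Friday.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the query and return the top k."""
    q = embed(query)
    scores = doc_vectors @ q  # vectors are unit-length, so dot product = cosine
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

query = "Can I send an item back?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this assembled prompt would then be sent to a conversational LLM
```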
About the Author

Sinan Ozdemir is currently the founder and CTO of Shiba Technologies. Sinan is a former lecturer of Data Science at Johns Hopkins University and the author of multiple textbooks on data science and machine learning. Additionally, he is the founder of the recently acquired Kylie.ai, an enterprise-grade conversational AI platform with RPA capabilities. He holds a master's degree in Pure Mathematics from Johns Hopkins University and is based in San Francisco, CA.

Table of Contents

Foreword xv
Preface xvii
Acknowledgments xxi
About the Author xxiii

Part I: Introduction to Large Language Models 1

Chapter 1: Overview of Large Language Models 3
What Are Large Language Models? 4
Popular Modern LLMs 20
Domain-Specific LLMs 22
Applications of LLMs 23
Summary 29

Chapter 2: Semantic Search with LLMs 31
Introduction 31
The Task 32
Solution Overview 34
The Components 35
Putting It All Together 51
The Cost of Closed-Source Components 54
Summary 55

Chapter 3: First Steps with Prompt Engineering 57
Introduction 57
Prompt Engineering 57
Working with Prompts Across Models 65
Building a Q/A Bot with ChatGPT 69
Summary 74

Part II: Getting the Most Out of LLMs 75

Chapter 4: Optimizing LLMs with Customized Fine-Tuning 77
Introduction 77
Transfer Learning and Fine-Tuning: A Primer 78
A Look at the OpenAI Fine-Tuning API 82
Preparing Custom Examples with the OpenAI CLI 84
Setting Up the OpenAI CLI 87
Our First Fine-Tuned LLM 88
Case Study: Amazon Review Category Classification 93
Summary 95

Chapter 5: Advanced Prompt Engineering 97
Introduction 97
Prompt Injection Attacks 97
Input/Output Validation 99
Batch Prompting 103
Prompt Chaining 104
Chain-of-Thought Prompting 111
Revisiting Few-Shot Learning 113
Testing and Iterative Prompt Development 123
Summary 124

Chapter 6: Customizing Embeddings and Model Architectures 125
Introduction 125
Case Study: Building a Recommendation System 126
Summary 144

Part III: Advanced LLM Usage 145

Chapter 7: Moving Beyond Foundation Models 147
Introduction 147
Case Study: Visual Q/A 147
Case Study: Reinforcement Learning from Feedback 163
Summary 173

Chapter 8: Advanced Open-Source LLM Fine-Tuning 175
Introduction 175
Example: Anime Genre Multilabel Classification with BERT 176
Example: LaTeX Generation with GPT2 189
Sinan's Attempt at Wise Yet Engaging Responses: SAWYER 193
The Ever-Changing World of Fine-Tuning 206
Summary 207

Chapter 9: Moving LLMs into Production 209
Introduction 209
Deploying Closed-Source LLMs to Production 209
Deploying Open-Source LLMs to Production 210
Summary 225

Part IV: Appendices 227

Appendix A: LLM FAQs 229
Appendix B: LLM Glossary 233
Appendix C: LLM Application Archetypes 239

Index 243

Details
Year of publication: 2023
Genre: Computer Science
Subject area: Science & Technology
Format: Paperback
Pages: 251
Series: Addison-Wesley Data & Analytic
ISBN-13: 9780138199197
ISBN-10: 0138199191
Language: English
Binding: Paperback / softcover
Author: Ozdemir, Sinan
Publisher: Pearson
Dimensions: 228 x 176 x 20 mm
By: Sinan Ozdemir
Publication date: 11 September 2023
Weight: 0.496 kg
preigu-id: 126919589