Decorative items are not included in the scope of delivery.
Language: English
55,40 €*
Free shipping via Post / DHL
Delivery time: 1-2 weeks
Description
In just a few years, deep reinforcement learning (DRL) systems such as DeepMind's DQN have yielded remarkable results. This hybrid approach to machine learning shares many similarities with human learning: unsupervised self-learning, self-discovery of strategies, use of memory, a balance of exploration and exploitation, and exceptional flexibility. Exciting in its own right, DRL may presage even more remarkable advances in general artificial intelligence.
Deep Reinforcement Learning in Python: A Hands-On Introduction is the fastest and most accessible way to get started with DRL. The authors teach through practical hands-on examples presented with their SLM Lab framework. While providing a solid theoretical overview, they emphasize building intuition for the theory rather than a deep mathematical treatment of results. Coverage includes:
- Components of an RL system, including environment and agents
- Value-based algorithms: SARSA, Q-learning and extensions, offline learning
- Policy-based algorithms: REINFORCE and extensions; comparisons with value-based techniques
- Combined methods: Actor-Critic and extensions; scalability through async methods
- Agent evaluation
- Advanced and experimental techniques, and more
The accessible, hands-on, full-color tutorial for building practical deep reinforcement learning solutions
- How to achieve breakthrough machine learning performance by combining deep neural networks with reinforcement learning
- Reduces the learning curve by relying on the authors' SLM Lab framework: requires less upfront theory, math, and programming expertise
- Provides well-designed, modularized, and tested code examples with complete experimental data sets to illuminate the underlying algorithms
- Includes case studies, practical tips, definitions, and other aids to learning and mastery
- Prepares readers for exciting future advances in artificial general intelligence
About the Authors
Laura Graesser is a research software engineer working in robotics at Google. She holds a master’s degree in computer science from New York University, where she specialized in machine learning.
Wah Loon Keng is an AI engineer at Machine Zone, where he applies deep reinforcement learning to industrial problems. He has a background in both theoretical physics and computer science.
Table of Contents
Foreword xix
Preface xxi
Acknowledgments xxv
About the Authors xxvii
Chapter 1: Introduction to Reinforcement Learning 1
1.1 Reinforcement Learning 1
1.2 Reinforcement Learning as MDP 6
1.3 Learnable Functions in Reinforcement Learning 9
1.4 Deep Reinforcement Learning Algorithms 11
1.5 Deep Learning for Reinforcement Learning 17
1.6 Reinforcement Learning and Supervised Learning 19
1.7 Summary 21
Part I: Policy-Based and Value-Based Algorithms 23
Chapter 2: REINFORCE 25
2.1 Policy 26
2.2 The Objective Function 26
2.3 The Policy Gradient 27
2.4 Monte Carlo Sampling 30
2.5 REINFORCE Algorithm 31
2.6 Implementing REINFORCE 33
2.7 Training a REINFORCE Agent 44
2.8 Experimental Results 47
2.9 Summary 51
2.10 Further Reading 51
2.11 History 51
Chapter 3: SARSA 53
3.1 The Q- and V-Functions 54
3.2 Temporal Difference Learning 56
3.3 Action Selection in SARSA 65
3.4 SARSA Algorithm 67
3.5 Implementing SARSA 69
3.6 Training a SARSA Agent 74
3.7 Experimental Results 76
3.8 Summary 78
3.9 Further Reading 79
3.10 History 79
Chapter 4: Deep Q-Networks (DQN) 81
4.1 Learning the Q-Function in DQN 82
4.2 Action Selection in DQN 83
4.3 Experience Replay 88
4.4 DQN Algorithm 89
4.5 Implementing DQN 91
4.6 Training a DQN Agent 96
4.7 Experimental Results 99
4.8 Summary 101
4.9 Further Reading 102
4.10 History 102
Chapter 5: Improving DQN 103
5.1 Target Networks 104
5.2 Double DQN 106
5.3 Prioritized Experience Replay (PER) 109
5.4 Modified DQN Implementation 112
5.5 Training a DQN Agent to Play Atari Games 123
5.6 Experimental Results 128
5.7 Summary 132
5.8 Further Reading 132
Part II: Combined Methods 133
Chapter 6: Advantage Actor-Critic (A2C) 135
6.1 The Actor 136
6.2 The Critic 136
6.3 A2C Algorithm 141
6.4 Implementing A2C 143
6.5 Network Architecture 148
6.6 Training an A2C Agent 150
6.7 Experimental Results 157
6.8 Summary 161
6.9 Further Reading 162
6.10 History 162
Chapter 7: Proximal Policy Optimization (PPO) 165
7.1 Surrogate Objective 165
7.2 Proximal Policy Optimization (PPO) 174
7.3 PPO Algorithm 177
7.4 Implementing PPO 179
7.5 Training a PPO Agent 182
7.6 Experimental Results 188
7.7 Summary 192
7.8 Further Reading 192
Chapter 8: Parallelization Methods 195
8.1 Synchronous Parallelization 196
8.2 Asynchronous Parallelization 197
8.3 Training an A3C Agent 200
8.4 Summary 203
8.5 Further Reading 204
Chapter 9: Algorithm Summary 205
Part III: Practical Details 207
Chapter 10: Getting Deep RL to Work 209
10.1 Software Engineering Practices 209
10.2 Debugging Tips 218
10.3 Atari Tricks 228
10.4 Deep RL Almanac 231
10.5 Summary 238
Chapter 11: SLM Lab 239
11.1 Algorithms Implemented in SLM Lab 239
11.2 Spec File 241
11.3 Running SLM Lab 246
11.4 Analyzing Experiment Results 247
11.5 Summary 249
Chapter 12: Network Architectures 251
12.1 Types of Neural Networks 251
12.2 Guidelines for Choosing a Network Family 256
12.3 The Net API 262
12.4 Summary 271
12.5 Further Reading 271
Chapter 13: Hardware 273
13.1 Computer 273
13.2 Data Types 278
13.3 Optimizing Data Types in RL 280
13.4 Choosing Hardware 285
13.5 Summary 285
Part IV: Environment Design 287
Chapter 14: States 289
14.1 Examples of States 289
14.2 State Completeness 296
14.3 State Complexity 297
14.4 State Information Loss 301
14.5 Preprocessing 306
14.6 Summary 313
Chapter 15: Actions 315
15.1 Examples of Actions 315
15.2 Action Completeness 318
15.3 Action Complexity 319
15.4 Summary 323
15.5 Further Reading: Action Design in Everyday Things 324
Chapter 16: Rewards 327
16.1 The Role of Rewards 327
16.2 Reward Design Guidelines 328
16.3 Summary 332
Chapter 17: Transition Function 333
17.1 Feasibility Checks 333
17.2 Reality Check 335
17.3 Summary 337
Epilogue 338
Appendix A: Deep Reinforcement Learning Timeline 343
Appendix B: Example Environments 345
B.1 Discrete Environments 346
B.2 Continuous Environments 350
References 353
Index 363
Details
Year of publication: 2019
Subject area: Programming Languages
Genre: Computer Science
Category: Science & Technology
Medium: Paperback
Contents: Softcover
ISBN-13: 9780135172384
ISBN-10: 0135172381
Language: English
Binding: Softcover
Authors: Graesser, Laura; Keng, Wah Loon
Publisher: Pearson Education
Dimensions: 176 x 231 x 17 mm
By: Laura Graesser (et al.)
Publication date: 05.12.2019
Weight: 0.64 kg