An Elementary Introduction to Statistical Learning Theory
Book by Sanjeev Kulkarni (et al.)
Language: English

129.50 €

incl. VAT

Free shipping via Post / DHL

Currently unavailable

Description
A thought-provoking look at statistical learning theory and its role in understanding human learning and inductive reasoning

A joint endeavor from leading researchers in the fields of philosophy and electrical engineering, An Elementary Introduction to Statistical Learning Theory is a comprehensive and accessible primer on the rapidly evolving fields of statistical pattern recognition and statistical learning theory. Explaining these areas at a level and in a way that is not often found in other books on the topic, the authors present the basic theory behind contemporary machine learning and uniquely utilize its foundations as a framework for philosophical thinking about inductive inference.

Promoting the fundamental goal of statistical learning, namely knowing what is achievable and what is not, this book demonstrates the value of a systematic methodology together with the techniques needed to evaluate the performance of a learning system. First, an introduction to machine learning is presented that includes brief discussions of applications such as image recognition, speech recognition, medical diagnostics, and statistical arbitrage. To enhance accessibility, two chapters on relevant aspects of probability theory are provided. Subsequent chapters cover topics such as the pattern recognition problem, the optimal Bayes decision rule, the nearest neighbor rule, kernel rules, neural networks, support vector machines, and boosting.
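
To give a concrete flavor of the decision rules listed above, here is a minimal sketch of the plain nearest neighbor rule in Python. It is not code from the book; the function name and the toy feature vectors and labels are invented for this illustration. The rule simply assigns a query point the label of its closest training example.

    import numpy as np

    def nearest_neighbor_predict(train_x, train_y, query):
        # Compute Euclidean distances from the query to every training point
        # and return the label of the closest one (the 1-nearest-neighbor rule).
        distances = np.linalg.norm(train_x - query, axis=1)
        return train_y[np.argmin(distances)]

    # Toy two-dimensional feature vectors with binary labels.
    train_x = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
    train_y = np.array([0, 1, 0, 1])

    print(nearest_neighbor_predict(train_x, train_y, np.array([0.15, 0.25])))  # prints 0
    print(nearest_neighbor_predict(train_x, train_y, np.array([0.85, 0.75])))  # prints 1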

Appendices throughout the book explore the relationship between the discussed material and related topics from mathematics, philosophy, psychology, and statistics, drawing insightful connections between problems in these areas and statistical learning theory. All chapters conclude with a summary section, a set of practice questions, and a reference section that supplies historical notes and additional resources for further study.

An Elementary Introduction to Statistical Learning Theory is an excellent book for courses on statistical learning theory, pattern recognition, and machine learning at the upper-undergraduate and graduate levels. It also serves as an introductory reference for researchers and practitioners in the fields of engineering, computer science, philosophy, and cognitive science who would like to further their knowledge of the topic.
About the Authors

SANJEEV KULKARNI, PhD, is Professor in the Department of Electrical Engineering at Princeton University, where he is also an affiliated faculty member in the Department of Operations Research and Financial Engineering and the Department of Philosophy. Dr. Kulkarni has published widely on statistical pattern recognition, nonparametric estimation, machine learning, information theory, and other areas. A Fellow of the IEEE, he was awarded Princeton University's President's Award for Distinguished Teaching in 2007.

GILBERT HARMAN, PhD, is James S. McDonnell Distinguished University Professor in the Department of Philosophy at Princeton University. A Fellow of the Cognitive Science Society, he is the author of more than fifty published articles in his areas of research interest, which include ethics, statistical learning theory, psychology of reasoning, and logic.

Table of Contents
Preface.

1. Introduction: Classification, Learning, Features, Applications.

1.1 Scope.

1.2 Why Machine Learning?

1.3 Some Applications.

1.4 Measurements, Features, and Feature Vectors.

1.5 The Need for Probability.

1.6 Supervised Learning.

1.7 Summary.

1.8 Appendix: Induction.

1.9 Questions.

1.10 References.

2. Probability.

2.1 Probability of Some Basic Events.

2.2 Probabilities of Compound Events.

2.3 Conditional Probability.

2.4 Drawing Without Replacement.

2.5 A Classic Birthday Problem.

2.6 Random Variables.

2.7 Expected Value.

2.8 Variance.

2.9 Summary.

2.10 Appendix: Interpretations of Probability.

2.11 Questions.

2.12 References.

3. Probability Densities.

3.1 An Example in Two Dimensions.

3.2 Random Numbers in [0, 1].

3.3 Density Functions.

3.4 Probability Densities in Higher Dimensions.

3.5 Joint and Conditional Densities.

3.6 Expected Value and Variance.

3.7 Laws of Large Numbers.

3.8 Summary.

3.9 Appendix: Measurability.

3.10 Questions.

3.11 References.

4. The Pattern Recognition Problem.

4.1 A Simple Example.

4.2 Decision Rules.

4.3 Success Criterion.

4.4 The Best Classifier: Bayes Decision Rule.

4.5 Continuous Features and Densities.

4.6 Summary.

4.7 Appendix: Uncountably Many.

4.8 Questions.

4.9 References.

5. The Optimal Bayes Decision Rule.

5.1 Bayes Theorem.

5.2 Bayes Decision Rule.

5.3 Optimality and Some Comments.

5.4 An Example.

5.5 Bayes Theorem and Decision Rule With Densities.

5.6 Summary.

5.7 Appendix: Defining Conditional Probability.

5.8 Questions.

5.9 References.

6. Learning from Examples.

6.1 Lack of Knowledge of Distributions.

6.2 Training Data.

6.3 Assumptions on the Training Data.

6.4 A Brute Force Approach to Learning.

6.5 Curse of Dimensionality, Inductive Bias, and No Free Lunch.

6.6 Summary.

6.7 Appendix: What Sort of Learning?

6.8 Questions.

6.9 References.

7. The Nearest Neighbor Rule.

7.1 The Nearest Neighbor Rule.

7.2 Performance of the Nearest Neighbor Rule.

7.3 Intuition and Proof Sketch of Performance.

7.4 Using More Neighbors.

7.5 Summary.

7.6 Appendix: When People Use Nearest Neighbor Reasoning.

7.7 Questions.

7.8 References.

8. Kernel Rules.

8.1 Motivation.

8.2 A Variation on Nearest Neighbor Rules.

8.3 Kernel Rules.

8.4 Universal Consistency of Kernel Rules.

8.5 Potential Functions.

8.6 More General Kernels.

8.7 Summary.

8.8 Appendix: Kernels, Similarity, and Features.

8.9 Questions.

8.10 References.

9. Neural Networks: Perceptrons.

9.1 Multilayer Feed Forward Networks.

9.2 Neural Networks for Learning and Classification.

9.3 Perceptrons.

9.4 Learning Rule for Perceptrons.

9.5 Representational Capabilities of Perceptrons.

9.6 Summary.

9.7 Appendix: Models of Mind.

9.8 Questions.

9.9 References.

10. Multilayer Networks.

10.1 Representation Capabilities of Multilayer Networks.

10.2 Learning and Sigmoidal Outputs.

10.3 Training Error and Weight Space.

10.4 Error Minimization by Gradient Descent.

10.5 Backpropagation.

10.6 Derivation of Backpropagation Equations.

10.7 Summary.

10.8 Appendix: Gradient Descent and Reasoning Toward Reflective Equilibrium.

10.9 Questions.

10.10 References.

11. PAC Learning.

11.1 Class of Decision Rules.

11.2 Best Rule From a Class.

11.3 Probably Approximately Correct Criterion.

11.4 PAC Learning.

11.5 Summary.

11.6 Appendix: Identifying Indiscernibles.

11.7 Questions.

11.8 References.

12. VC Dimension.

12.1 Approximation and Estimation Errors.

12.2 Shattering.

12.3 VC Dimension.

12.4 Learning Result.

12.5 Some Examples.

12.6 Application to Neural Nets.

12.7 Summary.

12.8 Appendix: VC Dimension and Popper Dimension.

12.9 Questions.

12.10 References.

13. Infinite VC Dimension.

13.1 A Hierarchy of Classes and Modified PAC Criterion.

13.2 Misfit Versus Complexity Tradeoff.

13.3 Learning Results.

13.4 Inductive Bias and Simplicity.

13.5 Summary.

13.6 Appendix: Uniform Convergence and Universal Consistency.

13.7 Questions.

13.8 References.

14. The Function Estimation Problem.

14.1 Estimation.

14.2 Success Criterion.

14.3 Best Estimator: Regression Function.

14.4 Learning in Function Estimation.

14.5 Summary.

14.6 Appendix: Regression Toward the Mean.

14.7 Questions.

14.8 References.

15. Learning Function Estimation.

15.1 Review of the Function Estimation/Regression Problem.

15.2 Nearest Neighbor Rules.

15.3 Kernel Methods.

15.4 Neural Network Learning.

15.5 Estimation with a Fixed Class of Functions.

15.6 Shattering, Pseudo-Dimension, and Learning.

15.7 Conclusion.

15.8 Appendix: Accuracy, Precision, Bias, and Variance in Estimation.

15.9 Questions.

15.10 References.

16. Simplicity.

16.1 Simplicity in Science.

16.2 Ordering Hypotheses.

16.3 Two Examples.

16.4 Simplicity as Simplicity of Representation.

16.5 Pragmatic Theory of Simplicity.

16.6 Simplicity and Global Indeterminacy.

16.7 Summary.

16.8 Appendix: Basic Science and Statistical Learning Theory.

16.9 Questions.

16.10 References.

17. Support Vector Machines.

17.1 Mapping the Feature Vectors.

17.2 Maximizing the Margin.

17.3 Optimization and Support Vectors.

17.4 Implementation and Connection to Kernel Methods.

17.5 Details of the Optimization Problem.

17.6 Summary.

17.7 Appendix: Computation.

17.8 Questions.

17.9 References.

18. Boosting.

18.1 Weak Learning Rules.

18.2 Combining Classifiers.

18.3 Distribution on the Training Examples.

18.4 The AdaBoost Algorithm.

18.5 Performance on Training Data.

18.6 Generalization Performance.

18.7 Summary.

18.8 Appendix: Ensemble Methods.

18.9 Questions.

18.10 References.
Details
Year of publication: 2011
Subject area: Probability theory
Genre: Mathematics
Category: Science & Technology
Medium: Book
Pages: 232
Contents: 232 pages
ISBN-13: 9780470641835
ISBN-10: 0470641835
Language: English
Binding: Hardcover
Authors: Kulkarni, Sanjeev
Harman, Gilbert
Publisher: John Wiley & Sons
Dimensions: 240 x 161 x 17 mm
By: Sanjeev Kulkarni (et al.)
Publication date: June 15, 2011
Weight: 0.522 kg