Language: English
69.54 €*
Free shipping via post / DHL
Currently not available
Description
Discover a variety of data-mining algorithms that are useful for selecting small sets of important features from among unwieldy masses of candidates, or extracting useful features from measured variables.
As a serious data miner you will often be faced with thousands of candidate features for your prediction or classification application, with most of the features being of little or no value. You'll know that many of these features may be useful only in combination with certain other features while being practically worthless alone or in combination with most others. Some features may have enormous predictive power, but only within a small, specialized area of the feature space. The problems that plague modern data miners are endless. This book helps you solve them by presenting modern feature selection techniques and the code to implement them. Some of these techniques are:
Forward selection component analysis
Local feature selection
Linking features and a target with a hidden Markov model
Improvements on traditional stepwise selection
Nominal-to-ordinal conversion
All algorithms are intuitively justified and supported by the relevant equations and explanatory material. The author also presents and explains complete, highly commented source code.
The example code is in C++ and CUDA C but Python or other code can be substituted; the algorithm is important, not the code that's used to write it.
What You Will Learn
Combine principal component analysis with forward and backward stepwise selection to identify a compact subset of a large collection of variables that captures the maximum possible variation within the entire set.
Identify features that may have predictive power over only a small subset of the feature domain. Such features can be profitably used by modern predictive models but may be missed by other feature selection methods.
Find an underlying hidden Markov model that controls the distributions of feature variables and the target simultaneously. The memory inherent in this method is especially valuable in high-noise applications such as prediction of financial markets.
Improve traditional stepwise selection in three ways: examine a collection of 'best-so-far' feature sets; test candidate features for inclusion with cross-validation to automatically and effectively limit model complexity; and at each step estimate the probability that the results so far, as well as the improvement obtained by adding a new variable, could be just the product of random good luck. (Minimal sketches of the cross-validated stepwise idea and the nominal-to-ordinal mapping follow this list.)
Take a potentially valuable nominal variable (a category or class membership) that is unsuitable for input to a prediction model, and assign to each category a sensible numeric value that can be used as a model input.
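To make the stepwise idea concrete, here is a minimal, hypothetical sketch of cross-validated forward selection in Python (using numpy and scikit-learn for brevity; it is not the book's C++ implementation and omits the best-so-far bookkeeping and the permutation-based luck estimates). Each candidate feature is scored by the cross-validated fit of a simple linear model, and selection stops as soon as the out-of-sample score no longer improves, which limits model complexity automatically.

```python
# Hypothetical sketch of cross-validated forward stepwise selection -- not the author's code.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def forward_stepwise(X, y, max_features=10, cv=5):
    """Greedy forward selection scored by cross-validated R^2."""
    selected, best_score = [], -np.inf
    remaining = list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        # Score every remaining candidate by the CV performance of the enlarged model.
        trial_scores = {
            j: cross_val_score(LinearRegression(), X[:, selected + [j]], y, cv=cv).mean()
            for j in remaining
        }
        j_best = max(trial_scores, key=trial_scores.get)
        if trial_scores[j_best] <= best_score:
            break  # no out-of-sample improvement: stop adding features
        best_score = trial_scores[j_best]
        selected.append(j_best)
        remaining.remove(j_best)
    return selected, best_score

# Toy demonstration: two informative columns hidden among noise columns.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + rng.normal(scale=0.5, size=200)
print(forward_stepwise(X, y))
```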
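Similarly, a minimal, hypothetical sketch of the nominal-to-ordinal idea (again Python rather than the book's C++, with made-up example data): each category is mapped to the mean target value observed for it, and a Monte Carlo permutation test estimates how likely an equally large spread of category means would be under pure chance.

```python
# Hypothetical sketch of nominal-to-ordinal conversion with a permutation test -- not the book's code.
import numpy as np

def category_means(cats, y):
    # The "sensible numeric value" assigned to each category: its mean target.
    return {c: y[cats == c].mean() for c in np.unique(cats)}

def spread(means):
    vals = np.array(list(means.values()))
    return vals.max() - vals.min()

def nominal_to_ordinal(cats, y, n_perm=1000, seed=0):
    cats, y = np.asarray(cats), np.asarray(y, dtype=float)
    mapping = category_means(cats, y)
    observed = spread(mapping)
    # Shuffle the targets to break any real relationship and count how often
    # the shuffled spread matches or beats the observed one.
    rng = np.random.default_rng(seed)
    hits = sum(
        spread(category_means(cats, rng.permutation(y))) >= observed
        for _ in range(n_perm)
    )
    p_value = (hits + 1) / (n_perm + 1)  # probability the observed spread is just luck
    return mapping, p_value

cats = np.array(["A", "A", "B", "B", "C", "C", "A", "B"])
y = np.array([1.0, 1.2, 3.1, 2.9, 0.2, 0.4, 0.9, 3.0])
print(nominal_to_ordinal(cats, y))
```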
Who This Book Is For
Intermediate to advanced data science programmers and analysts.
About the Author
Timothy Masters has a PhD in statistics and is an experienced programmer. His dissertation was in image analysis. His career moved in the direction of signal processing, and for the last 25 years he's been involved in the development of automated trading systems in various financial markets.
Summary
A novel expert-driven data-mining approach to algorithms in C++ and CUDA C
Author has been developing and using algorithms for over 20 years
Data mining is an important topic in big data and data science
Table of Contents
1) Introduction 7
2) Forward Selection Component Analysis 11
A) Introduction to Forward Selection Component Analysis 12
B) The Mathematics and Code Examples 16
Maximizing the Explained Variance 18
Code for the Variance Maximization Criterion 20
Backward Refinement 24
Multi-Threading Backward Refinement 28
Orthogonalizing Ordered Components 36
C) Putting It All Together 39
Components From a Forward-Only Subset 44
Components From a Backward Refined Subset 46
D) An Example With Contrived Variables 48
3) Local Feature Selection 53
A) Intuitive Overview of the Algorithm 54
What This Algorithm Reports 60
B) A Brief Detour: the Simplex Algorithm 62
The Linear Programming Problem 63
Interfacing to the Simplex Class 64
A Little More Detail 67
C) A More Rigorous Approach to LFS 69
Intra-Class and Inter-Class Separation 73
Computing the Weights 77
Maximizing Inter-Class Separation 81
Minimizing Intra-Class Separation 86
Testing a Trial Beta 88
A Quick Note on Threads 93
D) CUDA Computation of Weights 94
Integrating the CUDA Code Into the Algorithm 95
Initializing the CUDA Hardware 97
Computing Differences from the Current Case 100
Computing the Distance Matrix 102
Computing the Minimum Distances 104
Computing the Terms for the Weight Equation 112
Transposing the Term Matrix 113
Summing the Terms For the Weights 114
Moving the Weights to the Host 116
E) An Example of Local Feature Selection 117
F) A Note on Run Time 118
4) Memory in Time Series Features 119
A) A Gentle Mathematical Overview 122
The Forward Algorithm 123
The Backward Algorithm 128
Correct Alpha and Beta, For Those Who Care 131
B) Some Mundane Computations 136
Means and Covariances 136
Densities 138
The Multivariate Normal Density Function 139
C) Starting Parameters 141
Outline of the Initialization Algorithm 141
Perturbing Means 142
Perturbing Covariances 143
Perturbing Transition Probabilities 144
A Note on Random Number Generators 145
D) The Complete Optimization Algorithm 146
Computing State Probabilities 147
Updating the Means and Covariances 151
Updating Initial and Transition Probabilities 153
E) Assessing HMM Memory in a Time Series 159
F) Linking Features to a Target 164
Linking HMM States to the Target 173
A Contrived and Inappropriate Example 183
A Sensible and Practical Example 186
5) Stepwise Selection on Steroids 189
A) The Feature Evaluation Model 192
Code For the Foundation Model 193
B) The Cross-Validated Performance Measure 198
C) The Stepwise Algorithm 201
Finding the First Variable 207
Adding a Variable to an Existing Model 210
D) Demonstrating the Algorithm Three Ways 214
6) Nominal-to-Ordinal Conversion 217
A) Implementation Overview 221
B) Testing For a Legitimate Relationship 222
C) An Example From Equity Price Changes 223
D) Code for Nominal-to-Ordinal Conversion 227
The Constructor 228
Printing the Table of Counts 232
Computing the Mapping Function 234
Monte-Carlo Permutation Tests 237
7) Index 353
Details
Year of publication: | 2020 |
---|---|
Genre: | Computer science |
Subject area: | Science & technology |
Medium: | Paperback |
Contents: | ix, 228 pages, 2 b/w illustrations |
ISBN-13: | 9781484259870 |
ISBN-10: | 1484259874 |
Language: | English |
Features / supplement: | Paperback |
Binding: | Softcover / paperback |
Author: | Masters, Timothy |
Edition: | 1st ed. |
Publisher: | Apress (Apress L.P.) |
Dimensions: | 254 x 178 x 14 mm |
By: | Timothy Masters |
Publication date: | 06.06.2020 |
Weight: | 0.46 kg |