125,50 €*
Free shipping by post / DHL
Currently not available
This clearly organized and accessible textbook is aimed at advanced students. It presents statistical inference in detail and with a practical orientation, giving full derivations of results as well as MATLAB programs, complete with explanations. Particular emphasis is placed on selected important topics, on an intuitive approach, and on discussion. The perspective on statistical inference is thoroughly modern. In addition to the classical topics of mathematical statistics, the contents include an intuitive presentation of the single and double bootstrap for computing confidence intervals; shrinkage estimation; estimation of the maximally existing moment; and a range of point estimation methods, including maximum likelihood, the use of characteristic functions, and indirect inference. Practical examples are given for every method. Estimation problems and their solutions for the discrete mixture of normal distributions are treated in detail. Throughout, the emphasis is on non-Gaussian distributions, including an extensive treatment of the stable Paretian distribution and fast computation of the noncentral Student's t distribution. A full chapter is devoted to optimization, covering the development of Hessian-based methods as well as heuristic/genetic algorithms that do not require continuity. The corresponding MATLAB code is provided. The consistent focus on computation makes the subject tangible and accessible to students.
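To give a flavor of the computational approach highlighted above, here is a minimal MATLAB sketch (not code from the book; the data, variable names, and settings are purely illustrative) of a nonparametric percentile bootstrap confidence interval for a mean, the kind of calculation treated in Chapter 1:

```matlab
% Illustrative sketch: nonparametric percentile bootstrap CI for a mean.
rng(1);                               % fix the seed for reproducibility
x = 2 + randn(50, 1);                 % example data: 50 draws from N(2, 1)
B = 2000;                             % number of bootstrap replications
n = length(x);
boot_means = zeros(B, 1);
for b = 1:B
    idx = randi(n, n, 1);             % resample indices with replacement
    boot_means(b) = mean(x(idx));     % statistic on the bootstrap sample
end
alpha = 0.05;
s  = sort(boot_means);                % percentile interval from sorted replications
lo = s(ceil(B * alpha / 2));
hi = s(ceil(B * (1 - alpha / 2)));
fprintf('95%% bootstrap CI for the mean: [%.3f, %.3f]\n', lo, hi);
```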
Marc S. Paolella, PhD, is a Professor at the Department of Banking and Finance, University of Zurich. He is also the Editor of Econometrics and an Associate Editor of the Journal of the Royal Statistical Society, Series A.
Preface xi
PART I ESSENTIAL CONCEPTS IN STATISTICS
1 Introducing Point and Interval Estimation 3
1.1 Point Estimation / 4
1.1.1 Bernoulli Model / 4
1.1.2 Geometric Model / 6
1.1.3 Some Remarks on Bias and Consistency / 11
1.2 Interval Estimation via Simulation / 12
1.3 Interval Estimation via the Bootstrap / 18
1.3.1 Computation and Comparison with Parametric Bootstrap / 18
1.3.2 Application to Bernoulli Model and Modification / 20
1.3.3 Double Bootstrap / 24
1.3.4 Double Bootstrap with Analytic Inner Loop / 26
1.4 Bootstrap Confidence Intervals in the Geometric Model / 31
1.5 Problems / 35
2 Goodness of Fit and Hypothesis Testing 37
2.1 Empirical Cumulative Distribution Function / 38
2.1.1 The Glivenko-Cantelli Theorem / 38
2.1.2 Proofs of the Glivenko-Cantelli Theorem / 41
2.1.3 Example with Continuous Data and Approximate Confidence Intervals / 45
2.1.4 Example with Discrete Data and Approximate Confidence Intervals / 49
2.2 Comparing Parametric and Nonparametric Methods / 52
2.3 Kolmogorov-Smirnov Distance and Hypothesis Testing / 57
2.3.1 The Kolmogorov-Smirnov and Anderson-Darling Statistics / 57
2.3.2 Significance and Hypothesis Testing / 59
2.3.3 Small-Sample Correction / 63
2.4 Testing Normality with KD and AD / 65
2.5 Testing Normality with W² and U² / 68
2.6 Testing the Stable Paretian Distributional Assumption: First Attempt / 69
2.7 Two-Sample Kolmogorov Test / 73
2.8 More on (Moron?) Hypothesis Testing / 74
2.8.1 Explanation / 75
2.8.2 Misuse of Hypothesis Testing / 77
2.8.3 Use and Misuse of p-Values / 79
2.9 Problems / 82
3 Likelihood 85
3.1 Introduction / 85
3.1.1 Scalar Parameter Case / 87
3.1.2 Vector Parameter Case / 92
3.1.3 Robustness and the MCD Estimator / 100
3.1.4 Asymptotic Properties of the Maximum Likelihood Estimator / 102
3.2 Cramér-Rao Lower Bound / 107
3.2.1 Univariate Case / 108
3.2.2 Multivariate Case / 111
3.3 Model Selection / 114
3.3.1 Model Misspecification / 114
3.3.2 The Likelihood Ratio Statistic / 117
3.3.3 Use of Information Criteria / 119
3.4 Problems / 120
4 Numerical Optimization 123
4.1 Root Finding / 123
4.1.1 One Parameter / 124
4.1.2 Several Parameters / 131
4.2 Approximating the Distribution of the Maximum Likelihood Estimator / 135
4.3 General Numerical Likelihood Maximization / 136
4.3.1 Newton-Raphson and Quasi-Newton Methods / 137
4.3.2 Imposing Parameter Restrictions / 140
4.4 Evolutionary Algorithms / 145
4.4.1 Differential Evolution / 146
4.4.2 Covariance Matrix Adaptation Evolutionary Strategy / 149
4.5 Problems / 155
5 Methods of Point Estimation 157
5.1 Univariate Mixed Normal Distribution / 157
5.1.1 Introduction / 157
5.1.2 Simulation of Univariate Mixtures / 160
5.1.3 Direct Likelihood Maximization / 161
5.1.4 Use of the EM Algorithm / 169
5.1.5 Shrinkage-Type Estimation / 174
5.1.6 Quasi-Bayesian Estimation / 176
5.1.7 Confidence Intervals / 178
5.2 Alternative Point Estimation Methodologies / 184
5.2.1 Method of Moments Estimator / 185
5.2.2 Use of Goodness-of-Fit Measures / 190
5.2.3 Quantile Least Squares / 191
5.2.4 Pearson Minimum Chi-Square / 193
5.2.5 Empirical Moment Generating Function Estimator / 195
5.2.6 Empirical Characteristic Function Estimator / 198
5.3 Comparison of Methods / 199
5.4 A Primer on Shrinkage Estimation / 200
5.5 Problems / 202
PART II FURTHER FUNDAMENTAL CONCEPTS IN STATISTICS
6 Q-Q Plots and Distribution Testing 209
6.1 P-P Plots and Q-Q Plots / 209
6.2 Null Bands / 211
6.2.1 Definition and Motivation / 211
6.2.2 Pointwise Null Bands via Simulation / 212
6.2.3 Asymptotic Approximation of Pointwise Null Bands / 213
6.2.4 Mapping Pointwise and Simultaneous Significance Levels / 215
6.3 Q-Q Test / 217
6.4 Further P-P and Q-Q Type Plots / 219
6.4.1 (Horizontal) Stabilized P-P Plots / 219
6.4.2 Modified S-P Plots / 220
6.4.3 MSP Test for Normality / 224
6.4.4 Modified Percentile (Fowlkes-MP) Plots / 228
6.5 Further Tests for Composite Normality / 231
6.5.1 Motivation / 232
6.5.2 Jarque-Bera Test / 234
6.5.3 Three Powerful (and More Recent) Normality Tests / 237
6.5.4 Testing Goodness of Fit via Binning: Pearson's X_P^2 Test / 240
6.6 Combining Tests and Power Envelopes / 247
6.6.1 Combining Tests / 248
6.6.2 Power Comparisons for Testing Composite Normality / 252
6.6.3 Most Powerful Tests and Power Envelopes / 252
6.7 Details of a Failed Attempt / 255
6.8 Problems / 260
7 Unbiased Point Estimation and Bias Reduction 269
7.1 Sufficiency / 269
7.1.1 Introduction / 269
7.1.2 Factorization / 272
7.1.3 Minimal Sufficiency / 276
7.1.4 The Rao-Blackwell Theorem / 283
7.2 Completeness and the Uniformly Minimum Variance Unbiased Estimator / 286
7.3 An Example with i.i.d. Geometric Data / 289
7.4 Methods of Bias Reduction / 293
7.4.1 The Bias-Function Approach / 293
7.4.2 Median-Unbiased Estimation / 296
7.4.3 Mode-Adjusted Estimator / 297
7.4.4 The Jackknife / 302
7.5 Problems / 305
8 Analytic Interval Estimation 313
8.1 Definitions / 313
8.2 Pivotal Method / 315
8.2.1 Exact Pivots / 315
8.2.2 Asymptotic Pivots / 318
8.3 Intervals Associated with Normal Samples / 319
8.3.1 Single Sample / 319
8.3.2 Paired Sample / 320
8.3.3 Two Independent Samples / 322
8.3.4 Welch's Method for μ₁ − μ₂ when σ₁² ≠ σ₂² / 323
8.3.5 Satterthwaite's Approximation / 324
8.4 Cumulative Distribution Function Inversion / 326
8.4.1 Continuous Case / 326
8.4.2 Discrete Case / 330
8.5 Application of the Nonparametric Bootstrap / 334
8.6 Problems / 337
PART III ADDITIONAL TOPICS
9 Inference in a Heavy-Tailed Context 341
9.1 Estimating the Maximally Existing Moment / 342
9.2 A Primer on Tail Estimation / 346
9.2.1 Introduction / 346
9.2.2 The Hill Estimator / 346
9.2.3 Use with Stable Paretian Data / 349
9.3 Noncentral Student's t Estimation / 351
9.3.1 Introduction / 351
9.3.2 Direct Density Approximation / 352
9.3.3 Quantile-Based Table Lookup Estimation / 353
9.3.4 Comparison of NCT Estimators / 354
9.4 Asymmetric Stable Paretian Estimation / 358
9.4.1 Introduction / 358
9.4.2 The Hint Estimator / 359
9.4.3 Maximum Likelihood Estimation / 360
9.4.4 The McCulloch Estimator / 361
9.4.5 The Empirical Characteristic Function Estimator / 364
9.4.6 Testing for Symmetry in the Stable Model / 366
9.5 Testing the Stable Paretian Distribution / 368
9.5.1 Test Based on the Empirical Characteristic Function / 368
9.5.2 Summability Test and Modification / 371
9.5.3 ALHADI: The α-Hat Discrepancy Test / 375
9.5.4 Joint Test Procedure / 383
9.5.5 Likelihood Ratio Tests / 384
9.5.6 Size and Power of the Symmetric Stable Tests / 385
9.5.7 Extension to Testing the Asymmetric Stable Paretian Case / 395
10 The Method of Indirect Inference 401
10.1 Introduction / 401
10.2 Application to the Laplace Distribution / 403
10.3 Application to Randomized Response / 403
10.3.1 Introduction / 403
10.3.2 Estimation via Indirect Inference / 406
10.4 Application to the Stable Paretian Distribution / 409
10.5 Problems / 416
A Review of Fundamental Concepts in Probability Theory 419
A.1 Combinatorics and Special Functions / 420
A.2 Basic Probability and Conditioning / 423
A.3 Univariate Random Variables / 424
A.4 Multivariate Random Variables / 427
A.5 Continuous Univariate Random Variables / 430
A.6 Conditional Random Variables / 432
A.7 Generating Functions and Inversion Formulas / 434
A.8 Value at Risk and Expected Shortfall / 437
A.9 Jacobian Transformations / 451
A.10 Sums and Other Functions / 453
A.11 Saddlepoint Approximations / 456
A.12 Order Statistics / 460
A.13 The Multivariate Normal Distribution / 462
A.14 Noncentral Distributions / 465
A.15 Inequalities and Convergence / 467
A.15.1 Inequalities for Random Variables / 467
A.15.2 Convergence of Sequences of Sets / 469
A.15.3 Convergence of Sequences of Random Variables / 473
A.16 The Stable Paretian Distribution / 483
A.17 Problems / 492
A.18 Solutions / 509
References 537
Index 561
| Publication year: | 2018 |
|---|---|
| Subject area: | Probability theory |
| Genre: | Mathematics |
| Category: | Science & Technology |
| Medium: | Book |
| Contents: | 584 pages |
| ISBN-13: | 9781119417866 |
| ISBN-10: | 1119417864 |
| Language: | English |
| Binding: | Hardcover |
| Author: | Paolella, Marc S. |
| Publisher: | Wiley |
| Dimensions: | 251 x 174 x 35 mm |
| By: | Marc S. Paolella |
| Publication date: | 04.09.2018 |
| Weight: | 0.992 kg |