Language:
English
€118.95*
Free shipping via Post / DHL
Delivery time: 2-3 weeks
Description
1 Introduction 1
1.1 Linear Filters 1
1.2 Adaptive Filters 2
1.3 Adaptive Filter Structures 3
1.4 Adaptation Approaches 7
1.4.1 Approach Based on Wiener Filter Theory 7
1.4.2 Method of Least-Squares 8
1.5 Real and Complex Forms of Adaptive Filters 9
1.6 Applications 9
1.6.1 Modeling 10
1.6.2 Inverse Modeling 11
1.6.3 Linear Prediction 15
1.6.4 Interference Cancellation 20
2 Discrete-Time Signals and Systems 29
2.1 Sequences and z-Transform 29
2.2 Parseval's Relation 33
2.3 System Function 34
2.4 Stochastic Processes 36
2.4.1 Stochastic Averages 36
2.4.2 z-Transform Representations 38
2.4.3 The Power Spectral Density 39
2.4.4 Response of Linear Systems to Stochastic Processes 41
2.4.5 Ergodicity and Time Averages 44
3 Wiener Filters 49
3.1 Mean-Squared Error Criterion 49
3.2 Wiener Filter - Transversal, Real-Valued Case 51
3.3 Principle of Orthogonality 57
3.4 Normalized Performance Function 59
3.5 Extension to Complex-Valued Case 59
3.6 Unconstrained Wiener Filters 62
3.6.1 Performance Function 62
3.6.2 Optimum Transfer Function 65
3.6.3 Modeling 67
3.6.4 Inverse Modeling 70
3.6.5 Noise Cancellation 74
3.7 Summary and Discussion 80
4 Eigenanalysis and Performance Surface 91
4.1 Eigenvalues and Eigenvectors 91
4.2 Properties of Eigenvalues and Eigenvectors 92
4.3 Performance Surface 104
5 Search Methods 121
5.1 Method of Steepest-Descent 122
5.2 Learning Curve 128
5.3 Effect of Eigenvalue-Spread 132
5.4 Newton's Method 133
5.5 An Alternative Interpretation of Newton's Algorithm 136
6 LMS Algorithm 141
6.1 Derivation of LMS Algorithm 141
6.2 Average Tap-Weight Behavior of the LMS Algorithm 143
6.3 MSE Behavior of the LMS Algorithm 146
6.3.1 Learning Curve 148
6.3.2 Weight-Error Correlation Matrix 150
6.3.3 Excess MSE and Misadjustment 153
6.3.4 Stability 155
6.3.5 The Effect of Initial Values of Tap Weights on the Transient Behavior of the LMS Algorithm 156
6.4 Computer Simulations 157
6.4.1 System Modeling 158
6.4.2 Channel Equalization 160
6.4.3 Adaptive Line Enhancement 164
6.4.4 Beamforming 166
6.5 Simplified LMS Algorithms 168
6.6 Normalized LMS Algorithm 172
6.7 Affine Projection LMS Algorithm 175
6.8 Variable Step-Size LMS Algorithm 178
6.9 LMS Algorithm for Complex-Valued Signals 181
6.10 Beamforming (Revisited) 183
6.11 Linearly Constrained LMS Algorithm 186
6.11.1 Statement of the Problem and Its Optimal Solution 187
6.11.2 Update Equations 188
6.11.3 Extension to the Complex-Valued Case 189
7 Transform Domain Adaptive Filters 209
7.1 Overview of Transform Domain Adaptive Filters 209
7.2 Band-Partitioning Property of Orthogonal Transforms 212
7.3 Orthogonalization Property of Orthogonal Transforms 212
7.4 Transform Domain LMS Algorithm 215
7.5 Ideal LMS-Newton Algorithm and Its Relationship with TDLMS 217
7.6 Selection of the Transform T 217
7.6.1 A Geometrical Interpretation 218
7.6.2 A Useful Performance Index 222
7.6.3 Improvement Factor and Comparisons 223
7.6.4 Filtering View 226
7.7 Transforms 230
7.8 Sliding Transforms 232
7.8.1 Frequency Sampling Filters 232
7.8.2 Recursive Realization of Sliding Transforms 233
7.8.3 Non-recursive Realization of Sliding Transforms 237
7.8.4 Comparison of Recursive and Non-recursive Sliding Transforms 240
7.9 Summary and Discussion 245
8 Block Implementation of Adaptive Filters 255
8.1 Block LMS Algorithm 256
8.2 Mathematical Background 259
8.2.1 Linear Convolution Using the Discrete Fourier Transform 259
8.2.2 Circular Matrices 260
8.2.3 Window Matrices and Matrix Formulation of the Overlap-Save Method 263
8.3 The FBLMS Algorithm 264
8.3.1 Constrained and Unconstrained FBLMS Algorithms 265
8.3.2 Convergence Behavior of the FBLMS Algorithm 267
8.3.3 Step-Normalization 268
8.3.4 Summary of the FBLMS Algorithm 269
8.3.5 FBLMS Misadjustment Equations 271
8.3.6 Selection of the Block Length 271
8.4 The Partitioned FBLMS Algorithm 272
8.4.1 Analysis of the PFBLMS Algorithm 273
8.4.2 PFBLMS Algorithm with M > L 276
8.4.3 PFBLMS Misadjustment Equations 279
8.4.4 Computational Complexity and Memory Requirement 279
8.4.5 Modified Constrained PFBLMS Algorithm 280
8.5 Computer Simulations 281
9 Subband Adaptive Filters 299
9.1 DFT Filter Banks 300
9.1.1 Weighted Overlap-Add Method for Realization of DFT Analysis Filter Banks 301
9.1.2 Weighted Overlap-Add Method for Realization of DFT Synthesis Filter Banks 302
9.2 Complementary Filter Banks 304
9.3 Subband Adaptive Filter Structures 308
9.4 Selection of Analysis and Synthesis Filters 311
9.5 Computational Complexity 313
9.6 Decimation Factor and Aliasing 314
9.7 Low-Delay Analysis and Synthesis Filter Banks 316
9.7.1 Design Method 316
9.7.2 Filter Properties 318
9.8 A Design Procedure for Subband Adaptive Filters 319
9.9 An Example 322
9.10 Comparison with FBLMS Algorithm 323
10 IIR Adaptive Filters 329
10.1 Output Error Method 330
10.2 Equation Error Method 336
10.3 Case Study I: IIR Adaptive Line Enhancement 339
10.3.1 IIR ALE Filter, W(z) 340
10.3.2 Performance Functions 340
10.3.3 Simultaneous Adaptation of s and w 344
10.3.4 Robust Adaptation of w 344
10.3.5 Simulation Results 346
10.4 Case Study II: Equalizer Design for Magnetic Recording Channels 349
10.4.1 Channel Discretization 351
10.4.2 Design Steps 352
10.4.3 FIR Equalizer Design 352
10.4.4 Conversion from FIR to IIR Equalizer 355
10.4.5 Conversion from z-Domain to s-Domain 355
10.4.6 Numerical Results 356
10.5 Concluding Remarks 358
11 Lattice Filters 363
11.1 Forward Linear Prediction 363
11.2 Backward Linear Prediction 365
11.3 Relationship Between Forward and Backward Predictors 366
11.4 Prediction-Error Filters 367
11.5 Properties of Prediction Errors 367
11.6 Derivation of Lattice Structure 370
11.7 Lattice as an Orthogonalization Transform 375
11.8 Lattice Joint Process Estimator 377
11.9 System Functions 377
11.10 Conversions 378
11.10.1 Conversion Between Lattice and Transversal Predictors 379
11.10.2 Levinson-Durbin Algorithm 380
11.10.3 Extension of Levinson-Durbin Algorithm 382
11.11 All-Pole Lattice Structure 383
11.12 Pole-Zero Lattice Structure 385
11.13 Adaptive Lattice Filter 385
11.13.1 Discussion and Simulations 387
11.14 Autoregressive Modeling of Random Processes 391
11.15 Adaptive Algorithms Based on Autoregressive Modeling 392
11.15.1 Algorithms 393
11.15.2 Performance Analysis 398
11.15.3 Simulation Results and Discussion 402
12 Method of Least-Squares 419
12.1 Formulation of Least-Squares Estimation for a Linear Combiner 420
12.2 Principle of Orthogonality 421
12.3 Projection Operator 423
12.4 Standard Recursive Least-Squares Algorithm 424
12.4.1 RLS Recursions 424
12.4.2 Initialization of the RLS Algorithm 427
12.4.3 Summary of the Standard RLS Algorithm 427
12.5 Convergence Behavior of the RLS Algorithm 430
12.5.1 Average Tap-Weight Behavior of the RLS Algorithm 430
12.5.2 Weight-Error Correlation Matrix 431
12.5.3 Learning Curve 432
12.5.4 Excess MSE and Misadjustment 435
12.5.5 Initial Transient Behavior of the RLS Algorithm 435
13 Fast RLS Algorithms 443
13.1 Least-Squares Forward Prediction 443
13.2 Least-Squares Backward Prediction 445
13.3 Least-Squares Lattice 447
13.4 RLSL Algorithm 450
13.4.1 Notations and Preliminaries 450
13.4.2 Update Recursion for the Least-Squares Error Sums 453
13.4.3 Conversion Factor 454
13.4.4 Update Equation for Conversion Factor 455
13.4.5 Update Equation for Crosscorrelations 457
13.4.6 RLSL Algorithm Using A Posteriori Errors 459
13.4.7 RLSL Algorithm with Error Feedback 461
13.5 FTRLS Algorithm 463
13.5.1 Derivation of the FTRLS Algorithm 464
13.5.2 Summary of the FTRLS Algorithm 467
13.5.3 Stabilized FTRLS Algorithm 467
14 Tracking 473
14.1 Formulation of the Tracking Problem 473
14.2 Generalized Formulation of LMS Algorithm 474
14.3 MSE Analysis of the Generalized LMS Algorithm 475
14.4 Optimum Step-Size Parameters 479
14.5 Comparisons of Conventional Algorithms 481
14.6 Comparisons Based on Optimum Step-Size Parameters 484
14.7 VSLMS: An Algorithm with Optimum Tracking Behavior 486
14.7.1 Derivation of VSLMS Algorithm 487
14.7.2 Variations and Extensions 488
14.7.3 Normalization of the Parameter ρ 489
14.7.4 Computer Simulations 490
14.8 RLS Algorithm with Variable Forgetting Factor 494
14.9 Summary 497
15 Echo Cancellation 501
15.1 The Problem Statement 501
15.2 Structures and Adaptive Algorithms 504
15.2.1 Normalized LMS (NLMS) Algorithm 505
15.2.2 Affine Projection LMS (APLMS) Algorithm 508
15.2.3 Frequency Domain Block LMS Algorithm 509
15.2.4 Subband LMS Algorithm 511
15.2.5 LMS-Newton Algorithm 513
15.2.6 Numerical...
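As a flavor of the book's central algorithm (the LMS update of Chapter 6, applied to the system-modeling setup of Section 6.4.1), here is a minimal, hypothetical sketch; the filter length, step size, and test channel are illustrative choices, not values from the book:

```python
import numpy as np

def lms_filter(x, d, num_taps=8, mu=0.01):
    """Adaptive FIR filter using the LMS update w <- w + mu * e[n] * u[n]."""
    w = np.zeros(num_taps)          # tap-weight vector
    y = np.zeros(len(x))            # filter output
    e = np.zeros(len(x))            # error signal
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]  # regressor: x[n], x[n-1], ...
        y[n] = w @ u                          # filter output
        e[n] = d[n] - y[n]                    # estimation error
        w += mu * e[n] * u                    # LMS weight update
    return y, e, w

# System modeling: identify an unknown FIR channel h (hypothetical example)
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2])                       # unknown system
x = rng.standard_normal(5000)                        # white input
d = np.convolve(x, h, mode="full")[:len(x)]          # desired signal
_, e, w = lms_filter(x, d, num_taps=3, mu=0.05)
```

In this noise-free setting the tap weights `w` converge toward `h` and the error decays, mirroring the learning-curve behavior analyzed in Sections 6.2-6.3.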
Details

| | |
|---|---|
| Year of publication | 2013 |
| Subject area | Communications engineering |
| Genre | Technology |
| Category | Science & Technology |
| Medium | Book |
| Contents | 800 pp. |
| ISBN-13 | 9781119979548 |
| ISBN-10 | 1119979544 |
| Language | English |
| Binding | Hardcover |
| Author | Farhang-Boroujeny, Behrouz |
| Edition | 2nd edition, 2013 |
| Publisher | Wiley-VCH GmbH |
| Responsible person for the EU | Wiley-VCH GmbH, Boschstr. 12, D-69469 Weinheim, product-safety@wiley.com |
| Dimensions | 243 x 180 x 43 mm |
| By | Behrouz Farhang-Boroujeny |
| Publication date | 17.05.2013 |
| Weight | 1.308 kg |