Computational Analysis of Sound Scenes and Events
Paperback by Tuomas Virtanen (et al.)
Language: English

€160.95*

-16% off RRP €192.59
incl. VAT

Free shipping via Post / DHL

Delivery time: 2-4 working days

Description
This book presents computational methods for extracting useful information from audio signals, collecting the state of the art in the field of sound event and scene analysis. The authors cover the entire procedure for developing such methods, ranging from data acquisition and labeling, through the design of taxonomies used in the systems, to signal processing methods for feature extraction and machine learning methods for sound recognition. The book also covers advanced techniques for dealing with environmental variation and multiple overlapping sound sources, and for taking advantage of multiple microphones or other modalities. The book gives examples of usage scenarios in large media databases, acoustic monitoring, bioacoustics, and context-aware devices. Graphical illustrations of sound signals and their spectrographic representations are presented, as well as block diagrams and pseudocode of algorithms.
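
The pipeline the description outlines (acoustic feature extraction followed by a machine learning classifier) can be sketched in a few lines. The libraries (librosa, scikit-learn) and all file names and labels below are assumptions chosen for illustration, not material from the book:

    # Minimal sketch of a sound scene classification pipeline: log-mel features plus a
    # generic classifier. Library choices, file names, and labels are assumed, not from the book.
    import numpy as np
    import librosa
    from sklearn.ensemble import RandomForestClassifier

    def extract_features(path, sr=22050, n_mels=40):
        """Summarize a clip's log-mel spectrogram as one fixed-length feature vector."""
        y, sr = librosa.load(path, sr=sr)
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
        log_mel = librosa.power_to_db(mel)
        # Mean and standard deviation over time pool the spectrogram into a single vector.
        return np.concatenate([log_mel.mean(axis=1), log_mel.std(axis=1)])

    # Assumed toy dataset of (file path, scene label) pairs.
    train_items = [("park_01.wav", "park"), ("street_01.wav", "street")]
    X = np.stack([extract_features(path) for path, _ in train_items])
    labels = [label for _, label in train_items]

    clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
    print(clf.predict([extract_features("unknown_clip.wav")]))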
About the Authors

Tuomas Virtanen is a Professor at the Laboratory of Signal Processing, Tampere University of Technology (TUT), Finland, where he leads the Audio Research Group. He received the [...] and Doctor of Science degrees in information technology from TUT in 2001 and 2006, respectively. He has also worked as a research associate at the Cambridge University Engineering Department, UK. He is known for his pioneering work on single-channel sound source separation using non-negative matrix factorization (NMF) based techniques and their application to noise-robust speech recognition, music content analysis, and audio event detection. Beyond these topics, his research interests include content analysis of audio signals in general and machine learning. He has authored more than 100 scientific publications on these topics, which have been cited more than 5000 times. He received the IEEE Signal Processing Society 2012 Best Paper Award for his article "Monaural Sound Source Separation by Nonnegative Matrix Factorization with Temporal Continuity and Sparseness Criteria", as well as three other best paper awards. He is an IEEE Senior Member, a member of the Audio and Acoustic Signal Processing Technical Committee of the IEEE Signal Processing Society, an Associate Editor of the IEEE/ACM Transactions on Audio, Speech, and Language Processing, and a recipient of a 2014 ERC Starting Grant.
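
As an illustration of the NMF-based separation idea mentioned above, a magnitude spectrogram can be factorized into spectral templates and their time-varying activations. This is only a rough sketch with assumed tooling (librosa) and an assumed input file, not the author's implementation:

    # Decompose a mixture spectrogram with non-negative matrix factorization and
    # resynthesize the contribution of one component. The file name and component
    # count are assumptions for illustration.
    import numpy as np
    import librosa

    y, sr = librosa.load("mixture.wav", sr=None)          # assumed input recording
    S = np.abs(librosa.stft(y))                           # magnitude spectrogram
    # Factorize S ~ components @ activations with non-negative factors (NMF).
    components, activations = librosa.decompose.decompose(S, n_components=8)
    # Reconstruct the part of the mixture explained by the first component only.
    S_component = np.outer(components[:, 0], activations[0])
    y_component = librosa.griffinlim(S_component)         # rough phase reconstruction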

Mark Plumbley is Professor of Signal Processing at the Centre for Vision, Speech and Signal Processing (CVSSP) at the University of Surrey in Guildford, UK. After receiving his Ph.D. degree in neural networks in 1991, he became a Lecturer at King's College London, before moving to Queen Mary University of London in 2002. He subsequently became Professor and Director of the Centre for Digital Music, before joining the University of Surrey in 2015. He is known for his work on the analysis and processing of audio and music, using a wide range of signal processing techniques, including independent component analysis, sparse representations, and deep learning. He is also keen to promote the importance of research software and data in audio and music research, including training researchers to follow the principles of reproducible research, and he led the 2013 D-CASE data challenge on Detection and Classification of Acoustic Scenes and Events. He currently leads two EU-funded research training networks in sparse representations, compressed sensing, and machine sensing, and two major UK-funded projects on audio source separation and on making sense of everyday sounds. He is a Fellow of the IET and the IEEE.

Dan Ellis joined Google Inc. in 2015 as a Research Scientist after spending 15 years as a tenured professor in the Electrical Engineering Department of Columbia University, where he founded and led the Laboratory for Recognition and Organization of Speech and Audio (LabROSA), which conducted research into all aspects of extracting information from sound. He is also an External Fellow of the International Computer Science Institute in Berkeley, CA, where he researched approaches to robust speech recognition. He is known for his contributions to computational auditory scene analysis and for developing and transferring techniques between many different kinds of audio processing, including speech, music, and environmental sounds. He has a long track record of supporting the community through public releases of code and data, including the Million Song Dataset of features and metadata for one million pop music tracks, which has become the standard large-scale research set in the Music Information Retrieval field.

Summary
Gives an overview of methods for computational analysis of sound scenes and events, allowing those new to the field to become fully informed
Covers all aspects of the machine learning approach to computational analysis of sound scenes and events, from the data capture and labeling process to the development of algorithms
Includes descriptions of algorithms accompanied by a website from which software implementations can be downloaded, facilitating practical interaction with the techniques
Table of Contents
1 Introduction to sound scene and event analysis
2 The Machine Learning Approach for Analysis of Sound Scenes and Events
3 Acoustics and psychoacoustics of sound scenes and events
4 Acoustic features for environmental sound analysis
5 Statistical Methods for Scene and Event Classification
6 Datasets and evaluation
7 Everyday Sound Categorization
8 Approaches to complex sound scene analysis
9 Multiview approaches to event detection and scene analysis
10 Sound sharing and retrieval
11 Computational bioacoustic scene analysis
12 Audio Event Recognition in the Smart Home
13 Sound Analysis in Smart Cities
14 Future Perspective
Index
Details
Year of publication: 2018
Subject area: Communications engineering
Genre: Mathematics, Medicine, Natural Sciences, Technology
Category: Science & Technology
Format: Paperback
Contents: x, 422 pages, 81 illustrations (27 b/w, 54 in color)
ISBN-13: 9783319875590
ISBN-10: 3319875590
Language: English
Features / supplements: Paperback
Binding: Paperback / softcover
Edited by: Virtanen, Tuomas; Ellis, Dan; Plumbley, Mark D.
Editors: Tuomas Virtanen/Mark D Plumbley/Dan Ellis
Edition: Softcover reprint of the original 1st ed. 2018
Publisher: Springer International Publishing AG
Responsible person for the EU: Springer Verlag GmbH, Tiergartenstr. 17, D-69121 Heidelberg, juergen.hartmann@springer.com
Dimensions: 235 x 155 x 24 mm
By: Tuomas Virtanen (et al.)
Publication date: 18 August 2018
Weight: 0.651 kg
Item ID: 115378868