Professor Jon Barker
PhD
School of Computer Science
Personal Chair
School Ethics Lead
Member of the Speech and Hearing (SpandH) research group


Full contact details
School of Computer Science
Regent Court (DCS)
211 Portobello
Sheffield
S1 4DP
Profile
Professor Jon Barker is a member of the Speech and Hearing Research Group. He holds a first degree in Electrical and Information Sciences from the University of Cambridge, UK. After receiving his PhD from the University of Sheffield in 1999, he held research positions at GIPSA-lab in Grenoble and at the IDIAP Research Institute in Switzerland before returning to Sheffield, where he has held a permanent post since 2002.
His research interests lie in noise-robust speech processing. Key application areas include distant-microphone speech recognition, speech intelligibility prediction and improved speech processing for hearing-aid users.
Research interests
Professor Barker’s research interests centre on machine listening and the computational modelling of human hearing. A recent focus has been modelling speech intelligibility, i.e. predicting whether or not a speech signal will be intelligible to a given listener.
This understanding will help produce better signal processing for applications such as hearing aids and cochlear implants. Another strand of his work takes insights gained from human auditory perception and uses them to engineer robust automatic speech processing systems.
Publications
Journal articles
- Objective and subjective evaluation of speech enhancement methods in the UDASE task of the 7th CHiME challenge. Computer Speech & Language, 89, 101685-101685.
- The Cadenza Woodwind Dataset: Synthesised Quartets for Music Information Retrieval and Machine Learning. Data in Brief, 111199-111199.
- Lyric intelligibility of musical segments for older individuals with hearing loss. The Journal of the Acoustical Society of America, 156(4_Supplement), A121-A121.
- Development of the 2nd Cadenza challenge for improving music listening for people with a hearing loss. The Journal of the Acoustical Society of America, 155(3_Supplement), A277-A277.
- Muddy, muddled, or muffled? Understanding the perception of audio quality in music by hearing aid users. Frontiers in Psychology, 15, 1310176.
- A systematic review of measurements of real-world interior car noise for the “Cadenza” machine-learning project. The Journal of the Acoustical Society of America, 153(3_supplement), A332-A332.
- Results of the second “clarity” enhancement challenge for hearing devices. The Journal of the Acoustical Society of America, 153(3_supplement), A48-A48.
- Dataset of British English speech recordings for psychoacoustics and speech processing research: The clarity speech corpus. Data in Brief, 41.
- Acoustic Modelling From Raw Source and Filter Components for Dysarthric Speech Recognition. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30, 2968-2980.
- Launching the first “Clarity” Machine Learning Challenge to revolutionise hearing device processing. The Journal of the Acoustical Society of America, 148(4), 2711-2711.
- Lexical frequency effects in English and Spanish word misperceptions. The Journal of the Acoustical Society of America, 145(2), EL136-EL141.
- A corpus of audio-visual Lombard speech with frontal and profile views. The Journal of the Acoustical Society of America, 143(6), 523-529.
- The impact of the Lombard effect on audio and visual speech recognition systems. Speech Communication, 100, 58-68.
- The impact of automatic exaggeration of the visual articulatory features of a talker on the intelligibility of spectrally distorted speech. Speech Communication, 95, 127-136.
- An analysis of environment, microphone and data simulation mismatches in robust speech recognition. Computer Speech & Language, 46, 535-557.
- The third ‘CHiME’ speech separation and recognition challenge: Analysis and outcomes. Computer Speech & Language, 46, 605-626.
- Evaluation of scene analysis using real and simulated acoustic mixtures: Lessons learnt from the CHiME speech recognition challenges. The Journal of the Acoustical Society of America, 141(5), 3693-3693.
- Guest Editorial for the special issue on Multi-Microphone Speech Recognition in Everyday Environments. Computer Speech & Language.
- Spectral Reconstruction and Noise Model Estimation Based on a Masking Model for Noise Robust Speech Recognition. Circuits, Systems, and Signal Processing.
- A corpus of noise-induced word misperceptions for English. The Journal of the Acoustical Society of America, 140(5), EL458-EL463.
- An Overview, 137-172.
- The second 'CHiME' speech separation and recognition challenge: An overview of challenge systems and outcomes. 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, ASRU 2013 - Proceedings, 162-167.
- MMSE-based missing-feature reconstruction with temporal modeling for robust speech recognition. IEEE Transactions on Audio, Speech and Language Processing, 21(3), 624-635.
- Speech spectral envelope enhancement by HMM-based analysis/resynthesis. IEEE Signal Processing Letters, 20(6), 563-566.
- A hearing-inspired approach for distant-microphone speech recognition in the presence of multiple sources. Computer Speech and Language.
- Special issue on speech separation and recognition in multisource environments. Computer Speech and Language.
- The PASCAL CHiME speech separation and recognition challenge. Computer Speech and Language.
- Speech fragment decoding techniques for simultaneous speaker identification and speech recognition. Computer Speech & Language, 24(1), 94-111.
- Energetic and Informational Masking Effects in an Audiovisual Speech Recognition System. IEEE Transactions on Audio, Speech, and Language Processing, 17(3), 446-458.
- Stream weight estimation for multistream audio-visual speech recognition in a multispeaker environment. Speech Communication, 50(4), 337-353.
- The foreign language cocktail party problem: Energetic and informational masking effects in non-native speech perception. The Journal of the Acoustical Society of America, 123(1), 414-427.
- Exploiting correlogram structure for robust speech recognition with multiple speech sources. Speech Communication, 49(12), 874-891.
- Modelling speaker intelligibility in noise. Speech Communication, 49(5), 402-417.
- An automatic speech recognition system based on the scene analysis account of auditory perception. Speech Communication, 49(5), 384-401.
- An audio-visual corpus for speech perception and automatic speech recognition. The Journal of the Acoustical Society of America, 120(5 Pt 1), 2421-2424.
- Mask estimation for missing data speech recognition based on statistics of binaural interaction. IEEE Transactions on Audio, Speech, and Language Processing, 14(1), 58-67.
- Decoding speech in the presence of other sources. Speech Communication, 45(1), 5-25.
- Techniques for handling convolutional distortion with 'missing data' automatic speech recognition. Speech Communication, 43(1-2), 123-142.
- Machine recognition of sounds in mixtures. The Journal of the Acoustical Society of America, 113(4), 2230-2230.
- Is the sine-wave speech cocktail party worth attending? Speech Communication, 27, 159-174.
- Modeling the recognition of sine-wave sentences. Journal of the Acoustical Society of America, 100, 2682-2682.
Chapters
- The CHiME Challenges: Robust Speech Recognition in Everyday Environments, New Era for Robust Speech Recognition (pp. 327-344). Springer International Publishing
- Multichannel Spatial Clustering Using Model-Based Source Separation, New Era for Robust Speech Recognition (pp. 51-77). Springer International Publishing
- Missing Data Techniques: Recognition with Incomplete Spectrograms In Virtanen T, Singh R & Raj B (Ed.), Techniques for Noise Robustness in Automatic Speech Recognition (pp. 371-398). Wiley
Conference proceedings papers
- The 2nd Clarity Prediction Challenge: A Machine Learning Challenge for Hearing Aid Intelligibility Prediction. ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 14 April 2024 - 19 April 2024.
- The ICASSP SP Cadenza Challenge: Music Demixing/Remixing for Hearing Aids. 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW) (pp 93-94), 14 April 2024 - 19 April 2024.
- The 2nd Clarity Enhancement Challenge for Hearing Aid Speech Intelligibility Enhancement: Overview and Outcomes. ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4 June 2023 - 10 June 2023.
- Overview of the 2023 ICASSP SP Clarity Challenge: Speech Enhancement for Hearing Aids. ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4 June 2023 - 10 June 2023.
- Improved Simulation of Realistically-Spatialised Simultaneous Speech Using Multi-Camera Analysis in The Chime-5 Dataset. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 23 May 2022 - 27 May 2022.
- Auditory-Based Data Augmentation for end-to-end Automatic Speech Recognition. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 23 May 2022 - 27 May 2022.
- Multi-Modal Acoustic-Articulatory Feature Fusion for Dysarthric Speech Recognition. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, Vol. 2022-May (pp 7372-7376)
- Optimising hearing aid fittings for speech in noise with a differentiable hearing loss model. Interspeech 2021 (pp 691-695). Brno, Czechia, 30 August 2021 - 3 September 2021.
- Time-Domain Speech Extraction with Spatial Information and Multi Speaker Conditioning Mechanism. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 6 June 2021 - 11 June 2021.
- DHASP: Differentiable Hearing Aid Speech Processing (pp 296-300)
- The use of Voice Source Features for Sung Speech Recognition. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 6 June 2021 - 11 June 2021.
- Modelling the Effects of Hearing Aid Algorithms on Speech and Speaker Intelligibility as Perceived by Listeners with Simulated Sensorineural Hearing Impairment. SoutheastCon 2021, 10 March 2021 - 13 March 2021.
- On End-to-end Multi-channel Time Domain Speech Separation in Reverberant Environments. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4 May 2020 - 8 May 2020.
- Source Domain Data Selection for Improved Transfer Learning Targeting Dysarthric Speech Recognition. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4 May 2020 - 8 May 2020.
- Exploring Appropriate Acoustic and Language Modelling Choices for Continuous Dysarthric Speech Recognition. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4 May 2020 - 8 May 2020.
- Phonetic Analysis of Dysarthric Speech Tempo and Applications to Robust Personalised Dysarthric Speech Recognition. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 12 May 2019 - 17 May 2019.
- Exploring the use of group delay for generalised VTS based noise compensation. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing Proceedings, 15 April 2018 - 20 April 2018.
- Robust Source-Filter Separation of Speech Signal in the Phase Domain. Proceedings of the Annual Conference of the International Speech Communication Association
- Statistical normalisation of phase-based feature representation for robust speech recognition. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5 March 2017 - 9 March 2017.
- A Data Driven Approach to Audiovisual Speech Mapping (pp 331-342)
- Exploiting synchrony spectra and deep neural networks for noise-robust automatic speech recognition. 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), 13 December 2015 - 17 December 2015.
- Chime-home: A dataset for sound source recognition in a domestic environment. 2015 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 18 October 2015 - 21 October 2015.
- The third ‘CHiME’ speech separation and recognition challenge: Dataset, task and baselines. 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), 13 December 2015 - 17 December 2015.
- Long-Term Statistical Feature Extraction from Speech Signal and Its Application in Emotion Recognition (pp 173-184)
- Combining missing-data reconstruction and uncertainty decoding for robust speech recognition. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings (pp 4693-4696)
- A pitch based noise estimation technique for robust speech recognition with missing data. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings (pp 4808-4811)
- Robust automatic transcription of english speech corpora. 2010 8th International Conference on Communications, COMM 2010 (pp 79-82)
- Integrating Hidden Markov Model and PRAAT: A toolbox for robust automatic speech transcription. Proceedings of SPIE - The International Society for Optical Engineering, Vol. 7745
- On monoaural speech enhancement for automatic recognition of real noisy speech using mixture invariant training. Interspeech 2022
- Dysarthric Speech Recognition From Raw Waveform with Parametric CNNs. Interspeech 2022
- Modelling Turn-taking in Multispeaker Parties for Realistic Data Simulation. Interspeech 2022
- Unsupervised Uncertainty Measures of Automatic Speech Recognition for Non-intrusive Speech Intelligibility Prediction. Interspeech 2022
- Exploiting Hidden Representations from a DNN-based Speech Recogniser for Speech Intelligibility Prediction in Hearing-impaired Listeners. Interspeech 2022
- The 1st Clarity Prediction Challenge: A machine learning challenge for hearing aid intelligibility prediction. Interspeech 2022
- Teacher-Student MixIT for Unsupervised and Semi-Supervised Speech Separation. Interspeech 2021
- Parental Spoken Scaffolding and Narrative Skills in Crowd-Sourced Storytelling Samples of Young Children. Interspeech 2021
- Clarity-2021 Challenges: Machine Learning Challenges for Advancing Hearing Aid Processing. Interspeech 2021
- Simulating Realistically-Spatialised Simultaneous Speech Using Video-Driven Speaker Detection and the CHiME-5 Dataset. Interspeech 2020
- Automatic Lyric Transcription from Karaoke Vocal Tracks: Resources and a Baseline System. Interspeech 2019
- The Fifth 'CHiME' Speech Separation and Recognition Challenge: Dataset, Task and Baselines. Interspeech 2018
- DNN Driven Speaker Independent Audio-Visual Mask Estimation for Speech Separation. Interspeech 2018
- On the Usefulness of the Speech Phase Spectrum for Pitch Extraction. Interspeech 2018
- Channel Compensation in the Generalised Vector Taylor Series Approach to Robust ASR. Interspeech 2017
- Binary Mask Estimation Strategies for Constrained Imputation-Based Speech Enhancement. Interspeech 2017
- Use of Generalised Nonlinearity in Vector Taylor Series Noise Compensation for Robust Speech Recognition. Interspeech 2016
- Multichannel Spatial Clustering for Robust Far-Field Automatic Speech Recognition in Mismatched Conditions. Interspeech 2016
- Language Effects in Noise-Induced Word Misperceptions. Interspeech 2016
- Misperceptions Arising from Speech-in-Babble Interactions. Interspeech 2016
- Data compression techniques in image compression for multimedia systems. Southcon/96 Conference Record
Preprints
- Using Speech Foundational Models in Loss Functions for Hearing Aid Speech Enhancement, arXiv.
- Objective and subjective evaluation of speech enhancement methods in the UDASE task of the 7th CHiME challenge, arXiv.
- Non-Intrusive Speech Intelligibility Prediction for Hearing-Impaired Users using Intermediate ASR Features and Human Memory Models, arXiv.
- The First Cadenza Signal Processing Challenge: Improving Music for Those With a Hearing Loss, arXiv.
- Auditory-Based Data Augmentation for End-to-End Automatic Speech Recognition, arXiv.
Grants
Current grants
- EnhanceMusic: Machine Learning Challenges to Revolutionise Music Listening for People with Hearing Loss, EPSRC, 06/2022 to 11/2026, £377,568, as PI
- Challenges to Revolutionise Hearing Device Processing, EPSRC, 10/2019 to 10/2025, £480,416, as PI
- UKRI Centre for Doctoral Training in Speech and Language Technologies and their Applications, EPSRC, 04/2019 to 09/2027, £5,508,850, as Co-PI
Previous grants
- TAPAS: Training Network on Automatic Processing of PAthological Speech, EC H2020, 11/2017 to 06/2022, £468,000, as Co-PI
- Deep Probabilistic Models for Making Sense of Unstructured Data, EPSRC, 03/2016 to 09/2019, £974,161, as Co-PI
- Deep learning of articulatory-based representations of dysarthric speech, Industrial, 02/2016 to 01/2017, £46,624, as Co-PI
- Towards visually-driven speech enhancement for cognitively-inspired multi-modal hearing-aid devices (AV-COGHEAR), EPSRC, 10/2015 to 09/2018, £125,493, as PI
- INSPIRE: Investigating Speech In Real Environments, EC FP7, 01/2012 to 12/2015, £308,473, as PI
- ACAS: Analysis of Complex Acoustic Scenes, EPSRC, 07/2010 to 09/2010, £9,978, as PI
- CHIME: Computational Hearing in Multisource Environments, EPSRC, 06/2009 to 05/2012, £326,245, as PI
- Audio-Visual Speech Recognition in the Presence of Non-Stationary Noise, EPSRC, 02/2005 to 05/2007, £116,853, as PI
Professional activities and memberships
- Member of the Speech and Hearing research group
- Co-founder of the CHiME series of International Workshops and Robust Speech Recognition Evaluations, 2011 onwards.
- EURASIP Best Paper Award, 2009, for the best paper in Speech Communication during 2005.
- ISCA Best Paper Award, 2008, for the best paper in Speech Communication, 2005-2007.