
Digital Future Initiative – personalise hearing models

Google Research, Cochlear, Macquarie University Hearing, National Acoustic Laboratories (NAL), NextSense and The Shepherd Centre will together focus on new applications of AI and machine learning to develop listening and communication technologies, overcome the current challenges of those technologies, and pave the way for more customised hearing healthcare.


The first project seeks to personalise hearing models to better address individual listening needs to enhance hearing aids and other listening devices.


This technology could be particularly beneficial for people using listening devices in complex listening environments – such as busy restaurants, group brainstorms or live orchestral performances. The overlapping sounds in these settings can make it strenuous or overwhelming for users of these devices to process and decipher the different sounds around them.

This project will explore new applications of AI to better identify, categorise and segregate sound sources. Ultimately, this might make it easier for people using assistive listening devices to follow a conversation or activity as the technology could help to prioritise sounds, such as a person speaking – and filter out others, such as background noise.
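One common way to prioritise a target sound and filter out others is time-frequency masking: a separation model estimates, for each time-frequency bin of the mixed signal, how much of it belongs to the target (e.g. speech) and attenuates the rest. The sketch below is only a minimal illustration of that general idea, not the project's actual method; the `apply_speech_mask` function and the hand-built low-pass mask in the usage example are hypothetical stand-ins for the mask a trained model would produce.

```python
import numpy as np
from scipy.signal import stft, istft

def apply_speech_mask(mixture, mask, fs=16000, nperseg=512):
    """Attenuate non-target sound by weighting each time-frequency bin.

    `mask` is assumed to come from a separation model: values in [0, 1],
    same shape as the STFT of `mixture` (here it is a hypothetical stand-in).
    """
    # Transform the mixed audio into the time-frequency domain.
    _, _, spec = stft(mixture, fs=fs, nperseg=nperseg)
    # Scale each bin by the mask, then resynthesise a waveform.
    _, enhanced = istft(spec * mask, fs=fs, nperseg=nperseg)
    return enhanced

# Usage: a 440 Hz "target" tone mixed with broadband noise, and a crude
# mask that keeps only bins below 1 kHz (a real model would infer this).
fs = 16000
t = np.arange(fs) / fs
mixture = np.sin(2 * np.pi * 440 * t) + 0.5 * np.random.randn(fs)
freqs, _, spec = stft(mixture, fs=fs, nperseg=512)
mask = np.where(freqs < 1000, 1.0, 0.0)[:, None] * np.ones((1, spec.shape[1]))
enhanced = apply_speech_mask(mixture, mask, fs=fs)
```

In a real system the mask would be predicted per frame by a neural network trained to favour speech, so it adapts as the acoustic scene changes.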

Prof Greg Leigh AO (NextSense), Dr Simon Carlile (Google Research), Prof David McAlpine (Macquarie University), Dr Zachary Smith (Cochlear), Prof Catherine McMahon (Macquarie University), Dr Aleisha Davis (Shepherd Centre), Dr Malcolm Slaney (Google Research), Sam Sepah (Google Research), Dr Brent Edwards (National Acoustic Laboratories)