From 12 to 14 December, a Hearing Hub Ideas Hackathon was organised by the Machine Learning Group. The goal was to develop collaborations within the Australian Hearing Hub and across the University, generating grant applications and projects leading to innovative technologies in hearing research.
Thirty-five people attended the hackathon and were tasked with exploring solutions to two challenges involving data. The challenges were:
- How well can speech understanding be predicted from electroencephalogram (EEG) data?
- Can we determine what factors are causing hearing loss in young people, using a dataset of 1,400 respondents that includes lifestyle surveys, physiological measurements and hearing tests?
On the third day, nine groups presented their projects to four expert judges: Professor Catherine McMahon (Department of Linguistics), Associate Professor Mark Dras (Department of Computing), Associate Professor Adam Dunn (Centre for Health Informatics) and Dr Jia Wu (Department of Computing).
Congratulations to Yang Guo for winning Challenge 1. His project explored a random forest approach to predicting intelligibility performance from electrophysiological measures. Yang examined dimensionality reduction techniques, feature selection and physical electrode locations that yielded high prediction performance, and was creative and innovative in his approach.
Second prize was awarded to Judy Zhu, James Qin and Kyle Liu. Their project also predicted intelligibility performance from electrophysiological measures. The team examined dimensionality reduction using PCA, as well as combining support vector machine (SVM) and decision tree regression models to improve prediction performance.
In Challenge 2, first prize went to George Kennedy, Nicky Chong-White and Jason Heeris. Their project aimed to predict abnormal hearing difficulties in noise from audiometric measures. The team applied a deep feed-forward classification network to large-scale audiometric data and successfully identified groups of people likely to self-report hearing issues in everyday listening conditions.
Second prize was awarded to Matthieu Recugnat, Fabrice Brady and Robert Luke. Their model, which combined psycho-acoustic models with neural networks, was highly successful in predicting perceived hearing handicap. Their innovative solution drew on a deep understanding of both hearing impairment and machine learning.
The people’s choice award went to Derek Wang, Vivian Pao and Harrison Nguyen for their high level of enthusiasm when presenting their project. The team used a data-driven approach to defining normal hearing.
All the projects were high quality, innovative and well presented. Well done to all the teams for their hard work.