About Us
We are a research lab within the University of Michigan's Computer Science and Engineering department, dedicated to advancing auditory intelligence: technologies that expand human capabilities related to sound. Our mission is to create human-AI systems that interpret, enrich, and personalize sound, driving new possibilities in accessibility, hearing health, and beyond.
Our interdisciplinary team includes computer scientists, electrical engineers, audiologists, physicians, social scientists, designers, and UX researchers. Together, we advance our vision through three core areas:
Auditory Modeling. Developing computational models of human hearing to improve diagnostics, tailor treatments, and inform new auditory technologies.
Projects: HACSound | Pediatric Sound Database | DizzModels
Auditory Augmentation Devices. Creating adaptive devices and algorithms to deliver seamless, personalized auditory feedback.
Projects: SonicMold | SoundShift | SoundActions
Audio Feedback Interfaces. Designing multimodal interfaces that support diverse auditory needs, enhancing accessibility and interaction.
Projects: AdaptiveSound | ProtoSound | HoloSound | SoundModVR | MaskSound
Accessibility drives much of our work (e.g., SoundWatch, HACSound, MaskSound), as it offers a glimpse into the future: technologies originally pioneered for assistive use, such as noise cancellation and environment detection, often evolve into mainstream applications, as seen in the transition from hearing aids to modern headphones.
Our innovations have been publicly released (with one deployment reaching over 100,000 users) and have directly impacted products at Microsoft, Google, Apple, and Oticon. Our research has earned multiple paper awards at top-tier computer science venues, has been featured in major media outlets (e.g., CNN, Forbes, New Scientist), and is included in curricula worldwide.
We are actively recruiting PhD students and postdocs eager to shape the future of auditory intelligence. If interested, please apply.
Recent News
Oct 30: Our CARTGPT work received the best poster award at ASSETS!
Oct 11: Soundability lab students are presenting 7 papers, demos, and posters at the upcoming UIST and ASSETS 2024 conferences!
Sep 30: We were awarded the Google Academic Research Award for Leo and Jeremy's project!
Jul 28: Two demos and one poster accepted to ASSETS/UIST 2024!
Jul 02: Two papers, SoundModVR and MaskSound, accepted to ASSETS 2024!
May 22: Our paper SoundShift, which conceptualizes mixed reality audio manipulations, accepted to DIS 2024! Congrats, Rue-Chei and team!
Mar 11: Our undergraduate student, Hriday Chhabria, accepted to the CMU REU program! Hope you have a great time this summer, Hriday.
Feb 21: Our undergraduate student, Wren Wood, accepted to the PhD program at Clemson University! Congrats, Wren!
Jan 23: Our Master's student, Jeremy Huang, has been accepted to the UMich CSE PhD program. That's two pieces of good news for Jeremy this month (the CHI paper being the first). Congrats, Jeremy!
Jan 19: Our paper detailing a brand-new human-AI collaborative approach for sound recognition has been accepted to CHI 2024! We can't wait to present our work in Hawaii later this year!
Oct 24: SoundWatch received a best student paper nomination at ASSETS 2023! Congrats, Jeremy and team!
Aug 17: New funding alert! Our NIH funding proposal on "Developing Patient Education Materials to Address the Needs of Patients with Sensory Disabilities" has been accepted!
Mar 16: Professor Dhruv Jain elected as the inaugural ACM SIGCHI VP for Accessibility!