October 1st, 2020 marked the start of ACOUST.IA, an Inria-funded three-year collaborative project jointly led by my collaborator Cédric Foy from the acoustic research team UMRAE at the Cerema of Strasbourg and myself. Our goal is to develop new techniques at the intersection of acoustics, audio signal processing and machine learning, to make the acoustic diagnosis of rooms better, simpler and cheaper. The core question is: "Can one retrieve the acoustical properties of surfaces inside a room, such as their absorption coefficients, from the audio recordings of a few claps?". Stéphane Dilungana just started his PhD thesis in Strasbourg on this topic, and is co-supervised by me, Cédric, and Sylvain Faisan from the iCube laboratory of the University of Strasbourg.
Shooting the report was a great experience! Here are some pictures from the shooting:
I learned that shooting a movie means spending a lot of time walking in corridors, going up and down stairs, passing doors multiple times… all of this looking as natural as possible! 🙂
Cédric Foy, our former intern Corto Bastien and myself (virtually) presented our preliminary work on the topic at the international conference Forum Acusticum on Dec. 4, 2020. In this work, we trained different neural network architectures on simulated datasets to estimate mean absorption coefficients in octave bands from room impulse responses. Check out this video of my presentation (slides here):
A short two-page paper describing the study is available here. We are currently working on a longer journal submission.
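For readers curious about the underlying acoustics, here is a minimal sketch of the classical baseline our learning-based approach aims to improve upon: estimating the reverberation time from a broadband room impulse response via Schroeder backward integration, then inverting Sabine's formula to get a mean absorption coefficient. This is a textbook method, not our neural network pipeline, and the room surface area and volume in the example are illustrative assumptions; the full task additionally works per octave band, which would require bandpass filtering the impulse response first.

```python
import numpy as np

def mean_absorption_sabine(rir, fs, surface_area, volume):
    """Estimate the mean absorption coefficient of a room from a broadband
    room impulse response (RIR), using Schroeder integration + Sabine's formula."""
    # Schroeder backward integration of the squared RIR -> energy decay curve (dB)
    energy = np.cumsum(rir[::-1] ** 2)[::-1]
    edc_db = 10 * np.log10(energy / energy[0])
    # Fit a line on the -5 dB to -25 dB portion of the decay (T20-style estimate)
    i5 = np.argmax(edc_db <= -5)
    i25 = np.argmax(edc_db <= -25)
    t = np.arange(len(rir)) / fs
    slope, _ = np.polyfit(t[i5:i25], edc_db[i5:i25], 1)  # decay rate in dB/s
    t60 = -60.0 / slope  # time to decay by 60 dB
    # Sabine: T60 = 0.161 V / (alpha * S)  =>  alpha = 0.161 V / (S * T60)
    return 0.161 * volume / (surface_area * t60)

# Synthetic example: exponentially decaying noise mimicking a T60 of 0.5 s
# (surface area and volume below are made-up illustrative values)
fs = 16000
t = np.arange(fs) / fs  # 1 second
rng = np.random.default_rng(0)
rir = rng.standard_normal(t.size) * np.exp(-6.91 * t / 0.5)
alpha = mean_absorption_sabine(rir, fs, surface_area=100.0, volume=60.0)
```

Note that this only yields a single average coefficient for the whole room; disentangling the contributions of individual surfaces is precisely the harder inverse problem the project tackles with learned models.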
Our long paper is now published in the prestigious Journal of the Acoustical Society of America (JASA)! You can download the published version here [© 2021 Acoustical Society of America https://asa.scitation.org/doi/10.1121/10.0005888].