Acoust.IA: Making Acoustic Diagnosis Better, Simpler and Cheaper

October 1st, 2020 marked the start of ACOUST.IA, an Inria-funded three-year collaborative project jointly led by my collaborator Cédric Foy from the acoustic research team UMRAE at the Cerema of Strasbourg and myself. Our goal is to develop new techniques at the intersection of acoustics, audio signal processing and machine learning to make the acoustic diagnosis of rooms better, simpler and cheaper. The core question is: « Can one retrieve the acoustical properties of surfaces inside a room, such as their absorption coefficients, from the audio recordings of a few claps? ». Stéphane Dilungana has just started his PhD thesis in Strasbourg on this topic, and is co-supervised by me, Cédric, and Sylvain Faisan from the iCube laboratory of the University of Strasbourg.
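To give a rough feel for the problem (this is a toy illustration, not the project's method), Sabine's classical formula ties a room's reverberation time T60, which can be measured from a clap recording, to the *average* absorption of its surfaces; recovering per-surface coefficients is the much harder inverse problem the project tackles. The room dimensions below are made up for the example.

```python
# Toy illustration (not the ACOUST.IA method): Sabine's formula,
#   T60 = 0.161 * V / (alpha_avg * S),
# links the reverberation time T60 (s) of a room of volume V (m^3) and
# total surface area S (m^2) to the average absorption coefficient
# alpha_avg of its surfaces.

def sabine_t60(volume_m3, surface_m2, alpha_avg):
    """Reverberation time (s) predicted by Sabine's formula."""
    return 0.161 * volume_m3 / (alpha_avg * surface_m2)

def average_absorption(volume_m3, surface_m2, t60_s):
    """Invert Sabine's formula: average absorption from a measured T60."""
    return 0.161 * volume_m3 / (t60_s * surface_m2)

# A hypothetical 5 m x 4 m x 3 m room (V = 60 m^3, S = 94 m^2)
# with average absorption 0.1:
t60 = sabine_t60(60.0, 94.0, 0.1)
alpha = average_absorption(60.0, 94.0, t60)
```

Note that this only yields one global absorption figure: disentangling the contribution of each individual surface from a few recordings is what makes the problem interesting.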


Read more

The Maths of Group Testing: Mixing Samples to Speed Up COVID-19 Detection

English Version Here


Update 17/12/2020: Since this post was written, a group of French researchers has created a website gathering recent publications and press articles on the use of pooled testing for COVID-19 screening around the world. Go take a look!

While the majority of the world's population is now under lockdown to fight the spread of the new coronavirus, the question of what comes next arises. Once lockdown measures are progressively lifted, the key to avoiding second and third waves of the pandemic will be massive and rapid testing, combined with case tracking and targeted quarantines. Unfortunately, the testing capacity of most countries is currently far from sufficient.

Read more

The Maths of Group Testing: Mixing Samples to Speed Up COVID-19 Detection

French Version Here


Update 17/12/2020: Since the publication of this post, a group of French researchers has created a website gathering recent publications and news articles on the use of group testing for COVID-19 detection around the world. Check it out!

Now that a large part of the world's population is in lockdown to fight the global spread of the new coronavirus, the crucial question is: what's next? As lockdown measures are progressively lifted, the key to avoiding second and third waves of the pandemic will be massive and rapid testing, combined with case tracking and targeted quarantining. Unfortunately, the testing capacity of most countries is currently far from sufficient.
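The simplest pooling scheme, Dorfman's two-stage procedure, already shows why mixing samples helps at low prevalence: pool n samples and test the pool once; only if the pool comes back positive, retest each member individually. A minimal sketch (the prevalence figure below is purely illustrative, and real protocols must also account for test sensitivity and dilution):

```python
# Toy sketch of Dorfman's two-stage group testing: pool n samples, test
# the pool once, and retest individuals only if the pool is positive.
# Assumes independent infections with probability p per person.

def expected_tests_per_person(p, n):
    """Expected number of tests per person for pool size n."""
    # Always 1 pool test; plus n individual tests if the pool is positive.
    p_pool_positive = 1.0 - (1.0 - p) ** n
    return (1.0 + n * p_pool_positive) / n

def best_pool_size(p, max_n=100):
    """Pool size minimizing the expected number of tests per person."""
    return min(range(2, max_n + 1),
               key=lambda n: expected_tests_per_person(p, n))

# At an (illustrative) 1% prevalence, pooling cuts the testing cost to a
# small fraction of one test per person:
n_opt = best_pool_size(0.01)
cost = expected_tests_per_person(0.01, n_opt)
```

The lower the prevalence, the larger the optimal pool and the bigger the saving, which is exactly why pooling is attractive for mass screening.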

Read more

Signal Processing Cup 2019

I had the pleasure and honour of initiating and coordinating the IEEE Signal Processing Cup 2019 on the theme « Search & Rescue with Drone-Embedded Sound Source Localization ». The SPCup is an international competition aiming to promote real-world applications of signal processing among undergraduate students. It took place from November 14th, 2018 to May 13th, 2019.

The three finalist teams of the SPCup 2019 at ICASSP, Brighton, UK.

Read more

Serbia Science Festival 2018

From November 29th to December 1st, I had the great joy and honor of introducing a young audience to some of the science behind robots, artificial intelligence and sounds at the 12th edition of the Serbia Science Festival (Festival Nauke). I gave four lectures, each in front of 450 people, most of them pupils between 5 and 15 years old. The slides of the presentation can be found here.

The DREGON dataset

Martin Strauss, Pol Mordel, Victor Miguet and myself have just released the DREGON dataset. DREGON stands for DRone EGonoise and localizatiON. It consists of sounds recorded with an 8-channel microphone array embedded in a quadrotor UAV (Unmanned Aerial Vehicle). The recordings are annotated with the precise 3D position of the sound source relative to the drone, as well as additional characteristics of the drone's internal state such as motor speeds and inertial measurements. The dataset aims to promote research in UAV-embedded sound source localization, notably for the application of semi-autonomous search-and-rescue with drones.
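As a hint of what such localization methods build on (a toy sketch, not the methods benchmarked on DREGON): the time difference of arrival (TDOA) of a sound between two microphones can be estimated by cross-correlation, and TDOAs across an array constrain the source direction. The synthetic "clap" below is made up for the example; real systems use all 8 channels and must also cope with the drone's strong ego-noise.

```python
# Toy TDOA estimation between two microphone channels via plain
# cross-correlation. A positive lag means the signal reached mic1 first.

def cross_correlation_lag(x, y, max_lag):
    """Lag (in samples) maximizing the cross-correlation of x and y."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i in range(len(x)):
            j = i + lag
            if 0 <= j < len(y):
                score += x[i] * y[j]
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Synthetic clap-like pulse arriving 5 samples later on the second mic:
pulse = [0.0] * 20 + [1.0, 0.6, 0.3] + [0.0] * 20
mic1 = pulse
mic2 = [0.0] * 5 + pulse[:-5]
tdoa = cross_correlation_lag(mic1, mic2, max_lag=10)
```

In practice, frequency-domain variants such as GCC-PHAT are preferred for robustness to reverberation and noise, but the underlying cue is the same.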

The VAST project

VAST stands for Virtual Acoustic Space Traveling and is a new paradigm for learning-based sound source localization and audio scene geometry estimation. Most existing methods that estimate the position of a sound source or other geometrical properties of an audio scene are either based on an approximate physical model (physics-driven) or on a specific-purpose calibration set (data-driven). With VAST, the idea is to learn a mapping from audio features to the desired geometrical properties using a massive dataset of simulated room impulse responses. The dataset is designed to be maximally representative of the potential audio scenes the considered system may operate in, while remaining reasonably compact. The aim is to demonstrate that mappings learned on virtual datasets generalize well to real-world data, and to provide a useful tool for research teams interested in sound source localization.
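A grossly simplified sketch of the idea (my own toy example, far from the actual VAST features or learning methods): build a table of (simulated acoustic feature → geometric label) pairs, then answer a query feature by nearest-neighbor lookup. Here the feature is a single free-field interaural time difference (ITD) and the label is the source azimuth; the mic spacing and sampling rate are assumed values for illustration.

```python
# Toy sketch of learning-based localization from simulated data: a table
# of (simulated ITD -> azimuth) pairs queried by nearest-neighbor lookup.
import math

SPEED_OF_SOUND = 343.0   # m/s
MIC_SPACING = 0.18       # m, assumed inter-microphone distance
SAMPLE_RATE = 44100.0    # Hz, assumed

def simulated_itd(azimuth_deg):
    """Free-field interaural time difference (in samples) for a source
    at the given azimuth (0 = straight ahead, +90 = full right)."""
    delay_s = MIC_SPACING * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND
    return delay_s * SAMPLE_RATE

# "Virtual" training set: densely sampled azimuths and their simulated ITDs.
training = [(simulated_itd(az), az) for az in range(-90, 91)]

def localize(measured_itd):
    """Return the azimuth whose simulated ITD is closest to the query."""
    return min(training, key=lambda pair: abs(pair[0] - measured_itd))[1]

estimate = localize(simulated_itd(30.0))
```

The real datasets replace this single scalar cue with rich binaural features computed from simulated room impulse responses, and the lookup with learned regression models, but the simulate-then-learn logic is the same.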


Clément Gaultier, Saurabh Kataria, Diego Di Carlo and myself are working on the release of datasets for VAST. Two binaural datasets are already available on the project website. We co-authored two publications demonstrating this paradigm for binaural 3D sound source localization and wall absorption estimation using these datasets.