{"id":310,"date":"2016-10-27T10:27:22","date_gmt":"2016-10-27T08:27:22","guid":{"rendered":"http:\/\/members.loria.fr\/SOuni\/?page_id=310"},"modified":"2025-08-30T23:07:38","modified_gmt":"2025-08-30T21:07:38","slug":"activities","status":"publish","type":"page","link":"https:\/\/members.loria.fr\/SOuni\/accueil\/activities\/","title":{"rendered":"Activities"},"content":{"rendered":"<h3>Doctoral and scientific supervision<\/h3>\n<ul>\n<li><strong>Guilhem Faure (2024\u20132027)<\/strong> \u2013 <strong>Project COLAF<\/strong>: End-to-end speech-to-sign language generation.<\/li>\n<li><strong>Micka\u00eblla Grondin-Verdon (2021\u20132025)<\/strong> \u2013 <strong>CNRS PRIME 80<\/strong>: Modeling gestures and speech in interaction.<\/li>\n<li><strong>Louis Abel, Universit\u00e9 de Lorraine (2021\u20132025)<\/strong> \u2013 Audiovisual speech synthesis in interactive contexts.<\/li>\n<li><strong>Shakeel Ahmad Sheikh, Universit\u00e9 de Lorraine (2019\u20132022)<\/strong> \u2013 <strong>ANR BENEPHIDIRE<\/strong>: Neural network-based detection and rehabilitation of speech disfluencies in stuttering.<\/li>\n<li><strong>Th\u00e9o Biasutto, Universit\u00e9 de Lorraine (2016\u20132021)<\/strong> \u2013 <strong>PIA2 e-Fran METAL<\/strong>: Multimodal coarticulation modeling for intelligible talking head animation.<\/li>\n<li><strong>Sara Dahmani, Universit\u00e9 de Lorraine (2017\u20132020)<\/strong> \u2013 Audiovisual speech synthesis: Deep learning-based modeling of emotional expressions.<\/li>\n<li><strong>Utpala Musti, Universit\u00e9 de Lorraine (2009\u20132013)<\/strong> \u2013 <strong>INRIA CORDI-S<\/strong>: Bimodal unit selection for audiovisual speech synthesis.<\/li>\n<li><strong>Imen Jemaa (2009\u20132013)<\/strong> \u2013 <strong>PHC UTIQUE (Cotutelle: Universit\u00e9 de Lorraine \u2013 Universit\u00e9 El Manar, Tunisia)<\/strong>: Multi-resolution analysis for formant tracking.<\/li>\n<li><strong>Postdoctoral supervision:<\/strong> 
Elodie Gauthier (2018\u20132020), Manfred Past\u00e4tter (2019\u20132020), Asterios Toutios (2007\u20132009), Ingmar Steiner (2011\u20132012), S\u00e9bastien Demange (2009\u20132010).<\/li>\n<li><strong>Master's student supervision:<\/strong> Several Master's students supervised during their long final project (5\u20136 months).<br \/>\n<small><em>These students came from the Computer Science, Cognitive Science, and Automatic Signal Processing departments.<\/em><\/small><\/li>\n<\/ul>\n<h3 class=\"p1\">Participation in PhD and HDR Defense Juries<\/h3>\n<ul>\n<li><strong>Thesis Examiner &amp; Jury Member<\/strong> \u2013 Yanis Ouakrim (Universit\u00e9 Grenoble-Alpes, May 23, 2025).<\/li>\n<li><strong>Thesis Reviewer &amp; Jury Member<\/strong> \u2013 Nezih Younsi (ISIR, defense scheduled for April 23, 2025).<\/li>\n<li><strong>HDR Examiner &amp; Jury Member<\/strong> \u2013 Lina Rojas (Universit\u00e9 de Lorraine, June 2024).<\/li>\n<li><strong>Jury President &amp; Examiner<\/strong> \u2013 Hamza Bayd (IMT Mines Al\u00e8s, 2024).<\/li>\n<li><strong>Jury President &amp; Examiner<\/strong> \u2013 Evan Dufraisse (CEA-LIST, 2024).<\/li>\n<li><strong>Thesis Reviewer &amp; Jury Member<\/strong> \u2013 Sanjana Sankar (Universit\u00e9 Grenoble-Alpes, 2024).<\/li>\n<li><strong>Thesis Reviewer &amp; Jury Member<\/strong> \u2013 Samir Sadok (CentraleSup\u00e9lec Rennes, 2024).<\/li>\n<li><strong>Thesis Reviewer &amp; Jury Member<\/strong> \u2013 Nicolas Olivier (Universit\u00e9 de Rennes, 2022).<\/li>\n<li><strong>Thesis Reviewer &amp; Jury Member<\/strong> \u2013 Meysam Shamsi (Universit\u00e9 de Rennes, 2020).<\/li>\n<li><strong>Thesis Examiner &amp; Jury Member<\/strong> \u2013 Dodji Gbedahou (Universit\u00e9 Paul-Val\u00e9ry Montpellier 3, 2020).<\/li>\n<li><strong>Thesis Reviewer &amp; Jury Member<\/strong> \u2013 Diandra Fabre (Universit\u00e9 Grenoble-Alpes, 2016).<\/li>\n<li><strong>Thesis Examiner &amp; Jury Member<\/strong> \u2013 Adela Barbulescu (Universit\u00e9 de Grenoble, 
2015).<\/li>\n<\/ul>\n<h3>Conference Organization<\/h3>\n<ul>\n<li><em>JEP-TALN 2020<\/em> (Joint organization of the <em>Journ\u00e9es d\u2019\u00c9tudes sur la Parole<\/em> and <em>Traitement Automatique du Langage Naturel<\/em>), Nancy, France.<\/li>\n<li><em>AVSP 2019<\/em> (<em>Conference on Auditory-Visual Speech Processing<\/em>), Melbourne, Australia.<\/li>\n<li><em>AVSP 2017<\/em>, Stockholm, Sweden.<\/li>\n<li><em>AVSP 2015<\/em> \u2013 Organized jointly with <em>FAAVSP 2015<\/em> (<em>The 1st Joint Conference on Facial Analysis, Animation, and Audio-Visual Speech Processing<\/em>), Vienna, Austria.<\/li>\n<li><em>AVSP 2013<\/em>, Annecy, France.<\/li>\n<li><strong>Special Session Organizer<\/strong> \u2013 <em>Interspeech 2013<\/em>, Lyon, France: <em>\u201cArticulatory Data Acquisition and Processing\u201d<\/em> (25 presentations).<\/li>\n<li><strong>Chair of the Organizing Committee<\/strong> \u2013 <em>International Seminar on Speech Production (ISSP\u201908)<\/em>, Strasbourg, France. 
Co-organized with the <em>Institut de Phon\u00e9tique de Strasbourg<\/em>, <em>ZAS\/Phonetik Berlin<\/em>, and <em>LORIA Nancy<\/em> \u2013 Managed the review process, reviewer coordination, website development, and proceedings editing (150 participants).<\/li>\n<\/ul>\n<p style=\"font-weight: 400\"><strong>Program Committee Member<\/strong>:<\/p>\n<ul>\n<li><em>JEP<\/em>: 2016, 2018, 2020, 2022; <em>AVSP<\/em>: 2011, 2013, 2015 (FAAVSP), 2017, 2019; <em>ISSP 2008<\/em>.<\/li>\n<\/ul>\n<ul>\n<li><strong>Reviewer for International Journals<\/strong>: <em>Journal of the Acoustical Society of America, Speech Communication, Journal of Phonetics, IEEE Transactions on Speech and Audio Processing, Computer Speech and Language, Computer Assisted Language Learning, Logopedics Phoniatrics Vocology, JASA Express Letters, Journal of Speech, Language, and Hearing Research, Language Resources and Evaluation<\/em>, etc.<\/li>\n<li><strong>Reviewer for International Conferences<\/strong>: <em>ICASSP, EUSIPCO, INTERSPEECH, AVSP, FAAVSP, ISSP, ICMI, IVA<\/em>, etc.<\/li>\n<\/ul>\n<h3>Software<\/h3>\n<ul>\n<li><a href=\"http:\/\/visartico.loria.fr\/\">Visartico<\/a>: articulatory data visualization software for use with data acquired by an articulograph.<\/li>\n<li>Plavis: multimodal data analysis, processing, and visualization software.<\/li>\n<li><a href=\"http:\/\/members.loria.fr\/SOuni\/accueil\/projets\/multimod-platform\">MultiMod Platform<\/a>: multimodal data acquisition platform allowing the recording of motion-capture data using Vicon, depth (RGB-D) data using RealSense, and electromagnetic articulography (EMA) data using the AG501 articulograph.<\/li>\n<li>Talking head: a system animating a talking head with realistic rendering and dynamic articulation.<\/li>\n<\/ul>\n<h3>Scientific &amp; teaching-related activities<\/h3>\n<ul>\n<li><strong>Co-head<\/strong> of the executive board of the Computer Science division of the IAEM Doctoral School (since 
2025).<\/li>\n<li><strong>Coordinator<\/strong> of the recruitment committee for ATER (since 2016).<\/li>\n<li><strong>Representative<\/strong> of the University of Lorraine at AIDA \u2013 AI Doctoral Academy (since 2021).<\/li>\n<li><strong>Head<\/strong> of the Special Year Computer Science DUT Program (2016\u20132022).<\/li>\n<li><strong>Head<\/strong> of the MULTISPEECH research team (since 2022).<\/li>\n<li><strong>Co-chair<\/strong> of a selection committee for the recruitment of a professor (2025).<\/li>\n<li><strong>Member<\/strong> of selection committees for the recruitment of faculty members.<\/li>\n<li><strong>Member<\/strong> of the executive board of the Computer Science division of the IAEM Doctoral School.<\/li>\n<li><strong>Member<\/strong> of CNU27 (2020\u20132023 mandate).<\/li>\n<li><strong>Member<\/strong> of HCERES committees.<\/li>\n<li><strong>Representative<\/strong> of LORIA in the CLAIRE network (now CAIRNE \u2013 Confederation of Laboratories for Artificial Intelligence Research in Europe).<\/li>\n<li><strong>Co-head<\/strong> of the \u201cApplication Development \u2013 Software Engineering\u201d track (RA-IL), BUT2 and BUT3 (since 2023).<\/li>\n<li><strong>Vice President<\/strong> of the <em>Association Francophone de la Communication Parl\u00e9e (AFCP)<\/em> (2021\u20132024).<\/li>\n<li><strong>Secretary and Treasurer<\/strong> of the <em>Auditory-Visual Speech Association (AVISA)<\/em> (since 2013).<\/li>\n<li><strong>Member<\/strong> of the Board of the <em>Francophone Association of Spoken Communication (AFCP)<\/em> (2017\u20132020).<\/li>\n<li><strong>Member<\/strong> of several scientific associations: ISCA, IEEE Signal Processing Society (IEEE SPS), ACM.<\/li>\n<li><strong>Member<\/strong> of the <a href=\"http:\/\/www.fr-hermite.univ-lorraine.fr\"><em>Charles Hermite Federation<\/em><\/a> (2013\u20132017).<\/li>\n<li><strong>Elected board member<\/strong> of the <em>LORIA research center<\/em> (2011\u20132017).<\/li>\n<\/ul>\n<h3>Invited Speaker<\/h3>\n<ul>\n<li>Workshop on Affects, Artificial Companions, and Interactions (WACAI 2024), <em>Bordeaux, 2024.<\/em><\/li>\n<li>\u201cMultimodal Speech: Data and Models\u201d, <em>LISN Seminar, scheduled for March 3, 2025.<\/em><\/li>\n<li>\u201cObserving Humans to Animate an Expressive Talking Face\u201d, <em>SdL Day, University of Lorraine, Nancy, 2021.<\/em><\/li>\n<li>\u201cMultimodal Data Acquisition and Processing for Spoken Communication\u201d, <em>Technologies of Human Language and Multimodality, TLH-AFIA, Paris, 2020.<\/em><\/li>\n<li>Language learning, <em>2nd D-TRANSFORM Leadership School Program, Nancy, 19\u201323 May 2017.<\/em><\/li>\n<li>Audiovisual speech: facilitating oral-based communication, <em>Praxiling, University of Montpellier 3, Montpellier, France, October 2016.<\/em><\/li>\n<li>Toward Realistic Expressive Audiovisual Speech Synthesis, <em>Expressive Virtual Actors workshop, Grenoble, France, November 2015.<\/em><\/li>\n<li>Production of articulatory speech, <em>Conference on Corpus and Tools in Linguistics, Languages and Speech, Strasbourg, France, July 2013.<\/em><\/li>\n<li>Acquisition of articulatory data by an articulograph, <em>Workshop on Typology of rhotics: phonetic manifestations and phonological issues, Paris, France, June 2011.<\/em><\/li>\n<li>Tongue Control and its Implication in Pronunciation Training, <em>Natural Language Processing and Language Learning Workshop (NaTAL\u201910), Nancy, June 2010.<\/em><\/li>\n<li>Studying Pharyngealisation Using an Articulograph, <em>Workshop Pharyngeal and Pharyngealisation, Newcastle, England, March 2009.<\/em><\/li>\n<li>Talking heads: A framework to study audiovisual speech, <em>Institut de Phon\u00e9tique de Strasbourg, France, May 2009. 
<\/em><\/li>\n<\/ul>\n<h3>Reviewer<\/h3>\n<ul>\n<li>Reviewer for International Conferences: <em>Interspeech, AVSP, IVA, ICASSP, ISSP, JEP<\/em>, etc.<\/li>\n<li>Reviewer for Research Funding Agencies: <em>Austrian Academy of Sciences, Research Foundation Flanders<\/em>, etc.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Doctoral and scientific supervision<\/p>\n<ul>\n<li>Guilhem Faure (2024\u20132027) \u2013 Project COLAF: End-to-end speech-to-sign language generation.<\/li>\n<li>Micka\u00eblla Grondin-Verdon (2021\u20132025) \u2013 CNRS PRIME 80: Modeling gestures and speech in interaction.<\/li>\n<li>Louis Abel, Universit\u00e9 de Lorraine (2021\u20132025) \u2013 Audiovisual speech synthesis in interactive contexts.<\/li>\n<li>Shakeel Ahmad Sheikh, Universit\u00e9 de Lorraine (2019\u20132022) \u2013 ANR BENEPHIDIRE: Neural network-based detection and rehabilitation of speech disfluencies in stuttering.<\/li>\n<li>Th\u00e9o Biasutto, Universit\u00e9 de Lorraine (2016\u20132021) \u2013 PIA2 e-Fran METAL: Multimodal coarticulation modeling for intelligible talking head animation.<\/li>\n<li>Sara Dahmani, Universit\u00e9 de Lorraine (2017\u20132020) \u2013 Audiovisual speech synthesis: Deep learning-based modeling of emotional 
expressions.<\/li>\n<\/ul>\n","protected":false},"author":116,"featured_media":0,"parent":56,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-310","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/members.loria.fr\/SOuni\/wp-json\/wp\/v2\/pages\/310","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/members.loria.fr\/SOuni\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/members.loria.fr\/SOuni\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/members.loria.fr\/SOuni\/wp-json\/wp\/v2\/users\/116"}],"replies":[{"embeddable":true,"href":"https:\/\/members.loria.fr\/SOuni\/wp-json\/wp\/v2\/comments?post=310"}],"version-history":[{"count":14,"href":"https:\/\/members.loria.fr\/SOuni\/wp-json\/wp\/v2\/pages\/310\/revisions"}],"predecessor-version":[{"id":764,"href":"https:\/\/members.loria.fr\/SOuni\/wp-json\/wp\/v2\/pages\/310\/revisions\/764"}],"up":[{"embeddable":true,"href":"https:\/\/members.loria.fr\/SOuni\/wp-json\/wp\/v2\/pages\/56"}],"wp:attachment":[{"href":"https:\/\/members.loria.fr\/SOuni\/wp-json\/wp\/v2\/media?parent=310"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}