{"id":56,"date":"2015-06-16T13:44:07","date_gmt":"2015-06-16T11:44:07","guid":{"rendered":"http:\/\/members.loria.fr\/thierrygartiser\/?page_id=56"},"modified":"2026-01-29T21:12:13","modified_gmt":"2026-01-29T19:12:13","slug":"accueil","status":"publish","type":"page","link":"https:\/\/members.loria.fr\/SOuni\/","title":{"rendered":"Slim Ouni"},"content":{"rendered":"<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>\ud83c\uddec\ud83c\udde7 Slim Ouni<\/strong> is a Professor of Computer Science at the University of Lorraine (IUT Nancy-Charlemagne) and the head of the <a href=\"https:\/\/team.inria.fr\/multispeech\/\">Multispeech<\/a> research team, a joint team between Inria, CNRS, and the University of Lorraine. The team is dedicated to studying speech as a multimodal signal, integrating its acoustic, facial, articulatory, and gestural dimensions.<\/p>\n<p>His research focuses on <strong>multimodal speech communication<\/strong>. He investigates the intricate relationships between acoustic signals, articulatory movements, facial expressions, and co-speech gestures (head, hands, posture). His work aims to analyze, model, and synthesize these interactions for applications in expressive audiovisual\/multimodal speech synthesis (talking heads), articulatory modeling, and technology-enhanced second language learning. He is currently developing a new research axis on sign language generation and recognition.<\/p>\n<p>He is the coordinator of the ANR project <strong>Syncogest<\/strong> (2025\u20132029), which investigates the synchronization of speech and gesture. He is also a Co-principal Investigator of the Inria Challenge<strong> <a href=\"https:\/\/colaf.huma-num.fr\">COLaF<\/a><\/strong> (2023\u20132027), dedicated to developing resources for the languages of France. 
His research extends to interdisciplinary collaborations, including the ANR <a href=\"https:\/\/www.ihu-infiny.fr\/recherche\/rhu-i-deal-prevention-et-suivi-a-domicile-des-patients\/\">RHU <strong>I-DEAL<\/strong><\/a> project. Beyond these recent projects, he has actively contributed to a variety of other research initiatives.<\/p>\n<p>In addition to his research, Slim Ouni is involved in the academic and scientific community. He is currently co-responsible for the Computer Science section of the IAEM Doctoral School. He has also served as a member of the National Council of Universities (<strong>CNU<\/strong> 27, 2020\u20132023) and as Vice President of the French Association for Spoken Communication (<strong>AFCP<\/strong>, 2021\u20132024).<\/p>\n<p>He received his HDR from the University of Lorraine in 2013 and his PhD in Computer Science from Henri Poincar\u00e9 University, Nancy, in 2001. From 2002 to 2004, he was a postdoctoral researcher and a lecturer at the Baskin School of Engineering at the University of California, Santa Cruz (UCSC).<\/p>\n<p>&nbsp;<\/p>\n<ul>\n<li><strong><a href=\"http:\/\/members.loria.fr\/SOuni\/projets\/\">Research Interests<\/a><\/strong><\/li>\n<li><strong><a href=\"http:\/\/members.loria.fr\/SOuni\/accueil\/projects\/\">Projects<\/a><\/strong><\/li>\n<li><strong><a href=\"http:\/\/members.loria.fr\/SOuni\/accueil\/activities\/\">Activities<\/a><\/strong><\/li>\n<li><strong><a href=\"http:\/\/members.loria.fr\/SOuni\/publications\/\">Publications<\/a><\/strong><\/li>\n<li><a href=\"http:\/\/members.loria.fr\/SOuni\/accueil\/projets\/multimod-platform\/\"><strong>Multimodal Motion Capture Platform<\/strong><\/a><\/li>\n<li><a href=\"https:\/\/www.dynalips.com\"><strong>Dynalips<\/strong><\/a> (spin-off)<\/li>\n<\/ul>\n<h4>Open positions<\/h4>\n<ul>\n<li><a href=\"https:\/\/members.loria.fr\/SOuni\/phd-position-f-m-multimodal-speech-analysis-for-early-detection-of-crohns-disease-flares-through-deep-learning-methodologies\/\"><strong>(2026)<\/strong> 1 PhD &#8211; open position<\/a><\/li>\n<li><a href=\"https:\/\/members.loria.fr\/SOuni\/ingenieur-f-h-en-traitement-et-en-modelisation-de-donnees-multimodales\/\"><strong>(2025)<\/strong> 1 Computer science engineer position<\/a><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<h4>Examples of our latest work on automatic co-speech gesture generation (PhD 
thesis of Louis Abel).<\/h4>\n<div style=\"width: 648px;\" class=\"wp-video\"><video class=\"wp-video-shortcode\" id=\"video-56-1\" width=\"648\" height=\"365\" poster=\"http:\/\/members.loria.fr\/SOuni\/wp-content\/blogs.dir\/133\/files\/sites\/133\/2024\/04\/co-speech.jpg\" preload=\"metadata\" controls=\"controls\"><source type=\"video\/mp4\" src=\"http:\/\/members.loria.fr\/SOuni\/wp-content\/blogs.dir\/133\/files\/sites\/133\/2024\/04\/out_slm.mp4?_=1\" \/><a href=\"http:\/\/members.loria.fr\/SOuni\/wp-content\/blogs.dir\/133\/files\/sites\/133\/2024\/04\/out_slm.mp4\">http:\/\/members.loria.fr\/SOuni\/wp-content\/blogs.dir\/133\/files\/sites\/133\/2024\/04\/out_slm.mp4<\/a><\/video><\/div>\n\n","protected":false},"excerpt":{"rendered":"<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>\ud83c\uddec\ud83c\udde7 Slim Ouni is a Professor of Computer Science at the University of Lorraine (IUT Nancy-Charlemagne) and the head of the <a href=\"https:\/\/team.inria.fr\/multispeech\/\">Multispeech<\/a> research team, a joint team between Inria, CNRS, and the University of Lorraine. The team is dedicated to studying speech as a multimodal signal, integrating its acoustic, facial, articulatory, and gestural dimensions.<\/p>\n<p>His research focuses on multimodal speech communication. He investigates the intricate relationships between acoustic signals, articulatory movements, facial expressions, and co-speech gestures (head, hands, posture). 
His work aims to analyze, model, and synthesize these interactions for applications in expressive audiovisual\/multimodal speech synthesis (talking heads),<\/p>\n","protected":false},"author":5,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-56","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/members.loria.fr\/SOuni\/wp-json\/wp\/v2\/pages\/56","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/members.loria.fr\/SOuni\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/members.loria.fr\/SOuni\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/members.loria.fr\/SOuni\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/members.loria.fr\/SOuni\/wp-json\/wp\/v2\/comments?post=56"}],"version-history":[{"count":85,"href":"https:\/\/members.loria.fr\/SOuni\/wp-json\/wp\/v2\/pages\/56\/revisions"}],"predecessor-version":[{"id":771,"href":"https:\/\/members.loria.fr\/SOuni\/wp-json\/wp\/v2\/pages\/56\/revisions\/771"}],"wp:attachment":[{"href":"https:\/\/members.loria.fr\/SOuni\/wp-json\/wp\/v2\/media?parent=56"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}