{"id":329,"date":"2017-02-10T02:07:44","date_gmt":"2017-02-10T00:07:44","guid":{"rendered":"http:\/\/members.loria.fr\/SIvaldi\/?page_id=329"},"modified":"2022-06-23T01:57:35","modified_gmt":"2022-06-22T23:57:35","slug":"research","status":"publish","type":"page","link":"https:\/\/members.loria.fr\/SIvaldi\/research\/","title":{"rendered":"Research"},"content":{"rendered":"<p>My research is at the intersection of robotics, learning, control and interaction. I am more and more interested into developing <strong>collaborative robots<\/strong>, that are able to interact physically and socially with humans, especially those that are not experts in robotics. I am currently interested in two forms of collaboration: the human-humanoid collaboration, in the form of teleoperation; and the human-cobot or human-exoskeleton physical collaboration. To collaborate, the robot needs a very smart human-aware whole-body control: Human-aware here means that it has to consider the human status, its intent, and its optimization criteria. The robot needs good models of human behavior, and that is why I am interested into human-robot interaction.<\/p>\n<p>My goal is to &#8220;close the loop&#8221; on the human, taking into account their feedback into the robot learning and control process. I am also interested in questions related to the &#8220;social&#8221; impact of collaborative robotics technologies on humans, for example the robot acceptance and trust.<\/p>\n<p>&nbsp;<\/p>\n<hr \/>\n<h3>Tele-operation of humanoid robots<\/h3>\n<p>This research started during the project AnDy, where we were facing the problem of teaching collaborative whole-body behaviors to the iCub. In short, whole-body teleoperation is the whole-body version of kinesthetic teaching for learning from demonstration. 
We developed whole-body tele-operation of the iCub robot, showing that it is possible to replicate the human operator&#8217;s movements even though the two systems have different dynamics; the operator could even make the robot fall (Humanoids 2018). We also optimized the humanoid&#8217;s whole-body movements, demonstrated by the human, for the robot&#8217;s dynamics (Humanoids 2019). Later, we proposed a multimode teleoperation framework enabling an immersive experience in which the operator watched the robot&#8217;s visual feedback inside a VR headset (RAM 2019). To improve the precision in tracking the human&#8217;s desired trajectory, we proposed to optimize the whole-body controller&#8217;s parameters with the goal of making them &#8220;generic&#8221; (RA-L 2020). Finally, we addressed the problem of tele-operating the robot in the presence of large delays (up to 2 seconds), proposing &#8220;prescient teleoperation&#8221;, a technique that anticipates the human operator using human motion prediction models (arXiv 2021 &#8211; under review). We have demonstrated our teleoperation suite on two humanoid robots: iCub and Talos.<\/p>\n<hr \/>\n<h3>Exoskeletons<\/h3>\n<p>This line of research also started during the project AnDy. We collaborated with the consortium, and particularly with Ottobock GmbH, to validate their passive exoskeleton Paexo for overhead work (TNSRE 2019). Later, during the COVID-19 pandemic, we used our expertise with assistive exoskeletons to help the physicians of the University Hospital of Nancy, who used a passive exoskeleton for back assistance during the prone-positioning maneuvers of COVID-19 patients in the ICU (project ExoTurn, AHFE 2020). 
We are currently studying active upper-body exoskeletons in the project ASMOA.<\/p>\n<hr \/>\n<h3>Machine learning to improve whole-body control<\/h3>\n<table  class=\" table table-hover\" style=\"width: 100%;vertical-align: top\">\n<tbody>\n<tr>\n<td><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2018\/08\/icubnancy.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-602 alignleft\" src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2018\/08\/icubnancy-225x300.jpg\" alt=\"\" width=\"225\" height=\"300\" srcset=\"https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2018\/08\/icubnancy-225x300.jpg 225w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2018\/08\/icubnancy-768x1024.jpg 768w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2018\/08\/icubnancy.jpg 960w\" sizes=\"auto, (max-width: 225px) 100vw, 225px\" \/><\/a>This line of research started during the project <a href=\"http:\/\/members.loria.fr\/SIvaldi\/projets\/codyco-2013-2017\/\">CoDyCo<\/a>, where we explored how to use machine learning techniques to improve the whole-body control of iCub and the control of its contacts. We learned contact models to improve the inverse dynamics in <a href=\"http:\/\/www.ausy.tu-darmstadt.de\/uploads\/Site\/EditPublication\/Calandra_ICRA15.pdf\">ICRA 2015<\/a>, then learned how to improve torque control in the presence of contacts in <a href=\"https:\/\/hal.inria.fr\/hal-01205501\/document\">HUMANOIDS 2015<\/a>. These methods did not scale to the full humanoid, though. At the same time, to enable balancing on our humanoid, we were developing whole-body controllers based on multi-task QP formulations, which have the known drawback of requiring tedious expert tuning of task priorities and trajectories. 
So we started to explore how to automatically learn optimal weights and trajectories for whole-body controllers: in <a href=\"https:\/\/hal.archives-ouvertes.fr\/hal-01273409\/document\">ICRA 2016<\/a> we showed that it was possible to learn the time evolution of the task weights for complex motions of redundant manipulators; in <a href=\"https:\/\/hal.inria.fr\/hal-01377690\/document\">HUMANOIDS 2016<\/a> we scaled the problem to the humanoid case, while ensuring that the optimized weights were &#8220;safe&#8221;, i.e., never violating any of the problem constraints &#8211; to this end, we benchmarked different constrained optimization algorithms; in <a href=\"https:\/\/hal.archives-ouvertes.fr\/hal-01613646\/document\">HUMANOIDS 2017<\/a> we applied our learning method to optimize task trajectories rather than task priorities, since in many multi-task QP controllers the task priorities are fixed but the trajectories still need to be optimized, and these can be critical, as in some challenging balancing problems.<br \/>\nIn the project <a href=\"http:\/\/members.loria.fr\/SIvaldi\/projets\/andy-2017-2020\/\">ANDY<\/a>, I am pushing this line of research towards robust and safe whole-body control, which we need for human-robot collaboration. 
For example, in <a href=\"https:\/\/hal.archives-ouvertes.fr\/hal-01569948\/document\">HUMANOIDS 2017<\/a> we used trial-and-error algorithms to adapt, in a few trials on the real robot, a QP controller that worked well for the simulated robot but not for the real one (the reality-gap problem).<\/td>\n<\/tr>\n<tr>\n<td><p>Related topics:<\/p>\n<ul>\n<li>Human-aware whole-body control for human-robot physical interaction (<a href=\"https:\/\/hal.archives-ouvertes.fr\/hal-01620789\/document\">RAL 2018<\/a>) and assistance (<a href=\"https:\/\/hal.archives-ouvertes.fr\/hal-01590678v2\/document\">ICRA 2018<\/a>)<\/li>\n<li>Whole-body tele-operation of a humanoid robot (<a href=\"https:\/\/hal.archives-ouvertes.fr\/hal-01790597\/document\">ICRA 2018 WS<\/a>)<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<hr \/>\n<h3>Human-robot interaction (social &amp; physical)<\/h3>\n<table  class=\" table table-hover\" style=\"width: 100%;vertical-align: top\">\n<tbody>\n<tr>\n<td><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/icub_physical_interaction.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-342 size-medium\" src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/icub_physical_interaction-300x228.png\" alt=\"icub_physical_interaction\" width=\"300\" height=\"228\" srcset=\"https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/icub_physical_interaction-300x228.png 300w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/icub_physical_interaction.png 608w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a>When the robot interacts with a human, we must look at the whole set of exchanged signals: physical (i.e., exchanged forces) and social (verbal and non-verbal, i.e., gaze, speech, posture). Is it possible to alter the dynamics of such signals by modulating the robot&#8217;s 
behavior? Are the production and dynamics of these signals influenced by individual factors?<br \/>\nI am interested in studying these signals during human-robot collaborative tasks, especially when ordinary people without a robotics background interact with the robot. We showed that the robot&#8217;s proactivity changes the rhythm of the interaction (<a href=\"http:\/\/www.frontiersin.org\/Neurorobotics\/10.3389\/fnbot.2014.00005\/abstract\" target=\"_blank\" rel=\"noopener\">Frontiers 2014<\/a>), and that extroversion and a negative attitude towards robots appear in the dynamics of gaze and speech during collaborative tasks (<a href=\"https:\/\/hal.inria.fr\/hal-01322231\/document\">IJSR 2016<\/a>). In the project <a href=\"http:\/\/members.loria.fr\/SIvaldi\/projets\/codyco-2013-2017\/\">CoDyCo<\/a>, I focused on the analysis of physical signals (e.g., contact forces, tactile signals) during a collaborative task requiring physical contact between humans and iCub. In the project <a href=\"http:\/\/members.loria.fr\/SIvaldi\/projets\/andy-2017-2020\/\">ANDY<\/a>, I am pushing this line of research towards efficient multimodal collaboration between humans and humanoids.<\/td>\n<\/tr>\n<tr>\n<td><p>Related topics:<\/p>\n<ul>\n<li>Trust in Human-Robot Interaction (<a href=\"https:\/\/hal.inria.fr\/hal-01298502\/file\/Trust%20CHB-D-15-01455.pdf\">Computers in Human Behavior 2016<\/a>)<\/li>\n<li>Ethical issues for human-centered technologies and collaborative robots (<a href=\"https:\/\/hal.archives-ouvertes.fr\/hal-01826487\/document\">ARSO 2018<\/a>)<\/li>\n<li>Prediction of intention during HRI with Probabilistic Movement Primitives (<a href=\"https:\/\/www.frontiersin.org\/articles\/10.3389\/frobt.2017.00045\/pdf\">Frontiers in Robotics and AI 2017<\/a>, <a href=\"https:\/\/hal.archives-ouvertes.fr\/hal-01644585\/file\/multimodal_prompV2.pdf\">HFR 2019<\/a>)<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<hr \/>\n<h3>Multimodal learning and 
deep learning for robotics<\/h3>\n<table  class=\" table table-hover\" style=\"width: 100%;vertical-align: top\">\n<tbody>\n<tr>\n<td><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/icub_numbers_deep_learning.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-340 \" src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/icub_numbers_deep_learning-300x126.png\" alt=\"icub_numbers_deep_learning\" width=\"319\" height=\"134\" srcset=\"https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/icub_numbers_deep_learning-300x126.png 300w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/icub_numbers_deep_learning-768x323.png 768w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/icub_numbers_deep_learning-1024x431.png 1024w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/icub_numbers_deep_learning.png 1721w\" sizes=\"auto, (max-width: 319px) 100vw, 319px\" \/><\/a>I have been using neural networks since my master&#8217;s thesis, for optimal coding\/decoding using game theory (<a href=\"http:\/\/www.isir.upmc.fr\/files\/2009ACLI2174.pdf\">IJCNN 2009<\/a>), and then in my PhD thesis for learning sequences of optimal controllers in a model predictive control framework (<a href=\"http:\/\/www.isir.upmc.fr\/files\/2010ACLI2173.pdf\">IROS 2010<\/a>). During my postdoc at ISIR, we proposed an architecture based on auto-encoders for multimodal learning of robot skills (<a href=\"https:\/\/hal.archives-ouvertes.fr\/file\/index\/docid\/1065741\/filename\/camera_ready.pdf\" target=\"_blank\" rel=\"noopener\">ICDL 2014<\/a>). 
We applied the architecture to the iCub to solve the problem of learning to draw numbers, combining visual, proprioceptive and auditory information (<a href=\"https:\/\/tel.archives-ouvertes.fr\/hal-01083521\/document\" target=\"_blank\" rel=\"noopener\">RAS 2014<\/a>). We showed that a multimodal framework not only improves classification, but also compensates for missing modalities during recognition.<br \/>\nThis line of research is now pursued in the <a href=\"http:\/\/members.loria.fr\/SIvaldi\/projets\/andy-2017-2020\/\">ANDY<\/a> project, for classifying and recognizing human actions from several types of sensors.<\/td>\n<\/tr>\n<tr>\n<td><p>Related topics:<\/p>\n<ul>\n<li>Activity recognition using multimodal wearable sensors (<a href=\"https:\/\/hal.inria.fr\/hal-01701996\/document\">ACHI 2018<\/a>)<\/li>\n<li>Activity recognition and prediction using variational auto-encoders (<a href=\"https:\/\/arxiv.org\/pdf\/1807.02350.pdf\">arXiv 2018<\/a>)<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Past topics:<\/strong><\/p>\n<hr \/>\n<h3>Multimodal and active learning<\/h3>\n<table  class=\" table table-hover\" style=\"width: 100%;vertical-align: top\">\n<tbody>\n<tr>\n<td style=\"vertical-align: top\"><\/td>\n<td><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/HRI_distinghuish.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-339 size-medium\" src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/HRI_distinghuish-300x194.png\" alt=\"HRI_distinghuish\" width=\"300\" height=\"194\" srcset=\"https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/HRI_distinghuish-300x194.png 300w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/HRI_distinghuish.png 693w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" 
\/><\/a>In the <a href=\"projects.htm\">MACSI project<\/a>, we took inspiration from infant development to make the iCub learn incrementally through active exploration and social interaction with a tutor, exploiting its multitude of sensors (<a href=\"https:\/\/members.loria.fr\/SIvaldi\/publications\/papers\/macsijournal.pdf\">TAMD 2014<\/a>).<br \/>\nMultimodality was fundamental in many ways: to improve object recognition (<a href=\"http:\/\/www.isir.upmc.fr\/files\/2013ACLI2906.pdf\" target=\"_blank\" rel=\"noopener\">ROBIO 2013<\/a>) and to discriminate the human and robot bodies from objects (<a href=\"https:\/\/hal.archives-ouvertes.fr\/hal-01166110\/file\/Lyubova_Auro_2015.pdf\">AURO 2016<\/a>), by combining visual and proprioceptive information; to track the human partner and their gaze during interactions (<a href=\"http:\/\/www.isir.upmc.fr\/files\/2012ACTI2525.pdf\">BICA 2012<\/a>, <a href=\"http:\/\/www.isir.upmc.fr\/files\/2012ACTI2527.pdf\" target=\"_blank\" rel=\"noopener\">Humanoids 2012<\/a>), by combining audio and visual information; etc.<br \/>\nInstructions for reproducing the experiments: <a href=\"http:\/\/chronos.isir.upmc.fr\/~ivaldi\/macsi\/doc\/learning_kinematics.html\">online documentation<\/a>, <a href=\"http:\/\/eris.liralab.it\/wiki\/UPMC_iCub_project\/MACSi_Software\">code (svn repository)<\/a><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<hr \/>\n<h3>Dealing with uncertainty in object pose estimation<\/h3>\n<table  class=\" table table-hover\" style=\"width: 100%;vertical-align: top\">\n<tbody>\n<tr>\n<td><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/grasping_uncertainty.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-336 size-medium\" src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/grasping_uncertainty-300x120.png\" alt=\"grasping_uncertainty\" width=\"300\" height=\"120\" 
srcset=\"https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/grasping_uncertainty-300x120.png 300w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/grasping_uncertainty-768x308.png 768w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/grasping_uncertainty.png 808w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a>The real world is uncertain and full of noise: like humans, the robot should take this uncertainty into account when computing and executing its actions. We proposed a new grasp-planning method that explicitly accounts for the uncertainty in the object pose estimated by the robot&#8217;s noisy cameras from a sparse point cloud.<\/p>\n<p>References: <a href=\".\/papers\/graspingRAS2014.pdf\" target=\"_blank\" rel=\"noopener\">RAS 2014<\/a><\/td>\n<td style=\"vertical-align: top\"><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<hr \/>\n<h3>Whole-body dynamics estimation and control of physical interaction<\/h3>\n<table  class=\" table table-hover\" style=\"width: 100%;vertical-align: top\">\n<tbody>\n<tr>\n<td><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/icub_idyn.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-331 size-medium\" src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/icub_idyn-300x284.png\" alt=\"icub_idyn\" width=\"300\" height=\"284\" srcset=\"https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/icub_idyn-300x284.png 300w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/icub_idyn-768x728.png 768w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/icub_idyn.png 841w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a>To interact physically, the robot needs a 
reliable estimation of its whole-body dynamics and its contact forces. For robots such as iCub, which are not equipped with joint torque sensors, the main problems are: 1) how to retrieve the interaction forces without dedicated sensors, in the presence of multiple contacts at arbitrary locations; 2) how to estimate the joint torques in such conditions. During my PhD at IIT, we developed the theory of Enhanced Oriented Graphs (EOG), which makes it possible to compute the whole-body robot dynamics online by combining measurements from force\/torque, inertial and tactile sensors. This estimation method was implemented in the library iDyn, included in the iCub software (now significantly improved in iDynTree thanks to the <a href=\"http:\/\/members.loria.fr\/SIvaldi\/projets\/codyco-2013-2017\/\">CoDyCo project<\/a>).<\/p>\n<p>References: <a href=\"http:\/\/www.ausy.tu-darmstadt.de\/uploads\/Site\/EditPublication\/ivaldi2014simulators.pdf\" target=\"_blank\" rel=\"noopener\">survey paper on robotic simulators<\/a>, <a href=\"http:\/\/www.isir.upmc.fr\/files\/2011ACTI2062.pdf\">Humanoids 2011<\/a>, ICRA 2015, <a href=\"http:\/\/link.springer.com\/article\/10.1007\/s10514-012-9291-2\">Autonomous Robots 2012<\/a><\/p>\n<p>Software for library and experiments: <a href=\"http:\/\/wiki.icub.org\/wiki\/ICub_Software_Installation\">code (svn)<\/a><br \/>\niDyn tutorials: <a href=\"http:\/\/wiki.icub.org\/wiki\/IDyn\">manual (wiki)<\/a>, <a href=\"http:\/\/wiki.icub.org\/iCub_documentation\/idyn_introduction.html\">introduction (online doc)<\/a>, <a href=\"http:\/\/wiki.icub.org\/viki\/images\/f\/f8\/Idyn_tutorial_vvv10.pdf\">introduction (slides VVV10)<\/a>, <a href=\"http:\/\/wiki.icub.org\/iCub_documentation\/idyn_one_chain_tutorial.html\">tutorial (online doc)<\/a><\/td>\n<td style=\"vertical-align: top\"><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<hr \/>\n<h3>From Humans to Humanoids<\/h3>\n<table  class=\" table table-hover\" style=\"width: 100%;vertical-align: 
top\">\n<tbody>\n<tr>\n<td style=\"vertical-align: top\"><\/td>\n<td><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/h2h.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-337 size-medium\" src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/h2h-300x189.jpg\" alt=\"h2h\" width=\"300\" height=\"189\" srcset=\"https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/h2h-300x189.jpg 300w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/02\/h2h.jpg 561w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a>Is it possible to transfer some principles of learning and control from humans to robots? Yes, and it is generally a good idea (see <a href=\"http:\/\/www.isir.upmc.fr\/files\/2012ACLI2471.pdf\" target=\"_blank\" rel=\"noopener\">survey paper Humans to Humanoids<\/a>).<br \/>\nI investigated on this question during my PhD in IIT, where I showed that it is possible to use optimal control to transfer\u00a0some optimization criteria typical of human planning movements to robot (<a href=\"http:\/\/www.isir.upmc.fr\/files\/2010ACLI2173.pdf\" target=\"_blank\" rel=\"noopener\">IROS 2010<\/a>). We also\u00a0used stochastic optimal control for controlling the robot movement and its compliance in presence of uncertainties, noise or delays (<a href=\"http:\/\/www.isir.upmc.fr\/files\/2011ACTI2017.pdf\" target=\"_blank\" rel=\"noopener\">IROS 2011<\/a>). 
During my postdoc at ISIR, we took inspiration from developmental psychology and learning in toddlers to develop a cognitive architecture enabling the iCub to learn objects incrementally and multimodally through autonomous exploration and interaction with a human tutor (TAMD 2014).<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n","protected":false},"excerpt":{"rendered":"<p>My research is at the intersection of robotics, learning, control and interaction. I am increasingly interested in developing collaborative robots that are able to interact physically and socially with humans, especially people who are not experts in robotics. I am currently interested in two forms of collaboration: human-humanoid collaboration, in the form [&hellip;]<\/p>\n","protected":false},"author":53,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-329","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/members.loria.fr\/SIvaldi\/wp-json\/wp\/v2\/pages\/329","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/members.loria.fr\/SIvaldi\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/members.loria.fr\/SIvaldi\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/members.loria.fr\/SIvaldi\/wp-json\/wp\/v2\/users\/53"}],"replies":[{"embeddable":true,"href":"https:\/\/members.loria.fr\/SIvaldi\/wp-json\/wp\/v2\/comments?post=329"}],"version-history":[{"count":19,"href":"https:\/\/members.loria.fr\/SIvaldi\/wp-json\/wp\/v2\/pages\/329\/revisions"}],"predecessor-version":[{"id":1267,"href":"https:\/\/members.loria.fr\/SIvaldi\/wp-json\/wp\/v2\/pages\/329\/revisions\/1267"}],"wp:attachment":[{"href":"https:\/\/members.loria.fr\/SIvaldi\/wp-json\/wp\/v2\/media?parent=329"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}