My research is at the intersection of robotics, learning, control, and interaction. I am increasingly interested in developing collaborative robots that can interact physically and socially with humans, especially people who are not robotics experts. I currently focus on two forms of collaboration: human-humanoid collaboration, in the form of teleoperation; and human-cobot or human-exoskeleton physical collaboration. To collaborate, the robot needs smart, human-aware whole-body control: “human-aware” here means that it must take into account the human’s state, intent, and optimization criteria. The robot needs good models of human behavior, and that is why I am interested in human-robot interaction.
My goal is to “close the loop” on the human, taking their feedback into account in the robot’s learning and control process. I am also interested in questions related to the “social” impact of collaborative robotics technologies on humans, for example robot acceptance and trust.
Teleoperation of humanoid robots
This research started during the AnDy project, where we faced the problem of teaching collaborative whole-body behaviors to the iCub. In short, whole-body teleoperation is the whole-body counterpart of kinesthetic teaching for learning from demonstration. We developed teleoperation of the iCub robot, showing that it is possible to replicate the human operator’s movements even though the two systems have different dynamics, and that the operator can even make the robot fall (Humanoids 2018). We also optimized the humanoid’s whole-body movements, demonstrated by the human, for the robot’s dynamics (Humanoids 2019). Later, we proposed a multimode teleoperation framework that enabled an immersive experience, with the operator watching the robot’s visual feedback inside a VR headset (RAM 2019). To improve the precision of tracking the operator’s desired trajectory, we proposed optimizing the whole-body controller’s parameters so that they are “generic” (RA-L 2020). Finally, we addressed the problem of teleoperating the robot in the presence of large delays (up to 2 seconds), proposing “prescient teleoperation”, a technique that anticipates the human operator using human motion prediction models (arXiv 2021 – under review). We have demonstrated our teleoperation suite on two humanoid robots: iCub and Talos.
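The core idea of anticipating the operator can be sketched very simply: the robot receives the operator’s poses with some latency and extrapolates them across the delay before tracking them. Our work uses learned human motion prediction models; the constant-velocity predictor below is only a minimal stand-in to show the shape of the computation (all names and values here are illustrative).

```python
import numpy as np

def predict_operator_pose(history, dt, delay):
    """Extrapolate the operator's pose across the communication delay.

    history: (N, D) array of the most recent operator poses, received with
    `delay` seconds of latency, sampled every `dt` seconds. A constant-velocity
    model stands in for the learned motion predictors used in the actual work.
    """
    velocity = (history[-1] - history[-2]) / dt   # finite-difference velocity
    return history[-1] + velocity * delay         # pose `delay` seconds ahead

# Usage: operator moving at 0.5 units/s along each axis, 2 s delay.
dt, delay = 0.01, 2.0
t = np.arange(0.0, 1.0, dt)
history = np.outer(0.5 * t, np.ones(3))           # straight-line motion in 3D
predicted = predict_operator_pose(history, dt, delay)
```

With such a prediction, the whole-body controller can track where the operator will plausibly be, rather than where they were two seconds ago.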
Exoskeletons
This line of research started during the AnDy project. We collaborated with the consortium, and particularly with Ottobock GmbH, to validate their passive exoskeleton Paexo for overhead work (TNSRE 2019). Later, during the COVID-19 pandemic, we applied our experience with assistive exoskeletons to help the physicians of the University Hospital of Nancy, who used a passive back-support exoskeleton during the prone positioning maneuvers of COVID-19 patients in the ICU (project ExoTurn, AHFE 2020). We are currently studying active upper-body exoskeletons in the ASMOA project.
Machine learning to improve whole-body control
In the AnDy project, I am pushing this line of research toward robust and safe whole-body control, which we need for human-robot collaboration. For example, in Humanoids 2017 we used trial-and-error algorithms to adapt, in a few trials on the real robot, a QP controller that performed well on the simulated robot but not on the real robot (the reality gap problem).
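The loop behind this adaptation can be sketched as follows: each “trial” runs the QP controller on the hardware with candidate task weights and returns a cost, and a few-trial search improves on the simulation-tuned weights. The Humanoids 2017 work used a more sample-efficient learner than the random search below; this sketch (with an entirely hypothetical cost function standing in for real-robot episodes) only shows the structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def episode_cost(weights):
    """Hypothetical stand-in for one trial on the real robot: run the QP
    controller with these task weights and return a tracking cost. A simple
    quadratic with an offset mimics the reality gap: the optimum on hardware
    (1.5, 0.5) differs from the one found in simulation (1.0, 1.0)."""
    target = np.array([1.5, 0.5])
    return float(np.sum((weights - target) ** 2))

def adapt_on_robot(w_sim, n_trials=20, sigma=0.3):
    """Few-trial search around the simulation-tuned weights: perturb the
    current best candidate, evaluate one episode, keep improvements."""
    best_w, best_c = w_sim, episode_cost(w_sim)
    for _ in range(n_trials):
        w = best_w + rng.normal(0.0, sigma, size=w_sim.shape)  # perturb
        c = episode_cost(w)                                    # one real trial
        if c < best_c:
            best_w, best_c = w, c
    return best_w, best_c

w_sim = np.array([1.0, 1.0])   # weights tuned in simulation
w_adapted, cost = adapt_on_robot(w_sim)
```

The point is the budget: the search must converge in tens of episodes, because each episode is a run on the physical humanoid.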
Human-robot interaction (social & physical)
I am interested in studying these signals during human-robot collaborative tasks, especially when ordinary people without a robotics background interact with the robot. We showed that the robot’s proactivity changes the rhythm of the interaction (Frontiers 2014), and that extroversion and negative attitudes towards robots appear in the dynamics of gaze and speech during collaborative tasks (IJSR 2016). In the CoDyCo project, I focused on the analysis of physical signals (e.g., contact forces, tactile signals) during a collaborative task requiring physical contact between humans and the iCub. In the AnDy project, I am pushing this line of research toward efficient multimodal collaboration between humans and humanoids.
Multimodal learning and deep-learning for robotics
This line of research is now pursued in the AnDy project, for classifying and recognizing human actions from several types of sensors.
Past topics:
Multimodal and active learning
Multimodality was fundamental in many ways: to improve object recognition (ROBIO 2013) and to discriminate the human and robot body from objects (AURO 2016), by combining visual and proprioceptive information; to track the human partner and their gaze during interactions (BICA 2012, Humanoids 2012), by combining audio and visual information; etc. Instructions for reproducing the experiments: online documentation, code (svn repository)
Dealing with uncertainty in object pose estimation
References: RAS 2014
Whole-body dynamics estimation and control of physical interaction
References: survey paper on robotic simulators, Humanoids 2011, ICRA 2015, Autonomous Robots 2012. Software for the library and experiments: code (svn)
From Humans to Humanoids
I investigated this question during my PhD at IIT, where I showed that it is possible to use optimal control to transfer some of the optimization criteria typical of human movement planning to robots (IROS 2010). We also used stochastic optimal control to control the robot’s movement and compliance in the presence of uncertainty, noise, or delays (IROS 2011). Then, during my postdoc at ISIR, we took inspiration from developmental psychology and learning in toddlers to develop a cognitive architecture enabling the iCub to learn objects incrementally and multimodally, through autonomous exploration and interaction with a human tutor (TAMD 2014).