Research

My research is at the intersection of robotics, learning, control and interaction. I am increasingly interested in developing collaborative robots that are able to interact physically and socially with humans, especially with people who are not experts in robotics. To collaborate, the robot needs good models of human behavior, which is why I am interested in human-robot interaction experiments focused on finding the relation between individual factors and interaction signals. My goal is to “close the loop” on the human, incorporating their feedback into the robot's learning and control process. I am also interested in questions related to the “social” impact of collaborative robotics technologies on humans, for example robot acceptance and trust.



Human-robot interaction

When the robot interacts with a human, we must look at the whole set of exchanged signals: physical (e.g., exchanged forces) and social (verbal and non-verbal, e.g., gaze, speech, posture). Is it possible to alter the dynamics of such signals by shaping the robot's behavior? Are the production and dynamics of these signals influenced by individual factors?
I am interested in studying these signals during human-robot collaborative tasks, especially when ordinary people without a robotics background interact with the robot. We showed that the robot's proactivity changes the rhythm of the interaction (Frontiers 2014), and that extroversion and negative attitude towards robots appear in the dynamics of gaze and speech during collaborative tasks (IJSR 2016). In the CoDyCo project, I focused on the analysis of physical signals (e.g., contact forces, tactile signals) during a collaborative task requiring physical contact between humans and iCub. In the ANDY project, I am pushing this line of research further, towards efficient multimodal collaboration between humans and humanoids.

Multimodal deep-learning for robotics

I have been using neural networks since my master's thesis, for optimal coding/decoding using game theory (IJCNN 2009), then in my PhD thesis for learning sequences of optimal controllers in a model predictive control framework (IROS 2010). During my postdoc at ISIR, we proposed an architecture based on auto-encoders for multimodal learning of robot skills (ICDL 2014). We applied the architecture to the iCub to solve the problem of learning to draw numbers, combining visual, proprioceptive and auditory information (RAS 2014). We showed that a multimodal framework not only improves classification, but can also compensate for missing modalities during recognition.
This line of research is now pursued in the ANDY project, for classifying and recognizing human actions from several types of sensors.
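To make the idea concrete, here is a minimal sketch of a multimodal auto-encoder in PyTorch. It only illustrates the principle (per-modality encoders fused into a shared latent space, per-modality decoders that can reconstruct a modality missing at the input), not the architecture of the ICDL 2014 / RAS 2014 papers; the modality names and dimensions are made up.

```python
import torch
import torch.nn as nn

class MultimodalAutoencoder(nn.Module):
    def __init__(self, dims, latent_dim=32):
        super().__init__()
        # dims: modality name -> input dimension (sizes here are made up)
        self.encoders = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, latent_dim))
            for m, d in dims.items()})
        self.decoders = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, d))
            for m, d in dims.items()})

    def forward(self, inputs):
        # inputs: dict with the modalities available at test time; missing ones are absent
        codes = [self.encoders[m](x) for m, x in inputs.items()]
        shared = torch.stack(codes).mean(dim=0)  # fuse modality codes into a shared latent
        return {m: dec(shared) for m, dec in self.decoders.items()}

# Example: reconstruct the (missing) auditory stream from vision + proprioception.
model = MultimodalAutoencoder({"vision": 128, "proprioception": 16, "audio": 40})
batch = {"vision": torch.randn(8, 128), "proprioception": torch.randn(8, 16)}
outputs = model(batch)  # also contains a reconstruction for "audio"
```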


Past topics:


Multimodal and active learning

In the MACSI project, we took inspiration from infant development to make the iCub learn incrementally through active exploration and social interaction with a tutor, exploiting its multitude of sensors (TAMD 2014).
Multimodality was fundamental in many ways: to improve object recognition (ROBIO 2013) and to discriminate the human and robot bodies from objects (AURO 2016), by combining visual and proprioceptive information; to track the human partner and their gaze during interactions (BICA 2012, Humanoids 2012), by combining audio and visual information; and so on.
Instructions for reproducing the experiments: online documentation, code (svn repository)

Dealing with uncertainty in object pose estimation

The real world is uncertain and full of noise: like humans, the robot should take this uncertainty into account before acting and when computing its actions. We proposed a new grasp planning method that explicitly takes into account the uncertainty in the object pose, estimated from a sparse point cloud acquired by the robot's noisy cameras.

References: RAS 2014
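The underlying idea can be sketched as follows: rather than scoring each grasp candidate on the single most likely object pose, score it over poses sampled from the estimated pose distribution and keep the most robust one. This is only an illustrative sketch, not the actual planner from the paper; `grasp_quality` is a hypothetical user-supplied scoring function and the Gaussian pose model is an assumption.

```python
import numpy as np

def select_robust_grasp(grasp_candidates, pose_mean, pose_cov, grasp_quality,
                        n_samples=100, seed=0):
    """Pick the grasp with the best expected quality over sampled object poses."""
    rng = np.random.default_rng(seed)
    # Sample plausible object poses around the noisy estimate (Gaussian assumption).
    poses = rng.multivariate_normal(pose_mean, pose_cov, size=n_samples)
    best_grasp, best_score = None, -np.inf
    for grasp in grasp_candidates:
        # Average the (hypothetical) quality score over the sampled poses,
        # instead of evaluating it only at the mean pose.
        score = np.mean([grasp_quality(grasp, pose) for pose in poses])
        if score > best_score:
            best_grasp, best_score = grasp, score
    return best_grasp, best_score
```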


Whole-body dynamics estimation and control of physical interaction

To interact physically, the robot needs a reliable estimate of its whole-body dynamics and of its contact forces. For robots such as iCub, which are not equipped with joint torque sensors, the main problems are: 1) how to retrieve the interaction forces without dedicated sensors, in the presence of multiple contacts at arbitrary locations; 2) how to estimate the joint torques in such conditions. During my PhD at IIT, we developed the theory of Enhanced Oriented Graphs (EOG), which makes it possible to compute the whole-body robot dynamics online by combining measurements from force/torque, inertial and tactile sensors. This estimation method has been implemented in the iDyn library, included in the iCub software (and now significantly improved in iDynTree thanks to the CoDyCo project).

References: survey paper on robotic simulators, Humanoids 2011, ICRA 2015, Autonomous Robots 2012

Software for library and experiments: code (svn)
Tutorials for iDyn: manual (wiki), introduction (online doc), introduction (slides VVV10), tutorial (online doc)
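As a minimal illustration of the principle behind this kind of estimation (not the EOG algorithm implemented in iDyn): once an external contact wrench has been estimated from the force/torque, inertial and tactile measurements, it maps to the joint torques through the transpose of the contact Jacobian, on top of the torques predicted by the dynamic model. The numbers below are made up for a toy 2-DoF arm.

```python
import numpy as np

def joint_torques_with_contact(tau_model, J_contact, wrench_ext):
    """tau_model: (n,) joint torques predicted by the dynamic model;
    J_contact: (6, n) geometric Jacobian at the contact frame;
    wrench_ext: (6,) external wrench estimated from FT/inertial/tactile data."""
    return tau_model + J_contact.T @ wrench_ext

# Toy example: a 10 N push along -z applied at the contact frame.
tau_model = np.array([1.2, 0.4])
J_contact = np.zeros((6, 2))
J_contact[0, 0], J_contact[2, 1] = 0.5, 0.3
wrench_ext = np.array([0.0, 0.0, -10.0, 0.0, 0.0, 0.0])
print(joint_torques_with_contact(tau_model, J_contact, wrench_ext))
```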


From Humans to Humanoids

Is it possible to transfer some principles of learning and control from humans to robots? Yes, and it is generally a good idea (see the survey paper Humans to Humanoids).
I investigated on this question during my PhD in IIT, where I showed that it is possible to use optimal control to transfer some optimization criteria typical of human planning movements to robot (IROS 2010). We also used stochastic optimal control for controlling the robot movement and its compliance in presence of uncertainties, noise or delays (IROS 2011). Then during my postdoc in ISIR, where we took inspiration from developmental psychology and learning in toddlers to develop a cognitive architecture for the iCub to learn in an incremental and multimodal way objects through autonomous exploration and interaction with a human tutor (TAMD 2014).