Research

PhD

Title

Movement Prediction for Human-Robot Collaboration: From Simple Gestures to Whole-Body Movement

Summary

This thesis lies at the intersection of machine learning and humanoid robotics, under the theme of human-robot interaction and within the field of cobotics (collaborative robotics). It focuses on prediction for non-verbal human-robot interaction, with an emphasis on gestural interaction. The prediction of intention, and the understanding and reproduction of gestures, are therefore the central topics of this thesis.

First, the robot learns gestures by demonstration: a user grabs its arm and makes it perform the gestures to be learned several times. The robot must then be able to reproduce these different movements while generalizing them to adapt to the situation. To do so, using its proprioceptive sensors, it interprets the perceived signals to understand the user's movement, in order to generate similar movements later on.

Second, the robot learns to recognize the intention of the human partner based on the gestures the human initiates. The robot can then perform gestures adapted to the situation and corresponding to the user's expectations. This requires the robot to understand the user's gestures, and to this end different perceptual modalities have been explored. Using proprioceptive sensors, the robot feels the user's gestures through its own body: this is physical human-robot interaction. Using visual sensors, the robot interprets the movement of the user's head. Finally, using external sensors, the robot recognizes and predicts the user's whole-body movement; in that case, the user wears sensors (in our case, a wearable motion-tracking suit by XSens) that transmit their posture to the robot. The coupling of these modalities was also studied.

From a methodological point of view, the learning and recognition of time series (gestures) have been central to this thesis, and two approaches have been developed. The first is based on the statistical modeling of movement primitives (corresponding to gestures): ProMPs (Probabilistic Movement Primitives). The second adds deep learning to the first, using auto-encoders to model whole-body gestures that carry a large amount of information while still allowing prediction in soft real time. Several issues shaped the design of these methods: the prediction of trajectory durations, the reduction of the cognitive and motor load imposed on the user, and the need for speed (soft real time) and accuracy in the predictions.
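To make the first approach concrete, here is a minimal sketch of ProMP-style gesture prediction: each demonstration is encoded as a weight vector over radial basis functions, a Gaussian is fitted over those weights, and conditioning that Gaussian on the first observed samples of a new gesture yields a prediction of its continuation. The toy data, basis functions, and hyperparameters below are illustrative assumptions, not the thesis implementation.

```python
# A minimal ProMP-style sketch, assuming NumPy and synthetic demonstrations.
import numpy as np

def rbf_features(t, n_basis=10, width=0.02):
    """Normalized radial basis functions over a phase t in [0, 1]."""
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)

def fit_weights(trajectory, n_basis=10):
    """One weight vector per demonstration (ridge-regularized least squares)."""
    t = np.linspace(0, 1, len(trajectory))
    Phi = rbf_features(t, n_basis)
    return np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(n_basis),
                           Phi.T @ trajectory)

# Learn a Gaussian over the weights from several (toy) demonstrations ...
demos = [np.sin(np.linspace(0, np.pi, 100)) + 0.05 * np.random.randn(100)
         for _ in range(20)]
W = np.array([fit_weights(d) for d in demos])
mu_w, Sigma_w = W.mean(axis=0), np.cov(W.T)

# ... then condition on the first observed points to predict the rest.
t_full = np.linspace(0, 1, 100)
Phi_full = rbf_features(t_full)
n_obs, sigma_y = 20, 1e-4
Phi_o, y_o = Phi_full[:n_obs], demos[0][:n_obs]
K = Sigma_w @ Phi_o.T @ np.linalg.inv(Phi_o @ Sigma_w @ Phi_o.T
                                      + sigma_y * np.eye(n_obs))
mu_post = mu_w + K @ (y_o - Phi_o @ mu_w)
prediction = Phi_full @ mu_post  # predicted continuation of the gesture
```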

Other projects

During my master's degree, entitled Intelligent and Communicating Systems, I carried out several robotics projects in the neurocybernetics team of the ETIS laboratory.
These projects are based on the idea that actions modify perceptions. Robots thus learn behaviors through links between perceptions and actions, following the enaction paradigm and the PerAc (Perception-Action) model proposed by P. Gaussier.
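As a rough illustration of the PerAc idea (not P. Gaussier's actual model, which is considerably richer), the following sketch associates perceptions with actions while a low-level reflex pathway drives behavior, until the learned links alone can select the action. All dimensions and the stand-in reflex are assumptions.

```python
# A toy PerAc-style sensorimotor association, with assumed dimensions.
import numpy as np

rng = np.random.default_rng(0)
n_percept, n_action = 16, 3          # e.g. coarse visual input, 3 orientations
W = np.zeros((n_action, n_percept))  # associative weights (the learned links)
lr = 0.1

def act(percept, reflex=None):
    """The reflex action wins during learning; learned links take over after."""
    return reflex if reflex is not None else int(np.argmax(W @ percept))

# Learning: while the low-level reflex drives the action, strengthen the
# link between the current perception and that action.
for _ in range(200):
    percept = rng.random(n_percept)
    reflex = int(percept[:n_action].argmax())  # stand-in reflex pathway
    a = act(percept, reflex)
    W[a] += lr * (percept - W[a])              # move weights toward the percept

# After learning, perception alone selects the action through the links.
test = rng.random(n_percept)
print("chosen action:", act(test))
```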

Beginnings of intentionality mechanisms applied to human-robot interaction

This is my main master's project, supervised by Alexandre Pitti and Sofiane Boucenna. In this project, the hydraulic robot Tino learned to:

A. Recognize the area of interest (right/center/left).

https://youtu.be/Y60sgxqlvLs

B. Recognize, within this area, the object of interest in a multimodal way (using vision, audition, and proprioception).

See the video of the multimodal project below, which follows the same algorithm.

Multimodal learning using neural networks

This is a master's project, supervised by Alexandre Pitti and Sofiane Boucenna. In this project, the robot learned to recognize vowels (/a/, /o/, and /i/) by combining audition and vision. A toy sketch of this kind of multimodal fusion follows the video link below.

https://youtu.be/kuJ6c71gnQU
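The sketch below assumes synthetic audio and visual feature vectors rather than the robot's real sensors: a single softmax layer is trained on the concatenated modalities, so that both jointly determine the recognized vowel.

```python
# A toy multimodal vowel classifier; the class templates are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_audio, n_vision, n_class = 8, 8, 3            # /a/, /o/ and /i/
prototypes_a = rng.random((n_class, n_audio))   # hypothetical class templates
prototypes_v = rng.random((n_class, n_vision))

def sample(label):
    """A noisy audio + visual observation of one vowel."""
    a = prototypes_a[label] + 0.1 * rng.standard_normal(n_audio)
    v = prototypes_v[label] + 0.1 * rng.standard_normal(n_vision)
    return np.concatenate([a, v])

# One softmax layer over the concatenated (fused) modalities,
# trained by online gradient descent on the cross-entropy loss.
W = np.zeros((n_class, n_audio + n_vision))
for _ in range(2000):
    y = rng.integers(n_class)
    x = sample(y)
    s = W @ x
    p = np.exp(s - s.max()); p /= p.sum()
    W -= 0.05 * np.outer(p - np.eye(n_class)[y], x)

# Fused prediction: both modalities vote through the shared weights.
x = sample(2)
print("predicted vowel index:", int(np.argmax(W @ x)))
```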

Simulation of a robotic arm, inverse model (with neural networks)

This master's project was supervised by Arnaud Blanchard. The goal was to make the robot learn how to position its arm to hit a desired target. To do that, I used an algorithm based on the Denavit-Hartenberg inverse model during a learning step, and then the least-mean-squares (LMS) algorithm to learn to associate the previous results with the goal position.
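A minimal sketch of this idea, with assumed link lengths and polynomial features: the forward kinematics of a two-link planar arm (in the spirit of the Denavit-Hartenberg convention) generate position/angle pairs by motor babbling during the learning step, and an LMS rule learns the mapping from target position back to joint angles.

```python
# A toy inverse-model learner for a 2-link planar arm; link lengths,
# joint ranges, and features are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(2)
L1, L2 = 1.0, 0.8  # hypothetical link lengths

def forward(q):
    """End-effector position from joint angles (planar kinematic chain)."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def features(p):
    """Simple polynomial features of the target position."""
    x, y = p
    return np.array([1.0, x, y, x * y, x * x, y * y])

# Learning step: babble random joint angles, record (position, angles)
# pairs, and adapt W online with the least-mean-squares rule.
W = np.zeros((2, 6))
lr = 0.02
for _ in range(20000):
    q = rng.uniform([0.2, 0.2], [1.4, 2.0])  # restrict to one arm posture
    p = forward(q)
    err = q - W @ features(p)
    W += lr * np.outer(err, features(p))     # LMS update

# Use the learned inverse model to reach a target.
target = forward(np.array([0.8, 1.0]))
q_hat = W @ features(target)
print("reach error:", np.linalg.norm(forward(q_hat) - target))
```

Restricting the joint ranges keeps the inverse mapping close to single-valued, which is what lets a single linear-in-features model approximate it.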