Research

PhD

Title

Robots that learn to control the physical interaction with their environment and humans

Summary

Robots working in man-made environments must be capable of interacting socially and physically with humans, a skill that calls for online learning, control and adaptation. Controlling the physical interaction between the robot and the human (more precisely, controlling the contact forces), particularly when the robot is moving, is a major challenge. In mobile robots, this problem is usually addressed by planning the robot's movement with the human treated as an obstacle or as a target, then delegating the execution of this "high-level" motion to whole-body controllers, where a mixture of weighted/prioritized tasks, describing the robot's posture, constraints and desired end-effector trajectories, is used to control the robot's movement. Currently, the whole control chain is difficult to deploy on real robotic systems: it requires a lot of tuning and can easily become too complex to handle the interaction with humans. This issue is one among many addressed by my supervisor in the context of the European project CODYCO.
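To make the task-mixing idea concrete, here is a minimal sketch (not the CODYCO controllers themselves) of how weighted kinematic tasks can be stacked into a single least-squares problem for the joint velocities; all Jacobians, weights and dimensions below are illustrative assumptions.

```python
import numpy as np

def weighted_task_velocities(tasks, n_joints, damping=1e-6):
    """Combine weighted kinematic tasks into one joint-velocity command.

    Each task is (J, xdot_des, w): a task Jacobian J (m x n_joints),
    a desired task-space velocity xdot_des (m,), and a scalar weight w.
    The stacked weighted least-squares problem
        min_qdot  sum_i w_i * || J_i @ qdot - xdot_des_i ||^2
    is solved in closed form, with a small damping term for robustness
    near singularities.
    """
    A = np.vstack([np.sqrt(w) * J for J, _, w in tasks])
    b = np.concatenate([np.sqrt(w) * xd for _, xd, w in tasks])
    H = A.T @ A + damping * np.eye(n_joints)
    return np.linalg.solve(H, A.T @ b)

# Hypothetical 3-joint example: an end-effector task and a posture task.
J_ee = np.array([[1.0, 0.5, 0.2],
                 [0.0, 1.0, 0.4]])        # end-effector Jacobian (illustrative)
xdot_ee = np.array([0.1, 0.0])            # desired end-effector velocity
J_post = np.eye(3)                        # posture task acts on all joints
qdot_post = np.zeros(3)                   # stay near the reference posture

qdot = weighted_task_velocities(
    [(J_ee, xdot_ee, 10.0), (J_post, qdot_post, 0.1)], n_joints=3)
print(qdot)
```

The weights express the soft prioritization mentioned above: a large weight on the end-effector task makes it dominate, while the lightly weighted posture task only resolves the remaining redundancy.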

Other projects

During my master's degree, entitled Intelligent and Communicating Systems, I worked on several robotics projects in the Neurocybernetics team of the ETIS laboratory.
These projects are based on the idea that actions modify perceptions. Robots thus learn behaviors through links between perceptions and actions, following the enaction paradigm and the PerAc (Perception-Action) model proposed by P. Gaussier.
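As an illustration of this idea (not P. Gaussier's actual implementation), here is a toy perception-action link in which a hard-wired reflex pathway serves as the teaching signal for a one-layer associative network trained with a Widrow-Hoff (LMS) rule; all names and dimensions are hypothetical.

```python
import numpy as np

class PerAcLink:
    """Toy perception-action associator in the spirit of PerAc (illustrative).

    A reflex pathway provides the "correct" action; a one-layer network
    learns to predict that action from the perception vector, so that the
    learned pathway can progressively take over from the reflex.
    """

    def __init__(self, n_perception, n_action, lr=0.1):
        self.W = np.zeros((n_action, n_perception))
        self.lr = lr

    def predict(self, perception):
        # Learned pathway: linear readout of the perception vector.
        return self.W @ perception

    def learn(self, perception, reflex_action):
        # Widrow-Hoff / LMS update: reduce the gap between the predicted
        # action and the reflex (unconditional) action.
        error = reflex_action - self.predict(perception)
        self.W += self.lr * np.outer(error, perception)
        return error
```

Once the prediction error becomes small, the learned pathway can drive the action in place of the reflex, which is the core intuition behind conditioning in the PerAc model.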

Early intentionality mechanisms applied to human-robot interaction

This is my main master's project, supervised by Alexandre Pitti and Sofiane Boucenna. In this project, the hydraulic robot Tino learned to:

A. Recognize the area of interest (right/center/left).

https://youtu.be/Y60sgxqlvLs

B. Recognize, within this area, the object of interest in a multimodal way (using vision, audition and proprioception).

See the video of the multimodal project below, which follows the same algorithm.

Multimodal learning using neural networks

This is a master's project, supervised by Alexandre Pitti and Sofiane Boucenna. In this project, the robot learned to recognize vowels (/a/, /o/ and /i/) by combining audition and vision.

https://youtu.be/kuJ6c71gnQU
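The feature extraction and network actually used on the robot are not detailed here, so the following is only a sketch of one plausible scheme: audio and visual feature vectors are concatenated (early fusion) and a one-layer network trained with an LMS rule maps them to the three vowel categories. All dimensions are illustrative.

```python
import numpy as np

VOWELS = ["/a/", "/o/", "/i/"]

def one_hot(idx, n=len(VOWELS)):
    v = np.zeros(n)
    v[idx] = 1.0
    return v

class MultimodalVowelNet:
    """One-layer network categorizing vowels from fused audio + visual
    features (illustrative sketch; real feature extraction not shown)."""

    def __init__(self, n_audio, n_vision, lr=0.05):
        self.W = np.zeros((len(VOWELS), n_audio + n_vision))
        self.lr = lr

    def fuse(self, audio_feat, vision_feat):
        # Early fusion: concatenate the two modalities into one vector.
        return np.concatenate([audio_feat, vision_feat])

    def train_step(self, audio_feat, vision_feat, label_idx):
        x = self.fuse(audio_feat, vision_feat)
        error = one_hot(label_idx) - self.W @ x
        self.W += self.lr * np.outer(error, x)  # LMS update

    def recognize(self, audio_feat, vision_feat):
        scores = self.W @ self.fuse(audio_feat, vision_feat)
        return VOWELS[int(np.argmax(scores))]
```

Early fusion is only one design choice; a late-fusion variant would train one network per modality and combine their category scores instead.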

Simulation of a robotic arm: learning an inverse model (with neural networks)

This master's project was supervised by Arnaud Blanchard. The goal was to make the robot learn how to position its arm to hit a desired target. To do this, I used an algorithm based on the Denavit-Hartenberg inverse model during a learning step, and then the least mean squares (LMS) algorithm to learn to associate the previous results with the goal position.
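The exact arm and its Denavit-Hartenberg parameters are not reproduced here, so this sketch uses a toy 2-DOF planar arm instead: the forward model generates (position, angles) samples during the learning step, and an LMS rule then fits a map from target positions back to joint angles. A linear map is only a coarse approximation of the nonlinear inverse kinematics over a restricted workspace, but it illustrates the two-step procedure.

```python
import numpy as np

L1, L2 = 1.0, 0.8  # link lengths of a toy 2-DOF planar arm (illustrative)

def forward_kinematics(q):
    """End-effector position from joint angles: for a 2-link planar arm,
    the Denavit-Hartenberg forward model reduces to this closed form."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

# Learning step: sample random postures, record (position -> angles) pairs.
rng = np.random.default_rng(0)
Q = rng.uniform(0.2, 1.2, size=(500, 2))           # joint-angle samples
X = np.array([forward_kinematics(q) for q in Q])   # resulting positions

# LMS (Widrow-Hoff): iteratively learn a linear map W from target
# position (plus a bias term) to joint angles over the sampled region.
W = np.zeros((2, 3))
lr = 0.05
for _ in range(50):                                # training epochs
    for x, q in zip(X, Q):
        xb = np.append(x, 1.0)                     # add bias term
        W += lr * np.outer(q - W @ xb, xb)

target = np.array([1.2, 0.9])
q_hat = W @ np.append(target, 1.0)                 # inverse-model estimate
print("reached:", forward_kinematics(q_hat), "target:", target)
```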