Master Internship (6 months, March-August 2025) with the possibility of pursuing a PhD project
The Effect of Joint Attention on Decision-Making when Playing the Hex Game with a Humanoid Robot
Joint attention (JA) is a fundamental skill of human social cognition [1], allowing individuals to focus on something together with others. Through this capacity, humans can share experiences of the world, coordinate thoughts and behaviors, and cooperate successfully with others. Reduced JA is associated with deficits in social cognition, as observed in conditions such as autism spectrum disorder (ASD) [2].
Building on previous studies (see TOP-JAM [3] and AEGO [4]), the objective of this project is to develop an experimental prototype for JA skill training, consisting of playing the Hex game with a humanoid robot. The prototype will be used to evaluate the following hypotheses:
i) Engaging in JA with the robot through gaze cueing shortens decision-making time.
ii) The game setting and the robot's embodiment can help train JA skills in persons with ASD.
Methodology
A face-to-face interaction scenario is planned, in which participants play the Hex game with the robot through a touch screen. In Hex, the first player has a winning strategy under optimal play, and no draws can occur. The robot will produce behavioral cues announcing its next intended move and then act by mimicking a touch on the screen with its finger. Participants will play the game in two conditions: with and without the robot (i.e., against an AI algorithm alone). Participants' behavior will be recorded with RGBD cameras and a Tobii Pro Glasses eye-tracking system.
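The no-draw property of Hex follows from board connectivity: once the board is full, exactly one player's edges are connected. This can be illustrated with a minimal win check (a sketch in Python; the board representation and function names are hypothetical, not the project's actual implementation):

```python
from collections import deque

# Hexagonal adjacency on a rhombus-shaped n x n board (axial coordinates).
NEIGHBORS = [(-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0)]

def has_won(board, player):
    """Return True if `player` connects their two opposite edges.

    `board` is an n x n list of lists; cells hold 0 (empty), 1, or 2.
    Player 1 connects top to bottom, player 2 connects left to right.
    """
    n = len(board)
    if player == 1:
        starts = [(0, c) for c in range(n) if board[0][c] == 1]
        reached_goal = lambda r, c: r == n - 1
    else:
        starts = [(r, 0) for r in range(n) if board[r][0] == 2]
        reached_goal = lambda r, c: c == n - 1

    # Breadth-first search over same-colored neighboring cells.
    seen, frontier = set(starts), deque(starts)
    while frontier:
        r, c = frontier.popleft()
        if reached_goal(r, c):
            return True
        for dr, dc in NEIGHBORS:
            nr, nc = r + dr, c + dc
            if (0 <= nr < n and 0 <= nc < n
                    and (nr, nc) not in seen and board[nr][nc] == player):
                seen.add((nr, nc))
                frontier.append((nr, nc))
    return False
```

On any completely filled board, this check succeeds for exactly one of the two players, which is why the game cannot end in a draw.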
Missions
An experimental version of the Hex game has already been implemented in Python 3. The program handles communication with the robot, synchronizing the game's screen view with the robot's emulated decision-making and behavior. Hence, the work to be done consists of:
- Programming the robot (several platforms are available, e.g. iCub, Furhat, Tiago, and Unitree G1).
- Estimating JA states by fusing data from the human and the robot.
- Ensuring synchronized data acquisition in the experiment.
- Conducting experiments with a sample of subjects.
- Analyzing results.
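For the synchronized data acquisition mission, one simple approach is to tag events from every source (game, robot, eye-tracker) with a single monotonic clock so that streams can be aligned offline. The sketch below is a hypothetical illustration only; the actual interfaces will depend on the chosen robot platform and middleware:

```python
import queue
import time

class SyncLogger:
    """Tag events from several sources with one shared monotonic clock.

    Per-device clocks drift, so all timestamps are taken relative to a
    single reference on the acquisition machine.
    """

    def __init__(self):
        self._t0 = time.monotonic()
        self._events = queue.Queue()  # thread-safe for concurrent sources

    def log(self, source, payload):
        # One shared clock for all sources.
        self._events.put({
            "t": time.monotonic() - self._t0,
            "source": source,
            "data": payload,
        })

    def dump(self):
        # Drain the queue and return events ordered by timestamp.
        out = []
        while not self._events.empty():
            out.append(self._events.get())
        return sorted(out, key=lambda e: e["t"])

# Hypothetical usage: events from the game screen and the robot.
logger = SyncLogger()
logger.log("game", {"event": "move", "cell": [3, 4]})
logger.log("robot", {"event": "gaze_cue", "target": [3, 4]})
records = logger.dump()
```

In practice the chosen middleware (e.g. ROS topics with header timestamps, or YARP ports) would replace this hand-rolled logger, but the principle of a common time base remains the same.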
Environment
This research is part of an international collaboration between the Kyushu Institute of Technology (Kyutech) in Kitakyushu, Japan, and the University of Lorraine in Nancy, France.
The internship will take place at the LORIA-CNRS lab between March and August 2025 (6 months). A short stay at the Human and Social Intelligence Systems lab (Kyutech, Japan) is envisaged.
Profile
- Deep interest in human-robot interaction, embodiment, cognitive sciences, and bio-inspiration.
- Programming skills (mostly Python; C++ would be a plus).
- Notions of classical geometric modeling and behavior regulation in robotics (you understand what direct/inverse geometric and kinematic models are).
- Knowledge of robotics middleware (e.g. YARP, ROS).
- Ability to communicate and give presentations in English or French.
- Strong academic records.
- Motivation for pursuing a PhD project would be a plus.
How to apply?
Applications are evaluated as they are received, until the position is filled.
If you are interested in applying, please send a motivation letter, a CV, and your most recent academic transcript to Hendry Ferreira Chame at hendry.ferreira-chame@loria.fr
References
[1] Siposova, B., & Carpenter, M. (2019). A new look at joint attention and common knowledge. Cognition, 189, 260-274.
[2] Hyman, S. L., Levy, S. E., Myers, S. M., Kuo, D. Z., Apkon, S., Davidson, L. F., … & Bridgemohan, C. (2020). Identification, evaluation, and management of children with autism spectrum disorder. Pediatrics, 145(1).
[3] Chame, H. F., Clodic, A. & Alami, R. TOP-JAM: A bio-inspired topology-based model of joint attention for human-robot interaction. 2023 IEEE International Conference on Robotics and Automation (ICRA), London, United Kingdom, 2023, pp. 7621-7627, doi: 10.1109/ICRA48891.2023.10160488.
[4] Chame, H. F., & Alami, R. (2024, October). AEGO: Modeling Attention for HRI in Ego-Sphere Neural Networks. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).