ICRA 2020 Workshop

Shared Autonomy: Learning and Control

May 31st 2020 or June 4th 2020 (TBC) during ICRA 2020, Palais des Congrès, Paris, France

Objectives

Shared autonomy enables humans to effectively and comfortably carry out complex tasks by physically or remotely interacting with robotic systems. Research on shared autonomy now plays a significant role in a wide spectrum of robotic applications, from surgical to industrial, and from single-arm systems to robot swarms. Recently, the availability of datasets and advances in machine learning techniques have made shared autonomy systems more flexible: they can now provide contextual or personalized assistance and seamlessly adapt their level of autonomy. However, this desirable trend raises new challenges for the safety and stability certification of shared autonomy robotic systems, requiring new advanced control methods to implement a continuously evolving division of roles.

This workshop brings together renowned scientists from the machine learning and control communities devoted to shared autonomy robotics research. The goal is to foster discussion about the state of the art in the field from different perspectives, in pursuit of coordinated solutions for building the next generation of shared autonomy robotic systems.

The workshop will consist of invited talks (30 min) and presentations of contributions selected through a call for papers/posters. The workshop will close with a panel discussion.

Program at a glance

Time Talk
8:45 – 9:00 Introduction by the organizers
9:00 – 9:30 Allison Okamura
Human Interface for Teleoperated Object Manipulation with a Soft Growing Robot
9:30 – 10:00 Leonel Rozo
Leveraging domain knowledge for efficient learning and adaptation of robotic skills
10:00 – 10:30 Paolo Robuffo Giordano
Human-assisted robotics
10:30 – 11:00 Coffee Break with hands-on demos
11:00 – 11:30 Arash Ajoudani
Shared authority control of a MObile Collaborative robot Assistant (MOCA)
11:30 – 12:00 Daniele Pucci
Nonlinear Ergonomic Control of Human-Robot and Robot-Robot Collaboration
12:00 – 12:30 Jan Peters
TBA 
12:30 – 13:00 Poster / video teasers
13:00 – 14:00 Lunch Break
14:00 – 14:30 Siddhartha Srinivasa
A Bayesian Perspective of Shared Autonomy
14:30 – 15:00 Cristian Secchi
Shared control for human-robot interaction: an energy based perspective
15:00 – 15:30 Byron Boots
Online Learning for Adaptive Robotic Systems
15:30 – 16:00 Coffee Break with hands-on demos
16:00 – 16:30 Julie Shah
Mixed Initiative Human-Machine Collaboration through Effective Team Communication
16:30 – 17:00 Paolo Rocco
Occlusion-free visual servoing for the shared autonomy teleoperation of dual-arm robots
17:00 – 17:30 Oussama Khatib
Human-Robot Collaboration: Interfaces and Control Architecture
17:30 – 18:30 Panel discussion 

Invited speakers

Allison Okamura (Stanford Univ.)
Title: Human Interface for Teleoperated Object Manipulation with a Soft Growing Robot
Abstract: Soft growing robots are proposed for applications such as complex manipulation tasks or navigation in disaster scenarios. Safe interaction and ease of production promote the use of this technology, but soft robots can be challenging to teleoperate due to their unique degrees of freedom. We propose a human-centered interface that allows users to teleoperate a soft growing robot for manipulation tasks using arm movements. A study involving a pick-and-place manipulation task was conducted to assess the intuitiveness of the interface and the performance of our soft robot. The results show that users completed the task with a success rate of 97%, achieving placement errors below 2 cm on average. These results demonstrate that our body-movement-based interface is an effective method for controlling a soft growing robot manipulator. We believe these results can be further improved by implementing shared autonomy protocols: by allowing the robot to participate in task execution, the operator's role is simplified and the complementary strengths of the human and the robot can be exploited.
Siddhartha Srinivasa (Univ. of Washington)
Title: A Bayesian Perspective of Shared Autonomy
Abstract: Working together inevitably involves understanding each other's intentions. Bayesian reinforcement learning provides an elegant framework for formalizing the trade-off between exploring a partner's latent intent and exploiting that knowledge to perform the task. In this talk I'll present some of our work on efficient Bayesian RL and some of the challenges we face when implementing it in the HRI domain.
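For attendees less familiar with this framing, the core of Bayesian intent inference in shared autonomy can be sketched in a few lines (the notation below is generic and illustrative, not taken from the speaker's talk). The robot maintains a belief b_t over the user's latent goal g, updates it from the observed user input u_t, and then acts on the expectation over goals:

    b_{t+1}(g) \propto p(u_t \mid x_t, g)\, b_t(g), \qquad
    a_t = \arg\max_{a} \; \mathbb{E}_{g \sim b_{t+1}}\!\left[ Q_g(x_t, a) \right]

Here x_t is the state, p(u_t \mid x_t, g) models how a user pursuing goal g would act, and Q_g scores robot actions under each candidate goal; the exploration-exploitation tension mentioned in the abstract arises in how aggressively the robot commits to its current belief.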
Paolo Robuffo Giordano (CNRS/IRISA)
Title: Human-assisted robotics
Abstract: Current and future robotics applications are expected to address increasingly complex tasks in increasingly unstructured environments, in co-existence or co-operation with humans. Achieving full autonomy is clearly a "holy grail" for the robotics community; however, one could easily argue that true full autonomy is, in practice, out of reach for years to come, and in some cases not even desirable. The gap between the cognitive skills of humans (e.g., perception, decision making, general scene understanding) and those of today's most advanced robots is still huge. In most applications involving tasks in unstructured environments, uncertainty, and interaction with the physical world, human assistance is still necessary, and will probably remain so for the next decades. These considerations motivate research into the (large) topic of shared control for complex robotic systems: on the one hand, empower robots with a large degree of autonomy so they can operate effectively in non-trivial environments; on the other hand, keep human users in the loop, in (partial) control of some aspects of the overall robot behavior. In this talk I will review several recent results on novel shared control architectures that blend together diverse fields of robot autonomy (sensing, planning, control, machine learning) to provide a human operator with an easy "interface" for commanding the robot at a high level. Applications to the control of single and multiple mobile robots for remote navigation, and of manipulation systems for remote telemanipulation, will be illustrated.
Oussama Khatib (Stanford Univ.)
Title: Human-Robot Collaboration: Interfaces and Control Architecture
Abstract: TBD
Daniele Pucci (IIT)
Title: Nonlinear Ergonomic Control of Human-Robot and Robot-Robot Collaboration
Abstract: TBD
Arash Ajoudani (IIT)
Title: Shared authority control of a MObile Collaborative robot Assistant (MOCA): From close proximity collaboration to remote loco-manipulation
Abstract: This talk covers the HRI2 laboratory's recent progress in shared authority control of a mobile collaborative robot. MOCA is a new research platform developed at IIT, composed of a lightweight manipulator arm, a Pisa/IIT SoftHand, and a mobile platform driven by four omni-directional wheels. The loco-manipulation behaviour of the robot is governed by a whole-body torque controller that takes into account the causal interactions in such a dynamic system. The planning of loco-manipulation trajectories, in close-proximity collaboration with humans or in remote teleoperation tasks, is achieved through a shared control system that reacts to the human's sensory and intention inputs, physical effort, and the task requirements.
Cristian Secchi (Univ. Modena e Reggio Emilia)
Title: Shared control for human-robot interaction: an energy based perspective
Abstract: Controlling the way energy is exchanged among humans, robots, and the environment they are interacting with is crucial for ensuring a natural and stable behavior of the overall system, despite variations of authority within the team. In this talk, I will illustrate our recent research on energy-based control for human-robot(s) interaction, and I will show how to develop energy-based controllers for flexible and natural shared control of a human-robots team.
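As background for this perspective, a common construction in energy-based shared control is the energy tank (sketched here in generic notation as an illustration, not as the speaker's specific formulation): a virtual storage element with state x_t and energy T(x_t) that budgets the energy the controller may inject into the system:

    T(x_t) = \tfrac{1}{2} x_t^2, \qquad
    \dot{x}_t = \frac{1}{x_t}\left( \sigma\, P_{\mathrm{diss}}(t) - P_{\mathrm{ctrl}}(t) \right)

where P_{\mathrm{diss}} \ge 0 is the power dissipated by the robot (which recharges the tank), P_{\mathrm{ctrl}} is the power the shared controller extracts in order to act, and \sigma \in \{0, 1\} disables recharging when the tank is full. Control actions are permitted only while T(x_t) stays above a safety threshold, which keeps the interconnection passive, and hence stable, no matter how authority shifts between human and robot.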
Leonel Rozo (Bosch center for AI)
Title: Leveraging domain knowledge for efficient learning and adaptation of robotic skills
Abstract: Learning robotic skills from human demonstrations usually allows the robot to execute a task only for a subset of its possible instances. When a new instance involves significant environment changes, failures, and/or high uncertainty, the learned skills need to be adapted on the fly. Such an adaptation process needs to be safe, fast, and data-efficient, as the robot is a physical system interacting with the environment or a human partner, and every single adaptation trial is therefore costly. Data-efficient robot learning has been tackled from different perspectives, for example through the use of hierarchical structures, sample-efficient optimization methods, or the integration of prior knowledge into the skill representation or the reward function. In this talk, I will present how domain knowledge extracted from the geometry of the robot parameters can be exploited to learn, from demonstrations, variable impedance skills or time-varying profiles of manipulability ellipsoids. I will also show how this geometric information is exploited to efficiently adapt robot skills using a geometry-aware Bayesian optimization framework. Finally, I will show current applications in robotic manipulation and physical human-robot interaction that benefit from the presented approaches, and highlight how variable impedance behaviors and manipulability ellipsoids may be exploited in future human-robot collaboration scenarios.
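As background, the geometry-aware ingredient can be sketched as follows (generic notation, illustrative rather than the speaker's exact formulation). Stiffness matrices and manipulability ellipsoids live on the manifold of symmetric positive-definite (SPD) matrices, so one natural choice is a Bayesian optimization kernel built on the geodesic rather than the Euclidean distance:

    k(X, Y) = \theta \exp\!\left( -\frac{d_{\mathrm{geo}}(X, Y)^2}{2 \beta^2} \right), \qquad
    d_{\mathrm{geo}}(X, Y) = \left\| \log\!\left( X^{-1/2}\, Y\, X^{-1/2} \right) \right\|_F

where X and Y are SPD candidates (e.g., stiffness matrices), \log is the matrix logarithm, and \theta, \beta are kernel hyperparameters. Respecting the manifold structure is what allows each costly physical adaptation trial to carry more information.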
Byron Boots (Univ. of Washington & NVIDIA)
Title: Online Learning for Adaptive Robotic Systems
Abstract: There are few things more frustrating than a machine that repeats the same mistake over and over again. To contend with a complex and uncertain world, robots must learn from their mistakes and rapidly adapt to their environment. The main goal of this talk is to illustrate how machine learning can start to address some of the fundamental perceptual and control challenges involved in building intelligent robots. I’ll start by introducing an online learning perspective on robot adaptation that unifies well-known algorithms and suggests new approaches. Along the way, I’ll focus on the use of prior knowledge and expert advice to augment learning: I’ll discuss how imperfect models can be leveraged to rapidly update simple control policies and imitation can accelerate reinforcement learning. I will also show how we have applied these ideas to an autonomous “AutoRally” robot built at Georgia Tech and an off-road racing task that requires impressive sensing, speed, and agility to complete.
Julie Shah (MIT)
Title: Mixed Initiative Human-Machine Collaboration through Effective Team Communication
Abstract: TBD
Paolo Rocco (Politecnico di Milano)
Title: Occlusion-free visual servoing for the shared autonomy teleoperation of dual-arm robots
Abstract: In this talk we discuss the shared control of a dual-arm teleoperation system in which one robot is autonomous and equipped with a camera, while the other is teleoperated. We developed a unified visual servoing controller for occlusion-free teleoperation in dynamic environments. The proposed controller relies on a quadratic programming (QP) formulation that simultaneously takes both robot arms into account. While one arm tracks the input from a user operating a master station, the camera arm relies on feature information to avoid occlusions and keep the teleoperated arm in the field of view. To this end, an occlusion constraint is defined in the image space based on a minimum-distance criterion and made robust against noisy measurements and dynamic environments. A state machine switches the control policy whenever an occlusion might occur. We validate our approach with experiments on a 14-DoF dual-arm ABB YuMi robot equipped with an RGB camera and teleoperated by a 3-DoF Novint Falcon device.
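To make the structure of such a controller concrete, a deliberately simplified version of the optimization can be written as a QP over the joint velocities \dot{q} of both arms (the symbols below are illustrative, not taken from the authors' paper):

    \min_{\dot{q}} \; \| J_{\mathrm{tel}}\, \dot{q} - v_{\mathrm{master}} \|^2 + \lambda \| \dot{q} \|^2
    \quad \text{s.t.} \quad
    \dot{d}(\dot{q}) \ge -\alpha \left( d - d_{\min} \right), \qquad
    \dot{q}_{\min} \le \dot{q} \le \dot{q}_{\max}

where J_{\mathrm{tel}} maps joint velocities to the teleoperated end-effector twist, v_{\mathrm{master}} is the twist commanded at the master station, and d is the image-space distance to the potentially occluding feature; \dot{d} depends linearly on \dot{q} through the image Jacobian, so the first constraint bounds how fast d may shrink toward the threshold d_{\min}, keeping the camera view occlusion-free.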

Organizers

Mario Selvaggio
Univ. Napoli, Italy
mario.selvaggio@unina.it
Marco Cognetti
CNRS, France
marco.cognetti@irisa.fr
Serena Ivaldi
Inria, France
serena.ivaldi@inria.fr
Bruno Siciliano
Univ. Napoli, Italy
siciliano@unina.it

Support from IEEE RAS Technical Committee

  • Human-Robot Interaction & Coordination
  • Robot Learning

Call for papers and contributions

We invite all prospective participants to submit an extended abstract (up to 2 pages) to be presented at the workshop. Manuscripts should use the IEEE ICRA two-column format. Please submit a PDF copy of your manuscript through our EasyChair platform before April 24th. Papers will be selected based on their originality, relevance to the workshop topics, contributions, technical clarity, and presentation. Authors of accepted papers will be invited to submit an extended version to a special issue to be organized. At least one author of each accepted paper must attend the workshop. This workshop is an excellent opportunity to present and discuss ongoing work and get early feedback from participants. All accepted papers will play an active role in the workshop: authors will be asked to give a teaser and a poster presentation. Hands-on demos are highly encouraged.

The topics of interest include but are not limited to:
• Shared autonomous systems
• Modeling and learning human-robot interaction
• Shared and supervisory control
• Human-in-the-loop systems
• Collaborative and assistive robotics
• Telerobotics control and interfaces
• Haptic feedback and guidance
• Robot safety
• Co-adaptation between human and robot
• Intention recognition, skill level/gap evaluation and role allocation
• Learning from demonstration
• Applications in robotic teleoperation, mobile robotics, humanoid robotics, and medical robotics

How to contribute to the workshop

To submit a paper, please follow the EasyChair link: https://easychair.org/cfp/salc2020

Important dates

Submission deadline: April 24th
Acceptance notification: April 30th
Camera-ready deadline: May 6th
Workshop day: May 31st or June 4th (TBC)

Acknowledgments

The workshop is supported by the European Project An.Dy.