{"id":815,"date":"2020-01-08T18:12:20","date_gmt":"2020-01-08T16:12:20","guid":{"rendered":"http:\/\/members.loria.fr\/SIvaldi\/?page_id=815"},"modified":"2020-06-19T15:35:02","modified_gmt":"2020-06-19T13:35:02","slug":"icra-2020-workshop","status":"publish","type":"page","link":"https:\/\/members.loria.fr\/SIvaldi\/icra-2020-workshop\/","title":{"rendered":"ICRA 2020 Workshop"},"content":{"rendered":"<h1>Shared Autonomy: Learning and Control<\/h1>\n<p><strong>June 4th 2020 during<\/strong> <a href=\"http:\/\/www.icra2020.org\/\"><strong>ICRA 2020<\/strong><\/a>, <del>Palais de Congres, Paris, France<\/del><\/p>\n<p><span style=\"color: #3366ff\"><strong>=&gt; virtual meeting on zoom, starting at 8:30 AM CEST<br \/>\nLink ZOOM: <a class=\"oajrlxb2 g5ia77u1 qu0x051f esr5mh6w e9989ue4 r7d6kgcz rq0escxv nhd2j8a9 nc684nl6 p7hjln8o kvgmc6g5 cxmmr5t8 oygrvhab hcukyx3x jb3vyjys rz4wbd8a qt6c0cv9 a8nywdso i1ao9s8h esuyzwwr f1sip0of lzcic4wl py34i1dx gpro0wi8\" style=\"color: #3366ff\" role=\"link\" href=\"https:\/\/l.facebook.com\/l.php?u=https%3A%2F%2Foulu.zoom.us%2Fj%2F68686148336%3Ffbclid%3DIwAR3wN5XS9d4yBfQR3D6xPlzASNiZg1HP-1pkJVYtSiDvH9gzwd1JAb-09lI&amp;h=AT1HzFQl6b-2KrEGpE0VWBjoLy7iTsS3M7KwzA9HAANQMzFBxGdQcpPMhtP12xQdJ78EjQrD-6gh-JQwFwKpBRtxeYopzFbZtPeB4C5TWlrEVWFYl_migqh8Pbwkhy_1x8uVSX0&amp;__tn__=-UK-R&amp;c[0]=AT1xphaHt9MCUIb5RcmZDBlL5viTxUh2K-Y5YD1kEAnV16AsG6mCuCp8MWDBrer7-S_ZvX7fh4gnLaST97zswy1WZfjtLLFIwT9NpeHQMO680jPTB9wHcOqtKSeH2mxcObBTUpclWr2LZyvoc6HKvg\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">https:\/\/oulu.zoom.us\/j\/68686148336<\/a><br \/>\nPassword to enter the zoom: @SALC2020 (included the @)<\/strong><\/span><\/p>\n<p><strong>=&gt; On the ICRA2020 slack, look for the <a href=\"https:\/\/icra20.slack.com\/app_redirect?channel=ws22\">channel #ws22 <\/a>\u00a0<\/strong><\/p>\n<h2>Video recordings<\/h2>\n<p>See the Youtube playlist: <a 
href=\"https:\/\/www.youtube.com\/playlist?list=PLaViAl2WLPMecZ8q58U5Nxi8vFwgMLKTs\">https:\/\/www.youtube.com\/playlist?list=PLaViAl2WLPMecZ8q58U5Nxi8vFwgMLKTs<\/a><\/p>\n<p style=\"text-align: center\">\n<!-- iframe plugin v.6.0 wordpress.org\/plugins\/iframe\/ -->\n<iframe loading=\"lazy\" src=\"https:\/\/www.youtube.com\/embed\/videoseries?list=PLaViAl2WLPMecZ8q58U5Nxi8vFwgMLKTs\" width=\"560\" height=\"315\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\" scrolling=\"yes\" class=\"iframe-class\"><\/iframe>\n<\/p>\n<h2>Objectives<\/h2>\n<p>Shared autonomy enables humans to effectively and comfortably realize complex tasks by physically or remotely interacting with robotic systems. Nowadays, research on shared autonomy systems is playing a relevant role in a wide spectrum of robotic applications ranging from surgical to industrial, from single arm systems to robot swarms. Recently, the availability of datasets and the advancement of machine learning techniques have enabled enhanced flexibility of shared autonomy systems that are now capable of providing contextual or personalized assistance and seamless adaption of the autonomy level. However, this desirable trend raises new challenges for safety and stability certification of shared autonomy robotic systems, thus requiring new advanced control methods to implement the continuously evolving division of roles.<\/p>\n<p><strong>This workshop brings together renowned scientists from the machine learning and control communities devoted to shared autonomy robotics research. The goal is to foster the discussion about the state-of-art in the research field from different perspectives in pursuit of coordinated solutions for building the next generation of shared autonomy robotic systems.<\/strong><\/p>\n<p><strong>\u00a0<\/strong>The workshop will consists of invited talks (30 min) and presentations from selected contributions after a call for papers\/posters. 
The workshop will close with a panel discussion.<\/p>\n<h2>Program at a glance<\/h2>\n<table  class=\" table table-hover\" >\n<tbody>\n<tr>\n<td><strong>Time (CEST)<\/strong><\/td>\n<td><strong>Talk<\/strong><\/td>\n<\/tr>\n<tr>\n<td><span style=\"color: #800000\"><strong>8:30 &#8211; 9:00<\/strong><\/span><\/td>\n<td><strong><span style=\"color: #800000\">Introduction by the organizers<\/span><br \/>\n<\/strong><\/td>\n<\/tr>\n<tr>\n<td>9:00 \u2013 9:30<\/td>\n<td><b>Oussama Khatib<\/b><br \/>\n<span style=\"color: #339966\"><strong>Human-Robot Collaboration: Interfaces and Control Architecture<\/strong><\/span><\/td>\n<\/tr>\n<tr>\n<td>9:30 \u2013 10:00<\/td>\n<td><b>Leonel Rozo<\/b><br \/>\n<strong><span style=\"color: #339966\">Leveraging domain knowledge for efficient learning and adaptation of robotic skills<\/span><\/strong><\/td>\n<\/tr>\n<tr>\n<td>10:00 \u2013 10:30<\/td>\n<td><b>Paolo Robuffo Giordano<\/b><br \/>\n<strong><span style=\"color: #339966\">Human-assisted robotics<\/span><\/strong><\/td>\n<\/tr>\n<tr>\n<td><span style=\"color: #800000\"><strong>10:30 &#8211; 11:00<\/strong><\/span><\/td>\n<td><span style=\"color: #800000\"><strong>Coffee Break\u00a0<\/strong><\/span><\/td>\n<\/tr>\n<tr>\n<td>11:00 \u2013 11:30<\/td>\n<td><b>Arash Ajoudani<\/b><br \/>\n<span style=\"color: #339966\"><strong>Shared authority control of a MObile Collaborative robot Assistant (MOCA)<\/strong><\/span><\/td>\n<\/tr>\n<tr>\n<td>11:30 \u2013 12:00<\/td>\n<td><b>Daniele Pucci<\/b><br \/>\n<strong><span style=\"color: #339966\">Wearable Technologies, Estimation, and Control for Human Robot Collaboration with an Application to COVID-19<\/span><\/strong><\/td>\n<\/tr>\n<tr>\n<td>12:00 \u2013 12:30<\/td>\n<td><b><del>Jan Peters<\/del> cancelled<\/b><br \/>\n<del><span style=\"color: #339966\"><strong>Motor skill learning\u00a0<\/strong><\/span><\/del><\/td>\n<\/tr>\n<tr>\n<td>12:30 \u2013 13:00<\/td>\n<td><b>Poster \/ video teasers pt.1<\/b><\/td>\n<\/tr>\n<tr>\n<td><span 
style=\"color: #800000\"><strong>13:00 &#8211; 14:00<\/strong><\/span><\/td>\n<td><span style=\"color: #800000\"><strong>Lunch Break<\/strong><\/span><\/td>\n<\/tr>\n<tr>\n<td>14:00 \u2013 14:30<\/td>\n<td><b>Poster \/ video teasers pt.2<\/b><\/td>\n<\/tr>\n<tr>\n<td>14:30 \u2013 15:00<\/td>\n<td><b>Cristian Secchi<\/b><br \/>\n<strong><span style=\"color: #339966\">Shared control for human-robot interaction: an energy based perspective<\/span><\/strong><\/td>\n<\/tr>\n<tr>\n<td>15:00 \u2013 15:30<\/td>\n<td><b>Paolo Rocco<\/b><br \/>\n<span style=\"color: #339966\"><strong>Occlusion-free visual servoing for the shared autonomy teleoperation of dual-arm robots<\/strong><\/span><\/td>\n<\/tr>\n<tr>\n<td><span style=\"color: #800000\"><strong>15:30 \u2013 16:30<\/strong><\/span><\/td>\n<td><span style=\"color: #800000\"><strong>Coffee Break\u00a0<\/strong><\/span><\/td>\n<\/tr>\n<tr>\n<td>16:30 \u2013 17:00<\/td>\n<td><strong>Allison Okamura<br \/>\n<span style=\"color: #339966\">Human Interface for Teleoperated Object Manipulation with a Soft Growing Robot<\/span><\/strong><\/td>\n<\/tr>\n<tr>\n<td>17:00 \u2013 17:30<\/td>\n<td><b>Byron Boots<\/b><br \/>\n<strong><span style=\"color: #339966\">Online Learning for Adaptive Robotic Systems<\/span><\/strong><\/td>\n<\/tr>\n<tr>\n<td>17:30 \u2013 18:00<\/td>\n<td><b>Tapomayukh Bhattacharjee<\/b><br \/>\n<span style=\"color: #339966\"><strong>Robot assisted feeding: exploring autonomy with perceived errors<\/strong><\/span><\/td>\n<\/tr>\n<tr>\n<td><strong>18:00 \u2013 18:30<\/strong><\/td>\n<td><strong>Panel discussion\u00a0<\/strong><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>Invited speakers<\/h2>\n<table  class=\" table table-hover\" >\n<tbody>\n<tr>\n<td width=\"15%\"><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/A.Okamura-small.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-833 size-thumbnail\" 
src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/A.Okamura-small-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/A.Okamura-small-150x150.jpg 150w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/A.Okamura-small-60x60.jpg 60w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/a><\/td>\n<td><a href=\"https:\/\/charm.stanford.edu\/Main\/AllisonOkamura\/\"><strong>Allison Okamura (Stanford Univ.)<\/strong><\/a><br \/>\nTitle: <strong>Human Interface for Teleoperated Object Manipulation with a Soft Growing Robot<\/strong><br \/>\nAbstract: Soft growing robots are proposed for use in applications such as complex manipulation tasks or navigation in disaster scenarios. Safe interaction and ease of production promote the usage of this technology, but soft robots can be challenging to teleoperate due to their unique degrees of freedom. We propose a human-centered interface that allows users to teleoperate a soft growing robot for manipulation tasks using arm movements. A study was conducted to assess the intuitiveness of the interface and the performance of our soft robot, involving a pick-and-place manipulation task. The results show that users completed the task with a success rate of 97%, achieving placement errors below 2 cm on average. These results demonstrate that our body-movement-based interface is an effective method for control of a soft growing robot manipulator. We believe that these results may be further improved with the implementation of shared autonomy protocols. 
By allowing the robot to participate in the execution of the task, the role of the human operator will be simplified and the different strengths of the human and the robot can be exploited.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<table  class=\" table table-hover\" >\n<tbody>\n<tr>\n<td width=\"15%\"><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/05\/tapo.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-915\" src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/05\/tapo-300x300.png\" alt=\"\" width=\"300\" height=\"300\" srcset=\"https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/05\/tapo-300x300.png 300w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/05\/tapo-150x150.png 150w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/05\/tapo-60x60.png 60w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/05\/tapo.png 453w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/td>\n<td><strong><a href=\"http:\/\/www.tapomayukh.com\/\">Tapomayukh Bhattacharjee<\/a> (Univ. of Washington)<\/strong><br \/>\nTitle: <strong>Robot-assisted Feeding: Exploring Autonomy with Perceived Errors<\/strong><br \/>\nAbstract: Robot-assisted feeding can potentially enable people with upper-body mobility impairments to eat independently. Eating free-form food is one of the most intricate manipulation tasks we perform in our daily lives, and manual teleoperation of such an intricate task which involves controlling a high-DoF robot arm with a low-DoF user interface can be very challenging and time consuming. We focus on an autonomous solution but successful robot-assisted feeding depends on reliable bite acquisition of hard-to-model deformable food items and easy bite transfer. 
Using insights from human studies, I will showcase algorithms and technologies that leverage multiple sensing modalities to perceive varied food item properties and determine successful strategies for bite acquisition and transfer. However, successful autonomous assistance in the real world is still challenging because of the possibility of errors in uncertain and unstructured environments. Through a study with people with mobility limitations, we found no clear preference for higher levels of autonomy. Interestingly, when participants were grouped according to their mobility limitations, ratings from those with higher mobility limitations were correlated with lower expectations of robot performance and with higher levels of autonomy, even with perceived errors.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<table  class=\" table table-hover\" >\n<tbody>\n<tr>\n<td width=\"15%\"><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/prg.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-837 size-thumbnail\" src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/prg-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/prg-150x150.jpg 150w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/prg-60x60.jpg 60w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/a><\/td>\n<td><a href=\"https:\/\/team.inria.fr\/rainbow\/team\/prg\/\"><strong>Paolo Robuffo Giordano (CNRS\/IRISA)<\/strong><\/a><br \/>\nTitle: <strong>Human-assisted robotics<\/strong><br \/>\nAbstract: Present and future robotics applications are expected to address more and more complex tasks in increasingly unstructured environments, in co-existence or co-operation with humans. 
Achieving full autonomy is clearly a \u201choly grail\u201d for the robotics community; however, one could easily argue that true full autonomy is, in practice, out of reach for years to come, and in some cases not even desirable. The gap between the cognitive skills (e.g., perception, decision making, general scene understanding) of humans and those of today's most advanced robots is still huge. In most applications involving tasks in unstructured environments, uncertainty, and interaction with the physical world, human assistance is still necessary, and will probably remain so for decades. These considerations motivate research efforts into the (large) topic of shared control for complex robotic systems: on the one hand, empowering robots with a large degree of autonomy so that they can effectively operate in non-trivial environments; on the other hand, keeping human users in the loop with (partial) control over some aspects of the overall robot behavior. In this talk I will review several recent results on novel shared-control architectures that blend together diverse fields of robot autonomy (sensing, planning, control, machine learning) to provide a human operator with an easy \u201cinterface\u201d for commanding the robot at a high level. 
Applications to the control of single\/multiple mobile robots for remote navigation, and of manipulation systems for remote telemanipulation will be illustrated.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<table  class=\" table table-hover\" >\n<tbody>\n<tr>\n<td width=\"15%\"><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/Khatib.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-838 size-thumbnail\" src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/Khatib-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/Khatib-150x150.jpg 150w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/Khatib-60x60.jpg 60w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/a><\/td>\n<td><a href=\"https:\/\/cs.stanford.edu\/groups\/manips\/ok.html\"><strong>Oussama Khatib (Stanford Univ.)<\/strong><\/a><br \/>\nTitle: <strong>Human-Robot Collaboration: Interfaces and Control Architecture<\/strong><br \/>\nAbstract: TBD<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<table  class=\" table table-hover\" >\n<tbody>\n<tr>\n<td width=\"15%\"><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/pucci.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-839 size-thumbnail\" src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/pucci-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/pucci-150x150.jpg 150w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/pucci-60x60.jpg 60w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/pucci.jpg 250w\" 
sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/a><\/td>\n<td><a href=\"https:\/\/www.iit.it\/people\/daniele-pucci\"><strong>Daniele Pucci (IIT)<\/strong><\/a><br \/>\nTitle: <strong>Wearable Technologies, Estimation, and Control for Human Robot Collaboration with an Application to COVID-19<\/strong><br \/>\nAbstract: This talk overviews the technologies derived for estimating and controlling human robot collaboration (HRC) scenarios developed in the framework of the H2020 European project An.Dy. Furthermore, it will be presented how the technologies for HRC have been used to derive solutions that help maintain social distancing** and monitor COVID-19 symptoms.**[<em>Note from the organizers<\/em>: &#8220;Social distancing&#8221; is a widely used term in relation to COVID-19 to indicate that people should keep meters away from the others. Many researchers pointed out that\u00a0it may be sending the wrong message and contributing to\u00a0social isolation. &#8220;Physical distancing&#8221; sounds more appropriate term, it simplifies the concept with the emphasis on keeping physical distance from others.]<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<table  class=\" table table-hover\" >\n<tbody>\n<tr>\n<td width=\"15%\"><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/arash.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-840 size-thumbnail\" src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/arash-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/arash-150x150.jpg 150w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/arash-60x60.jpg 60w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/arash.jpg 250w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" 
\/><\/a><\/td>\n<td><a href=\"https:\/\/www.iit.it\/people\/arash-ajoudani\"><strong>Arash Ajoudani (IIT)<\/strong><\/a><br \/>\nTitle: <strong>Shared authority control of a MObile Collaborative robot Assistant (MOCA): From close proximity collaboration to remote loco-manipulation<\/strong><br \/>\nAbstract: This talk covers the HRI2 laboratory&#8217;s recent progress in shared authority control of a mobile collaborative robot. MOCA is a new research platform developed at IIT, which is composed by a lightweight manipulator arm, a Pisa\/IIT SoftHand, and a mobile platform driven by four Omni-directional wheels. The loco-manipulation behaviour of the robot is controlled by a whole-body torque controller, which takes into account the causal interactions in such a dynamic system. The planning of the loco-manipulation trajectories in close proximity collaboration with humans or remote teleoperation tasks is achieved using a shared control system, which reacts based on the human sensory and intentional inputs and physical effort, and the task requirements.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<table  class=\" table table-hover\" >\n<tbody>\n<tr>\n<td width=\"15%\"><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/ars-control-cristian-secchi.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-841 size-thumbnail\" src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/ars-control-cristian-secchi-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/ars-control-cristian-secchi-150x150.jpg 150w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/ars-control-cristian-secchi-60x60.jpg 60w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/a><\/td>\n<td><a 
href=\"https:\/\/www.arscontrol.org\/team\/cristian-secchi\/\"><strong>Cristian Secchi (Univ. Modena e Reggio Emilia)<\/strong><\/a><br \/>\nTitle: <strong>Shared control for human-robot interaction: an energy based perspective<\/strong><br \/>\nAbstract: Controlling the way energy is exchanged between among humans, robots and the environment they are interacting with is crucial for ensuring a natural and stable behavior of the overall system, despite of variations of authority in the team. In this talk, I will illustrate our recent research on energy based control for human robot(s) interaction and I will show how to develop energy based controller for a flexible and natural shared control of a human robots team.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<table  class=\" table table-hover\" >\n<tbody>\n<tr>\n<td width=\"15%\"><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/rozo.jpeg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-842 size-thumbnail\" src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/rozo-150x150.jpeg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/rozo-150x150.jpeg 150w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/rozo-60x60.jpeg 60w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/rozo.jpeg 225w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/a><\/td>\n<td><a href=\"https:\/\/leonelrozo.weebly.com\/\"><strong>Leonel Rozo (Bosch center for AI)<\/strong><\/a><br \/>\nTitle: <strong>Leveraging domain knowledge for efficient learning and adaptation of robotic skills<\/strong><br \/>\nAbstract: Robotic skills learning from human demonstrations usually allows the robot to execute tasks for a subset of different task instances. 
However, when this instance involves significant environment changes, failure and\/or high uncertainty, the learned skills need to be adapted on the fly. Such an adaptation process needs to be safe, fast and data-efficient as the robot is a physical system interacting with the environment or a human partner, and therefore every single adaptation trial is costly. Data-efficient robot learning has been tackled from different perspectives, for example, through the use of hierarchical structures, using sample-efficient optimization methods, or by integrating prior knowledge in the skill representation or the reward function. In this talk, I will present how domain knowledge extracted from the geometry of the robot parameters can be exploited to learn, from demonstrations, variable impedance skills or time-varying profiles of manipulability ellipsoids. I will also show how this geometry information is exploited to efficiently adapt robot skills using a geometry-aware Bayesian optimization framework. Finally, I will show current applications in robotic manipulation and physical human-robot interaction that benefit from the presented approaches. 
Future research will highlight how variable impedance behaviors and manipulability ellipsoids may be exploited in human-robot collaboration scenarios.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<table  class=\" table table-hover\" >\n<tbody>\n<tr>\n<td width=\"15%\"><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/boots.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-843 size-thumbnail\" src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/boots-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/boots-150x150.jpg 150w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/boots-60x60.jpg 60w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/a><\/td>\n<td><a href=\"https:\/\/homes.cs.washington.edu\/~bboots\/\"><strong>Byron Boots (Univ. of Washington &amp; NVIDIA)<\/strong><\/a><br \/>\nTitle: <strong>Online Learning for Adaptive Robotic Systems<\/strong><br \/>\nAbstract: There are few things more frustrating than a machine that repeats the same mistake over and over again. To contend with a complex and uncertain world, robots must learn from their mistakes and rapidly adapt to their environment. The main goal of this talk is to illustrate how machine learning can start to address some of the fundamental perceptual and control challenges involved in building intelligent robots. I\u2019ll start by introducing an online learning perspective on robot adaptation that unifies well-known algorithms and suggests new approaches. Along the way, I\u2019ll focus on the use of prior knowledge and expert advice to augment learning: I\u2019ll discuss how imperfect models can be leveraged to rapidly update simple control policies and imitation can accelerate reinforcement learning. 
I will also show how we have applied these ideas to an autonomous \u201cAutoRally\u201d robot built at Georgia Tech and an off-road racing task that requires impressive sensing, speed, and agility to complete.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<table  class=\" table table-hover\" >\n<tbody>\n<tr>\n<td width=\"15%\"><a href=\"https:\/\/www.ias.informatik.tu-darmstadt.de\/uploads\/Member\/BioJanPeters\/OfficialPhoto.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-845 size-full\" src=\"https:\/\/www.ias.informatik.tu-darmstadt.de\/uploads\/Member\/BioJanPeters\/OfficialPhoto.jpg\" alt=\"\" width=\"147\" height=\"141\" \/><\/a><\/td>\n<td><del><a href=\"https:\/\/www.ias.informatik.tu-darmstadt.de\/Member\/JanPeters\"><strong>Jan Peters (TU Darmstadt)<\/strong><\/a><\/del> cancelled<br \/>\n<del>Title: <strong>Motor Skill Learning<\/strong><\/del><br \/>\n<del>Abstract: TBA<\/del><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<table  class=\" table table-hover\" >\n<tbody>\n<tr>\n<td width=\"15%\"><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/Paolo-Rocco.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-845 size-full\" src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/Paolo-Rocco.jpg\" alt=\"\" width=\"147\" height=\"141\" \/><\/a><\/td>\n<td><a href=\"http:\/\/home.deib.polimi.it\/rocco\/\"><strong>Paolo Rocco (Politecnico di Milano)<\/strong><\/a><br \/>\nTitle: <strong>Occlusion-free visual servoing for the shared autonomy teleoperation of dual-arm robots<\/strong><br \/>\nAbstract: In this talk we discuss the shared control of a dual-arm teleoperation system, where one robot is autonomous and equipped with a camera, while the other is teleoperated. We developed a unified visual servoing controller for occlusion-free teleoperation in dynamic environments. 
The proposed controller relies on a quadratic programming optimization formulation that simultaneously takes into account both robot arms. While one arm is tasked with tracking the input from a user operating a master station, the camera relies on feature information to avoid occlusion and keep the teleoperated arm in the field of view. To this end, an occlusion constraint is defined in the image space based on the minimum distance criterion and made robust against noisy measurements and dynamic environments. A state machine is used to switch the control policy whenever an occlusion might occur. We validate our approach with experiments on a 14 d.o.f. dual-arm ABB YuMi robot equipped with a red-green-blue (RGB) camera and teleoperated by a 3 d.o.f. Novint Falcon device.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>Organizers<\/h2>\n<table  class=\" table table-hover\" >\n<tbody>\n<tr>\n<td><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/selvaggio1.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-824 size-thumbnail aligncenter\" src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/selvaggio1-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/selvaggio1-150x150.jpg 150w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/selvaggio1-300x300.jpg 300w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/selvaggio1-60x60.jpg 60w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/selvaggio1.jpg 400w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/a><\/td>\n<td><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/cognetti.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-825 
size-thumbnail aligncenter\" src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/cognetti-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/cognetti-150x150.jpg 150w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/cognetti-60x60.jpg 60w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/a><\/td>\n<td><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2015\/06\/3786602.jpeg\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-220 size-thumbnail aligncenter\" src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2015\/06\/3786602-150x150.jpeg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2015\/06\/3786602-150x150.jpeg 150w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2015\/06\/3786602-300x300.jpeg 300w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2015\/06\/3786602-60x60.jpeg 60w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2015\/06\/3786602.jpeg 460w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/a><\/td>\n<td><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/bruno_dark.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-826 size-thumbnail aligncenter\" src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/bruno_dark-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/bruno_dark-150x150.jpg 150w, 
https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/01\/bruno_dark-60x60.jpg 60w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/a><\/td>\n<\/tr>\n<tr>\n<td style=\"text-align: center\"><a href=\"http:\/\/wpage.unina.it\/mario.selvaggio\/\">Mario Selvaggio<\/a><br \/>\nUniv. Napoli, Italy<br \/>\nmario.selvaggio@unina.it<\/td>\n<td style=\"text-align: center\"><a href=\"https:\/\/team.inria.fr\/rainbow\/marco-cognetti\">Marco Cognetti<\/a><br \/>\nCNRS, France<br \/>\nmarco.cognetti@irisa.fr<\/td>\n<td style=\"text-align: center\"><a href=\"https:\/\/members.loria.fr\/SIvaldi\/\">Serena Ivaldi<\/a><br \/>\nInria, France<br \/>\nserena.ivaldi@inria.fr<\/td>\n<td style=\"text-align: center\"><a href=\"http:\/\/wpage.unina.it\/sicilian\/\">Bruno Siciliano<\/a><br \/>\nUniv. Napoli, Italy<br \/>\nsiciliano@unina.it<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>Support from IEEE RAS Technical Committees<\/h2>\n<ul>\n<li>Human-Robot Interaction &amp; Coordination<\/li>\n<li>Robot Learning<\/li>\n<\/ul>\n<h2>Call for papers and contributions<\/h2>\n<p>We would like to invite all prospective participants to submit an extended abstract (up to 2 pages) to be presented at the workshop. Manuscripts should use the IEEE ICRA two-column format. Please submit a PDF copy of your manuscript through our <a href=\"https:\/\/easychair.org\/conferences\/?conf=salc2020\"><strong>EasyChair<\/strong><\/a> platform <strong>before April 24th<\/strong>. Papers will be selected based on their originality, relevance to the workshop topics, contributions, technical clarity, and presentation. Authors of the accepted papers will be invited to submit an extended paper to a special issue to be organized. For each accepted paper, at least one author must attend the workshop. This workshop is an excellent opportunity to present and discuss your ongoing work and get early feedback from the participants. 
All accepted papers will play an active role in the workshop: their authors will be asked to give both a teaser and a poster presentation. Hands-on demos are highly encouraged.<\/p>\n<p>The topics of interest include but are not limited to:<br \/>\n\u2022 Shared autonomous systems<br \/>\n\u2022 Modeling and learning human-robot interaction<br \/>\n\u2022 Shared and supervisory control<br \/>\n\u2022 Human-in-the-loop systems<br \/>\n\u2022 Collaborative and assistive robotics<br \/>\n\u2022 Telerobotics control and interfaces<br \/>\n\u2022 Haptic feedback and guidance<br \/>\n\u2022 Robot safety<br \/>\n\u2022 Co-adaptation between human and robot<br \/>\n\u2022 Intention recognition, skill level\/gap evaluation and role allocation<br \/>\n\u2022 Learning from demonstration<br \/>\n\u2022 Applications in robotic teleoperation, mobile robotics, humanoid robotics, and medical robotics<\/p>\n<p><strong>How to contribute to the workshop<\/strong><\/p>\n<p>To submit your paper, please use the <span style=\"color: #000099\">EasyChair link<a href=\"https:\/\/easychair.org\/conferences\/?conf=salc2020\">: https:\/\/easychair.org\/cfp\/salc2020<\/a><\/span><\/p>\n<p><strong>Important dates<\/strong><\/p>\n<p>Submission deadline: <del>April 24th<\/del> <strong>May 20th<\/strong><br \/>\nAcceptance notification: <del>April 30th<\/del> <strong>May 27th<\/strong><br \/>\nCamera-ready deadline: <del>May 6th<\/del> <strong>May 30th<\/strong><br \/>\nWorkshop day: June 4th<\/p>\n<h2>Selected contributed papers<\/h2>\n<ul>\n<li>Rahaf Rahal, Giulia Matarese, Marco Gabiccini, Alessio Artoni, Domenico Prattichizzo, Paolo Robuffo Giordano and Claudio Pacchierotti. <strong>Haptic shared control for enhanced user comfort in robotic telemanipulation<\/strong> <a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/05\/SALC2020_paper_3c.pdf\">SALC2020_paper_3c<\/a><\/li>\n<li>Yu She. 
<strong>Control and Planning of Variable Stiffness Links for Inherently Safe Physical Human Robots Interaction<\/strong> <a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/05\/SALC2020_paper_4.pdf\">SALC2020_paper_4<\/a><\/li>\n<li>Marco Ferro, Claudio Gaz and Marilena Vendittelli. <strong>A framework for sensorless identification of needle-tissue interaction forces in robot-assisted biopsies<\/strong>\u00a0<a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/05\/SALC2020_paper_15c.pdf\">SALC2020_paper_15c<\/a><\/li>\n<li>Santiago Iregui, Cristian Vergara, Joris De Schutter and Erwin Aertbeli\u00ebn. <strong>Generating Reactive Virtual Guidance Fixtures for Assisted Telemanipulation Tasks<\/strong> <a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/06\/SALC2020_paper_9c.pdf\">SALC2020_paper_9c<\/a><\/li>\n<li>Reut Nomberg and Ilana Nisky. <strong>Human-in-the-loop stability analysis of haptic rendering of a virtual stiffness with delay \u2013 the effect of arm impedance<\/strong> <a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/05\/SALC2020_paper_5.pdf\">SALC2020_paper_5<\/a><\/li>\n<li>Yoojin Oh, Shao-Wen Wu, Marc Toussaint and Jim Mainprice. <strong>Natural Gradient Shared Control<\/strong> <a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/05\/SALC2020_paper_6.pdf\">SALC2020_paper_6<\/a><\/li>\n<li>Marco Tognon, Rachid Alami and Bruno Siciliano. <strong>Human Physical Guidance by a Tethered Aerial Vehicle<\/strong> <a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/05\/SALC2020_paper_8c.pdf\">SALC2020_paper_8c<\/a><\/li>\n<li>Jan Peters, Bani Anvari and Helge Wurdemann. 
<strong>Towards an Intelligent Driver Seat for Safe Autonomy Level Transitions in Autonomous Cars<\/strong> <a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/05\/SALC2020_paper_1c.pdf\">SALC2020_paper_1c<\/a><\/li>\n<li>Rafael Papallas, Anthony G. Cohn and Mehmet R. Dogar. <strong>Optimization-based Motion Planning with Human in The Loop for Non-Prehensile Manipulation<\/strong> <a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/05\/SALC2020_paper_12.pdf\">SALC2020_paper_12<\/a><\/li>\n<li>Connor Brooks and Daniel Szafir. <strong>Perspective Taking for Shared Control<\/strong> <a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/05\/SALC2020_paper_7.pdf\">SALC2020_paper_7<\/a><\/li>\n<li>Oliver Roesler. <strong>Enhancing Unsupervised Language Grounding through Online Learning<\/strong> <a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2020\/05\/SALC2020_paper_14.pdf\">SALC2020_paper_14<\/a><\/li>\n<\/ul>\n<h2>Acknowledgments<\/h2>\n<p>The workshop is supported by the <a href=\"https:\/\/andy-project.eu\/\">European Project An.Dy<\/a>.<\/p>\n<p><a href=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/07\/AnDy-banner.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-448\" src=\"http:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/07\/AnDy-banner-212x300.png\" alt=\"\" width=\"212\" height=\"300\" srcset=\"https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/07\/AnDy-banner-212x300.png 212w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/07\/AnDy-banner-768x1086.png 768w, https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/07\/AnDy-banner-724x1024.png 724w, 
https:\/\/members.loria.fr\/SIvaldi\/wp-content\/blogs.dir\/70\/files\/sites\/70\/2017\/07\/AnDy-banner.png 1240w\" sizes=\"auto, (max-width: 212px) 100vw, 212px\" \/><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Shared Autonomy: Learning and Control June 4th 2020 during ICRA 2020, Palais de Congres, Paris, France =&gt; virtual meeting on zoom, starting at 8:30 AM CEST Link ZOOM: https:\/\/oulu.zoom.us\/j\/68686148336 Password to enter the zoom: @SALC2020 (included the @) =&gt; On the ICRA2020 slack, look for the channel #ws22 \u00a0 Video recordings See the Youtube playlist: [&hellip;]<\/p>\n","protected":false},"author":53,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"page-fullwidth.php","meta":{"footnotes":""},"class_list":["post-815","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/members.loria.fr\/SIvaldi\/wp-json\/wp\/v2\/pages\/815","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/members.loria.fr\/SIvaldi\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/members.loria.fr\/SIvaldi\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/members.loria.fr\/SIvaldi\/wp-json\/wp\/v2\/users\/53"}],"replies":[{"embeddable":true,"href":"https:\/\/members.loria.fr\/SIvaldi\/wp-json\/wp\/v2\/comments?post=815"}],"version-history":[{"count":45,"href":"https:\/\/members.loria.fr\/SIvaldi\/wp-json\/wp\/v2\/pages\/815\/revisions"}],"predecessor-version":[{"id":970,"href":"https:\/\/members.loria.fr\/SIvaldi\/wp-json\/wp\/v2\/pages\/815\/revisions\/970"}],"wp:attachment":[{"href":"https:\/\/members.loria.fr\/SIvaldi\/wp-json\/wp\/v2\/media?parent=815"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}