École polytechnique fédérale de Lausanne
Dexterous control of force and prediction of load for safe and robust human-robot handover.
For robots to safely pick up objects handed over to them, they must accurately estimate the object's weight. This estimate may be obtained by inferring the load from the way the human carries the object. Because that inference may still be partly incorrect, the robot must also be able to re-assess the weight very rapidly once it senses the load in its own grip.
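The two-stage assessment described above (a visual prior, rapidly corrected by in-grip sensing) can be sketched as a simple one-dimensional Kalman-style fusion. This is an illustrative sketch only, not the speaker's method; the function name, values, and variances are assumptions.

```python
# Illustrative sketch (not from the talk): fusing a vision-based weight
# prior with a rapid in-grip force measurement via a 1-D Kalman update.

def fuse_weight_estimate(prior_kg, prior_var, sensed_kg, sensor_var):
    """Combine a visual prior on object weight with a force-sensor reading.

    prior_kg / prior_var:   weight guess from observing the human carry
    sensed_kg / sensor_var: load sensed once the robot grips the object
    Returns the updated (posterior) estimate and its variance.
    """
    gain = prior_var / (prior_var + sensor_var)          # Kalman gain
    post_kg = prior_kg + gain * (sensed_kg - prior_kg)   # corrected estimate
    post_var = (1.0 - gain) * prior_var                  # reduced uncertainty
    return post_kg, post_var

# Vision suggests ~1.0 kg (uncertain); grip sensing reads 1.8 kg (precise):
est, var = fuse_weight_estimate(1.0, 0.25, 1.8, 0.01)
```

Because the grip sensor is far more precise than the visual prior, the posterior jumps almost all the way to the sensed value, which captures the "very rapid re-assessment" the abstract describes.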
Human-Robot Handover, as a canonical Human-Robot Joint Action
This talk aims to contribute to the analysis of Human-Robot Handover as a canonical Human-Robot Joint Action for a cognitive and interactive robot which shares space and tasks with humans. We adopt a constructive approach based on the identification and effective implementation of the needed skills and interaction schemes, including human-aware task planning and interleaved execution of shared plans. We will also discuss key issues linked to the pertinence of the robot's behaviour and its acceptability to the human.
Influence-Aware Handovers and Beyond.
The actions robots take influence the actions that people take. When a robot decides on the pose of a handover, it influences what grasp the person might choose. Further, the actions robots take influence the beliefs people have. The way the robot times its handover motion influences people's perceptions about how heavy the object is. In this talk, we'll cover how robots might be able to model these influences and account for them when generating their behavior.
University of Bielefeld
Exploiting tactile information for human-robot handover tasks.
We studied human-human handover tasks, recording both hand motion and interaction forces with a tactile data glove. This provides valuable hints on how to design human-robot handovers. Furthermore, I will discuss how tactile-based robot controllers can be employed to achieve robust handovers. If time permits, I will also report on our work on planning robot-robot handover tasks.
Institute of Cognitive Sciences and Technologies
Models for Sensorimotor Communication in Humans and Robots.
In current research on modeling fast online social interactions, an increasingly crucial role is played by Sensorimotor Communication (SMC). SMC refers to (often subtle) communicative signals embedded within pragmatic actions: to optimize joint actions, co-actors exchange communicative signals on the same (motor) channel as the to-be-performed action. Here, we focus on computational models implementing SMC. Under the hypothesis that performer and perceiver agents share, at least in part, the same action repertoire, the internal models involved in action production can be reused "in simulation" to recognize the action when it is performed by somebody else. As a result, we suggest that a robotic device with SMC abilities should optimize a cost-benefit trade-off between (i) the cost of modifying one's behavior (e.g., a motor cost) and (ii) the benefit in terms of the success of the human-robot interaction (e.g., achieving a joint goal).
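The cost-benefit trade-off in the last sentence can be sketched as a tiny utility maximization: the robot chooses how strongly to exaggerate its movement as a communicative signal, trading off an assumed diminishing-returns benefit to joint-action success against an assumed quadratic motor cost. All function shapes and constants here are hypothetical, not the authors' model.

```python
# Hedged sketch of the SMC cost-benefit trade-off: pick the signaling
# exaggeration that maximizes expected interaction benefit minus motor cost.

def expected_benefit(exaggeration):
    # Assumed: more legible (exaggerated) motion raises the chance the
    # partner infers the joint goal, with diminishing returns.
    return 1.0 - 0.8 * (0.5 ** exaggeration)

def motor_cost(exaggeration, weight=0.15):
    # Assumed: quadratic effort penalty for deviating from the
    # individually optimal (non-communicative) action.
    return weight * exaggeration ** 2

def best_signal(levels):
    # Choose the exaggeration level with the highest net utility.
    return max(levels, key=lambda e: expected_benefit(e) - motor_cost(e))

choice = best_signal([i * 0.1 for i in range(31)])  # search 0.0 .. 3.0
```

With these assumed curves the net utility peaks at a moderate exaggeration (around 1.0): signaling a little is worth the motor cost, signaling a lot is not.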
Imperial College London
Explainable HRI: Lessons from Human Motor Control
Recent progress in robotics and AI has brought us into ever closer physical contact with robots. We now have robots performing tasks close to, in collaboration with, or in assistance of humans, be it in medical robots, self-driving cars, robot-assisted care, or collaborative industrial settings. Whether during handovers or other types of direct physical interaction, robot control now directly faces human motor control. In my talk, I will give an overview of our experiences with physical human-robot interaction in industrial, medical, and assistive robotics settings, as well as in more conceptual interactive studies, aimed at better understanding the intricacies of human-(intelligent)robot interaction. I will use these experiences to highlight the critical importance of achieving explainability in physical human-robot interaction. Explainability, and ultimately trust in AI-driven robots, will set fundamental limits on how robot-friendly our futures will be, particularly where physical contact is concerned. I will describe how we can achieve this by drawing inspiration from human motor control, creating a counterpart hierarchical and interpretable robot control architecture that can adapt to new physical tasks and new interacting humans over time.
Honda Research Institute EU
There is consensus in both academia and industry that physical human-robot interaction technology has large potential for future robotics applications. Opportunities are increasingly emerging in factory domains, with prospects of extending to other areas such as assisted living. In this workshop contribution, we will present concepts for intuitive and understandable physical human-robot collaboration. Our research focuses on the cooperative handling of large and heavy objects that are difficult for a human to handle alone. We will discuss approaches for sharing intentions between human and robot to improve awareness of the partner's next actions, using haptics, augmented reality, and human modeling. We will conclude with interesting findings on the topic of "handing over initiative", and show experiments in which humans and robots cooperatively handle large boxes and motorcycle wheels.
Learning Proper Handovers from Human Givers: Haptic Interaction and Object Orientations in Handovers
Humans can perform handovers smoothly and reliably in many different situations and with many different objects. There is much a robot giver can learn from humans about how to properly hand over objects in order to facilitate effective human-robot collaboration. The first part of this talk discusses an investigation of the haptic interaction between a human giver and a human receiver. We identified the grip force control strategy used by human givers and receivers, and created a human-inspired handover controller that allows a robot giver to mimic human grip force control when handing over objects. User study results show that the proposed controller yields robot-to-human handovers that are robust, intuitive, efficient, and preferred by users. The second part discusses how robots can learn to orient objects properly for handovers. Existing methods that rely on manual labelling lack generalizability and scalability. We present an affordance-focused framework, and its implementation, for enabling robots to learn proper handover orientations through observation of and interaction with humans.
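A common way to model the kind of giver grip-force strategy described above is to command grip force in proportion to the load the giver still carries, so the grip relaxes smoothly as the receiver takes the object's weight. The sketch below is an assumed, generic version of that idea, not the authors' controller; the slope and safety margin are hypothetical parameters.

```python
# Illustrative sketch (assumed, not the talk's controller): during a
# handover, the giver's grip force tracks its remaining share of the
# object's load, so the grip relaxes as the receiver takes the weight.

def giver_grip_force(load_share_N, slope=2.0, safety_margin_N=1.0):
    """Grip force command for a robot giver.

    load_share_N:    vertical load still carried by the giver (newtons)
    slope:           grip-to-load ratio (depends on friction; assumed value)
    safety_margin_N: small offset so the object is never dropped early
    """
    return max(slope * load_share_N + safety_margin_N, 0.0)

# As the receiver takes over, the commanded grip force decays smoothly:
profile = [giver_grip_force(load) for load in (10.0, 5.0, 1.0, 0.0)]
# profile == [21.0, 11.0, 3.0, 1.0]
```

A controller of this shape never releases before the receiver has the load, which is one reason such strategies yield the robust, intuitive handovers the abstract reports.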