ERF Workshop

"Trustworthy Robots – Safety, Credibility, Explainability"

European Robotics Forum, 21 March 2019, 14:00 - 15:30, Salon C

Motivation and Objective

Robot safety is accepted as a paramount characteristic of modern robot systems that operate in a shared human-robot environment. Safety alone, however, covers only a fraction of the interaction qualities that we, as humans, expect from other actors in our shared world; these expectations build significantly upon trust and trustworthy behavior. Yet trust is not a property that can be built into a system in the same way that we integrate functional safety.

We gain trust in engineered artefacts by relying on certified properties and on the ability to explain their behavior. Modern robots, however, employ AI systems as high-level control authorities that rely on complex models, which are largely opaque to humans. This challenges the assumption that their behavior is fully explainable.

This workshop brings together experts from robotics/engineering, computer science, AI, ethics and the social sciences to initiate a discussion on this topic, which will be paramount for future robot systems.

Speakers & Abstracts

Explainable AI for Trusted Human-Machine Partnership, Daniele Magazzeni, King's College London

As autonomous systems are increasingly adopted into applications, the challenge of supporting interaction with humans is becoming more apparent. In part this is to support integrated working styles, in which humans and intelligent systems cooperate in problem-solving, but it is also a necessary step in building trust as humans transfer greater responsibility to such systems. The challenge is to find effective ways to communicate the foundations of autonomous-system behaviours when the algorithms that drive them are far from transparent to humans. In this talk we consider the opportunities that arise in AI Planning, exploiting model-based reasoning and the ROSPlan framework, and we highlight a number of open issues, particularly in scenarios involving human-machine teaming.

Trust by Physical and Behavioral Design, Roel Pieters, Tampere University of Technology

Robots developed for social and industrial human-robot interaction show great promise as tools to assist people. While the functionality and capability of such robots are crucial to their acceptance, their visual appearance and behaviour play a large role as well. Our approach tries to understand how trust in robots can be gained through physical and behavioural design. While physical design is limited by closed-source designs and expensive design tools, behavioural design requires advanced communication techniques to achieve trust. We propose solutions and examples for both.

Credibility – An Enabling Factor for Trustworthiness, Michael Hofbaur, JOANNEUM RESEARCH

Modern robotics promotes applications where humans and robots operate side by side, and collaboratively, in industrial and everyday environments. Human safety is obligatory for these robots, but safety alone might not be sufficient for future robot applications. Just as we obey rules and norms ourselves, we require other actors in our shared world to do so as well. These rules and other societal norms define the desired and required behavioural spectrum that enables trust. Today's robot control systems, however, mostly deal with the basic functionalities of these mechatronic artefacts needed to fulfil a desired operational purpose. Concepts of trustworthiness are still beyond the scope of current systems. In our understanding, working on trustworthiness head-on may be premature: we first have to clarify the notion of trustworthiness in human-robot interaction and build the technological foundations, i.e. credible systems, in order to implement machines that humans would regard as trustworthy robots.

 

Explainable AI: Concepts, Methodologies & Challenges, Nadia El Bekri, Fraunhofer IOSB

Systems based on Artificial Intelligence (AI) are becoming increasingly complex and critical in areas that directly affect individuals and society. The decisions of such systems must be based on legal and ethical principles. Due to the black-box nature of many AI systems and the lack of transparency in their decision-making processes, this is an extremely difficult task. Explainable AI (XAI) addresses this problem by explaining a black box's actions, decisions and behaviors to human operators. This presentation gives an overview of the different principles and highlights important concepts in XAI. In addition, open challenges that need to be solved in the future will be addressed.
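
To make the notion of an "explanation" concrete, the following minimal Python sketch applies permutation importance, one widely used model-agnostic XAI technique (chosen here purely for illustration; it is not necessarily among the methods covered in the talk). It scores each input feature by how much randomly shuffling that feature degrades the model's accuracy; scikit-learn and its built-in breast-cancer dataset are assumed as placeholders.

    # Illustrative XAI example: permutation importance (model-agnostic).
    # Assumes scikit-learn; the dataset and model are placeholders.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0)

    # The "black box" whose decisions we want to explain.
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure the drop in test accuracy;
    # large drops mark features the model's decisions depend on.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    for idx in result.importances_mean.argsort()[::-1][:5]:
        print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")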

Extracting Explanations from Deep Neural Networks, Marco Huber, University of Stuttgart

Modern AI applications frequently use deep neural networks to find patterns in large datasets and learn complex relations in the data. Unfortunately, these models are considered black boxes, which limits trust in their reasoning. In this talk, a practical method for extracting simple rules or decision trees from (deep) neural networks is introduced. In doing so, a user can understand the reasoning of the network, for example by identifying which features are particularly important and how they interact. It is shown that simply fitting a decision tree to a learned neural network usually leads to unsatisfactory results in terms of accuracy and fidelity, because the complex structure of a neural network cannot easily be mapped to a rather simple decision tree. The talk therefore demonstrates how to influence the structure of a neural network during training so that fitting a decision tree yields highly accurate results, as shown on various benchmark datasets.
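
As a baseline illustration of this idea, the following Python sketch distils a small decision tree from a trained network by fitting the tree to the network's predictions, then reports both accuracy (tree vs. true labels) and fidelity (tree vs. network). This is a minimal sketch using scikit-learn, not the structuring method presented in the talk; as the abstract notes, without shaping the network during training, the fidelity of such a shallow tree is typically unsatisfactory.

    # Baseline decision-tree extraction from a trained neural network.
    # A minimal sketch (plain distillation), not the talk's method.
    from sklearn.datasets import make_classification
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # 1. Train the black-box model (here: a small feed-forward network).
    net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                        random_state=0).fit(X_train, y_train)

    # 2. Fit a shallow decision tree to the *network's* predictions,
    #    so the tree mimics the network rather than the raw labels.
    tree = DecisionTreeClassifier(max_depth=4, random_state=0)
    tree.fit(X_train, net.predict(X_train))

    # 3. Accuracy: how well the tree predicts the true labels.
    #    Fidelity: how closely the tree reproduces the network's behavior.
    print("accuracy:", accuracy_score(y_test, tree.predict(X_test)))
    print("fidelity:", accuracy_score(net.predict(X_test), tree.predict(X_test)))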

Trust in Robots – Priorities in Research and Industry, Markus Vincze, Vienna University of Technology

While cognitive robotics is still an evolving discipline and much research remains to be done, we nevertheless need a clear idea of what cognitive robots will be able to do if they are to be useful to industrial developers and end users. The RockEU2 project canvassed the views of thirteen developers to find out what they and their customers want. The results of this survey are presented as a series of eleven functional abilities. Key abilities include safe and transparent operation, easy programming of the robot using high-level instructions, and the steady acquisition of knowledge. Autonomy was not requested, or only within set limits.

Trust and Predictability of Robot Behavior, Martina Mara, Johannes Kepler University Linz

Trust is a fundamental human need. It serves as an important underpinning of social interaction and collaboration, both in the interpersonal context and between man and machine. One of the cognitive factors that psychologists have identified as crucial to trust formation is predictability, that is, the occurrence of specific behavioral patterns in the expected way. With respect to human-robot interaction, this means that it is essential for the human partner to know in advance what to expect from the robot and to be able to identify its planned actions. Ideally, it should be intuitively apparent when, for example, a collaborative robot is about to intervene in a working process, or in which direction a mobile robot will move next. Nevertheless, the question of which robotic signals improve predictability, for which users, and in which contexts remains open, and research and practice still need greater insight here. This talk provides examples of robot intention signals and highlights open research questions.

Organizers

Michael Hofbaur (JOANNEUM RESEARCH, Austria), Nadia El Bekri (Fraunhofer IOSB, Germany), Mark Coeckelbergh (University of Vienna, Austria), Marco Huber (University of Stuttgart, Germany), Markus Vincze (Vienna University of Technology, Austria)