ROBOTICS

Seminar on Cybersecurity and Control

12/09/2019

13:00 - 16:00

Alpen-Adria-Universität Klagenfurt

z.1.09

Fachdialog

A series of four expert talks on security in the robotics domain, with a special focus on control aspects, vulnerabilities, and security by design for future robots.


Speakers:

Nacim Ramdani (Université d’Orléans): Control Perspectives of Cyber Attacks

Abstract: Autonomous robots and most of today’s critical infrastructures are cyber-physical systems (CPS) that operate in highly networked environments, as they need to communicate remotely with control and management systems. This makes them more vulnerable to cyber-attacks. For instance, an important scenario is posed by a malicious adversary that can arbitrarily corrupt the measurements of a subset of (remote) sensors in the CPS. Because sensor measurements are used to generate control commands, corrupted measurements will lead to corrupted commands, thus critically affecting the behaviour of the CPS.

From a control theory perspective, one needs to develop algorithms and architectures for detecting cyber-attacks on sensors or actuators, and for mitigating their impact on the resilience and overall performance of the CPS. State-of-the-art methods often consider active attack detection, control algorithms that work directly on encrypted sensor data, or secure state estimation methods that remain resilient when sensors are under cyber-attack.

In this talk, I will first briefly review the recent literature on cyber-security from the perspective of control theory. I will then describe our approach to secure state estimation: a secure interval state estimator for linear continuous-time systems with discrete-time measurements subject to both bounded-error noise and cyber-attacks. The interval state estimator is modelled as an impulsive system, where impulsive corrections are made periodically using the measurements. The approach includes a new selection strategy that endows the state estimation with resilience to attacks, under the assumption that only a subset of the sensors can be attacked, although this subset is unknown a priori. The approach will be illustrated in simulation with robot navigation under cyber-attack.
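To give a rough flavour of the idea (a simplified sketch, not the speaker's actual estimator), the following Python snippet propagates an interval enclosure of a scalar linear system between sampling instants and applies an impulsive correction at each measurement, fusing redundant sensor intervals with a simple rule that tolerates up to a given number of attacked sensors. The dynamics, all parameter values and the fusion rule are illustrative assumptions.

```python
# A minimal, illustrative sketch of interval state estimation with impulsive
# (periodic) measurement corrections and a simple selection rule that tolerates
# up to S_ATTACKED compromised sensors. This is NOT the speaker's actual
# estimator: the scalar dynamics, all parameters and the robust-fusion rule
# below are assumptions chosen for clarity (and require m > 2*s sensors).

import math
import random

A = -0.5          # assumed stable scalar plant:  dx/dt = A*x + w
W_MAX = 0.05      # bound on the process disturbance |w|
EPS = 0.1         # bound on honest sensor noise
T = 0.5           # sampling period of the discrete-time measurements
M_SENSORS = 5     # redundant sensors measuring the same state
S_ATTACKED = 1    # at most this many sensors may be attacked (identity unknown)

def predict(lo, hi, dt):
    """Propagate the interval [lo, hi] through dx/dt = A*x + w over dt seconds."""
    decay = math.exp(A * dt)
    dist = W_MAX * (1.0 - decay) / (-A)   # worst-case disturbance contribution
    return decay * lo - dist, decay * hi + dist

def robust_fusion(intervals, s):
    """Fuse m sensor intervals of which at most s may be arbitrary (attacked).
    The true state lies in at least m - s of them, so (for m > 2s) it lies in
    [(s+1)-th smallest lower bound, (s+1)-th largest upper bound]."""
    lows = sorted(iv[0] for iv in intervals)
    highs = sorted((iv[1] for iv in intervals), reverse=True)
    return lows[s], highs[s]

def correct(lo, hi, measurements, s):
    """Impulsive correction: intersect the predicted interval with the
    attack-tolerant fusion of the sensor intervals."""
    sensor_ivs = [(y - EPS, y + EPS) for y in measurements]
    f_lo, f_hi = robust_fusion(sensor_ivs, s)
    return max(lo, f_lo), min(hi, f_hi)

if __name__ == "__main__":
    random.seed(0)
    x_true = 2.0
    lo, hi = -5.0, 5.0                    # initial interval enclosure of the state
    for k in range(10):
        # true state: exact response to a constant disturbance over one period
        w = random.uniform(-W_MAX, W_MAX)
        decay = math.exp(A * T)
        x_true = decay * x_true + w * (1.0 - decay) / (-A)
        lo, hi = predict(lo, hi, T)
        # sensor readings: honest sensors within EPS, sensor 0 is compromised
        ys = [x_true + random.uniform(-EPS, EPS) for _ in range(M_SENSORS)]
        ys[0] += 3.0                      # injected bias on the attacked sensor
        lo, hi = correct(lo, hi, ys, S_ATTACKED)
        assert lo <= x_true <= hi         # enclosure survives the attack
        print(f"k={k:2d}  x={x_true:+.3f}  interval=[{lo:+.3f}, {hi:+.3f}]")
```

The point of the selection step is that no individual sensor is trusted: as long as strictly fewer than half of the redundant sensors are compromised, the fused interval, and hence the corrected estimate, still encloses the true state.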


Bernhard Dieber (JOANNEUM RESEARCH ROBOTICS): Live Demo: Hacking the Safety Systems of a Mobile Robot

Abstract: Robots have integrated safety systems that are used to prevent incidents such as collisions with bystanders. For mobile robots, this is mostly done with laser sensors. Since these systems are crucial components of a robot, they are engineered to a very high safety integrity level and use specialized, ultra-reliable, certified hardware components. Despite that, we have found a way to remotely disable the safety subsystem of a mobile robot, thus turning it into a harmful device. In this live demonstration, we will show this hack and go into detail on the underlying vulnerabilities. We will also present supporting material and discuss the current state and open issues of robot cybersecurity.


Michael Hofbaur (JOANNEUM RESEARCH ROBOTICS): Active Diagnosis of Cyber Attacks Using the Physical System's Redundancy

Abstract: Control systems define the physical interactions of robots and, more generally, of mechatronic systems. In that sense, they operate on the boundary between the physical and the cyber world. Cyber-attacks on a robot’s controller can thus cause physical danger to humans, systems and infrastructure. Security measures typically build upon advanced schemes for the underlying software components of a robot or its underlying mechatronic devices. This talk, however, will explore schemes in which the physical properties of a robot are deliberately utilized to identify harmful cyber-attacks, and will thus draft possible corridors for novel cybersecurity measures for robot systems at the system level.
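As a rough illustration of the kind of physical redundancy that could be exploited (a simplified sketch, not the speaker's actual scheme), the snippet below compares the motion measured at a single joint with the motion that a simple physics model predicts for the commanded torque; a persistent mismatch flags a possible attack or fault. The pendulum model, parameters, threshold and signal names are assumptions made for this illustration.

```python
# A hand-rolled, illustrative sketch of using a robot's physical model as a
# redundant source of information for attack detection (analytical redundancy).
# The single-joint pendulum model, the threshold and all signal names are
# assumptions made for this illustration; they do not describe the speaker's
# actual scheme.

import math

M, L, G, B = 1.0, 0.5, 9.81, 0.1   # assumed mass, arm length, gravity, friction
I = M * L * L                       # moment of inertia about the joint

def model_acceleration(theta, omega, tau_cmd):
    """Joint acceleration the physics model predicts for the commanded torque."""
    return (tau_cmd - B * omega - M * G * L * math.sin(theta)) / I

def residual(theta, omega, alpha_meas, tau_cmd):
    """Discrepancy between measured and physically predicted acceleration."""
    return abs(alpha_meas - model_acceleration(theta, omega, tau_cmd))

THRESHOLD = 0.5  # assumed margin covering model error and sensor noise

def is_suspicious(theta, omega, alpha_meas, tau_cmd):
    """Flag a sample whose measured motion is inconsistent with the command."""
    return residual(theta, omega, alpha_meas, tau_cmd) > THRESHOLD

if __name__ == "__main__":
    theta, omega, tau = 0.3, 0.2, 1.0
    # consistent sample: measured acceleration matches the physics closely
    alpha_ok = model_acceleration(theta, omega, tau) + 0.05
    # inconsistent sample: e.g. the torque command reported to the monitor has
    # been tampered with while the joint actually executes something else
    alpha_bad = model_acceleration(theta, omega, tau) + 2.0
    print("consistent sample flagged:", is_suspicious(theta, omega, alpha_ok, tau))
    print("tampered sample flagged:  ", is_suspicious(theta, omega, alpha_bad, tau))
```

The underlying idea is that the physics of the robot acts as a second, hard-to-forge information source: an attacker who manipulates commands or measurements in software must still remain consistent with the laws of motion to go undetected.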


Stefan Rass (Technische Wissenschaften, Alpen-Adria-Universität Klagenfurt): Incentive-Based Robot Security - Game Theory for Safe and Secure Collaboration

Abstract: Developing robots in a team is a complex joint venture that must achieve not only functional correctness but also safety and security at the same time. While many mechanisms from system security (cryptographic and others) are known and available, implementing them correctly and effectively requires (i) a solid understanding of how a mechanism works, and (ii) considerable awareness that the mechanism is needed in the first place. Time pressure and the complexity of the system’s function itself may be inhibitors that cause security to receive secondary priority. When it comes to humans collaborating with robots, or at least working in their proximity without protection, safety and security become primary concerns. This talk motivates an approach to system security that stems from game theory and strives to create incentives for people to care about security throughout the entire life cycle of a robotic system. The mere fact that a person can be harmed by a poorly programmed robot may be insufficient, and better incentives are needed for security to become an effective part of future robotics. System security has many protections to offer, and game theory is anticipated as an aid for putting them to work. The talk will give an introduction to the ideas and possibilities that game theory offers to this end.
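To hint at what a game-theoretic treatment of such incentives can look like (a toy example, not taken from the talk), the snippet below enumerates the pure-strategy Nash equilibria of a minimal defender-attacker game; all strategies and payoff numbers are invented for illustration.

```python
# A toy two-player security game, meant only to give the flavour of the
# game-theoretic view of incentives. All strategies and payoff numbers are
# invented for this illustration and are not taken from the talk.

from itertools import product

DEFENDER = ["invest", "skip"]        # invest in security, or skip it
ATTACKER = ["attack", "no attack"]

def payoffs(penalty):
    """(defender, attacker) payoffs. `penalty` models an external incentive
    (e.g. a fine or failed certification) for skipping security, independent
    of whether an attack actually happens."""
    return {
        ("invest", "attack"):    (-2.0,           -1.0),  # attack largely fails
        ("invest", "no attack"): (-2.0,            0.0),  # only the security cost
        ("skip",   "attack"):    (-1.0 - penalty,  3.0),  # attack succeeds; the
                                                          # developer bears little
                                                          # of the human's harm
        ("skip",   "no attack"): (-penalty,        0.0),
    }

def pure_nash(payoff):
    """Enumerate pure-strategy Nash equilibria of the 2x2 game."""
    equilibria = []
    for d, a in product(DEFENDER, ATTACKER):
        u_d, u_a = payoff[(d, a)]
        d_is_best = all(payoff[(d2, a)][0] <= u_d for d2 in DEFENDER)
        a_is_best = all(payoff[(d, a2)][1] <= u_a for a2 in ATTACKER)
        if d_is_best and a_is_best:
            equilibria.append((d, a))
    return equilibria

if __name__ == "__main__":
    # Without an external incentive, "skip security and get attacked" is the
    # equilibrium outcome ...
    print("no penalty  :", pure_nash(payoffs(penalty=0)))
    # ... while a sufficiently strong incentive shifts it to "invest, no attack".
    print("with penalty:", pure_nash(payoffs(penalty=3)))
```

In this toy model the mere possibility of harm does not change the developer's behaviour by itself; only a sufficiently strong external incentive moves the equilibrium from "skip security and get attacked" to "invest in security", which is exactly the kind of incentive design the talk argues for.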