A Workshop on Responsible Robotics

Event start time
8.45
Event end time
17.30
Place

Main building (address: Kalevantie 4) and Pinni B building (address: Kanslerinrinne 1).

Organiser(s)

Robotics and the Future of Welfare Services (Academy of Finland; University of Tampere);
School of Social Sciences and Humanities, University of Tampere;
Responsibility in Society and Technology: Bridging the Gap (NWO, Netherlands);
Foundation for Responsible Robotics

Programme

8.45–12.00 Main Building, auditorium A4

8.45–9.00 Opening words
9.00–10.30 Joanna Bryson (Princeton, Bath): Five Reasons Not to Personify AI

Abstract: Artificial Intelligence is often treated as an alien force or an unruly, potentially dangerous child. In fact, it is just a special case of computation being commodified, which is to say that the means by which it is changing society are not trivial, but are less transparent than simple opposition. Intelligence is the triggering of appropriate actions in response to perceived events. Information technology has arguably been enhancing our capacity to do this for thousands of years.

It allows us both to remember and to perceive more than we could as individuals, which in turn allows us to innovate and cooperate in unprecedented ways, sometimes at the expense of each other, of other groups, or of the rest of the ecosystem. In this talk I will first redescribe AI as an ecological feature of one species and show how it affects not only our world but ourselves as individuals. Then I will talk about efforts to regulate AI, with a focus on the British efforts going back six years now to the Principles of Robotics. Finally, I will address why we should not construct AI to be legal or moral agents – not because such construction is impossible, but because it is ill-advised and easily avoided, at least for commercial products.

10.30–12.00 Filippo Santoni de Sio (TU Delft): Meaningful human control over autonomous systems: a philosophical analysis

Abstract: Fully Autonomous Weapon Systems (AWS), or “killer robots”, once activated, can select and attack targets without further human intervention. AWS raise two related ethical concerns: (a) it may be wrong to give a machine control over lethal activities, and (b) the use of AWS may create undesired gaps in responsibility attributions for (wrong) military actions. Governmental and non-governmental actors have insisted on the ethical principle of “meaningful human control” over AWS to preserve human moral responsibility, but they have recognized the lack of a philosophical theory that gives this principle a precise content. This paper aims at laying the foundation of a philosophical theory of meaningful human control over autonomous systems, based on insights from the “compatibilist” literature on free will and moral responsibility, in particular the concept of “guidance control” as elaborated by Fischer & Ravizza (1998).

The paper aims at giving a fresh contribution to computer and robot ethics, by systematically introducing into it an analysis of control based on the philosophical literature on free will and moral responsibility; it also aims at giving a fresh contribution to the compatibilist theory of moral responsibility, by elaborating a new philosophical framework for understanding one particular kind of human control, i.e. meaningful human control over autonomous robotic systems.

12.00–13.15 LUNCH

13.15–17.30 Pinni B4115

13.15–14.45 Philosophical issues of responsibility and robotics

Raul Hakli & Pekka Mäkelä (Helsinki): Robots, Autonomy, and Responsibility

Abstract: We study whether robots can satisfy the conditions for agents fit to be held responsible in a normative sense, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. On the basis of Alfred R. Mele's history-sensitive account of autonomy and responsibility, it can be argued that even if robots were to have all the capacities usually required of moral agency, their history as products of engineering would undermine their autonomy and thus their responsibility.

Säde Hormio (Helsinki): The tale of disappearing responsibility in the processing device

Abstract: The moral implications of artificial intelligence have inspired intriguing science fiction books as well as some great works of cinema, where imagination is the only limit to the complex creations in robotics and software engineering. These works often share a central worry about the unintended consequences of these creations. The worry of this talk, in contrast, is about the intended consequences: can (and do) we design machines that create responsibility voids? What happens to individual moral responsibility when we make ever more sophisticated machines?
 
Arto Laitinen (Tampere): Forward-looking collective responsibility, mandates, and the development of robotics

Abstract: This paper approaches forward-looking collective responsibility in four steps. The first question is the identification of tasks that ought to be tackled (e.g. climate change). An argument is given that various issues of responsible robotics are among these tasks.

The second question is whether there already exists a mandated agency set up for the purpose of tackling those tasks (say, fire brigades to tackle future fires, or the welfare state to take care of the preconditions of a good life generally; there are seldom global agencies for global challenges or disruptive technologies).

Third, whether there should be new ones, and whether the current agencies should be revised (e.g. to distribute burdens and democratic control fairly, to bridge responsibility gaps, to prevent immorality). The fourth question is the distribution of responsibilities “here and now”, in the situation where mandated agencies do not cover the issue in its entirety (because institutionalized agency is only one of many levels on which action is required, or because the agencies do not yet cover the whole scope of what they could cover, or because they do not in fact function); where new or revised agencies should be set up; and where the likelihood that adequate agencies will be set up is less than 100%; and where effectively tackling the issue will take more than individual action.

The distribution of responsibilities is approached via the thought experiment of unlimited original responsibility and the question: why isn’t every agent responsible for absolutely everything? Various principles of restriction are discussed, concerning what agents can do and know (you are responsible only for what you can do and know), what they are entitled or mandated to do (the missing mandate of corporations to act on our behalf; the right of agents to be primarily responsible for themselves, but also the right to contribute), and what it would be fair and reasonable to expect.

For instance, the explanation for the responsibility to “pick up the slack” (when the agent whose primary responsibility it is fails to deliver) is that some justified restriction of original responsibility is lifted, rather than a new responsibility created.

Commentator: Lauri Lahikainen (Tampere)
 
15.15–16.45 ROSE project presents:

Marketta Niemelä (VTT, Tampere): Responsibility in robotics research
Pertti Koistinen & Tuomo Särkikoski (Univ. of Tampere): Robotics and employment
Jaana Parviainen (Univ. of Tampere): Motions with emotions? A double body perspective and human-robot interaction in elderly care
 
16.45–17.30 Closing discussion (+ short presentations, TBA)
