The notion of a “responsibility gap” with artificial intelligence (AI) was originally introduced into the philosophical debate by Andreas Matthias (2004) to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. In a nutshell, intelligent systems equipped with the ability to learn from interaction with other agents and the environment will make human control over, and prediction of, their behaviour very difficult if not impossible; yet human responsibility requires knowledge and control. We as humanity therefore face a dilemma: either we go on with the design and use of learning systems, thereby giving up on the possibility of holding human persons responsible for their behaviour, or we preserve human responsibility, and thereby give up on the introduction of learning systems in society. Matthias's formulation of the responsibility gap has been quite influential, especially in relation to the development of autonomous weapon systems (Sparrow, 2007; Human Rights Watch, 2015).
Building on literature in moral and legal philosophy and the ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected problems – gaps in culpability, in moral and public accountability, and in active responsibility – caused by different sources, some technical, others organisational, legal, ethical, and societal. Responsibility gaps may also arise with non-learning systems. The paper clarifies which aspect of AI may cause which gap in which form of responsibility, and why each of these gaps matters. It offers a critical review of partial and unsatisfactory attempts to address the responsibility gap: those which present it as a new and intractable problem (“fatalism”), those which dismiss it as a false problem (“deflationism”), and those which reduce it to only one of its dimensions or sources and/or present it as a problem that can be solved simply by introducing new technical and/or legal tools (“solutionism”). Finally, the paper outlines a more comprehensive approach to addressing the responsibility gaps with AI in their entirety, based on the idea of designing socio-technical systems for “meaningful human control”, that is, systems aligned with the relevant human reasons and capacities.