Responsibility is a crucial issue in AI, especially when things go wrong. If you hire an autonomous car and it crashes, who is responsible: you, the hire firm, the manufacturer, the government body that establishes safety standards, or the academic researcher who developed the algorithms used in the car?
It is argued that if the developers of AI have to take legal responsibility (algorithmic accountability), then this will itself lead to a degree of self-policing that ensures the safest AI. However, as intimated, responsibility is usually spread, and we may bear ethical responsibility even if there is no legal accountability. For this reason, many call for researchers to follow responsible innovation practices, anticipating potential applications and problems even at the early stages of research.
Used in Chap. 19: page 312; Chap. 23: pages 361, 362, 367, 370
Used in glossary entries: algorithmic accountability, responsible innovation