responsibility

Terms from Artificial Intelligence: humans at the heart of algorithms

Responsibility is a crucial issue in AI, especially when things go wrong. If an autonomous car you hire crashes, who is responsible: you, the hire firm, the manufacturer, the government body that establishes safety standards, or the academic researcher who developed the algorithms used in the car? It is argued that if the developers of AI have to take legal responsibility ({{algorithmic accountability}}), then this will itself lead to a level of self-policing ensuring the safest AI. However, as intimated, responsibility is usually spread, and we may bear ethical responsibility even if there is no legal accountability. For this reason many call for researchers to follow responsible innovation practices, looking forward to potential applications and problems even at early stages of research.

Used on pages 485, 561, 563, 570, 575