ethics

Terms from Artificial Intelligence: humans at the heart of algorithms


Ethics is about considering what is right or wrong from a moral perspective, not merely what is accurate or effective. The term implies a level of societal agreement as to rules or principles of conduct, as opposed to personal morality, which is about an individual's moral compass. Within AI we may face ethical issues regarding the direct use of technology (for example, the development of autonomous weapons) or its indirect impact (for example, increasing digital exclusion). There are fundamental philosophical and legal questions as to whether a machine is itself an ethical actor, whether it can be held responsible for its actions, or whether responsibility lies solely with its designers and/or owners. Certainly, the process of programming or training automated systems often means we have to make ethical positions explicit. A classic example of this is the trolley problem.
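To see how programming forces an ethical position to be made explicit, consider a minimal sketch of a trolley-style decision. The function, policy names, and harm scores below are all hypothetical illustrations, not part of the original text: the point is only that the code cannot remain neutral, because some ethical rule must be written down before it can run.

```python
# Illustrative sketch only: encoding a trolley-style choice forces an
# ethical position (here, "utilitarian" vs "deontological") to be explicit.
# All names and rules are hypothetical.

def choose_track(harm_if_stay: int, harm_if_divert: int,
                 policy: str = "utilitarian") -> str:
    """Return 'stay' or 'divert' under an explicitly stated ethical policy."""
    if policy == "utilitarian":
        # Minimise total harm: divert only if doing so reduces harm.
        return "divert" if harm_if_divert < harm_if_stay else "stay"
    if policy == "deontological":
        # Refuse to take an action that actively causes harm,
        # even if inaction leads to greater harm overall.
        return "stay"
    raise ValueError(f"unknown policy: {policy}")

# The two policies disagree on the same facts, showing that the
# programmer's choice of rule is itself an ethical commitment.
print(choose_track(harm_if_stay=5, harm_if_divert=1, policy="utilitarian"))
print(choose_track(harm_if_stay=5, harm_if_divert=1, policy="deontological"))
```

Whichever branch the designer writes, the system's behaviour embodies a moral stance that might otherwise have remained implicit.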

Used in Chap. 20: pages 334, 337

Also known as ethical