Contents
- 20.1 Overview
- 20.2 Introduction
- 20.3 Wrong on Purpose?
- 20.3.1 Intentional Bad Use
- 20.3.2 Unintentional Problems
- 20.4 General Strategies
- 20.4.1 Transparency and Trust
- 20.4.2 Algorithmic Accountability
- 20.4.3 Levels of Opacity
- 20.5 Sources of Algorithmic Bias
- 20.5.1 What Is Bias?
- 20.5.2 Stages in Machine Learning
- 20.5.3 Bias in the Training Data
- 20.5.4 Bias in the Objective Function
- 20.5.5 Bias in the Accurate Result
- 20.5.6 Proxy Measures
- 20.5.7 Input Feature Choice
- 20.5.8 Bias and Human Reasoning
- 20.5.9 Avoiding Bias
- 20.6 Privacy
- 20.6.1 Anonymisation
- 20.6.2 Obfuscation
- 20.6.3 Aggregation
- 20.6.4 Adversarial Privacy
- 20.6.5 Federated Learning
- 20.7 Communication, Information and Misinformation
- 20.7.1 Social Media
- 20.7.2 Deliberate Misinformation
- 20.7.3 Filter Bubbles
- 20.7.4 Poor Information
- 20.8 Summary
Glossary items referenced in this chapter:
accountability, accuracy, adversarial attacks, adversarial learning, aggregation for privacy, algorithmic accountability, anonymisation, anti-discrimination laws, automated decision, autonomous car, autonomous vehicles, autonomous weapons, backpropagation, base rate, bias, big data, black-box machine learning, Cambridge Analytica scandal, centroid, chatbot, choice of features, clustering, COMPAS, cyberattack, cyberwarfare, cyberweapons, de-bias, deep neural network, deliberate misinformation, delta, denial of service (DoS), echo chambers, ethics, expert system, explainable AI, explanation, Facebook, facial recognition, fake news, false negative, false positive, federated learning, fitness function, GDPR, generative AI, Google, Google search, GPT-4, human bias, human labelling, human-in-the-loop, ID3, identity theft, image processing, image recognition, k-means algorithm, labelling, machine learning, Microsoft, Microsoft Tay, misinformation, misinformation detection, natural language algorithms, natural language processing, network analysis, neural network, obfuscation, OpenAI, optimal classification, overfitting, PageRank, perturbation techniques, pragmatic, privacy, privacy preserving algorithms, protected characteristic, proxy indicator, pseudonymisation, search engine, search engine personalisation, semi-autonomous car, simulated data, social media, sources of bias, statistical bias, statistics, Stuxnet, symbolic systems, threshold, transparency, trust, Twitter bot, unintended bias, unique identifier, user interface, visualisation, web search