Contents
- 11.1 Overview
- 11.2 Introduction
- 11.3 Characteristics of Game Playing
- 11.4 Standard Games
- 11.4.1 A Simple Game Tree
- 11.4.2 Heuristics and Minimax Search
- 11.4.3 Horizon Problems
- 11.4.4 Alpha–Beta Pruning
- 11.4.5 The Imperfect Opponent
- 11.5 Non-zero-sum Games and Simultaneous Play
- 11.5.1 The Prisoner's Dilemma
- 11.5.2 Searching the Game Tree
- 11.5.3 No Alpha–Beta Pruning
- 11.5.4 Pareto-optimality
- 11.5.5 Multi-party Competition and Co-operation
- 11.6 The Adversary Is Life!
- 11.7 Probability
- 11.8 Neural Networks for Games
- 11.8.1 Where to Use a Neural Network
- 11.8.2 Training Data and Self Play
- 11.9 Summary
Glossary items referenced in this chapter
adversarial learning, alpha–beta pruning, AlphaGo, AlphaGo Zero, Analytical Engine, Babbage, Charles, bootstrapping, branch and bound search, branching factor, breadth first search, chess heuristic, chess program, coin-weighing problem, computer chess, Cuban missile crisis, deep neural network, deterministic search, distributed AI, dominoes, game-playing heuristics, game theory, game tree, genetic algorithm, Go, heuristic evaluation function, hill climbing algorithm, human intelligence, iterative deepening, knowledge-rich search, Lee Sedol, machine learning, magic square, minimax score, minimax search, Monte Carlo tree search, neural network, non-zero-sum game, noughts and crosses, optimal solution, Pareto-optimal, pattern matching, placing dominoes, plateau, policy network, prisoner's dilemma, probabilistic reasoning, probability, probability-based cut-off, risk avoidance, robotics, search horizon, search space, search tree, self play, stochastic search, uncertainty, value network, zero-sum game