Simple algorithms run to completion in a short time on a single computer; if the computer suffers a fault, you simply start again. However, big data analyses often run for long periods on large numbers of cloud computers, and the failure of at least one of those computers becomes likely, if not inevitable. It is therefore critical that the algorithms used are robust to failure, ensuring that if one processor fails it is possible to detect this and redo its work without modifying the behaviour of more than a small number of other processors. A key feature of MapReduce and Apache Hadoop is this ability to 'pick themselves up' after processor failures.
Used in Chap. 8: page 113
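To make the recovery idea concrete, here is a minimal Python sketch of a coordinator that detects a failed worker and reschedules only that worker's task, leaving results already computed elsewhere untouched. It is purely illustrative: the `Worker` class, `run_task`, and the exception-based failure signal are assumptions standing in for real machine faults, not the actual Hadoop API.

```python
import random

class Worker:
    """Hypothetical unreliable machine: tasks occasionally fail."""
    def __init__(self, worker_id, failure_rate=0.2):
        self.worker_id = worker_id
        self.failure_rate = failure_rate

    def run_task(self, task):
        # A raised exception stands in for a detected processor fault.
        if random.random() < self.failure_rate:
            raise RuntimeError(f"worker {self.worker_id} failed")
        return f"result of {task}"

def run_job(tasks, workers):
    """Run all tasks to completion, redoing only the work of failed workers."""
    results = {}
    pending = list(tasks)
    while pending:
        task = pending.pop()
        worker = random.choice(workers)
        try:
            results[task] = worker.run_task(task)
        except RuntimeError:
            # Failure detected: requeue just this task; results held by
            # other workers are unaffected, as described above.
            pending.append(task)
    return results

if __name__ == "__main__":
    workers = [Worker(i) for i in range(4)]
    print(run_job([f"task-{n}" for n in range(8)], workers))
```

Because each task's result is independent, the coordinator never has to restart the whole job: in this sketch, as in MapReduce-style systems, recovery touches only the tasks that were lost.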