cross-validation


Cross-validation is when you retain some portion of a training set to verify learning on the rest. It is a way to test the generalisation of the resulting machine learning or statistical model. Models typically perform better on the data on which they are trained than on fresh data, and may overfit that data. When the goal is to evaluate an algorithm, k-fold cross-validation may be used: the data is divided into k subsets (folds) and the cross-validation is performed k times, in each pass withholding one of the folds from the training set and using it to validate the outcome.
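
As an illustration, here is a minimal sketch of k-fold cross-validation in Python. The helper name, the toy dataset and the mean-predicting "model" are hypothetical placeholders chosen for the example, not taken from any particular library.

    import random

    def k_fold_cross_validation(data, k, train_and_score):
        """Split data into k folds; in each pass hold one fold out for
        validation, train on the remaining k-1 folds, and collect the score."""
        data = list(data)
        random.shuffle(data)                      # shuffle before splitting
        folds = [data[i::k] for i in range(k)]    # k roughly equal folds
        scores = []
        for i in range(k):
            validation = folds[i]
            training = [x for j, fold in enumerate(folds) if j != i for x in fold]
            scores.append(train_and_score(training, validation))
        return scores

    # Hypothetical usage: evaluate a trivial mean-predicting "model" on toy data.
    toy = [(x, 2 * x + 1) for x in range(20)]

    def train_and_score(train, valid):
        mean_y = sum(y for _, y in train) / len(train)                 # "training"
        return sum((y - mean_y) ** 2 for _, y in valid) / len(valid)   # held-out MSE

    print(k_fold_cross_validation(toy, k=5, train_and_score=train_and_score))

Each of the k scores comes from data the model never saw during that pass, so their average is a less optimistic estimate of generalisation than performance on the training data itself.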

Used on page 178