A–B test is a form of automated usability evaluation, used particularly on the web. It involves releasing two (or more) minor variants of an application, A and B: for example, two different colour schemes or positions for a call-to-action button. Metrics such as click-through rate or time on page are gathered automatically and used to determine which variant performs best.
Also used in hcistats2e: Chap. 1: page 11; Chap. 13: page 161; Chap. 15: page 188
Also known as: a-b testing, a–b testing
Used in glossary entries: usability evaluation
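As an illustrative sketch of how the automatically gathered metrics might be compared, the following Python snippet runs a two-proportion z-test on click-through rates for two variants. The counts, function name, and significance threshold are all hypothetical, not taken from the source; in practice a dedicated statistics library would normally be used.

```python
import math

def compare_ctr(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test on click-through rates (hypothetical helper)."""
    p_a = clicks_a / views_a          # CTR of variant A
    p_b = clicks_b / views_b          # CTR of variant B
    # Pooled proportion under the null hypothesis (no difference)
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical data: A got 120 clicks in 2400 views, B got 165 in 2400
p_a, p_b, z, p_value = compare_ctr(120, 2400, 165, 2400)
```

A small p-value here would suggest the difference in click-through rate between the two variants is unlikely to be due to chance alone, though deciding when to stop collecting data and what threshold to apply are separate design questions.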
