Computational Foundry, Swansea University, Wales
Talk at Evaluation, SummerPIT 2019, Aarhus University, 15th August 2019
Abstract

Sometimes evaluation is straightforward. Perhaps our goal is to create a system, in a well-understood environment, that is fastest to use or has the fewest errors. In this case, and if we believe design choices are effectively independent, we can run a lab or in-situ study to compare design alternatives. However, many things do not fit into this easy-to-evaluate category. Sometimes our goals are more diffuse or long term: sustainability, behavioural change, improving education. Sometimes the thing we wish to 'evaluate' is 'generative', such as toolkits or frameworks used by developers or designers to create systems that are then used by others. In these cases simple post-hoc 'try it and measure it' approaches to evaluation fail, or at best give partial results. However, post-hoc evaluation is only one way to validate work – data (quantitative or qualitative) should be combined with an understanding of mechanism, how things work, in order to justify, generalise and innovate.

Keywords: evaluation, validation, mechanism, sustainability, user experience, statistics, empirical methods, user studies, UX, HCI
http://alandix.com/academic/talks/PIT-2019-validation-and-mechanism/
Alan Dix, 13/8/2019