It is clear from informal discussions with colleagues at different institutions, and from more formal feedback sessions, that many are unsure how the different aspects of REF2014 were graded. Many myths persist, for example about journal vs. conference papers.
The intention of the entire REF exercise is to be as transparent as possible about process, while maintaining absolute confidentiality about specific details. Many of the processes are described in the pre-submission guidance, the REF manager’s report, and the Panel minutes and reports.
Each panel and sub-panel differed in its details, and here I will give a step-by-step account of the sub-panel 11 evaluation processes. Much of this repeats information in the documents above, in particular the SP11 section of the Panel B report, but it is gathered here in stepwise order.
It should be noted that while each panel and sub-panel had its own processes depending on the nature of the disciplines involved, SP11 was unusual in its reliance on algorithms and the consequent impact on process and practice.
Although there was some overlap, the evaluation proceeded approximately as follows:
- Output evaluation (Jan–May)
- Impact evaluation (June–July)
- Environment evaluation (Aug–Sept)
- Textual feedback (Sept–Oct)
Each of the three main sections is described on a separate page.