Performance indicators

I am chairing a University working group concerned with identifying the key performance indicators we should be producing and reviewing at programme level. This is quite a challenge, for a number of reasons I shall explore in this post. I should add that I did volunteer for the role because I find it so interesting.

I shall illustrate the issues through an example, retention data, which is currently the subject of review by our internal audit function. When interviewed for the audit, I had to admit with some embarrassment that I did not know how the calculation is done, couldn’t understand the standard programme data reports on retention, and had some awareness that there was an official institutionally adjusted benchmark, whilst not knowing what it was. This is partly because retention has not been a great problem at Newcastle; we haven’t worried too much about monitoring it, because we didn’t think much was at stake for us. The advent of £9,000 fees has sharpened the profile of the issue, though, as drop-out between confirmation in August and entry in September has emerged as an issue, never mind drop-out at later stages.

At a practical level we have been doing more and more to support students and to maximise retention: a stronger emphasis on good induction, clearer requirements for personal tutorial meetings in the first semester, and the introduction of student mentoring schemes. In some individual academic units much good work is done in identifying students at risk and supporting them. My focus here is on the question of key performance indicators.

The principal issues which I see are:

a) how should we measure retention, or drop-out, its converse? Are we just looking at how many of the starters made it through to Stage 2, or how many completed the whole programme? What’s our starting date? It could be confirmation in August, registration or even the magical 1 December when data returns are made. If we focus on progression to Stage 2 of the programme, what about students who transfer to another degree course? They are not lost to the University and may be happier in their new degree programme. What about students whose progress is delayed by academic failure or personal problems, but who eventually make it? How should we account for them? Do they represent a failure to retain or an eventual success? What seems a relatively simple issue of measurement now looks much more complex and might lead us to use a number of different indicators of retention; the sketch after this list shows how much the figure can move depending on the definitions chosen. It is also clearly a very difficult thing to measure technically, because it requires a system which can track individual students from start to finish and then amalgamate the data.

b) even within an institution it is clear that retention has a subject dimension. Hard technical disciplines seem to lose more students than other disciplines, perhaps because more fail to proceed from Stage 1, perhaps because more are frightened off by the demands of the discipline. Key performance indicators mean nothing without something to compare them to, whether this be historical trend data, targets or subject benchmarks. So what is the relevant comparison? How should we know how well we’re doing?

c) retention also varies between institutions, largely because of the quality of the intake. Highly qualified entrants are less likely to drop out than those less well qualified. Oxford and Cambridge don’t have a retention problem. Institutional benchmarks will of course take such factors into account, but cross-sector comparisons are clearly tricky.

d) the most important issue, though, concerns whether retention is something we should be monitoring at all, or at least whether monitoring it might have unintended consequences. We need to remember that key performance indicators are not neutral technical measures, but measures which drive (and are designed to drive) behaviour. We have seen numerous examples in the NHS of how government targets drive behaviour, and not always in beneficial ways. Retention in principle sounds desirable. It is clearly wasteful if students don’t complete their programmes; the HEI concerned will of course lose money. However, in some cases it will be in the student’s best interests to leave the programme because they are unhappy, are ill-suited to it or would prefer to be somewhere else. In other cases students may be successfully supported through a wobble to eventual success. Issues about standards also come into play. Whilst it’s clear that we’d want the overwhelming majority of any cohort to be successful, there still has to be a minimum standard of achievement, and it is probable that some students won’t make this, either through lack of effort or through lack of ability.

e) retention, or its converse, drop-out, is also linked to efforts to widen participation and to mission. HEIs which make great efforts to widen participation take more risks with their applicants and have a wider recruitment funnel. For example, the Open University has a higher drop-out rate because it is so open. Continental European universities are often regarded in the UK with disapproval because of their huge first-year drop-out rates, which appear so wasteful and inefficient. However, this reflects the openness of HE to those with school-leaving qualifications. Is it wrong to give so many the chance to experience HE, even when you know a high proportion won’t make it to later stages? An emphasis on retention could easily discourage risk-taking in admissions and initiatives which widen access to HE.
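
To make the measurement point in (a) concrete, here is a minimal sketch of how the same cohort can produce quite different retention figures depending on where you start counting and what you treat as being retained. The student records, field names and numbers are invented purely for illustration; they are not our actual data model or our actual figures.

```python
# A purely illustrative sketch: the records, field names and figures below are
# invented for the example and do not reflect any institution's actual data model.
from dataclasses import dataclass

@dataclass
class Student:
    confirmed_august: bool        # place confirmed in August
    registered: bool              # actually registered in September
    reached_stage_2: bool         # progressed to Stage 2, possibly after a delay
    transferred_internally: bool  # moved to another programme within the University
    completed_programme: bool     # completed the whole programme

def retention_rate(cohort, is_starter, is_retained):
    """Share of students counted as starters who also count as retained."""
    starters = [s for s in cohort if is_starter(s)]
    return sum(is_retained(s) for s in starters) / len(starters) if starters else 0.0

# A hypothetical cohort of five students.
cohort = [
    Student(True, True,  True,  False, True),   # straightforward success
    Student(True, False, False, False, False),  # lost between confirmation and entry
    Student(True, True,  False, True,  True),   # transferred within the University and completed
    Student(True, True,  True,  False, False),  # reached Stage 2 after a delay, then left
    Student(True, True,  False, False, False),  # left during Stage 1
]

# The same five students, three different "retention rates".
print(retention_rate(cohort, lambda s: s.confirmed_august, lambda s: s.registered))      # 0.8
print(retention_rate(cohort, lambda s: s.registered, lambda s: s.reached_stage_2))       # 0.5
print(retention_rate(cohort,
                     lambda s: s.registered,
                     lambda s: s.completed_programme or s.transferred_internally))       # 0.5
```

None of the three figures is wrong; they simply answer different questions, which is why a single headline “retention rate” conceals a series of definitional choices.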

In summary, this one example illustrates:

a) the difficulties of measurement

b) the difficulties of setting appropriate benchmarks for both subjects and institutions

c) the potential impact on practice of performance indicators which may achieve one goal (higher retention), but only at the expense of another (poorer access).

d) that performance indicators can encourage an overly simplistic approach (e.g. assuming that dropping out is always undesirable) which is heavily influenced by financial considerations.