University policy: evidence and evaluation

The ICaMBlog this week features an article from Professor Bernard Connolly. Bernard is retiring today (March 31st 2016) and so we asked him if he could write a post on ‘anything he wanted’. Here, Bernard discusses his frustrations with the frequent lack of evidence-based policy making in universities.

I began my academic career as an undergraduate student at Sheffield University in 1973; I finish as the Professor of Biochemistry at Newcastle in 2016. Possibly the biggest change to have occurred over these forty or so years is the degree of surveillance and monitoring to which everybody at the University is subject. During my undergraduate studies, attendance at lectures was voluntary and it was up to me to decide if, and when, I should consult my tutor. As a PhD student and postgraduate there was no concept of mandated and recorded meetings with my supervisors, and the only reports I was expected to prepare were first drafts of eventual publications. When I started as a lecturer at Southampton University, I had about five minutes with the Head of Department, was shown an office and left to get on with it. I had no formal meetings with my “line manager” (indeed, the concept did not exist) and I was prepared for undergraduate teaching with a single three-hour session. Today, undergraduate attendance at lectures is recorded, as are compulsory meetings with tutors; sanctions are applied for non-compliance. PhD students have multiple supervisors/mentors and are required to record compulsory meetings with them. A number of intermediate reports must be produced and assessed along the road to graduation with a doctoral degree. Staff have a PDR (performance and development review) once a year and are required to complete forms detailing the work they perform and how they occupy every hour at, and away from, work. Although I personally prefer the old system, this article is not aimed at discussing which is better. Rather, it is to ask how the University ensures that policy changes, often introduced at the cost of great disruption, are based on best current practice. Further, how are the outcomes of these changes determined and any benefits measured?

[Image: Conquest quote]

A few years ago, all undergraduate teachers were informed that they would be required to use the “buddy system”. Here, two academics are paired and each is required to attend, and write a report on, one of their buddy’s lectures. Although this is not onerous, I enquired about evidence that such a scheme was beneficial. After much badgering I was eventually sent a link to two publications. The first, admittedly in a peer-reviewed journal, consisted of about ten subjects being asked if they found the scheme helpful. This paper could be used as an example of how not to do science, and I can only assume that the author, a young medical doctor, felt that any publication, no matter how poor, would benefit his future career. The second was not even peer reviewed; rather, it was a trenchant statement from a supporter of the scheme, bereft of any evidence. The purpose of the buddying is assuredly to improve undergraduate teaching and, as we survey our students almost to destruction, I enquired whether an improvement in quality had been observed. I received no reply and concluded that this scheme was started on the whim of the dean involved, based on little evidence and with no mechanism in place to observe its consequences.

This example is trivial, but similar considerations apply to the much more consequential issues addressed in the first paragraph. I have yet to be presented with evidence that constant monitoring and assessing of students and staff is based on rigorous studies that clearly demonstrate positive effects. We survey our undergraduates continually, and for postgraduates and postdoctoral workers we have data on their accomplishments (do PhDs graduate on time, how many publications result from their work, what are their future job successes?). But these data are never correlated with policy changes to measure their efficacy. Similarly for staff: following the introduction of the PDR, has teaching improved, has grant funding increased, have more and better publications resulted? Overall, is the University a better place in which to work, perhaps monitored by absenteeism rates, which correlate well with staff happiness? As academics we must insist that anybody introducing new policies presents the evidence underpinning the change. A system for monitoring outcomes, one that places minimal burden on students and staff, should be demonstrably in place. While benefits at the individual level may be small, they should surely be apparent over the entire University body. Finally, anybody introducing procedures based on little evidence or not leading to favourable outcomes should rapidly be removed from any position of authority.


2 thoughts on “University policy: evidence and evaluation”

  1. A very insightful article written by Prof. Connolly on how there is an increasing lack of evidence to support policies made by universities. This is definitely a must-read because it comes from a man with great experience. I loved the last sentence, “Finally, anybody introducing procedures based on little evidence or not leading to favourable outcomes should rapidly be removed from any position of authority”, and I think every university should implement it.
