Could Artificial Intelligence Free Up Valuable Time for Academics?

A busy academic might receive many requests to peer review papers each month, and those involved in teaching will have student marking to contend with too. They may sit on grant-awarding committees or journal editorial boards, or be required to decide whether to accept conference presentations for meetings. All of this comes on top of the processes they must follow to store their own data appropriately, ensure their projects have appropriate ethical approval, and make any changes requested of their own work as they respond to reviewers’ and editors’ comments.

None of these processes is actually research or teaching; they are what research and teaching require in order to be robust and carefully monitored. As I tell PhD students early in their studies (1), there is no governing body for academic research: the whole process is self-policing. Peer review is essential for confidence that research processes are robust, and marking students’ work is likewise a necessity for robust judgements on students’ grades.

However, is everything being done as efficiently and as objectively as it could be? Assuming appropriate ethical approval has been granted and the work has been considered worthy of funding from some source, it seems a logical extension that the findings should be published. Even if the results are null, the academic community should be made aware of the work. An experiment that fails to support the hypothesis being tested may prevent that hypothesis from being needlessly tested again, provided the null results are reported.

So, assuming all research work should be published, there is then a question as to why journals choose to accept or reject articles for publication. In the days of hard-copy journals this selective process had genuine practical relevance. Now, with journals being viewed almost entirely online, there is no real pressure on the number of articles they can present. A judgement about the merit or noteworthiness of the research would seem to be the key aspect that remains: journals retain their esteem by selecting research work of a particular standard.

To some extent, a judgement on the research question was made before the results were derived, analysed, discussed, written up and published, because funders decided whether the research question was worthy of investigation. The concept of pre-registration of studies has been presented on several occasions and is being put into practice, with Prof Marcus Munafò championing this cause (2). So, with ways to determine whether research is interesting or worth discussing already in place, why should there be a further decision after the work has been written?

It might also be expected that the work has been conducted ethically, since ethical approval would be required before commencing. It is, in fact, a little ironic that the ethical approval process can already be completed without human intervention if no risks are flagged. It would seem reasonable to suggest that pre-registered studies would be unbiased in the set-up of the research and the analysis of the data generated. Then the only question that remains for the peer reviewer is whether the paper has been written up in a logical way, with good syntax and sound grammar, so that readers can interpret it. So why not employ Artificial Intelligence to check the readability of submissions and suggest improvements to the text where required? These are methods already employed by software packages such as Grammarly.
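To make the idea concrete, here is a minimal sketch of what an automated readability check on a submission might look like, using the standard Flesch reading-ease formula. The function names and the syllable-counting heuristic are my own illustrative choices, not any particular tool’s implementation:

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Flesch reading ease: higher scores mean easier text.
    Formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

def flag_long_sentences(text: str, max_words: int = 30) -> list[str]:
    # Flag sentences over max_words words as candidates for revision.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return [s for s in sentences
            if len(re.findall(r"[A-Za-z']+", s)) > max_words]
```

A real system would of course go further than surface metrics like these, but even this level of check could triage submissions before any human reviewer reads them.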

A case could also be made for marking students’ work using a developing computerised algorithm (or AI). Replacing human markers entirely would probably prove an uncomfortable step for many, but replacing one marker of a pair with an AI moderator seems a reasonable approach. Running numerous previous years’ exam scripts through a developing algorithm should allow the system to be tuned to identify the key factors behind the scores awarded. If the AI and human marker disagree, a second human marker could moderate. Experiments taking place with school students’ work (3) could help show how well this might work.
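The moderation workflow described above can be sketched in a few lines. This is only an illustration of the escalation logic; the AI’s score would come from a model tuned on past scripts, which is out of scope here, and the `tolerance` threshold is a hypothetical parameter:

```python
def moderate(ai_score: int, human_score: int, tolerance: int = 5) -> dict:
    """Compare an AI marker's score with a human marker's.
    If they agree within `tolerance` marks, average them;
    otherwise escalate to a second human marker."""
    if abs(ai_score - human_score) <= tolerance:
        return {"final": round((ai_score + human_score) / 2),
                "escalate": False}
    # Disagreement too large: a second human marker decides.
    return {"final": None, "escalate": True}
```

The tolerance could itself be calibrated from historical double-marking data, so that the AI escalates no more often than human marker pairs already disagree.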

It would be a brave step for any university to instigate marking partially by a computerised algorithm, and an equally bold move for a publisher to have articles reviewed this way. However, those that do may benefit from freeing up their academics’ time to be more creative in the research and teaching processes.

References

  1. https://workshops.ncl.ac.uk/fms/integrity/
  2. https://academic.oup.com/ntr/article/19/7/773/3106460
  3. https://ofqual.blog.gov.uk/2020/01/09/exploring-the-potential-use-of-ai-in-marking/