Could Artificial Intelligence Free Up Valuable Time for Academics?

A busy academic might receive many requests to peer review papers each month; those involved in teaching will have student marking to contend with too. They may sit on grant-awarding committees or journal editorial boards, or be asked to decide whether to accept conference presentations for meetings. All of this comes on top of the processes they must go through to store their own data appropriately, ensure their projects have appropriate ethical approval, and make any changes requested of their own work as they respond to reviewers' and editors' comments.

None of these processes is actually research or teaching; they are what research and teaching require in order to be robust and carefully monitored. As I tell PhD students early in their studies (1), there is no governing body for academic research: the whole process is self-policing. Peer review is essential for confidence that research processes are robust, and marking students' work is likewise necessary to ensure robust judgements on students' grades.

However, is everything being done as efficiently and as objectively as it could be? Assuming appropriate ethical approval has been granted and the work has been considered worthy of funding from some source, it seems a logical extension that the findings should be published. Even if the results are null, the academic community should be made aware of the work. An experiment that does nothing to support the hypothesis being tested may still prevent that hypothesis being tested again, provided the null results are made available.

So, assuming all research work should be published, there is then a question as to why journals choose to accept or reject articles for publication. In the days of hard-copy journals this selective process had a genuine practical relevance. Now that journals are viewed almost entirely online, there is no real constraint on the number of articles they can present. A judgement about the merit or noteworthiness of the research would seem to be the key remaining consideration: journals retain their esteem by selecting research work of a particular standard.

To some extent, a judgement on the research question was made before the results were derived, analysed, discussed, written up and published, because funders decided whether the research question was worthy of investigation. The concept of pre-registration of studies has been raised on several occasions and is being put into practice, with Prof Marcus Munafò championing this cause (2). So, with ways to determine whether research is interesting or worth discussing already in place, why should there be a further decision after the work has been written up?

It might also be expected that the work has been conducted ethically, since ethical approval would be required before commencing. It is actually a little ironic that the ethical approval process can already be completed without human intervention if no risks are flagged. It would seem reasonable to suggest that pre-registered studies would be unbiased in the set-up of the research and the analysis of the data generated. The only question that then remains for the peer reviewer is whether the paper has been written up in a logical way, with good syntax, and in grammatically sound prose for readers to interpret. So why not employ Artificial Intelligence to check the readability of submissions and suggest improvements to the text where required? These are methods already employed by software packages such as Grammarly.
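To give a flavour of how such an automated readability check might work, here is a minimal sketch in Python using the well-known Flesch reading-ease formula. The syllable counter, the threshold of 30, and the function names are illustrative assumptions on my part, not a description of how any real tool (Grammarly included) is implemented:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels, minimum one.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    # Flesch reading-ease formula: higher scores mean easier reading.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

def flag_for_revision(text: str, threshold: float = 30.0) -> bool:
    """Flag a submission whose readability falls below the threshold."""
    return flesch_reading_ease(text) < threshold
```

A journal could run each submission through such a check and return suggested passages for revision to the author automatically, before any human reviewer is involved.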

A case could also be made for marking students' work using a developing computerised algorithm (or AI). Replacing human markers entirely would probably prove an uncomfortable step for many, but replacing one marker of a pair with an AI moderator would seem a reasonable approach. Running numerous previous years' exam scripts through a developing algorithm should allow the system to be tuned to identify the key factors in awarding scores. If the AI and human marker disagree, a second human marker could moderate. Experiments taking place with school students' work (3) could help show how well this might work.

It would be a brave step for any university to instigate marking partially by a computerised algorithm, and an equally bold move for any publisher to review articles this way. However, those that do may benefit from freeing up their academics' time to be more creative in the research and teaching processes.

References

  1. https://workshops.ncl.ac.uk/fms/integrity/
  2. https://academic.oup.com/ntr/article/19/7/773/3106460
  3. https://ofqual.blog.gov.uk/2020/01/09/exploring-the-potential-use-of-ai-in-marking/

3 Replies to “Could Artificial Intelligence Free Up Valuable Time for Academics?”

  1. AI already saves time, as computer-typed scripts are easier to read than hand-written scripts. I think a broader role for AI is possible, but I doubt whether it can autonomously assess papers. Never mind; as long as the algorithm is fine, it does not need autonomy? Even that creates problems, as every algorithm starts from a particular bias that is simply taken for granted. If this bias is relatively uncontroversial, we may have a significant advance here. If the bias is controversial and we come to rely on it, we would be in real trouble. As long as we use AI systems as valuable aids to assessing, rather than as assessors, we should be fine with your proposal, Richard.

    1. Thanks very much for this, Jan. Do you think the biases that humans carry are any different to the ones an algorithm has? My perception is that a computer algorithm can be adapted to be the product of many people’s views, whereas humans do seem to get entrenched in ways of thinking. To be honest, I’m more interested in the prospect of AI for peer review than marking, just because I think all the work that has been completed appropriately should be published, and if the AI can determine that the work has been completed appropriately then the humans can decide where the work is published… the question within that is: do papers and journals need to be ranked?

  2. Some really interesting ideas about the potential role of AI in academia in this blog post! A related point about AI in the earlier stages: predicting research impact and funding…

    I spotted a Nature article published recently (https://rdcu.be/ckUv5) which claims that a machine learning system could be used to predict which research papers/authors will be impactful in the future, and this could be used to influence/streamline funding portfolios ahead of time… but a lot of people are very het up about the authors’ approaches, claiming these methods would perpetuate the inherent biases in academia and that this will translate to a “rich-get-richer”, “poor-get-poorer” result in terms of funding and recognition. Here’s the tweet from Nature Portfolio that caused a bit of a stir in the replies section: https://twitter.com/NaturePortfolio/status/1394625844967088130

    On a very tangible level, I’d be troubled by the algorithm’s inability to predict how literature surrounding unexpected occurrences becomes very important very quickly – events like global pandemics, say! Secondly, newcomers can publish excellent, high-impact research – I’d worry that this machine-learning approach would decrease mobility if it were able to influence funding portfolios.

    On a less tangible note: as a non-expert, it seems to me that we’d have to be very very careful in how to train any kind of AI-led method so we can avoid perpetuating biases – using literature from the past 10 years to train the algorithm would still contain some biases, but imagine if we were to use literature from 1950 onwards, say! The very worst -isms of academia would no doubt come back to bite us.

    Put rubbish in, get rubbish out, seems to be the overall warning message, from what I can see!
