Supporting Placements

The Placements system focuses on supporting assessment, reflection, and three-way communication and file-sharing during placements (between students, external supervisors and NU tutors/staff). For 2023/4, the system was extended to support evidencing of individual competencies/outcomes, with a student dashboard and the ability for ad-hoc assessors to provide assessment without needing to log in.

Background

The Placements system was developed by FMS TEL as a sub-system of NU Reflect and has been used by PGCE programmes (ECLS) since 2017/18. The system absorbed established practice from the PGCE programmes, but was designed to be configurable for potential use by other programmes. It has been available to all programmes at the University since September 2019 and has since been adopted by DClinPsy and other programmes.

Governance of NU Reflect (including Placements) is via a management team (academic lead, LTDS lead & FMS TEL lead), which reports to Digital Education Sub-Committee (DESC). A ‘baseline’ for the Placements system was developed following a period of consultation in 2022/23:

  • Custom forms – manage/create custom forms for a placement scheme. Purpose: the ability to develop context-specific bespoke forms for student completion in line with programme requirements, supporting a range of functions (e.g. assessment, reflection, placement evaluation).
  • Rubric-based assessment – including scheme-specific skills/competencies and level identifiers. Purpose: the ability to assess student work against skills/competencies in line with the programme context.
  • Three-way file sharing – between student, external supervisor and University tutor. Purpose: the ability to share forms with relevant users to support scheme requirements.
  • Competency sign-off – sign-off of skills/competencies/outcomes, including by external supervisors. Purpose: to support sign-off by external supervisors and other third parties.
  • Reporting – data feeds or data download. Purpose: required for reporting to funding/regulatory bodies.
  • Admin access – tools to monitor the placement cohort and assign students to placements, locations and supervisors, and to set automated deadlines relative to placement dates. Purpose: scheme-specific control to amend information as/when required, delegated to school/programme teams.
Baseline requirements and features of the Placement system

PGCE students spend the bulk of their time on placements in schools around the region, with school-based supervisors who support and assess them. A rubric-based assessment tool was developed and configured so that these supervisors can assess progress and provide feedback against national Teaching Standards (see the screenshot below). Supervisors click on the desired Level Descriptors and can also add qualitative feedback.

Rubric-based Assessment in the Placements System

Additional forms (e.g. weekly progress) were set up; these were designed to be customisable, as the Primary and Secondary PGCE programmes had different requirements. A key design feature was reducing the burden on admin teams, in particular by automating deadlines: forms are configurable with deadlines set relative to placement start and end dates, and with who will complete them (supervisor, student, University tutor etc.).
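
The post does not show the underlying configuration, but a minimal sketch of what a form definition with relative deadlines could look like is given below (TypeScript; the field names, offset model and resolveDeadline helper are hypothetical illustrations, not the Placements system's actual schema).

```typescript
// Hypothetical sketch of a form configuration with deadlines expressed
// relative to placement dates. Names are illustrative only.
type Completer = 'student' | 'supervisor' | 'tutor';

interface FormConfig {
  name: string;
  completedBy: Completer;
  // Deadline as an offset (in days) from a placement date, so the concrete
  // date can be calculated per student when placements are assigned.
  deadline: { relativeTo: 'placementStart' | 'placementEnd'; offsetDays: number };
}

const weeklyProgress: FormConfig = {
  name: 'Weekly progress review',
  completedBy: 'supervisor',
  deadline: { relativeTo: 'placementStart', offsetDays: 7 },
};

// Resolve a concrete deadline for one student's placement.
function resolveDeadline(form: FormConfig, placementStart: Date, placementEnd: Date): Date {
  const base = form.deadline.relativeTo === 'placementStart' ? placementStart : placementEnd;
  const due = new Date(base);
  due.setDate(due.getDate() + form.deadline.offsetDays);
  return due;
}
```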

Students are automatically imported into the system, based on their programme or module registrations with the University. Programme administrators manage external accounts for placement supervisors, who do not have University logins.

Initially, rubrics and forms were set up by the development team; over time, however, the team in ECLS has generally self-managed its use of the system, creating new forms and making old forms inactive. Form and rubric data can be exported for reporting purposes, and University tutors can view supervisors' marks for the main Teaching Standards, collated across a student's (two or three) placements, before entering a final overall assessment. When there have been major changes to the professional frameworks, support from the development team has been needed.

The DClinPsy programme (Psychology), where students spend the bulk of their time on placements, followed a similar model to the above PGCE programmes.

Developments for 2023/4

Pharmacy began using the Placements system in 2023/4. Its requirements were significantly different from those of existing programmes using the system and required extending the software, which was resourced by FMS. In particular, the focus is on evidencing individual competencies (rather than all competencies being assessed in the same rubric). Also, rather than a set supervisor completing forms, there was a requirement for sign-off of competencies by ad-hoc supervisors, without the need to log in. Established practice in MBBS was applied, with students entering the assessor's email address, which generates an email containing a secure link to the required assessment form. A 'dashboard' was developed for students and their tutors to see evidence by competency for each year of the programme.
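
The post does not describe how those log-in-free secure links are generated. One common approach is a signed, expiring token, sketched below in TypeScript/Node; everything here (the secret, URL, parameter names and one-week expiry) is an assumption for illustration, not the actual MBBS or Placements implementation.

```typescript
import { createHmac } from 'crypto';

// Hypothetical sketch of a signed, expiring assessment link of the kind
// described above; not the real system's implementation.
const SECRET = process.env.LINK_SECRET ?? 'change-me';

function makeAssessmentLink(assessmentId: string, assessorEmail: string): string {
  const expires = Date.now() + 7 * 24 * 60 * 60 * 1000; // assume links are valid for one week
  const payload = `${assessmentId}|${assessorEmail}|${expires}`;
  const signature = createHmac('sha256', SECRET).update(payload).digest('hex');
  const params = new URLSearchParams({
    id: assessmentId,
    email: assessorEmail,
    expires: String(expires),
    sig: signature,
  });
  return `https://example.ac.uk/placements/assess?${params.toString()}`;
}

function verifyLink(id: string, email: string, expires: string, sig: string): boolean {
  if (Date.now() > Number(expires)) return false; // link has expired
  const expected = createHmac('sha256', SECRET).update(`${id}|${email}|${expires}`).digest('hex');
  return expected === sig; // only links generated with the server secret verify
}
```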

Student dashboard showing evidence by competency

The assessment tools and processes are intended to closely match those that students will need to use after graduating, when working with professional vocational ePortfolios. The new features of the software were designed to be configurable, to support the competency/skills frameworks of other programmes.

Further work will be undertaken, including developing a process for students to select a sub-set of items from their portfolio for each competency, for their end-point assessment.

FMS Feedback – From Excel sheets to detailed online feedback

The MBBS programme collects a lot of assessment information that needs to be displayed in a way that is useful to students, so that they can improve their grades by focusing on the right areas. They called upon the web skills of the FMS TEL team to design and implement a web application that could take assessment results, process them, and show them to the students in a visual and interactive way.

Using templates

The first step towards this was agreeing a template that the results could be stored in. The types of assessment the MBBS programme uses are varied and quite detailed in their scope, so the team required a means of taking all of this variation and detail and creating something usable. The assessment team started by using complex Excel documents to collect and store all the assessment results. Each assessment type (WriSkE, MOSLER, SBA, OSCE…) needed a unique Excel template to store all the student scores, along with a way to map the assessment structure to the curriculum outcomes.

Example WriSkE template

Processing the Spreadsheets

Once we had agreed on the template structure, we could then focus on how to process the spreadsheets for use in a web application. We planned for the heavy concurrent use the web application would undergo when all the students logged in and tried to access their grades at the same time. The best solution was to minimise the use of complex database structures and instead store the results in pre-processed files, one for each student per assessment. The format we chose was JSON, which allowed us to rely on the speed of the server to provide the data.

Custom processing scripts were written for each assessment type to create these files. This means an administrator from the assessment team can log in to the site, go to the admin tools, choose an assessment type, set up a few settings including a release date, attach the assessment to an uploaded taxonomy (which maps the exam structure to learning outcomes), attach the Excel file, and process it. The site then goes through the spreadsheet and creates each assessment file ready for the students.
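
As a rough illustration of that workflow (not the team's actual scripts), the sketch below reads an uploaded spreadsheet with the SheetJS 'xlsx' library and writes one pre-processed JSON file per student. The column names, output shape and folder layout are assumptions.

```typescript
import * as XLSX from 'xlsx';
import { writeFileSync, mkdirSync } from 'fs';

// Illustrative sketch only: reads an assessment spreadsheet and writes one
// pre-processed JSON file per student, as described above. Column names and
// the output shape are assumptions, not the real templates.
interface ResultRow { StudentId: string; Question: string; Outcome: string; Score: number; Max: number; }

function processSpreadsheet(xlsxPath: string, assessmentId: string, releaseDate: string): void {
  const workbook = XLSX.readFile(xlsxPath);
  const sheet = workbook.Sheets[workbook.SheetNames[0]];
  const rows = XLSX.utils.sheet_to_json<ResultRow>(sheet);

  // Group the flat rows by student so each student gets a single file.
  const byStudent = new Map<string, ResultRow[]>();
  for (const row of rows) {
    const existing = byStudent.get(row.StudentId) ?? [];
    existing.push(row);
    byStudent.set(row.StudentId, existing);
  }

  mkdirSync(`results/${assessmentId}`, { recursive: true });
  for (const [studentId, studentRows] of byStudent) {
    const file = {
      assessmentId,
      releaseDate,
      items: studentRows.map(r => ({ question: r.Question, outcome: r.Outcome, score: r.Score, max: r.Max })),
      total: studentRows.reduce((sum, r) => sum + r.Score, 0),
    };
    // One small file per student per assessment keeps page loads cheap
    // under heavy concurrent use, as the post explains.
    writeFileSync(`results/${assessmentId}/${studentId}.json`, JSON.stringify(file, null, 2));
  }
}
```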

Admin Upload Form

Display the Results

The final step was to decide how to display this detailed assessment information to the student. We chose to use a charting library called Highcharts. This allowed us to utilise a whole suite of charts and graphs to display the raw results in an interactive way.

One of the core charts we used was the quartile (boxplot) graph, which allowed us to plot a student's scores against the cohort's. This means that students can see how they are performing in the context of their cohort, which many of them appreciate. We also make heavy use of bar charts that can be drilled down into, and spiderweb charts that show the same information in a visually different way. Letting students modify and change the display to suit their preferences was also key.
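
For illustration, a minimal Highcharts boxplot of the kind described, overlaying a single student's scores on the cohort's quartiles, could be configured roughly like this (the categories, values and container id are made up; the real dashboards are more elaborate).

```typescript
import Highcharts from 'highcharts';
import HighchartsMore from 'highcharts/highcharts-more'; // module that adds the boxplot series type
HighchartsMore(Highcharts);

// Minimal illustrative example of the cohort-vs-student boxplot described
// above; categories, values and the container id are made up.
Highcharts.chart('results-chart', {
  title: { text: 'Scores by topic area' },
  xAxis: { categories: ['Cardiology', 'Respiratory', 'Pharmacology'] },
  yAxis: { title: { text: 'Score (%)' }, max: 100 },
  series: [
    {
      type: 'boxplot',
      name: 'Cohort',
      // Each entry is [low, lower quartile, median, upper quartile, high].
      data: [[35, 52, 61, 72, 90], [40, 55, 65, 75, 88], [30, 48, 58, 70, 85]],
    },
    {
      type: 'scatter',
      name: 'Your score',
      data: [[0, 68], [1, 59], [2, 74]], // [category index, student's score]
      marker: { radius: 5 },
    },
  ],
});
```

The drill-down bar charts and spiderweb (polar) charts mentioned above follow the same pattern, swapping the series type and data shape.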

Finally, we added tabs to show the exam structure and an area where supporting documents, such as assessor comments, could be added.

WriSkE results of a student

Each time a student clicked on an assessment, the site would load their file and convert it into a visually rich interface of charts, graphs and links to drill down into the data.

System Success and Expansion

The FMS Feedback system has been used successfully by the MBBS programme since 2014. It is held in high regard by the administrative team, who appreciate its flexibility and ease of use. For students, the detailed data displayed allows them to gain a very granular understanding of their levels of mastery of individual skills and topic areas, and to spot patterns in their performance. This means they can focus their studies on areas identified for improvement, with a view to increasing their overall competency.

It has since been expanded to cover a few assessments from dental programmes that had a similar structure and process to the original MBBS assessments, such as SBAs and OSPEs.

If you would like to learn more about the FMS Feedback system, please contact the FMS TEL Team.

Talking about ChatGPT

There has been a lot of discussion in the FMS TEL office about ChatGPT over the past couple of weeks. Michael and Eleanor have spent some time putting this post together to help our readers to learn a bit more about ChatGPT, and to catch up on the discourse around it in HE in general.

We used ChatGPT to generate an about page for the blog last week, which you may have seen. It provides a good example of an introduction for a website. It feels somewhat generic, it got a few things wrong, and it expresses itself a little strangely in places. At the same time, it's a decent starting point to work from.

What is ChatGPT?

ChatGPT is one of many advanced Artificial Intelligence models which has been trained to understand inputs (e.g. questions) in natural language (hence ‘chat’) and answer them. Interacting with ChatGPT feels similar to interacting with a chatbot or virtual assistant on the surface, but the technologies are completely different.

How does Chat GPT work?

The data that ChatGPT draws from is an offline dataset created in 2021. The exact content of the dataset is not clear; however, ChatGPT is able to formulate responses on a massive range of topics, so it is safe to assume the dataset is enormous and most likely taken from public internet sites (i.e. not password-protected sites). Part of the training process involved feeding computer code into the model, which helped it learn logic and syntax – something that is present in natural languages in the form of grammar. Feedback was also given to the model to help it improve – this is called 'supervised learning'.

Q: How were you trained?
A: I was trained by OpenAI using a deep learning technique called Transformer-based neural network architecture on a massive amount of text data sourced from the internet. The training process involved inputting large amounts of text data into the model and updating the model's parameters to minimize the difference between the model's predicted output and the actual text in the data. This process was repeated multiple times until the model learned to generate text that is similar to the input data. The end result is a model that can generate human-like text based on the input it receives.
Some of the language is somewhat technical, but the summary is quite clear!

While ChatGPT can produce extended responses drawing on its huge dataset, it doesn’t understand what it is producing – similar to a parrot repeating after you. It can mimic the sounds but has no true understanding of the meaning.

What are people saying about ChatGPT and University Education?

Assessment Security

Given ChatGPT's ability to generate text that looks like a human wrote it, it is natural to worry that students may use it for assessed work. Many automated text editors and translators are already in this market, though they work in a different way. Tools like Word's spellchecker and Grammarly can both be employed to boost writing skills – though these do not generate text. ChatGPT is different in this respect, and it is free, quick, and easy for anyone to use.

“…The big change is that this technology is wrapped up in a very nice interface where you can interact with it, almost like speaking to another human. So it makes it available to a lot of people.”

Dr Thomas Lancaster, Imperial College London in The Guardian

Assessment security has always been a concern, and as with any new technology, there will be ways we can adapt to its introduction. Some people are already writing apps to detect AI text, and OpenAI themselves are exploring watermarking their AI’s creations. Google Translate has long been a challenge for language teachers with its ability to generate translations, but a practiced eye can spot the deviations from a student’s usual style, or expected skill level.

Within HE, clear principles are already in place around plagiarism, essay mills and other academic misconduct, and institutions are updating their policies all the time. One area in which ChatGPT does not perform well is in the inclusion of references and citations – a cornerstone of academic integrity for many years.

Authentic assessment may be another element of the solution in some disciplines, and many institutions have been doing work in this area for some time, our own included. On the other hand, for some disciplines the ability to write structured text is a key learning outcome and is inherently an authentic way to assess.

Consider ChatGPT’s ability to summarise and change the style of the language it uses.

  • Could ChatGPT be used to generate lay summaries of research for participants in clinical trials?
  • Would it do as good a job as a clinician?
  • How much time could be saved by generating these automatically and then simply tweaking the text to comply with good practice?

The good practice would still need to be taught and assessed, but perhaps this is a process that will become standard practice in the workplace.

Digital skills, critical thinking and accessibility

Prompting AI is certainly a skill in itself – just like knowing what to ask Google to get the answer you need. ChatGPT reacts well to certain prompts, such as 'let's go step by step', which tells it you'd like a list of steps. You can use that prompt to get a broad view of the task ahead – for example, to get a structure for a business plan, or an outline of what to learn in a given subject. As a tool, ChatGPT can be helpful in collating information and combining it into an easily readable format. This means that time spent searching can instead be spent reading. At the same time, it is important to be conscious of the fact that ChatGPT does not understand what it is writing and does not check its sources for bias or factual correctness.

ChatGPT can offer help to students with disabilities, or neurodivergent students who may find traditional learning settings more challenging. It can also parse spelling errors and non-standard English to produce its response, and tailor its response to a reading level if prompted correctly.

Conclusions

ChatGPT in its current free-to-use form prompts us to change how we think about many elements of HE. While naturally it creates concerns around assessment security, we have always been able to meet these challenges in the past by applying technical solutions, monitoring grades, and teaching academic integrity. Discussions are already ongoing in every institution on how to continue this work, with authentic assessment coming to the fore as a way of breaking our heavy reliance on the traditional essay.

As a source of student assistance, ChatGPT offers a wealth of tools to help students gather information and access it more easily. It also presents a challenge to students' critical thinking skills, just like the advent of the internet or Wikipedia. It is well worth taking the time to familiarise oneself with the technology, and to explore how it may be applied in education, and in students' future workplaces.

Resources

  • Try ChatGPT – you will need to make an account with OpenAI and possibly wait quite a while as the service is very busy.
  • Try DALL-E – this AI generates images based on your inputs.


Assessment and Marking Refresher

The January assessment period will be upon us all very soon. Why not take some time over the next few weeks to refresh your knowledge?

Below are a few resources previously delivered or created by the FMS TEL team:


Canvas Assessment Training

  • Setting Up Assessments
  • Creating Rubrics
  • Turnitin Plagiarism Detection
  • Marking and Moderation

Effective Rubrics

  • Designing Effective Rubrics for Marking and Feedback
  • Best practice when deciding how to put your rubric together
  • Rubric design workflow
  • Setting up and using Rubrics in Turnitin

Adaptive Release Feedback

  • Comments before grades
  • Technological affordances available
  • Recorded audio feedback

Multiple Markers

  • How to allocate students to a marker
  • How to filter Speedgrader to see only your marking section

Canvas Assessments from Start to Finish

We recently delivered a bespoke training session for the Graduate School about running assessments on Canvas. The session was aimed at Professional Services and teaching staff, and covered the following:

  • Setting up assessments and enabling Turnitin
  • Creating Canvas rubrics
  • Monitoring submissions and managing different circumstances
  • Plagiarism checks and Marking
  • Moderation and release of grades

The resources are available on our Canvas Community to all Newcastle staff. You may need to enrol in the community if this is your first visit.

Case Study: Virtual Oral Presentations as a summative assessment

How do oral presentations work for 100% online modules?

Presentations help students put across an idea while expressing their personalities, which is hard to do in an essay.

Introduction

Oral presentations are a popular choice of assessment in the Faculty of Medical Sciences, especially in our e-Learning modules. Students are asked to submit a pre-recorded presentation to Canvas and the markers watch the presentations at a time and place that suits them.

Diarmuid Coughlan, module leader for ONC8028 Practical Health Economics for Cancer, has kindly agreed to walk us through how the Virtual Oral Presentation element works on his module.

The Assessment

This year we had 14 students on the module. We asked the students to create a 15-minute presentation using either Zoom, Panopto (Recap) or PowerPoint.

We informed the students right at the start of the module that an oral presentation was part of the assessment, and 4 weeks into the module we provided a formative assessment. The formative assessment allowed students to familiarise themselves with their chosen software, gain experience talking to a camera, and get some limited feedback on their presentation skills.

The submissions are double marked. Marking is completed separately by each marker outside of Canvas, then the markers meet to discuss which marks/comments will be entered into Canvas and made visible to each student.

The Set Up

We provided two submission points in Canvas:

Recording Submission Point:

This area was used for the marking. It was set up as Media Recording for MP4 uploads (max of 500 MB), with a Text Entry option for Panopto users (no size limit).

We allowed students to choose which technology they were most comfortable with and provided video and written instructions for Panopto and Zoom. PowerPoint instructions were added later as an option with links to guidance provided by Microsoft.

View of instructions in Canvas

We also provided some instructions so students could crop their recordings to comply with the 15-minute time limit.

You are limited by time so remember to edit your recording so it is no longer than 15 minutes. Instructions: Windows | Mac | Panopto

Slide Submission Point:

This area had a 0-point value. It was set up as a File Upload area for students to submit their slides as .ppt or .pdf. This allowed us to get a Turnitin plagiarism score for each presentation, as well as a reference copy of the slides should anything be unclear in the video recordings.

How did it go?

There was a lot of fear from students initially. We encouraged students to give it a go, informing them that we were not trying to trick them. We provided clear guidance on what we expected and provided a rubric with a breakdown of points, clearly showing only a small percentage of the grade would be based on their presentation style and delivery. The content of the presentation was the most important part!

The use of technology was varied.

As markers we also had to overcome our fears of technology.

PowerPoint is easier once you know how to access recordings (you have to download the file, then click start slideshow).

Sometimes the Panopto recordings were hard to find, especially if students had experience of using the technology in Blackboard and did not follow the Canvas instructions correctly.

What are your next steps?

  • Last year we only provided grades with a short feedback comment; we plan to provide more extensive feedback going forward
  • We will add more video content into the module as examples of how to create engaging slides and showcase our presentation styles – hopefully leading by example
  • We would also like to provide examples of a good presentation vs a bad presentation

Probity in Online Exams – Success at TEPARG for SDS and FMS TEL

This post highlights the work done by SDS and FMS TEL to support remote exams over the past few years. Successful techniques for keeping students informed and supported are discussed, including student voice.

Work done by the School of Dentistry and FMS TEL Team won first prize at the Trans-European Pedagogic Anatomical Research Group Hybrid Conference this year, for the presentation on ‘Adopting a flexible approach to professional anatomy spotter exams during COVID’. You can read the internal news item on Sharepoint.

This work was also subsequently presented at the Newcastle University Learning and Teaching Conference 2022. Newcastle Staff can view the poster here for an overview.

This work centres on how exams subject to oversight from professional bodies – in this case the General Dental Council – could be run with adequate probity when these could not be undertaken in person.

The first element of this was to run the exams 'live' rather than in a 24-hour format. This meant that teaching staff and professional staff could be on hand to resolve any technical difficulties or make invigilation decisions.

A variety of question designs were also thought through. The final format was a ‘stimulus’ question type on Canvas, allowing an image to be shown with answer options alongside. Now this question type can also be used with Inspera exams.

The support of students with SSPs was also a key consideration, and rest breaks were granted across the board when exams were very lengthy. Students were asked what they thought about the probity measures put in place, such as the use of an exam declaration, and checking responses which may have been copy-pasted. Overall, the response was positive.

A key element in the smooth running of these exams was the preparation offered to students beforehand, such as clear messaging via email and the opportunity to practice with the exam environment before undertaking their summative assessments. 99% of students agreed that they had been well-supported throughout the process and during the exams themselves via the Zoom exam hotline. Most calls were students double-checking their responses had been submitted, rather than having technical issues.

Learning from these exams has already been carried forward for digital exams running through the new Inspera exam system, and the confidence staff and students have in the procedure means that any future changes to circumstances will be much easier to navigate for these teams and cohorts.

The Unessay – NULTConf

Dr Stephanie Holton presented her Unessay assessment task – what could you assess with an Unessay?

At the Learning and Teaching Conference Dr Stephanie Holton, along with two of her students, presented their experiences trialling a new approach to assessment – giving students freedom in how to present their learning, rather than setting a traditional essay task. This work was done in a module examining Ancient Greek texts.

What is an ‘unessay’ – and how exactly does it work? This talk explores the increasingly popular unessay as alternative assessment type, taking as a case study its implementation across several compulsory language modules in the School of History, Classics & Archaeology during 2020-21. Delivered by both the staff and students involved, it highlights the wide range of benefits – as well as the challenges – of diversifying assessment in some of our most traditional modules.

Dr Stephanie Holton

The full talk can be viewed here.

It was fantastic to see how the students were able to approach this new assessment methodology, and the outputs themselves were diverse, including models and digitally-created choose-your-own adventure books. The support provided included workshops and one-to-one sessions so that students knew what to expect, and were confident they were on the right track. While somewhat time-intensive in terms of support, the level of engagement and ownership the students felt around their creations was a clear benefit of this methodology.

Students were able to share their extra-curricular creative skills, and the diversity of their approaches meant that they were prompted to explore their texts in new ways – for example, researching clothing colours for their model characters that would be appropriate for the time. The need for these extra details prompted students to do research around their text that they might not otherwise have done, broadening their subject knowledge.

These kinds of assessments open the door to students using and expanding their digital skills, even though this isn’t the focus of the assessment. The element of choice allows them to choose their own level of comfort with how they’d like to present their project – whether this is in a digital format or physical.

What could you assess with an Unessay?

Designing Effective Rubrics

A written summary of our training on using rubrics with links to the full webinar resources.

We have recently delivered some training for BNS (School of Biomedical, Nutritional and Sport Sciences) in collaboration with Rebecca Gill and Susan Barfield from LTDS. The two sessions covered Rubrics – both their design and how they can be implemented in Turnitin. You can access the training recordings and resources at the foot of this page.

Examples covered included:

  • a rubric with very few criteria and letter grading
  • a rubric with weighted criteria and bands
  • a very fine-grained rubric that awarded numerical points based on ten different criteria.

What are Rubrics for?

Rubrics can be used to evaluate assessments, whether you use a quantitative rubric to calculate marks or a qualitative one with more wiggle room. Using a rubric makes it easier to identify strengths and weaknesses in a submission, and creates a common framework and assessment language for staff and students to use. This in turn can help make learning expectations explicit to learners, and assist in the provision of effective feedback.

What is the best way?

There is no one way to design a perfect rubric, as assessments are very individual.

Before you begin, you may want to consider how you can design your rubric to lessen the marking or feedback workload. Quantitative rubrics can reduce decision-making difficulties, as you don't need to consider what mark to give within a band; on the other hand, you may need that flexibility to exercise professional judgement. A detailed rubric with less wiggle room per descriptor also acts as detailed feedback for students, reducing the need to write long additional comments, but it takes longer to design.

Descriptors

When writing descriptors, ensure that there is enough clear and objective difference between each band. You may find that aligning your descriptors with an external framework helps you write them. This is critical for secure marking, and is helpful for students receiving that feedback. Using positive language also helps make this feedback easier to digest, and allows students to see what they need to include to improve.

Rubric Workflow

When creating a rubric, you can follow this basic process. At every stage it is important to consult local assessment guidelines and discuss progress with your colleagues for constructive feedback.

  1. Determine your assessment criteria – ideally these should be aligned with the learning outcomes of the task.
  2. Consider the weighting of each element, if required – is presentation as important as content?
  3. Decide whether you will need defined marks or flexible ranges. This may be partly determined by your in-house guidelines.
  4. Consider how marks in various criteria interact with or depend upon one another. For example, if there is a very low mark in a content criterion, does that mean the assessment can never be a pass?
  5. Try to write out individual descriptors – if you’re having difficulty discriminating between bands you may need to adjust your structure.
  6. Test your rubric against former or dummy submissions and adjust as necessary. Does it work for a lower level of mastery as well as a middle-scoring and high-scoring submission? If you had difficulty deciding between criteria, or discover a double-credit/penalty, you will need to adjust.

Technical Setup

Turnitin allows for the use of Grading Forms and Rubrics. You can watch how to implement these in the Using Turnitin video in the session resources below.

Turnitin grading forms can be created to assist with marking assignments, allowing you to add marks and feedback under various criteria. When using these forms, the highest mark entered will become the grade for the assignment. You can also use this without scoring to give feedback.

Turnitin rubrics allow for marking under multiple criteria and bands. You can have standard rubrics that calculate grades, or qualitative rubrics that do not include scoring. Custom rubrics can be used for more flexibility within a band.
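
As a simple worked example of how a weighted, scoring rubric can turn band choices into an overall grade (the figures and weights below are made up, and Turnitin's own calculation depends on the rubric type you configure):

```typescript
// Made-up worked example of a weighted, scoring rubric: each criterion has a
// weight, and the marker picks a band expressed as a percentage for that criterion.
const criteria = [
  { name: 'Content', weight: 0.5, bandScore: 70 },      // marker chose the 70% band
  { name: 'Structure', weight: 0.3, bandScore: 60 },
  { name: 'Presentation', weight: 0.2, bandScore: 80 },
];

// Final grade is the weight-adjusted sum of the band scores:
// 0.5*70 + 0.3*60 + 0.2*80 = 35 + 18 + 16 = 69%
const grade = criteria.reduce((total, c) => total + c.weight * c.bandScore, 0);
console.log(`Overall grade: ${grade}%`); // Overall grade: 69%
```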

An alternative to using Turnitin is to integrate a rubric into the assignment itself by using a coversheet (see the 'Effective Rubrics – Using Turnitin' video at 28m25s; link in the resources below).

Resources