When you’re putting together course materials, it’s important to think about how they’ll look to your students. Laptops and monitors come in all shapes and sizes, so what looks good on your screen might not look good on someone else’s.
It is also becoming increasingly common for students to access content on their mobile phones and tablets. How can you efficiently ensure that everything appears visually pleasing and functional across these diverse devices?
Windows: F12
Mac: Cmd + Opt + I

This opens your browser’s developer tools, which include a ‘device mode’ where you can preview how your content will look and function on different devices. The example below uses Chrome on a Windows machine:
Michelle Miller shared her work in digital skills and accessibility at the Learning and Teaching Conference in March this year. This poster shows how you can improve your PDFs’ accessibility using Adobe Acrobat Pro, including common issues flagged by Ally, the accessibility checker in Canvas. All colleagues have access to this software.
Find out what the move to the cloud means for you – it’s all good news!
(Article reproduced from LTDS Newsletter)
In July 2023, the University will upgrade the ReCap service to the latest version of Panopto (the software that provides ReCap) and transition the service to being cloud hosted.
To achieve this there will be a period of downtime for ReCap during week commencing 24 July 2023. NUIT and Panopto are finalising the exact dates and duration of this downtime and further details will be communicated to colleagues when available.
All current functionality for the service will be maintained, and existing content will be migrated to the cloud-hosted service in line with the University’s retention policy (6 years for teaching room recordings and indefinitely for other content, e.g. ‘My Folder’ content and conference recordings).
The benefits of upgrading include:
Improved copying within Canvas – making sharing links to recordings from previous years easier and quicker for colleagues, with automatic updating of links reducing the number of students encountering error messages.
Improved live streaming functionality – facilitated by the capacity provided by cloud hosting.
Captioning improvements – including an update to automatic speech-to-text that has improved ASR caption quality, and additional functionality within caption editing, including ‘find and replace’ and confidence highlighting.
Distributed recording – allowing presenters in different locations to join the same recording.
Better integrations with other centrally supported software – including Zoom, Teams and H5P.

A programme of communication will take place in the coming months to make colleagues aware of the downtime. During the 2023/24 academic year, we will also provide support in using the upgraded and new functionality.
Please send any questions about the upgrade, or any other aspects of the ReCap service, to firstname.lastname@example.org.
Quick wins in accessibility by using Adobe Acrobat Pro’s built-in tool.
In FMS TEL we are currently working on accessibility as new module materials are being prepared for release. Did you know that Adobe Acrobat Pro has a built-in Accessibility tool that not only checks your document’s accessibility, but also helps you improve it with the click of a few buttons?
If you find that your PDFs are receiving low accessibility scores in Canvas, you can use Adobe Acrobat to improve their scores quickly and efficiently. Simply open the file in Adobe Acrobat Pro. Then select the Accessibility tool from the Tools options. (Click on More Tools then scroll down to the Protect and Standardize section.)
Begin by selecting the Accessibility Check. This will tell you what can be improved in your document.
Next you can use the Autotag tool if your document is missing tags such as headings.
You can choose Set Alternative Text to add alt text to your images.
If necessary use the Reading Order tool to set the order in which the document should be read by a screen reader.
With these few easy steps you will have vastly improved the accessibility of your document! Try adding it back into Canvas and you may be surprised by the improved score.
When creating a transcript, you will need to decide on a few things before you start.
Are timings important, and if so, what level of accuracy do you need? Does every utterance need a time, or would it suffice to signal timing points at the start of each slide, for example?
Is it important to know who is speaking? Training videos may not need this.
The first step is to download your caption file and copy and paste the text into Word. Then, transform your lines of text into a table using ‘convert text to table‘ and setting the number of columns to 4.
Copy your table from Word and paste it into Excel. Once this is done, you can use Excel to trim down your times and make them more human-friendly, using the MID function in a new column. Copy the formula down the whole column to quickly tidy all of the timestamps. This example trims the hours from the start time, and removes the milliseconds and the end time completely.
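If you prefer to script this step rather than do it in Excel, the same trimming can be sketched in Python. This assumes timestamps of the form 00:01:23.450 --> 00:01:25.900 (a common caption export format – adjust the slicing if yours differs):

```python
# Trim a caption timestamp to a human-friendly form: keep only the
# minutes and seconds of the start time, dropping the hours, the
# milliseconds and the end time entirely.

def tidy_timestamp(line: str) -> str:
    start = line.split("-->")[0].strip()   # "00:01:23.450"
    no_ms = start.split(".")[0]            # "00:01:23"
    return no_ms[3:]                       # drop the "HH:" prefix

print(tidy_timestamp("00:01:23.450 --> 00:01:25.900"))  # 01:23
```

In Excel, a formula such as =MID(A1,4,5) achieves the same result, assuming the full start timestamp sits in cell A1.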
Pasting the resultant data back into Word means you can then use Word’s formatting tools to shape your transcript, adding columns as needed to denote speakers or other important information, and removing columns that contain redundant data.
Cutting and pasting the speakers’ words from individual cells into a single cell will improve the look of the document – paste without formatting to combine the cells into one. Using Find and Replace to remove line breaks (search for ^p) and replace them with spaces tidies this up further. The now-empty rows can then be deleted.
If you like, you can then remove the borders from your table. You could also set a bottom cell margin to automatically space out each row.
It is important to check your resulting transcript’s accessibility. As this method uses a table, make sure that a screen reader can read it in the correct order. You can do this by clicking into the first cell of your table and then pressing the tab key to move between cells – this is the order most screen readers will relay the text, so check it makes sense.
You can also choose to add a header row to your table to give each column a title. After adding this, highlight the row, click ‘table properties‘ and in ‘row‘ select ‘repeat as header row’. Ally in Canvas will not give a 100% score without this.
Once your transcript is formatted correctly, you can correct your text and fix mistakes in the usual way if you have not already done so for captions. The previous post on Faster Captioning has some tips and tricks to speed this up.
This post details some easy tips and tricks to speed up your caption editing process using Notepad and Word.
This post assumes users are using Panopto (ReCap), so Panopto guidance is linked for uploading and downloading caption files. Guidance for other products such as Stream, Vimeo and YouTube can be found on their own sites.
In FMS TEL many team members regularly work with captioning videos – whether these are our own instructional videos or webinars, or student learning materials. Recently a few of us in the team have been talking about how we caption videos – specifically, what processes we use. There are some of us in the team who use the inline caption editor in Panopto, and use speed controls to manage the flow of speech so they can correct as they go. Others prefer to download the caption files and work with them in a separate program.
Both methods have their pros and cons. Working within the online editor is often best for short videos, or those with very few corrections to be made. Sometimes, though, it is easier to manage longer or more error-prone caption files in their own window. This gives more space to see what you’re doing – as long as you can avoid messing up the file structure. You can also use proofing tools in Word to speed things along or cut out repeated mistakes.
The rest of this post details some tips you can try to speed up your own process if not using the online editor.
To work with captions outside of Panopto, you’ll first need to download the caption file. If there is no file to download, you’ll need to request automatic captions first. The caption file can be opened in Notepad. From there, you can edit each line of text separately. You must not change the file structure – so do not edit any of the other lines in the file, even the empty ones.
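To illustrate why the structure matters, here is a minimal Python sketch that applies corrections only to the spoken-text lines of an SRT-style file, leaving cue numbers, timestamp lines and blank separator lines untouched. The file contents and the gonna/going to fix are purely illustrative:

```python
# Apply word fixes to caption text while preserving the file structure.
# Structural lines (cue numbers, timestamps, blanks) are passed through
# unchanged, which is exactly what a manual edit must also guarantee.

def fix_caption_text(lines, replacements):
    fixed = []
    for line in lines:
        stripped = line.strip()
        is_structural = (
            stripped == ""          # blank separator line
            or stripped.isdigit()   # cue number
            or "-->" in stripped    # timestamp line
        )
        if is_structural:
            fixed.append(line)      # never touch structure
        else:
            for old, new in replacements.items():
                line = line.replace(old, new)
            fixed.append(line)
    return fixed

srt = ["1", "00:00:01,000 --> 00:00:03,000", "We're gonna start now.", ""]
print(fix_caption_text(srt, {"gonna": "going to"}))
```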
If you want to take advantage of more proofing tools, try copying and pasting your entire file into Word. This will allow you to use tools such as Spelling and Grammar check to remove duplicate words or transcribed stuttering sounds, and can also draw your attention to other oddities. The Spelling and Grammar tool automatically moves you through your document, saving time scrolling and searching. As well as checking spelling, this can also help trim unnecessary words from the text, making it faster to read.
Find and Replace is useful, and can help…
If a name has been consistently misspelled – for example Jo/Joe.
If the speaker has a filler word that can be removed (I say “kind of” as filler so always search for and remove it from the captions!).
To replace key numbers or years that have been spelled out with their numerical representations (e.g. ninety-nine percent -> 99%).
Filtering out inappropriate language if it has been misheard by the auto software – if you see it once you can search the whole document quickly.
Filtering out colloquial spellings (gonna -> going to).
A good tip to ensure you only find whole words is to search for them with spaces before and after the word itself. Alternatively, open the ‘More’ options dialog and tick the ‘Find whole words only’ box.
If you have been deleting a lot of items and adding spaces in their place, you might also want to do a find and replace for two spaces together and replace with one space. Run this a few times until there are no results. Similarly, you could look for comma-space-comma if you have removed a lot of filler.
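The same clean-up can be scripted if you work with caption files outside Word. This Python sketch mirrors the tips above using regular expressions; the filler phrase and colloquial spelling are just examples:

```python
import re

# Regex equivalents of the Word find-and-replace tips: remove a filler
# phrase (matching whole words only via \b word boundaries), expand a
# colloquial spelling, then tidy the spaces and commas left behind.

def tidy_captions(text: str) -> str:
    text = re.sub(r"\bkind of\b ?", "", text)   # remove filler phrase
    text = re.sub(r"\bgonna\b", "going to", text)  # colloquial spelling
    text = re.sub(r" {2,}", " ", text)          # collapse repeated spaces
    text = text.replace(", ,", ",")             # comma-space-comma leftovers
    return text

print(tidy_captions("It's kind of  gonna be, kind of , tricky."))
```

Unlike Word’s Replace All, the " {2,}" pattern collapses any run of spaces in a single pass, so there is no need to repeat the search.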
These steps won’t fix everything, but can cut out some of the bulk and help speed up your process. After using these tools, read through the captions carefully again to fix any leftover errors.
When captioning and transcribing, what is meant by ‘accuracy’? When are captions good enough?
In FMS TEL and LTDS many team members regularly work with captioning videos, in particular for our own instructional videos or webinars. Recently a few of us have been talking about how we caption videos and how we decide what to correct. After discovering we all had differences of opinion about what to keep and what to edit, it seemed like a good idea to think through the issues.
This webinar from the University of Kent features Nigel Megitt from the BBC talking about priorities when captioning and audio describing TV programmes. It includes research on how people with different levels of hearing feel about captions.
Different Types of Captioning and Transcription
Commercial captioning companies offer a range of levels of detail. We do not outsource these tasks, but the predefined service levels can help clarify what decisions are made when captioning. Is verbatim captioning better than a lightly edited version? A fully accurate set of captions or a transcript would include hesitations and false starts, but a more readable one might remove these for faster comprehension, more closely resembling the script of a speech.
Destination – who is the audience? What do they need?
Speaker(s) – how can they be best represented? How do they feel about you editing their speech for clarity (e.g. removing filler words) vs correcting captions to verbatim?
Timescale – how fast do you need to turn this around? Longer videos and heavier editing take longer.
Longevity – will this resource be around for a long time and reach a wider audience? If so it may merit extra polish.
Once you have broadly decided on the above, you can deal with the nitty-gritty of deciding what to fix, edit or remove. Deciding on your approach to these common issues means you won’t have to make a decision each time you find an error in your transcript. If working with a few other colleagues on a larger project you might want to agree with each other what standard you are aiming for to create uniformity.
We don’t usually speak in the same way we write. Normal speech is full of little quirks that don’t appear in text. Some of these include…
False starts (If we take… no actually let’s start with… yes, OK, if we take question 4 next…)
Filler Words (you know, like, so…)
Repeated words (You can do this by… by reading the text)
Other Considerations for Captioning
Remember that captions will be read on screen at the pace of the video. This means that anything that you can do to increase readability may be useful for the viewer. This includes simple things like…
Fixing initialisms and acronyms (PGR not p g r, SAgE not sage)
Fixing web and email addresses (email@example.com, not A B C One At Newcastle Dot A See Dot UK)
Adding quotation marks around quotes.
You may also consider…
Presenting numbers using figures rather than words (99% not ninety-nine percent)
Removing awkward breaks (When Panopto separates a final word from its sentence.)
Fixing inaccurate punctuation like full stops in the wrong places, or commas and apostrophes (this is quite time consuming).
Considerations for Transcription
As well as the editing and tidying jobs above, before beginning to work with your file, consider whether or not the timing points are going to be important, and how you are going to denote different speakers, or break up the text. For example, for an interview you may need to denote various speakers very clearly. By contrast, for a training webinar, even if there are two presenters it might not be crucial to distinguish them. Instead it might be better to add headings for each slide so that the two resources can be used side by side.
Once you have decided on what to edit and what to ignore, your process will move along much faster as you won’t need to decide on the fly.
Keep an eye on the blog over the next few weeks for tips on how to quickly manage and edit your caption and transcription files.
The Faculty of Medical Sciences Digital Skills team provides lifelong learning to students throughout the Faculty on a bespoke basis. Our tutorials cover the use of Microsoft Office programs such as Word, Excel, and PowerPoint, as well as how to work with specific media such as images and posters.
Recently, we have increased the focus on the value of our tutorials by highlighting lifelong learning skills for accessibility. As part of our Word tutorials, we teach students to use Styles to format their documents. Styles are packets of information that control how text looks and behaves; in particular, we teach students to work with heading and caption styles. In addition to being an effective and efficient way of formatting text, styles have an important role to play in accessibility. Screen readers can use styles to accurately interpret a Word document’s headings, allowing users to navigate through documents easily. And when a document is converted to a PDF, styles automatically create tags that afford the same benefit for screen reader navigation.
Additionally, we teach students how to add alternative text (alt text) to images they insert into Word or PowerPoint. Alt text allows users with impaired vision to understand what an image depicts. By using alt text, students increase the accessibility of their digital documents. And, like styles, images retain their alt text as tags when converted to PDF.
These skills are truly lifelong learning skills as they provide students with the knowledge and ability to create accessible documents. These skills will serve them in their future careers where digital documents will be required to meet specific accessibility regulations.
In December 2020 I had the opportunity to attend the Association for Learning Technologies’ Winter Conference. One of the presentations at the conference really struck a chord with me and I would like to share a synopsis of what was discussed.
Presenters Sharon Flynn, Natalie Lafferty, John Traxler, Bella Abrams, and Lyshi Rodrigo sat on a panel discussing an Ethical Framework for learning technology. They discussed what they perceived as the biggest issues around ethical teaching and learning digitally.
One of the primary concerns driving the development of an Ethical Framework is the power relationship that learning technologies inevitably create between teachers and their students. For example, how can monitoring be done in the right way, so that it serves not as a policing tool but as a tool for aiding engagement and learning? One of the panellists suggested a simplified form of terms and conditions could go a long way towards allaying student concerns over any form of monitoring.
There are inherent principles of trust and reliability in the digital world. This is evident in many sectors, but likely nowhere more so than in the surveillance culture of the digital world. We therefore have a responsibility to help protect students, and colleagues, as we become more aware of ethical challenges in the digital world.
Another concern relates to fair access. What ethical role does the institution have in ensuring all students have access to digital tools such as laptops and broadband internet? What is considered adequate and equitable? How, logistically, can this be accomplished? And this is not simply a problem for students: some teachers will also experience digital poverty. This also includes training for students and teachers in the systems, programs, and tools they are expected to use. (Something that Newcastle University is working hard to provide, supporting students and teachers in the unique set of circumstances following on from Covid-19.)
Another question raised was: what constitutes harm? This question would be at the heart of an Ethical Framework. How do we as institutions identify harm caused by digital teaching and learning, and mitigate it? For example, how do proctoring and the use of e-resources impact students? What about productivity measures? These could be arbitrary and misrepresent what really matters. Some see them as easy solutions to the current challenges, but they underline the need for an Ethical Framework.
The implications of GDPR, and of any potential successor, also underline the need for an Ethical Framework. Professional bodies are not necessarily thinking about the problems related to approaches such as proctoring. Any Ethical Framework must therefore be rooted in a context of principles, and remain aware of where needs and priorities lie within various other cultures.
This all leads to the need to develop an Ethical Framework for teaching and learning digitally. The panellists suggested that we start from a position of respect and use our values to build an Ethical Framework that includes the student voice.
This summary of the impetus for, and possible content of, an Ethical Framework for teaching and learning online is certainly worth considering as we enter a new normal that will likely contain more online teaching than we had pre-Covid. I would be interested to hear (reply below) what you think about what the ALT panellists had to say, and what you believe such an Ethical Framework should and could be.