the ongoing dialogue between the problem and the solution….

We have been talking and thinking about how messy design is. It’s so easy to present it as a linear, procedural activity, but the reality is that it moves backwards and forwards.

Our mentor @Jannah Aljafri kindly pointed us to the Double Diamond Process (Design Council). There’s an exploration phase – lots of divergent ideas: what is the problem we are trying to fix? how are we hoping to add value? – before coming to a shared understanding and definition of “the thing”, followed by yet more exploration in the develop stage.

As the Design Council says, “this isn’t a linear process”.

What might this look like from a L&T perspective?

[Image: Discover and Design stages with MOF in the middle]

Maybe in our context the skeleton MOF could be thought of as the definition.

In the develop stage, experience tells us that:

  • videos will be started and end up on the cutting room floor
  • technologies will be abandoned and new ones adopted
  • new requirements or stakeholders will appear
  • prerequisites will emerge….

But, what about moving back and forth between the two diamonds?

When online or blended approaches are integral we may have time to get feedback on samples of the approach, but more often than not we are up against it to develop and deliver modules – laying the tracks just in time.

Unless there is a healthy design window it can be hard in practice to return to discover/define — changing MOFs brings with it variable amounts of friction depending on what has been published and the nature of the change. Ideally we need time for exploration, experiments and feedback early on.

Planning for evaluation

At our fourth bootcamp workshop we had a masterclass from the OU team on how they designed in evaluation as part of the module planning. Student feedback, a student reference group, analytics dashboards (and custom reports), and tutor reports come together to build a picture of how the module performs and to inform decisions about the following presentation.

During our Jamboard exercise we thought through some of the ways we could plan evaluation into our SML module. Here we aren’t dealing with students at a distance; instead there will be lots of opportunities to actually see and hear how they are getting on.

While we don’t have a reference group we do have lots of opportunities to gather feedback:

  • reviews of content pre-run
  • regular informal in-class check-ins
  • end of module survey / focus group
  • student reflections on the process and what they are learning
  • views from student reps via Student Voice / Student Staff Committees

Thinking about this further, it’s obvious that one size doesn’t fit all. Yes, there is a need for standard questions in centrally run surveys, but you need to know what the module is about to design how to evaluate it effectively. And, in many respects where our focus is on skills development, the “mentor” role of the academic leads will give an immediacy to feedback and permit in-flight corrections.

Parameters for an NCL Bootcamp

We have also begun the process of thinking about how we gather up our learning to present an in-house variant of the Bootcamp. Some things are clear:

  • it needs to reflect our focus as a predominantly campus based university (blended is normal, online is rare)
  • tools and techniques need to complement and extend our existing module approval processes
  • any learning design frameworks or approaches need to be easy to pick up and easy to pass on (i.e. Made to Stick)
  • centred on a mentored or action learning approach – a supported journey
  • it needs a pacemaker – a structure and metronome to enable teams to complete the design (too fast and we will drop people, too slow and people will disengage)
  • it needs to be offered as a pilot, and developed with feedback.
  • it needs to be flexible enough to support our gloriously diverse range of disciplinary cultures

We have lots of options. In some ways this is freeing – we can take the best of our current practice, add in elements that are helpful, and document other approaches to come at the design task from a different angle.

However, if we are to develop something that could have traction we need input from stakeholders, and that is what is next on the agenda.

Storyboards, episodes and patterns

One of the really interesting ideas from last week’s “Developing your Design” bootcamp was that of considering an “episode level design” between the module level design and the detail of the activities. The episode could represent a week (or maybe a fortnight) and it could have one or more design patterns giving a rhythm and predictability to teaching.

Thinking about some of our examples, a pattern for an episode (a week) could be something like:

  • Big question
  • Unpacking theory and practice
  • Group activity
  • Q&A
  • Discussion and reflection

Once learning outcomes are authored for each episode, tools like ABC activity cards, CoDesign cards, or the OU’s Activity cards can help to structure each element of the pattern into a set of tasks and content (e-tivities) geared towards meeting the learning outcomes.

But, before learning outcomes can be in the driver’s seat, we were reminded of the importance of carefully crafting them – with active verbs, defining a level – so we can evaluate whether the proposed activities will work, i.e. enable students to meet these learning goals.

I’ve really enjoyed working with ABC as a storyboarding tool, but am aware that people can get confused about the level at which they are working. Some are happy to abstract: “this is the pattern for weeks 2–6”; but in other situations I’ve found participants bogged down in the detail of what week 3 will contain and, in a time-constrained workshop, unable to see the sweep of the module. As a facilitator you do encourage participants to work at the overview level, but placing an emphasis on a pattern for the week as a prelude, or additional step, may well help.

Lots to think about…

Design is….

Top down? iterative? collaborative? creative? hard work? systematic? stimulating? fun? messy?

This week my head has been full of design. We’ve had a Bootcamp workshop on Developing your Design and I’ve had some healthy conversations with colleagues as we scope out a NEPS unit on Programme Design. From where I sit now, all the descriptions above apply.

I’m intrigued by the fact that there is no one answer on how to design learning. We’ve been pointed to the Learning Design Family Tree – demonstrating the ongoing evolution of approaches (and tools), and at curriculum level Mick Healey collates a treasure trove of approaches.

Constructive alignment provides a sturdy skeleton for both programmes and modules but it needs to be clothed, and we need other perspectives to inform and evaluate the many potential answers to “how?”. I had a chuckle when I read Jenkins’ Ouija board metaphor describing how curriculum design is influenced and shaped by forces: assessment as learning, student time, pedagogy, costs and resources, subject benchmarks… these are all super relevant. But rather than seeing them as forces, I view these as helpful factors which enable us to work within a smaller design space and provide us with insights that help us iterate towards better design.

For both of the tasks I’ve referred to (our Bootcamp module and the NEPS unit) we are at the slightly messy idea-generating stage, but importantly, the conversations that we are having now hold the aims and values of both. Convergence isn’t that far off.

Of courses and resources

Last year I was part of a team that authored our Flexible Learning 2020 (FL2020) course – 11 topics on how to rethink teaching and learning in the shadow of a global pandemic, while changing VLE from Blackboard to Canvas.

Canvas’ stats on page views confirm that the course had a relatively short shelf life. What do we do with it now – do we replace it with another course or a set of resources? Is one approach better than the other? I made a table…

Course

  • It has a beginning, a middle and an end.
  • There’s some kind of feedback (formative, auto-marked) or it is moderated.
  • There is motivation (intrinsic or extrinsic) to complete it.
  • Once complete there is little motivation to return to it, apart from reference.
  • There may be an idea of a cohort progressing through it at key times.
  • It’s designed to take participants from a defined level of knowledge/skill to a more advanced place.
  • Ideally, it contains activities for participants to do with the information.

Resource

  • Something that’s designed to work just-in-time.
  • Signposts further resources and information.
  • Designed to be searchable – jump in at any point.
  • Visit multiple (short) times.
  • Digestible chunks – works on the web.

Now that we are in Semester 2, it’s clear that the questions we are asked are not ones that our FL2020 course answers. We have all moved on. There’s a temptation to add more content for the intermediate audience, but we know this will make everything harder to find and our sense is that what will now be the most valuable is a set of searchable resources.

Coincidentally, the University of Kent has been running a series of “Digitally Enhanced Education Webinars” and I stumbled on Dominik Lukes’ presentation What should educators know: User interface and User Experience. I was struck by his description of how design needs to be aimed at intermediate users (with routes in for beginners). 2020’s collective baptism-into-blended leaves us with very different mental models than we had pre-Covid – and being reminded of the basics is plain annoying.

From this perspective what’s sensible now is to retire the course, it did a reasonable job, but the scaffolding isn’t needed any more. We can pull out some nuggets into shorter help guides, articles and case studies that colleagues can find more easily.

Persona Journeys

In this week’s bootcamp session we used a couple of techniques to put students at the heart of design.

The first was a “Student Profile” in which we imagined a “typical student” and noted motivations, expectations, enablers and barriers to study, alongside their educational background and experiences.

In UX domains these might be called personas, becoming the foundations of user stories and keeping users at the centre of design. I was introduced to the idea a while back and have found personas really helpful in curriculum design projects to date, particularly when thinking about vision, values and teaching methods.

We might suggest coming up with a few different personas – maybe an international student, a home student, one with some access needs. The personas can be informed by student feedback, real students, NSS feedback etc – and these imaginary people can be “walked through the design”. Although imaginary, our personas can take on a life of their own — when we were designing HSS8002 we asked questions like “which options would George take?”

We can also use personas to design from the future, by imagining graduates, say three years post-graduation. What did they most value about their programme? What job are they doing? What were the most valuable skills they picked up? What would they like to have seen more of? We can talk to alumni, and employers to flesh these out alongside attributes in our own graduate framework. We can use these insights to prioritise objectives and teaching approaches.

Our second activity involved thinking about the student journey. In our case, the journey through a module – What concerns or difficulties did we envisage students having at certain points? What could we build in at those times to mitigate? This is a really useful exercise to do once the module concept is fixed.

Digital tools: love them? hate them?

What tools do you use in work and outside work and how do you feel about them? In our introductory Bootcamp session we were asked to draw and then discuss our personal view.

There wasn’t a huge amount of time to do this in the session, so I redid my own diagram afterwards, using it as an excuse to try out Mural. The diagram is adapted from work by Lanclos and Phipps and, for me, the best bit was adding emojis to some of the tools. You will see that email and Teams warranted sad crying but Excel and mindmapping put me in happier zones. (I know I am not exactly normal in my love of data visualisation and infographics.)

Like others in our team I used a laptop inside work, and ignored it when not at work. In many ways Covid-safe working from home has polarised this even more. I don’t do any digital creation outside work, and whilst I’ll listen to leisure podcasts in my own time, it’s only on rare occasions that I’d tune into the Wonkhe podcast in work time.

How does this help?

The point of this was to help us get into the shoes of our learners. Our students aren’t a homogeneous blob living up to Prensky’s musings on digital natives – they have gaps and preferences, and they might feel differently about digital tools. How does that impact our choice of tools to use for learning? How do we scaffold the learning? Is the learning curve worth the reward? Is it inclusive? These questions come well before Privacy/GDPR/Impact Assessment.

Taking a curriculum viewpoint

A tool focus is helpful, but I agree the real trick here is to take a step back, to a curriculum view. How are we developing digital skills and digital agility across the curriculum? Katharine Reedy’s blog post describes how the OU uses card sets to map this out.

Bootcamp beginnings

We are at the start of our Learning Design Bootcamp journey and have been encouraged to reflect – so let’s do it!

I’m not completely new to designing online learning: I’ve supported colleagues through the design of free online courses on FutureLearn, I’ve authored self-paced learning units (on Canvas), and have worked with module teams to redesign Masters level modules for blended delivery.  I also had a rich online student experience studying on OU’s MAODE (Masters in Online and Distance Education).

Current approaches to learning design

As far as I know, we don’t really have an institution-wide approach to designing learning, but our programme and module approval process ensures that modules can be articulated in terms of learning outcomes, teaching methods and an assessment rationale.  Constructive alignment is hard-baked in! 

I’m based in LTDS, and with colleagues we have looked at a few design approaches in detail: Carpe Diem, Cairo, ViewPoints, ABC. I’ve dipped into others during my MAODE travels (Ulster’s Hybrid Learning Model and various frameworks from Grainne Conole), dabbled in rhizomatic learning with Dave Cormier in Change11, and lasted a few weeks in OLDS MOOC.

I’ve been involved in supporting a small number of modules/programmes where I have been allocated to them on a “project basis”.   Project work could involve delivering a series of workshops running over a year, or a redevelopment project involving both design and content development.

“Blended” is not the goal

For the projects I’ve supported we’ve found UCL’s ABC particularly useful.  It works in our campus-based context and has been effective in helping module teams to consider blended approaches as options (rather than starting out with a goal of N% online). 

But ABC only works well when you come to it with a clear view of aims, students, learning objectives and possible assessment approaches. 

If these haven’t been thrashed out already, say for a new module or programme, we choose from a range of tools to come up with a shared view. 

Go-to tools for “vision” are things like student personas which we draw up to reflect our prospective students.  We can also imagine them in the future – and ask “what will students most value about the programme 2 years after graduation?”  And, where possible we back this up with input from prospective students, current students (in person or via student voice) and employers as we form the feel, shape and values of the project.

Once the concept is fixed, we’ll work with colleagues to write and refine clear learning outcomes – using guidance from our own institution (and QMU have a great guide too).  Next we weigh up appropriate assessment options – what methods will sit best with the outcomes and skills we want to develop.  If we want to encourage creative assessment we’ll offer some form of an assessment sorting hat activity and use prompts on viewpoint cards to spark conversation around feedback or authenticity.

Artisan?

The tools and activities we use are dotted around different workshop folders – we’ve not brought them to a single place.  Our pick-and-mix approach at the moment is somewhat “artisan” and isn’t scalable, or easily communicable to colleagues.  I’m up for picking up new ideas and learning new approaches. One of the things I’d like to see by way of output from this project is a clear pathway of activities leading towards a design goal.

The power of collaboration

In my experience multiple viewpoints and an understanding of the iterative nature of design make for a better end product.  I know design to be a messy, and sometimes contentious, process.  But with experience comes the knowledge that the uncomfortable thrashing-it-out process is essential.  It helps the project to become “our thing”, and at the end there are artifacts and storyboards that articulate what the thing is about and, almost as importantly, what it is not about.  If there isn’t a shared vision and understanding there will be trouble down the line!

Ups and downs of redesign

How is it possible to represent three and a half years of work in a poster? About a year ago Ros Beaumont and I met up to do just that. Our project – the redesign of the HaSS PG Cert Research Methods – involved a huge effort: the Gantt chart was redrawn regularly, and there were dramas, ups and downs.

We wanted to be honest about our struggles and to foreground the importance of the collaboration at the heart of the project. So, we represented it as a board game, with ups (the ladders) and downs (the snakes). It’s never a good idea to put an A1 poster on a blog post, but do have a closer look at it as a pdf.

The ladders

  • Early champions: We wouldn’t have got off the blocks without Dr Adam Potts taking on the module lead for HSS8007 and the library team being willing to rework HSS8002.
  • Teamwork: this was a shared endeavour, we had difficulties to overcome, we had to be open, supportive and bring a can-do attitude.
  • Capturing student feedback: early results showed that this was a positive experience for the majority; the flexibility was great for diverse learners, including those new to UK HE. Initial positive feedback helped us recruit collaborators.
  • Building momentum: we also encouraged our early contributors to share positive stories and talk about the impact on their own development
  • Review and revise: we made annual adjustments in response to student feedback, e.g. introducing a face-to-face introduction and expectation-setting session for HSS8002/007.
  • Having a back catalogue: As more content was blended it became easier to articulate and model possibilities for new topic leads.

The Snakes

  • Complexity: 32 contributors with competing priorities and workloads; inadequate time and a fixed project timetable – we needed to be able to deliver modules to students.
  • Reconceptualising learning: we had to take our contributors on a journey, to rethink their topics in a student-centred rather than teacher-focused way, incorporating active learning and building a different relationship with students.
  • Reality Check: Understanding what can be achieved by when, and when to acknowledge defeat and re-scope.
  • Academic Presence and identity: We found that pre-work familiarises students with the session leader without the involvement of the session leader. How then do they start a face to face session?
  • Loss of Momentum: Time pressures resulted in development of materials being put on hold – these though needed to be picked up later.
  • Own goals: Not trusting expectations which had been set and using valuable in person time to repeat pre-work rather than extend and explore.

The enablers

  • Vision: Clear goals, Sell and keep on selling the vision to different audiences
  • Expertise: Technology, pedagogy and content knowledge
  • Project management: prioritising, monitoring
  • Resource: Team members who have the project/ work as major part of role or responsibility. Time for contributors to engage.

The poster was made for our Covid-cancelled 2020 Learning and Teaching Conference, but we managed to submit it to the 2021 event and won the poster competition!

What tools should I invest in?

We start 2020 with our new VLE, Canvas, and a rich array of digital learning tools that can be used to support teaching. There are so many possibilities and it could easily be overwhelming.

This is a short post to begin to answer one of the questions I heard last week: “What 5 tools should I invest in?”

But, let’s back up a bit – before considering tools we need to think about what we want them to help us achieve. Way back in 1998, Anderson and Garrison described the three most common types of interaction involving students:

  • Student-content interactions
  • Student-teacher interactions
  • Student-student interactions

Let’s use this to come up with our list…

Student-content interactions

Your starting point here is Canvas itself. You can present information on pages, embed documents, link to resources on library reading list, include videos, audio and ReCap recordings.

Go to tool #1 has to be Canvas itself.

Linked to this is tool #2 Canvas quizzes.

Canvas supports a wide range of question types: multiple choice, gap fill, short answer, matching, multiple answer.  Quizzes can help students practise skills, check their learning and encourage them to revisit material.

For short PowerPoint narrations the easiest place to start is the recording features that come as part of ReCap.  We tend to think of ReCap as a lecture recording tool, but there is also a fabulous ReCap Personal Capture tool that you can use to record yourself, and publish in Canvas.  There are several bonuses to using ReCap – you can make simple edits, you can use automatic speech recognition to generate captions, and students can pause, rewind and make notes on the recordings that you publish.  ReCap personal capture comes in as tool #3 – you can install it on your computer, or if you prefer you can use the new browser-based recorder, Panopto Capture (beta).

Student-teacher interactions

Outside the limited amount of PiP time you are likely to be meeting your students online.  For synchronous meetings there is little to choose between Zoom and Teams – the only significant factor being that Zoom permits people to connect by phone, so supports those on lower bandwidth.

Now is a great time to become confident with the online meeting tool you plan to use throughout your module.  I’ll leave it to you whether that’s Teams or Zoom – it would be sensible to settle on one, for you and your students.  Teams could be a strong contender if you plan to use it as a collaboration space over the module/stage, in which case do review the article on Building an online community using Teams.

Once you’ve settled on your meeting tool, now is a great time to explore options for using whiteboards, polling and breakout rooms in these spaces, and to begin to plan active online sessions.

For tool #4 I’d go with Canvas Discussions – these are easy to use, work really well in the Canvas Student and Teacher apps and are great for Q&A sessions, introductions, crowd-sourcing activities, and of course discussions!

Student-student interactions

Learning at university is social! There are huge limitations on what we can do in person – but what can we do to help learning be as social as it can be?  This isn’t so much about tools, but about the activities we design in: breakout room discussions, group tasks, peer reviews, debates – things that might start in a timetabled session and then spill out.

For synchronous meetings and study sessions all our students have access to Zoom and Teams.  We can model how to use these, build students’ confidence in these spaces and show them how they can collaborate in Microsoft 365 collaborative spaces (Word documents, OneNote…).   I’ve already mentioned Teams and Zoom, so for tool #5 I’ll pitch for Microsoft 365 with an emphasis on collaboration.

What do you think?

These are my top 5 tools, you may have a different list.  What have I missed out?

“Technology Enhanced”

If I want to write, I can use a pen or type it in some form using a computer/tablet/phone – is this technology-enhanced writing?  Or is it just writing?

In my view technology is something to be adopted if it is right, fitting and appropriate to the task.  If that is the case then it just becomes the way we do things.  Say I want to “share photos” – in 1983 I would get them developed at Boots, put them in a sticky album and inflict them on unsuspecting family and visitors.  The thought of doing this in 2018 is laughable.  Sharing photos is something you do digitally.  I still use the same language  “sharing photos” but what we now understand by that phrase has changed.  I wouldn’t dream about talking about “technology enhanced photo sharing”.

You’ll imagine from this that I really dislike the term “Technology Enhanced Learning”.

  • If the tech way is the best way, the technology becomes invisible. At some point in the future, e-submission is just going to be the way we do “submission” and e-marking is just going to become marking.  (Forcing the adoption of immature tech creates aggro.)
  • If you’ve worked in education for any time, there is a good chance you’ve had some form of post-traumatic stress from badly behaving technology that has ruined a session. It’s all too easy to jump on the “shiny” and force it into the classroom (or VLE).  We need to be more critical about whether the technology *will* add or detract.
  • Lots of so-called TEL is actually e-administration. Take a tool like WebPA – it gives group members online forms to evaluate each other and works out a peer score. Here all the tool is doing is the drudgery of collating, counting and presenting the scores.  It’s doing the background admin for the peer assessment task.  In my view this is a sensible way to administer the process.  The clever bit comes in how you frame the group task, how you introduce it to students, how you interpret the results and whether it is a good fit for your programme aims.  Doing this well is all about the skill of the academic lead; WebPA is the administrative enabler.
  • We need to think about the investment and payback. Technologies (apps, devices, systems) have learning curves. It’s all too easy to be hijacked by this learning curve and lose sight of the actual learning that is our focus.  Tools and tech need to be “frictionless”, or easy enough to learn that there is a real payback.  Talk to any academics involved in teaching – time is not something they have in abundance.
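The WebPA point above – a tool doing the drudgery of collating and counting – can be sketched in a few lines. This is an illustrative normalisation of peer ratings, not WebPA’s exact algorithm: each rater’s raw scores are turned into fractional shares, the shares are summed per member, and the result is scaled so an average contributor scores 1.0.

```python
# Illustrative sketch of the kind of normalisation a peer-assessment tool
# like WebPA automates. NOT WebPA's exact algorithm - just the general idea:
# each rater's scores become fractional shares, shares are summed per member,
# and the result is scaled so an average contributor scores 1.0.

def peer_scores(ratings):
    """ratings maps each rater to the raw scores they gave each member.

    Assumes every rater awards at least one non-zero score.
    """
    members = {m for given in ratings.values() for m in given}
    totals = dict.fromkeys(members, 0.0)
    for given in ratings.values():
        rater_total = sum(given.values())  # normalise within each rater
        for member, score in given.items():
            totals[member] += score / rater_total
    # scale so the group average works out at 1.0
    scale = len(members) / len(ratings)
    return {m: round(t * scale, 2) for m, t in totals.items()}

# Three group members rate everyone (including themselves) out of 5.
example = {
    "ann": {"ann": 4, "bob": 4, "cat": 2},
    "bob": {"ann": 5, "bob": 3, "cat": 2},
    "cat": {"ann": 4, "bob": 3, "cat": 3},
}
print(peer_scores(example))
```

A multiplier like this can then weight a shared group mark per individual – but, as above, the clever bit stays with how the task is framed, introduced and interpreted.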

Critical quadrants

Now, neatly sidestepping the debate on “learning gain” let’s imagine we could graph “improved learning” (for students) against academic time.

                      More academic time    Less academic time
Enhanced learning             A                     B
Diminished learning           C                     D

We get four quadrants – C is the one to avoid at all costs – and sadly it’s easy to come up with a scenario that fits here.  Prof Smith spends weeks developing an online simulation exercise that bombs with the students.  Dr Jones runs a one-off webinar and the time is taken up with “can you hear me?”  Result: it’s mothballed, never to return.   I’ve used tech examples here, but let’s face it, this grid is tech-neutral.  You could get into this quadrant just as easily by developing lecture materials pitched at completely the wrong level, or by having to work with bad systems that gobble up time and leave your creative energies depleted!

If we leave technology in the mix B is perhaps the only one that could potentially qualify as being “technology enhanced”… students learn extra stuff and the academic gains time.  You may get here after you’ve been through a learning curve with online marking.  You spend less time, but students get richer feedback.

Living in quadrant A is only something you can do for a short time. It’s easy to see how you could be here – you could have more meetings with students, smaller group sizes in seminars (more seminars), you could make short supplemental videos, you could develop a multi-choice quiz that helps students revise key topics.  You need to be selective about what you do in here, and may want to venture in for short sprints if you can make gains elsewhere.  Notice that my examples aren’t all about Tech.  Any redevelopment or redesign work can bring you here.

Technology and automation in the extreme can take us into quadrant D – standard or no responses, little chance for interaction, impoverished experiences.  A mentor-free MOOC won’t deliver a campus-based experience.

Where tech is e-administration it sits somewhere along the x-axis, depending on whether it is students or faculty doing the processes.  Online module choice is a classic piece of e-administration.

My point – let’s drop TEL as a phrase.  It’s meaningless.  Instead let’s consider what works, what is effective teaching – let’s use technology if it helps.  But please don’t call it TEL.

————

This post was provoked by:

  • Sian Bayne (2015). What’s the matter with ‘technology-enhanced learning’? Learning, Media and Technology, 40(1), 5–20. DOI: 10.1080/17439884.2014.915851
  • Adrian Kirkwood and Linda Price (2014). Technology-enhanced learning and teaching in higher education: what is ‘enhanced’ and how do we know? A critical literature review. Learning, Media and Technology, 39(1), 6–36.

Blended Learning 101

Here are a few practical tips on getting started.

Bling it up

Our institutional VLE Blackboard can be made to look more interesting.  It’s down to you…

  • add images to items
  • think about what you can embed – Video, Slideshare, Polls and quizzes
  • Add links to discussion boards – and if you use these, make sure you have time to contribute and set the tone (be a good cocktail party host at the start).  Create an introductory post on each discussion board – it’s far too scary to be the first one to post.

Signpost

Make the journey really clear on your course and use weekly emails/announcements to reinforce what’s to be done and reflect on what’s been good.  Use one of the new module templates based on our Blackboard baseline so that students get a consistent experience.

If you include a link to a document tell the students what you want them to do with it – read it? skim it? make notes on specific points? focus on pp 14-19?

Include some getting started material – how the course is organised,  how to get help, how to subscribe to discussion boards (so you get notified when someone posts!)….

Set expectations of how much time students should spend on sections of the material, so that they know which are the essential parts and don’t burn out on the introductory elements.

Media

If you are going to be making any videos think about where you are going to host them.  You can then include them on Blackboard with links or embed code.  We have a few choices:

Once you have the video hosted make sure you add your captions so that these play in the embedded player.

Making videos

If you don’t want to spend money, you can make great narrated PowerPoints, talking heads or screen captures…

  • Use ReCap personal capture – talk to LTDS about getting a pCap folder set up, get the software installed and have a hand to get started
  • Use Powerpoint to record your slides (Slide Show/record slide show),  or use it to do a screen recording (insert/screen recording), then Save As mp4

If you have a longer lead time and need some professionally produced media get in touch with our crack Digital Media Services team.

Recording Audio

Have a smartphone? Use a “sound recorder” app on your phone to record an audio track. Or you could download and use Audacity to record and edit yourself! You can upload audio to ReCap and NUVision. On ReCap you can add bookmarks (e.g. three views on one question).

Polls and Quizzes

Blackboard quizzes can look a bit scary, so if your questions are formative and you don’t need to know who answered what you can use Google Forms or Microsoft Forms to make a quiz.  Then you can embed the quiz, provide a link to it (or both), and share the results back.

Prototype and Get feedback

Ask colleagues or students to comment on your initial designs: does it work? Does it make sense? Is it clear what students are being asked to do? Are the discussion spaces inviting?

Talk to colleagues

Shamelessly pick the brains of colleagues who have done similar projects! You will have more ideas together.

Talk to LTDS for “how-to” support on Blackboard, ReCap, ePortfolio and for tips on designing your learning materials.

Connect with NUTELA and join in their workshops and community (there is free pizza and pop).


Feeling Connected


LTDS hosted the first of our Feeling Connected events this week. The series is all about how to engage large student cohorts in the classroom and beyond. In this initial session we considered how to engage students in the lecture.

Tony Chapman-Wilson brought an actor’s perspective on how to get the best out of, and look after, our voices. He had us humming, thinking about our diaphragms and tripping over tongue twisters.

Sue Gill spoke about PowerPoint as an aid; we presented feedback from the recent TEA awards from NUSU on what students appreciate; we heard top tips from colleagues via podcasts; and Dr Alison Graham rounded the session off with real examples and insights from her use of the OMBEA student response system.

Handouts

Podcasts

We interviewed Dr Sylvia de Mars, Dr Julian Knight and Dr Keith Brewster and shared their insights in the session.

The links below take you to the recordings on ReCap.

Thanks to all who attended and contributed to a lively session, and to colleagues who gave up their time to contribute before and after.

Next in the series…staying connected

Our next Feeling Connected event will focus on how to stay connected between lectures.  It will take place on Tuesday 19 September in the Herschel Learning Lab and have a hands-on feel.  You can book via our web-page: http://www.ncl.ac.uk/ltds/about/training/feelingconnected/

Subtitles and captions

Whenever we make a video for our free online courses we also make a transcript and add subtitles.

It’s sensible for us to make sure these digital assets can be used by all our learners, and it is also mandated as a course requirement by FutureLearn.

Thankfully, along with this stipulation comes capability. Via FutureLearn, we have an account with the transcription service 3playmedia.com to take the pain out of transcription. Here’s how it works:

What’s really interesting is that when these transcripts and subtitles are in place, it’s clearly not only learners with hearing difficulties who make use of them.

  • Some learners prefer to read the transcript instead of listening along – maybe because they can skim the contents and find the pertinent points.
  • Non-native speakers find the transcripts helpful.
  • People who can’t listen to videos on their desktop computers, or who have forgotten their earbuds, use them too.
  • We find them useful for reminding ourselves of the content of the videos, and for working out whether we can reuse portions later.

Certainly, we have found that if they aren’t there, or we’ve uploaded the wrong transcript, learners are quick to point it out.

Transcripts do take time to produce, but the time and cost are only a fraction of the overall production costs of the video.

IMHO, if it is worth spending time and money on making a professional video, it is daft not to take a bit more time to add a transcript – especially if the video will be watched by a good number of people.

But to make it normal for those commissioning videos to take the extra trouble we need to make it easy.

  • Provide access to easy-to-use services such as 3playmedia (other services are available).
  • Provide initial seed funding to cover the modest costs of producing transcripts.


Learning@Scale Edinburgh 2016

This two-day event, which advertised itself as sitting at the junction of computer science and learning science, brought together researchers involved in a wide variety of practice across the MOOC-o-sphere. The conference welcomed keynotes from Prof Sugata Mitra, Prof Mike Sharples and Prof Ken Koedinger. Presentations were generally 20 minutes long with 10 minutes scheduled for questions. Delegates came from all over the US (MIT, Stanford, Carnegie Mellon), Holland, Korea…

The full proceedings are published here: http://tinyurl.com/las2016program and the 5 flipped sessions are here: http://tinyurl.com/las16flipped

Highlights

Unsupported Apps for Literacy

Being something of a socialist, I was impressed by the work of Tinsley Galyean and Stephanie Gottwald (Mobile Devices for Early Literacy Intervention and Research with Global Reach), who had provided apps on Android tablets to groups of children to promote literacy development. An interesting feature of their study was that they used the same approach in three radically different settings: no school, low-quality school and no preschool. Even though use of the apps was not supervised, the students’ literacy (word recognition and letter recognition) improved.

Rewarding a Growth Mindset

One of the most thought-provoking sessions explored how a game was redesigned to promote a growth mindset. Gamification is not a panacea and badges don’t motivate all students – Dweck’s work shows that if students have a fixed mindset, points can become disincentives. Brain Points: A Deeper Look at a Growth Mindset Incentive Structure for an Educational Game is well worth a read, showing how a game was redesigned to reward resilience, effort and trying new strategies. One counter-intuitive finding was that an introductory animation explaining the rationale for the scoring caused players to quit – they seemed much happier just working it out as they went along. (Perhaps a warning to us not to front-load anything with too much explanation!) I was particularly impressed with the way the researchers developed a number of different versions of the game to probe the nuances of motivation. While rewarding resilient, effort-based approaches increased motivation, awarding points randomly had no effect at all.

Remember to measure the right thing!

What can we learn about seek patterns where videos have in-video tests (Effects of In-Video Quizzes on MOOC Lecture Viewing)? Well, not much really, apart from the fact that learners tend to use these as seek points – either seeking backwards to review content or seeking forwards to go straight to the test.

Learners’ engagement with video was explored using an analysis of transcripts as a proxy for complexity (the language, the use of figures, etc.). Bizarrely, low and high complexity both increased dwelling time, leaving the authors questioning the value of inferring too much from any measures (Explaining Student Behavior at Scale: The Influence of Video Complexity on Student Dwelling Time). They ended by throwing out a challenge about what we choose to measure and whether it is relevant at all: for example, “are the number of times you pick up a pencil in class meaningful?”. We can measure it, but is it of consequence?

Some MOOC platforms suggest that students have video calls over Google Hangouts using “Talkabout”. Stnkewicz and Kulkarni have been exploring automated methods to indicate whether these are good conversations or not ($1 Conversational Turn Detector: Measuring How Video Conversations Affect Student Learning in Online Classes). Talkabout has a helpful API that has permitted the researchers to examine turn-taking in video conversations, using a change of the primary video feed as a proxy for who is talking. They find that students report learning more when they talk more and when they listen to a variety of speakers. The system can identify where calls are being dominated by one voice. Limitations were: background noise causing the video focus to switch erroneously; facilitating behaviour being flagged as dominance; and screen sharing fixing the video focus.
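As a rough illustration of this kind of proxy analysis (a minimal sketch of my own, not the researchers’ method – the `turns` format, the threshold value and the function names are all hypothetical), speaking shares and dominance could be computed like this:

```python
from collections import defaultdict

def speaking_shares(turns):
    """Share of total speaking time per participant.

    `turns` is a list of (speaker, seconds) segments, e.g. derived from
    changes of the primary video feed (a proxy for who is talking).
    """
    totals = defaultdict(float)
    for speaker, seconds in turns:
        totals[speaker] += seconds
    grand_total = sum(totals.values())
    return {speaker: t / grand_total for speaker, t in totals.items()}

def dominated(turns, threshold=0.7):
    """Flag a call as dominated when one voice exceeds the threshold share."""
    return max(speaking_shares(turns).values()) > threshold
```

With `turns = [("ana", 300), ("ben", 40), ("cli", 20)]` the call is flagged, since one speaker holds over 80% of the time. Note that the limitations above would apply here too: a mis-switched feed corrupts `turns` directly.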


Cheating

Some MOOC learners use multiple accounts to harvest quiz answers. Using Multiple Accounts for Harvesting Solutions in MOOCs explored how to identify these behaviours and attempts at minimising them. The authors described the “harvesting account” (the one used to find answers) and the “master account” (the one used to give the right answer and get the certificate). They were able to identify the paired accounts by looking for those submitting the right answers shortly after the harvesting account found them. The suspect practice was examined from log data: submissions from the same IP address, with the master account following the harvesting account within 30 minutes. In reality, right answers were often resubmitted within seconds. A number of solutions were suggested to get around this:

  • Don’t give feedback on summative MCQs
  • Delay feedback
  • Incorporate randomness or variables into the questions (NUMBAS is a good example)
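The account-pairing heuristic described above (same IP, right answer resubmitted within a window) could be sketched roughly as follows – a simplified illustration of my own, not the authors’ code, and the event-record format is assumed:

```python
from datetime import timedelta

def find_suspect_pairs(events, window_minutes=30):
    """Find (harvesting, master) account pairs in submission logs.

    `events` is a list of dicts with keys: account, ip, question,
    correct (bool) and time (datetime). We flag a pair when a second
    account submits the right answer to the same question, from the
    same IP, shortly after another account first found it.
    """
    window = timedelta(minutes=window_minutes)
    correct = [e for e in events if e["correct"]]
    pairs = set()
    for first in correct:
        for second in correct:
            if (first["account"] != second["account"]
                    and first["question"] == second["question"]
                    and first["ip"] == second["ip"]
                    and timedelta(0) <= second["time"] - first["time"] <= window):
                # `first` looks like the harvesting account,
                # `second` like the master account
                pairs.add((first["account"], second["account"]))
    return pairs
```

Since the paper notes that right answers were often resubmitted within seconds, a much tighter window than 30 minutes would catch most pairs too.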

In How Mastery Learning Works at Scale, Ritter and colleagues explored whether teachers followed the rules when working with Carnegie Learning’s Cognitive Tutor – a system that presents students with maths material. The concept behind it is that students master key topics before moving on to new “islands” of knowledge, but doing so will ultimately result in a class being distributed across a variety of topics. In reality, teachers can be naughty in violating the rules – they unlock islands so that students study the same material at once – but this means that lower-performing students do not benefit from the program: less learning happens, and they are forced to move on before they have mastered the topics.

In MOOC conversations, do learners join groups that are politically siloed (The Civic Mission of MOOCs: Measuring Engagement across Political Differences in Forums)? Apparently not – there’s evidence of relatively civil behaviour, with upvoting applied even to those holding contrary viewpoints. (This would echo our experience of FutureLearn courses being generally civil and respectful.)

Automated Grading

A few of the presenters shared work attempting to develop automated approaches for grading text-based work. Adaptive learning approaches where the focus was on “ranking” rather than grading appeared to be more robust, particularly if the machine-learning process could work with gold-standard responses and then choose additional pieces of work to be TA-graded where it was uncertain.

Harnessing Peer Assessment

Generating a grade from peer-marked assignments based on mean scores was examined in a Luxembourg study where student peer grades were compared with TA grading (Peer Grading in a Course on Algorithms and Data Structures: Machine Learning Algorithms do not Improve over Simple Baselines). Surprisingly, no method was found to be more effective than applying the mean of the peer grades. There was some student bias (higher marks than TAs), which could be accounted for, but the element that could not easily be overcome was the variability resulting from students’ lack of knowledge. While TAs marked some assignments down due to errors, students did not recognise these errors and provided a higher score. This led to an interesting discussion on the circumstances in which peer grading is valid.
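The simple baseline the study found hard to beat, plus a bias correction of the kind it describes, can be sketched in a few lines (a toy illustration of my own with invented names, not code from the paper):

```python
from statistics import mean

def estimate_bias(calibration):
    """Estimate systematic peer leniency from (mean_peer_score, ta_score)
    pairs for assignments that were graded both ways."""
    return mean(peer - ta for peer, ta in calibration)

def peer_grade(peer_scores, bias=0.0):
    """Baseline grade: the mean of the peer scores, shifted down by any
    known systematic bias (peers tending to score higher than TAs)."""
    return mean(peer_scores) - bias
```

What this cannot fix, as the study found, is the variance from peers simply not spotting errors – an offset corrects a systematic shift, not missing knowledge.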

Flipping in a conference?

On day two the afternoon was given over to flipped sessions – even though most of the jet-lagged audience had failed to engage with the materials. Of those who did, it seemed that most were only really prepared to spend 30 minutes working through the content.

Peer assessment also featured in two of the flipped sessions. In one (Improving the Peer Assessment Experience on MOOC Platforms) we looked at improvements to the workflow, including an ability to rate the usefulness of reviews. In the other (Graders as Meta-Reviewers: Simultaneously Scaling and Improving Expert Evaluation for Large Online Classrooms) the authors presented peer assessments to TAs, enabling them in effect to do meta-reviews. Importantly, this resulted in better (more comprehensive) feedback; they found it was better to present comments, not grades, to reviewers to reduce bias.

Although positively received by the audience, not all the flipped presentations worked well, and as one who had tried my best over breakfast to do the preparation, I wasn’t too convinced. Having said that, we did find ourselves the subject of a live experiment, in which we reviewed reviews of conference papers from the previous day. One of the flipped presenters spoke of how preparing the materials in a flipped manner enabled him to make them available in an accessible form – after all, not everyone wants to read an academic paper.

Keynotes

Prof Sugata Mitra presented the development of his ideas from Hole in the Wall through School in the Cloud. The data-driven audience admired his zeal but asked questions about data/evidence, and visibly flinched at thoughts of times-tables being a thing of the past. “Can you really apply trigonometry by asking big questions?” one of them asked me between presentations. A point of balance came from Mike Sharples (Effective Pedagogy at Scale: Social Learning and Citizen Inquiry), who spoke about his exploration of pedagogy at scale, the history of “Innovating Pedagogy” and the rationale behind FutureLearn’s design as a social learning platform. He was keen to point out that not all approaches work for all domains. The challenge Mike presented was how we make learner objectives explicit so that meaningful inferences can be made. A really great set of slides, worth a look.

Ken Koedinger’s session on Practical Learning Research at Scale at the end of the conference rounded the two days off well. You cannot see learning, he posited, so beware illusions:

  • students watching lectures
  • instructors watching students
  • students reporting their learning
  • liking is not learning – there is low correlation between students’ course ratings and post-training skills
  • observations of engagement or confusion are not strongly predictive

Summary

A fascinating two days in which we did battle with the central problems of how to measure learning; how dialogue can be hard when there are fixed views on what learning is (fundamentally cognitive or fundamentally social); and the ongoing difficulties of providing meaningful and robust feedback at scale.