Revolutionising Learning: AI and Group Work Unveil a New Approach to Reading Activities

Navigating through the extensive volume of reading material in certain modules can be a daunting task for students, often leaving them overwhelmed by the sheer magnitude of information. Recognising this challenge, the module leaders of ONC8017 took a pioneering approach to ease the burden on students. In a bold move towards innovation, they harnessed the power of artificial intelligence (AI) and embraced the collaborative spirit of group work to revolutionise the learning experience.

Tablet showing a research paper and a robot saying "I can help with that"
Image used in Discussion Board task

The Task

Article Allocation:

The first step involved compiling a comprehensive list of articles relevant to the module’s curriculum. Each article was carefully selected to contribute significantly to the students’ understanding of the subject matter. Subsequently, the articles were allocated to individual students, ensuring that each student had a unique piece of content to delve into. Students were asked to read and summarise their assigned article.

Student Autonomy:

To cater to diverse learning preferences, students were given the autonomy to choose their preferred approach in engaging with the assigned article. They could opt to read and summarise the content independently, a traditional method fostering individual comprehension and analysis. Alternatively, students had the option to choose an AI tool for summarisation, exploring the cutting-edge capabilities of technology to streamline the information extraction process.

Students who opted to use an AI tool were tasked with critiquing the summaries generated. This not only encouraged a deeper engagement with the material but also honed their analytical skills as they assessed the accuracy, coherence, and relevance of the AI-generated summaries.

Following consultations with the Newcastle University Library, we recommended the AI tools Scholarcy and TLDR This. However, students were able to choose any tool that best suited their preferences. The library also provided valuable guidance, including a copyright statement and links to AI guidance and to the uses and limitations of AI.

If your allocated article is behind a sign-in wall, we kindly request that you do not upload or share this licensed material with third-party AI tools.

Copyright statement
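The kind of automatic summarisation these tools perform can be illustrated with a deliberately simple sketch. This is a generic, hypothetical illustration, not how Scholarcy, TLDR This or ChatGPT actually work (modern tools use large language models rather than word counts); a basic extractive summariser simply scores sentences by the frequency of the words they contain:

```python
from collections import Counter
import re

def extractive_summary(text, n_sentences=2):
    """Return the n highest-scoring sentences, scored by word frequency,
    in their original order (a toy illustration of extractive summarisation)."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))
    # Score each sentence by the total corpus frequency of its words
    scored = [(sum(freq[w] for w in re.findall(r'\w+', s.lower())), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scored, reverse=True)[:n_sentences]
    # Restore the original sentence order before joining
    return ' '.join(s for _, _, s in sorted(top, key=lambda t: t[1]))
```

A sketch like this also makes the students' critiques easy to understand: an extract-and-rank approach inevitably drops detail, which is exactly the "gaps and missing information" students reported.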

Group Collaboration:

The students were asked to share their summaries to a discussion board and to look through the summaries posted by others. They could then identify which literature was most relevant to them and read the articles in depth themselves.

Recognising the significance of collaborative learning, the module leaders fostered a sense of community among students. Group discussions and collaborative sessions were encouraged, providing a platform for students to share insights, discuss varying perspectives, and collectively enhance their understanding of the subject matter. This collaborative element not only enriched the learning experience but also mirrored the collaborative environments students are likely to encounter in their future careers.

The Student Experience

40% used, 47% didn't use, 13% unable to use AI

53% of students opted for AI-assisted summarisation, showcasing a keen interest in exploring the capabilities of technology for academic purposes. This choice not only demonstrated a desire for efficiency but also provided students with valuable hands-on experience in harnessing AI tools for practical applications.

However, the practical application of AI tools had its challenges. 25% of students who chose AI encountered difficulties, with the tools unable to produce a summary at all.

Tools used: TLDR This (4), Scholarcy (3), ChatGPT (4), unknown (1)

In their candid feedback, students highlighted both positive and negative aspects of their experiences. While some were impressed by the efficiency of AI tools, all students expressed concerns about gaps and missing details in the generated summaries. Specific instances of errors, omissions, and disjointed reading experiences were noted, revealing the practical limitations of relying solely on AI for complex tasks. The majority of students who opted for AI eventually chose to summarise the articles manually anyway, indicating a less-than-ideal outcome from the AI tools.

The AI tool also provided a second longer summary. This summarised most sections of the paper individually, which was presented like a smaller version of the paper. There was still important information missing, which was clear from the disjointed reading experience. Even so, I was still quite impressed with how well the AI tool had summarised the vast amount of information in the original paper into something relatively usable. 

Student experience of Scholarcy

No inaccuracies were noted. Good summary of the epidemiology, although it seems that the AI summary has basically just been derived from the abstract of the article. A number of gaps were identified. 

Student experience of TLDR This

The article has been summarised into ten key points, but these are not detailed. For example, only one of the statistics provided in the article has been included in the AI summary.

Student experience of ChatGPT

Final Thoughts

These nuanced results underscore the importance of balancing technological innovation with practical considerations. While the incorporation of AI offered students valuable exposure to emerging technologies, the ultimate outcome indicated that, as of now, AI tools might not be the ultimate solution we were hoping for.

Despite the unexpected challenges encountered in the use of AI, this experiment has provided invaluable insights. Recognising the evolving nature of technology, we remain committed to maintaining the task, observing how AI technology progresses year after year and seeing whether, as the technology advances, the dialogue from students changes.


This post was written with the assistance of the AI tool ChatGPT.

2023 Roundup

We hope you have enjoyed learning more about the work we do in FMS TEL. Below are a few of the areas we covered and the successes we had in 2023. If you have any suggestions for content for 2024, we would love to hear from you.


FMS TEL Successes

FMS TEL’s Ashley Reynolds was involved in a piece of work, entitled Evaluation of the Training in Early Detection for Early Intervention (TEDEI) e-learning course using Kirkpatrick’s method, in collaboration with Dr Anna Basu and Janice Pearse, which was published in the BMC Medical Education online journal. Read our blog piece from March 2023.

Simon Cotterill, Gemma Mitchelson and Michael Hughes succeeded in securing funding from the Educational Research Development and Practice fund to explore the use of AI with contextualised and personalised data.


Conference

FMS TEL attended Newcastle University Learning and Teaching Conference 2023 with a stand, where we answered questions about what we do and how to contact us, and handed out a booklet detailing some case studies of our work.

FMS TEL stand

We also displayed a poster by Dr Michelle Miller and presented a video from Eleanor Gordon and Gemma Mitchelson.

Dr Iain Keenan presented MOOC Adventures: From Conception to Reality at the Newcastle University Learning and Teaching Conference 2023. FMS TEL worked with Iain on this course and he highlighted how helpful it has been to work alongside FMS TEL to bring the course to life.


Software and Systems

We explained how you can create your own WordPress blog, and presented our experience of running this blog to the Directors of Education Forum.

With all the discussion of AI and ChatGPT, we blogged about ChatGPT: what it is and how it works. We also reviewed Gamma.app, an AI tool for generating presentations, documents or web pages.

We were invited to review Audience Interaction System, and we profiled the FMS Feedback System, produced by our development team in FMS TEL.

We attended a great presentation on GigXR, which is a clinical simulation platform. It is an immersive technology which projects 3D holographic objects, which you can interact with when wearing a headset. We also introduced a new tool for PowerPoint which allows you to put a live video feed into PowerPoint: Cameo for PowerPoint.


Tips and Guides

We published a series called Taking Ctrl, which details keyboard shortcuts you can use to perform some actions. Here is an example: Taking Ctrl: Paste text without Formatting

We posted some advice on Spring cleaning your digital clutter, such as tidying up Teams and OneDrive, and we provided tips on displaying meetings in your Outlook Calendars.

We had a special guest post by Module Leader David Thewlis discussing Overlays in videos using OpenShot Video Editor. We also featured work with Associate Lecturer Ann Johnson on developing Online Asynchronous Materials and looking at Unconscious Bias in Healthcare.

Prompted by an enquiry from Michelle’s poster presentation, we detailed how to add audio to PDF documents.

We presented a case study on Giving life to old presentations, showcased a Branching Activity, bringing an exercise to life with videos in H5P and Canvas, and showcased some of our other favourite creations in H5P, such as interactive videos and 360 tours.


Michelle Miller Tributes

We’d like to take this opportunity to also pay tribute to our colleague Dr Michelle Miller, who sadly passed away in June of this year. Michelle was our Student Digital Skills Officer, training FMS students in writing long documents, using Word, Excel, PowerPoint, and much more.

Below are comments from some members of our team.

It was an absolute pleasure to work with Michelle. She was a ray of light who lit up the whole room. I loved her positive outlook on life (no matter what she was going through), and her passion for cats.

Tracy Connell

Such a truly lovely lady, and a big miss from the team.
Will raise a Guinness to her soon.

Ash Reynolds

Her empty desk is a cheerful nod to the vibrant presence she brought to our office. Always ready to help, she played a key role in my growth, both professionally and personally. I am very grateful for the impact she had on me.

Emily Smith

Michelle was an esteemed colleague who is a huge miss both in our TEL team, and across many of the schools that she supported. She was selfless and always willing to help. I feel honoured to have worked with such a wonderful colleague.

Gemma Mitchelson


Thank You!

The blog is edited by a different FMS TEL team member every month, and many team members have taken on this task, as well as contributing posts to the blog – thank you to all of you! Our thanks also go to those colleagues who have offered their examples of practice for us to showcase here. We look forward to working with many more of you in 2024.

ERDP Project: Exploring AI

As we develop our understanding of AI and its capabilities, we are looking at how advancing technologies such as ChatGPT might assist colleagues and students with day-to-day tasks. FMS TEL team members Simon Cotterill, Gemma Mitchelson and Michael Hughes succeeded in securing ERDP funding to explore such possibilities.

Project aims:

  1. To enable staff and students to access contextualised and personal data via AI machine learning software
  2. To investigate a process for generating AI responses in a more ethical way 
  3. To improve the University’s understanding of AI machine learning in an HE context.

We are investigating the use of contextualised data, formatted with natural language and optimised for AI. For example, a student’s programme and module information, their timetable data, and MOF information could be used to help the student access key information more easily. Later this could be enhanced with richer information, such as programme/module study guides, VLE course information and other sources, via APIs. Likewise, a chatbot for staff could draw together University, Faculty and School-specific information.
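A minimal sketch of this idea follows. The field names and wording here are illustrative assumptions, not the project's actual data model: the point is simply that structured records can be flattened into plain sentences that an LLM can use as prompt context or as text for embedding.

```python
def student_context(record):
    """Render a structured student record as natural-language context
    for an LLM prompt or embedding (field names are illustrative only)."""
    lines = [
        f"The student is enrolled on the programme {record['programme']}.",
        f"They are currently taking the modules: {', '.join(record['modules'])}.",
    ]
    # Timetable entries become one plain sentence each
    for event in record.get("timetable", []):
        lines.append(f"On {event['day']} at {event['time']} they have "
                     f"{event['activity']} in {event['room']}.")
    return " ".join(lines)

record = {
    "programme": "MBBS",
    "modules": ["ONC8017", "ONC8030"],
    "timetable": [{"day": "Monday", "time": "09:00",
                   "activity": "a lecture", "room": "Room 2.1"}],
}
print(student_context(record))
```

Text produced this way could then be supplied to a chatbot as context, so that answers are grounded in the student's own programme and timetable rather than generic information.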

An exciting new feature to be explored is that of agents (aka ‘Assistants’) and their ability to take on different functions and personas, effectively acting as a small workforce to support user needs. Up to now, most of us have been familiar with having a conversation with a single agent, yet there is growing scope for multi-agent use. In the visual below you can see an AI agents overview from ChatDev. By setting instructions and ’embedding’ information into the system, users can encourage each agent to behave differently. For example: “You are a member of the Design team who will come up with simple ways to achieve a set goal”, or “You are a CEO who will talk to the CTO about what steps should be taken to achieve X, Y and Z.”

A picture showing agents positioned in various roles, led by a virtual CEO.

image source: https://github.com/OpenBMB/ChatDev
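To make the persona idea concrete, here is a small hypothetical sketch (not ChatDev's or OpenAI's actual API; the `Agent` class and its fields are our own illustration). Each agent carries its own system instructions, so the same user request sent through different agents yields different, role-appropriate behaviour:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    system_prompt: str  # the 'embedded' instructions that shape this persona

    def build_messages(self, user_request):
        # Each agent prepends its own persona instructions to the request
        return [{"role": "system", "content": self.system_prompt},
                {"role": "user", "content": user_request}]

ceo = Agent("CEO", "You are a CEO who decides which steps the team takes.")
designer = Agent("Designer", "You are a member of the Design team who "
                             "proposes simple ways to achieve a set goal.")

# Each message list would be sent to a chat-completion endpoint separately,
# so the same request produces answers in each agent's distinct role.
for agent in (ceo, designer):
    msgs = agent.build_messages("Plan a simple mobile game.")
    print(agent.name, "->", msgs[0]["content"])
```

In a multi-agent system such as ChatDev, the outputs of one agent (the CEO's plan, say) become the inputs of another (the Designer), which is what makes the "small workforce" metaphor apt.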

The technologies are evolving very rapidly. At an “AI Sprint” in late November, the FMS TEL team were able to work with newly released features from OpenAI; these make the embedding of customised information and personalising Assistants (agents) much more accessible. These and other new features support the integration of AI features within other systems. As such, there is likely to be a proliferation of new AI products and plugins based on these features – and hopefully eventually within the systems supported by FMS TEL. All work in our project will be cross-referenced to our university principles on AI which can be accessed here:  Artificial Intelligence (AI) | Learning and Teaching @ Newcastle | Newcastle University (ncl.ac.uk)

There are challenges to consider, in particular those related to Data Protection, which we continue to review. There are also financial considerations: external AI services such as OpenAI, or access via the Azure API, are metered (pay according to use) rather than offered on fixed-price plans, and these costs need investigating as part of our intended trial/pilot.

We are in the early stages of fact-finding, but will be reaching out to FMS schools in the new year with an invitation to workshops to share our proof of concept.

Taking Ctrl: Ctrl Taken

control taken, wrapping up the taking control series

Thanks to everyone who submitted their favourite keyboard shortcuts. Here is a round up of all the shortcuts mentioned:

  • Open File Explorer: Windows key + E
  • Paste text without formatting: Ctrl + Shift + V (Windows); Option + Shift + Cmd + V (Mac)
  • Undo: Ctrl + Z (Windows); Cmd + Z (Mac)
  • Redo: Ctrl + Shift + Z or Ctrl + Y (Windows); Cmd + Shift + Z (Mac)
  • Lock your computer: Windows key + L (Windows); Control + Cmd + Q (Mac)
  • Reopen closed tabs (e.g. a tab in Chrome): Ctrl + Shift + T (Windows); Cmd + Shift + T (Mac)
  • Incognito/private mode: Ctrl/Cmd + Shift + N (Chrome); Ctrl/Cmd + Shift + P (Firefox); Cmd + Shift + N (Safari)
  • Snipping tool: Windows key + Shift + S (Windows); Shift + Cmd + 4 (Mac)
  • Insert a link: Ctrl + K (Windows); Cmd + K (Mac)
  • Arrange windows: Windows key + left/right/up/down arrows
  • Developer tools device mode: F12 (Windows); Cmd + Opt + I (Mac)
  • Focus Assist/Do Not Disturb: Windows key + A, then Focus Assist (Windows); Option + click the Notification Center icon (Mac)

Summary Table

We will be starting a new and exciting series in 2024!

Taking Ctrl: Focus Assist – Managing Your Digital Wellbeing

Managing Your Digital Wellbeing

The Problem

With a constant influx of notifications, emails, and alerts on our computers, staying focused on the task at hand can be a challenge. Distractions break our concentration, decrease productivity, and contribute to digital fatigue. How can you ensure a distraction-free environment when you need to concentrate on your work?

The Solution

Windows: Focus Assist

  • Quick Toggle: Windows key + A (to open the Action Center), then click on Focus Assist to toggle it between Off, Priority Only, or Alarms Only.
  • Customise: Go to Settings > System > Focus Assist to customise which notifications you want to see and when.

Mac: Do Not Disturb

  • Quick Toggle: Option + click the Notification Center icon at the top right of your screen.
  • Schedule: Go to System Preferences > Notifications > Do Not Disturb to set a schedule for when you want to silence notifications.

The Result

Activating Focus Assist on Windows or Do Not Disturb on Mac allows you to control when and how notifications appear. This can lead to better focus, improved productivity, and more significant periods of uninterrupted work. Plus, managing your digital wellbeing can reduce stress and enhance your overall experience while working on your computer.

Unveiling Role Play North: A Dive into Specifications

Infographic Process

Image by Trang Le from Pixabay

In a world where the demands on medical professionals are at an all-time high, the need for effective communication has never been more crucial. Imagine a scenario where physicians must deliver devastating news to a patient: news that could alter the course of a life, news that might even imply the end of life. I have often wondered how people can deal with that as part of their job. I found out through my first project as a Learning Technologies Developer at the University: the redevelopment of a web service that supports Role Play North (RPN).

RPN plays a crucial role in preparing our MBBS students to deal with those types of scenarios. Through tailored and realistic scenarios, RPN teaches the theory of good communication and helps students become ready to apply these findings in practice.

In the first of a series of three posts about the redevelopment of RPN, we’ll look at the following aspects of the project: specifications, system features, and future plans. In this post I will walk you through the specification process.

Specifications: Building the Foundations

Firstly, we looked into what the existing system did. Amongst other features, it allowed RPN staff to add events and to manage, via spreadsheets, which role players were on which event and the role they would play there; this information was then added to the event on the system. RPN also had a separate Access database that provided the queries required for payroll reports.

Action Function Requirement Document (AFR)

I used a document our team has worked with previously, which lists the tasks/actions, their functions and their data requirements, to get RPN to think about the whole process and break it down into steps. This document was used to gather feedback on how RPN expected the actions to work, and gave them the opportunity to add tasks if required.

Here’s an example of what that looks like:

  • a) Events. Function: create, update and delete events. Data requirements: start date, end date, description of event.
  • b) Event tag. Function: assign an event tag to an event; this can be used to filter the events. Data requirements: event tag name/title.
  • c) Add/remove role player (event). Function: on the events page, add a role player to the event. Data requirements: role player, event. Notes/feedback: does this need to be automatic, or do you want to add them as pending first until they have confirmed participation?
  • d) Scenario. Function: create, update and delete scenarios. Data requirements: scenario title, description.

After feedback from RPN, we converted this document into a more functionally focused document. Breaking down actions into tasks allowed us to plan what needed to be developed first, including which actions could be worked on independently or by other members of the team. Here’s what we ended up with.

Role Play North Gantt Chart
Role Play North Development Gantt Chart

From this Gantt chart you can see that the Events block (1a, 1b) and the Event Groups block (2a, 2b) need to be completed before the Venues block (4a) is worked on. You can also see that once 1a is completed, someone else can start work on Role Player Management (3a, 3b). We also gave time for testing and reviewing what the new system does, giving RPN staff the chance to influence the build and find bugs. We had multiple testing stages: one after the Event Groups and Role Player Management blocks were completed, and another back in April after all listed tasks were completed.
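The ordering a chart like this encodes can be sketched as a simple topological sort. The block labels below mirror the ones described above, but the dependency map itself is an illustrative reconstruction, not the project's actual planning data:

```python
from graphlib import TopologicalSorter

# Predecessors for each block: Events (1a, 1b) and Event Groups (2a, 2b)
# must finish before Venues (4a); Role Player Management (3a, 3b) can
# start as soon as 1a is done.
deps = {
    "1b": {"1a"},
    "2b": {"2a"},
    "3a": {"1a"},
    "3b": {"3a"},
    "4a": {"1b", "2b"},
}
order = list(TopologicalSorter(deps).static_order())
print(order)  # one valid build order respecting every dependency
```

Expressing a plan this way makes it easy to spot which tasks can run in parallel: anything with no path between them (such as 3a and 2a here) can be picked up by different team members at the same time.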

The first iteration included all the fundamental features and actions of the website, including user authentication, role-based access, basic management of events and customers, and communication tools. Nothing too complicated, but essential to the functioning of the website and a good starting point for any future project.

The above image and the AFR document do not show how long each section took, as unknown variables such as feedback, changes in requirements and staff commitments affected the timeframes. However, through a continuous review process we added weight or complexity to these tasks.

You can also see that 1a and 1b are part of the Events block by looking at the legend below the branches, and you can see which tasks were completed. We also added a cross next to each item once it was complete, to keep us updated on where we were in the build without digging through the code.

We then created wireframes showing how the new system could work. With the AFR, Gantt Chart and the Wireframes we were able to clearly outline what we intended to build for RPN. In my next post I will go through the features.


Case Study: ONC8030 Branching Activity

The Idea

Inspired by an H5P talk at the NU Learning and Teaching conference, Kay McAlinden, module leader for ONC8030 Psychosocial Issues in Advanced Disease, approached the FMS TEL team about creating a branching activity.

We jumped at the opportunity to be involved and offered our services to film and edit the videos, and also build the activity in H5P and Canvas.

Bringing the idea to life

Kay took the lead in crafting the scripts and coordinating an actor. Pip Davies, roleplaynorth Co-ordinator, kindly offered to play our patient, and Kay took the role of the Health Care Professional.

Emily Smith and Tracy Connell from the FMS TEL Team volunteered to be our videographers and canvassed the campus to find a suitable room to be our ‘doctor’s office’. Once the scripts were finalised and the room was staged, filming commenced. Using two cameras and a team of two videographers, we were able to film both the patient and the health care professional scenes at the same time. This aided the flow of the conversation for the actors, and filming was completed within a couple of hours, with brief interruptions for the inevitable giggles and occasional bloopers.

showing branching activity in H5P editor mode
Editor View of H5P Activity

Next, we moved on to the task of editing the footage, which was completed in Adobe Premiere Pro, and building the task in H5P. We had filmed two options for the patient’s thoughts, so a few different variations were created. After choosing a variation, making any further adjustments and proofreading the captions, the activity was finalised.

All in all, the process took around a week of work.

The Final Layout

The activity is situated within a discussion board on Canvas, providing students with the opportunity to complete the task and subsequently engage in discussions with their peers. This setup also enables us to gather valuable feedback regarding the students’ perceptions of the task.

Canvas discussion board with embedded H5P activity
Canvas Discussion Board

After the introduction and instruction slides, the first statement is introduced. Students are then prompted to pick from three responses. The corresponding response video is played, and students get an opportunity to reflect on the response and consider the patient’s perspective. Finally, we get to hear the patient’s thoughts. There are seven statements for the students to work through, guiding students through a sequential conversation with the patient and fostering a sense of engagement and interaction.

Student Responses

While the activity is not actively monitored, we are able to track which students have completed the task, view their selected options, and read their written responses. These insights will prove valuable when we conduct future reviews of the activity.

H5P results page showing student chose option 3
Results page – Branching Route
results view of H5P showing free text comments
Results page – Free text comments

Student feedback

We have been inundated with wonderful feedback from all the students. Below are some of our favourite quotes:

“This was a really valuable reflection for me. I could think back to how I had previously handled discussions like this. Had I perhaps been too quick to try and offer solutions and fix things. “

“This was quite an interesting exercise. It is quite easy to tip into problem solving and fixing, however what this highlights is that stepping back, allowing space for difficult emotions to be expressed”

“This was really interesting, I liked that there was not necessarily a wrong answer but makes you think about how you are wording answers to patients and the sequence to presenting the information to the patient.”

“I thought that was an interesting activity. It highlighted the importance of actively listening to what your patient is telling you and not just them with more information…. It was interesting sometimes the one i picked wasnt said in the way I would have said and therefore the response from the patient wasn’t what I would have expected.”

“I thought this was a really effective activity; it made me really think about how I would respond in these situations and how to put myself in Linda’s shoes to try and understand her feelings. I found that my responses developed through the activity and could see how by offering more space and time for Linda to talk it allowed her to open up and feel understood. At the start I definitely could see myself trying to find solutions to her problems but it became apparent that answers weren’t what Linda was seeking, she just wanted reassurance and to be understood. This will be useful to take into practice.”

“As a healthcare professional, this activity has opened my eyes to the fact that more often then not, we have been taught to focus on the physical issues and to prioritise first the physical health, and then the psychosocial aspects of the person’s life. During this activity, I realised how therapeutic empathy can be in itself.”

Final Thoughts

The creation of this branching activity, led by Kay McAlinden and supported by RolePlay North and the FMS TEL team, has been a successful collaboration. Scriptwriting, filming, and editing came together to create an engaging and interactive learning experience for the students.

The use of technology, including Premiere Pro and H5P, was essential for executing the activity seamlessly and collecting valuable data for future improvements.

The overwhelmingly positive feedback from students underscores the effectiveness of this activity in enhancing their learning experience. It’s a fantastic example of how technology and teamwork can result in innovative and impactful activities that students truly enjoy and appreciate.


Your Next Step: Resource and Support

This activity has kindly been shared with all staff at Newcastle University and is available in the Faculty of Medical Sciences > Generic Content folder within H5P.

all content, FMS, generic content, making empathic responses
Folder View in Canvas H5P app.

Taking Ctrl: Device Mode

The Problem

When you’re putting together course materials, it’s important to think about how it’ll look to your students. Laptops and monitors come in all shapes and sizes, so what looks good on your screen might not on someone else’s.

It is also becoming increasingly common for students to access content on their mobile phones and tablets. How can you efficiently ensure that everything appears visually pleasing and functional across these diverse devices?

The Solution

Windows: F12
Mac: Cmd + Opt + I

This opens the developer tools, which include a ‘device mode’ where you can see how your content will look and function on different devices. The example below is using Chrome on a Windows machine:

Example showing responsive and iphone views

Learn more: Chrome Documentation


Enjoy this post? Check out the others in our Taking Ctrl series.

FMS AI Project

A drawing of two different halves of a brain left side is connected with electronic circuits representing logic and the right side full of 70s style paint drops representing creativity.

With the rise of Large Language Models (LLMs) and their potential this year, the FMS TEL Team have been successful in an application for funding. We are in the very early stages of planning how we can integrate some of our services with an LLM while also maintaining security over the data.

We have looked at feedback from a recent survey and are taking on board ideas from colleagues and students, to help guide us through this exploratory work.

This is just the beginning and we will keep you updated on our progress.

Presentations powered by A.I – Gamma.app Review

Gamma.app is an AI-based tool which generates presentations, documents, and webpages. Its focus on presentations makes it of potential interest to those involved in teaching and learning. https://gamma.app

Quick Look:

In ‘guided mode’, I gave the title ‘history of Newcastle upon Tyne’, and Gamma provided a choice of templates and then generated a suggested presentation structure within a few seconds. It then generated a deck of 8 slides in about a minute. The slide deck included relevant images and could be exported as PowerPoint or PDF. Additionally, Gamma allows for the import of custom text, which it adapts and converts into a slide deck or document. The ‘AI editor’ provides options such as “Suggest a professional theme”, “Give more detail” and “Give me a more exciting way to say this”.

The Gamma app in use showing chat interaction with AI and the 3 slides generated.
Cost:

Gamma currently (October 2023) has a three tier model:

  1. Free limited use: a one-off 400 ‘AI Credits’ (credits are used each time you generate a document); exported slides and documents are branded
  2. ‘Plus’: £78/year, 400 ‘AI Credits’ per month
  3. ‘Pro’: £147/year, unlimited credits and extra features
Thoughts:

Gamma is a powerful tool which can quickly generate slide decks and documents that are ‘usable’ with little modification. With all the focus on the tools of the ‘big players’, such as Microsoft/ChatGPT and Google, it is refreshing to see a tool from a seemingly independent company (though, like many other AI apps, it may well be using the back-end services of ChatGPT).

Of course, to use A.I. generated materials, it is important to have grounded subject knowledge and critically review and adapt outputs, to avoid mistakes. It is also important to carefully word the prompts which you provide to the A.I.; for example, a presentation generated for me by Gamma, about “Newcastle University”, included accurate information about the 19th century pre-cursors of the University of Newcastle upon Tyne, but then mentioned a merger with UCL in 2002, and included a photo of Newcastle University in Australia.

There are obvious plagiarism and academic integrity issues to consider. In common with most other AI apps, there is no acknowledgement of the source materials used in training the AI. As such, it may be partly based on copyrighted materials and licenced content such as Wikipedia, which has an Attribution-Share-Alike licence.

Likewise, the sources of images aren’t acknowledged, though the ‘AI Editor’ does give the options of ‘all images’ (even if licencing is unknown), ‘Free to use’ (which seems ‘loose’, as it includes sources which don’t generally display image licence information, such as Facebook and Twitter) and ‘Free to use commercially’; you can also click through to the source of an image.

The pricing model for Gamma is similar to that of other AI tools, all of which lead to the universal problem of inequality of access, giving an advantage to students from more well-off backgrounds. But these tools are widely available now, and this is the new reality that HE needs to adapt to.