How to motivate and engage students with authentic video

Sue Kay
Reading time: 4 minutes

Sue Kay has been an ELT materials writer for over 25 years. She is the co-author of app's Focus Second Edition and is one of the co-founders of . In this article, Sue takes us through her experience of using video in the classroom and shows us how to motivate and engage students with authentic video.

Videos are no longer a novelty

When I started teaching in the early 80s, video was a novelty in the classroom. We only had one video player for the whole school and had to book it a week in advance. There was very little published material available, but thanks to the rarity factor, the students lapped it up.

There was no problem with getting them motivated, even if the lessons accompanying the videos were not particularly exciting and consisted mainly of comprehension questions. Lucky for me, our school had a very dynamic Director of Studies who gave great teacher training sessions and I was very taken with a presentation he did on active viewing tasks.

I was, and still am, a big fan of the Communicative Approach and I embraced the more interactive video tasks enthusiastically: freeze frame and predict, watch with the sound down and guess what people are saying, listen with the screen hidden to guess the action, etc.

When I’m preparing a video lesson, I still try to include at least one of these activities because the information gap provides an ideal motivation for students to watch the video and check their ideas.

Motivating students with video

Is video a good motivational tool to use in the classroom today?

In the old days, video could motivate and engage a class because it was something relatively new, but what about nowadays when video has gradually moved from a ‘nice-to-have’ element of new courses to a ‘must-have’?

We teachers have unlimited access to videos, either those that accompany our courses or those we find on the internet. But has this affected motivation? Has video become harder to use as a motivating factor in the classroom?

Yes and no.

Teenagers have grown up with a smartphone in their hand. They live their lives through video - filming themselves or one another, uploading and sharing video content on TikTok or other similar platforms, accessing YouTube on a variety of devices, and even aspiring to be like the YouTubers they spend hours and hours watching.

The importance of this to the way we teach is summed up by this quote from The Age of the Image: “We can’t learn or teach what we can’t communicate – and increasingly that communication is being done through visual media.”

Video is the ideal medium for teaching 21st-century skills and visual literacy. None of my students bat an eyelid when I ask them to make a video for homework, film themselves telling an anecdote, watch a grammar explanation online, or do some online research.

But in terms of what we watch in class, our videos need to work harder than before. In my experience, students won’t tolerate boring or unnatural videos, just because they’re in English. Because they watch so many online films, documentaries and series, students are used to high production values, strong narratives and authentic material.

What makes a motivating and engaging video?

When we were writing the second edition of Focus, we were lucky enough to have access to the BBC archives. However, just because something has appeared on the BBC doesn’t mean it is suitable for our students. In my experience, there are certain criteria that need to be fulfilled in order to motivate and engage students with video.

The wow factor

First of all, it helps if a video has a visual wow factor. This may be an unusual setting or a location with breathtaking scenery. If there’s no visual interest, you may as well do an audio lesson. However, stunning places and incredible landscapes won’t hold the students’ interest for very long.

Relatability

There also has to be something in the video that students can relate to their own lives. For instance, one of the clips we chose for Focus Second Edition is set in an amazing place in Turkey, popular with tourists who visit in hot air balloons. The students are unlikely to have visited this place, so to make it relatable and interesting for them, we chose an extract that focuses on the caves that older generations still inhabit, while the younger generation have moved to the nearby cities. The theme of young people leaving the countryside for the big city will be familiar anywhere in the world.

An inspiring story

A generation that has access to endless TV series and films on demand expects a good story. While this can be an episode from a drama, it doesn’t have to be fiction. It can be an inspiring story of human achievement or any kind of human-interest story that follows a journey and has a story arc.

Social relevance

Generation Z and Alpha tend to be very socially engaged and open-minded; they want to change the world. So videos that air social issues are ideal as a stimulus for discussion. For example, in Focus Second Edition we’ve included a video clip about a project that’s underway in the Netherlands, where students can have low-priced accommodation in a care home in return for some help with the elderly residents. In class, we’ve used this video clip as a springboard for discussing relationships across generations.

Why are these types of video more motivating?

Videos that fulfil these criteria raise motivation in class because they facilitate more interesting lessons. If the video is visually engaging, it’s easier to write the active viewing tasks I mentioned earlier. If the topic is relatable on some level, the lesson can include personalization and discussion, which wouldn’t work if the content was so far removed from the students’ reality that they have nothing to say about it.

I’m particularly keen on videos that are engaging enough to facilitate follow-up tasks that might spark the students’ imagination and help to put them in other people’s shoes. For example, in Focus Second Edition, we’ve included a video about window cleaners on the Burj Khalifa in Dubai, the tallest building in the world.

As you can imagine, the film shots are breathtaking and have not only the wow factor but the ‘agh factor’ too for anyone who is afraid of heights. The presenter, Dallas, joins the experienced window cleaners, and you can’t help but hold your breath as he climbs out onto the side of the building, with a sheer drop of 800 meters below him. No wonder he has a dry mouth.

The short clip is so engaging that the lesson practically writes itself. Here are a couple of examples of follow-up tasks that are only possible because the video holds the students’ attention and ignites their imagination:

After you watch

  • You are Dallas and you want to learn more about the daily routine of the window cleaners at the Burj Khalifa. In pairs, decide on a list of five questions you want to ask the window cleaners about their job.
  • Imagine that you are Dallas and write an article about your work experience on the tallest building in the world.

All this increased volume and choice of video is great, and video certainly still has the power to motivate our students, but I believe we teachers and materials providers need to focus more on the quality of the videos we use in class than the quantity.

Bibliography

  • Apkon, S. (2013) The Age of the Image. Farrar, Straus and Giroux
  • Digital Learning Associates. digitallearningassociates.com
  • Donaghy, K. (2015) Film in Action. Delta Publishing
  • Goldstein, B. & Driver, P. (2015) Language Learning with Digital Video. Cambridge University Press
  • Keddie, J.
  • Donaghy, K. & Whitcher, A. ELT Teacher 2 Writer

More blogs from app

  • Can computers really mark exams? Benefits of ELT automated assessments

    By app Languages

    Automated assessment, including the use of Artificial Intelligence (AI), is one of the latest education tech solutions. It speeds up exam marking times, removes human bias, and is at least as accurate and reliable as human examiners. As innovations go, this one is a real game-changer for teachers and students.

    However, it has understandably been met with many questions and sometimes skepticism in the ELT community – can computers really mark speaking and writing exams accurately? 

    The answer is a resounding yes. Students from all parts of the world already take AI-graded tests. Versant tests, for example, provide unbiased, fair and fast automated scoring for speaking and writing exams – irrespective of where the test takers live, or what their accent or gender is.

    This article will explain the main processes involved in AI automated scoring and make the point that AI technologies are built on the foundations of consistent expert human judgments. So, let’s clear up the confusion around automated scoring and AI and look into how it can help teachers and students alike. 

    AI versus traditional automated scoring

    First of all, let’s distinguish between traditional automated scoring and AI. When we talk about automated scoring, generally, we mean scoring items that are either multiple-choice or cloze. You may have to reorder sentences, choose from a drop-down list, insert a missing word – that sort of thing. These question types are designed to test particular skills, and automated scoring ensures that they can be marked quickly and accurately every time.
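
    To make this concrete, here is a minimal sketch in Python of that kind of objective auto-marking – an illustration only, not the scoring engine described in this article. The question IDs and answer key are invented for the example:

        # Traditional automated scoring for objective items (multiple-choice,
        # cloze, reordering): compare each response against an answer key.
        ANSWER_KEY = {
            "q1": "b",                               # multiple-choice: correct option
            "q2": "went",                            # cloze: the missing word
            "q3": ["I", "often", "play", "tennis"],  # reordering: correct sequence
        }

        def score_objective_test(responses):
            """Return the number of correct answers by direct comparison with the key."""
            score = 0
            for item, correct in ANSWER_KEY.items():
                answer = responses.get(item)
                # Normalize simple string answers so 'Went ' still counts as correct.
                if isinstance(correct, str) and isinstance(answer, str):
                    answer = answer.strip().lower()
                    correct = correct.lower()
                if answer == correct:
                    score += 1
            return score

        print(score_objective_test({"q1": "b", "q2": "Went ", "q3": ["I", "play", "often", "tennis"]}))  # 2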

    While automatically scored items like these can be used to assess receptive skills such as listening and reading comprehension, they cannot mark the productive skills of writing and speaking. Every student's response in writing and speaking items will be different, so how can computers mark them?

    This is where AI comes in. 

    We hear a lot about how AI is increasingly being used in areas where there is a need to deal with large amounts of unstructured data quickly and consistently – in medical diagnostics, for example. In language testing, AI uses specialized computer software to grade written and oral tests.

    How AI is used to score speaking exams

    The first step is to build an acoustic model for each language that can recognize speech and convert the audio into text. While this technology used to be very unusual, most of our smartphones can do it now.
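
    As a rough sketch of this speech-to-text step, the open-source SpeechRecognition library can stand in for the purpose-built acoustic models described above; the audio file name below is just a placeholder:

        # Transcribe a recorded spoken response using the SpeechRecognition
        # library (a stand-in for the acoustic models described in the article).
        import speech_recognition as sr

        recognizer = sr.Recognizer()

        # Load the recorded response and capture its audio data.
        with sr.AudioFile("student_response.wav") as source:
            audio = recognizer.record(source)

        try:
            transcript = recognizer.recognize_google(audio)  # sends the audio to a free web API
            print("Transcript:", transcript)
        except sr.UnknownValueError:
            print("Speech could not be recognized.")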

    These acoustic models are then trained to score every single prompt or item on a test. We do this by using human expert raters to score the items first, using double marking. They score hundreds of oral responses for each item, and these ‘Standards’ are then used to train the engine. 

    Next, we validate the trained engine by feeding in many more human-marked items and checking that the machine scores correlate very highly with the human scores. If this doesn’t happen for any item, we remove it, as it must match the standard set by human markers. We expect a correlation of between .95 and .99, which means the machine scores track the expert human scores almost exactly.

    This is incredibly high compared to the reliability of human-marked speaking tests. In essence, we use a group of highly expert human raters to train the AI engine, and then their standard is replicated time after time.  
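
    Here is a minimal sketch of that validation step, assuming we already have paired human and machine scores for each item. The item IDs, score lists and 0.95 cut-off are illustrative, and statistics.correlation requires Python 3.10 or later:

        # Keep only the items whose machine scores correlate very highly with
        # the expert human scores; remove the rest.
        from statistics import correlation  # Pearson's r

        item_scores = {
            # item_id: (human_scores, machine_scores) for the same set of responses
            "item_01": ([3, 4, 2, 5, 4, 3], [3, 4, 2, 5, 4, 3]),
            "item_02": ([2, 5, 3, 4, 1, 4], [4, 2, 3, 1, 5, 2]),
        }

        def validate_items(scores, threshold=0.95):
            """Return the IDs of items whose human/machine correlation meets the threshold."""
            retained = []
            for item_id, (human, machine) in scores.items():
                r = correlation(human, machine)
                if r >= threshold:
                    retained.append(item_id)
                else:
                    print(f"Removing {item_id}: r = {r:.2f} is below the threshold")
            return retained

        print(validate_items(item_scores))  # keeps 'item_01' only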

    How AI is used to score writing exams

    Our AI writing scoring uses a technology called Latent Semantic Analysis (LSA). LSA is a natural language processing technique that can analyze and score writing, based on the meaning behind words – and not just their superficial characteristics.

    Similarly to our speech recognition acoustic models, we first establish a language-specific text recognition model. We feed a large amount of text into the system, and LSA uses artificial intelligence to learn the patterns of how words relate to each other and are used in, for example, the English language. 
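
    As a toy illustration of the general LSA technique (not the language model described in this article), scikit-learn can build a small latent semantic space from a handful of invented sentences and compare them by meaning rather than by exact wording:

        # Latent Semantic Analysis on a tiny, invented corpus.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.metrics.pairwise import cosine_similarity

        corpus = [
            "The students watched a video about window cleaners in Dubai.",
            "The learners watched a film about window cleaners on a tall building.",
            "The recipe needs two eggs and a cup of flour.",
        ]

        # 1. Represent each text as a TF-IDF vector of its words.
        tfidf = TfidfVectorizer(stop_words="english")
        X = tfidf.fit_transform(corpus)

        # 2. Reduce the word space to a small number of latent 'semantic' dimensions.
        lsa = TruncatedSVD(n_components=2, random_state=0)
        X_lsa = lsa.fit_transform(X)

        # 3. Compare the texts by meaning: the two sentences about the video come
        #    out far more similar to each other than to the recipe sentence.
        print(cosine_similarity(X_lsa))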

    Once the language model has been established, we train the engine to score every written item on a test. As in speaking items, we do this by using human expert raters to score the items first, using double marking. They score many hundreds of written responses for each item, and these ‘Standards’ are then used to train the engine. We then validate the trained engine by feeding in many more human-marked items, and check that the machine scores are very highly correlated to the human scores. 

    The benchmark is always the expert human scores. If our AI system doesn’t closely match the scores given by human markers, we remove the item, as it is essential to match the standard set by human markers.

    AI’s ability to mark multiple traits 

    One of the challenges human markers face in scoring speaking and written items is assessing many traits on a single item. For example, when assessing and scoring speaking, they may need to give separate scores for content, fluency and pronunciation. 

    In written responses, markers may need to score a piece of writing for vocabulary, style and grammar. Effectively, they may need to mark every single item at least three times, maybe more. However, once we have trained the AI systems on every trait score in speaking and writing, they can then mark items on any number of traits instantaneously – and without error. 
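
    A very simplified sketch of this idea is below, with toy stand-ins where real trained trait models would sit; the trait names and scoring rules are invented for illustration:

        # Multi-trait scoring: one model per trait, all applied to the same
        # response in a single pass. The lambdas are placeholders, not real models.
        from typing import Callable, Dict

        trait_models: Dict[str, Callable[[str], float]] = {
            "content": lambda text: min(5.0, len(set(text.lower().split())) / 3),
            "vocabulary": lambda text: min(5.0, 1.5 * len([w for w in text.split() if len(w) > 6])),
            "grammar": lambda text: 4.0,  # stand-in for a trained grammar model
        }

        def score_all_traits(response: str) -> Dict[str, float]:
            """Score one written response on every trait at once."""
            return {trait: round(model(response), 1) for trait, model in trait_models.items()}

        print(score_all_traits("The window cleaners work at a breathtaking height above the city."))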

    AI’s lack of bias

    A fundamental premise for any test is that no advantage or disadvantage should be given to any candidate. In other words, there should be no positive or negative bias. This can be very difficult to achieve in human-marked speaking and written assessments. In fact, candidates often feel they may have received a different score if someone else had heard them or read their work.

    Our AI systems eradicate the issue of bias. This is done by ensuring our speaking and writing AI systems are trained on an extensive range of human accents and writing types. 

    We don’t want perfect native-speaking accents or writing styles to train our engines. We use representative non-native samples from across the world. When we initially set up our AI systems for speaking and writing scoring, we trialed our items and trained our engines using millions of student responses. We continue to do this now as new items are developed.

    The benefits of AI automated assessment

    There is nothing wrong with hand-marking homework, tests and exams. In fact, it is essential for teachers to get to know their students and provide personal feedback and advice. However, manually correcting hundreds of tests, daily or weekly, can be repetitive, time-consuming and not always reliable, and it takes time away from working alongside students in the classroom. The use of AI in formative and summative assessments can increase assessed practice time for students and reduce the marking load for teachers.

    Language learning takes time, lots of time to progress to high levels of proficiency. The blended use of AI can:

    • address the increasing importance of formative assessment to drive personalized learning and diagnostic assessment feedback

    • allow students to practice and get instant feedback inside and outside of allocated teaching time

    • address the issue of teacher workload

    • create a virtuous combination between humans and machines, taking advantage of what humans do best and what machines do best. 

    • provide fair, fast and unbiased summative assessment scores in high-stakes testing.

    We hope this article has answered a few burning questions about how AI is used to assess speaking and writing in our language tests. An interesting quote from Fei-Fei Li, Chief Scientist at Google and Stanford Professor, describes AI like this:

    “I often tell my students not to be misled by the name ‘artificial intelligence’ — there is nothing artificial about it; A.I. is made by humans, intended to behave [like] humans and, ultimately, to impact human lives and human society.”

    AI in formative and summative assessments will never replace the role of teachers. AI will support teachers, provide endless opportunities for students to improve, and provide a solution to slow, unreliable and often unfair high-stakes assessments.

    Examples of AI assessments in ELT

    At app, we have developed a range of assessments using AI technology.

    Versant

    The Versant tests are a great tool to help establish language proficiency benchmarks in any school, organization or business. They are specifically designed for placement tests to determine the appropriate level for the learner.

    PTE Academic

    The PTE Academic test is aimed at those who need to prove their level of English for a university place, a job or a visa. It uses AI to score tests and results are available within five days.

    app English International Certificate (PEIC)

    app English International Certificate (PEIC) also uses automated assessment technology, with a two-hour test available on demand to take at home, at school or at a secure test center. Using a combination of advanced speech recognition, exam grading technology and the expertise of professional ELT exam markers worldwide, our patented software can measure English language ability.

  • 11 ways you can avoid English jargon at work

    By

    From “blue-sky thinking” to “lots of moving parts”, there are many phrases used in the office that sometimes seem to make little sense in a work environment. These phrases are known as ‘work jargon’ – or you might hear it referred to as ‘corporate jargon’, ‘business jargon’ or ‘management speak’. It’s a type of language generally used by a profession or group in the workplace, and it has evolved over time. Whether people use this work jargon to sound impressive or to disguise the fact that they are unsure about the subject they are talking about, it’s much simpler and clearer to use plain English. This means that more people will understand what they are saying – both fluent and second-language English speakers.

    The preference for plain English stems from the desire for communication to be clear and concise. This not only helps fluent English speakers to understand things better, but it also means that those learning English pick up a clearer vocabulary. This is particularly important in business, where it’s important that all colleagues feel included as part of the team and can understand what is being said. This, in turn, helps every colleague feel equipped with the information they need to do their jobs better, in the language they choose to use.

    Here, we explore some of the most common examples of English jargon at work that you might hear and suggest alternatives you can use…

    Blue-sky thinking

    This refers to ideas that are not limited by current thinking or beliefs. It’s used to encourage people to be more creative with their thinking. The phrase could be confusing as co-workers may wonder why you’re discussing the sky in a business environment.

    Instead of: “This is a new client, so we want to see some blue-sky thinking.”

    Try saying: “This is a new client, so don’t limit your creativity.”

    Helicopter view

    This phrase is often used to mean a broad overview of the business. It comes from the idea of being a passenger in a helicopter and being able to see a bigger view of a city or landscape than if you were simply viewing it from the ground. Second-language English speakers might take the phrase literally, and be puzzled as to why someone in the office is talking about taking a helicopter ride.

    Instead of: “Here’s a helicopter view of the business.”

    Try saying: “This is a broad view of the business.”

    Get all your ducks in a row

    This is nothing to do with actual ducks; it simply means to be organized. While we don’t exactly know the origin of this phrase, it probably stems from actual ducklings that walk in a neat row behind their parents.

    Instead of: “This is a busy time for the company, so make sure you get all your ducks in a row.”

    Try saying: “This is a busy time for the company, so make sure you’re as organized as possible.”

    Thinking outside the box

    Often used to encourage people to use novel or creative thinking. The phrase is commonly used when solving problems or thinking of a new concept. The idea is that, if you’re inside a box, you can only see those walls and that might block you from coming up with the best solution.

    Instead of: “The client is looking for something extra special, so try thinking outside the box.”

    Try saying: “The client is looking for something extra special, so try thinking of something a bit different to the usual work we do for them.”

    IGUs (Income Generating Units)

    A college principal alerted us to this one – it refers to his students. This is a classic example of jargon when many more words are used than necessary.

    Instead of: “This year, we have 300 new IGUs.”

    Try saying: “This year, we have 300 new students.”

    Run it up the flagpole

    Often followed by “…and see if it flies” or “…and see if anyone salutes it”, this phrase is a way of asking someone to suggest an idea and see what the reaction is.

    Instead of: “I love your idea, run it up the flagpole and see if it flies.”

    Try saying: “I love your idea, see what the others think about it.”

    Swim lane

    A visual element – a bit like a flow chart – that distinguishes a specific responsibility in a business organization. The name for a swim lane diagram comes from the fact that the information is broken up into different sections – or “lanes” – a bit like the lanes in a swimming pool.

    Instead of: “Refer to the swim lanes to find out what your responsibilities are.”

    Try saying: “Refer to the diagram/chart to find out what your responsibilities are.”

    Bleeding edge

    A way to describe something that is innovative or cutting edge. It tends to imply an even greater advancement of technology – something so new that it is almost unbelievable in its current state.

    Instead of: “The new technology we have purchased is bleeding edge.”

    Try saying: “The new technology we have purchased is innovative.”

    Tiger team

    A tiger team is a group of experts brought together for a single project or event. They’re often assembled to assure management that everything is under control, and the term suggests strength.

    Instead of: “The tiger team will solve the problem.” 

    Try saying: “The experts will solve the problem.” 

    Lots of moving parts

    When a project is complicated, this phrase is sometimes used to indicate lots is going on.

    Instead of: “This project will run for several months and there are lots of moving parts to it.”

    Try saying: “This project will run for several months and it will be complicated.”

    A paradigm shift

    Technically, this is a valid way to describe changing how you do something and the model you use. The word “paradigm” (pronounced “para-dime”) is an accepted way or pattern of doing something. So the “shift” part means that a possible new way has been discovered. Second-language English speakers, however, might not be familiar with the word and might be confused about what the phrase actually means.

    Instead of: “To solve this problem, we need a paradigm shift.”

    Try saying: “To solve this problem, we need to think differently.”