Can computers really mark exams? Benefits of ELT automated assessments

Automated assessment, including the use of Artificial Intelligence (AI), is one of the latest education tech solutions. It speeds up exam marking, removes human bias, and is at least as accurate and reliable as human examiners. As innovations go, this one is a real game-changer for teachers and students. 

However, it has understandably been met with many questions and sometimes skepticism in the ELT community – can computers really mark speaking and writing exams accurately? 

The answer is a resounding yes. Students from all parts of the world already take AI-graded tests. Versant tests, for example, provide unbiased, fair and fast automated scoring for speaking and writing exams – irrespective of where test takers live, or what their accent or gender is. 

This article will explain the main processes involved in AI automated scoring and make the point that AI technologies are built on the foundations of consistent expert human judgments. So, let’s clear up the confusion around automated scoring and AI and look into how it can help teachers and students alike. 

AI versus traditional automated scoring

First of all, let’s distinguish between traditional automated scoring and AI. When we talk about automated scoring, we generally mean the scoring of multiple-choice or cloze items. You may have to reorder sentences, choose from a drop-down list, or insert a missing word – that sort of thing. These question types are designed to test particular skills, and automated scoring ensures that they can be marked quickly and accurately every time.
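
Traditional automated scoring of objective items boils down to matching responses against a fixed answer key. Here is a minimal illustrative sketch (the questions and answers are invented, not from any real test):

```python
# A minimal sketch of traditional automated scoring: objective items
# (multiple-choice, cloze) are matched against a fixed answer key.
# Question IDs and answers are hypothetical examples.
ANSWER_KEY = {"q1": "b", "q2": "have been", "q3": "c"}

def score_objective_items(responses):
    """Return the number of correct responses, marked identically every time."""
    return sum(1 for item, answer in ANSWER_KEY.items()
               if responses.get(item, "").strip().lower() == answer)

total = score_objective_items({"q1": "B", "q2": "have been", "q3": "a"})
# total == 2: q1 and q2 match the key, q3 does not
```

Because the key never changes, every candidate's paper is marked to exactly the same standard, instantly.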

While automatically scored items like these can be used to assess receptive skills such as listening and reading comprehension, they cannot mark the productive skills of writing and speaking. Every student's response in writing and speaking items will be different, so how can computers mark them?

This is where AI comes in. 

We hear a lot about how AI is increasingly being used in areas where there is a need to deal with large amounts of unstructured data quickly and accurately – in medical diagnostics, for example. In language testing, AI uses specialized computer software to grade written and oral tests. 

How AI is used to score speaking exams

The first step is to build an acoustic model for each language that can recognize speech, converting the audio waveform into text. While this technology was once rare, most smartphones can do it now. 

These acoustic models are then trained to score every single prompt or item on a test. We do this by using human expert raters to score the items first, using double marking. They score hundreds of oral responses for each item, and these ‘Standards’ are then used to train the engine. 

Next, we validate the trained engine by feeding in many more human-marked items and checking that the machine scores correlate very highly with the human scores. If any item falls short, we remove it, as it must match the standard set by human markers. We expect a correlation of between .95 and .99 – meaning the machine's scores track expert human scores almost perfectly across the validation samples. 

This is incredibly high compared to the reliability of human-marked speaking tests. In essence, we use a group of highly expert human raters to train the AI engine, and then their standard is replicated time after time.  

How AI is used to score writing exams

Our AI writing scoring uses a technology called Latent Semantic Analysis (LSA). LSA is a natural language processing technique that can analyze and score writing based on the meaning behind words – and not just their superficial characteristics. 

Similarly to our speech recognition acoustic models, we first establish a language-specific text recognition model. We feed a large amount of text into the system, and LSA uses artificial intelligence to learn the patterns of how words relate to each other and are used in, for example, the English language. 

Once the language model has been established, we train the engine to score every written item on a test. As in speaking items, we do this by using human expert raters to score the items first, using double marking. They score many hundreds of written responses for each item, and these ‘Standards’ are then used to train the engine. We then validate the trained engine by feeding in many more human-marked items, and check that the machine scores are very highly correlated to the human scores. 

The benchmark is always the expert human scores. If our AI system doesn’t closely match the scores given by human markers, we remove the item, as it is essential to match the standard set by human markers.
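
To make the idea concrete, here is a toy illustration of LSA's core mechanism – a truncated singular value decomposition over a term-document matrix – using NumPy. The vocabulary and counts are invented for illustration; a production system learns its model from very large corpora.

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = reference responses.
# (Illustrative counts only; real LSA models are built from huge corpora.)
X = np.array([
    [2, 1, 0, 0],   # "exam"
    [1, 2, 0, 0],   # "score"
    [0, 0, 2, 1],   # "travel"
    [0, 0, 1, 2],   # "holiday"
], dtype=float)

# LSA: truncated SVD projects documents into a low-dimensional "meaning"
# space, where texts about related topics end up close together.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T  # one k-dim vector per document

def cosine(a, b):
    """Cosine similarity between two document vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Documents on the same topic score as far more similar than unrelated ones.
same_topic = cosine(doc_vectors[0], doc_vectors[1])
diff_topic = cosine(doc_vectors[0], doc_vectors[2])
```

In a scoring context, a candidate's response can be projected into the same space and compared against responses that human experts have already graded – which is why the meaning of the words, not their surface form, drives the score.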

AI’s ability to mark multiple traits 

One of the challenges human markers face in scoring speaking and written items is assessing many traits on a single item. For example, when assessing and scoring speaking, they may need to give separate scores for content, fluency and pronunciation. 

In written responses, markers may need to score a piece of writing for vocabulary, style and grammar. Effectively, they may need to mark every single item at least three times, maybe more. However, once we have trained the AI systems on every trait score in speaking and writing, they can then mark items on any number of traits instantaneously – and with complete consistency. 
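
As a hypothetical sketch (the traits, features and weights below are invented placeholders, not real model parameters), a trained engine can return every trait score from a single pass over a response's features:

```python
# Hypothetical multi-trait scorer: once an engine has been trained on
# human trait scores, one pass returns all traits at once.
# Weights and features are made-up placeholders for illustration.
TRAIT_WEIGHTS = {
    "content":    {"word_count": 0.02, "keyword_hits": 0.5},
    "vocabulary": {"unique_words": 0.04, "rare_words": 0.3},
    "grammar":    {"error_rate": -2.0, "sentence_len": 0.05},
}
BASE_SCORE = 3.0

def score_all_traits(features):
    """Score every trait from one set of response features in a single pass."""
    return {
        trait: round(BASE_SCORE + sum(w * features.get(f, 0.0)
                                      for f, w in weights.items()), 2)
        for trait, weights in TRAIT_WEIGHTS.items()
    }

scores = score_all_traits({"word_count": 120, "keyword_hits": 4,
                           "unique_words": 60, "rare_words": 3,
                           "error_rate": 0.1, "sentence_len": 14})
```

Where a human marker would re-read the response once per trait, the engine applies all trait models to the same feature set simultaneously, so adding traits adds no marking time.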

AI’s lack of bias

A fundamental premise for any test is that no advantage or disadvantage should be given to any candidate. In other words, there should be no positive or negative bias. This can be very difficult to achieve in human-marked speaking and written assessments. In fact, candidates often feel they may have received a different score if someone else had heard them or read their work.

Our AI systems are designed to eliminate this kind of bias. We do this by ensuring our speaking and writing AI systems are trained on an extensive range of human accents and writing types. 

We don’t train our engines only on perfect native-speaker accents or writing styles. We use representative non-native samples from across the world. When we initially set up our AI systems for speaking and writing scoring, we trialed our items and trained our engines using millions of student responses. We continue to do this now as new items are developed.

The benefits of AI automated assessment

There is nothing wrong with hand-marking homework, tests and exams. In fact, it is essential for teachers to get to know their students and provide personal feedback and advice. However, manually correcting hundreds of tests, daily or weekly, can be repetitive, time-consuming and not always reliable, and it takes time away from working alongside students in the classroom. The use of AI in formative and summative assessments can increase assessed practice time for students and reduce the marking load for teachers.

Language learning takes time – lots of time – to progress to high levels of proficiency. The blended use of AI can:

  • address the increasing importance of formative assessment to drive personalized learning and diagnostic assessment feedback 

  • allow students to practice and get instant feedback inside and outside of allocated teaching time

  • address the issue of teacher workload

  • create a virtuous combination between humans and machines, taking advantage of what humans do best and what machines do best. 

  • provide fair, fast and unbiased summative assessment scores in high-stakes testing.

We hope this article has answered a few burning questions about how AI is used to assess speaking and writing in our language tests. Fei-Fei Li, Chief Scientist at Google and Stanford professor, describes AI like this:

“I often tell my students not to be misled by the name ‘artificial intelligence’ — there is nothing artificial about it; A.I. is made by humans, intended to behave [like] humans and, ultimately, to impact human lives and human society.”

AI in formative and summative assessments will never replace the role of teachers. AI will support teachers, provide endless opportunities for students to improve, and provide a solution to slow, unreliable and often unfair high-stakes assessments.

Examples of AI assessments in ELT

At app, we have developed a range of assessments using AI technology.

Versant

The Versant tests are a great tool to help establish language proficiency benchmarks in any school, organization or business. They are specifically designed for placement tests to determine the appropriate level for the learner.

PTE Academic

The PTE Academic test is aimed at those who need to prove their level of English for a university place, a job or a visa. It uses AI to score tests, and results are available within five days. 

app English International Certificate (PEIC)

app English International Certificate (PEIC) also uses automated assessment technology. The two-hour test is available on demand, to take at home, at school or at a secure test center. Using a combination of advanced speech recognition, exam grading technology and the expertise of professional ELT exam markers worldwide, our patented software can measure English language ability.

Read more about the use of AI in our learning and testing here, or if you're wondering which English test is right for your students make sure to check out our post 'Which exam is right for my students?'.

More blogs from app

  • Enhancing workplace communication: The new role of language assessments in business success

    By Andrew Khan
    Reading time: 4 minutes

    The integration of AI tools into workplaces around the world is starting to change the way people communicate professionally. The use of AI to help draft documents and emails is driven not only by convenience and efficiency but also by a desire to be clear and precise in language.

    While potentially useful, tools to translate, generate, or ‘correct’ written text won’t help with the effectiveness of the verbal communication that powers business relationships.

  • Talking technology: Teaching 21st century communication strategies

    By
    Reading time: 4 minutes

    When my son created a web consulting business as a summer job, I offered to have business cards made for him. “Oh Dad,” he said, “Business cards are so 20th century!”

    It was an embarrassing reminder that communication norms are constantly changing, as are the technologies we use. Younger generations share contact information on their phones’ social media apps, not with business cards. A similar shift has been the move away from business cards featuring fax numbers. “What’s a fax?” my son might ask.

    Fax machines have had a surprisingly long life–the first fax machine was invented in 1843–but they have been largely retired because it’s easier to send images of documents via email attachments.

    More recent technologies, such as the 1992 invention of text messages, seem here to stay, but continue to evolve with innovations like emojis, a 1998 innovation whose name combines the Japanese words e (picture) and moji (character).

    The 55/38/7 rule and the three Cs

    Changing technologies challenge language teachers who struggle to prepare students with the formats and the strategies they need to be effective in academic, business, and social settings. These challenges start with questions about why we have particular norms around communication. These norms form a culture of communication.

    The artist/musician Brian Eno defines culture as what we don’t have to do. We may have to walk, but we don’t have to dance. Dancing, therefore, is culture. Communication is full of cultural practices that we don’t strictly need to do, but which make communication more successful. These include practices based on the 55/38/7 Rule and The Three Cs.

    The 55/38/7 rule is often misinterpreted as being about what someone hears when we speak. It actually refers to the insights of University of California professor Albert Mehrabian, who looked at how our attitudes, feelings, and beliefs influence our trust in what someone says.

    Mehrabian suggests words only account for seven percent of a message’s impact; tone of voice makes up 38 percent, and body language–including facial expressions–accounts for the other 55 percent. The consequence of this for our students is that it’s sometimes not so important what they are saying as how they are saying it.

    Another way of looking at this nonverbal communication is in terms of The Three Cs: context, clusters, and congruence.

    Context is about the environment in which communication takes place, any existing relationship between the speakers, and the roles they have. Imagine how each of these factors change if, for example, you met a surgeon at a party compared to meeting the same surgeon in an operating theater where you are about to have your head sawn open.

    Clusters are the sets of body language expressions that together make up a message; smiling while walking toward someone is far different than smiling while carefully backing away.

    Congruence refers to how body language matches–or doesn’t match–a speaker’s words. People saying, “Of course! It’s possible!” while unconsciously shaking their heads from side to side are perhaps being less than truthful.

    How does a culture of communication practices translate to new technologies? Mobile phone texts, just like 19th-century telegraph messages before them, need to be precise in conveying their meaning.

    In virtual meetings (on Teams and Google Hangouts, for example), students need to understand that tone of voice, facial expressions, and body language may be more important than the words they share.

    Politeness as one constant

    An additional key concern in virtual meetings is politeness. Once, in preparation for a new textbook, I was involved in soliciting topics of interest to university teachers. I was surprised that several teachers identified the need to teach politeness. The teachers pointed out that the brevity of social media meant that students were often unwittingly rude in their requests (typical email: “Where’s my grade!”). Moreover, such abruptness was crossing over to their in-person interactions.

    Politeness includes civility, getting along with others, as well as deference, showing respect to those who may have earned it through age, education, and achievement. But politeness is also related to strategies around persuasion and how to listen actively, engage with other speakers by clarifying and elaborating points and ask a range of question types. Online or in person, if students cannot interrupt politely or know when it is better to listen, whatever they have to say will be lost in the court of bad opinion.

    This is particularly important in preparation for academic and business contexts where students need to interact in groups, such as seminar settings and business meetings. Within these, it’s necessary for students to be able to take on a variety of roles, including leadership, taking notes, and playing devil’s advocate to challenge what a group thinks.

    Engaging students with project work

    Role-play can help raise awareness of these strategies among students, but it’s not enough to just take on a variety of roles found in common academic and business exchanges; students need to be able to reflect after each role-play session and infer what strategies are successful.

    Technology-based projects can also help students engage in a range of communication strategies. For example, a app series, StartUp, embraces technology in each unit by sprinkling various text messages and web-based research tasks. There are also multimedia projects where students use their phones to collect images or video and share the results in presentations that develop their critical thinking.

    For example:

    Make your own video

    Step 1 Choose a favorite restaurant or meal.

    Step 2 Make a 30-second video. Talk about the meal. Describe what you eat and drink. Explain why you like it.

    Step 3 Share your video. Answer questions and get feedback.

    This simple project subconsciously reinforces the unit’s vocabulary and grammar. It also allows students to personalize the project based on things that they need to talk about in daily life–their local foods in this case. This means that each student’s presentation is unique. Unlike with essay assignments, students tend to work hard to craft several versions until they are satisfied because they know their work will be seen by other students and that they will be asked questions that only they can answer.

    All this forces students to consider speaking strategies, as well as strategies for appropriate facial expressions and body language. Similarly, they have to use active listening strategies when listening to others’ presentations while asking questions. As technology continues to evolve, teachers need to integrate new applications into their teaching so students learn how to communicate with the tools they have at their disposal.

  • Words that can't be translated into English

    By Charlotte Guest
    Reading time: 4 minutes

    While English is a rich language, there are some words from other languages that don’t have a direct translation. These words often describe special feelings, situations, or ideas that are deeply connected to their cultures. For example, just as some languages have specific words for different types of weather, other languages have unique words for particular moments or emotions that are hard to explain in English. Here are some interesting examples of untranslatable words that show us the different ways people see the world.