Can computers really mark exams? Benefits of ELT automated assessments

Automated assessment, including the use of Artificial Intelligence (AI), is one of the latest education tech solutions. It speeds up exam marking, removes human bias, and is as accurate as, and at least as reliable as, human examiners. As innovations go, this one is a real game-changer for teachers and students.

However, it has understandably been met with many questions and sometimes skepticism in the ELT community – can computers really mark speaking and writing exams accurately? 

The answer is a resounding yes. Students from all parts of the world already take AI-graded tests. Versant tests, for example, provide unbiased, fair and fast automated scoring for speaking and writing exams – irrespective of where test takers live, or what their accent or gender is.

This article will explain the main processes involved in AI automated scoring and make the point that AI technologies are built on the foundations of consistent expert human judgments. So, let’s clear up the confusion around automated scoring and AI and look into how it can help teachers and students alike. 

AI versus traditional automated scoring

First of all, let’s distinguish between traditional automated scoring and AI. When we talk about automated scoring, we generally mean the scoring of multiple-choice or cloze items. You may have to reorder sentences, choose from a drop-down list, insert a missing word – that sort of thing. These question types are designed to test particular skills, and automated scoring ensures that they can be marked quickly and accurately every time.
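
As a minimal illustration of how mechanical this kind of marking is, here is a short Python sketch with an invented two-item answer key, scoring multiple-choice and missing-word responses against a fixed key:

```python
# Traditional automated scoring: each item has exactly one correct answer,
# so marking is a simple, perfectly repeatable comparison.
answer_key = {
    "q1": "b",          # multiple-choice option
    "q2": "have been",  # missing-word (cloze) answer
}

def score_item(item_id: str, response: str) -> int:
    """Return 1 if the response matches the key, 0 otherwise."""
    return int(response.strip().lower() == answer_key[item_id].lower())

candidate = {"q1": "b", "q2": "Have been"}
total = sum(score_item(item, answer) for item, answer in candidate.items())
print(f"Score: {total}/{len(answer_key)}")  # Score: 2/2
```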

While automatically scored items like these can be used to assess receptive skills such as listening and reading comprehension, they cannot mark the productive skills of writing and speaking. Every student's response in writing and speaking items will be different, so how can computers mark them?

This is where AI comes in. 

We hear a lot about how AI is increasingly being used in areas where there is a need to deal with large amounts of unstructured data quickly and accurately – in medical diagnostics, for example. In language testing, AI uses specialized computer software to grade written and oral tests.

How AI is used to score speaking exams

The first step is to build an acoustic model for each language that can recognize speech and convert the audio waveforms into text. While this technology used to be very unusual, most of our smartphones can do it now.
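
To give a sense of what this speech-to-text step looks like, here is a minimal sketch using an open-source speech recognition pipeline; the model name and audio file are assumptions for illustration only, not the acoustic models behind Versant tests:

```python
# Illustrative only: convert a candidate's recorded answer into text using an
# off-the-shelf speech recognition model (not the proprietary test engine).
from transformers import pipeline

# "facebook/wav2vec2-base-960h" is an example English ASR model; any
# comparable model could be substituted.
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")

result = asr("candidate_response.wav")  # hypothetical recording of a spoken answer
print(result["text"])                   # the recognized transcript
```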

These acoustic models are then trained to score every single prompt or item on a test. We do this by using human expert raters to score the items first, using double marking. They score hundreds of oral responses for each item, and these ‘Standards’ are then used to train the engine. 

Next, we validate the trained engine by feeding in many more human-marked items and checking that the machine scores correlate very highly with the human scores. If this doesn’t happen for any item, we remove it, as it must match the standard set by human markers. We expect a correlation of between .95 and .99, which means the machine scores track the expert human scores almost exactly.

This is incredibly high compared to the reliability of human-marked speaking tests. In essence, we use a group of highly expert human raters to train the AI engine, and then their standard is replicated time after time.  
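
To make the validation step concrete, here is a small sketch of the kind of check described above: it correlates machine scores with double-marked human scores for one item and keeps the item only if agreement is high enough. The scores and the 0.95 threshold are illustrative, not actual test data:

```python
# Sketch of item validation: compare engine scores with expert human scores
# and retain the item only when the correlation meets the required standard.
from scipy.stats import pearsonr

human_scores   = [3.0, 4.5, 2.0, 5.0, 3.5, 4.0, 1.5, 4.5]  # double-marked expert scores
machine_scores = [3.2, 4.4, 2.1, 4.9, 3.4, 4.1, 1.7, 4.6]  # trained engine's scores

r, _ = pearsonr(human_scores, machine_scores)
KEEP_THRESHOLD = 0.95  # illustrative cut-off for human-machine agreement

if r >= KEEP_THRESHOLD:
    print(f"Item retained (r = {r:.3f})")
else:
    print(f"Item removed (r = {r:.3f}) - below the human standard")
```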

How AI is used to score writing exams

Our AI writing scoring uses a technology called Latent Semantic Analysis (LSA). LSA is a natural language processing technique that can analyze and score writing based on the meaning behind words – and not just their superficial characteristics.

Similarly to our speech recognition acoustic models, we first establish a language-specific text recognition model. We feed a large amount of text into the system, and LSA uses artificial intelligence to learn the patterns of how words relate to each other and are used in, for example, the English language. 
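
As a toy illustration of the idea behind LSA (not the production scoring engine), the sketch below turns a few invented written responses into TF-IDF vectors and reduces them with truncated SVD, so that responses using related vocabulary land close together in the latent space:

```python
# Toy LSA sketch: responses that express similar meaning tend to end up near
# each other in the reduced "semantic" space, even with different wording.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

responses = [
    "The company should reduce costs by improving its supply chain.",
    "Cutting expenses through a better supply chain would help the firm.",
    "My favourite holiday was a trip to the mountains last summer.",
]

tfidf = TfidfVectorizer().fit_transform(responses)
latent = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Similarity of the first response to the other two: the paraphrase typically
# scores higher than the off-topic response.
print(cosine_similarity(latent[:1], latent[1:]))
```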

Once the language model has been established, we train the engine to score every written item on a test. As in speaking items, we do this by using human expert raters to score the items first, using double marking. They score many hundreds of written responses for each item, and these ‘Standards’ are then used to train the engine. We then validate the trained engine by feeding in many more human-marked items, and check that the machine scores are very highly correlated to the human scores. 

The benchmark is always the expert human scores. If our AI system doesn’t closely match the scores given by human markers, we remove the item, as it is essential to match the standard set by human markers.

AI’s ability to mark multiple traits 

One of the challenges human markers face in scoring speaking and written items is assessing many traits on a single item. For example, when assessing and scoring speaking, they may need to give separate scores for content, fluency and pronunciation. 

In written responses, markers may need to score a piece of writing for vocabulary, style and grammar. Effectively, they may need to mark every single item at least three times, maybe more. However, once we have trained the AI systems on every trait score in speaking and writing, they can then mark items on any number of traits instantaneously – and without error. 
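
Here is a minimal sketch of the multi-trait idea, with invented features, scores and a generic scikit-learn model standing in for the real engine: one set of features extracted from a response is mapped to several trait scores in a single pass.

```python
# Toy multi-trait scorer: one feature vector per spoken response, several
# trait scores predicted at once. Features, data and model are illustrative.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor

TRAITS = ["content", "fluency", "pronunciation"]

# Hypothetical engine features (e.g. topical coverage, speech rate, acoustic
# match) with human-assigned trait scores used for training.
X_train = np.array([[0.8, 0.2, 0.9], [0.4, 0.6, 0.5], [0.9, 0.1, 0.8], [0.3, 0.7, 0.4]])
y_train = np.array([[5, 4, 5], [3, 2, 3], [5, 5, 4], [2, 2, 3]])

model = MultiOutputRegressor(Ridge()).fit(X_train, y_train)

new_response = np.array([[0.7, 0.3, 0.8]])  # features for a new answer
scores = model.predict(new_response)[0]
print(dict(zip(TRAITS, scores.round(1))))   # all three traits scored at once
```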

AI’s lack of bias

A fundamental premise for any test is that no advantage or disadvantage should be given to any candidate. In other words, there should be no positive or negative bias. This can be very difficult to achieve in human-marked speaking and written assessments. In fact, candidates often feel they may have received a different score if someone else had heard them or read their work.

Our AI systems eradicate the issue of bias. This is done by ensuring our speaking and writing AI systems are trained on an extensive range of human accents and writing types. 

We don’t train our engines only on ‘perfect’ native-speaker accents or writing styles; we use representative non-native samples from across the world. When we initially set up our AI systems for speaking and writing scoring, we trialed our items and trained our engines using millions of student responses. We continue to do this now as new items are developed.

The benefits of AI automated assessment

There is nothing wrong with hand-marking homework, tests and exams. In fact, it is essential for teachers to get to know their students and provide personal feedback and advice. However, manually correcting hundreds of tests, daily or weekly, is repetitive, time-consuming and not always reliable, and it takes time away from working alongside students in the classroom. The use of AI in formative and summative assessments can increase assessed practice time for students and reduce the marking load for teachers.

Language learning takes time – a lot of it – to progress to high levels of proficiency. The blended use of AI can:

  • address the increasing importance of formative assessment to drive personalized learning and diagnostic assessment feedback

  • allow students to practice and get instant feedback inside and outside of allocated teaching time

  • address the issue of teacher workload

  • create a virtuous combination between humans and machines, taking advantage of what humans do best and what machines do best. 

  • provide fair, fast and unbiased summative assessment scores in high-stakes testing.

We hope this article has answered a few burning questions about how AI is used to assess speaking and writing in our language tests. An interesting quote from Fei-Fei Li, Chief Scientist at Google and Stanford Professor, describes AI like this:

“I often tell my students not to be misled by the name ‘artificial intelligence’ — there is nothing artificial about it; A.I. is made by humans, intended to behave [like] humans and, ultimately, to impact human lives and human society.”

AI in formative and summative assessments will never replace the role of teachers. AI will support teachers, provide endless opportunities for students to improve, and provide a solution to slow, unreliable and often unfair high-stakes assessments.

Examples of AI assessments in ELT

At app, we have developed a range of assessments using AI technology.

Versant

The Versant tests are a great tool to help establish language proficiency benchmarks in any school, organization or business. They are specifically designed for placement testing, to determine the appropriate level for each learner.

PTE Academic

The PTE Academic test is aimed at those who need to prove their level of English for a university place, a job or a visa. It uses AI to score tests, and results are available within five days.

More blogs from app

  • From experience to innovation: How PTE Academic helped shape app English Express Test

    Reading time: 2 minutes

    When we launched the app English Express Test, it’s true – we were not starting from scratch. We were building on years of experience, research, and innovation from PTE Academic, our globally recognised, high-stakes English test trusted by thousands of institutions.

    app English Express Test may be new, but it is backed by everything we have learned from delivering millions of secure, accurate, and trusted assessments around the world.

    A legacy of trust

    PTE Academic has been setting the standard for English language testing since 2009. It is known for its AI-powered scoring, robust security, and global recognition. Over the years, we have refined every part of the testing journey, from how we assess speaking and writing, to how we protect against fraud and ensure fairness.

    That legacy gave us a strong foundation to build something new. With app English Express Test, we took the best of what we know and reimagined it for a different kind of test taker, one who needs speed, flexibility, and simplicity, without compromising on quality.

    What we carried forward

    app English Express Test uses the same AI scoring technology that powers PTE Academic, trained on over 147,000 responses and verified by human experts. 

    This ensures that every score is consistent, fair, and accurate, no matter where or when the test is taken. We also brought our deep understanding of test security into the design of app English Express Test. From dual-camera proctoring to biometric voice verification, every layer of the test is built to protect integrity and build trust. 

    And just like PTE Academic, app English Express Test is aligned to the Global Scale of English and mapped to CEFR levels, so institutions can set score requirements with confidence. 

    Designed for a new generation of learners

    While PTE Academic is ideal for students applying globally, app English Express Test is purpose-built for learners with their sights set on the United States. It is faster, fairer and simpler, all at a great price, without losing the rigour that institutions expect.

    Students can take the test from home, receive results in minutes, and get certified scores within 48 hours. That kind of accessibility is a game-changer, especially for students in remote regions.

    Why this matters for institutions

    By accepting app English Express Test, institutions are not just offering another test option. They are offering a test backed by years of proven success. They are giving students a faster, more flexible way to apply, while maintaining the standards that support academic success.

    And for institutions already familiar with PTE Academic, app English Express Test is a natural extension, one that complements your admissions toolkit and helps you reach more students, more easily.

    Experience you can trust

    At app, we believe that innovation should be grounded in experience. That is why app English Express Test is not just a new test. It is the next step in our mission to make English testing more accessible, more secure, and more student-friendly.

    Because when we combine what we have learned with what students need today, we create something truly powerful.

  • 5 myths about online language learning

    Reading time: 3 minutes

    Technology has radically changed the way people are able to access information and learn. As a result, there are a great number of tools to facilitate online language learning – an area that’s been the subject of many myths. Here we highlight (and debunk) some of the bigger ones…

    Myth #1: You will learn more quickly

    Although online learning tools are designed to provide ways to teach and support the learner, they won’t provide you with a shortcut to proficiency or bypass any of the key stages of learning. Although you may well be absorbing lots of vocabulary and grammar rules while studying in isolation, this isn’t a replacement for an environment in which you can immerse yourself in the language with English speakers. Such settings help you improve your speaking and listening skills and increase precision, because the key is to find opportunities to practise both – widening your use of the language rather than simply building up your knowledge of it.

    Myth #2: It replaces learning in the classroom

    With big data and AI increasingly providing a more accurate idea of a learner’s level, as well as a quantifiable idea of how much they need to learn to advance to the next level of proficiency, online learning is a powerful supplement to classroom learning – not a replacement for it. And with the Global Scale of English providing an accurate measurement of progress, students can personalise their learning and decide how they’re going to divide their time between classroom learning and private study.

    Myth #3: It can’t be incorporated into classroom learning

    There are a huge number of ways that students and teachers can use the Internet in the classroom. Meanwhile, app’s online courses and apps have a positive, measurable impact on your learning outcomes.

    Myth #4: You can't learn in the workplace

    Online language learning is ideally suited to the workplace, which creates both the need to use the language and opportunities to practise it. A job offers one of the most effective learning environments: communication is key, and you’re frequently exposed to specialized vocabulary. Online language learning tools can flexibly support your busy schedule.

    Myth #5: Online language learning is impersonal and isolating

    A common misconception is that online language learning is a solitary journey, lacking the personal connection and support found in traditional classrooms. In reality, today’s digital platforms are designed to foster community and real interaction. With features like live virtual classrooms, discussion forums and instant feedback, learners can connect with peers and educators around the world, building skills together.

  • Helping students succeed with app English Express Test

    Reading time: 2 minutes

    When a student applies to university, they are not just submitting a form. They are sharing their potential. And when institutions review those applications, they need to be confident that the tools they use, especially English proficiency tests, are giving them a clear, accurate picture of that student’s readiness.

    That is why accuracy matters. It is not just about getting a number. For us, it is about setting students up for success and helping institutions make decisions they can stand behind.

    Getting it right from the start

    The app English Express Test was designed to be fast and flexible, but never at the expense of accuracy. On the contrary, accuracy was a top priority. Students can take it from anywhere, at any time, and receive results in minutes. But behind that speed is a powerful scoring engine that ensures every result is fair, consistent, and reliable.

    app English Express Test uses AI trained on over 147,000 responses, with a 0.98 correlation to human scoring. That means institutions can trust that the scores reflect real ability. And every test is reviewed by a human expert before certification, adding another layer of quality control. Add to this that admissions teams can review written and spoken samples – it really is a multi-faceted approach to the reliability of scores.

    Why accuracy supports academic success

    When students are placed in the right programs based on accurate scores, they are more likely to thrive. They can keep up with lectures, contribute to discussions, and complete assignments with confidence. That leads to better outcomes, higher retention, and a more positive student experience.

    On the other hand, if a student is placed in a program on the basis of scores that do not represent their real skills, it can lead to frustration, poor performance, or even dropout. That is why accurate scoring is not just a technical detail – it is a foundation for student success.

    What this means for institutions

    For admissions teams, accurate scoring means fewer surprises. It means being able to confidently admit students who are ready to succeed. And it means fewer resources spent on support for students who were not quite prepared.

    It also supports your institution’s reputation. When students succeed, they become advocates. They share their stories, recommend your programs, and contribute to a thriving academic community.

    A commitment to quality – across the board

    While app English Express Test is ideal for students who need a fast, flexible option, app also offers PTE Academic; both tests support U.S. study applications.

    Together, app English Express Test and PTE Academic offer a complete solution, one that supports students at every stage of their journey and helps institutions make decisions with confidence.