Can computers really mark exams? Benefits of ELT automated assessments

app Languages

Automated assessment, including the use of Artificial Intelligence (AI), is one of the latest education tech solutions. It speeds up exam marking, removes human biases, and is at least as accurate and reliable as human examiners. As innovations go, this one is a real game-changer for teachers and students.

However, it has understandably been met with many questions and sometimes skepticism in the ELT community – can computers really mark speaking and writing exams accurately? 

The answer is a resounding yes. Students from all parts of the world already take AI-graded tests. Versant tests, for example, provide unbiased, fair and fast automated scoring for speaking and writing exams – irrespective of where the test takers live, or what their accent or gender is.

This article will explain the main processes involved in AI automated scoring and make the point that AI technologies are built on the foundations of consistent expert human judgments. So, let’s clear up the confusion around automated scoring and AI and look into how it can help teachers and students alike. 

AI versus traditional automated scoring

First of all, let’s distinguish between traditional automated scoring and AI. When we talk about automated scoring, we generally mean scoring multiple-choice or cloze items. You may have to reorder sentences, choose from a drop-down list or insert a missing word – that sort of thing. These question types are designed to test particular skills, and automated scoring ensures that they can be marked quickly and accurately every time.

While automatically scored items like these can be used to assess receptive skills such as listening and reading comprehension, they cannot mark the productive skills of writing and speaking. Every student's response in writing and speaking items will be different, so how can computers mark them?

This is where AI comes in. 

We hear a lot about how AI is increasingly being used in areas where there is a need to deal with large amounts of unstructured data quickly and consistently – in medical diagnostics, for example. In language testing, AI uses specialized computer software to grade written and oral tests.

How AI is used to score speaking exams

The first step is to build an acoustic model for each language that can recognize speech and convert the sound waves into text. While this technology used to be rare, most of our smartphones can do it now.
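As a rough illustration of this first step, the snippet below transcribes a recording with the open-source Whisper library. Whisper is a stand-in here – a minimal sketch, not the engine used in our tests – and the audio filename is hypothetical.

```python
# pip install openai-whisper
import whisper

# Load a small general-purpose speech model; real scoring engines use
# purpose-built acoustic models trained per language.
model = whisper.load_model("base")

# Convert a candidate's recorded response into text for downstream scoring.
result = model.transcribe("response.wav")  # hypothetical audio file
print(result["text"])
```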

These acoustic models are then trained to score every single prompt or item on a test. We do this by having expert human raters score the items first, using double marking. They score hundreds of oral responses for each item, and these ‘Standards’ are then used to train the engine.

Next, we validate the trained engine by feeding in many more human-marked items and checking that the machine scores correlate very highly with the human scores. If this doesn’t happen for any item, we remove it, as every item must match the standard set by human markers. We expect a correlation of between .95 and .99 – in other words, the machine’s scores are almost indistinguishable from the scores expert human markers would have given.

This is incredibly high compared to the reliability of human-marked speaking tests. In essence, we use a group of expert human raters to train the AI engine, and their standard is then replicated time after time.
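The validation step can be pictured in a few lines of Python. The scores and the .95 cut-off below are illustrative stand-ins for the hundreds of double-marked responses used in practice:

```python
from scipy.stats import pearsonr

# Illustrative scores only; real validation uses hundreds of responses per item.
human_scores   = [3.5, 4.0, 2.0, 5.0, 3.0, 4.5, 1.5, 4.0]  # double-marked expert averages
machine_scores = [3.4, 4.1, 2.2, 4.9, 3.1, 4.4, 1.6, 3.9]  # engine scores, same responses

r, _ = pearsonr(human_scores, machine_scores)

# Keep the item only if the engine matches the human standard.
THRESHOLD = 0.95  # illustrative cut-off
print(f"r = {r:.3f} -> {'item validated' if r >= THRESHOLD else 'item removed'}")
```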

How AI is used to score writing exams

Our AI writing scoring uses a technology called Latent Semantic Analysis (LSA). LSA is a natural language processing technique that can analyze and score writing based on the meaning behind words – and not just their superficial characteristics.

Similarly to our speech recognition acoustic models, we first establish a language-specific text recognition model. We feed a large amount of text into the system, and LSA uses artificial intelligence to learn the patterns of how words relate to each other and are used in, for example, the English language. 
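For readers who want to see the idea concretely, the classic LSA pipeline – TF-IDF vectors reduced by truncated SVD – can be sketched with scikit-learn. The corpus and dimensions below are toy values, not our production setup:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.pipeline import make_pipeline

# A tiny invented corpus; a real system learns from a very large body of text.
corpus = [
    "The doctor examined the patient at the clinic.",
    "A physician saw the patient in the hospital.",
    "The weather was sunny at the beach all day.",
]

# TF-IDF followed by truncated SVD is the classic LSA pipeline.
lsa = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2))
vectors = lsa.fit_transform(corpus)

# With enough training text, sentences 0 and 1 end up close in the latent
# space despite sharing few words, while sentence 2 sits far away.
print(cosine_similarity(vectors[0:1], vectors[1:2]))
print(cosine_similarity(vectors[0:1], vectors[2:3]))
```

Because responses are compared in this latent space, a student who expresses the right meaning in unexpected words is not penalized for avoiding the exact vocabulary of the reference answers.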

Once the language model has been established, we train the engine to score every written item on a test. As with speaking items, we first have expert human raters score the items, using double marking. They score many hundreds of written responses for each item, and these ‘Standards’ are then used to train the engine. We then validate the trained engine by feeding in many more human-marked items and checking that the machine scores correlate very highly with the human scores.

The benchmark is always the expert human scores. If our AI system doesn’t closely match the scores given by human markers, we remove the item, as it is essential to match the standard set by human markers.

AI’s ability to mark multiple traits 

One of the challenges human markers face in scoring speaking and written items is assessing many traits on a single item. For example, when assessing and scoring speaking, they may need to give separate scores for content, fluency and pronunciation. 

In written responses, markers may need to score a piece of writing for vocabulary, style and grammar. Effectively, they may need to mark every single item at least three times, maybe more. However, once we have trained the AI systems on every trait score in speaking and writing, they can mark items on any number of traits instantaneously – and with complete consistency.
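Conceptually, multi-trait scoring looks like the sketch below: one trained model per trait, all applied to the same response in a single pass. The trait names and the `load_trait_model` helper are hypothetical placeholders, not our actual API:

```python
# One trained scoring model per trait, applied to the same response.
TRAITS = ["content", "fluency", "pronunciation"]

def score_response(features, models):
    """Score one spoken response on every trait in a single pass."""
    return {trait: models[trait].predict(features) for trait in TRAITS}

# Usage (hypothetical):
# models = {trait: load_trait_model(trait) for trait in TRAITS}
# print(score_response(candidate_features, models))
```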

AI’s lack of bias

A fundamental premise for any test is that no advantage or disadvantage should be given to any candidate. In other words, there should be no positive or negative bias. This can be very difficult to achieve in human-marked speaking and written assessments. In fact, candidates often feel they may have received a different score if someone else had heard them or read their work.

Our AI systems eradicate the issue of bias. This is done by ensuring our speaking and writing AI systems are trained on an extensive range of human accents and writing types. 

We don’t train our engines only on ‘perfect’ native-speaker accents or writing styles. We use representative non-native samples from across the world. When we initially set up our AI systems for speaking and writing scoring, we trialed our items and trained our engines using millions of student responses, and we continue to do so as new items are developed.
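One simple way to audit this is to compare machine-human score gaps across first-language groups. The data and column names below are invented for illustration:

```python
import pandas as pd

# Invented data: machine and human scores for candidates grouped by first language.
df = pd.DataFrame({
    "l1_group":      ["Spanish", "Spanish", "Mandarin", "Mandarin", "Arabic", "Arabic"],
    "human_score":   [4.0, 3.5, 4.5, 3.0, 4.0, 2.5],
    "machine_score": [4.1, 3.4, 4.4, 3.1, 3.9, 2.6],
})

# A well-trained engine shows a machine-human gap near zero for every group,
# not just for the groups best represented in the training data.
df["gap"] = df["machine_score"] - df["human_score"]
print(df.groupby("l1_group")["gap"].mean())
```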

The benefits of AI automated assessment

There is nothing wrong with hand-marking homework, tests and exams. In fact, it is essential for teachers to get to know their students and provide personal feedback and advice. However, manually correcting hundreds of tests, daily or weekly, is repetitive, time-consuming and not always reliable, and it takes time away from working alongside students in the classroom. The use of AI in formative and summative assessments can increase assessed practice time for students and reduce the marking load for teachers.

Language learning takes time – lots of it – to progress to high levels of proficiency. The blended use of AI can:

  • address the increasing importance of formative assessment to drive personalized learning and diagnostic assessment feedback

  • allow students to practice and get instant feedback inside and outside of allocated teaching time

  • address the issue of teacher workload

  • create a virtuous combination of humans and machines, taking advantage of what humans do best and what machines do best

  • provide fair, fast and unbiased summative assessment scores in high-stakes testing.

We hope this article has answered a few burning questions about how AI is used to assess speaking and writing in our language tests. Fei-Fei Li, Chief Scientist of AI at Google Cloud and Stanford Professor, describes AI like this:

“I often tell my students not to be misled by the name ‘artificial intelligence’ — there is nothing artificial about it; A.I. is made by humans, intended to behave [like] humans and, ultimately, to impact human lives and human society.”

AI in formative and summative assessments will never replace the role of teachers. AI will support teachers, provide endless opportunities for students to improve, and provide a solution to slow, unreliable and often unfair high-stakes assessments.

Examples of AI assessments in ELT

At app, we have developed a range of assessments using AI technology.

Versant

The Versant tests are a great tool for establishing language proficiency benchmarks in any school, organization or business. They are specifically designed to act as placement tests, determining the appropriate level for each learner.

PTE Academic

The PTE Academic test is aimed at those who need to prove their level of English for a university place, a job or a visa. It uses AI to score tests, and results are available within five days.

More blogs from app

  • Improve your strategic workforce planning with English language testing

    By Samantha Ball
    Reading time: 3 minutes

    Companies constantly seek methods to optimize workforce productivity and effectiveness. A powerful approach to achieving this goal is through strategic workforce planning bolstered by English language testing. This tactic not only identifies and addresses skills gaps but also reduces attrition and strengthens your workforce for both short-term and long-term success.

  • Teachers’ FAQs about the new CASAS STEPS

    By app Languages
    Reading time: 4 minutes

    Back in July 2024, the CASAS test was updated to become the CASAS STEPS (Student Test of English Progress and Success). In our previous blog posts, we discussed some of the reasons for the change and covered acronyms every Adult ESOL teacher should learn, and this week we are answering frequently asked questions regarding the new assessment.

    1. What is the timeline for the CASAS STEPS implementation?

    The new test has been available since 2023 and was fully implemented nationwide on July 1, 2024. The CASAS STEPS is approved by OCTAE for NRS reporting through July 2030.

    2. How is the STEPS series different from the previous series?

    The CASAS STEPS assessments contain more rigorous questions and provide shorter testing times. Both Reading and Listening STEPS now have five levels, measuring academic vocabulary and higher-order thinking skills contained in the ELP Standards. Note that the test form numbers have also changed, ranging from 621R-630R and 621L-630L.

    3. What is the STEPS scale score range in relation to NRS levels?

    The new test levels are A-E, with two alternate forms for each level. The STEPS levels correspond to NRS levels 1-6. Each STEPS level overlaps with two – and only two – NRS levels, so there is no chance of a level 1 (beginning ESL literacy) student accidentally testing into a level 5 (high intermediate) class.

    4. How many questions are there and how long is each test?

    Reading: Locator (15 minutes); Level A (33 items, 30 minutes); Level B (36 items, 50 minutes); Levels C-E (36 items, 75 minutes).
    Listening: Locator (15 minutes); Level A (33 items, 28 minutes); Level B (36 items, 45 minutes); Level C (39 items, 52 minutes); Level D (39 items, 56 minutes); Level E (39 items, 38 minutes).

    5. Can we pretest with the Life and Work series while transitioning to the STEPS series?

    No, agencies cannot pretest students on the Life and Work series and post-test on the STEPS series. It is essential that pre- and post-testing always occurs within the same series to ensure test reliability and validity.

    6. What are the STEPS competency areas?

    Basic communication, consumer economics, community resources, health, employment, and government and law (new for test levels D and E).

    7. What task areas does each test contain?

    The Reading STEPS contains four task areas: 1. Forms; 2. Charts, tables, and graphs; 3. Texts, emails, articles, and narratives; 4. Signs, ads, and labels. The following ELPS skills are assessed: vocabulary, details, main idea, inference, point of view, and supporting evidence.

    The Listening STEPS contains five task areas: vocabulary, details, dialogue, main idea, and summary. The following ELPS skills are assessed: retell key details, continue conversation, identify the main topic, and summarize.

    8. How can I prepare my students for the new test?

    app offers a wide selection of educational material to prepare you and your students for the new CASAS STEPS. Our FUTURE series is completely aligned with the new test format, with lesson prep tips, notes and examples for teachers, templates, study guides, test overviews, printed and digital test practices, answer sheets, and the integrated online platform MyEnglishLab.

    What other questions do you have?

    Click here to download a printable version of the charts and tables, or to browse our textbook selection, including our FUTURE series. If your program is not yet using the series, or if you’d like tutorials and tips as a current user, click here. Follow along and share this post with your fellow teachers and administrators.

  • How to assess your learners using the GSE Assessment Frameworks

    By Billie Jago
    Reading time: 4 minutes

    With language learning, assessing both the quality and the quantity of language use is crucial for accurate proficiency evaluation. While evaluating quantity (for example the number of words written or the duration of spoken production) can provide insights into a learner's fluency and engagement in a task, it doesn’t show a full picture of a learner’s language competence. For this, they would also need to be evaluated on the quality of what they produce (such as the appropriateness, accuracy and complexity of language use). The quality also considers factors such as grammatical accuracy, lexical choice, coherence and the ability to convey meaning effectively.

    In order to measure the quality of different language skills, you can use the Global Scale of English (GSE) assessment frameworks.

    Developed in collaboration with assessment experts, the GSE Assessment Frameworks are intended to be used alongside the GSE Learning Objectives to help you assess the proficiency of your learners.

    There are two GSE Assessment Frameworks: one for adults and one for young learners.

    What are the GSE Assessment Frameworks?

    • The GSE Assessment Frameworks are intended to be used alongside the GSE Learning Objectives to help teachers assess their learners’ proficiency across all four skills (speaking, listening, reading and writing).
    • The GSE Learning Objectives focus on the things a learner can do, while the GSE Assessment Frameworks focus on how well a learner can do these things.
    • They provide examples of the proficiencies your learners should be demonstrating.
    • They help teachers pinpoint students’ specific areas of strength and weakness more accurately, facilitating targeted instruction and personalized learning plans.
    • They can also help to motivate your learners, as their progress is evidenced and they can see a clear path for improvement.

    An example of the GSE Assessment Frameworks

    This example is from the Adult Assessment Framework for speaking.

    As you can see, there are sub-skills within speaking (and for the other three main overarching skills – writing, listening and reading). Within speaking, these are production and fluency, spoken interaction, language range and accuracy.

    The GSE range (and corresponding CEFR level) is shown at the top of each column, and there are descriptors that students should ideally demonstrate at that level.

    However, it is important to note that students may sit across different ranges, depending on the sub-skill. For example, your student may show evidence of GSE 43-50 production and fluency and spoken interaction, but they may need to improve their language range and accuracy, and therefore sit in a range of GSE 36-42 for these sub-skills.
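    A simple way to work with these ranges programmatically is sketched below. The band boundaries and the learner’s scores are illustrative only – consult the published GSE/CEFR mapping for the authoritative ranges:

    ```python
    # Illustrative GSE bands; check the published GSE/CEFR alignment for
    # the authoritative boundaries.
    GSE_RANGES = [(36, 42, "A2+"), (43, 50, "B1"), (51, 58, "B1+")]

    def gse_band(score):
        for low, high, cefr in GSE_RANGES:
            if low <= score <= high:
                return f"GSE {low}-{high} ({cefr})"
        return "outside the ranges illustrated here"

    # Hypothetical learner: stronger production, weaker range and accuracy.
    learner = {
        "production and fluency": 47,
        "spoken interaction": 45,
        "language range": 38,
        "accuracy": 40,
    }

    for sub_skill, score in learner.items():
        print(f"{sub_skill}: {gse_band(score)}")
    ```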