Can computers really mark exams? Benefits of ELT automated assessments


Automated assessment, including the use of Artificial Intelligence (AI), is one of the latest education technology solutions. It speeds up exam marking, removes human bias, and is at least as accurate and reliable as human examiners. As innovations go, this one is a real game-changer for teachers and students. 

However, it has understandably been met with many questions and sometimes skepticism in the ELT community – can computers really mark speaking and writing exams accurately? 

The answer is a resounding yes. Students from all parts of the world already take AI-graded tests. Versant tests, for example, provide unbiased, fair and fast automated scoring for speaking and writing exams – irrespective of where test takers live, or what their accent or gender is. 

This article will explain the main processes involved in AI automated scoring and make the point that AI technologies are built on the foundations of consistent expert human judgments. So, let’s clear up the confusion around automated scoring and AI and look into how it can help teachers and students alike. 

AI versus traditional automated scoring

First of all, let’s distinguish between traditional automated scoring and AI. When we talk about automated scoring, we generally mean scoring items that are either multiple-choice or cloze items. You may have to reorder sentences, choose from a drop-down list, or insert a missing word – that sort of thing. These question types are designed to test particular skills, and automated scoring ensures that they can be marked quickly and accurately every time.

While automatically scored items like these can be used to assess receptive skills such as listening and reading comprehension, they cannot mark the productive skills of writing and speaking. Every student's response in writing and speaking items will be different, so how can computers mark them?

This is where AI comes in. 

We hear a lot about how AI is increasingly being used in areas where large amounts of unstructured data need to be handled quickly and accurately – in medical diagnostics, for example. In language testing, AI uses specialized computer software to grade written and oral tests. 

How AI is used to score speaking exams

The first step is to build an acoustic model for each language – one that can recognize speech, converting audio waveforms into text. While this technology used to be rare, most of our smartphones can do it now. 

These acoustic models are then trained to score every single prompt or item on a test. We do this by using human expert raters to score the items first, using double marking. They score hundreds of oral responses for each item, and these ‘Standards’ are then used to train the engine. 

Next, we validate the trained engine by feeding in many more human-marked items and checking that the machine scores correlate very highly with the human scores. If this doesn’t happen for any item, we remove it, as it must match the standard set by human markers. We expect a correlation of between .95 and .99, meaning the machine scores track the human-marked standard almost perfectly. 

This is incredibly high compared to the reliability of human-marked speaking tests. In essence, we use a group of highly expert human raters to train the AI engine, and then their standard is replicated time after time.  
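To make the validation step concrete, here is a minimal sketch in Python. It is illustrative only – the score arrays are invented, and the 0.95 cut-off simply mirrors the correlation range quoted above; it is not a description of any vendor's actual pipeline.

```python
import numpy as np

def validate_item(human_scores, machine_scores, threshold=0.95):
    """Check that machine scores for one test item correlate highly
    enough with double-marked human scores to keep the item."""
    r = np.corrcoef(human_scores, machine_scores)[0, 1]
    return r, r >= threshold

# Hypothetical scores for one speaking item, marked on a 0-6 scale
human = np.array([2.0, 3.5, 4.0, 5.5, 1.5, 4.5, 3.0, 6.0])
machine = np.array([2.1, 3.4, 4.2, 5.3, 1.6, 4.6, 3.1, 5.9])

r, keep = validate_item(human, machine)
print(f"correlation={r:.3f}, keep item: {keep}")
```

An item whose machine scores drift away from the human standard would fail the threshold and be removed, exactly as described above.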

How AI is used to score writing exams

Our AI writing scoring uses a technology called Latent Semantic Analysis (LSA). LSA is a natural language processing technique that can analyze and score writing based on the meaning behind words – and not just their superficial characteristics. 

Similarly to our speech recognition acoustic models, we first establish a language-specific text recognition model. We feed a large amount of text into the system, and LSA uses artificial intelligence to learn the patterns of how words relate to each other and are used in, for example, the English language. 
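A toy example of the idea behind LSA, assuming nothing about any production model: a truncated singular value decomposition of a term-document matrix places texts that use related (but different) words close together in a latent "meaning" space. The terms and counts below are invented for illustration.

```python
import numpy as np

# Tiny illustrative term-document matrix: rows = terms, columns = texts.
# A real system would be trained on a very large corpus.
terms = ["holiday", "vacation", "beach", "grammar"]
counts = np.array([
    [2, 0, 1, 0],   # "holiday"
    [0, 2, 1, 0],   # "vacation"
    [1, 1, 2, 0],   # "beach"
    [0, 0, 0, 3],   # "grammar"
], dtype=float)

# LSA: truncated singular value decomposition of the term-document matrix.
U, s, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2                                        # keep the 2 strongest latent dimensions
docs_latent = (np.diag(s[:k]) @ Vt[:k]).T    # each row: one text in latent space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Texts 1 and 2 share no words ("holiday" vs "vacation"), yet LSA places
# them close together because both co-occur with "beach" in text 3.
print(cosine(docs_latent[0], docs_latent[1]))  # close to 1.0
print(cosine(docs_latent[0], docs_latent[3]))  # close to 0.0
```

This is how LSA can credit a response for meaning rather than exact word matches: two answers expressing the same idea in different vocabulary end up near each other in the latent space.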

Once the language model has been established, we train the engine to score every written item on a test. As in speaking items, we do this by using human expert raters to score the items first, using double marking. They score many hundreds of written responses for each item, and these ‘Standards’ are then used to train the engine. We then validate the trained engine by feeding in many more human-marked items, and check that the machine scores are very highly correlated to the human scores. 

The benchmark is always the expert human scores. If our AI system doesn’t closely match the scores given by human markers, we remove the item, as it is essential to match the standard set by human markers.

AI’s ability to mark multiple traits 

One of the challenges human markers face in scoring speaking and written items is assessing many traits on a single item. For example, when assessing and scoring speaking, they may need to give separate scores for content, fluency and pronunciation. 

In written responses, markers may need to score a piece of writing for vocabulary, style and grammar. Effectively, they may need to mark every single item at least three times, maybe more. However, once we have trained the AI systems on every trait score in speaking and writing, they can then mark items on any number of traits instantly and consistently. 
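Conceptually, trait scoring amounts to running one trained scorer per trait over the same response. The sketch below is purely illustrative – the trait names and the stand-in scoring functions are invented for demonstration, not the trained engine described above.

```python
# Illustrative only: each trait has its own scorer; a real engine would use
# models trained on double-marked human 'Standards' for that trait.
def score_content(response: str) -> float:
    # Stand-in heuristic: reward longer answers (capped at the top band)
    return min(6.0, len(response.split()) / 10)

def score_vocabulary(response: str) -> float:
    # Stand-in heuristic: reward lexical variety (unique-word ratio)
    words = response.lower().split()
    return 6.0 * len(set(words)) / max(len(words), 1)

TRAIT_SCORERS = {
    "content": score_content,
    "vocabulary": score_vocabulary,
}

def score_all_traits(response: str) -> dict:
    """Mark one response on every trait in a single pass."""
    return {trait: round(fn(response), 2) for trait, fn in TRAIT_SCORERS.items()}

print(score_all_traits("Last summer I travelled to the coast and swam every day."))
```

Adding a new trait is just another entry in the dictionary, which is why a trained system can return any number of trait scores in one pass.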

AI’s lack of bias

A fundamental premise for any test is that no advantage or disadvantage should be given to any candidate. In other words, there should be no positive or negative bias. This can be very difficult to achieve in human-marked speaking and written assessments. In fact, candidates often feel they may have received a different score if someone else had heard them or read their work.

Our AI systems address the issue of bias. We do this by ensuring our speaking and writing AI systems are trained on an extensive range of human accents and writing types. 

We don’t train our engines only on native-speaker accents or polished writing styles; we use representative non-native samples from across the world. When we initially set up our AI systems for speaking and writing scoring, we trialed our items and trained our engines using millions of student responses, and we continue to do so as new items are developed.

The benefits of AI automated assessment

There is nothing wrong with hand-marking homework tests and exams. In fact, it is essential for teachers to get to know their students and provide personal feedback and advice. However, manually correcting hundreds of tests, daily or weekly, can be repetitive, time-consuming, not always reliable and takes time away from working alongside students in the classroom. The use of AI in formative and summative assessments can increase assessed practice time for students and reduce the marking load for teachers.

Language learning takes time, lots of time to progress to high levels of proficiency. The blended use of AI can:

  • address the increasing importance of formative assessment to drive personalized learning and diagnostic assessment feedback

  • allow students to practice and get instant feedback inside and outside of allocated teaching time

  • address the issue of teacher workload

  • create a virtuous combination of humans and machines, taking advantage of what each does best

  • provide fair, fast and unbiased summative assessment scores in high-stakes testing.

We hope this article has answered a few burning questions about how AI is used to assess speaking and writing in our language tests. Fei-Fei Li, Stanford Professor and former Chief Scientist at Google Cloud, describes AI like this:

“I often tell my students not to be misled by the name ‘artificial intelligence’ — there is nothing artificial about it; A.I. is made by humans, intended to behave [like] humans and, ultimately, to impact human lives and human society.”

AI in formative and summative assessments will never replace the role of teachers. AI will support teachers, provide endless opportunities for students to improve, and provide a solution to slow, unreliable and often unfair high-stakes assessments.

Examples of AI assessments in ELT

At app, we have developed a range of assessments using AI technology.

Versant

The Versant tests are a great tool to help establish language proficiency benchmarks in any school, organization or business. They are specifically designed for placement tests to determine the appropriate level for the learner.

PTE Academic

The PTE Academic test is aimed at those who need to prove their level of English for a university place, a job or a visa. It uses AI to score tests, and results are available within five days. 

app English International Certificate (PEIC)

app English International Certificate (PEIC) also uses automated assessment technology. It is a two-hour test, available on demand, that can be taken at home, at school or at a secure test center. Using a combination of advanced speech recognition, exam grading technology and the expertise of professional ELT exam markers worldwide, our patented software can measure English language ability.

Read more about the use of AI in our learning and testing here, or if you're wondering which English test is right for your students make sure to check out our post 'Which exam is right for my students?'.

More blogs from app

  • AI scoring vs human scoring for language tests: What's the difference?

    Reading time: 6 minutes

    When entering the world of language proficiency tests, test takers are often faced with a dilemma: Should they opt for tests scored by humans or those assessed by artificial intelligence (AI)? The choice might seem trivial at first, but understanding the differences between AI scoring and human language test scoring can significantly impact preparation strategy and, ultimately, determine test outcomes.

    The human touch in language proficiency testing and scoring

    Historically, language tests have been scored by human assessors. This method leverages the nuanced understanding that humans have of language, including idiomatic expressions, cultural references, and the subtleties of tone and writing style. Human scorers can appreciate the creative and original use of language, potentially rewarding test takers for flair and originality in their answers. Scorers are particularly effective at evaluating progress or achievement tests, which are designed to assess a student's language knowledge after completing a particular chapter or unit, or at the end of a course, reflecting how well the learner is progressing in their studies.

    One significant difference between human and AI scoring is how they handle context. Human scorers can understand the significance and implications of a particular word or phrase in a given context, while AI algorithms rely on predetermined rules and datasets.

    The adaptability of human judgment – the capacity to adjust and learn from new information – also contributes significantly to the effectiveness of human scoring in language tests.

    Advantages:

    • Nuanced understanding: Human scorers are adept at interpreting the complexities and nuances of language that AI might miss.
    • Contextual flexibility: Humans can consider context beyond the written or spoken word, understanding cultural and situational implications.

    Disadvantages:

    • Subjectivity and inconsistency: Despite rigorous training, human-based scoring can introduce a level of subjectivity and variability, potentially affecting the fairness and reliability of scores.
    • Time and resource intensive: Human-based scoring is labor-intensive and time-consuming, often resulting in longer waiting times for results.
    • Human bias: Assessors, despite being highly trained and experienced, bring their own perspectives, preferences and preconceptions into the grading process. This can lead to variability in scoring, where two equally competent test takers might receive different scores based on the scorer's subjective judgment.

    The rise of AI in language test scoring

    With advancements in technology, AI-based scoring systems have started to play a significant role in language assessment. These systems utilize algorithms and natural language processing (NLP) techniques to evaluate test responses. AI scoring promises objectivity and efficiency, offering a standardized way to assess language proficiency.

    Advantages:

    • Consistency: AI scoring systems provide a consistent scoring method, applying the same criteria across all test takers, thereby reducing the potential for bias.
    • Speed: AI can process and score tests much faster than human scorers can, leading to quicker results turnaround.
    • Great for more nervous testers: Not everyone likes having to take a test in front of a person, so AI removes that extra stress.

    Disadvantages:

    • Lack of nuance recognition: AI may not fully understand subtle nuances, creativity, or complex structures in language the way a human scorer can.
    • Dependence on data: The effectiveness of AI scoring is heavily reliant on the data it has been trained on, which can limit its ability to interpret less common responses accurately.

    Making the choice

    When deciding between tests scored by humans or AI, consider the following factors:

    • Your strengths: If you have a creative flair and excel at expressing original thoughts, human-scored tests might appreciate your unique approach more. Conversely, if you excel in structured language use and clear, concise expression, AI-scored tests could work to your advantage.
    • Your goals: Consider why you're taking the test. Some organizations might prefer one scoring method over the other, so it's worth investigating their preferences.
    • Preparation time: If you're on a tight schedule, the quicker turnaround time of AI-scored tests might be beneficial.

    Ultimately, both scoring methods aim to measure and assess language proficiency accurately. The key is understanding how each approach aligns with your personal strengths and goals.

    The bias factor in language testing

    An often-discussed concern in both AI and human language test scoring is the issue of bias. With AI scoring, biases can be ingrained in the algorithms through the data they are trained on, but a well-designed system can mitigate this and provide fairer scoring.

    Conversely, human scorers, despite their best efforts to remain objective, bring their own subconscious biases to the evaluation process. These biases might be related to a test taker's accent, dialect, or even the content of their responses, which could subtly influence the scorer's perceptions and judgments. Efforts are continually made to mitigate these biases in both approaches to ensure a fair and equitable assessment for all test takers.

    Preparing for success in foreign language proficiency tests

    Regardless of the scoring method, thorough preparation remains, of course, crucial. Familiarize yourself with the test format, practice under timed conditions, and seek feedback on your performance, whether from teachers, peers, or through self-assessment tools.

    The distinctions between AI scoring and human scoring in language tests continue to blur, with many exams now incorporating a mix of both to leverage their respective strengths. Understanding and interpreting written language is essential when preparing for language proficiency tests, especially reading tests. By understanding these differences, test takers can better prepare for their exams, setting themselves up for the best possible outcome.

    Will AI replace human-marked tests?

    The question of whether AI will replace markers in language tests is complex and multifaceted. On one hand, the efficiency, consistency and scalability of AI scoring systems present a compelling case for their increased utilization. These systems can process vast numbers of tests in a fraction of the time it takes human markers, providing quick feedback that is invaluable in educational settings. On the other hand, the nuanced understanding, contextual knowledge, flexibility, and ability to appreciate the subtleties of language that human markers bring to the table are qualities that AI has yet to fully replicate.

    Both AI and human-based scoring aim to accurately assess language proficiency levels, such as those defined by the Common European Framework of Reference for Languages or the Global Scale of English, where a level like CEFR C2 (GSE 85-90) indicates that a student can understand virtually everything and use the language with near-complete mastery.

    The integration of AI in language testing is less about replacement and more about complementing and enhancing the existing processes. AI can handle the objective, clear-cut aspects of language testing, freeing markers to focus on the more subjective, nuanced responses that require a human touch. This hybrid approach could lead to a more robust, efficient and fair assessment system, leveraging the strengths of both humans and AI.

    Future developments in AI technology and machine learning may narrow the gap between AI and human grading capabilities. However, the ethical considerations, such as ensuring fairness and addressing bias, along with the desire to maintain a human element in education, suggest that a balanced approach will persist. In conclusion, while AI will increasingly play a significant role in language testing, it is unlikely to completely replace markers. Instead, the future lies in finding the optimal synergy between technological advancements and human judgment to enhance the fairness, accuracy and efficiency of language proficiency assessments.

    Tests to let your language skills shine through

    Explore app's innovative language testing solutions today and discover how we are blending the best of AI technology and our own expertise to offer you reliable, fair and efficient language proficiency assessments. We are committed to offering reliable and credible proficiency tests, ensuring that our certifications are recognized for job applications, university admissions, citizenship applications, and by employers worldwide. Whether you're gearing up for academic, professional, or personal success, our tests are designed to meet your diverse needs and help unlock your full potential.

    Take the next step in your language learning journey with app and experience the difference that a meticulously crafted test can make.

  • Using language learning as a form of self-care for wellbeing

    Reading time: 6.5 minutes

    In today’s fast-paced world, finding time for self-care is more important than ever. Among a range of traditional self-care practices, learning a language emerges as an unexpected but incredibly rewarding approach. Learning a foreign language is a key aspect of personal development and can help your mental health, offering benefits like improved career opportunities, enhanced creativity, and the ability to connect with people from diverse cultures.

  • Implications for educators on fostering student success

    By Belgin Elmas
    Reading time: 5 minutes

    app’s recent report, “How English empowers your tomorrow,” carries significant implications for educators. It shows that increased English proficiency correlates with improved economic and social outcomes. Educational institutions play a crucial role in preparing students for professional success, employing various pedagogical approaches and teaching methods to meet the diverse needs of learners across universities, colleges and schools. The report’s most sobering finding for educators, however, is that learners are leaving formal education without the essential skills required to achieve these better outcomes.

    Furthermore, as stated in the report, many of them are not adequately equipped for the demands of their professional roles as their careers progress. This underscores educators’ responsibility to critically evaluate their teaching and assessment methods to ensure their students are effectively prepared for real-world challenges, especially as they transition into higher education, where the stakes for academic and professional success are significantly higher.

    The data of the report comes from five countries, and while Turkey is not one of them, many of the findings are still relevant to the English language education system in Turkey. Given the significant investment of time and effort, with foreign language education starting in the second grade for the majority of students in the Ministry of National Education schools, better outcomes would be expected in mastering the global language.

    Numerous reasons contributing to this failure could be listed but I would put the perception of how language is defined, taught and assessed within the education system in first place. English language classes are generally approached as “subjects to be taught” at schools, and rather than focusing on finding ways of improving learners’ skills in the foreign language, the curriculum includes “topics to be covered” with a heavy focus on grammar and vocabulary.

    This, of course, extends to assessment practices, and the cycle continues, primarily teaching and assessing grammar and vocabulary proficiency. Participants in app’s report cite the heavy emphasis on teaching grammar and vocabulary, together with insufficient opportunities to practice the language inside and outside the classroom, as the primary factors contributing to their lack of communication skills. If Turkish learners were asked, it is highly likely we would get the same answers. The implication for educators is explicit: we must first revisit the definition of what “knowing a language” is and align that definition with our teaching and assessment methodology. What use is knowing a language without being able to communicate in it?

    New opportunities needed for practice

    The implication of learners’ lack of opportunities to use the target language in and outside the classroom is clear: teachers must refrain from dominating classroom discourse and instead create opportunities for learners to actively engage with the language. Recognizing common learning barriers in this context is crucial, as these barriers can significantly hinder students' ability to practice language skills effectively. This matters even more in a foreign language context like Turkey, where students lack opportunities to practice their target language in their daily lives.

    Understanding different learning styles is also part of this process, as it allows teachers to design engagement strategies that accommodate visual, kinaesthetic or auditory learning preferences, addressing the specific needs of individual learners. Teachers, who are reported to dominate 80% of class time with their own talk, bear the primary responsibility here; the majority should monitor themselves to ensure they are creating opportunities for active participation and language practice for their students.

    Encouraging the learning process as an everyday habit

    Students seem to need guidance for practicing the language not only inside but also outside the classroom to improve their proficiency, where external factors such as limited access to resources and environmental distractions can significantly hinder their ability to learn. Integrating technology into education and guiding students to continue their learning beyond classroom settings would undoubtedly be valuable advice. Language learning apps and especially social media can empower students to engage with the language in creative and meaningful ways, addressing extrinsic barriers by providing access to resources and support that overcome the lack of support from teachers or peers and environmental distractions.

    Being able to function in a foreign language, such as negotiating, giving opinions, and making suggestions, were indicated as areas where the gap exists between what is needed and what students possess in language skills. Such a result would again require a shift towards more communicative and task-based language teaching approaches, giving opportunities for students to exercise these skills not only in professional but also in academic and social contexts.

    Raising awareness among students about the benefits of language proficiency can be suggested as another implication that will also inspire them. Aligning educational curricula with real-life needs and raising awareness of both students and teachers about the rationale behind it is crucial for helping students set their own goals more accurately while their teachers guide them with realistic expectations.

    Understanding motivational learning barriers

    "I didn’t feel as if I was making progress" was one of the barriers participants said stopped them from achieving greater proficiency – an emotional learning barrier that stems from internal challenges such as peer pressure and resistance to change. The implication here is that we should help students recognize and appreciate how much they have already achieved in their learning process, and how much more there is to achieve. Motivational barriers also play a significant role: losing curiosity and the desire to learn leads students to miss classes or refuse to take courses. The Global Scale of English (GSE) is a valuable tool for tracking learner progress, providing a concrete framework that builds learners’ confidence and thereby helps to overcome both emotional and motivational barriers.

    In conclusion, while the list of implications for educators might be enhanced, the most significant suggestion lies in reconsidering our perception of language learning and proficiency. This shift in perspective will have a great impact on all aspects of language education, particularly teaching and assessment methodologies. Embracing this new understanding of language teaching will not only enhance the effectiveness of language education but also better prepare learners for real-world language use and interaction and better life conditions.