Explaining computerized English testing in plain English


Research has shown that automated scoring can give more reliable and objective results than human examiners when evaluating a person’s mastery of English. This is because an automated scoring system is impartial, unlike humans, who can be influenced by irrelevant factors such as a test taker’s appearance or body language. Additionally, automated scoring treats regional accents equally, unlike human examiners who may favor accents they are more familiar with. Automated scoring also allows individual features of a spoken or written test question response to be analyzed independent of one another, so that a weakness in one area of language does not affect the scoring of other areas.

PTE Academic was created in response to the demand for a more accurate, objective, secure and relevant test of English. Our automated scoring system is a central feature of the test, and vital to ensuring the delivery of accurate, objective and relevant results, no matter who the test taker is or where the test is taken.

Development and validation of the scoring system to ensure accuracy

PTE Academic’s automated scoring system was developed after extensive research and field testing. A prototype test was developed and administered to a sample of more than 10,000 test takers from 158 different countries, speaking 126 different native languages. This data was collected and used to train the automated scoring engines for both the written and spoken PTE Academic items.

To do this, multiple trained human markers assess each answer. Those results are used as the training material for machine learning algorithms, similar to those behind systems like Google Search or Apple’s Siri. The model makes initial guesses at the score each response should receive, consults the actual scores to see how well it did, and adjusts its parameters accordingly. It then cycles through the training set again and again, adjusting and improving until it converges on a solution that comes as close as possible to predicting the full set of human ratings.
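As an illustration, the guess-check-adjust loop described above can be sketched as a simple gradient-descent regression. Everything below – the two features, the scores and the learning rate – is invented for illustration; a real scoring engine uses far richer features and models.

```python
# A minimal sketch of the training loop: a linear model learns weights by
# gradient descent so its predicted scores converge toward the scores that
# human markers assigned.

def train(features, human_scores, lr=0.01, epochs=2000):
    """Fit weights so that a weighted sum of features approximates human scores."""
    n_feats = len(features[0])
    weights = [0.0] * n_feats
    for _ in range(epochs):
        for x, y in zip(features, human_scores):
            pred = sum(w * xi for w, xi in zip(weights, x))
            error = pred - y  # how far the guess is from the human rating
            for j in range(n_feats):
                weights[j] -= lr * error * x[j]  # nudge toward the target
    return weights

# Toy data: each response is described by two invented features
# (say, vocabulary richness and grammatical accuracy), scored 0-5.
features = [[0.2, 0.1], [0.5, 0.4], [0.9, 0.8], [0.7, 0.9]]
human_scores = [1.0, 2.5, 4.5, 4.0]

weights = train(features, human_scores)
predictions = [sum(w * xi for w, xi in zip(weights, x)) for x in features]
```

After enough passes over the training set, the model's predictions sit close to the human ratings, which is exactly the convergence behavior described above.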

Once trained and performing at a high level, this model is used as a marking algorithm, able to score new responses just as human markers would. Correlations between the scores given by this system and those given by trained human markers are high. In fact, the standard error of measurement between app’s system and a human rater is lower than that between one human rater and another; in other words, the machine scores are more accurate than those given by a pair of human raters, because much of the bias and unreliability has been removed. In general, you can think of a machine scoring system as one that distills the best qualities of human ratings and then acts like an idealized human marker.
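The two statistics mentioned here are straightforward to compute. Below is a minimal sketch with invented ratings for ten responses, using the Pearson correlation and one common estimate of the standard error of measurement between two raters (the standard deviation of their score differences divided by √2):

```python
import math

# Illustrative only: invented scores from two raters for the same ten
# responses. Real validation studies use thousands of responses.
rater_a = [3, 4, 5, 2, 4, 3, 5, 4, 2, 3]
rater_b = [3, 4, 4, 2, 5, 3, 5, 4, 3, 3]

def pearson(x, y):
    """Pearson correlation: how strongly two sets of scores co-vary."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def sem(x, y):
    """One common estimate of the standard error of measurement between
    two raters: SD of their score differences divided by sqrt(2)."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return sd / math.sqrt(2)

r = pearson(rater_a, rater_b)
e = sem(rater_a, rater_b)
```

A high correlation and a low standard error of measurement are what the validation studies look for; the claim above is that the machine-versus-human value of `e` comes out lower than the human-versus-human one.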

app conducts scoring validation studies to ensure that the machine scores are consistently comparable to ratings given by skilled human raters. Here, a new set of test taker responses (never seen by the machine) is scored both by human raters and by the automated scoring system. Research has demonstrated that the automated scoring technology underlying PTE Academic produces scores comparable to those obtained from careful human experts. This means that the automated system “acts” like a human rater when assessing test takers’ language skills, but does so with a machine's precision, consistency and objectivity.

Scoring speaking responses with app’s Ordinate technology

The spoken portion of PTE Academic is automatically scored using app’s Ordinate technology. Ordinate technology is the result of years of research in speech recognition, statistical modeling, linguistics and testing theory. It uses a proprietary speech processing system specifically designed to analyze and automatically score speech from fluent and second-language English speakers. The Ordinate scoring system collects hundreds of pieces of information from a test taker’s spoken response beyond the words themselves, such as pace, timing and rhythm, as well as the strength of the voice, emphasis, intonation and accuracy of pronunciation. It is trained to recognize even somewhat mispronounced words, and it quickly evaluates the content, relevance and coherence of the response. In particular, the meaning of the spoken response is evaluated, making it possible for these models to assess whether what was said deserves a high score.

Scoring writing responses with Intelligent Essay Assessor™ (IEA)

The written portion of PTE Academic is scored using the Intelligent Essay Assessor™ (IEA), an automated scoring tool powered by app’s state-of-the-art Knowledge Analysis Technologies™ (KAT) engine. Based on more than 20 years of research and development, the KAT engine automatically evaluates the meaning of text, such as an essay written by a student in response to a particular prompt. Using a proprietary application of the mathematical approach known as Latent Semantic Analysis (LSA), the KAT engine evaluates writing as accurately as skilled human raters. LSA derives the meaning of words and passages by statistically analyzing large bodies of relevant text, which allows the KAT engine to understand the meaning of text much as a human reader would.
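As a rough illustration of how LSA works, the sketch below builds a small term-document matrix, reduces it with a singular value decomposition, and compares documents in the resulting latent space. The documents and the choice of two latent dimensions are invented for illustration; the KAT engine's actual models are proprietary and far larger.

```python
import numpy as np

docs = [
    "the essay argues the thesis clearly",
    "the essay supports the thesis with evidence",
    "the recipe lists flour sugar and butter",
]
vocab = sorted({w for d in docs for w in d.split()})

# Term-document matrix: rows are terms, columns are documents.
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# SVD factors the matrix; truncating to k dimensions keeps only the
# dominant "latent" meaning dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # each row: a document in latent space

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The two essay-like documents land closer together in latent space
# than either does to the off-topic recipe.
sim_essays = cosine(doc_vecs[0], doc_vecs[1])
sim_off_topic = cosine(doc_vecs[0], doc_vecs[2])
```

Comparing a student's essay against reference texts in this latent space is, in very simplified terms, how an LSA-based system can judge whether a response is on topic and well developed.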

What aspects of English does PTE Academic assess?

Written scoring

  • Word choice
  • Grammar and mechanics
  • Progression of ideas
  • Organization
  • Style, tone
  • Paragraph structure
  • Development, coherence
  • Point of view
  • Task completion

Spoken scoring

  • Sentence mastery
  • Content
  • Vocabulary
  • Accuracy
  • Pronunciation
  • Intonation
  • Fluency
  • Expressiveness
  • Pragmatics

More blogs from app

  • 9 steps to teaching advanced business English

    By Margaret O'Keeffe

    The challenge of teaching business English to C1 level students

    Once your English students reach a B2 level of English, they’re fairly competent communicators. For many learners, motivation to improve starts to suffer when they reach this intermediate plateau. They understand almost everything and can express themselves clearly enough, so why would they want to continue learning English and work toward a C1 level?

    The CEFR describes C1-level learners as proficient users of a language. C1-level students have a high proficiency in English and perform well in an international work environment.

    How can we help our upper intermediate students reach this level and see the benefits in their own lives and careers? Here are nine steps you can take as an English language teacher to help your students achieve language proficiency.

  • 6 tips for teaching business English to low level learners

    By Margaret O'Keeffe

    The CEFR describes A1 and A2 learners as ‘basic users’ of a language. So how can we help these students to develop their English for the workplace?

    Here are our six top tips:

    1. Focus on high-frequency vocabulary for work

    Learning English vocabulary for work contexts is the top priority for many low-level learners in business English classes. It helps them communicate their message in a simple, effective way. This makes it important to teach common words and set expressions for everyday work situations.

    These include:

    • lexical sets (words related to the same topic or situation) – for example, days, months, numbers, verbs to describe work routines, verbs in the past.
    • common collocations with verbs and nouns (for example, manage a team, have meetings, place an order, solve a problem).
    • functional language and fixed phrases – greetings (How are you? Nice to meet you.) and offers (How can I help you? Would you like…?).

    2. Help students with vocabulary learning

    Teach vocabulary items in realistic contexts. For example, phone calls, to-do lists, short emails, text messages etc.

    While it might be tempting to give students lots of vocabulary to memorize, this can cause overload, be frustrating and ultimately demotivating for learners. Instead, you should aim to present eight to ten new words in a lesson as a general rule. This is an achievable number for working memory and helps to build learners’ confidence. The number of words can be a little higher if items are easy to show in images or there is repetition; for instance, the numbers 20 to 100.

    Have students make simple decisions about new words, as this helps with recall later. Start with simple tasks, such as matching words and pictures or verb and noun collocations they’ve seen in a short text (for example, manage a team, call customers, write emails). Next, ask students to complete sentences using the target words and then write their own sentences with them.

    Getting students to personalize new vocabulary makes it more memorable, for instance writing sentences describing their work routines. Repetition also aids long-term memory, so make sure vocabulary is recycled in the materials in later lessons.

    Finally, make a list of vocabulary games to use for revision exercises, warmers and to finish classes.

    3. Maximize student speaking time

    Learners need to develop their English-speaking skills for work. The classroom is a safe, low-stakes environment for them to gain fluency and confidence.

    Use the audio and video scripts of short dialogues or an extract from a longer script. Students read the dialogue aloud in pairs or groups. Then give feedback by drilling the stress and rhythm of any difficult words or phrases with the whole class. Back-chaining phrases – starting with the last sound and building up going backwards – is an excellent way to drill. Get students to swap roles and repeat the task.

    You can also use another technique called disappearing dialogue. Put a short dialogue on the board for students to practice in pairs. Then delete parts of the dialogue and ask them to repeat the task, swapping roles each time. Gradually delete more parts to increase the challenge. Students can reconstruct the dialogue as a final task.

    Surveys, questionnaires, true/false games and information-gap exercises are further ways to practice speaking in English and to recycle target structures and vocabulary.