What to look for in an English placement test

Jennifer Manning

If you’re an English teacher, Director of Studies or school owner, you’ll know the importance of putting students in the right group. Whether it’s a business English class, exam prep or general English – a placement test is essential. Without one, you’ll end up teaching classes with such varied levels and needs that it’ll be hard to plan an effective lesson.

Placing students at the wrong level will not only lead to unmotivated learners, but it may also cost your institution money.

But how exactly do you design a reliable, accurate and easy-to-use test? In this post, we’ll examine the key questions you need to consider before making your own placement test. We’ll also explore what features you need to achieve your goals.

Problems with traditional placement tests

Most private language schools (PLSs) and higher education institutions offer new students the opportunity to take a placement test before starting a course. However, these often consist of just a multiple-choice test, a short interview, or a combination of the two.

While this does act as a filter, helping us group students into similar levels, there are a number of drawbacks. Students can guess the answers to multiple-choice questions – and while the results might give us a rough idea of their grammar knowledge, these tests don’t consider the four skills: speaking, writing, listening and reading.

Oral interviews, on the other hand, can give us an indication of the students’ spoken level. However, they also raise questions of objectivity and consistency that even specially trained teachers will struggle to avoid.

Another big issue with traditional tests is the amount of time they take. Multiple-choice exams are often graded by hand, and interviewing every new student uses valuable resources that could be spent elsewhere.

Key questions to consider

Before you completely redesign your current test, we’ve put together a series of questions to help you think about your objectives, define your needs and explore the challenges you may face.

Taking a few minutes to think about these things can make the process of finding the right English placement test go more smoothly and quickly. Once you know what you’re looking for, you’ll be ready to make a checklist of the most important features.

What will your test be used for?

  • Placing incoming ESL students into the appropriate English language program.
  • Measuring students’ progress throughout the school year.
  • Final assessment of students' abilities at the end of the school year (“exit test”).
  • All of the above.

Is this different from what you use your current test for? How soon are your needs likely to change?

What skills does your current test measure?

Does it measure speaking, listening, reading, writing, or all of the above? Are all of these skills measured in separate tests — or in one test?

  • How many students do you need to test at each intake?
  • How many students do you need to test each year? How many do you expect you’ll need to test in three years?

How quickly do you currently receive test results? How quickly would you like to receive them?

If you can test your students weeks before the start of the school term, you may have time to wait for results. However, if you are continuously testing students, or have students arriving just before the term begins, you may need to get results much more quickly.

What features in your current test do you like and dislike?

Are there things in your current test that you also want in your new test? Is anything missing, or anything that you don’t want your new test to have?

What resources are available to you?

Some English language tests require students to have the computer skills needed to take the test online. You may also need a testing lab that has the following:

  • computers
  • a stable internet connection
  • headsets with built-in microphones.

Once you’ve got answers to the questions, you can use the checklist below to make sure your placement test has the features you need. It may also be useful for comparing products if you decide to use an external placement test.

A preliminary checklist for placement tests

What features do you need to achieve your goals?

Now that you've analyzed how you want to use your new English placement test, create a checklist of the features that you need to achieve your goals.

Usability

  • The ability to test large numbers of students at one time
  • Fast and easy set-up and test implementation
  • Minimal training needed to learn to use the test
  • Total completion time is less than one hour
  • Automatic scoring by computer (no hand scoring)
  • Immediate results – the administrator can see results as soon as the testing period is over

Scalability

  • Includes administrative tools at no extra cost
  • Includes everything needed to deploy the tests, without requiring the purchase of additional equipment

Security

  • Test forms that are randomized to prevent cheating
  • Secure reporting to ensure test information remains confidential

Test results

  • Automated scoring that can recognize and analyze speech components from both fluent and second-language English speakers
  • Comprehensive reporting that lets you easily compare scores with other measures, such as the CEFR, GSE, IELTS and TOEFL

More blogs from app

  • Can computers really mark exams? Benefits of ELT automated assessments

    By app Languages

    Automated assessment, including the use of Artificial Intelligence (AI), is one of the latest education tech solutions. It speeds up exam marking times, removes human biases, and is at least as accurate and reliable as human examiners. As innovations go, this one is a real game-changer for teachers and students.

    However, it has understandably been met with many questions and sometimes skepticism in the ELT community – can computers really mark speaking and writing exams accurately? 

    The answer is a resounding yes. Students from all parts of the world already take AI-graded tests. Versant tests – for example – provide unbiased, fair and fast automated scoring for speaking and writing exams, irrespective of where the test takers live, or what their accent or gender is.

    This article will explain the main processes involved in AI automated scoring and make the point that AI technologies are built on the foundations of consistent expert human judgments. So, let’s clear up the confusion around automated scoring and AI and look into how it can help teachers and students alike. 

    AI versus traditional automated scoring

    First of all, let’s distinguish between traditional automated scoring and AI. When we talk about automated scoring, we generally mean scoring objective items such as multiple-choice or cloze questions. You may have to reorder sentences, choose from a drop-down list, or insert a missing word – that sort of thing. These question types are designed to test particular skills, and automated scoring ensures that they can be marked quickly and accurately every time.
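    To make the distinction concrete, here is a minimal sketch of how such objective items might be scored automatically. The item IDs, answer key and student responses are invented purely for illustration.

    ```python
    # Minimal sketch of traditional automated scoring for objective items.
    # The item IDs, answer key and student responses are invented examples.

    ANSWER_KEY = {
        "q1": "b",              # multiple-choice: the correct option
        "q2": "have been",      # cloze: the missing phrase
        "q3": ["c", "a", "b"],  # reordering: the expected sequence
    }

    def score_item(item_id: str, response) -> int:
        """Return 1 if the response matches the key exactly, otherwise 0."""
        expected = ANSWER_KEY[item_id]
        if isinstance(expected, str):
            return int(str(response).strip().lower() == expected.lower())
        return int(list(response) == expected)

    student = {"q1": "B", "q2": "have been", "q3": ["c", "b", "a"]}
    total = sum(score_item(item, answer) for item, answer in student.items())
    print(f"Score: {total}/{len(ANSWER_KEY)}")  # -> Score: 2/3
    ```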

    While automatically scored items like these can be used to assess receptive skills such as listening and reading comprehension, they cannot mark the productive skills of writing and speaking. Every student's response in writing and speaking items will be different, so how can computers mark them?

    This is where AI comes in. 

    We hear a lot about how AI is increasingly being used in areas where there is a need to deal with large amounts of unstructured data effectively and accurately – in medical diagnostics, for example. In language testing, AI uses specialized computer software to grade written and oral tests.

    How AI is used to score speaking exams

    The first step is to build an acoustic model for each language that can recognize speech and convert the audio signal into text. While this technology used to be rare, most of our smartphones can do it now.

    These acoustic models are then trained to score every single prompt or item on a test. We do this by using human expert raters to score the items first, using double marking. They score hundreds of oral responses for each item, and these ‘Standards’ are then used to train the engine. 

    Next, we validate the trained engine by feeding in many more human-marked items, and check that the machine scores correlate very highly with the human scores. If this doesn’t happen for any item, we remove it, as it must match the standard set by human markers. We expect a correlation of between .95 and .99, which means the machine scores align almost exactly with the human-marked samples.

    This is incredibly high compared to the reliability of human-marked speaking tests. In essence, we use a group of highly expert human raters to train the AI engine, and then their standard is replicated time after time.  
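    As a rough illustration of this validation step, the sketch below (using invented item IDs, scores and the .95 threshold mentioned above) checks how closely machine scores track the human double-marked scores for each item, and flags any item that falls below the threshold.

    ```python
    # Sketch of the engine-validation step: compare machine scores with expert
    # human scores for each item and drop items that correlate poorly.
    # All item IDs and score values are invented for illustration.
    import math

    def pearson(xs, ys):
        """Pearson correlation coefficient between two equal-length score lists."""
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
        sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
        return cov / (sd_x * sd_y)

    MIN_CORRELATION = 0.95

    item_scores = {
        # item_id: (human double-marked scores, machine scores)
        "speak_017": ([4, 5, 3, 2, 5, 4], [4, 5, 3, 2, 5, 4]),
        "speak_042": ([3, 4, 2, 5, 3, 4], [5, 2, 4, 3, 3, 2]),
    }

    for item_id, (human, machine) in item_scores.items():
        r = pearson(human, machine)
        decision = "keep" if r >= MIN_CORRELATION else "remove from the item bank"
        print(f"{item_id}: r = {r:.2f} -> {decision}")
    ```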

    How AI is used to score writing exams

    Our AI writing scoring uses a technology called Latent Semantic Analysis (LSA). LSA is a natural language processing technique that can analyze and score writing based on the meaning behind words – and not just their superficial characteristics.

    As with our speech recognition acoustic models, we first establish a language-specific text model. We feed a large amount of text into the system, and LSA uses artificial intelligence to learn the patterns of how words relate to each other and how they are used in a given language – English, for example.

    Once the language model has been established, we train the engine to score every written item on a test. As with the speaking items, we do this by using expert human raters to score the items first, using double marking. They score many hundreds of written responses for each item, and these ‘Standards’ are then used to train the engine. We then validate the trained engine by feeding in many more human-marked items, and check that the machine scores correlate very highly with the human scores.

    The benchmark is always the expert human scores. If our AI system doesn’t closely match the scores given by human markers, we remove the item, as it is essential to match the standard set by human markers.
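    As a rough sketch of the general approach – not our production engine – the example below uses off-the-shelf tools (scikit-learn’s TF-IDF vectorizer and truncated SVD, a standard way to implement LSA) to map responses into a semantic space and fit them to a handful of invented human scores.

    ```python
    # Rough sketch of LSA-style scoring of written responses, assuming
    # scikit-learn is available. The tiny training set and scores are invented;
    # a real engine is trained on hundreds of double-marked responses per item.
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD  # Latent Semantic Analysis
    from sklearn.linear_model import Ridge

    # Human double-marked responses to one written item (invented examples).
    responses = [
        "The company should invest in training because skilled staff work better.",
        "Invest training staff good because work.",
        "Training employees improves productivity, so the investment pays off.",
        "I like pizza and football very much.",
    ]
    human_scores = [5, 2, 5, 1]

    # TF-IDF followed by truncated SVD captures how words co-occur (the semantic
    # space); a simple regressor then maps that space onto the human standard.
    engine = make_pipeline(
        TfidfVectorizer(),
        TruncatedSVD(n_components=2, random_state=0),
        Ridge(alpha=1.0),
    )
    engine.fit(responses, human_scores)

    new_response = ["Spending on staff training raises productivity for the company."]
    print(round(float(engine.predict(new_response)[0]), 1))
    ```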

    AI’s ability to mark multiple traits 

    One of the challenges human markers face in scoring speaking and written items is assessing many traits on a single item. For example, when assessing and scoring speaking, they may need to give separate scores for content, fluency and pronunciation. 

    In written responses, markers may need to score a piece of writing for vocabulary, style and grammar. Effectively, they may need to mark every single item at least three times, maybe more. However, once we have trained the AI systems on every trait score in speaking and writing, they can then mark items on any number of traits instantaneously – and without error. 
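    Conceptually, this means running several trained trait engines over the same response in a single pass. The sketch below uses invented trait names and stub scoring functions simply to show the shape of the output; in a real system each entry would be a trained model.

    ```python
    # Sketch of multi-trait scoring: one engine per trait, all applied to the
    # same response at once. Trait names and stub engines are illustrative only.
    from typing import Callable, Dict

    SPEAKING_TRAIT_ENGINES: Dict[str, Callable[[str], float]] = {
        "content":       lambda response: 4.0,  # stub standing in for a trained model
        "fluency":       lambda response: 3.5,
        "pronunciation": lambda response: 4.5,
    }

    def score_all_traits(response: str) -> Dict[str, float]:
        """Apply every trait engine to the same response in one pass."""
        return {trait: engine(response) for trait, engine in SPEAKING_TRAIT_ENGINES.items()}

    print(score_all_traits("transcribed learner response"))
    # -> {'content': 4.0, 'fluency': 3.5, 'pronunciation': 4.5}
    ```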

    AI’s lack of bias

    A fundamental premise for any test is that no advantage or disadvantage should be given to any candidate. In other words, there should be no positive or negative bias. This can be very difficult to achieve in human-marked speaking and written assessments. In fact, candidates often feel they may have received a different score if someone else had heard them or read their work.

    Our AI systems eradicate the issue of bias. This is done by ensuring our speaking and writing AI systems are trained on an extensive range of human accents and writing types. 

    We don’t rely on ‘perfect’ native-speaker accents or writing styles to train our engines; we use representative non-native samples from across the world. When we initially set up our AI systems for speaking and writing scoring, we trialed our items and trained our engines using millions of student responses. We continue to do this now as new items are developed.

    The benefits of AI automated assessment

    There is nothing wrong with hand-marking homework, tests and exams. In fact, it is essential for teachers to get to know their students and provide personal feedback and advice. However, manually correcting hundreds of tests, daily or weekly, can be repetitive, time-consuming and not always reliable, and it takes time away from working alongside students in the classroom. The use of AI in formative and summative assessments can increase assessed practice time for students and reduce the marking load for teachers.

    Language learning takes time – lots of time – to progress to high levels of proficiency. The blended use of AI can:

    • address the increasing importance of formative assessment to drive personalized learning and diagnostic assessment feedback

    • allow students to practice and get instant feedback inside and outside of allocated teaching time

    • address the issue of teacher workload

    • create a virtuous combination between humans and machines, taking advantage of what humans do best and what machines do best

    • provide fair, fast and unbiased summative assessment scores in high-stakes testing.

    We hope this article has answered a few burning questions about how AI is used to assess speaking and writing in our language tests. An interesting quote from Fei-Fei Li, Chief Scientist at Google and Stanford professor, describes AI like this:

    “I often tell my students not to be misled by the name ‘artificial intelligence’ — there is nothing artificial about it; A.I. is made by humans, intended to behave [like] humans and, ultimately, to impact human lives and human society.”

    AI in formative and summative assessments will never replace the role of teachers. AI will support teachers, provide endless opportunities for students to improve, and provide a solution to slow, unreliable and often unfair high-stakes assessments.

    Examples of AI assessments in ELT

    At app, we have developed a range of assessments using AI technology.

    Versant

    The Versant tests are a great tool to help establish language proficiency benchmarks in any school, organization or business. They are specifically designed for placement testing, determining the appropriate level for each learner.

    PTE Academic

    PTE Academic is aimed at those who need to prove their level of English for a university place, a job or a visa. It uses AI to score tests, and results are available within five days.

    app English International Certificate (PEIC)

    app English International Certificate (PEIC) also uses automated assessment technology. It is a two-hour test, available on demand to take at home, at school or at a secure test center. Using a combination of advanced speech recognition, exam-grading technology and the expertise of professional ELT exam markers worldwide, our patented software can measure English language ability.