Online English language testing for employment: Is it secure?

Jennifer Manning

Managers and HR professionals have a global workforce at their fingertips. This makes adopting a secure English language test for employment more important than ever.

An online English test enables organizations to assess candidates’ language proficiency from anywhere in the world, screen more applicants, and standardize the hiring process. These tests also help HR professionals and managers save time – ensuring only people with the right language skills advance to the interview stage.

But how can employers be certain these tests are safe? And how easy is it for people to cheat? In this article, we’ll explore a few of the top security concerns we hear, and share what features make online language tests secure.

What is an online English test?

An online English test measures how well a job applicant can communicate in English, focusing on speaking, listening, reading and writing skills. It can also assess a candidate’s English for business skills – for example, how clearly someone can communicate on the phone with clients, or how well they understand what is being said during a conference call.

Online tests can be taken in a controlled environment – in a testing center with in-person proctors – but also from a job applicant’s personal computer or mobile phone at home. When tests are taken at home, they can be made more secure using virtual proctors or powerful AI monitoring technology.

Cheating, grading and data security

When many people think of taking a language test, they imagine the traditional way: students in a large testing center scribbling away with pen and paper. No mobile phones are allowed, and if test-takers are caught cheating, they’ll be flagged by a proctor walking around the room.

So when managers or HR professionals consider the option of an online English test – taken digitally and often without human supervision – it’s no surprise that many have questions about security. Let’s take a look at some common concerns:

Is cheating a problem?

A large number of test takers admit to cheating. According to research by the International Center for Academic Integrity, 68% of undergraduate students say they’ve cheated on a writing assignment or test, as do 43% of graduate students.

But how easy is it to cheat during a Versant test?

The truth is, not very. With Versant, exam cheating is actually quite difficult, and test takers would have to outsmart a range of AI monitoring technologies.

If a verified photo is uploaded to the platform, HirePro’s face recognition technology can compare the live test taker with it. This ensures test takers are who they say they are, and haven’t asked someone else to sit the exam for them. It is the institution’s responsibility to verify the original photo.

And since Versant tests are monitored using specialized AI algorithms – without a human present – even the slightest suspicious behavior is flagged for review. For example, Versant notices if a different face appears in the video, or if the camera goes dark. With video monitoring, our platform also flags if the test taker moves away from the camera, or looks away multiple times. And it detects when someone switches tabs on their computer.
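To make this concrete, here is a minimal sketch of how rule-based flagging of monitoring events might work. The event names, thresholds and logic are invented for illustration – they are not Versant’s or HirePro’s actual implementation.

```python
from dataclasses import dataclass, field

# Illustrative event types an AI proctoring system might emit.
# These names and thresholds are hypothetical, not Versant's actual rules.
SUSPICIOUS_EVENTS = {"different_face", "camera_dark", "tab_change"}

@dataclass
class ProctoringSession:
    flags: list = field(default_factory=list)
    look_away_count: int = 0

    def record(self, event: str, timestamp: float) -> None:
        """Flag single-occurrence events immediately; count repeated ones."""
        if event in SUSPICIOUS_EVENTS:
            self.flags.append((timestamp, event))
        elif event == "look_away":
            self.look_away_count += 1
            if self.look_away_count >= 3:  # flag only repeated glances away
                self.flags.append((timestamp, "looked_away_repeatedly"))

    def needs_review(self) -> bool:
        return bool(self.flags)

session = ProctoringSession()
session.record("look_away", 12.5)
session.record("tab_change", 40.0)
print(session.needs_review())  # True -> recording goes to HR for review
```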

Finally, the entire test is recorded. When suspicious behavior arises, HR professionals will decide whether to accept or reject the results – or have the candidate retake the test.

Are scores accurate?

We’ve all had frustrating experiences with AI. Chatbots don’t always understand what we’re trying to say, and speech recognition technology sometimes isn’t up to par. This leaves many wondering if they should trust AI to grade high-stakes tests – especially when the results could be the difference between someone getting the job, or not.

Versant uses patented AI technology – trained and optimized for evaluating English language proficiency – to grade tests. It evaluates speaking, listening, reading, writing, and even intelligibility.

Our AI is trained using responses from thousands of fluent and second-language English speakers. With these models, we can not only evaluate how a response should be scored, but also recognize when a test taker has mispronounced a word or made another mistake. Using all this information, a candidate’s final score is calculated from more than 2,000 data points.
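As a rough illustration of how many measurements can feed one result, here is a toy weighted-scoring sketch. The feature values, weights and 20–80 mapping are invented and stand in for a model learned from expert-scored training data.

```python
# Hypothetical per-response features an automated scorer might extract;
# in practice, thousands of such data points feed the final score.
features = {
    "pronunciation": 0.82,
    "fluency": 0.75,
    "vocabulary": 0.68,
    "sentence_mastery": 0.71,
}

# Illustrative weights standing in for a trained scoring model.
weights = {
    "pronunciation": 0.30,
    "fluency": 0.25,
    "vocabulary": 0.20,
    "sentence_mastery": 0.25,
}

raw = sum(weights[k] * features[k] for k in features)  # 0.0-1.0
score = round(20 + raw * 60)  # mapped onto an illustrative 20-80 scale
print(score)  # 65
```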

Do online tests follow GDPR standards?

HR professionals and managers deal with sensitive personal information every day. This includes each job applicant’s name, full address, date of birth, and sometimes even their social security number. The HR tools they implement must therefore keep this data secure.

Most importantly, these tools must follow GDPR standards: data must be gathered with consent and protected from exploitation. With Versant, test-taker data is stored securely and handled in line with GDPR requirements.

All our data is encrypted at rest and in transit. Versant assessment data is stored in the US, while HirePro, our remote monitoring partner, stores proctoring data in either Singapore or Europe, depending on customer needs. Both systems are GDPR compliant.
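For readers curious what “encrypted at rest” means in practice, here is a minimal sketch using symmetric encryption from Python’s cryptography package. It illustrates the principle only – it says nothing about the actual stack Versant or HirePro run, and real systems keep keys in a dedicated key-management service.

```python
from cryptography.fernet import Fernet

# Illustration only: symmetric encryption of a stored record.
key = Fernet.generate_key()  # in production, held in a key-management service
cipher = Fernet(key)

record = b'{"name": "A. Candidate", "score": 74}'
encrypted = cipher.encrypt(record)          # what actually sits on disk
assert cipher.decrypt(encrypted) == record  # readable only with the key
```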

Versant: a secure English language test

The Versant automated language test is powered by patented AI technology to ensure the most accurate results for test takers and employers alike. Even better, our remote testing lets HR professionals securely and efficiently assess candidates worldwide, 24/7 – and recruit top global talent to help more companies scale.

More blogs from app

  • Can computers really mark exams? Benefits of ELT automated assessments

    By app Languages

    Automated assessment, including the use of Artificial Intelligence (AI), is one of the latest education tech solutions. It speeds up exam marking, removes human bias, and is as accurate as – and at least as reliable as – human examiners. As innovations go, this one is a real game-changer for teachers and students.

    However, it has understandably been met with many questions and sometimes skepticism in the ELT community – can computers really mark speaking and writing exams accurately? 

    The answer is a resounding yes. Students from all parts of the world already take AI-graded tests. Versant tests, for example, provide unbiased, fair and fast automated scoring for speaking and writing exams – irrespective of where the test takers live, or what their accent or gender is.

    This article will explain the main processes involved in AI automated scoring and make the point that AI technologies are built on the foundations of consistent expert human judgments. So, let’s clear up the confusion around automated scoring and AI and look into how it can help teachers and students alike. 

    AI versus traditional automated scoring

    First of all, let’s distinguish between traditional automated scoring and AI. When we talk about automated scoring, we generally mean the scoring of items that are either multiple-choice or cloze items. You may have to reorder sentences, choose from a drop-down list, or insert a missing word – that sort of thing. These question types are designed to test particular skills, and automated scoring ensures that they can be marked quickly and accurately every time.
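    To see why these items are so reliable to mark, consider a minimal sketch of traditional automated scoring: a deterministic comparison against an answer key. The question IDs and answers here are invented for illustration.

    ```python
    # A minimal sketch of traditional automated scoring: deterministic
    # comparison against an answer key (question IDs are invented).
    answer_key = {"q1": "b", "q2": "have been", "q3": "c"}

    def score_objective_items(responses: dict[str, str]) -> int:
        """Count exact matches; the same input always gets the same score."""
        return sum(
            responses.get(qid, "").strip().lower() == correct
            for qid, correct in answer_key.items()
        )

    print(score_objective_items({"q1": "b", "q2": "Have been", "q3": "a"}))  # 2
    ```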

    While automatically scored items like these can be used to assess receptive skills such as listening and reading comprehension, they cannot mark the productive skills of writing and speaking. Every student's response in writing and speaking items will be different, so how can computers mark them?

    This is where AI comes in. 

    We hear a lot about how AI is increasingly used in areas where large amounts of unstructured data need to be processed effectively and accurately – in medical diagnostics, for example. In language testing, AI uses specialized computer software to grade written and oral tests.

    How AI is used to score speaking exams

    The first step is to build an acoustic model for each language that can recognize speech and convert the sound waves into text. While this technology used to be very unusual, most of our smartphones can do this now.
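    To show how routine this speech-to-text step has become, here is a sketch using the open-source SpeechRecognition package and its free web recognizer. This is purely illustrative – the acoustic models behind our tests are purpose-built and language-specific, and the audio file name here is hypothetical.

    ```python
    import speech_recognition as sr  # open-source SpeechRecognition package

    recognizer = sr.Recognizer()
    with sr.AudioFile("response.wav") as source:  # hypothetical recorded answer
        audio = recognizer.record(source)

    # Free web API, used here only to illustrate the speech-to-text step.
    text = recognizer.recognize_google(audio)
    print(text)
    ```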

    These acoustic models are then trained to score every single prompt or item on a test. We do this by using human expert raters to score the items first, using double marking. They score hundreds of oral responses for each item, and these ‘Standards’ are then used to train the engine. 

    Next, we validate the trained engine by feeding in many more human-marked items, and check that the machine scores correlate very highly with the human scores. If this doesn’t happen for an item, we remove it, as it must match the standard set by human markers. We expect a correlation of .95–.99, meaning the engine’s scores track the expert human scores almost perfectly.

    This is incredibly high compared to the reliability of human-marked speaking tests. In essence, we use a group of highly expert human raters to train the AI engine, and then their standard is replicated time after time.  
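    To make the validation step concrete, here is a sketch of the correlation check on one item, with invented scores. An item whose machine scores fall below the acceptance threshold would be removed.

    ```python
    import numpy as np

    # Invented scores for one item: expert human marks vs. machine marks
    # on the same held-out responses.
    human   = np.array([72, 65, 80, 58, 77, 69, 74, 61])
    machine = np.array([71, 66, 79, 60, 78, 68, 73, 63])

    r = np.corrcoef(human, machine)[0, 1]  # Pearson correlation
    print(f"Pearson r = {r:.3f}")

    ACCEPTANCE_THRESHOLD = 0.95  # illustrative cut-off
    if r < ACCEPTANCE_THRESHOLD:
        print("Item removed: machine scoring does not match the human standard")
    ```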

    How AI is used to score writing exams

    Our AI writing scoring uses a technology called Latent Semantic Analysis (LSA). LSA is a natural language processing technique that can analyze and score writing based on the meaning behind words – and not just their superficial characteristics.

    Similarly to our speech recognition acoustic models, we first establish a language-specific text recognition model. We feed a large amount of text into the system, and LSA uses artificial intelligence to learn the patterns of how words relate to each other and how they are used in, for example, the English language.
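    Here is a compressed sketch of that idea using scikit-learn, whose TruncatedSVD over TF-IDF vectors is a standard way to implement LSA. The four-sentence corpus and two dimensions are toy-sized stand-ins for the large text collections real models learn from.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    # Toy corpus standing in for the large body of training text.
    corpus = [
        "the meeting was postponed until next week",
        "the meeting was delayed until the following week",
        "the cat sat quietly on the warm mat",
        "a small cat slept on the soft mat",
    ]

    # TF-IDF captures surface word usage; truncated SVD (the core of LSA)
    # projects it into a low-dimensional space where related meanings cluster.
    tfidf = TfidfVectorizer().fit_transform(corpus)
    lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

    print(cosine_similarity(lsa[0:1], lsa[1:2]))  # high: same meaning
    print(cosine_similarity(lsa[0:1], lsa[2:3]))  # low: unrelated meaning
    ```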

    Once the language model has been established, we train the engine to score every written item on a test. As in speaking items, we do this by using human expert raters to score the items first, using double marking. They score many hundreds of written responses for each item, and these ‘Standards’ are then used to train the engine. We then validate the trained engine by feeding in many more human-marked items, and check that the machine scores are very highly correlated to the human scores. 

    The benchmark is always the expert human scores. If our AI system doesn’t closely match the scores given by human markers, we remove the item, as it is essential to match the standard set by human markers.

    AI’s ability to mark multiple traits 

    One of the challenges human markers face in scoring speaking and written items is assessing many traits on a single item. For example, when assessing and scoring speaking, they may need to give separate scores for content, fluency and pronunciation. 

    In written responses, markers may need to score a piece of writing for vocabulary, style and grammar. Effectively, they may need to mark every single item at least three times, maybe more. However, once we have trained the AI systems on every trait score in speaking and writing, they can then mark items on any number of traits instantaneously – and with complete consistency.
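    As a sketch of what multi-trait marking looks like in code, the function below scores one response on three traits in a single pass. The per-trait scorers are crude placeholders – a real system would load a trained model for each trait.

    ```python
    # Placeholder per-trait scorers (hypothetical; a real system would
    # load a trained model for each trait).
    def vocabulary_model(text: str) -> float:
        return min(1.0, len(set(text.split())) / 20)  # crude lexical-variety proxy

    def style_model(text: str) -> float:
        return 0.8  # placeholder

    def grammar_model(text: str) -> float:
        return 0.9  # placeholder

    def score_all_traits(response: str) -> dict[str, float]:
        """Score every trait in one pass - no separate re-marking rounds."""
        return {
            "vocabulary": vocabulary_model(response),
            "style": style_model(response),
            "grammar": grammar_model(response),
        }

    print(score_all_traits("The committee has postponed its decision until May."))
    ```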

    AI’s lack of bias

    A fundamental premise for any test is that no advantage or disadvantage should be given to any candidate. In other words, there should be no positive or negative bias. This can be very difficult to achieve in human-marked speaking and written assessments. In fact, candidates often feel they may have received a different score if someone else had heard them or read their work.

    Our AI systems eradicate the issue of bias. This is done by ensuring our speaking and writing AI systems are trained on an extensive range of human accents and writing types. 

    We don’t train our engines only on perfect native-speaker accents or writing styles; we use representative non-native samples from across the world. When we initially set up our AI systems for speaking and writing scoring, we trialed our items and trained our engines using millions of student responses, and we continue to do this as new items are developed.
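    One simple way to check for this kind of bias is to compare score distributions across test-taker groups of comparable proficiency. The sketch below uses invented numbers purely to show the shape of such a check.

    ```python
    import statistics

    # Invented scores for groups of comparable proficiency.
    scores_by_accent = {
        "Spanish-L1":  [68, 72, 75, 70, 66],
        "Mandarin-L1": [69, 73, 74, 71, 67],
        "Hindi-L1":    [70, 71, 76, 69, 68],
    }

    means = {group: statistics.mean(s) for group, s in scores_by_accent.items()}
    spread = max(means.values()) - min(means.values())

    print(means)
    # A large gap between comparable groups would signal bias and trigger
    # retraining with more representative samples.
    print(f"max group difference: {spread:.1f} points")
    ```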

    The benefits of AI automated assessment

    There is nothing wrong with hand-marking homework, tests and exams. In fact, it is essential for teachers to get to know their students and provide personal feedback and advice. However, manually correcting hundreds of tests, daily or weekly, can be repetitive, time-consuming and not always reliable – and it takes time away from working alongside students in the classroom. The use of AI in formative and summative assessments can increase assessed practice time for students and reduce the marking load for teachers.

    Language learning takes time – lots of time – to progress to high levels of proficiency. The blended use of AI can:

    • address the increasing importance of formative assessment to drive personalized learning and diagnostic assessment feedback

    • allow students to practice and get instant feedback inside and outside of allocated teaching time

    • address the issue of teacher workload

    • create a virtuous combination between humans and machines, taking advantage of what humans do best and what machines do best. 

    • provide fair, fast and unbiased summative assessment scores in high-stakes testing.

    We hope this article has answered a few burning questions about how AI is used to assess speaking and writing in our language tests. Fei-Fei Li, chief scientist at Google Cloud and Stanford professor, describes AI like this:

    “I often tell my students not to be misled by the name ‘artificial intelligence’ — there is nothing artificial about it; A.I. is made by humans, intended to behave [like] humans and, ultimately, to impact human lives and human society.”

    AI in formative and summative assessments will never replace the role of teachers. AI will support teachers, provide endless opportunities for students to improve, and provide a solution to slow, unreliable and often unfair high-stakes assessments.

    Examples of AI assessments in ELT

    At app, we have developed a range of assessments using AI technology.

    Versant

    The Versant tests are a great tool for establishing language proficiency benchmarks in any school, organization or business. They are specifically designed for placement testing, determining the appropriate level for each learner.

    PTE Academic

    The PTE Academic test is aimed at those who need to prove their level of English for a university place, a job or a visa. It uses AI to score tests, and results are available within five days.

    app English International Certificate (PEIC)

    app English International Certificate (PEIC) also uses automated assessment technology. The two-hour test is available on demand, to take at home, at school or at a secure test center. Using a combination of advanced speech recognition and exam grading technology, plus the expertise of professional ELT exam markers worldwide, our patented software can measure English language ability.