How to assess your learners using the GSE Assessment Frameworks

Billie Jago
A teacher standing at the board in front of a class, smiling at his students.
Reading time: 4 minutes

With language learning, assessing both the quality and the quantity of language use is crucial for accurate proficiency evaluation. While evaluating quantity (for example, the number of words written or the duration of spoken production) can provide insights into a learner’s fluency and engagement in a task, it doesn’t give a full picture of a learner’s language competence. For that, learners also need to be evaluated on the quality of what they produce (the appropriateness, accuracy and complexity of their language use). Quality covers factors such as grammatical accuracy, lexical choice, coherence and the ability to convey meaning effectively.

In order to measure the quality of different language skills, you can use the Global Scale of English (GSE) assessment frameworks.

Developed in collaboration with assessment experts, the GSE Assessment Frameworks are intended to be used alongside the GSE Learning Objectives to help you assess the proficiency of your learners.

There are two GSE Assessment Frameworks: one for adults and one for young learners.

What are the GSE Assessment Frameworks?

  • The GSE Assessment Frameworks are intended to be used alongside the GSE Learning Objectives to help teachers assess their learners’ proficiency of all four skills (speaking, listening, reading and writing).
  • The GSE Learning Objectives focus on the things a learner can do, while the GSE Assessment Frameworks focus on how well a learner can do these things.
  • They provide examples of the proficiencies your learners should be demonstrating.
  • They help teachers pinpoint students’ specific areas of strength and weakness more accurately, facilitating targeted instruction and personalized learning plans.
  • They can also help to motivate your learners, as their progress is evidenced and they can see a clear path for improvement.

An example of the GSE Assessment Frameworks

This example is from the Adult Assessment Framework for speaking.

As you can see, there are sub-skills within speaking (and for the other three main overarching skills – writing, listening and reading). Within speaking, these are production and fluency, spoken interaction, language range and accuracy.

The GSE range (and corresponding CEFR level) is shown at the top of each column, and there are descriptors that students should ideally demonstrate at that level.

However, it is important to note that students may sit across different ranges, depending on the sub-skill. For example, your student may show evidence of GSE 43-50 production and fluency and spoken interaction, but they may need to improve their language range and accuracy, and therefore sit in a range of GSE 36-42 for these sub-skills.
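
To make this concrete, here is a minimal sketch (in Python, purely illustrative) of how a learner’s speaking profile might be recorded per sub-skill. The GSE 43-50 (B1) and 36-42 bands come from this example; everything else is an assumption for illustration:

```python
# A minimal, illustrative sketch of a learner whose speaking profile
# spans different GSE bands per sub-skill, as described above. The
# bands mirror this article's example; the framework itself is the
# authoritative source.
learner_profile = {
    "production and fluency": (43, 50),
    "spoken interaction": (43, 50),
    "language range": (36, 42),
    "accuracy": (36, 42),
}

def subskills_below(profile, target_lo=43):
    """Return sub-skills whose band starts below the target GSE band."""
    return [skill for skill, (lo, _hi) in profile.items() if lo < target_lo]

print(subskills_below(learner_profile))
# -> ['language range', 'accuracy'] (this learner's focus areas)
```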

The GSE Assessment Frameworks in practice

So, how can you use these frameworks as a teacher in your lesson? Let’s look at an example.

Imagine you are teaching a class of adult learners at GSE 43-50 (B1). This week, your class has been working towards writing an essay about living in the city vs the countryside. Your class has just written their final essay and you want to assess what they have produced.

Look at the writing sub-skills in the GSE Assessment Framework for adults. Imagine these are the criteria you are using to assess your students’ writing.

You read one of your students’ essays, and in it they demonstrate that they can:

  • Express their opinion on the advantages and disadvantages of living in the city vs the countryside
  • Make relevant points which are mostly on-topic
  • Use topic-related language
  • Connect their ideas logically and in a way that flows well
  • Write in clear paragraphs

However, you notice that:

  • They tend to repeat common words, such as city, town, countryside, nice, busy
  • They don’t use punctuation effectively, for example missing commas, long sentences, missing capitalization
  • They have some issues with grammatical structures

Compare the above notes to the GSE Assessment Frameworks. What level is your learner demonstrating in each sub-skill? How could you evidence this using the criteria?

Now, compare your answers to the ideas below.

The points marked in the GSE 43-50 column are evidence that the student is at the expected writing level for their class, based on what you observed in their essay. The points marked in the GSE 36-42 column could be shown to the student to tell them what they need to focus on to improve, based on their essay.
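
If it helps to see the worked example as a simple record, here is a hedged sketch (Python, mirroring the notes above; not an official scoring tool) of the essay evidence grouped by GSE band:

```python
# The groupings below restate the worked example's observations; the
# band labels come from the article, the rest is illustrative.
essay_assessment = {
    "GSE 43-50 (at expected level)": [
        "expresses opinion on city vs countryside",
        "makes relevant, mostly on-topic points",
        "uses topic-related language",
        "connects ideas logically with good flow",
        "writes in clear paragraphs",
    ],
    "GSE 36-42 (focus areas)": [
        "repeats common words (city, town, countryside, nice, busy)",
        "ineffective punctuation (missing commas, long sentences, capitalization)",
        "some issues with grammatical structures",
    ],
}

for band, notes in essay_assessment.items():
    print(band)
    for note in notes:
        print(f"  - {note}")
```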

Customizing the GSE Assessment Frameworks

The GSE Assessment Frameworks are flexible and customizable, and you can use the descriptors for your specific purpose. You can choose the appropriate GSE Assessment Frameworks for your context, and build your own formative assessment based on these.

In the example above, you were only assessing an essay, so you could ignore any descriptors that were not applicable to that scenario – for example, “writes personal and semi-formal letters and emails relating to everyday matters” or “incorporates some relevant details from external sources”.

Another benefit of the frameworks is that you can personalize assessments and create tailored learning roadmaps for individual students. Of course, not all learners are the same, so the descriptors allow students to see which sub-skill they need to work on in order to bring their writing (or speaking, listening or reading) up to their expected level. It also helps you as the teacher to understand what sub-skills to focus on in lessons to improve these main skills.

Finally, don’t be afraid to introduce your students to these descriptors or translate them into the learner's first language for lower levels. It is a great way for them to pinpoint and reflect on their strengths and areas for improvement, rather than simply getting a score and not understanding how to get to the next level of confidence and ability.

By incorporating the GSE Assessment Frameworks into your course for formative assessment, you can build students’ confidence and help them better reflect on their learning.

More blogs from app

  • Can computers really mark exams? Benefits of ELT automated assessments

    By app Languages

    Automated assessment, including the use of Artificial Intelligence (AI), is one of the latest education tech solutions. It speeds up exam marking, removes human bias, and is at least as accurate and reliable as human examiners. As innovations go, this one is a real game-changer for teachers and students.

    However, it has understandably been met with many questions and sometimes skepticism in the ELT community – can computers really mark speaking and writing exams accurately? 

    The answer is a resounding yes. Students from all parts of the world already take AI-graded tests. Versant tests, for example, provide unbiased, fair and fast automated scoring for speaking and writing exams – irrespective of where test takers live, or what their accent or gender is.

    This article will explain the main processes involved in AI automated scoring and make the point that AI technologies are built on the foundations of consistent expert human judgments. So, let’s clear up the confusion around automated scoring and AI and look into how it can help teachers and students alike. 

    AI versus traditional automated scoring

    First of all, let’s distinguish between traditional automated scoring and AI. When we talk about automated scoring, generally, we mean scoring items that are either multiple-choice or cloze items. You may have to reorder sentences, choose from a drop-down list, insert a missing word – that sort of thing. These question types are designed to test particular skills, and automated scoring ensures that they can be marked quickly and accurately every time.
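
    To make this concrete, here is a minimal sketch (in Python, with an invented item format) of how objectively scored items like these can be marked against an answer key:

    ```python
    # Minimal sketch of traditional automated scoring: objective item
    # types compared against an answer key. Item IDs and formats are
    # invented for illustration.
    answer_key = {
        "q1": "b",                           # multiple choice
        "q2": "went",                        # cloze (missing word)
        "q3": ["first", "then", "finally"],  # sentence reordering
    }

    def score_item(item_id, response):
        """Score one objective item: 1 for an exact match, else 0."""
        return int(response == answer_key[item_id])

    responses = {"q1": "b", "q2": "goes", "q3": ["first", "then", "finally"]}
    total = sum(score_item(i, r) for i, r in responses.items())
    print(f"Score: {total}/{len(answer_key)}")  # -> Score: 2/3
    ```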

    While automatically scored items like these can be used to assess receptive skills such as listening and reading comprehension, they cannot mark the productive skills of writing and speaking. Every student's response in writing and speaking items will be different, so how can computers mark them?

    This is where AI comes in. 

    We hear a lot about how AI is increasingly being used in areas where there is a need to deal with large amounts of unstructured data quickly and accurately – in medical diagnostics, for example. In language testing, AI uses specialized computer software to grade written and oral tests.

    How AI is used to score speaking exams

    The first step is to build an acoustic model for each language that can recognize speech, converting the audio waveform into text. While this technology used to be rare, most smartphones can do it now.

    These acoustic models are then trained to score every single prompt or item on a test. We do this by using human expert raters to score the items first, using double marking. They score hundreds of oral responses for each item, and these ‘Standards’ are then used to train the engine. 

    Next, we validate the trained engine by feeding in many more human-marked items, and check that the machine scores correlate very highly with the human scores. If this doesn’t happen for any item, we remove it, as it must match the standard set by human markers. We expect a correlation of between .95 and .99 – that is, near-perfect agreement between the machine scores and the human-marked samples.

    This is incredibly high compared to the reliability of human-marked speaking tests. In essence, we use a group of highly expert human raters to train the AI engine, and then their standard is replicated time after time.  
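
    As a rough illustration of this validation step, the sketch below (Python, with hypothetical data) computes the correlation between human and machine scores for one item and checks it against a .95 threshold. Note that correlation measures how closely the two sets of scores agree, not a percentage of identical marks:

    ```python
    import numpy as np

    # Hypothetical validation data: double-marked human scores vs.
    # machine scores for one speaking item. All values are illustrative.
    human = np.array([2.0, 3.5, 4.0, 1.5, 5.0, 3.0])
    machine = np.array([2.1, 3.4, 4.2, 1.6, 4.9, 3.1])

    def validate_item(human_scores, machine_scores, threshold=0.95):
        """Keep an item only if machine scores correlate highly with human scores."""
        r = np.corrcoef(human_scores, machine_scores)[0, 1]
        return r >= threshold, r

    keep, r = validate_item(human, machine)
    print(f"r = {r:.3f}, keep item: {keep}")  # r is very close to 1 here
    ```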

    How AI is used to score writing exams

    Our AI writing scoring uses a technology called Latent Semantic Analysis (LSA). LSA is a natural language processing technique that can analyze and score writing, based on the meaning behind words – and not just their superficial characteristics.

    Similarly to our speech recognition acoustic models, we first establish a language-specific text recognition model. We feed a large amount of text into the system, and LSA uses artificial intelligence to learn the patterns of how words relate to each other and are used in, for example, the English language. 

    Once the language model has been established, we train the engine to score every written item on a test. As in speaking items, we do this by using human expert raters to score the items first, using double marking. They score many hundreds of written responses for each item, and these ‘Standards’ are then used to train the engine. We then validate the trained engine by feeding in many more human-marked items, and check that the machine scores are very highly correlated to the human scores. 

    The benchmark is always the expert human scores. If our AI system doesn’t closely match the scores given by human markers, we remove the item, as it is essential to match the standard set by human markers.
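
    The sketch below is a much-simplified illustration of the LSA idea, not the production engine described here: responses are mapped into a low-dimensional 'meaning' space, and a new response is scored by its similarity to human-scored reference responses. All data, scores and parameters are invented:

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    # Invented reference responses with expert (double-marked) scores.
    references = [
        "Living in the city offers jobs and entertainment but is noisy.",
        "The countryside is quiet and healthy although transport is limited.",
        "Cities are crowded yet convenient; rural life is calm but isolated.",
        "I like town because shops near house.",
    ]
    human_scores = np.array([5.0, 5.0, 4.5, 2.0])

    # Build a small LSA space: TF-IDF features reduced to 2 dimensions.
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(references)
    svd = TruncatedSVD(n_components=2, random_state=0)
    lsa_refs = svd.fit_transform(tfidf)

    # Project a new response into the same space, then score it as a
    # similarity-weighted average of the human reference scores.
    new_response = ["City life is convenient for work but very noisy and crowded."]
    lsa_new = svd.transform(vectorizer.transform(new_response))
    sims = cosine_similarity(lsa_new, lsa_refs)[0]
    weights = np.clip(sims, 1e-9, None)  # guard against negative weights
    predicted = float(np.average(human_scores, weights=weights))
    print(f"Predicted score: {predicted:.2f}")
    ```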

    AI’s ability to mark multiple traits 

    One of the challenges human markers face in scoring speaking and written items is assessing many traits on a single item. For example, when assessing and scoring speaking, they may need to give separate scores for content, fluency and pronunciation. 

    In written responses, markers may need to score a piece of writing for vocabulary, style and grammar. Effectively, they may need to mark every single item at least three times, maybe more. However, once we have trained the AI systems on every trait score in speaking and writing, they can then mark items on any number of traits instantaneously – and without error. 
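
    As a toy illustration of multi-trait scoring (the trait names, features and scoring formulas are all placeholders, not a real engine), once a model exists per trait, one response can be scored on every trait in a single pass:

    ```python
    # Placeholder per-trait 'models' standing in for trained engines.
    trait_models = {
        "content": lambda f: 0.8 * f["relevance"] + 0.2 * f["coverage"],
        "fluency": lambda f: f["words_per_minute"] / 30,
        "pronunciation": lambda f: f["phoneme_accuracy"] * 5,
    }

    def score_all_traits(features, models):
        """Score one response on every trait in a single pass."""
        return {trait: round(model(features), 2) for trait, model in models.items()}

    features = {
        "relevance": 4.0,
        "coverage": 5.0,
        "words_per_minute": 120,
        "phoneme_accuracy": 0.9,
    }
    print(score_all_traits(features, trait_models))
    # -> {'content': 4.2, 'fluency': 4.0, 'pronunciation': 4.5}
    ```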

    AI’s lack of bias

    A fundamental premise for any test is that no advantage or disadvantage should be given to any candidate. In other words, there should be no positive or negative bias. This can be very difficult to achieve in human-marked speaking and written assessments. In fact, candidates often feel they may have received a different score if someone else had heard them or read their work.

    Our AI systems eradicate the issue of bias. This is done by ensuring our speaking and writing AI systems are trained on an extensive range of human accents and writing types. 

    We don’t want perfect native-speaking accents or writing styles to train our engines. We use representative non-native samples from across the world. When we initially set up our AI systems for speaking and writing scoring, we trialed our items and trained our engines using millions of student responses. We continue to do this now as new items are developed.

    The benefits of AI automated assessment

    There is nothing wrong with hand-marking homework tests and exams. In fact, it is essential for teachers to get to know their students and provide personal feedback and advice. However, manually correcting hundreds of tests, daily or weekly, can be repetitive, time-consuming, not always reliable and takes time away from working alongside students in the classroom. The use of AI in formative and summative assessments can increase assessed practice time for students and reduce the marking load for teachers.

    Language learning takes time – lots of it – to progress to high levels of proficiency. The blended use of AI can:

    • address the increasing importance of formative assessment to drive personalized learning and diagnostic assessment feedback

    • allow students to practice and get instant feedback inside and outside of allocated teaching time

    • address the issue of teacher workload

    • create a virtuous combination between humans and machines, taking advantage of what humans do best and what machines do best

    • provide fair, fast and unbiased summative assessment scores in high-stakes testing.

    We hope this article has answered a few burning questions about how AI is used to assess speaking and writing in our language tests. An interesting quote from Fei-Fei Li, Chief Scientist at Google Cloud and Stanford professor, describes AI like this:

    “I often tell my students not to be misled by the name ‘artificial intelligence’ — there is nothing artificial about it; A.I. is made by humans, intended to behave [like] humans and, ultimately, to impact human lives and human society.”

    AI in formative and summative assessments will never replace the role of teachers. AI will support teachers, provide endless opportunities for students to improve, and provide a solution to slow, unreliable and often unfair high-stakes assessments.

    Examples of AI assessments in ELT

    At app, we have developed a range of assessments using AI technology.

    Versant

    The Versant tests are a great tool to help establish language proficiency benchmarks in any school, organization or business. They are specifically designed for placement tests to determine the appropriate level for the learner.

    PTE Academic

    The PTE Academic test is aimed at those who need to prove their level of English for a university place, a job or a visa. It uses AI to score tests, and results are available within five days.

    app English International Certificate (PEIC)

    app English International Certificate (PEIC) also uses automated assessment technology. It is a two-hour test, available on demand, that can be taken at home, at school or at a secure test center. Using a combination of advanced speech recognition, exam-grading technology and the expertise of professional ELT exam markers worldwide, our patented software can measure English language ability.