3 opportunities for using mediation with young learners

Tim Goodier

Mediation in the CEFR

The addition of ‘can do’ descriptors for mediation in the CEFR Companion Volume is certainly generating a lot of discussion. The CEFR levels A1 to C2 are a reference point to organise learning, teaching and assessment, and they are used in primary and secondary programs worldwide. Teachers of young learners aligning their courses to the CEFR may wonder if they should therefore be ‘teaching’ mediation as a standard to follow. Is this really the case? And what might ‘teaching’ mediation mean?

The short answer is that this is not the case – the CEFR is a reference work, not a curriculum. So the ‘can do’ statements for each level are an optional resource to use selectively as we see fit. This is particularly true for young learners, where ‘can do’ statements may be selected, adapted and simplified in a way that is accessible and meaningful to them. This approach is demonstrated in the many European Language Portfolios (ELPs) for young learners that were validated by the Council of Europe following the launch of the CEFR and ELP.

So let’s recap what is meant by mediation in the CEFR. The new scales deal with three main areas:  

  • Mediating a text: taking things you have understood and communicating them in your own words to help others understand.
  • Mediating concepts: collaborating with others to talk through ideas and solutions and reach new conclusions.
  • Mediating communication: supporting the acceptance of different cultural viewpoints.

Focusing on mediation with young learners

Mediation activities may involve aspects of cognitive demand, general social competencies and literacy development that are too challenging for a given target age group or level. These factors need to be carefully considered when designing tasks. However, with the proper guidance it is possible for young learners to engage in mediation activities in a simple way appropriate to age, ability and context, which invites consideration of the potential relevance of the new descriptors to the age groups 7 to 10 and 11 to 15.

Opportunities for mediation in the young learner classroom

It’s fair to say that opportunities for mediation activities already regularly occur in the communicative young learner classroom. These can be identified and enhanced if we want to develop this area.  

1. Collaboration 

Many young learner courses adopt an enquiry-based learning approach, guiding learners to collaborate on tasks and reach conclusions through creative thinking. The CEFR provides ‘can do’ statements for collaborating in a group starting at A1:

  • Can invite others’ contributions to very simple tasks using short, simple phrases.
  • Can indicate that he/she understands and ask whether others understand.
  • Can express an idea with very simple words and ask what others think.

Young learners at this level can build a basic repertoire of simple ‘collaborative behaviours’ with keywords and phrases connected to visual prompts, e.g. posters. A routine can be set up before pair-and-share tasks to practise short phrases for asking what someone thinks, showing understanding, or saying you don’t understand. This can also include paralanguage, modelled by the teacher, for showing interest and offering someone else the turn to speak.

It is important for young learners to be clear about what is expected of them and what will happen next, so such routines can effectively scaffold collaborative enquiry-based learning tasks. 

2. Communication 

‘Can do’ statements for mediating communication, such as facilitating pluricultural space, can orient objectives for learners themselves to foster understanding between different cultures. Again, young learners can develop their behaviours for welcoming, listening and indicating understanding with the help of visual prompts, stories and role-model characters.

3. Discussion of texts  

Young learners also spend a lot of time mediating texts because they enjoy talking about stories they have listened to, watched or read. Although there is only one statement for expressing a personal response to creative texts at A1 (‘Can use simple words and phrases to say how a work made him/her feel’), it can inspire a more conscious focus on classroom phases for talking about responses to texts and stories, and on equipping learners with keywords and phrases to express their reactions. In this way, as they progress towards A2, young learners can develop the confidence to talk about different aspects of the story in their own words, such as the characters and their feelings.

Moving forward

Clearly, it is not obligatory to focus on mediation activities with young learners – but the ‘can do’ statements are an interesting area to consider and reflect upon. There are some obvious parallels between mediation activities and 21st century skills or soft communication skills, and the CEFR ‘can do’ statements can help formulate manageable communicative learning objectives in this area. This, in turn, can inspire and orient classroom routines and tasks which prepare learners to be active communicators and social agents in the target language, developing their confidence to engage in mediation tasks as a feature of their lifelong learning pathways.

More blogs from ÃÛÌÒapp

  • English for employability: Why teaching general English is not enough

    By Ehsan Gorji
    Reading time: 4 minutes

    Many English language learners are studying English with the aim of getting down to the nitty-gritty of the language they need for their profession. Whether the learner is an engineer, a lawyer, a nanny, a nurse, a police officer, a cook, or a salesperson, simply teaching general English or even English for specific purposes is not enough. We need to improve our learners’ skills for employability.

    The four maxims of conversation

    In his article Logic and Conversation, Paul Grice, a philosopher of language, proposes that every conversation is based on four maxims: quantity, quality, relation and manner. He believes that if these maxims combine successfully, then the best conversation will take place and the right message will be delivered to the right person at the right time.

    The four maxims take on a deeper significance when it comes to the workplace, where things are often more formal and more urgent. Many human resources (HR) managers have spent hours fine-tuning workplace conversations simply because a job candidate or employee has not been adequately educated to the level of English language that a job role demands. This, coupled with the fact that many companies across the globe are adopting English as their official corporate language, has resulted in a new requirement in the world of business: mastery of the English language.

    It would not be satisfactory for an employee to be turned down for a job vacancy, to be disqualified after a while in the role, or to fail to fulfil his or her assigned tasks, simply because their English language profile does not match what the job fully expects or lacks even the essential must-have can-dos of the job role.

    How the GSE Job Profiles can help

    The GSE Job Profiles can help target those ‘must-have can-dos’ related to various job roles. The ‘Choose Learner’ drop-down menu offers the opportunity to view GSE Learning Objectives for four learner types: in this case, select ‘Professional Learners’. You can then click on the ‘Choose Job Role’ button to narrow down the objectives specific to a particular job role – for example, ‘Office and Administrative Support’ and then ‘Hotel, Motel and Resort Desk Clerks’.

    You can then choose the GSE/CEFR range to apply to the results. In this example, I would like to know what English language skills a hotel desk clerk is expected to master at B1-B1+ (GSE 43-58).
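    To make this narrowing-down concrete, here is a minimal sketch in Python of filtering a set of learning objectives by job role and GSE range. The data structure, field names and sample objectives are hypothetical illustrations for this sketch only, not the actual GSE Job Profiles data or interface.

    ```python
    # Hypothetical sketch: filtering learning objectives by job role and GSE band.
    # The records and values below are invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class LearningObjective:
        description: str
        job_role: str   # e.g. "Hotel, Motel and Resort Desk Clerks"
        gse: int        # Global Scale of English value (10-90)

    objectives = [
        LearningObjective("Can take and confirm a simple booking over the phone.",
                          "Hotel, Motel and Resort Desk Clerks", 46),
        LearningObjective("Can explain hotel facilities and opening hours to guests.",
                          "Hotel, Motel and Resort Desk Clerks", 52),
        LearningObjective("Can draft a short formal email to a supplier.",
                          "Office and Administrative Support", 61),
    ]

    def filter_objectives(items, role, gse_min, gse_max):
        """Keep only objectives for the given role within the GSE range."""
        return [o for o in items if o.job_role == role and gse_min <= o.gse <= gse_max]

    # B1-B1+ corresponds roughly to GSE 43-58, as in the example above.
    for obj in filter_objectives(objectives, "Hotel, Motel and Resort Desk Clerks", 43, 58):
        print(f"GSE {obj.gse}: {obj.description}")
    ```

    Run as written, this would print only the two desk-clerk objectives that fall inside GSE 43-58.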

  • Harness the power of English for a global competitive edge

    By Samantha Ball
    Reading time: 7 minutes

    How does increasing English proficiency drive international growth? Read on to find out how future-focused business leaders are gaining a competitive edge globally by investing in English language training.

    The link between English language proficiency and global business growth is indisputable, and this presents leaders with an exciting opportunity to gain a competitive advantage.

  • AI scoring vs human scoring for language tests: What's the difference?

    Reading time: 6 minutes

    When entering the world of language proficiency tests, test takers are often faced with a dilemma: Should they opt for tests scored by humans or those assessed by artificial intelligence (AI)? The choice might seem trivial at first, but understanding the differences between AI scoring and human language test scoring can significantly impact preparation strategy and, ultimately, determine test outcomes.

    The human touch in language proficiency testing and scoring

    Historically, language tests have been scored by human assessors. This method leverages the nuanced understanding that humans have of language, including idiomatic expressions, cultural references, and the subtleties of tone and writing style. Human scorers can appreciate the creative and original use of language, potentially rewarding test takers for flair and originality in their answers. They are particularly effective at evaluating progress or achievement tests, which are designed to assess a student's language knowledge after completing a particular chapter or unit, or at the end of a course, reflecting how well the learner is performing in their language studies.

    One significant difference between human and AI scoring is how they handle context. Human scorers can understand the significance and implications of a particular word or phrase in a given context, while AI algorithms rely on predetermined rules and datasets.

    Human scorers can also adapt and learn: they adjust their judgments as they encounter new information, which contributes significantly to the effectiveness of scoring in language tests.

    Advantages:

    • Nuanced understanding: Human scorers are adept at interpreting the complexities and nuances of language that AI might miss.
    • Contextual flexibility: Humans can consider context beyond the written or spoken word, understanding cultural and situational implications.

    Disadvantages:

    • Subjectivity and inconsistency: Despite rigorous training, human-based scoring can introduce a level of subjectivity and variability, potentially affecting the fairness and reliability of scores.
    • Time and resource intensive: Human-based scoring is labor-intensive and time-consuming, often resulting in longer waiting times for results.
    • Human bias: Assessors, despite being highly trained and experienced, bring their own perspectives, preferences and preconceptions into the grading process. This can lead to variability in scoring, where two equally competent test takers might receive different scores based on the scorer's subjective judgment.

    The rise of AI in language test scoring

    With advancements in technology, AI-based scoring systems have started to play a significant role in language assessment. These systems utilize algorithms and natural language processing (NLP) techniques to evaluate test responses. AI scoring promises objectivity and efficiency, offering a standardized way to assess language proficiency.
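    As a rough illustration of what it can mean for an algorithm to evaluate a response, the sketch below scores a written answer from a few surface features combined with fixed weights. The features, weights and 0-100 scale are invented for illustration; real AI scoring systems rely on trained NLP models and far richer evidence than this.

    ```python
    # Toy illustration only: score a written response from simple surface features.
    # Real automated scoring uses trained models, not hand-picked weights like these.
    import re

    def extract_features(response: str) -> dict:
        words = re.findall(r"[A-Za-z']+", response.lower())
        sentences = [s for s in re.split(r"[.!?]+", response) if s.strip()]
        return {
            "length": len(words),                                        # fluency proxy
            "lexical_diversity": len(set(words)) / max(len(words), 1),   # vocabulary range proxy
            "avg_sentence_length": len(words) / max(len(sentences), 1),  # complexity proxy
        }

    def score(response: str) -> float:
        f = extract_features(response)
        raw = (0.4 * min(f["length"] / 150, 1.0)                   # reward adequate length, capped
               + 0.4 * f["lexical_diversity"]                      # reward varied vocabulary
               + 0.2 * min(f["avg_sentence_length"] / 20, 1.0))    # reward longer sentences, capped
        return round(100 * raw, 1)

    print(score("The hotel was comfortable and the staff were friendly and helpful."))
    ```

    Because the same function is applied to every response, the result is instant and perfectly consistent, which reflects the advantages listed below; equally, it has no sense of meaning, nuance or creativity, which reflects the disadvantages.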

    Advantages:

    • Consistency: AI scoring systems provide a consistent scoring method, applying the same criteria across all test takers, thereby reducing the potential for bias.
    • Speed: AI can process and score tests much faster than human scorers can, leading to quicker results turnaround.
    • Great for more nervous test takers: Not everyone likes having to take a test in front of a person, so AI scoring removes that extra stress.

    Disadvantages:

    • Lack of nuance recognition: AI may not fully understand subtle nuances, creativity, or complex structures in language the way a human scorer can.
    • Dependence on data: The effectiveness of AI scoring is heavily reliant on the data it has been trained on, which can limit its ability to interpret less common responses accurately.

    Making the choice

    When deciding between tests scored by humans or AI, consider the following factors:

    • Your strengths: If you have a creative flair and excel at expressing original thoughts, human-scored tests might appreciate your unique approach more. Conversely, if you excel in structured language use and clear, concise expression, AI-scored tests could work to your advantage.
    • Your goals: Consider why you're taking the test. Some organizations might prefer one scoring method over the other, so it's worth investigating their preferences.
    • Turnaround time: If you're on a tight schedule, the quicker results of AI-scored tests might be beneficial.

    Ultimately, both scoring methods aim to measure and assess language proficiency accurately. The key is understanding how each approach aligns with your personal strengths and goals.

    The bias factor in language testing

    An often-discussed concern in both AI and human language test scoring is the issue of bias. With AI scoring, biases can be ingrained in the algorithms through the data they are trained on, but a well-designed system can reduce this bias and provide fairer scoring.

    Conversely, human scorers, despite their best efforts to remain objective, bring their own subconscious biases to the evaluation process. These biases might be related to a test taker's accent, dialect, or even the content of their responses, which could subtly influence the scorer's perceptions and judgments. Efforts are continually made to mitigate these biases in both approaches to ensure a fair and equitable assessment for all test takers.

    Preparing for success in foreign language proficiency tests

    Regardless of the scoring method, thorough preparation remains, of course, crucial. Familiarize yourself with the test format, practice under timed conditions, and seek feedback on your performance, whether from teachers, peers, or through self-assessment tools.

    The distinction between AI and human scoring in language tests continues to blur, with many exams now incorporating a mix of both to leverage their respective strengths. Understanding and interpreting written language remains essential in preparing for language proficiency tests, especially reading tests. By understanding these differences, test takers can better prepare for their exams, setting themselves up for the best possible outcome.

    Will AI replace human-marked tests?

    The question of whether AI will replace human markers in language tests is complex and multifaceted. On one hand, the efficiency, consistency and scalability of AI scoring systems present a compelling case for their increased utilization. These systems can process vast numbers of tests in a fraction of the time it takes human markers, providing quick feedback that is invaluable in educational settings. On the other hand, the nuanced understanding, contextual knowledge, flexibility, and ability to appreciate the subtleties of language that human markers bring to the table are qualities that AI has yet to fully replicate.

    Both AI and human-based scoring aim to accurately assess language proficiency levels, such as those defined by the Common European Framework of Reference for Languages or the Global Scale of English, where a level like CEFR C2 or GSE 85-90 indicates that a learner can understand virtually everything they hear or read and express themselves very fluently and precisely.

    The integration of AI in language testing is less about replacement and more about complementing and enhancing the existing processes. AI can handle the objective, clear-cut aspects of language testing, freeing markers to focus on the more subjective, nuanced responses that require a human touch. This hybrid approach could lead to a more robust, efficient and fair assessment system, leveraging the strengths of both humans and AI.

    Future developments in AI technology and machine learning may narrow the gap between AI and human grading capabilities. However, the ethical considerations, such as ensuring fairness and addressing bias, along with the desire to maintain a human element in education, suggest that a balanced approach will persist. In conclusion, while AI will increasingly play a significant role in language testing, it is unlikely to completely replace markers. Instead, the future lies in finding the optimal synergy between technological advancements and human judgment to enhance the fairness, accuracy and efficiency of language proficiency assessments.

    Tests to let your language skills shine through

    Explore ÃÛÌÒapp's innovative language testing solutions today and discover how we are blending the best of AI technology and our own expertise to offer you reliable, fair and efficient language proficiency assessments. We are committed to offering reliable and credible proficiency tests, ensuring that our certifications are recognized for job applications, university admissions, citizenship applications, and by employers worldwide. Whether you're gearing up for academic, professional, or personal success, our tests are designed to meet your diverse needs and help unlock your full potential.

    Take the next step in your language learning journey with ÃÛÌÒapp and experience the difference that a meticulously crafted test can make.