AI and Why It’s Impossible to Learn or Understand Language

Intelligence, whether human or artificial, cannot be determined purely through rational or quantitative measures. It also involves interpreting context, nuance and metaphor, the unpredictable elements of human thought. This piece examines how these aspects affect learning and understanding a language, and the challenges of participating in a community, especially as AI becomes more widely used for teaching and learning.

Author: Prof John Traxler, UNESCO Chair, Commonwealth of Learning Chair and Academic Director of the Avallain Lab

St. Gallen, October 29, 2025 – In this piece, we argue that it is impossible to learn, understand or discuss what anyone else says or writes at anything beyond the simplest, most specific and concrete level. This perhaps applies even to people with a shared mother tongue, making conversation, learning, translating and reasoning more difficult than they initially seem, especially when they involve artificial intelligence and computers.

The discussion is, in fact, divided into two halves: the first deals with language as idiom, and the second with language for reasoning. In other words, we are discussing language and learning, and language learning, and thus intelligence, artificial and otherwise.

Language as Idiom

AI and the Turing Test

Artificial intelligence is the ongoing effort to develop digital technologies that mimic human intelligence, despite the undefined nature of human intelligence. It has been through various incarnations, such as expert systems and neural nets, and now generative AI, or GenAI, which finally seems to deliver on the promises of 40 or 50 years ago.

Over all this time, there has, however, been a test, the Turing Test, to evaluate AI’s apparent intelligence, revealing insights into both intelligence and language. GenAI, the current incarnation, is in effect pattern matching with a conversational interface, a sophisticated form of autocomplete, completing responses based on the world’s vast digital resources. However, because of this, it can produce ‘hallucinations’, responses that are plausible but wrong, and can also perpetuate harm, bias or misinformation.
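
The ‘sophisticated autocomplete’ characterisation can be made concrete with a toy sketch. The deliberately naive bigram model below (invented for illustration; real large language models are vastly more elaborate) simply emits, at each step, the word that most often followed the previous word in its tiny training corpus:

```python
from collections import Counter, defaultdict

# A deliberately tiny, hypothetical training corpus.
corpus = "the cat sat on the mat and the cat ate and the dog sat on the rug".split()

# Count which word follows which: the whole 'model' is a frequency table.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def autocomplete(word, length=5):
    """Greedily emit the most frequent continuation: pattern matching, not understanding."""
    out = [word]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))  # → the cat sat on the cat
```

Note what it produces: a fluent-looking sentence that is nonetheless wrong, a miniature ‘hallucination’. Scale the frequency table up to the world’s digital resources and add a conversational interface, and the family resemblance to GenAI becomes clear.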

The Turing Test imagines a human, the ‘tester’, able to interact independently with another human and a computer. If the tester cannot tell whether they are interacting with the human or the computer, then the computer can be said to be ‘intelligent’; it passes the Turing Test.

Expanding the Boundaries of Intelligence

We should, however, consider how this would work with a seemingly intelligent mammal, say a chimpanzee, conversing in American Sign Language, or an extraterrestrial, say ET, the visiting alien scientist. The film Arrival illustrates the possible superiority of other intelligences, their languages and their differences. These, too, might manifest ‘intelligence’ and challenge ours, widening our notions of intelligence and thus what we might expect from AI.

There is an alternative model of what is going on with intelligence, specifically with conversation, translation and learning; the Chinese Room. This thought experiment imagines a person passing words or perhaps phrases or sentences, called the ‘tokens’, into the Chinese Room. An operative looks them up in a large dictionary or some similar reference book or ‘look-up table’. The operative passes the answer or the translation or the learning out as another ‘token’, there seeming to be no intelligence or consciousness involved, only what is in effect an automaton.
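
The Chinese Room’s mechanics fit in a few lines of code, which is rather the point. The sketch below is a toy illustration, with an invented look-up table of a few entries; the thought experiment imagines one large enough to cover a whole language:

```python
# A toy Chinese Room: the 'operative' mechanically matches incoming tokens
# against a look-up table. All entries are invented for illustration.
LOOK_UP_TABLE = {
    "你好": "hello",
    "谢谢": "thank you",
    "再见": "goodbye",
}

def chinese_room(token):
    """Pass a token in; pass a token out. No understanding is involved anywhere."""
    return LOOK_UP_TABLE.get(token, "???")  # the automaton shrugs at unknown tokens

print(chinese_room("你好"))  # → hello
print(chinese_room("猫"))    # → ??? (nothing in the table, nothing to say)
```

Whatever comes out, nothing inside the function ‘knows’ Chinese, or English; it is, as the text says, in effect an automaton.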

However, it does raise questions about the operative; do they have any taste or ethics? Could they or should they be subject to Asimov’s Three Laws of Robotics? Is such an operative even possible? Is the operative merely another Chinese Room inside the Chinese Room or a way of disguising an algorithm as a human operative? Would the Chinese Room pass the Turing Test?

Human Understanding and the Limits of Machine Interpretation

Incidentally, in the film The Enigma of Kaspar Hauser, about a foundling, a boy with no past, set in Germany in the early nineteenth century, the eponymous hero is asked, ‘How to discern the villager who always tells the truth from the villager who always lies?’. Instead of applying deductive logic, Kaspar offers a simple, childlike answer from his unique perspective: he would ask the villager, ‘Are you a tree frog?’. His innocence allows him to see things differently, and his absurd question and approach might sidestep the issue of formal logic and thus rationality and intelligence. The Turing ‘tester’ just asks, ‘Is it raining tree frogs?’, revealing how a machine may struggle to interpret common sense and the outside world in the way humans do. 

What is relevant here, however, is not a generic human ‘tester’ but a human learner wanting to be taught. Could this learner tell the difference between a human teacher and an artificial one, GenAI in this case? It depends, of course, on the learner’s expectations of pedagogy. If the learner expected a didactic or transmissive pedagogy, GenAI could give a very competent lecture, essay, summary or slide deck, ‘hallucinations’ notwithstanding.

If, on the other hand, the learner expected something discursive, something that engaged with them personally and individually, building on what they already knew, correcting their misunderstandings, using a tone and terms familiar to them, then ‘raw’ GenAI would struggle. This is even before considering the added dimension of emotional intelligence, meaning recognising when the learner is tired, frustrated, bored, upset or in need of a comfort break or some social support.

Language for Reasoning

Early AI and Challenges in Language Learning

Let’s draw on two early efforts. PLATO, launched in 1960, was a computer-based learning system using ‘self-paced learning, small cycles of feedback and recorded traces of incremental progress’ (Cope & Kalantzis, 2023:4), showing that simple didactic teaching was possible, however crudely, very early on. Then, in about 1966, ELIZA, one of the earliest natural language processing programs, provided non-directive psychotherapy, that is, psychotherapy led by the client, not by the therapist. Psychotherapy led by the client’s problems or constructs might have translated into non-directive, learner-centred pedagogy: heutagogy, perhaps, or self-directed learning.
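
ELIZA’s actual DOCTOR script was richer than this, but its essential mechanism, keyword spotting plus pronoun reflection with no model of meaning at all, can be sketched in a few lines; the rules below are invented for illustration:

```python
import re

# A few invented rules in the spirit of ELIZA's DOCTOR script: spot a keyword
# pattern, 'reflect' the client's pronouns, and echo the fragment back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
]

def reflect(fragment):
    """Swap first-person words for second-person ones, word by word."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def eliza(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the non-directive default: the client leads

print(eliza("I feel lost in my studies"))  # → Why do you feel lost in your studies?
```

The non-directive default response is what made ELIZA’s therapy ‘led by the client’: when no rule fires, it simply invites the client to continue, which is also why the approach maps so naturally onto learner-centred pedagogy.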

So, how does this relate to learning a language? Curiously, GenAI is based on so-called large language models, and the medium for exploring intelligence seems to be conversation, certainly not any IQ test!

Learning a language, even our own mother tongue, from any kind of computer is likely to be tricky, firstly because computers lack body language, hand gestures and facial expressions.

Plurilingual Societies 

Then, in plurilingual societies such as South Africa, or indeed most modern societies, we have code switching, the switching between languages, even within individual sentences. There are also potential problems with language registers, ranging from frozen through formal, consultative and casual to intimate. In a monocultural society, these should be straightforward. In multicultural societies, however, characterised by different norms, speakers may gravitate toward the more formal or the less formal; there can be uncertainty, confusion and upset. These are a kind of ‘cultural dimension’ that we will explore later, suggesting there is no easy correspondence between languages.

Euphemisms, Neologisms and Internet Language

Then we have euphemisms, puns and double entendre, not meaning what they say, and hyperbole and sarcasm, sometimes meaning the opposite of what they say. Furthermore, we have humour in general and black humour in particular; but why ‘black’? What is it about blackness? We have neologisms, new words from nowhere, sometimes only fleeting, occasionally more durable; skeuomorphs, new meanings from old words; and acronyms, especially those from the internet and World Wide Web. All these pose problems for learners, who need to understand the cultural context and current culture. Similar problems arise for GenAI, especially since it always lags behind human usage and skims across the surface, missing human nuances.

Community Languages and Cultural Assimilation

We also have subversive, perhaps rebellious, perhaps secretive languages. Polari, for example, was the one-time argot of the London gay theatre community, derived partly from Romani. Cockney rhyming slang, historically from London’s East End, is based on a strict mechanism which, for example, gets you from ‘hat’ via ‘tit-for-tat’ to ‘titfer’, or from ‘look’ via ‘butcher’s hook’ to ‘butchers’; hence, ‘can I have a butchers at your titfer?’.

There is also back slang, which forms a vocabulary from words spelt backwards; in Scotland, ‘Senga’ for Agnes. None of these examples is necessarily accessible, inclusive or open. Two textspeak examples make the same point: Arabish, the messaging language using a European keyboard for Arabic sounds, and Mxlish, the one-time language of South African teenagers using the messaging platform MXit, both with enormous footprints.

Each of these, in its own way, is the property of a particular community or culture, perhaps waiting to be appropriated, ridiculed, sanitised or ignored by others, and eventually, perhaps, to be ‘taught’, the kiss of death.

In fact, we could argue that learning these languages is an integral part of acceptance and assimilation into a defined community, in just the same way as talking about differential calculus and only then talking about integral calculus is part of acceptance and assimilation into the community of mathematicians. Our point is that displaying intelligence, acquiring language, being part of a culture, having a conversation and learning a subject are all very closely intertwined and necessarily complex for strangers or chatbots to join in with.

Metaphor and Abstraction

Then we get on to metaphor. In a quarter of an hour of a television drama, I heard ‘black people’, ‘landmark decision’, ‘high art’ and ‘wild goose chase’, none of which was literally true. I listen to ‘The Freewheelin’ Bob Dylan’, safe in the knowledge that Bob Dylan is not a bicycle. I worry about ‘raising money’, knowing this will not involve lifting the money upwards. ‘The Lord is my shepherd’, in the Psalms, does not tell me that I am a sheep. We also get bombarded with the language inherited from Aristotle, of ‘correspondences’, ‘the ship of state’, ‘the king of the jungle’ and ‘the body politic’, whilst thinking the car needs a wash, even though, being inanimate, it has no needs. As a university professor, I have two chairs, neither of which I can actually sit upon, whilst on the news, I hear that the office of the president has been tarnished, though I also hear it has just been redecorated. Confusing, isn’t it?

Parables, such as the ‘Good Samaritan’, from the Gospel of Luke, and the ‘seed falling on stony ground’, from the Gospel of Matthew, are, in fact, just extended metaphors delivered in the hope that the meaning could be inferred by people familiar with the cultural context of their origin. People refer to the Prodigal Son, from the Gospel of Luke, with no idea of the meaning of prodigality. However, they are perhaps meaningless to other cultures, those remote from historical Palestine. The same is true of many fables, such as ‘The Hare and the Tortoise’.

However, as all are ripped out of their cultural or historical context, the moral point is needed now to explain the parable or fable, rather than the other way round, as originally intended; nowadays, sowers, samaritans, hares and tortoises are no longer everyday items. They are, in fact, clichés, remarks bereft of meaning, another challenge for language learners and large language models. 

While metaphor takes words from the concrete to the abstract, the use of ‘literally’ seems to drag them back again, so perhaps Bob Dylan is literally freewheeling, and money is literally being raised. ‘Literally’ is, however, sometimes used for emphasis and sometimes just used weirdly. Yesterday, I heard a podcaster talking about being ‘literally gobsmacked’. Did he mean he had been smacked on the gob? Actually? Literally? As someone who is autistic and interprets language largely concretely, I find this confusion, uncertainty and ambiguity a daily struggle.

Once we get away from anything as simple and concrete as ‘the cat sat on the mat’ and approach the abstract of love, democracy, freedom, race, virtue and truth, we enter our own small community where some understanding is possible inside, but little is possible outside. These concepts of love, race, democracy, freedom, virtue and truth may all have very different meanings among, say, Marxists, Buddhists, Stoics, Confucians, feminists, humanists and Calvinists, unlike cats and mats. So how can we learn about them and converse about them? And how can our large language models ever engage with them meaningfully, except in a manner reminiscent of the Chinese Room model?

Conclusion

So, the conclusion, so far: if it is only just possible to have a meaningful dialogue across a shared culture and mother tongue, even at the level of simple description and action, is there much hope of having one with computers?

Perhaps, this reinforces the importance of keeping humans at the centre of teaching and learning. AI, no matter how sophisticated, cannot keep up with the diversity, transience and cultural complexity of language. Responsible human mediation remains essential, and we must recognise that computers will never be fast enough or flexible enough. Owning up to these limits is an ethical response in itself, not just from Avallain but across the educational AI sector and its clients. 

However, safeguards like Avallain Intelligence provide a first line of defence. This strategy for ethically and safely implementing AI in education aims to put the human element at the centre. While it cannot solve all the challenges of the evolution of language, ethics or learning, it establishes a framework to ensure that technology remains guided by human understanding, creativity and judgement, enhancing rather than replacing human agency. 

This pair of blogs is about language: about how understanding language is tricky for humans and even trickier for computers; it is about the medium, not the message. Understanding this might not stop people from saying or promoting nasty, harmful things, but it might perhaps prevent them from being misunderstood.


About Avallain

For more than two decades, Avallain has enabled publishers, institutions and educators to create and deliver world-class digital education products and programmes. Our award-winning solutions include Avallain Author, an AI-powered authoring tool, Avallain Magnet, a peerless LMS with integrated AI, and TeacherMatic, a ready-to-use AI toolkit created for and refined by educators.

Our technology meets the highest standards with accessibility and human-centred design at its core. Through Avallain Intelligence, our framework for the responsible use of AI in education, we empower our clients to unlock AI’s full potential, applied ethically and safely. Avallain is ISO/IEC 27001:2022 and SOC 2 Type 2 certified and a participant in the United Nations Global Compact.

Find out more at avallain.com


Contact:

Daniel Seuling

VP Client Relations & Marketing

dseuling@avallain.com

Develop Empowered Communicative Learners with Safe and Accurate AI Tools

The latest Language Teaching Takeoff Webinar showcased four powerful TeacherMatic Language Teaching Edition generators that can transform speaking lessons and foster confident, capable communicators.

London, October 2025 – In last week’s webinar, ‘Enhancing Speaking Lessons with CEFR-Aligned Effective Generators’, we explored how teachers can use safe and accurate AI tools to help students engage, express ideas, think critically and build confidence in speaking. Pedagogy expert and award-winning educator Nik Peachey demonstrated how the generators can be filtered by skill, selecting ‘Speaking’ to highlight key tools suitable for developing speaking activities. He then guided participants through four effective generators: Dialogue Creator, Differing Opinions, Debate and Discussion Topics.

Moderated by Giada Brisotto, Senior Marketing and Sales Operations Manager at Avallain, the session illustrated how these AI generators can transform lessons into interactive, thought-provoking experiences.

Formal vs Informal Speaking Practice

As learners develop their speaking skills, it’s essential to help them adapt to diverse speaking contexts, which is key to building confident communicators. Nik first highlighted the importance of formal and informal practice with the Discussion Topics and Debate generators.

The Discussion Topics generator creates stimulating, level-appropriate conversations. It produces meaningful and engaging discussions for learners at any level, whether A1 or C1. Teachers can include optional supporting materials to tailor activities to students’ current knowledge, making them relevant and interactive.

For more structured interactions, the Debate generator creates authentic, formal debate scenarios. Students can practise precise language and persuasive techniques while gaining confidence in presenting their ideas in a formal setting.

Combine Reading and Speaking

Building on the effective Debate and Discussion Topics generators, which enable teachers to create meaningful, level-appropriate speaking activities, Nik Peachey then introduced and demonstrated the Differing Opinions generator.

Designed to bridge reading and speaking, this generator enables teachers to create activities encouraging learners to analyse viewpoints, express ideas and engage in structured, reflective discussions. By producing balanced arguments on any chosen topic, it empowers students to develop both reasoning and communication skills, leading to richer classroom interactions and deeper engagement with language.

Developing Confident Opinions

The Differing Opinions generator allows teachers to generate multiple perspectives on a single topic, which students can read, compare and respond to. This creates opportunities for learners to evaluate ideas, express agreement or disagreement and justify their opinions using targeted language. The exercise builds confidence in articulating thoughts and helps students develop persuasive and analytical language skills in a supportive classroom setting.

Task-Based Learning

Nik demonstrated how the generator can be integrated into task-based learning. Learners can read a set of opinions, discuss them in groups, record their responses and later reflect on how they expressed themselves. This process reinforces fluency, encourages critical thinking and helps students refine their communication skills through repetition and reflection. Teachers can regenerate or adapt results to better suit different learning levels, and keep activities dynamic and relevant.

Context-Based Dialogue

Continuing the focus on developing authentic speaking skills, Nik introduced the Dialogue Creator generator. Designed to imitate real communication, it allows teachers to produce natural conversations based on specific contexts, vocabulary and CEFR levels. By tailoring prompts and length, educators can generate dialogues that mirror realistic scenarios, helping learners practise fluency, pronunciation and interaction in a safe environment.

Nik discussed how to get the best out of this generator by using it for controlled speaking practice, exploring nuances in language use, building dialogues and producing localised results.

Controlled Practice

The Dialogue Creator produces ready-to-use scripts that help students refine pronunciation, rhythm and natural flow, gradually gaining confidence in real communication. Teachers can also generate listening versions so learners can identify intonation and stress patterns within authentic exchanges.

Nuances in Speaking the Language

Learners can bring these dialogues to life through dramatic or calm readings, encouraging expression and emotional depth. This approach helps students recognise subtle differences in tone, register and emphasis, developing awareness of how meaning shifts through delivery.

Dialogue-Build Exercises

To make activities more interactive, Nik suggested adapting generated dialogues into dialogue-building exercises by removing selected words or phrases. This technique encourages learners to recall vocabulary, complete sentences in context and reinforce language retention through repetition.

Produce Localised Results

Adding supporting materials or regional references allows teachers to generate localised dialogues that reflect cultural and linguistic nuances. These realistic contexts make lessons more relevant and help learners connect language with authentic, everyday communication.

Foster Confident, Capable Communicators in Your Classroom 

Speaking is one of the most rewarding aspects of learning a language for both the student and teacher. Nik’s demonstration of the Discussion Topics, Debate, Differing Opinions and Dialogue Creator generators showcases how the TeacherMatic Language Teaching Edition provides teachers with reliable, CEFR-aligned tools. By streamlining the creation of tailored speaking activities, these AI tools allow educators to focus on facilitating learning, while students develop into articulate, confident and critically engaged communicators.

Explore the TeacherMatic Language Teaching Edition

The TeacherMatic Language Teaching Edition provides a comprehensive suite of tools that empower educators to design, create and deliver high-quality, differentiated speaking lessons efficiently. It uses CEFR-aligned generators to support meaningful, engaging practice across diverse teaching contexts.

Next in the Webinar Series

Beyond the Classroom: Empowering Every Role in Language Education

 🗓 Thursday, 13th November
🕛 12:00 – 12:30 GMT | 13:00 – 13:30 CET

In the next Language Teaching Takeoff webinar, discover generators specifically designed for leaders and administrators. Learn how to streamline planning, support staff and maintain high-quality CEFR-aligned language programmes across your institution.



Is Learning Analytics More Promise Than Practice?

Learning analytics has been praised for its potential to improve teaching and learning, but can insights from virtual learning environments and other institutional systems genuinely support students, lecturers and educational managers in everyday practice? This piece examines the current evidence, implementation challenges and transferability limits, helping readers understand where learning analytics can make a real difference and where its promise may exceed its current impact.

Author: Prof John Traxler, UNESCO Chair, Commonwealth of Learning Chair and Academic Director of the Avallain Lab

St. Gallen, September 26, 2025 – Learning analytics has a long history and has been the subject of extensive research. It seems to have considerable potential, but what is it, and does it have any practical value? 

The following account is based on the research literature and structured conversations with leading researchers, and it attempts to answer these questions.

What is Learning Analytics?

Learning analytics (LA) is, in broad terms, the notion that as students increasingly learn with digital technologies and as these digital technologies are capable of capturing large amounts of data from large numbers of students, this might enable educators and education systems to be more effective or efficient. 

According to some leading researchers, learning analytics is ‘the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs’ (Viberg, Hatakka, Bälter & Mavroudi, 2018) and ‘the analysis and representation of data about learners in order to improve learning …’ (Clow, 2013).

As with much freely and cheaply available data, however, we must always remember: ‘Just because it’s meaningful, doesn’t mean you can measure it; just because you can measure it, doesn’t mean it’s meaningful!’ And we should ask ourselves, if it is both meaningful and measurable, who benefits and in what ways? Is it learners, perhaps through improved attitudes, improved subject knowledge, or even improved understanding of their own learning? Or is it teachers and lecturers? Or is it educational managers and administrators, each with very different values, priorities and targets?

Additionally, from another leading researcher, Professor Rebecca Ferguson of the UK Open University, giving the keynote at the Learning Analytics Summer Institute in Singapore in 2023, there is this summary: ‘… while we’ve carved a fantastic research domain for a large number of academics and a growing number of researchers globally, we have done less well at tackling improvement of the quality of learners’ lives by making the learning experience something that is less institutional, less course based, less focused on our system of education, and more focused on the experience of learners.’

So there are some doubts within the learning analytics research community.

How Does Learning Analytics Work?

OK, so how does learning analytics work? To start with the basics, there are two dominant techniques. Firstly, predictive modelling, ‘a mathematical model is developed, which produces estimates of likely outcomes, which are then used to inform interventions designed to improve those outcomes … estimating how likely it is that individual students will complete a course, and using those estimates to target support to students to improve the completion rate.’ (Clow, 2013:7).
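
The pipeline Clow describes, estimate each student’s likelihood of completion, then use the estimates to target support, can be sketched in miniature. The example below is a hypothetical illustration, not any real institution’s model: a tiny logistic regression fitted by plain gradient descent on invented VLE engagement figures.

```python
import math

# Invented VLE engagement data: (logins per week, assignments submitted) -> completed?
students = [
    ((1, 0), 0), ((2, 1), 0), ((8, 4), 1), ((9, 5), 1),
    ((3, 1), 0), ((7, 4), 1), ((6, 3), 1), ((2, 0), 0),
]

def predict(w, x):
    """Logistic model: estimated probability of completing the course."""
    z = w[0] + w[1] * x[0] + w[2] * x[1]
    return 1 / (1 + math.exp(-z))

# Fit by plain gradient descent on the log-loss.
w = [0.0, 0.0, 0.0]
for _ in range(2000):
    for x, y in students:
        error = predict(w, x) - y
        w[0] -= 0.1 * error
        w[1] -= 0.1 * error * x[0]
        w[2] -= 0.1 * error * x[1]

# The 'intervention' step: flag students whose estimated completion odds are low.
at_risk = [x for x, _ in students if predict(w, x) < 0.5]
print(at_risk)  # the low-engagement profiles are flagged for support
```

Real predictive learning analytics differ in scale and sophistication, but the shape is the same: a model, estimates of likely outcomes, and interventions informed by those estimates.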

Secondly, social network analysis (SNA), ‘the analysis of the connections between people in a social context. Individual people are nodes, and the connections between them are ties or links. A social network diagram, or sociogram, can be drawn in an online forum; the nodes might be the individual participants, and the ties might indicate replies by one participant to another’s post … interpreted simply by eye (for example, you can see whether a network has lots of links, or whether there are lots of nodes with few links).’ (Clow, 2013:11). 
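
Clow’s forum example translates directly into code. Assuming an invented forum where each reply is a tie from one participant (node) to another, counting the ties touching each node is enough to read the sociogram ‘by eye’:

```python
from collections import Counter

# Invented forum replies: each tie links the replier to the participant replied to.
replies = [
    ("amira", "ben"), ("ben", "amira"), ("chen", "amira"),
    ("dana", "amira"), ("chen", "ben"), ("dana", "chen"),
]

# Count the ties touching each node: a crude degree measure of the sociogram.
degree = Counter()
for src, dst in replies:
    degree[src] += 1
    degree[dst] += 1

print(degree.most_common())  # 'amira' sits at the hub of this sociogram
```

Even this crude count reveals the structure Clow describes: some nodes have lots of links and others few, which is exactly what an analyst sees when interpreting a sociogram by eye.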

In practice, this means that the data comes from the main academic digital workhorse, the virtual learning environment (VLE), aka the learning management system (LMS), and therein lies the problem, which we will discuss later.

Investigating Learning Analytics

Typical research questions that academics have been tackling include whether learning analytics:

  • improve learning outcomes, 
  • improve learning support and teaching, 
  • are taken up and used widely, including deployment at scale and
  • are used in an ethical way. (Viberg, Hatakka, Bälter & Mavroudi, 2018)

More recent systematic reviews have confirmed these trends. For example, Sghir, Adadi & Lahmer (2023) surveyed a decade of predictive learning analytics and concluded that although machine and deep learning approaches have become more sophisticated, they rarely translate into significant pedagogical impact. Likewise, a 2023 systematic review of learning analytics dashboards found that while dashboards are increasingly designed to support learning rather than just monitoring, their actual effects on student achievement, motivation and engagement remain limited (Kaliisa, Misiejuk, López-Pernas, Khalil, & Saqr, 2024). These findings echo the persistent ‘promise versus practice’ gap.

Typical answers, distilled from systematic reviews of the research literature, include:

‘The proposition with most evidence (35%) in LA is that LA improve learning support and teaching in higher education.  

There is little evidence in terms of improving students’ learning outcomes. Only 9% (23 papers out of all the 252 reviewed studies) present evidence in this respect. 

… there is even less evidence for the third proposition. In only 6% of the papers, LA are taken up and used widely. This suggests that LA research has so far been rather uncertain about this proposition.

… our results reveal that 18% of the research studies even mention ‘ethics’ or ‘privacy’ … This is a rather small number considering that LA research, at least its empirical strand, should seriously approach the relevant ethics.’

And, unsurprisingly, ‘… there is considerable scope for improving the evidence base for learning analytics …’ (Ferguson & Clow, 2017). 

Findings on Learning Analytics Outcomes

However, ‘the studies’ results that provide some evidence in improvements of learning outcomes focus mainly on three areas: i) knowledge acquisition, including improved assessment marks and better grades, ii) skill development and iii) cognitive gains.’ (ibid)

These authors (ibid: p108) also failed to spot affective gains, meaning learners did not come to like learning any more than before, or metacognitive gains, meaning learners did not become any better at learning; they only gained more knowledge or understood the subject better. More recent evidence (Kaliisa, Misiejuk, López-Pernas, Khalil & Saqr, 2024) supports this view: a systematic review of 38 empirical studies found that learning analytics dashboards showed at best small and inconsistent effects on student motivation, participation and achievement. This underscores that despite ongoing technological advances, affective and metacognitive benefits remain elusive.

The Practical Potential of Learning Analytics

However, the point of this blog is to ask, without going needlessly into detail, whether learning analytics has something to offer routine academic practice across educational organisations and institutions. This means asking whether the data harvested in practice from a VLE or LMS can be of practical use. The answer will depend on context and concrete specifics, but generally, there is a range of issues.

Firstly, students in their different universities, colleges or schools interact with a variety of other institutional systems, including:

  • Plagiarism detection, attendance and access monitoring, library systems, CAA (computer-aided assessment), lecture capture, e-portfolios, student satisfaction surveys and student enrolment databases (courses, marks, etc, plus data on postcode, disability, gender, ethnicity, qualifications, etc.).
  • Plus, search engines, external content (YouTube, websites, journals, Wikipedia, blogs, etc.) and external communities (TikTok, Instagram, Facebook, Quora, WhatsApp, X, etc.).

In order to get a complete picture of student activity, data would have to be harvested, cleaned and correlated from all these different sources, and permission would have to be obtained from each of the institutional data owners. If institutional IT systems were stable enough for long enough, this might, in theory, be possible, albeit prohibitively expensive.
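To make the correlation step concrete, here is a minimal sketch in Python. All the source names and record shapes are illustrative assumptions, not real institutional exports; the point is only that each system keys its data differently and must be merged per student before any analysis is possible.

```python
from collections import defaultdict

# Hypothetical toy "exports" from three separate institutional systems
# (VLE, library, attendance monitoring), each structured differently.
vle_events = [("s001", "quiz_attempt"), ("s002", "forum_post")]
library_loans = [{"student": "s001", "item": "intro-stats"}]
attendance = {"s001": 9, "s002": 4}  # sessions attended

def correlate(vle, loans, attend):
    """Merge per-student activity from heterogeneous sources into one record.

    In practice, each real system would need its own cleaning step and the
    permission of its data owner; this sketch shows only the merge itself.
    """
    merged = defaultdict(lambda: {"vle_events": 0, "loans": 0, "attendance": 0})
    for student, _event in vle:
        merged[student]["vle_events"] += 1
    for loan in loans:
        merged[loan["student"]]["loans"] += 1
    for student, sessions in attend.items():
        merged[student]["attendance"] = sessions
    return dict(merged)

profiles = correlate(vle_events, library_loans, attendance)
print(profiles["s001"])  # {'vle_events': 1, 'loans': 1, 'attendance': 9}
```

Even in this toy form, the design problem is visible: every extra source adds another keying convention to reconcile, which is why the work does not transfer between institutions.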

However, the fact that each institution has its own IT infrastructure, set-up and systems means that none of the work is transferable or generalisable; each institution would have to start from scratch. Recent case studies from UK higher education (Dixon, Howe & Richter, 2025) confirm this: although analytics can provide insights into teaching and assessment, challenges around data quality, integration and stakeholder trust often limit real-world adoption. In other words, the institutional ecosystems in which LA must operate are highly fragmented, and this lack of transferability continues to be one of the field’s most pressing barriers.

Secondly, academics would need to factor in face-to-face learning, formal and informal, in the hope that it, too, would complete the picture, balancing students with a preference for face-to-face with those with a preference for the digital. Even those with a preference for the digital may prefer to engage with institutional systems as little as possible, using their own devices and networks, learning from personal contact, social media, websites, search engines, podcasts and now AI chatbots.

Final Reflections

As a footnote, this account touches only briefly on the ethical dimensions (Misiejuk, Samuelsen, Kaliisa, & Prinsloo, 2025). Yet recent scholarship increasingly emphasises that ethics cannot be treated as an afterthought. Studies have shown that less than half of published LA frameworks explicitly address privacy or ethics (Khalil, Prinsloo & Slade, 2022). Practical guidelines for institutions (Rets, Herodotou & Gillespie, 2023) stress the need for transparency, informed consent and giving learners agency over their data. 

More critical perspectives highlight the risk that analytics reinforce inequities or institutional agendas over student wellbeing, calling for ‘responsible learning analytics’ (Khalil, Prinsloo & Slade, 2023). Others argue for idiographic approaches, analytics tailored to individuals rather than groups, to mitigate risks of bias and overgeneralisation (Misiejuk, Samuelsen, Kaliisa & Prinsloo, 2025). Together, these developments show that ethics is now central to the future of learning analytics practice.

So perhaps it is unsurprising that learning analytics has made little practical headway in the mainstream of formal education. These challenges suggest that while learning analytics holds promise, its routine application across educational institutions remains limited and requires careful, context-sensitive planning to realise its potential. 

References

Clow, D. (2013). An overview of learning analytics. Teaching in Higher Education, 18(6), 683-695.

Dixon, N., Howe, R., & Richter, U. (2025). Exploring learning analytics practices and their benefits through the lens of three case studies in UK higher education. Research in Learning Technology, 33, 3127.

Ferguson, R., & Clow, D. (2017). Where is the evidence? A call to action for learning analytics. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (LAK ’17) (pp. 56-65). ACM.

Kaliisa, R., Misiejuk, K., López-Pernas, S., Khalil, M., & Saqr, M. (2024, March). Have learning analytics dashboards lived up to the hype? A systematic review of impact on students’ achievement, motivation, participation and attitude. In Proceedings of the 14th Learning Analytics and Knowledge Conference (pp. 295-304).

Khalil, M., Prinsloo, P., & Slade, S. (2022, March). A comparison of learning analytics frameworks: A systematic review. In LAK22: 12th International Learning Analytics and Knowledge Conference (pp. 152-163).

Khalil, M., Prinsloo, P., & Slade, S. (2023). Fairness, trust, transparency, equity, and responsibility in learning analytics. Journal of Learning Analytics, 10(1), 1-7.

Misiejuk, K., Samuelsen, J., Kaliisa, R., & Prinsloo, P. (2025). Idiographic learning analytics: Mapping of the ethical issues. Learning and Individual Differences, 117, 102599.

Rets, I., Herodotou, C., & Gillespie, A. (2023). Six Practical Recommendations Enabling Ethical Use of Predictive Learning Analytics in Distance Education. Journal of Learning Analytics, 10(1), 149-167.

Sghir, N., Adadi, A., & Lahmer, M. (2023). Recent advances in Predictive Learning Analytics: A decade systematic review (2012-2022). Education and Information Technologies, 28(7), 8299-8333.

Viberg, O., Hatakka, M., Bälter, O., & Mavroudi, A. (2018). The current landscape of learning analytics in higher education. Computers in Human Behavior, 89, 98-110.


About Avallain

At Avallain, we are on a mission to reshape the future of education through technology. We create customisable digital education solutions that empower educators and engage learners around the world. With a focus on accessibility and user-centred design, powered by AI and cutting-edge technology, we strive to make education engaging, effective and inclusive.

Find out more at avallain.com

About TeacherMatic

TeacherMatic, a part of the Avallain Group since 2024, is a ready-to-go AI toolkit for teachers that saves hours of lesson preparation by using scores of AI generators to create flexible lesson plans, worksheets, quizzes and more.

Find out more at teachermatic.com

_

Contact:

Daniel Seuling

VP Client Relations & Marketing

dseuling@avallain.com

Revisit the Language Teaching Takeoff Webinar Series: Featured Highlights and Insights

While taking a short summer break, we wanted to pause and review the best moments and most important insights from our Language Teaching Takeoff Webinar Series. If you missed an episode or want to revisit the practical tips and tools demonstrated in the TeacherMatic Language Teaching Edition, this blog highlights key takeaways and illustrates how a purpose-built AI supports language educators and enhances classroom practice.

Revisit the Language Teaching Takeoff Webinar Series: Featured Highlights and Insights

London, August 2025 – The Language Teaching Takeoff Webinar Series offers a practical look at the TeacherMatic Language Teaching Edition, a toolkit designed specifically for language educators. It’s more than a generic AI solution: every generator is built around the realities of classroom teaching, with a focus on saving time, enhancing creativity, maintaining pedagogical standards and ensuring the ethical and safe adoption of AI in language education. 

This edition of TeacherMatic can generate comprehensive lesson plans, adapt texts and tasks, create original content and quizzes, provide personalised feedback and more, all tailored to different CEFR levels. Each 30-minute session focuses on integrating AI meaningfully and responsibly, providing ideas, activities and workflows that make a real difference to teaching and learning.

The series has attracted over 300 educators across four sessions, underscoring the strong interest in practical, teacher-focused AI solutions.

Meet the Hosts

Moderated by Giada Brisotto, Senior Marketing and Sales Operations Manager at Avallain, and led by Nik Peachey, award-winning educator, author and edtech consultant, each webinar combines deep expertise with actionable guidance. 

‘These generators aren’t just text tools. They’re designed with real classroom needs in mind. You input your goals, level and theme, and the results are ready to use or refine.’ – Nik Peachey, Director of Pedagogy, PeacheyPublications

Save Time While Planning Quality Lessons

The first webinar in the series, ‘Elevate Your Lesson Planning’, explored how purpose-built AI can transform how teachers design lessons. One of the main insights from the session was the critical balance between efficiency and academic rigour. Nik demonstrated how the Lesson Plan generator enables educators to produce fully structured, CEFR-aligned lesson plans in just a few minutes. 

Key benefits highlighted in the session included:

  • CEFR-aligned outputs to ensure lessons meet recognised language standards.
  • Adaptable and editable plans that reflect the needs of individual classes.
  • Support for professional autonomy, giving teachers control instead of imposing rigid templates.
  • Support for core pedagogical models, including Communicative Language Teaching (CLT), Task-Based Learning (TBL), Presentation Practice Production (PPP), Lexical Approach and Test-Teach-Test.

The session emphasised that the real value of AI in education lies in targeted, purposeful support, rather than blanket automation. Starting with focused applications like lesson planning allows educators to make small, practical changes that can significantly impact both teaching quality and learners’ experiences.

Deliver Personalised CEFR-Aligned Feedback

The second webinar, ‘From Rubrics to Results: How to Provide Impactful Feedback’, focused on how AI can help teachers provide meaningful, personalised feedback without adding to their workload. Nik demonstrated the Feedback generator, showing how educators can instantly create feedback tailored to each student while keeping it aligned with CEFR standards and institutional rubrics.

Key benefits highlighted in the session included:

  • CEFR-aligned feedback that can be tailored to specific subscales.
  • Feedback tailored to rubrics and assessment criteria, ensuring comments reflect your teaching context.
  • Balanced, constructive comments that highlight both strengths and areas for improvement.

During the session, it was stressed that AI works best when it enhances teacher expertise rather than replacing it. By streamlining the feedback process, educators can maintain high standards of personalisation and pedagogy, even with large groups of students.

Adapt and Analyse Content Across Levels

The third webinar, ‘Adapting Content for Effective CEFR-Aligned Language Teaching’, spotlighted how AI can empower teachers to adapt existing materials to diverse learner groups and levels. Nik introduced two powerful tools specifically designed with classroom realities in mind: the Adapt your content generator and the CEFR Level Checker.

Key benefits highlighted in the session included:

  • Effortlessly adapting content from one CEFR level to another while preserving the original theme and ensuring the result is pedagogically effective.
  • Immediate, precise CEFR analysis of texts, breaking down vocabulary and grammar complexity to help verify learner-appropriate materials.
  • Supporting teacher control through editable outputs that can be fine-tuned for specific class needs.

As Nik emphasised, ‘It’s not just about saving time. It’s about creating something that actually works for your learners faster’. The session showed how these AI generators translate the complexity of CEFR adaptation into practical, editable resources, enabling teachers to respond precisely to different learner needs without compromising pedagogical integrity.

Engage Students and Assess Progress Quickly

The fourth webinar, ‘Generate, Engage and Assess: Create Custom Texts and Multiple Choice Quizzes’, demonstrated how TeacherMatic can support both content creation and assessment in language teaching. Participants saw how the Create a text and Multiple Choice Questions generators allow teachers to produce original CEFR-level texts and assess learner understanding instantly, without prompt engineering or technical complexity.

Highlights from the session included:

  • Generating original classroom-ready texts tailored by topic, CEFR level, grammar focus, text type, vocabulary and length.
  • Creating CEFR-aligned multiple-choice quizzes from any text to assess comprehension, vocabulary or grammar.
  • Adapting content across proficiency levels while preserving the theme and ensuring pedagogical usefulness.

In this session, participants learned how combining flexible content and quiz generators can streamline lesson preparation, enhance learner engagement and support accurate, timely assessment.

The Language Teaching Takeoff Webinar Series has illustrated how purpose-built AI can support language educators in practical, impactful ways. The TeacherMatic Language Teaching Edition allows teachers to leverage AI responsibly, ethically and safely, enhancing learning while maintaining pedagogical standards and putting educators in control of their classroom practice.

The series isn’t over yet.


What’s Next:

After a short summer break, the Language Teaching Takeoff Webinar Series returns. Join us for the next session:

Create Engaging Materials from YouTube Content and Build Custom Glossaries

Date: Thursday, 11th September

Time: 12:00 – 12:30 BST | 13:00 – 13:30 CEST

Discover how AI generators can turn YouTube videos into engaging content, and learn how to generate custom glossaries tailored to CEFR levels and your learners’ needs.


Explore the Language Teaching Edition of TeacherMatic

Whether teaching A1 learners or guiding advanced students through C1 material, the Language Teaching Edition of TeacherMatic helps you do it more efficiently, precisely and flexibly. 



Bringing Mobile Learning Back with AI, Context and Expertise

What if mobile learning had the intelligence and context it lacked 25 years ago? This piece revisits the rise and fall of early mobile learning projects and considers how the convergence of artificial intelligence, contextual mobile data and educational expertise could support more responsive and personalised learning today.

Bringing Mobile Learning Back with AI, Context and Expertise

Author: Prof John Traxler, UNESCO Chair, Commonwealth of Learning Chair and Academic Director of the Avallain Lab

St. Gallen, July 28, 2025 – Around 25 years ago, many members of the European edtech research community, myself included, were engaged in projects, pilots and prototypes exploring what was then known as ‘mobile learning’. This referred, roughly speaking, to learning with mobile phones, typically 3G devices, at the dawn of the smartphone era. Learners could already access all the types of learning available on networked desktops in their colleges and universities, but they were now freed from those desktops. The excitement, however, was around all the additional possibilities. 

One of these was ‘contextual learning,’ meaning learning that responded to the learner’s context. Mobile phones knew where they were, where they had been and what they had been doing1. These devices could capture images, video and sound of their context, including both the user and their surroundings. This meant they could also understand and know their user, the learner. 

So, to provide some examples:

  • Walking around art galleries like the Uffizi and heritage sites like Nottingham Castle, learners with their mobile phones could stop at a painting randomly and receive a range of background information, including audio, video and images. The longer they stayed, the more they would receive. Based on other paintings they had lingered at, they could get suggestions, explanations and perspectives on what else they might like and where else they could go.
  • Augmented reality on mobile phones meant that learners standing in Berlin using their mobile phone as a camera viewfinder could see the Brandenburg Gate, but with the now-gone Berlin Wall interposed perfectly realistically as they walked up to and around it. Similarly, they could see Rembrandt’s house in Amsterdam. Learners could also walk across the English Lake District and see bygone landforms and glaciers, or engage in London murder mysteries, looking at evidence and hearing witnesses at various locations.
  • Recommender systems on mobile phones analysed learners’ behaviours, achievements and locations to suggest the learning activity that would suit them best based on their history and context. These recommendations could be linked to assignments, resources and colleagues on their university LMS, providing guidance and practical advice; a Canadian project, for example, explored specific applications in tourism.
  • Using a system like Molly Oxford on their mobile phones, learners could be guided to the nearest available loan copy of a library book they wanted. They could also be given suggestions based on public transport, wheelchair accessible footpaths and library opening hours.
  • Trainee professionals, such as physiotherapists or veterinary nurses, in various projects across Yorkshire, could be assessed while carrying out a healthcare procedure in ‘real-life’ practice. Their mobile phones would capture the necessary validation and contextual data to ensure a trustworthy process.
  • Some early experiments, with Bluetooth and other forms of NFC (near-field communication), allowed passers-by or students to pick up comments or images hanging in discrete locations, such as a subway or corridor on a university campus, serving as sign-posting or street art. 

These pilots and projects implemented situated2, authentic3 and personalised4 learning as aspects of contextual learning, and espoused5 the principles of constructivism6 and social constructivism7. This was only possible as far as the contemporary resources and technologies permitted. They did not, however, encourage or allow content to be created, commented on, or contributed to by learners, only consumed by them. Also, they usually only engaged with learners on an individual basis, not supporting interaction or communication among learners, even those learning the same thing, at the same place and at the same time.

So what went wrong? Why aren’t such systems widespread across communities, galleries, cultural spaces, universities and colleges any more? And how have things changed? Could we do better now?

The Downfall of Mobile Learning: What Went Wrong?

Mobile phone ownership was not widespread two decades ago, and popular mobile phones were not as powerful as they are today. The ‘apps economy’8 had not taken off. This meant that projects and pilots had to develop all software systems from scratch and get them to interoperate9. They also had to fund and provide the necessary mobile phones for the few learners involved10.

Once the pilot or project and its funding had finished, its ideas and implementation were not scalable or sustainable; they were unaffordable. Pilots and projects were usually conducted within formal educational institutions among their students. Evaluation and dissemination, moreover, focused on technical feasibility, proof-of-concept and theoretical findings; they rarely addressed outcomes that would sway institutional managers or impact institutional performance metrics. As a result, these ideas remained at the optional margins of institutional activity rather than entering the regulated business of courses, qualifications, assessments and certificates. Nor was there a business model to support long-term adoption. 

In fairness, we should also factor in the political and economic climate at the end of the 2000s. The ‘subprime mortgage’ crisis11 and the ‘bonfire of the quangos’12 depleted the political goodwill and public finances for speculative development work, work that had implicitly assumed the ‘diffusion of innovations’13 into mainstream provision, a ‘trickle down’ that would take these ideas from pilot project to production line.

The Shift in Mobile Learning: What Changed?

Certainly not the political or economic climate. But mobile phones are now familiar, ubiquitous and powerful, and so is artificial intelligence (AI). Both of these technologies now sit outside educational institutions rather than being confined within them. 

These earlier pilots and projects were basically ‘dumb’ systems, with no ‘intelligence’, drawing only on information previously loaded into their closed systems. Now we have ‘intelligence’: we have AI, and we have AI chatbots on mobile phones. Currently, however, AI lacks context and cannot know or respond to the location, history, activity or behaviour of the learner and their mobile phone. Many current AI applications and chatbots are also stateless, retaining no memory across interactions, which poses a further challenge to any continuity.

The Possibilities of Mobile Learning: Could We Do Better Now?

Today’s network technologies can enable distributed, connected contribution and consumption, enabling writing as well as reading. These might realise more of the possibilities of constructivism and social constructivism. They could enable educational systems to learn about and respond to their individual learners and their environment, connecting groups of learners and showing them how to support each other14.

So, is there the possibility of convergence? Is it possible to combine the ‘intelligence’ of AI, the ‘memory’ of databases and the context provided by mobile phones, including both the learner and their environment? Could this be merged and mediated by educational expertise, acting as an interface between the three technologies, filtering, selecting and safeguarding?

What might this look like? We could start by adding ‘intelligence’ and ‘memory’ to our earlier examples.

The Future of Mobile Learning: What Could it Look Like? 

In terms of formal learning, our previous examples of the Uffizi Galleries, the Lake District, the Berlin Wall and Nottingham Castle are easy to extrapolate and imagine. Subject to a mediating educational layer, learners would each be in touch with other learners, helping each other in personalised versions of the same task. They could receive background information, ideas, recommendations, feedback and suggestions, cross-referenced with deadlines, schedules and assignments from their university LMS, all based on the cumulative history of their individual and social interactions and activities. 

When it comes to community learning or visitor attractions, systems could be created that encourage interactive, informal learning. For example, a living local history or 3D community poem spread around in the air, held together by links and folksonomies15, perhaps using tags to connect ideas, a living virtual world overlaying the real one. These systems could also support more prosaic purely educational applications, combining existing literary, artistic or historical sources with personal reactions or recollections.
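The folksonomy idea above can be sketched very simply. In this hypothetical Python fragment (all contributions, places and tags are invented for illustration), freely chosen tags act as the links that hold the overlaid ‘living virtual world’ together:

```python
from collections import defaultdict

# Illustrative, invented community contributions: each is anchored to a
# place and tagged freely by its author (a folksonomy, not a fixed schema).
contributions = [
    {"text": "My gran worked in this mill", "place": "Old Mill",
     "tags": {"memory", "work"}},
    {"text": "Haiku about the canal", "place": "Canal Bridge",
     "tags": {"poem", "water"}},
    {"text": "The mill flooded in 1968", "place": "Old Mill",
     "tags": {"memory", "water"}},
]

def index_by_tag(items):
    """Build a tag -> contributions index; shared tags link otherwise
    unrelated contributions into a browsable web."""
    index = defaultdict(list)
    for item in items:
        for tag in item["tags"]:
            index[tag].append(item["text"])
    return index

index = index_by_tag(contributions)
# Contributions sharing the 'water' tag connect the canal poem
# to the flood story, even though they sit at different places.
```

Because the tags are user-defined rather than imposed by a schema, new connections emerge as contributors add material, which is precisely what distinguishes a folksonomy from a conventional fixed-structure database.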

Technically, this means accessing the mobile phone’s contextual data, and sometimes other simple mobile data communications, to establish context. It also means querying a relational database16 to retrieve history and constraints, and perhaps an institutional LMS to retrieve assignments, timetables and course notes. AI can then be prompted to bring these together for some educational activity. Certainly, a proof of concept is eminently feasible. The expertise and experience of the three core disciplines are still out there and only need to be connected, tasked and funded.
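A minimal proof-of-concept of this convergence might look like the following Python sketch. Everything here is an assumption for illustration: the learner, places and dwell times are invented, the schema is a stand-in for a real institutional database, and rather than calling a real AI model we only assemble the prompt that would be sent to one.

```python
import sqlite3

# Hypothetical history store: a relational database of where a learner
# has lingered (the 'memory' in the convergence described above).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE visits (learner TEXT, place TEXT, dwell_seconds INTEGER)")
db.executemany("INSERT INTO visits VALUES (?, ?, ?)",
               [("ada", "Botticelli Room", 420), ("ada", "Caravaggio Room", 90)])

def build_prompt(learner: str, current_place: str) -> str:
    """Combine phone context (current_place) with stored history into a
    prompt for an AI model. A real system would send this to a model,
    mediated by an educational layer; here we only assemble the text."""
    rows = db.execute(
        "SELECT place, dwell_seconds FROM visits WHERE learner = ? "
        "ORDER BY dwell_seconds DESC", (learner,)).fetchall()
    history = ", ".join(f"{place} ({secs}s)" for place, secs in rows)
    return (f"The learner is currently at {current_place}. "
            f"Previously they lingered at: {history}. "
            f"Suggest one related artwork and explain the connection.")

prompt = build_prompt("ada", "Leonardo Room")
```

The interesting design work sits in the mediating educational layer implied by the final sentence of the prompt: deciding what history is relevant, what to filter out and how to safeguard the learner, none of which the three underlying technologies provide on their own.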

Conclusions and Concerns

This piece sketches some broad educational possibilities once we enlist AI to support various earlier kinds of contextual mobile learning. Specific implementations and developments must address considerable social, legal, ethical and regulatory concerns and requirements. The earlier generation of projects might have already worked with these, privacy and surveillance being the obvious ones. Still, AI adds an enormous extra dimension to these, and there are other concerns like digital over-saturation, especially of children and vulnerable adults.

Nonetheless, this convergence of AI, contextual mobile data and educational expertise promises a future where learning is not confined to traditional settings but is a fluid, intelligent and deeply embedded aspect of our daily lives, making education more effective, accessible and aligned with individual and societal needs.


Mobile Learning & GenAI for the Less Privileged, Refugees & the Global South

How can mobile learning and GenAI reach those traditionally left out of educational innovation?

In a recent episode of Silver Lining for Learning, an award-winning webinar and podcast series, Prof. John Traxler joined a panel to discuss how mobile learning and generative AI can support less privileged learners, including refugees and communities in the Global South. 

The episode, ‘Mobile Learning & GenAI for the Less Privileged, Refugees & the Global South,’ builds on many of the questions raised in this article. It explores how mobile technologies have and haven’t fulfilled their potential, and what role GenAI might now play in addressing longstanding educational inequalities.

Watch the full episode:


  1. There is considerable literature, including:
    Special editions: Research in Learning Technology, Vol. 17, 2009. 
    Review articles: Kukulska-Hulme, A., Sharples, M., Milrad, M., Arnedillo-Sanchez, I. & Vavoula, G. (2009). Innovation in mobile learning: A European perspective. International Journal of Mobile and Blended Learning, 1(1), 13–35.
    Aguayo, C., Cochrane, T. & Narayan, V. (2017). Key themes in mobile learning: Prospects for learner-generated learning through AR and VR. Australasian Journal of Educational Technology, 33(6).
    Edited books: Traxler, J. & Kukulska-Hulme, A. (Eds) (2015), Mobile Learning: The Next Generation, New York: Routledge. (Also available in Arabic, 2019.) 
    More philosophically, Traxler, J. (2011) Context in a Wider Context, Medienpädagogik, Zeitschrift für Theorie und Praxis der Medienbildung. The Special Issue entitled Mobile Learning in Widening Contexts: Concepts and Cases (Eds.) N. Pachler, B. Bachmair & J. Cook, Vol. 19, pp. 1-16. ↩︎
  2. Meaning, ‘real-life’ settings. ↩︎
  3. Meaning, ‘real-life’ tasks. ↩︎
  4. Meaning, learning tailored to each separate individual learner.  ↩︎
  5. Educational technology researchers distinguish between what teachers say, what they ‘espouse’, and what they actually do, what they ‘enact’, usually something far more conservative or traditional. ↩︎
  6. An educational philosophy based on learners actively building their knowledge through experiences and interactions. ↩︎
  7. A variant of constructivism that believes that learning is created through social interactions and through collaboration with others. For an excellent summary of both, see: https://www.simplypsychology.org/constructivism.html  ↩︎
  8. For an explanation, see: https://smartasset.com/investing/the-economics-of-mobile-apps ↩︎
  9. A common term among computing professionals, referring to whether or not different systems, such as hardware, software, applications and peripherals, will actually work together, or whether it would be more like trying to fit a UK plug into an EU socket.  ↩︎
  10. A more detailed account is available at: https://medium.com/@Jisc/what-killed-the-mobile-learning-dream-8c97cf66dd3d ↩︎
  11. For an explanation, see: https://en.wikipedia.org/wiki/Subprime_mortgage_crisis ↩︎
  12. For an explanation, see: 2010 UK quango reforms – Wikipedia, which impacted Becta, the LSDA, Jisc and other edtech supporters.  ↩︎
  13. For an explanation, see: https://en.wikipedia.org/wiki/Diffusion_of_innovations ↩︎
  14. The location awareness of neighbouring mobile phones could extend physical or geographical proximity to embrace social proximity, meaning learners who are socially connected, or educational proximity, meaning learners working on similar tasks. The latter idea connects to the notions of ‘scaffolding’, ‘the more knowledgeable other’ and ‘the zone of proximal development’ of the theorist Vygotsky. For more, see: https://en.wikipedia.org/wiki/Zone_of_proximal_development ↩︎
  15. Databases conventionally have a fixed structure, for example, personal details based on forename, surname, house name, street name and so on, with no choice. Folksonomies, by contrast, are defined by the user, often on the fly. For example, tagging with labels such as ‘people I like’, ‘people nearby’, ‘people with a car’. Diigo, a social bookmarking service, uses tagging to implement a folksonomy. ↩︎
  16. Relational databases, unlike ‘flat’ databases based solely on a file, capture relationships, such as a teacher working in a college or a student enrolling in a course, and include all the various individual teachers, courses, students and colleges. ↩︎
