AI and Why It’s Impossible to Learn or Understand Language: Cultural and Cognitive Challenges

Language carries assumptions, cultural context, and implicit meaning that make comprehension and translation difficult for humans and even more complex for AI to master. Expanding on the first half, this piece explores language for persuasion, showing how cultural norms, reasoning patterns and rhetorical conventions shape communication, learning and the complexities of teaching or interpreting language effectively.

Author: Prof John Traxler, UNESCO Chair, Commonwealth of Learning Chair and Academic Director of the Avallain Lab

This piece continues to argue that it is impossible to learn, understand or discuss what anyone else says or writes beyond the simplest, most specific and concrete level, even perhaps among people with a shared mother tongue. This makes conversation, learning, translation and reasoning more difficult than they initially seem, especially when they involve artificial intelligence. 

The piece is divided into two halves. The first, ‘AI and Why It’s Impossible to Learn or Understand Language’, already deals with language as idiom, whilst the second half deals with language for reasoning. We transition from language for description to language for persuasion. 

As I said in the first half of this piece, language as an idiom is a challenge for learners or translators outside the culture or community that actually hosts that idiom, and this clearly also applies to the chatbots of GenAI.

Sadly, but obviously, reasoning also depends on language, and reasoning is not usually about anything as concrete and specific as ‘the cat sat on the mat’. In fact, it seems safe to say, as we did in the previous piece, that language is mostly not about the specific and the concrete. Rather, language, especially language that is in any sense important, is metaphor, simile or analogy, and each of these is based on implied notions or assumptions of ‘likeness’.

Deductive and Inductive Reasoning

Reasoning, or rhetoric or argumentation, in the West, is defined as either deductive or inductive.

Deduction, the former, is true by definition, like ‘2 and 2 is 4’ or ‘Socrates is a man, all men are mortal … etc, etc.’, because that is how ‘4’ and how ‘men’ are defined. It has to be true because that is how the terms are defined; strictly speaking, it is not ‘true’ but valid; it is a tautology; it is circular. On inspection, we may be unclear whether it is males, Homo sapiens or hominids being discussed, and unclear how some things are counted, clouds, for example. 

The latter, induction, works from specific instances towards inferences about the general, say from ‘every dog I’ve met at the park has been friendly’, to ‘all dogs are friendly always’. Somehow, those dogs in the park are ‘like’ all dogs all the time. In making this inference, we preserve or favour one aspect and neglect others, such as ‘in the park’. Even statistical inferences work the same way; the ‘sample’ being analysed is ‘like’ the ‘population’, and somehow representative of the ‘population’, to use the statistical terminology. 
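For readers who like mechanisms made concrete, the inductive leap can be caricatured in a few lines of Python. The sketch below (data and names invented for illustration) infers a rule from park observations and, note, silently discards the ‘in the park’ qualifier:

```python
# A caricature of naive induction: generalising from a biased sample.
# All names and data here are invented for illustration.

observations = [
    {"animal": "dog", "friendly": True, "location": "park"},
    {"animal": "dog", "friendly": True, "location": "park"},
    {"animal": "dog", "friendly": True, "location": "park"},
]

def induce(observations):
    """Infer 'all dogs are friendly' if every observed dog was friendly.

    Note what is quietly discarded: every observation happened
    'in the park', but the inferred rule makes no mention of it.
    """
    dogs = [o for o in observations if o["animal"] == "dog"]
    if dogs and all(o["friendly"] for o in dogs):
        return "all dogs are friendly always"   # the overgeneralisation
    return "no rule inferred"

print(induce(observations))
```

The point of the toy is the silent discard: the inferred rule quantifies over all dogs at all times, even though every observation carried a ‘location’ that the rule has simply ignored.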

But, and this is the kicker, both depend on some tacit or shared consensus about the ‘likeness’ that is going on: that ‘this one is like that one and like that other one and like all those others’, that they have something in common, in general. And that depends on culture, on the people within a culture or subculture basically agreeing. As we said, once you get slightly more abstract than cats sitting on mats, all language is metaphor, analogy or simile, sometimes in plain sight when we see ‘like’ or ‘as’ in a sentence, sometimes hidden, with only an ‘is’.

The Challenge of Abstract Thought

Plato’s ‘Allegory of the Cave’, from ‘The Republic’ (Book VII, 514a–520a), expresses the notion that we humans only experience separate, poor, solid instances of some higher, hidden, abstract and immutable reality. ‘The dog’, for example, is perhaps the wrong way around: we experience each of those poor, solid, real dogs and assume we can group them together as some abstract ‘dog’ and discuss them accordingly, whereas different cultures might do the grouping, and thus the reasoning, differently. We assume that the distinction between ‘dog’ and ‘not-dog’ is clear-cut and sharp, or ‘the park’ and ‘not-the-park’, with nothing vague and nothing in between. 

Fuzzy concepts, as opposed to sharp ones, are another challenge for logic and reasoning, which need the duality of ‘either/or’ with nothing smeared out in between. In fact, even ‘park’ or ‘dog’ might not be so clear: are feral dogs or wild dogs included, and is Hampstead Heath a park? We could be pragmatic and use the rule-of-thumb attributed to Indiana poet James Whitcomb Riley (1849–1916): ‘When I see a bird that walks like a duck and swims like a duck and quacks like a duck, I call that bird a duck.’ So Hampstead Heath is a park; a car park is not.
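Riley’s rule-of-thumb has, as it happens, a direct descendant in programming: ‘duck typing’, where an object counts as a duck if it behaves like one, regardless of its declared type. A minimal sketch, with class names invented for illustration:

```python
# Duck typing: classify by behaviour, not by declared type.
class Duck:
    def walks(self): return "waddle"
    def swims(self): return "paddle"
    def quacks(self): return "quack"

class RobotDuck:  # not a Duck by ancestry, but it behaves like one
    def walks(self): return "whirr"
    def swims(self): return "whirr"
    def quacks(self): return "quack"

class Cat:
    def walks(self): return "pad"

def is_duck(bird):
    """Riley's test: if it walks, swims and quacks, call it a duck."""
    return all(hasattr(bird, verb) for verb in ("walks", "swims", "quacks"))

print(is_duck(Duck()))       # True
print(is_duck(RobotDuck()))  # True: behaviour, not ancestry, decides
print(is_duck(Cat()))        # False
```

Like Riley’s duck, the classification is pragmatic rather than essential: nothing in the test asks what the thing really *is*.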

One way or another, these mental processes do not generate new knowledge; they expose and perhaps distort knowledge already beneath the words being used.

Examples of Culture Shaping Understanding

To take some specific examples where different words are used to describe basically the same process, that is, the process of culture shaping reasoning, of words not describing our experiences but shaping them:

Hammer and nail: Abraham Maslow said, ‘If the only tool you have is a hammer, you tend to see every problem as a nail,’ meaning the extent to which preconceptions or interpretations shape understanding or analysis, the solution shaping the problem.  

Evolution and creation paradigms: Creationists argue that God created fossils to test the Christian faith, whilst evolutionists argue that fossils were the product of sedimentary deposition. The culture of a specific community, whether creationist or evolutionist, determines how it understands the evidence rather than the evidence determining the understanding.

Policy and evidence: The (cynical) notion is that ‘evidence-based policy formulation’ is often really ‘policy-based evidence formulation’, a suspicion familiar to many of us who have worked for ministries and ministers: that the interpretation of evidence precedes the gathering of it. And of course, logically speaking, there can be no evidence for evidence; that would be circular, a logical fallacy.

Personal construct theory: Those ways, major or minor, that individuals use to understand or organise their experiences, those dogs in the park, for example, partial or over-simplified or over-generalised explanations that help us live our lives.

We might use different words, paradigms, policies, cultures or constructs, for example, but these are all essentially the same process at work: words actively shaping experience rather than passively describing it. I admit to being on shaky ground when analysing the workings of words with words, but what choice do I have? The aim was to point out, however weakly, the difficulty that GenAI might have in conversation, translation and education.

The Principle of Linguistic Relativity

The Sapir-Whorf Hypothesis, aka the ‘principle of linguistic relativity’, is relevant here. It is the notion that language shapes thought and perception, meaning speakers of different languages may think about and experience reality differently, in mutually incomprehensible ways, and may never truly understand each other. 

The Hypothesis suggests that a language’s structure influences how its speakers conceptualise their respective worlds. The language we learn influences our cognitive processes, including our perception, our categorisation of experience and even our ability to think about certain concepts. 

There are two versions. The strong version of the hypothesis proposes that language determines thought, meaning that thought is impossible without language. A provocative view, rejected mainly by linguists and cognitive scientists, but resonating with George Orwell’s idea of Newspeak, which we mention later. The weak version, proposing that language influences thought, suggests that whilst thought is not solely determined by language, it is significantly shaped by it, a more acceptable interpretation. 

So, for example, the differing vocabularies for snow in English and in Inuit languages suggest that English speakers might have a less nuanced understanding of snow-related concepts. Secondly, some languages assign grammatical gender to objects, potentially influencing how speakers perceive those objects; meanwhile, the Chinese ideogram or character for ‘happiness’ was derived from ‘woman in house’, an interesting trajectory from the concrete to the abstract.

The point about Orwell is that the appendix to his novel ‘1984’ describes a political system that, by eliminating problematic or challenging words from its language, Newspeak, eliminates problematic or challenging thoughts from the population, suggesting again the possibility that language can shape culture (or society, in this case), or so Orwell thought. Any resonance with current concerns about political and corporate influence on the news media is, of course, purely coincidental.

Cultural Dimensions and the Definition of ‘Culture’

At some point, we ought to introduce ‘cultural dimensions’ and Geert Hofstede’s work, among others, as much of this piece mentions or implies culture as a fundamental mechanism in shaping language. Being abstract, we can only define ‘culture’ using either metaphors or other abstractions, so we will settle for something simple, ‘the way we do things around here’, ‘here’ being our society, our friends, our organisation, our profession or wherever else a group of people have shared values.

Cultures are obviously different from each other; ‘cultural dimensions’, based on Hofstede’s work, are a tool for describing in what respects and by how much they differ. So we might say that some cultures are risk-taking, others risk-averse; some are consensual, others authoritarian; some take the long view, others the short one; some are individualistic or even selfish, others communal and collectivist, and so on, giving us scales by which to calibrate different cultures. 

So these are alternative perspectives, language and conversation shaping culture and thought, and the opposite, culture and thought shaping conversation and language, or perhaps a dynamic between the two.

Culture, Reasoning and the Diffusion of Innovations

If we are to use language to reason, question, analyse, judge, evaluate and critique, rather than merely locate the cat, then we have to recognise how language is shaped by culture. This may be national culture, regional culture, gender culture, class culture, generational culture, ethnic culture, or indeed a mixture of all of these, as they still shape language. 

Reasoning, questioning, analysing, judging, evaluating and critiquing are essential components of higher-level learning and of higher-order language learning, if learning is to be about reasoning as well as rote reciting. Cultural dimensions, however, suggest that some cultures may be less tolerant of dissent and the outcomes of reasoning than others (and may not even have the language to express it), or might find some conclusions less palatable, less conforming or more risky. 

A further complication is the theorising behind the Diffusion of Innovations, which suggests that changes to opinions, attitudes or beliefs, in effect acceding to reasoning or argumentation, depend on various factors. Culture is one of these, as we can deduce from Hofstede’s ‘cultural dimensions’, for example the risk-aversion/-acceptance and the consensual/authoritarian dimensions. There are, however, others: the ‘relative advantage’ of the changed opinion, attitude or belief, and its ‘trialability’, ‘observability’, ‘compatibility’ and ‘complexity’, are also factors in acceding to an argument or reason representing that change. The pure logic of GenAI may well see these complications as unreasonable.

Language and Its Cultural Influence

The work of linguist Robert B. Kaplan comes at this from a different direction. His analysis of essays by English, Romance, Semitic and Asian students suggested that every language is influenced by a unique thought pattern characteristic of its culture, or by the collective customs and beliefs of its people. 

Rhetoric, argumentation and thus reasoning exhibit culturally distinct patterns. Rhetorical conventions vary across cultures, affecting how students compose essays. English rhetoric in this depiction follows a linear, logical structure influenced by Western philosophical traditions. This harks back to our depiction of inductive and deductive reasoning, whilst other cultures may employ parallelism, helical, zig-zag or indirect approaches in writing, leading to different expectations in composition and argumentation, ones that make less sense to the processes of GenAI. 

Western Reasoning

Interestingly, a paper from Harvard published in November 2025 observes that, ‘LLM responses … their performance on cognitive psychological tasks most resembles that of people from Western, Educated, Industrialized, Rich, and Democratic (WEIRD)’.

Returning briefly to Western, or WEIRD, reasoning: whilst we have described the established ways of reasoning correctly, the deductive and the inductive, there are also Western ways of reasoning incorrectly. When learning logic, you start with the fallacies of irrelevance: fallacies that introduce irrelevant information to distract from the main argument. 

For example:

Ad Hominem: Attacking the character or personal attributes of an opponent rather than the argument itself.  

Appeal to Emotion: Manipulating emotions, such as pity or envy, instead of using logical reasoning to win an argument.

Ad Populum: Claiming something is true or right because many people believe it. 

The Red Herring: A distracting point to divert attention from the actual issue. 

Straw Man: Misrepresenting an opponent’s argument to make it easier to attack. 

There are also Fallacies of Weak Induction, such as: 

The Post Hoc Fallacy: Assuming that because one event followed another, it must have caused it. 

The Slippery Slope: Asserting that a small first step will lead to a chain of related, often negative, events. 

Finally, there are Fallacies of Presumption, including:

Circular Reasoning: Using the conclusion as a premise to support that same conclusion.

False Dichotomy: Presenting only two possible options when more options exist.

Cultural Significance of Logical Flaws

Our point is that these errors, even if valid within a Western context, may not be obvious or convincing to a non-Western language learner accustomed to reasoning differently, and that their weight may be less or different in other cultures. So, more hierarchical or authoritarian cultures may find Ad Hominem arguments perfectly valid when made by someone with sufficient status, whilst more consensual or communal cultures might be happy with Ad Populum arguments rather than standing out in the crowd. Additionally, cultures are probably each on a continuum from emotional to rational, and this, too, will determine how they react to reasoning and argument.  

Culture or individual cultures are, however, not immutable. According to historian Ian Mortimer, the Elizabethan hand mirror and the vernacular Bible, Tyndale’s in English and Luther’s in German, moved the needle towards greater individuality or individualism and lessened communality or collectivism in their societies, as the mobile phone selfie has done more recently. Relating individually to AI chatbots might have similar consequences, as individuals use them for emotional and intellectual support, or it might involve a completely different cultural dimension.

Other Linguistic Dimensions

Another attribute of language is lexical distance, the distance and differences within and between language families. So, for example, German is very close to Dutch but distant from Chinese, so German speakers might struggle to learn Chinese but not Dutch. Are GenAI chatbots somewhere amongst the Anglo-American, low-context, consultative and slightly out-of-date languages, lexically distant from many other language families?
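As a very crude illustration, and emphatically not how linguists actually measure lexical distance, which compares whole core vocabularies, surface similarity between single cognate words hints at why Dutch looks ‘close’ to an English or German speaker while Mandarin does not:

```python
# A toy proxy for lexical distance: surface similarity of single words.
# Real lexical-distance measures compare whole core vocabularies;
# this sketch only gestures at the idea.
from difflib import SequenceMatcher

def similarity(a, b):
    """Rough string similarity between 0.0 (nothing shared) and 1.0 (identical)."""
    return SequenceMatcher(None, a, b).ratio()

pairs = [
    ("water", "water"),   # English / Dutch
    ("water", "wasser"),  # English / German
    ("water", "shui"),    # English / Mandarin (pinyin romanisation)
]
for a, b in pairs:
    print(f"{a!r} vs {b!r}: {similarity(a, b):.2f}")
```

Identical cognates score 1.0, the German cognate scores high, and the Mandarin romanisation shares nothing at the surface, which is roughly the intuition behind lexical distance.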

This linguistic metric might also apply to different literary genres or, indeed, different literary authors. Is James Joyce lexically distant from Ernest Hemingway, or a haiku from a sonnet, and thus more or less difficult to understand or translate, especially for chatbots rooted in the GenAI culture they inherit from their trainers?

Languages are also sometimes classified on a continuum from high-context to low-context, with greater or lesser baggage, background, assumptions and preconceptions, and metaphorically expecting more or less bandwidth for successful comprehension or translation. Clearly, low-context language learners will struggle to hear, for example, the irony, euphemism, hyperbole or sarcasm at work in high-context languages. In a high-context language, the neurodiverse will have much less metaphorical bandwidth; they (and I am one, in this context) are low-context, missing cues and signals from their higher-context culture or colleagues.

The Challenge for Educational AI

It is difficult to imagine how to converse with each other with all of these issues going on, so perhaps we should spend more time talking about the cat sitting on the mat and less time on democracy and freedom, and perhaps, for safety’s sake, conversations with GenAI chatbots should also stick to cats. Language is not as simple as it seems, nor is learning it or teaching it, nor hoping that GenAI will be good at either.

The challenge for educational AI is how to proceed safely, from helping learners with the specific and concrete to the abstract and general, from learning about cats to learning about freedom.

The mention of ‘safely’ does, however, add the additional element of ‘harm’. This piece was not really about the ethical dimension of educational AI. This too can be tackled, like our account of deduction and induction, from the top down, from abstract general principles, such as beneficence, to the concrete and specific, such as bomb-making recipes, or from the bottom up, from a long list of concrete and specific misdemeanours, to some abstract general principle that unites them. 

Either way, we hope AI can replicate human reasoning, but human reasoning is flawed, and those flaws will likely be replicated in the training of GenAI. These are precisely the two approaches being explored and investigated at the Avallain Lab.


About Avallain

For more than two decades, Avallain has enabled publishers, institutions and educators to create and deliver world-class digital education products and programmes. Our award-winning solutions include Avallain Author, an AI-powered authoring tool, Avallain Magnet, a peerless LMS with integrated AI, and TeacherMatic, a ready-to-use AI toolkit created for and refined by educators.

Our technology meets the highest standards with accessibility and human-centred design at its core. Through Avallain Intelligence, our framework for the responsible use of AI in education, we empower our clients to unlock AI’s full potential, applied ethically and safely. Avallain is ISO/IEC 27001:2022 and SOC 2 Type 2 certified and a participant in the United Nations Global Compact.

Find out more at avallain.com

Contact:

Daniel Seuling

VP Client Relations & Marketing

dseuling@avallain.com

AI and Why It’s Impossible to Learn or Understand Language

Intelligence, whether human or artificial, cannot be determined purely through rational or quantitative measures. It also involves interpreting context, nuance and metaphor, the unpredictable elements of human thought. This piece examines how these aspects affect learning and understanding a language, and the challenges of participating in a community, especially as AI becomes more widely used for teaching and learning.

Author: Prof John Traxler, UNESCO Chair, Commonwealth of Learning Chair and Academic Director of the Avallain Lab

St. Gallen, October 29, 2025 – In this piece, we argue that it is impossible to learn, understand or discuss what anyone else says or writes at anything beyond the simplest, most specific and concrete level. This even perhaps applies to people with a shared mother tongue, making conversation, learning, translating and reasoning more difficult than they initially seem, especially when they involve artificial intelligence and computers. 

The discussion is, in fact, divided into two halves: the first deals with language as idiom, and the second deals with language for reasoning. In other words, we are discussing language and learning, and language learning, thus discussing intelligence, artificial and otherwise.

Language as Idiom

AI and the Turing Test

Artificial intelligence is the ongoing effort to develop digital technologies that mimic human intelligence, despite the undefined nature of human intelligence. It has been through various incarnations, such as expert systems and neural nets, and now generative AI or GenAI, seeming to finally deliver on the promises of 40 or 50 years ago.

Over all this time, there has, however, been a test, the Turing Test, to evaluate AI’s apparent intelligence, revealing insights into both intelligence and language. GenAI, the current incarnation, is in effect pattern matching with a conversational interface, a sophisticated form of autocomplete, completing responses based on the world’s vast digital resources. However, because of this, it can produce ‘hallucinations’, responses that are plausible but wrong, and can also perpetuate harm, bias or misinformation.
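The ‘sophisticated autocomplete’ description can be caricatured with a toy bigram model. Real large language models are vastly more elaborate, but, like this sketch, they complete text from patterns observed in training data rather than from any model of truth:

```python
# A toy bigram 'autocomplete': predict the next word purely from
# frequency patterns in the training text. GenAI is enormously more
# sophisticated, but it too completes from observed patterns, with
# no notion of whether the completion is true.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat sat on the chair"

counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    counts[current][following] += 1

def autocomplete(word):
    """Return the most frequent follower of `word` in the training text."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(autocomplete("cat"))  # 'sat'
print(autocomplete("the"))  # 'cat', by frequency, not by understanding
```

A plausible but wrong completion from such a model is, in miniature, what a ‘hallucination’ is: the statistics are satisfied even when the world is not.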

The Turing Test imagines a human, the ‘tester’, able to interact independently with another human and a computer. If the tester cannot tell when he or she is interacting with the human or the computer, then the computer can be said to be ‘intelligent’; it passes the Turing Test. 

Expanding the Boundaries of Intelligence

We should, however, consider how this would work with a seemingly intelligent mammal, say a chimpanzee, conversing in American Sign Language, or an extraterrestrial, say ET, the visiting alien scientist. The film Arrival illustrates the possible superiority of other intelligences, their languages and their differences. These, too, might manifest ‘intelligence’ and challenge ours, widening our notions of intelligence and thus what we might expect from AI.

There is an alternative model of what is going on with intelligence, specifically with conversation, translation and learning: the Chinese Room. This thought experiment imagines a person passing words, or perhaps phrases or sentences, called the ‘tokens’, into the Chinese Room. An operative looks them up in a large dictionary or some similar reference book or ‘look-up table’. The operative passes the answer, or the translation, or the learning, out as another ‘token’, there seeming to be no intelligence or consciousness involved, only what is in effect an automaton.
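The Chinese Room’s look-up table is easily caricatured in code; the entries below are invented for illustration, and the point is that the ‘operative’ needs no understanding at all:

```python
# The Chinese Room as a look-up table: tokens in, tokens out,
# no understanding anywhere. The entries are invented for illustration.
LOOK_UP_TABLE = {
    "你好": "hello",
    "猫坐在垫子上": "the cat sat on the mat",
}

def operative(token):
    """Pass a token in; the operative consults the table and passes one out."""
    return LOOK_UP_TABLE.get(token, "??")  # no entry, no 'thought'

print(operative("你好"))  # 'hello'
print(operative("自由"))  # '??': 'freedom' is not in the table
```

Notice that the abstract token, ‘freedom’, is exactly where the table fails, which anticipates the argument below about cats, mats and democracy.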

However, it does raise questions about the operative; do they have any taste or ethics? Could they or should they be subject to Asimov’s Three Laws of Robotics? Is such an operative even possible? Is the operative merely another Chinese Room inside the Chinese Room or a way of disguising an algorithm as a human operative? Would the Chinese Room pass the Turing Test?

Human Understanding and the Limits of Machine Interpretation

Incidentally, in the film The Enigma of Kaspar Hauser, about a foundling, a boy with no past, set in Germany in the early nineteenth century, the eponymous hero is asked how to tell the villager who always tells the truth from the villager who always lies. Instead of applying deductive logic, Kaspar offers a simple, childlike answer from his unique perspective: he would ask the villager, ‘Are you a tree frog?’. His innocence allows him to see things differently, and his absurd question and approach might sidestep the issue of formal logic and thus rationality and intelligence. The Turing ‘tester’ just asks, ‘Is it raining tree frogs?’, revealing how a machine may struggle to interpret common sense and the outside world in the way humans do. 

What is relevant here, however, is not a generic human ‘tester’ but a human learner wanting to be taught. Could this learner tell the difference between a human teacher and an artificial one, GenAI in this case? It depends, of course, on the learner’s expectations of pedagogy. If the learner expected a didactic or transmissive pedagogy, GenAI could give a very competent lecture, essay, summary or slide deck, ‘hallucinations’ notwithstanding.

If, on the other hand, the learner expected something discursive, something that engaged with them personally and individually, building on what they already knew, correcting their misunderstandings, using a tone and terms familiar to them, then ‘raw’ GenAI would struggle. This is even before considering the added dimension of emotional intelligence, meaning recognising when the learner is tired, frustrated, bored, upset or in need of a comfort break or some social support.

Language for Reasoning

Early AI and Challenges in Language Learning

Let’s draw on two early efforts. PLATO, from around 1960, was a computer-based learning system using ‘self-paced learning, small cycles of feedback and recorded traces of incremental progress’ (Cope & Kalantzis, 2023:4), showing that simple didactic teaching was possible, however crudely, very early on. Additionally, in about 1966, ELIZA, one of the earliest natural language processing programs, provided non-directive psychotherapy, that is, psychotherapy led by the client, not by the therapist. Psychotherapy led by the client’s problems or constructs might have translated into non-directive or learner-centred pedagogy: heutagogy, perhaps, self-directed learning.
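ELIZA’s non-directive manner can be caricatured in a few lines: simple pattern matching that reflects the client’s own words back as questions. This is a sketch of the technique, not Weizenbaum’s original DOCTOR script:

```python
# An ELIZA-style reflection: pattern matching, not understanding.
# A sketch of the technique, not Weizenbaum's original DOCTOR script.
import re

RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {}?"),
]

def respond(utterance):
    """Reflect the client's words back; fall back to a non-directive prompt."""
    for pattern, template in RULES:
        match = pattern.match(utterance)
        if match:
            return template.format(match.group(1).rstrip("."))
    return "Please go on."  # the non-directive default

print(respond("I feel anxious about exams"))
print(respond("The weather is dull"))
```

The conversation is led entirely by the client’s own tokens, which is precisely why it seemed non-directive, and precisely why there is no understanding behind it.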

So, how does this relate to learning a language? Curiously, GenAI is based on the so-called large language models, and the medium for exploring intelligence seems to be the conversation, certainly not any IQ test!

Learning a language, even our own mother tongue, from any kind of computer is likely to be tricky. Firstly, it is difficult because computers lack body language, hand gestures and facial expressions.

Plurilingual Societies 

Then, in plurilingual societies such as South Africa, or even most modern societies, we have code switching, the switching between languages, even within individual sentences. There are also potential problems with language registers, ranging from frozen, formal, consultative, casual, to intimate. In a monocultural society, these should be straightforward. However, in multicultural societies, characterised by different norms, speakers may gravitate toward the more formal or the less formal; there can be uncertainty, confusion and upset. These are a kind of ‘cultural dimension’ that we will explore later, suggesting there is no easy correspondence between languages.

Euphemisms, Neologisms and Internet Language

Then we have euphemisms, puns and double entendre, not meaning what they say, and hyperbole and sarcasm, sometimes meaning the opposite of what they say. Furthermore, we have humour in general, but black humour in particular, but why ‘black’? What is it about blackness? We have neologisms, new words from nowhere, sometimes only fleeting, occasionally more durable, skeuomorphs, new meanings from old words, and acronyms, especially those from the internet and World Wide Web. All these pose problems for learners, who need to understand the cultural context and current culture. Similarly, problems arise for GenAI, especially when it always lags behind human understanding and skims across the surface, missing human nuances. 

Community Languages and Cultural Assimilation

We also have subversive, perhaps rebellious, perhaps secretive languages. For example, Polari, the one-time argot of the London gay theatre community, derived partly from Romani, and Cockney rhyming slang, historically from London’s East End, based on a strict mechanism which, for example, gets you from ‘hat’ via ‘tit-for-tat’ to ‘titfer’ or from ‘look’ via ‘butcher’s hook’ to ‘butchers’, so ‘can I have a butchers at your titfer?’.
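That ‘strict mechanism’ is regular enough to sketch in code: substitute the rhyming phrase, then clip off the rhyming word; only the final phonetic respelling (‘tit-for’ spoken as ‘titfer’) happens by ear rather than by rule. The two entries are the examples above:

```python
# Cockney rhyming slang as a two-step mechanism:
# word -> rhyming phrase -> clip off the rhyming final word.
# The last step, phonetic respelling ('tit-for' spoken as 'titfer'),
# happens by ear, not by rule, which is part of what keeps the
# slang the property of its community.
RHYMES = {
    "hat": "tit-for-tat",
    "look": "butcher's hook",
}

def clip(phrase):
    """Drop the rhyming final word and join what remains."""
    parts = phrase.replace("-", " ").split()
    return "".join(parts[:-1])

def to_slang(word):
    return clip(RHYMES[word])

print(to_slang("hat"))   # 'titfor', spoken as 'titfer'
print(to_slang("look"))  # "butcher's", spoken as 'butchers'
```

The mechanical half is trivially computable; the communal half, knowing which rhymes are current and how the clipped form is actually said, is exactly the cultural knowledge a chatbot lacks.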

There is also back slang, which forms a vocabulary from words spelt backwards: in Scotland, ‘Senga’ for Agnes. None of these examples is necessarily accessible, inclusive or open. Two textspeak examples make the same point: Arabish, the messaging language using a European keyboard for Arabic sounds, and Mxlish, the one-time language of South African teenagers on the messaging platform MXit, both with enormous footprints. 

Each of these, in its own way, is the property of a particular community or culture, perhaps waiting to be appropriated, ridiculed, sanitised or ignored by others, and eventually, perhaps, to be ‘taught’, the kiss of death.

In fact, we could argue that learning these languages is an integral part of acceptance and assimilation into a defined community, in just the same way as talking about differential calculus and only then talking about integral calculus is part of acceptance and assimilation into the community of mathematicians. Our point is that displaying intelligence, acquiring language, being part of a culture, having a conversation and learning a subject are all very closely intertwined and necessarily complex for strangers or chatbots to join in with.

Metaphor and Abstraction

Then we get on to the metaphor. In a quarter of an hour of a television drama, I heard ‘black people’, ‘landmark decision’, ‘high art’ and ‘wild goose chase’, none of which was literally true. I listen to ‘The Freewheelin’ Bob Dylan’, safe in the knowledge that Bob Dylan is not a bicycle. I worry about ‘raising money’, knowing this will not involve lifting the money upwards. ‘The Lord is my shepherd’, in the Psalms, does not tell me that I am a sheep. We also get bombarded with the language inherited from Aristotle, of ‘correspondences’, ‘the ship of state’, ‘the king of the jungle’ and ‘the body politic’, whilst thinking the car needs a wash, even though, being inanimate, it has no needs. As a university professor, I have two chairs, neither of which I can actually sit upon, whilst on the news, I hear that the office of the president has been tarnished, though I also hear it has just been redecorated. Confusing, isn’t it?

Parables, such as the ‘Good Samaritan’, from the Gospel of Luke, and the ‘seed falling on stony ground’, from the Gospel of Matthew, are, in fact, just extended metaphors, delivered in the hope that their meaning could be inferred by people familiar with the cultural context of their origin. People refer to the Prodigal Son, from the Gospel of Luke, with no idea of the meaning of prodigality. Such parables are perhaps meaningless to other cultures, those remote from historical Palestine. The same is true of many fables, such as ‘The Hare and the Tortoise’.

However, as all are ripped out of their cultural or historical context, the moral point is needed now to explain the parable or fable, rather than the other way round, as originally intended; nowadays, sowers, samaritans, hares and tortoises are no longer everyday items. They are, in fact, clichés, remarks bereft of meaning, another challenge for language learners and large language models. 

While metaphor takes words from the concrete to the abstract, the use of ‘literally’ seems to drag them back again, so perhaps Bob Dylan is literally freewheeling, and money is literally being raised. ‘Literally’ is, however, sometimes used for emphasis and sometimes just used weirdly. Yesterday, I heard a podcaster talking about being ‘literally gobsmacked’. Did he mean he had been smacked on the gob? Actually? Literally? For someone like me who is autistic, understanding language largely through concrete interpretation, this confusion, uncertainty and ambiguity is a daily struggle. 

Once we get away from anything as simple and concrete as ‘the cat sat on the mat’ and approach the abstract of love, democracy, freedom, race, virtue and truth, we enter our own small community where some understanding is possible inside, but little is possible outside. These concepts of love, race, democracy, freedom, virtue and truth may all have very different meanings among, say, Marxists, Buddhists, Stoics, Confucians, feminists, humanists and Calvinists, unlike cats and mats. So how can we learn about them and converse about them? And how can our large language models ever engage with them meaningfully, except in a manner reminiscent of the Chinese Room model?

Conclusion

So the conclusion, so far, is that a meaningful dialogue may just be possible across a shared culture and mother tongue, especially at the level of simple description and action; what hope, then, is there of having one with computers?

Perhaps, this reinforces the importance of keeping humans at the centre of teaching and learning. AI, no matter how sophisticated, cannot keep up with the diversity, transience and cultural complexity of language. Responsible human mediation remains essential, and we must recognise that computers will never be fast enough or flexible enough. Owning up to these limits is an ethical response in itself, not just from Avallain but across the educational AI sector and its clients. 

However, safeguards like Avallain Intelligence provide a first line of defence. This strategy for ethically and safely implementing AI in education aims to put the human element at the centre. While it cannot solve all the challenges of the evolution of language, ethics or learning, it establishes a framework to ensure that technology remains guided by human understanding, creativity and judgement, enhancing rather than replacing human agency. 

This pair of blogs, the first half and the following second half, is about language, about how understanding language is tricky for humans and even trickier for computers; it is about the medium, not the message. Understanding this might not stop people from saying or promoting nasty, harmful things, but it might perhaps prevent them from being misunderstood.


About Avallain

For more than two decades, Avallain has enabled publishers, institutions and educators to create and deliver world-class digital education products and programmes. Our award-winning solutions include Avallain Author, an AI-powered authoring tool, Avallain Magnet, a peerless LMS with integrated AI, and TeacherMatic, a ready-to-use AI toolkit created for and refined by educators.

Our technology meets the highest standards with accessibility and human-centred design at its core. Through Avallain Intelligence, our framework for the responsible use of AI in education, we empower our clients to unlock AI’s full potential, applied ethically and safely. Avallain is ISO/IEC 27001:2022 and SOC 2 Type 2 certified and a participant in the United Nations Global Compact.

Find out more at avallain.com

_

Contact:

Daniel Seuling

VP Client Relations & Marketing

dseuling@avallain.com

Is Learning Analytics More Promise Than Practice?

Learning analytics has been praised for its potential to improve teaching and learning, but can insights from virtual learning environments and other institutional systems genuinely support students, lecturers and educational managers in everyday practice? This piece examines the current evidence, implementation challenges and transferability limits, helping readers understand where learning analytics can make a real difference and where its promise may exceed its current impact.

Is Learning Analytics More Promise Than Practice?

Author: Prof John Traxler, UNESCO Chair, Commonwealth of Learning Chair and Academic Director of the Avallain Lab

St. Gallen, September 26, 2025 – Learning analytics has a long history and has been the subject of extensive research. It seems to have considerable potential, but what is it, and does it have any practical value? 

The following account is based on the research literature and structured conversations with leading researchers, and it attempts to answer these questions.

What is Learning Analytics?

Learning analytics (LA) is, in broad terms, the notion that as students increasingly learn with digital technologies and as these digital technologies are capable of capturing large amounts of data from large numbers of students, this might enable educators and education systems to be more effective or efficient. 

According to some leading researchers, learning analytics is ‘the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs’ (Viberg, Hatakka, Bälter & Mavroudi, 2018) and ‘the analysis and representation of data about learners in order to improve learning’ (Clow, 2013).

As with much freely and cheaply available data, however, we must always remember: ‘Just because it’s meaningful, doesn’t mean you can measure it; just because you can measure it, doesn’t mean it’s meaningful!’ And we should ask ourselves, if it is both meaningful and measurable, who benefits and in what ways? Is it learners, perhaps in improved attitudes, improved subject knowledge, or even improved understanding of their own learning? Or is it teachers and lecturers? Or is it educational managers and administrators, each with very different values, priorities and targets?

Additionally, from another leading researcher, Professor Rebecca Ferguson of the UK Open University, giving the keynote at the Learning Analytics Summer Institute in Singapore in 2023, there is this summary: ‘… while we’ve carved a fantastic research domain for a large number of academics and a growing number of researchers globally, we have done less well at tackling improvement of the quality of learners’ lives by making the learning experience something that is less institutional, less course based, less focused on our system of education, and more focused on the experience of learners.’

So there are some doubts within the learning analytics research community.

How Does Learning Analytics Work?

OK, so how does learning analytics work? To start with the basics, there are two dominant techniques. Firstly, predictive modelling, ‘a mathematical model is developed, which produces estimates of likely outcomes, which are then used to inform interventions designed to improve those outcomes … estimating how likely it is that individual students will complete a course, and using those estimates to target support to students to improve the completion rate.’ (Clow, 2013:7).
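To make the first technique concrete, here is a minimal, self-contained sketch of predictive modelling in the sense Clow describes: fit a simple logistic model to invented historical data (weekly VLE logins and assignments submitted) and use it to flag current students unlikely to complete. All figures, names and thresholds below are illustrative assumptions, not drawn from any real system.

```python
import math

# Hypothetical training data: (weekly VLE logins, assignments submitted)
# for past students, paired with whether they completed the course (1/0).
history = [
    ((1, 0), 0), ((2, 1), 0), ((3, 1), 0),
    ((5, 3), 1), ((6, 4), 1), ((8, 5), 1),
]

def predict(weights, bias, features):
    """Logistic model: estimated probability that a student completes."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

# Fit by plain gradient descent on the log-loss.
weights, bias, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(2000):
    for features, outcome in history:
        error = predict(weights, bias, features) - outcome
        bias -= lr * error
        weights = [w - lr * error * x for w, x in zip(weights, features)]

# Use the estimates to target support at students at risk of not completing.
current_students = [(2, 0), (7, 4)]
at_risk = [s for s in current_students if predict(weights, bias, s) < 0.5]
```

In practice such models are trained on far richer VLE data and carefully validated before any intervention is targeted; this sketch only shows the shape of the technique.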

Secondly, social network analysis (SNA), ‘the analysis of the connections between people in a social context. Individual people are nodes, and the connections between them are ties or links. A social network diagram, or sociogram, can be drawn in an online forum; the nodes might be the individual participants, and the ties might indicate replies by one participant to another’s post … interpreted simply by eye (for example, you can see whether a network has lots of links, or whether there are lots of nodes with few links).’ (Clow, 2013:11). 
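The second technique can be sketched just as briefly: build a sociogram from forum reply data and read off each participant's degree (number of links), which is the 'by eye' inspection Clow mentions made explicit. The forum data and names below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical forum data: (author of reply, author replied to) pairs.
replies = [
    ("amina", "ben"), ("ben", "amina"), ("carla", "ben"),
    ("dev", "ben"), ("amina", "carla"),
]

# Build the sociogram: nodes are participants, ties are reply links.
ties = defaultdict(set)
for source, target in replies:
    ties[source].add(target)
    ties[target].add(source)

# Degree flags hubs and peripheral participants at a glance.
degree = {node: len(links) for node, links in ties.items()}
hub = max(degree, key=degree.get)  # the most-connected participant
```

Here ‘ben’ emerges as the hub with three ties, while ‘dev’, with one, sits at the edge of the network, exactly the kind of pattern a sociogram makes visible.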

In practice, this means that the data is coming from the main academic digital workhorse, the virtual learning environment (VLE), aka the learning management system (LMS), and therein lies the problem, which we will discuss later.

Investigating Learning Analytics

Typical research questions that academics have been tackling include whether learning analytics:

  • improve learning outcomes, 
  • improve learning support and teaching, 
  • are taken up and used widely, including deployment at scale and
  • are used in an ethical way. (Viberg, Hatakka, Bälter & Mavroudi, 2018)

More recent systematic reviews have confirmed these trends. For example, Sghir, Adadi & Lahmer (2023) surveyed a decade of predictive learning analytics and concluded that although machine and deep learning approaches have become more sophisticated, they rarely translate into significant pedagogical impact. Likewise, a 2023 systematic review of learning analytics dashboards found that while dashboards are increasingly designed to support learning rather than just monitoring, their actual effects on student achievement, motivation and engagement remain limited (Kaliisa, Misiejuk, López-Pernas, Khalil, & Saqr, 2024). These findings echo the persistent ‘promise versus practice’ gap.

Typical answers, drawn from systematic reviews of the research literature, include:

‘The proposition with most evidence (35%) in LA is that LA improve learning support and teaching in higher education.  

There is little evidence in terms of improving students’ learning outcomes. Only 9% (23 papers out of all the 252 reviewed studies) present evidence in this respect. 

… there is even less evidence for the third proposition. In only 6% of the papers, LA are taken up and used widely. This suggests that LA research has so far been rather uncertain about this proposition.

… our results reveal that 18% of the research studies even mention ‘ethics’ or ‘privacy’ … This is a rather small number considering that LA research, at least its empirical strand, should seriously approach the relevant ethics.’

And, unsurprisingly, ‘… there is considerable scope for improving the evidence base for learning analytics …’ (Ferguson & Clow, 2017). 

Findings on Learning Analytics Outcomes

However, ‘the studies’ results that provide some evidence in improvements of learning outcomes focus mainly on three areas: i) knowledge acquisition, including improved assessment marks and better grades, ii) skill development and iii) cognitive gains.’ (ibid)

These authors (ibid: p108) also failed to find affective gains, meaning learners did not come to enjoy learning any more, or metacognitive gains, meaning learners did not become any better at learning; they only gained more knowledge or understood the subject better. More recent evidence (Kaliisa, Misiejuk, López-Pernas, Khalil & Saqr, 2024) supports this view: a systematic review of 38 empirical studies found that learning analytics dashboards showed at best small and inconsistent effects on student motivation, participation and achievement. This underscores that despite ongoing technological advances, affective and metacognitive benefits remain elusive.

The Practical Potential of Learning Analytics

However, the point of this blog is to tackle the relevance of this research, without going needlessly into detail, and to ask whether learning analytics has something to offer routine academic practice across educational organisations and institutions. This means asking whether the data harvested in practice from a VLE or LMS can be of practical use. The details depend on context and concrete specifics, but in general a range of issues arises.

Firstly, students in their different universities, colleges or schools interact with a variety of other institutional systems, including:

  • Plagiarism detection, attendance and access monitoring, library systems, CAA (computer-aided assessment), lecture capture, e-portfolios, student satisfaction surveys and student enrolment databases (courses, marks, etc, plus data on postcode, disability, gender, ethnicity, qualifications, etc.).
  • Plus, search engines, external content (YouTube, websites, journals, Wikipedia, blogs, etc.) and external communities (TikTok, Instagram, Facebook, Quora, WhatsApp, X, etc.).

In order to get a complete picture of student activity, data would have to be harvested, cleaned and correlated from all these different sources. Permission would have to be obtained from each of the institutional data owners. If institutional IT systems were stable enough for long enough, this might, in theory, be possible, albeit prohibitively expensive.
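As a rough illustration of what 'harvested, cleaned and correlated' means, the sketch below joins per-student records from three imagined institutional systems on a normalised student ID. All systems, IDs and values are invented; real pipelines face far messier identifiers, formats, gaps and permissions.

```python
# Hypothetical extracts from three institutional systems; note the
# inconsistent ID casing, a typical cleaning problem.
vle_logins = {"s001": 42, "s002": 3}
library_loans = {"S001": 5, "S003": 2}
attendance = {"s001": 0.9, "s002": 0.4, "s003": 0.7}

def clean(records):
    """Normalise student IDs so records from different systems line up."""
    return {sid.lower(): value for sid, value in records.items()}

# Correlate: one row per student, with None where a system has no record.
sources = (vle_logins, library_loans, attendance)
students = set().union(*(clean(r) for r in sources))
profile = {
    sid: {
        "logins": clean(vle_logins).get(sid),
        "loans": clean(library_loans).get(sid),
        "attendance": clean(attendance).get(sid),
    }
    for sid in sorted(students)
}
```

Even this toy join leaves gaps (students known to one system but not another), and it says nothing about obtaining permission from each data owner, which is the harder problem in practice.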

However, the fact that each institution has its own IT infrastructure, set up and systems, means that none of the work is transferable or generalisable; each institution would have to start from scratch. Recent case studies from UK higher education (Dixon, Howe & Richter, 2025) confirm this: although analytics can provide insights into teaching and assessment, challenges around data quality, integration and stakeholder trust often limit real-world adoption. In other words, the institutional ecosystems in which LA must operate are highly fragmented, and this lack of transferability continues to be one of the field’s most pressing barriers.

Secondly, academics would need to factor in face-to-face learning, formal and informal, in the hope that it, too, would complete the picture, balancing students with a preference for face-to-face with those with a preference for the digital. Even those with a preference for the digital may prefer to engage with institutional systems as little as possible, using their own devices and networks, learning from personal contact, social media, websites, search engines, podcasts and now AI chatbots.

Final Reflections

As a footnote, this account touches only briefly on the ethical dimensions (Misiejuk, Samuelsen, Kaliisa, & Prinsloo, 2025). Yet recent scholarship increasingly emphasises that ethics cannot be treated as an afterthought. Studies have shown that less than half of published LA frameworks explicitly address privacy or ethics (Khalil, Prinsloo & Slade, 2022). Practical guidelines for institutions (Rets, Herodotou & Gillespie, 2023) stress the need for transparency, informed consent and giving learners agency over their data. 

More critical perspectives highlight the risk that analytics reinforce inequities or institutional agendas over student wellbeing, calling for ‘responsible learning analytics’ (Khalil, Prinsloo & Slade, 2023). Others argue for idiographic approaches, analytics tailored to individuals rather than groups, to mitigate risks of bias and overgeneralisation (Misiejuk, Samuelsen, Kaliisa & Prinsloo, 2025). Together, these developments show that ethics is now central to the future of learning analytics practice.

So perhaps it is unsurprising that learning analytics has made little practical headway in the mainstream of formal education. These challenges suggest that while learning analytics holds promise, its routine application across educational institutions remains limited and requires careful, context-sensitive planning to realise its potential. 

References

Clow, D. (2013). An overview of learning analytics. Teaching in Higher Education, 18(6), 683-695.

Dixon, N., Howe, R., & Richter, U. (2025). Exploring learning analytics practices and their benefits through the lens of three case studies in UK higher education. Research in Learning Technology, 33, 3127.

Ferguson, R., & Clow, D. (2017). Where is the evidence? A call to action for learning analytics. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (LAK’17) (pp. 56-65). New York: ACM.

Kaliisa, R., Misiejuk, K., López-Pernas, S., Khalil, M., & Saqr, M. (2024). Have learning analytics dashboards lived up to the hype? A systematic review of impact on students’ achievement, motivation, participation and attitude. In Proceedings of the 14th Learning Analytics and Knowledge Conference (pp. 295-304).

Khalil, M., Prinsloo, P., & Slade, S. (2022). A comparison of learning analytics frameworks: A systematic review. In LAK22: 12th International Learning Analytics and Knowledge Conference (pp. 152-163).

Khalil, M., Prinsloo, P., & Slade, S. (2023). Fairness, trust, transparency, equity, and responsibility in learning analytics. Journal of Learning Analytics, 10(1), 1-7.

Misiejuk, K., Samuelsen, J., Kaliisa, R., & Prinsloo, P. (2025). Idiographic learning analytics: Mapping of the ethical issues. Learning and Individual Differences, 117, 102599.

Rets, I., Herodotou, C., & Gillespie, A. (2023). Six practical recommendations enabling ethical use of predictive learning analytics in distance education. Journal of Learning Analytics, 10(1), 149-167.

Sghir, N., Adadi, A., & Lahmer, M. (2023). Recent advances in predictive learning analytics: A decade systematic review (2012-2022). Education and Information Technologies, 28(7), 8299-8333.

Viberg, O., Hatakka, M., Bälter, O., & Mavroudi, A. (2018). The current landscape of learning analytics in higher education. Computers in Human Behavior, 89, 98-110.



About TeacherMatic

TeacherMatic, a part of the Avallain Group since 2024, is a ready-to-go AI toolkit for teachers that saves hours of lesson preparation by using scores of AI generators to create flexible lesson plans, worksheets, quizzes and more.

Find out more at teachermatic.com


TeacherMatic in Kenya: Insights from a One-Day Pilot with Educators

How can AI tools support teachers while respecting local contexts, infrastructure limits and professional expertise? This piece examines a TeacherMatic pilot in Kenya, where secondary school teachers explored AI-powered generators. By reflecting on practical challenges such as connectivity and curriculum alignment, the article considers how responsibly designed AI can enhance learning and promote inclusive classroom innovation.

TeacherMatic in Kenya: Insights from a One-Day Pilot with Educators

Author: Carles Vidal, MSc in Digital Education, Business Director of Avallain Lab

Kenya, August 2025 – In May 2025, the Avallain Lab, in collaboration with the Avallain Foundation, conducted a one-day pilot with Kenyan teachers to explore how generative AI tools could support them in their daily educational work. The initiative focused on TeacherMatic, Avallain’s AI toolkit for teachers, aiming to gain early insights on its suitability for the Kenyan context and identify potential improvement areas.

From Research to Pilot Design

In January 2025, Teaching with GenAI: Insights on Productivity, Creativity, Quality and Safety, an independent, research-driven report commissioned by the Avallain Group and produced by Oriel Square Ltd, was published. It explores how GenAI can enhance teaching and learning while addressing educational opportunities, challenges and ethical considerations. Building on this, the pilot translated the report’s themes into a series of sessions featuring hands-on activities for teachers. These sessions allowed participants to discuss and apply the report’s ideas in practical activities and add new perspectives to the conversation.

With this purpose in mind, twelve local secondary school teachers, representing both public and private institutions, were selected to provide a sample consistent with the previous study. 

In preparation for the pilot, the Kenyan curriculum was incorporated into TeacherMatic’s curriculum alignment generation options so that the participants could use it to inform their content requests. Since a phased implementation of a new curriculum is currently underway in Kenya, both the existing and the upcoming versions were included to provide teachers with all possible options in this transition context.

The pilot was organised in three parts. It began with a focus group designed to capture participants’ initial impressions and existing knowledge of GenAI tools, while also introducing them to TeacherMatic. This was followed by breakout sessions, where smaller groups of teachers engaged in hands-on exploration of the tool. The day concluded with a plenary session, bringing everyone together to share insights and provide feedback.

Infrastructure Challenges

During the initial focus group, teachers described existing infrastructure challenges relating to both the availability of devices and the reliability of internet connections, as part of the general context of their teaching practices. Connectivity was identified as a critical barrier, with ‘slow or unreliable internet and, in some cases, complete service interruptions lasting hours’ being common in many public institutions. According to the group, while private schools tend to experience fewer connectivity issues, many public schools continue to face significant barriers due to their reliance on intermittent mobile networks. 

Participants also reported limited access to devices, particularly in public schools, where ‘only a few computers are available and shared among all teachers’. Most public schools operate under centralised device policies, with limited computer labs and few, if any, classroom-based devices. In this context, mobile phones become the primary means of accessing tools such as educational technologies.

Interactive Breakout Sessions

In the breakout sessions, teachers explored a curated set of TeacherMatic generators, including ‘Lesson Plan’, ‘Multiple Choice Questions’, ‘Debate’, ‘True or False’, ‘Learning Activities’ and ‘Inspiration!’. Participants accessed TeacherMatic on computer devices, tablets and mobile phones and worked in Swahili and English during discussions and content generation.

During a breakout session, participants explore TeacherMatic generators together on a mobile device.

During the sessions, the teachers engaged freely with the generators, exchanging ideas, debating approaches and sharing expectations and concerns. Participants expressed strong enthusiasm for the potential of using GenAI tools in their classrooms, viewing them as a way to enhance teaching resources and remain ahead of their students in adopting this technology.

After the hands-on sessions, participants reconvened for a larger group discussion to share how they perceived TeacherMatic and, more broadly, GenAI tools, including what aspects attracted them, what concerns they had and what support or training they would need for effective adoption.

Findings and Reflections

The final group discussion revealed a general agreement on the following areas:

  • Time-saving benefits: Participants valued the speed and quality of the generated content and identified significant reductions in classroom preparation time, which they felt would allow them to improve the delivery of their lessons. As one teacher said, ‘If we can save time on planning, we can spend more time on students.’
  • Curriculum alignment: Although both current Kenyan curricula were included in TeacherMatic, participants saw opportunities for even more detailed curriculum integration, highlighting the need for further content localisation down to the most detailed level of curriculum implementation.
  • Creativity and pedagogical innovation: Teachers expressed a strong need for multimodal learner-facing content, such as clips or visuals, to help explain complex topics, ‘like 3D geometry’. With learners already using AI creatively, some felt that text-based outputs alone were insufficient. As one participant explained, ‘You can’t teach about the inside of a pyramid with text.’ 
  • AI literacy training programs for teachers: Teachers also voiced the importance of receiving training in GenAI so that students do not outpace them in its use. As one teacher expressed, ‘Let’s take this AI to the classroom… show them that their teachers are also up-to-date.’
  • Reassurance that GenAI tools are not a replacement for teachers: Participants stressed the importance of teachers retaining full agency in creating and delivering learning resources, especially when validating content intended for their students.
Teachers and facilitators discuss key findings from the TeacherMatic Kenyan pilot, highlighting opportunities and challenges in classroom use.

Early Insights and Broader Lessons

While this was only a one-day pilot with a small group of teachers, it offered valuable, early insights into both the opportunities and barriers to adopting GenAI in Kenyan classrooms. Some challenges, like limited devices and connectivity, may be more specific to the region and require systemic solutions, but others, such as the need for curriculum-aligned content and teacher training, echo what we have seen elsewhere.

Thank you to Martina Amoth (CEO, Avallain Foundation East Africa), Robert Ochiel (Avallain Lab Intern) and all the participants of the Kenyan TeacherMatic pilot for sharing their time, reflections and experiences.

These shared lessons show that even small-scale pilots can guide product development and spark ideas for making GenAI a meaningful, inclusive tool for educators, regardless of where they teach.



Bringing Mobile Learning Back with AI, Context and Expertise

What if mobile learning had the intelligence and context it lacked 25 years ago? This piece revisits the rise and fall of early mobile learning projects and considers how the convergence of artificial intelligence, contextual mobile data and educational expertise could support more responsive and personalised learning today.

Bringing Mobile Learning Back with AI, Context and Expertise

Author: Prof John Traxler, UNESCO Chair, Commonwealth of Learning Chair and Academic Director of the Avallain Lab

St. Gallen, July 28, 2025 – Around 25 years ago, many members of the European edtech research community, myself included, were engaged in projects, pilots and prototypes exploring what was then known as ‘mobile learning’. This referred, roughly and obviously, to learning with mobile phones, mostly 3G, at the dawn of the smartphone era. Learners could already access all types of learning available on networked desktops in their colleges and universities, but they were now freed from their desktops. The excitement, however, was around all the additional possibilities.

One of these was ‘contextual learning,’ meaning learning that responded to the learner’s context. Mobile phones knew where they were, where they had been and what they had been doing. These devices could capture images, video and sound of their context, including both the user and their surroundings. This meant they could also understand and know their user, the learner.

So, to provide some examples:

  • Walking around art galleries like the Uffizi and heritage sites like Nottingham Castle, learners with their mobile phones could stop at a painting randomly and receive a range of background information, including audio, video and images. The longer they stayed, the more they would receive. Based on other paintings they had lingered at, they could get suggestions, explanations and perspectives on what else they might like and where else they could go.
  • Augmented reality on mobile phones meant that learners standing in Berlin using their mobile phone as a camera viewfinder could see the Brandenburg Gate, but with the now-gone Berlin Wall interposed perfectly realistically as they walked up to and around it. Similarly, they could see Rembrandt’s house in Amsterdam. Learners could also walk across the English Lake District and see bygone landforms and glaciers, or engage in London murder mysteries, looking at evidence and hearing witnesses at various locations.
  • Recommender systems on mobile phones analysed learners’ behaviours, achievements and locations to suggest the learning activity that would suit them best based on their history and context. These recommendations could be linked to assignments, resources and colleagues on their university LMS, providing guidance and practical advice. A Canadian project, for example, applied this approach to tourism.
  • Using a system like Molly Oxford on their mobile phones, learners could be guided to the nearest available loan copy of a library book they wanted. They could also be given suggestions based on public transport, wheelchair accessible footpaths and library opening hours.
  • Trainee professionals, such as physiotherapists or veterinary nurses, in various projects across Yorkshire, could be assessed while carrying out a healthcare procedure in ‘real-life’ practice. Their mobile phones would capture the necessary validation and contextual data to ensure a trustworthy process.
  • Some early experiments, with Bluetooth and other forms of NFC (near-field communication), allowed passers-by or students to pick up comments or images hanging in discrete locations, such as a subway or corridor on a university campus, serving as sign-posting or street art. 

These pilots and projects implemented situated, authentic and personalised learning as aspects of contextual learning, and espoused the principles of constructivism and social constructivism. This was only possible as far as the contemporary resources and technologies permitted. They did not, however, encourage or allow content to be created, commented on, or contributed to by learners, only consumed by them. Also, they usually only engaged with learners on an individual basis, not supporting interaction or communication among learners, even those learning the same thing, at the same place and at the same time.

So what went wrong? Why aren’t such systems widespread across communities, galleries, cultural spaces, universities and colleges any more? And how have things changed? Could we do better now?

The Downfall of Mobile Learning: What Went Wrong?

Mobile phone ownership was not widespread two decades ago, and popular mobile phones were not as powerful as they are today. The ‘apps economy’ had not taken off. This meant that projects and pilots had to develop all software systems from scratch and get them to interoperate. They also had to fund and provide the necessary mobile phones for the few learners involved.

Once the pilot or project and its funding had finished, its ideas and implementation were not scalable or sustainable; they were unaffordable. Pilots and projects were usually conducted within formal educational institutions among their students. Also, evaluation and dissemination focused on technical feasibility, proof-of-concept and theoretical findings. They rarely addressed outcomes that would sway institutional managers and impact institutional performance metrics. As a result, these ideas remained at the optional margins of institutional activity rather than becoming part of the regulated business of courses, qualifications, assessments and certificates. Nor was there a business model to support long-term adoption.

In fairness, we should also factor in the political and economic climate at the end of the 2000s. The ‘subprime mortgage’ crisis and the ‘bonfire of the quangos’ depleted the political goodwill and public finances for speculative development work, work that had previously and implicitly assumed the ‘diffusion of innovations’ into mainstream provision, a ‘trickle down’ that would take these ideas from pilot project to production line.

The Shift in Mobile Learning: What Changed?

Certainly not the political or economic climate. But mobile phones are now familiar, ubiquitous and powerful, and so is artificial intelligence (AI). Both of these technologies sit outside educational institutions rather than being confined within them.

These earlier pilots and projects were basically ‘dumb’ systems, with no ‘intelligence’, drawing only on information previously loaded into their closed systems. Now we have ‘intelligence’: we have AI, and we have AI chatbots on mobile phones. However, AI currently lacks context: it cannot know or respond to the location, history, activity or behaviour of the learner and their mobile phone. Many current AI applications and chatbots are also stateless, retaining no memory across interactions, which poses a further challenge to any continuity.
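
What restoring continuity might involve can be sketched in a few lines. Since each request to a stateless chatbot starts from nothing, the caller must store the conversation and resend it every time. Everything here is illustrative: `call_model` is a stand-in for whatever AI service might actually be used, not a real API.

```python
# A stateless model call forgets everything between requests; continuity
# has to be reconstructed by the caller on every turn.

def call_model(messages):
    # Placeholder: a real implementation would send `messages` to an AI
    # service and return its reply. Here we just report the turn count.
    turns = sum(1 for m in messages if m["role"] == "user")
    return f"(reply to turn {turns})"

class ConversationMemory:
    """Client-side memory wrapped around a stateless chatbot."""

    def __init__(self, system_prompt):
        self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, user_text):
        self.messages.append({"role": "user", "content": user_text})
        reply = call_model(self.messages)  # full history re-sent each time
        self.messages.append({"role": "assistant", "content": reply})
        return reply

chat = ConversationMemory("You are a patient museum guide.")
chat.ask("Tell me about this fresco.")
chat.ask("Who painted it?")  # only answerable because the history is replayed
```

The point of the sketch is that ‘memory’ lives entirely outside the model, which is exactly the gap the earlier closed systems never had to bridge.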

The Possibilities of Mobile Learning: Could We Do Better Now?

Today’s network technologies can enable distributed, connected contribution and consumption, writing as well as reading. These might realise more of the possibilities of constructivism and social constructivism. They could enable educational systems to learn about and respond to their individual learners and their environment, connecting groups of learners and showing them how to support each other14.

So, is there the possibility of convergence? Is it possible to combine the ‘intelligence’ of AI, the ‘memory’ of databases and the context provided by mobile phones, including both the learner and their environment? Could this be merged and mediated by educational expertise, acting as an interface between the three technologies, filtering, selecting and safeguarding?

What might this look like? We could start by adding ‘intelligence’ and ‘memory’ to our earlier examples.

The Future of Mobile Learning: What Could it Look Like? 

In terms of formal learning, our previous examples of the Uffizi Galleries, the Lake District, the Berlin Wall and Nottingham Castle are easy to extrapolate and imagine. Subject to a mediating educational layer, learners would each be in touch with other learners, helping each other in personalised versions of the same task. They could receive background information, ideas, recommendations, feedback and suggestions, cross-referenced with deadlines, schedules and assignments from their university LMS, all based on the cumulative history of their individual and social interactions and activities. 

When it comes to community learning or visitor attractions, systems could be created that encourage interactive, informal learning: for example, a living local history or a 3D community poem spread through the air, held together by links and folksonomies15, perhaps using tags to connect ideas, a living virtual world overlaying the real one. These systems could also support more prosaic, purely educational applications, combining existing literary, artistic or historical sources with personal reactions or recollections.
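
The folksonomy idea is simple to sketch in code: rather than a fixed database structure, user-chosen tags index the contributions, and following a tag stitches them together. The coordinates, texts and tags below are all invented for illustration.

```python
from collections import defaultdict

tag_index = defaultdict(set)   # tag -> ids of tagged contributions
contributions = {}             # id -> (location, text)

def contribute(cid, location, text, tags):
    """Store a geo-located contribution under whatever tags the user chose."""
    contributions[cid] = (location, text)
    for tag in tags:
        tag_index[tag.lower()].add(cid)   # no fixed schema, tags are free-form

contribute(1, (52.95, -1.15), "My grandmother worked at this mill.", ["memories", "industry"])
contribute(2, (52.95, -1.15), "Verse three of the community poem.", ["poem"])
contribute(3, (52.96, -1.14), "The mill closed in 1968.", ["industry", "history"])

# Following a tag link stitches contributions into the 'living virtual world'.
related = sorted(tag_index["industry"])
```

Because the tags are defined by users on the fly, the navigation structure of the ‘virtual world’ grows out of the community’s own vocabulary rather than a designer’s schema.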

Technically, this means accessing the mobile phone’s contextual data, and sometimes other simple mobile data communications, to establish context. It also means querying a relational database16 to retrieve history and constraints, and perhaps an institutional LMS to retrieve assignments, timetables and course notes. AI can then be prompted to bring these together for some educational activity. Certainly, a proof of concept is eminently feasible. The expertise and experience of the three core disciplines are still out there and only need to be connected, tasked and funded.
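
As a rough sketch of that technical recipe, and nothing more: contextual data says where the learner is, a relational database supplies their history and constraints, and the two are combined into a prompt an AI chatbot could act on. The table, fields, student name and prompt wording below are all invented; no real LMS or AI service is represented.

```python
import sqlite3

# 'Memory': a stand-in for the institutional LMS / relational database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE assignments (student TEXT, task TEXT, deadline TEXT)")
db.execute("INSERT INTO assignments VALUES ('amina', 'Sketch the gatehouse', '2025-07-01')")

def build_prompt(student, location):
    # Retrieve the learner's current task and constraints from the database.
    task, deadline = db.execute(
        "SELECT task, deadline FROM assignments WHERE student = ?", (student,)
    ).fetchone()
    # Combine the phone's contextual data with that record into a prompt
    # that an AI chatbot could then be asked to act on.
    return (f"The learner is at {location}. Their current task is '{task}' "
            f"(due {deadline}). Suggest a relevant next activity.")

prompt = build_prompt("amina", "Nottingham Castle gatehouse")
```

The educational layer envisaged above would sit around `build_prompt`, filtering, selecting and safeguarding what the three technologies exchange.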

Conclusions and Concerns

This piece sketches some broad educational possibilities once we enlist AI to support various earlier kinds of contextual mobile learning. Specific implementations and developments must address considerable social, legal, ethical and regulatory concerns and requirements. The earlier generation of projects might have already worked with these, privacy and surveillance being the obvious ones. Still, AI adds an enormous extra dimension to these, and there are other concerns like digital over-saturation, especially of children and vulnerable adults.

Nonetheless, this convergence of AI, contextual mobile data and educational expertise promises a future where learning is not confined to traditional settings but is a fluid, intelligent and deeply embedded aspect of our daily lives, making education more effective, accessible and aligned with individual and societal needs.


Mobile Learning & GenAI for the Less Privileged, Refugees & the Global South

How can mobile learning and GenAI reach those traditionally left out of educational innovation?

In a recent episode of Silver Lining for Learning, an award-winning webinar and podcast series, Prof. John Traxler joined a panel to discuss how mobile learning and generative AI can support less privileged learners, including refugees and communities in the Global South. 

The episode, ‘Mobile Learning & GenAI for the Less Privileged, Refugees & the Global South,’ builds on many of the questions raised in this article. It explores how mobile technologies have and haven’t fulfilled their potential, and what role GenAI might now play in addressing longstanding educational inequalities.

Watch the full episode:


  1. There is considerable literature, including:
    Special editions: Research in Learning Technology, Vol. 17, 2009. 
    Review articles: Kukulska-Hulme, A., Sharples, M., Milrad, M., Arnedillo-Sanchez, I. & Vavoula, G. (2009). Innovation in mobile learning: A European perspective. International Journal of Mobile and Blended Learning, 1(1), 13–35.
    Aguayo, C., Cochrane, T. & Narayan, V. (2017). Key themes in mobile learning: Prospects for learner-generated learning through AR and VR. Australasian Journal of Educational Technology, 33(6).
    Edited books: Traxler, J. & Kukulska-Hulme, A. (Eds) (2015), Mobile Learning: The Next Generation, New York: Routledge. (Also available in Arabic, 2019.) 
    More philosophically, Traxler, J. (2011) Context in a Wider Context, Medienpädagogik, Zeitschrift für Theorie und Praxis der Medienbildung. The Special Issue entitled Mobile Learning in Widening Contexts: Concepts and Cases (Eds.) N. Pachler, B. Bachmair & J. Cook, Vol. 19, pp. 1-16. ↩︎
  2. Meaning, ‘real-life’ settings. ↩︎
  3. Meaning, ‘real-life’ tasks. ↩︎
  4. Meaning, learning tailored to each separate individual learner.  ↩︎
  5. Educational technology researchers distinguish between what teachers say, what they ‘espouse’, and what they actually do, what they ‘enact’, usually something far more conservative or traditional. ↩︎
  6. An educational philosophy based on learners actively building their knowledge through experiences and interactions. ↩︎
  7. A variant of constructivism that believes that learning is created through social interactions and through collaboration with others. For an excellent summary of both, see: https://www.simplypsychology.org/constructivism.html  ↩︎
  8. For an explanation, see: https://smartasset.com/investing/the-economics-of-mobile-apps ↩︎
  9. A common term among computing professionals, referring to whether or not different systems, such as hardware, software, applications and peripherals, will actually work together, or whether it would be more like trying to fit a UK plug into an EU socket.  ↩︎
  10. A more detailed account is available at: https://medium.com/@Jisc/what-killed-the-mobile-learning-dream-8c97cf66dd3d ↩︎
  11. For an explanation, see: https://en.wikipedia.org/wiki/Subprime_mortgage_crisis ↩︎
  12. For an explanation, see: 2010 UK quango reforms – Wikipedia, which impacted Becta, the LSDA, Jisc and other edtech supporters.  ↩︎
  13. For an explanation, see: https://en.wikipedia.org/wiki/Diffusion_of_innovations ↩︎
  14. The physical or geographical proximity that the location awareness of neighbouring mobile phones can detect could be extended to embrace social proximity, meaning learners who are socially connected, or educational proximity, meaning learners working on similar tasks. The latter idea connects to the notions of ‘scaffolding’, ‘the more knowledgeable other’ and ‘the zone of proximal development’ of the theorist Vygotsky. For more, see: https://en.wikipedia.org/wiki/Zone_of_proximal_development ↩︎
  15. Databases conventionally have a fixed structure, for example, personal details based on forename, surname, house name, street name and so on, with no choice. Folksonomies, by contrast, are defined by the user, often on the fly. For example, tagging with labels such as ‘people I like’, ‘people nearby’, ‘people with a car’. Diigo, a social bookmarking service, uses tagging to implement a folksonomy. ↩︎
  16. Relational databases, unlike ‘flat’ databases based solely on a file, capture relationships, such as a teacher working in a college or a student enrolling in a course, and include all the various individual teachers, courses, students and colleges. ↩︎

About Avallain

At Avallain, we are on a mission to reshape the future of education through technology. We create customisable digital education solutions that empower educators and engage learners around the world. With a focus on accessibility and user-centred design, powered by AI and cutting-edge technology, we strive to make education engaging, effective and inclusive.

Find out more at avallain.com

About TeacherMatic

TeacherMatic, a part of the Avallain Group since 2024, is a ready-to-go AI toolkit for teachers that saves hours of lesson preparation by using scores of AI generators to create flexible lesson plans, worksheets, quizzes and more.

Find out more at teachermatic.com

_

Contact:

Daniel Seuling

VP Client Relations & Marketing

dseuling@avallain.com

Avallain and Educate Ventures Research Collaborate to Deliver Robust, Real-World Guidance on Ethical AI in Education

‘From The Ground Up’ is a new report and research-based framework designed in line with Avallain Intelligence, our strategy for the responsible use of AI in education, and built with and for educators and institutions.

Avallain and Educate Ventures Research Collaborate to Deliver Robust, Real-World Guidance on Ethical AI in Education

St. Gallen, June 2025 – As generative AI transforms classrooms and educational workflows, clear, actionable ethical standards have never been more urgent. This is the challenge addressed in ‘From the Ground Up: Developing Standard Ethical Guidelines for AI Implementation in Education’, a new report developed by Educate Ventures Research in partnership with Avallain.

Drawing on extensive consultation with educators, multi-academy trusts, developers and policy specialists, the report introduces a practical framework of 12 ethical controls. These are designed to ensure that AI technologies align with educational values, enhance rather than replace human interaction and remain safe, fair and transparent in practice.

Unlike abstract policy statements, ‘From the Ground Up’ bases its guidance in classroom realities and product-level design. It offers publishers, institutions, content service providers and teachers a path forward that combines innovation with integrity.

‘Since the beginning, we have believed that education technology must keep the human element at its core. This report reinforces that view by placing the experiences of teachers and learners at the centre of how we build, evaluate and implement AI. Our role is to ensure that innovation never comes at the cost of well-being, agency or trust, but instead strengthens the human connections that make learning meaningful.’ – Ursula Suter and Ignatz Heinz, Co-Founders of Avallain.

A Framework Informed By The People It Serves

Developed over six months of research, case analysis and structured stakeholder engagement, the report draws on input from multi-academy trust leaders and from expert panels of educators, technologists and AI ethicists.

The result is a framework of 12 ethical controls:

  1. Learning Outcome Alignment
  2. User Agency Preservation
  3. Cultural Sensitivity and Inclusion
  4. Critical Thinking Promotion
  5. Transparent AI Limitations
  6. Adaptive Human Interaction Balance
  7. Impact Measurement Framework
  8. Ethical Use Training and Awareness
  9. Bias Detection and Fairness Assurance
  10. Emotional Intelligence and Well-being Safeguards
  11. Organisational Accountability & Governance
  12. Age-Appropriate & Safe Implementation

Each control includes a definition, challenges, mitigation strategies, implementation guidance and relevance to all key education stakeholders. The result is a practical, structured set of tools, not just principles.

‘This report exemplifies our mission at Educate Ventures Research and Avallain: to bridge the gap between academic research and real-world educational technology. By working closely with teachers, school leaders and developers, we’ve created ethical controls that are both grounded in evidence and practical in use. Our goal is to ensure that AI in education is not only effective, but also transparent, fair and aligned with the human values that define great teaching.’ – Prof. Rose Luckin, CEO of Educate Ventures Research and Avallain Advisory Board Member.

Recommendations That Speak To Real-World Risks

Some of the report’s most relevant insights include:

User Agency Preservation
AI should support, not override, the decisions of teachers and the autonomy of learners. Design should prioritise flexibility and transparency, allowing human control and informed decision-making.

Cultural Sensitivity and Inclusion
The report calls for continuous audits, bias detection and cultural representation in AI training data and outputs, with robust mechanisms for local adaptation.

Transparent AI Limitations
AI systems must explain what they can and cannot do. Visual cues, plain-language disclosures and in-context explanations all help users manage expectations.

Adaptive Human Interaction Balance
The rise of AI must not mean the erosion of dialogue. Thresholds for teacher-student and peer-to-peer interaction should be built into implementation plans, not left to chance.

Impact Measurement Framework
The report calls for combining short-term performance data and long-term qualitative indicators to assess whether AI tools genuinely support learning.

Relevance Across The Education Ecosystem

For Publishers

The report’s recommendations align closely with educational publishers’ strategic goals. Whether using AI to accelerate content production, localise materials, or personalise resources, ethical deployment requires more than efficiency. It requires governance structures that protect against bias, uphold academic rigour and enable human review. Solutions like Avallain Author already embed editorial control into AI-supported workflows, ensuring quality and trust remain paramount.

For Schools And Institutions

From primary schools to higher and vocational education providers, the pressure to adopt AI is growing. The report provides practical guidance on how to do so responsibly. It outlines how to set up oversight mechanisms, train staff, communicate transparently with parents and evaluate long-term impact. For institutions already exploring AI for tutoring or assessment, the controls offer a roadmap to stay aligned with safeguarding, inclusion and pedagogy.

For Content Service Providers

Agencies supporting publishers and ministries with learning design, editorial production and localisation will find clear implications throughout the report. From building inclusive datasets to ensuring transparent output verification, ethical AI becomes a shared responsibility across the value chain. Avallain’s technology, driven by Avallain Intelligence, enables these partners to apply ethical filters and maintain editorial standards at scale.

For Teachers

Educators are frontline decision makers. They shape how AI is used in the classroom. The report explicitly calls for User Agency Preservation to be maintained, Ethical Use Training and Awareness to be prioritised and teacher feedback to guide AI evolution. Solutions within Avallain’s portfolio, such as TeacherMatic, are already embedding these principles by offering editable outputs, contextual prompts and transparency in how each suggestion is generated.

The Role Of Avallain Intelligence: Putting Ethical Controls Into Action

Avallain Intelligence is Avallain’s strategy for the ethical and safe implementation of AI in education and the applied framework that aims to integrate these 12 ethical controls. It embeds principles such as transparency, fairness, accessibility and agency within the core infrastructure of Avallain’s digital solutions.

This includes:

  • Explainable interfaces that clarify how AI decisions are made.
  • Editable content outputs that preserve user control.
  • Cultural customisation features for inclusive learning contexts.
  • Bias Detection and Fairness Assurance systems with review mechanisms.
  • Built-in feedback loops to refine AI based on classroom realities.

Avallain Intelligence was developed to meet and exceed the expectations outlined in ‘From the Ground Up’. This means publishers, teachers, service providers and institutions using Avallain tools are not starting from scratch but are already working within an ecosystem designed for ethical AI.

The work of the Avallain Lab, our in-house academic and pedagogical hub, continuously informs these principles and ensures that every advancement is grounded in research, ethics and real classroom needs.

‘The insights and methodology that underpin this report reflect the foundational work of the Avallain Lab and our commitment to research-led development. By aligning ethical guidance with practical use cases, we ensure that Avallain Intelligence evolves in direct response to real pedagogical needs. This collaboration shows how rigorous academic frameworks can inform responsible AI design and help create tools that are not only innovative but also educationally sound and trustworthy.’ – Carles Vidal, Business Director of the Avallain Lab. 

Download The Executive Version

This is a practical roadmap for anyone seeking to navigate the opportunities and risks of AI in education with clarity, confidence and care.

Whether you are a publisher exploring AI-powered content workflows, a school leader integrating new technologies into classrooms or a teacher looking for trusted guidance, ‘From the Ground Up’ offers research-based recommendations you can act on today.

Click here to download the executive version of the report to explore how the 12 ethical controls can help your organisation adopt AI responsibly, support educators, protect learners and remain committed to your educational mission.


About Educate Ventures Research

Educate Ventures Research (EVR) is an innovative boutique consultancy and training provider dedicated to helping education organisations leverage AI to unlock insights, enhance learning and drive positive outcomes and impact.​

Its mission is to empower people to use AI safely to learn and thrive. EVR envisions a society in which intelligent, evidence-informed learning tools enable everyone to fulfil their potential, regardless of background, ability or context. Through its research, frameworks and partnerships, EVR continues to shape how AI can serve as a trusted companion in teaching and learning.

Find out more at educateventures.com


Thinking about the Edtech Echo Chamber

Educational technology is often seen as a straightforward solution to teaching challenges. Yet, beneath the surface lies a complex dynamic. Who ultimately shapes educational technology? This piece explores the proximity between those who buy and sell edtech and the gap between these decision-makers and those who actually use it. This imbalance influences both innovation and pedagogy. 

Thinking about the Edtech Echo Chamber

Author: Prof John Traxler, UNESCO Chair, Commonwealth of Learning Chair and Academic Director of the Avallain Lab

Since joining Avallain and whilst continuing to work as a university professor, I have been reflecting on the nature of the edtech environment. My perspective is not only very generalised, subjective and impressionistic; it also overlooks major disturbances, most obviously the global pandemic, the alleged ‘pivot’ to digital learning and the global explosion of artificial intelligence, with its haphazard adoption in education.

Specifically, I have been thinking about the small informal community of people within the organisations of the education sectors who design, develop and sell dedicated edtech systems, and the other people who buy, install and maintain such systems. On behalf of their respective organisations, they are engaged in transactions that are highly focused, highly technical, highly complex and highly responsible. The members of this informal community, both ‘buyers’ and ‘sellers’, must, given the enormous expertise required, share very similar backgrounds, values, language, ideas and influential personalities in order to be effective. Their experience suggests that in their careers they can change from ‘sellers’ to ‘buyers’ and back again several times.

I suspect that they share a kind of groupthink that seems, certainly in their terms, to be productive, objective and transparent. By this, I mean that the buyers and sellers agree on what they should be discussing (and what not to discuss). This groupthink determines the direction of procurement and consequently focuses on making existing products and systems faster, bigger, cheaper, more secure, more attractive and more compliant, and builds on current perceived successes. 

The User Community

There is, however, another informal community involved, on the periphery of the informal edtech buyers and sellers community, namely that of teachers, lecturers, learners and students.

My worry is that because of differences in values, language, ideas and influential personalities, any discourse with these communities of teachers, lecturers, learners or students is much less efficient and effective. It is often perceived as partly mutually incomprehensible, characterised by one community or the other using concepts, methods, tools, values and references not wholly or confidently understood by the other.

As an example, many organisations using educational technology are trying to address equity, inclusion and diversity in their provision and their ethos. They may also be trying to promote different models or strategies for teaching and learning. Whilst the communities of teachers and lecturers know whom to involve to advance these initiatives within their own work, moving upstream and being able to articulate their needs in technically meaningful ways seems generally much more difficult. There is a chasm between ‘academic’ departments, doing the teaching, and ‘service’ departments, running the digital technology.

Obviously, issues like staff retraining, interoperability and managerial nervousness further limit the scope for systemic, as opposed to incremental, change. So do the business models of educational organisations and, for example, of education and academic publishers.

Horizon Scanning

I did some consultancy for the UK NHS (National Health Service) some years ago, helping to improve their edtech ‘horizon scanning’ capacity, and whilst it is possible to develop methods and tools for this, I now worry that the real problem is the inability to break out of the groupthink, out of the accepted views, of the community in question. At the time, I expressed this slightly differently, saying it was easy to see innovations on the horizon coming straight at you; the challenge was to spot the relevance of those appearing further off to the left or way off to the right. Again, there is a difference between ‘hard’ technical stuff on the horizon and ‘soft’ educational stuff.

There might be a connection between these observations about horizon scanning and other work on tools and methods to support brainstorming, which attempt to generate new ideas within a community as opposed to recognising ideas outside the community and on the horizon.  

I might be equating the groupthink of various closed but informal groups with the ideas about paradigms, scientific or otherwise, but in a practical sense, I wondered how we promote the ‘paradigm shifts’ that bring about dramatic but benign or beneficial transformation. In short, where do new products come from?

Breaking the Edtech Echo Chamber

In conclusion, I am attempting to make a case that the people buying and selling educational technology often understand each other much better than they understand the people using it, and thus educational technology is driven by technology push (or technological determinism) rather than pedagogy pull. 

I think this builds in some pedagogic conservatism. There might be other reasons or perspectives, but this gap remains a critical challenge. 

The future of educational technology depends on breaking down silos and aligning the expertise of buyers and sellers with the lived needs of educators and learners. Fostering a shared language and shared values will empower all stakeholders to participate in shaping tools that genuinely enhance education.


1 Perhaps this current piece could be reworked to address these two issues, but I think both have served to reinforce existing attitudes and values, and that pronouncements of systemic transformation may be premature, overstated or misleading.

2 But clearly this can only be impressions and could never be based on anything purporting to be ‘scientific’ or ‘objective’. 

3 I think in fact I am saying this community articulates and represents a ‘paradigm’ as defined by Thomas S. Kuhn in his 1974 short paper Second Thoughts on Paradigms (available online at https://uomustansiriyah.edu.iq/media/lectures/10/10_2019_02_17!07_45_06_PM.pdf), albeit a modest one compared to Darwinian evolution, heliocentric astronomy or even object-oriented programming.

4 There is also a factor understood in requirements engineering about the human incapacity to answer questions about the future: ask customers or users what they would like in the future and they will reply, what they already have, but faster. This too builds in conservatism. Fortunately, there are various better techniques for eliciting future requirements from customers or users.

5 Characterised on one side by fairly generalised, abstract and social ideas and values and on the other by specific, concrete and technical ideas and values, though it is difficult for this characterisation to be objective and neutral.

6 It could be the grand ‘connectivist’ conceptions of the early ideologically driven MOOCs or merely flipped learning, self-directed learning, critical digital literacy, project-based learning, situated learning and so on.

7 Which might explain why most universities and colleges seem stuck in the digital technology of the 1990s, namely the VLE/LMS and the networked desktop computer, in spite of the ubiquity of social media and personal technologies.

8 Defined here as the ability of different hardware and software systems with different roles within a complex organisation to work together.

9 ‘Horizon scanning’ is the activity of intercepting and interpreting ideas that are emergent, unformed or unclear, and then seeing their practical relevance ahead of colleagues and competitors. There are various methods, and for the NHS we attempted to synthesise and validate a method from those already in use in government departments, universities and corporations.

10 Thinking of Teflon and Post-Its.



Who Owns ‘Truth’ in the Age of Educational GenAI?

As generative AI becomes more deeply embedded in digital education, it no longer simply delivers knowledge; it shapes it. What counts as truth, and whose truth is represented, becomes increasingly complex. Rather than offering fixed answers, this piece challenges educational technologists to confront the ethical tensions and contextual sensitivities that now define digital learning.

Who Owns ‘Truth’ in the Age of Educational GenAI?

Author: Prof. John Traxler, UNESCO Chair, Commonwealth of Learning Chair and Academic Director of the Avallain Lab

St. Gallen, May 23, 2025 – Idealistically, perhaps, teaching and learning are about sharing truths, and sharing facts, values, ideas and opinions. Over the past three decades, digital technology has been increasingly involved or implicated in teaching and learning, and increasingly involved or implicated in shaping the truths, the facts, values, ideas and opinions that are shared. Truth seems increasingly less absolute, stable and reliable and digital technology seems increasingly less neutral and passive.

The emergence of powerful and easily available AI, both inside education and in the societies outside it, only amplifies and accelerates the instabilities and uncertainties around truth, making it far less convincing for educational digital technologists to stand aside, hoping that research or legislation or public opinion will understand the difficulties and make the rules. This piece unpacks these sometimes controversial and uncomfortable propositions, providing no easy answers but perhaps clarifying the questions.

Truth and The Digital

Truth is always tricky. It is getting trickier and trickier, and faster and faster. We trade in truth, we all trade in truth; it is the foundation of our communities and our companies, our relationships and our transactions. It is the basis on which we teach and learn, we understand and we act. And we need to trust it.

The last two decades have, however, seen the phrases ‘fake news’ and ‘post-truth’ used to make assertions and counter-assertions in public spheres, physical and digital, insidiously reinforcing the notion that truth is subjective, that everyone has their own truth and that it just needs to be shouted loudest. These two decades also saw the emergence and visibility of communities, big and small, on social media, able to coalesce around their own specific beliefs, their own truths, some benign, many malign, but all claimed by their adherents to be truths.

The digital was conceived ideally as separate and neutral. It was just the plumbing, the pipes and the reservoirs that stored and transferred truths, from custodian or creator to consumers, from teacher to learner. Social media, intrusive, pervasive and universal, changed that, hosting all those different communities.

The following selection of assertions comprises some widely accepted truths, though this will always depend on the community; others are generally recognised as false and some, the most problematic, generate profound disagreement and discomfort.

  • The moon is blue cheese, the Earth is flat
  • God exists
  • Smoking is harmless
  • The Holocaust never happened
  • Prostate cancer testing is unreliable
  • Gay marriage is normal 
  • Climate change isn’t real 
  • Evolution is fake
  • Santa Claus exists 
  • Assisted dying is a valid option
  • Women and men are equal
  • The sun will rise
  • Dangerous adventure sports are character-building
  • Colonialism was a force for progress

These can all be found on the internet somewhere and all represent the data upon which GenAI is trained as it harvests the world’s digital resources. Whether or not each is conceived as true depends on the community or culture.

Saying, ‘It all depends on what you mean by …’ sidesteps the fundamental issue. Yes, some of these assertions may be merely circular, and others may allow for prevarication and hair-splitting, but all of them circulate and find believers.

Educational GenAI

In terms of the ethics of educational AI, extreme assertions like ‘the sun will rise’ or ‘the moon is blue cheese’ are not a challenge. If a teacher wants to use educational GenAI tools to produce teaching materials that make such assertions, the response is unequivocal: either ‘here are your teaching materials’ or ‘sorry, we can’t support you making that assertion to your pupils’.

Where educational AI needs much more development is in dealing with assertions which, for us, may describe non-controversial truths, such as ‘women and men are equal’ and ‘gay marriage is normal’, but which may be met by different cultures and communities with violently different opinions.

GenAI harvests the world’s digital resources and regurgitates them as plausible output, capturing in the process all the prejudice, biases, half-truths and fake news already out there in those digital resources. The role of educational GenAI tools is to mediate and moderate these resources in the interests of truth and safety, but, we argue, this is not straightforward. The more we know about learners’ cultures, contexts and countries, the more likely we are to provide resources with which they are comfortable, even if we are not.

Who Do We Believe?

Unfortunately, some existing authorities that might have helped, guided and adjudicated on these questions are less useful than they once were. The speed and power of GenAI have overwhelmed and overtaken them.

Regulation and guidance have often mixed pre-existing concerns about data security with assorted general principles and haphazard examples of their application, all focused on education in the education system rather than learning outside it. The education system has, in any case, been distracted by concerns about plagiarism and has not yet addressed the long-term issues of ensuring school-leavers and graduates flourish and prosper in societies and economies where AI is already ubiquitous, pervasive, intrusive and often unnoticed. In any case, the treatment of minority communities or cultures within education systems may itself already be problematic.

Education systems exist within political systems. We have to acknowledge that digital technologies, including educational digital technologies, have become more overtly politicised as global digital corporations and powerful presidents have become more closely aligned.

Meanwhile, the conventional cycle of research funding, delivery, reflection and publication is sluggish compared to developments in GenAI. Opinions and anecdotes in blogs and media have instead fed the appetite for findings, evaluations, judgments and positions. Likewise, the conventional cycle of guidance, training and regulation is slow, and many of the outputs have been muddled and generalised. Abstract theoretical critiques have not always had a chance to engage with practical experiences and technical developments, often leading to evangelical enthusiasm or apocalyptic predictions.

So, educational technologists working with GenAI may have little adequate guidance or regulation for the foreseeable future.

Why is This Important?

Educational technologists are no longer bystanders, merely supplying and servicing the pipes and reservoirs of education. They have become essential intermediaries, bridging the gap between the raw capabilities of GenAI, which are often indiscriminate, and the diverse needs, cultures and communities of learners. Ensuring learners’ safe access to truth is, however, not straightforward, since both truth and safety are relative and changeable, and so educational technologists strive to deliver truths to learners with progressively more sensitivity and safety.

At the Avallain Lab, aligned with Avallain Intelligence, our broader AI strategy, we began a thorough and ongoing programme of building ethics controls that identify what are almost universally agreed to be harmful and unacceptable assertions. We aim to enhance our use of educational GenAI in Avallain systems to represent our core values, while recognising that although principles for trustworthy AI may be universal, the ways they manifest can vary from context to context, posing a challenge for GenAI tools. This issue can be mitigated through human intervention, reinforcing the importance of teachers and educators. Furthermore, GenAI tools must be more responsive to local contexts, a responsibility that lies with AI systems deployers and suppliers. While no solution can fully resolve society’s evolving controversies, we are committed to staying ahead in anticipating and responding to them.

About Avallain

At Avallain, we are on a mission to reshape the future of education through technology. We create customisable digital education solutions that empower educators and engage learners around the world. With a focus on accessibility and user-centred design, powered by AI and cutting-edge technology, we strive to make education engaging, effective and inclusive.

Find out more at avallain.com

About TeacherMatic

TeacherMatic, a part of the Avallain Group since 2024, is a ready-to-go AI toolkit for teachers that saves hours of lesson preparation by using scores of AI generators to create flexible lesson plans, worksheets, quizzes and more.

Find out more at teachermatic.com

Contact:

Daniel Seuling

VP Client Relations & Marketing

dseuling@avallain.com

How Can We Navigate the Path to Truly Inclusive Digital Education?

True inclusivity in digital education demands more than good intentions. Colonial legacies still influence the technologies and systems we use today. As we embrace AI, we must consider whether it truly serves all learners or if it carries the biases of the past along with the impact of digital neo-colonialism in education. Drawing on work commissioned by UNESCO and discussions across UK universities, this is an opportunity to recognise hidden influences and ultimately create a fairer and more equitable digital learning environment.

How Can We Navigate the Path to Truly Inclusive Digital Education?

Author: John Traxler, UNESCO Chair, Commonwealth of Learning Chair and Academic Director of the Avallain Lab

What Are We Talking About?

St. Gallen, April 25, 2025 – This blog draws on work commissioned by UNESCO, to be published later in the year1, and on webinars across UK universities. Discussions about decolonising educational technology have formed part of initiatives in universities globally, alongside those about decolonising the curriculum, as part of the ‘inclusion, diversity and equity’ agenda, and, in the wider world, alongside movements for reparations2 and repatriation3.

This blog was written from an English perspective. Other authors would write it differently.

Decolonising is a misleadingly negative-sounding term. The point of ‘decolonising’ is often misunderstood to be merely remediation, undoing the historical wrongs to specific communities and cultures and then making amends. Yes, it is those things, but it is also about enriching the educational experience of all learners, helping them understand and appreciate the richness and diversity of the world around them.

Colonialism is not limited to the historical activities of British imperialists or even European ones. Tsarist Russia, Soviet Russia, Imperial China, Communist China and Ottoman Turkey are all examples. It remains evident within the one-time coloniser nations and the one-time colonised: Punjabi communities in the English West Midlands and in Punjab itself both still live with the active legacies of an imperial past. It is present in legacy ex-colonial education systems, in the ‘soft power’ of the Alliance Française, the Voice of America, the Goethe Institute, the British Council, the Instituto Cervantes, the World Service, the Peace Corps and the Confucius Institutes, and is now resurgent as the digital neo-colonialism of global corporations headquartered in Silicon Valley.

Why does it matter? It matters because it is an issue of justice and fairness, of right and wrong, and it matters to policy-makers, teachers, learners, employers, companies and the general public as a visible and emotive issue.

What About Educational Technology?

How is it relevant to educational technology? Firstly, ‘educational technology’ is only the tip of the iceberg in terms of how people learn with digital technology. People learn casually, opportunistically and unsupported, driven by momentary curiosity, self-improvement and economic necessity. They do so outside systems of formal instruction. Decolonising ‘educational technology’ may be easier and more specific than decolonising the digital technologies of informal learning, but they have many technologies in common.

At the most superficial level, the interactions and interfaces of digital technologies are dominated by images that betray their origins: visual metaphors such as egg-timers, desktops, files, folders, analogue clocks and wastepaper bins, gestures like the ‘thumbs up’ and cultural assumptions such as green meaning ‘go’. These technologies often default to systems and conventions shaped by history, such as the Gregorian calendar, the International Date Line, Mercator projections, Imperial weights and measures (or the Système International) and naming conventions like the Far East, the West Indies and Latin America. They also tend to entrench colonial legacies: European character sets, American spelling and left-to-right, top-to-bottom typing.

Speech recognition still favours the global power languages and their received pronunciation, vocabulary and grammar. Other languages and dialects only come on stream slowly; likewise, language translation. Furthermore, the world’s digital content is strongly biased in favour of these powerful languages, values and interests. Consider Wikipedia, for example, where content in English outweighs that in Arabic by about ten-to-one, and content on Middle-earth outweighs that on most of Africa. Search engines are common tools for every kind of learner, but again, the research literature highlights the bias in favour of specific languages, cultures and ideas. Neologisms from (American) English, especially for new products and technologies, are often absorbed into other languages without change.

On mobiles, textspeak originated with corporations targeting global markets and was technically confined to ASCII (the American Standard Code for Information Interchange), forcing different language communities to adapt: using pinyin letters rather than Chinese characters, for example, or inventing Arabish to represent the shape of Arabic words in Latin characters.
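The ASCII constraint described above can be seen directly in a few lines of Python; this illustrates the encoding limit itself, not any particular messaging system:

```python
# 7-bit ASCII covers only basic Latin characters, so Arabic or Chinese
# text cannot be encoded in it at all.
"hello".encode("ascii")  # succeeds: every character is in the ASCII range

try:
    "مرحبا".encode("ascii")  # Arabic 'marhaba' has no ASCII representation
except UnicodeEncodeError:
    print("Arabic script is not representable in ASCII")

# Arabish works around this by substituting Latin letters and digits for
# Arabic sounds, e.g. '7' standing in for the letter ح (ḥā').
"mar7aba".encode("ascii")  # succeeds: the transliteration is pure ASCII
```

The workaround that Arabish-using communities arrived at is exactly the one the last line shows: stay inside the representable character set by transliteration rather than waiting for the infrastructure to support the script.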

Turning to educational technology itself, we have to ask to what extent it embodies and reinforces specifically European ideas about teaching, learning, studying, progress, assessment, cheating, courses and even learning analytics and library usage. Additionally, if you look at the educational theories that underpin educational technologies, and then at the theorists who produced them, you see only white male European faces.

The Intersection of Technology and Subjects

There is, however, the extra complication of the intersection between what we use for teaching, the technology, and what gets taught, the topics and subjects. The subjects themselves are also under scrutiny: checking reading lists for balance and representation, refocusing history and geography, recognising marginalised scientists and engineers and critically positioning language learning. Language education, in particular, must navigate between the global dominance and utility of American English and the need to preserve and support mother tongues, dialects and patois, vital parts of intangible cultural heritage.

The Ethical Challenges of AI

The sudden emergence of AI into educational technology embodies both our best hopes and our worst fears. It is accepted that GenAI recycles the world’s digital resources, and with them the world’s misunderstandings, misinformation, prejudices and biases, meaning, in this case, its colonialist mindsets, its colonising attitudes and its prejudices about cultures, languages, ethnicities, communities and peoples, about which are superior and which inferior.

To prevent or pre-empt the ‘harms’ associated with AI-driven content, Avallain’s new Ethics Filter Feature minimises the risk of generating biased, harmful, or unethical content. Aligned with Avallain Intelligence, our broader AI strategy, this control offers an additional safeguard that reduces problematic responses, ensuring more reliable and responsible outcomes. The Ethics Filter debuted in TeacherMatic and will soon be made available for activation across Avallain’s full suite of GenAI solutions.
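As a purely illustrative sketch of how a content control of this general kind might work in principle, and emphatically not a description of Avallain’s actual Ethics Filter, a minimal phrase-matching filter could look like the following; the category names and trigger phrases are hypothetical placeholders:

```python
# Minimal sketch of a generation-time content filter. Real systems use far
# richer techniques (classifiers, human review); this only shows the shape.
from dataclasses import dataclass, field

@dataclass
class FilterResult:
    allowed: bool
    flags: list = field(default_factory=list)

# Hypothetical mapping from a harm category to trigger phrases.
HARM_CATEGORIES = {
    "misinformation": ["the earth is flat", "climate change isn't real"],
    "hate": ["is inferior"],
}

def ethics_filter(text: str) -> FilterResult:
    """Flag generated text matching any harm category; block it if flagged."""
    lowered = text.lower()
    flags = [category for category, phrases in HARM_CATEGORIES.items()
             if any(phrase in lowered for phrase in phrases)]
    return FilterResult(allowed=not flags, flags=flags)

print(ethics_filter("A lesson plan about photosynthesis.").allowed)
print(ethics_filter("Teach pupils that the Earth is flat.").flags)
```

Even this toy version makes the earlier point concrete: deciding what belongs in the blocklist, and for which cultures and communities, is the hard part, not the filtering mechanism itself.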

How Should the EdTech Industry Respond?

Practically speaking, we must recognise that the manifestations of colonialism are neither monolithic nor undifferentiated; some of these we can change, while others we cannot.

For all of them, we can raise awareness and criticality to help developers, technologists, educators, teachers and learners make judicious choices and safe decisions, recognise their own possible unconscious bias and unthinking acceptance, and share their concerns.

We can recognise the diversity of the people we work with, inside and outside our organisations, and seek and respect their cultures and values in what we develop and deploy. We can audit what we use and find or produce alternatives. We can build safeguards and standards.

We can select, reject, transform or mitigate many different manifestations of colonialism as we encounter them and explain to clients and users that this is a positive process, enriching everyone’s experiences of digital learning.


1Traxler, J. & Jandrić, P. (2025) ‘Decolonising Educational Technology’, in Peters, M. A., Green, B. J., Kamenarac, O., Jandrić, P. & Besley, T. (eds.) The Geopolitics of Postdigital Educational Development. Cham: Springer.

2Reparations refers to calls from countries, for example in the Caribbean, for their colonisers (countries, companies, monarchies, churches, cities, families) to redress the economic and financial damage caused by chattel slavery.

3Repatriation refers to returning cultural artefacts to their countries of origin, for example the Benin Bronzes, the Rosetta Stone and the ‘Elgin’ Marbles currently in the British Museum.



Leading with Confidence: What MATs Need to Know About GenAI in Education

A closer look at the online briefing ‘Effective GenAI for UK Schools, Academies and MATs’ and how UK MATs are strategically implementing AI to empower teachers, streamline operations and uphold ethical standards.

Leading with Confidence: What MATs Need to Know About GenAI in Education

London, April 10, 2025 – The online briefing ‘Effective GenAI for UK Schools, Academies and MATs’ offered MAT leaders a clear, practical overview of how artificial intelligence is beginning to shift the education landscape, not just in theory, but in day-to-day classroom realities.

Get a glimpse of the insightful discussion by watching the recording of the webinar.

The event was moderated by Giada Brisotto, Marketing Project Manager at Avallain. The panel featured: 

  • Shareen Wilkinson, Executive Director of Education at LEO Academy Trust
  • Carles Vidal, Business Director at Avallain Lab 
  • Reza Mosavian, Senior Partnership Development Manager at TeacherMatic

Anchored in the findings of ‘Teaching with GenAI’, an independent report produced by Oriel Square and commissioned by the Avallain Group, the message throughout the session was clear: GenAI can help MATs reduce pressure on staff, drive efficiency and maintain strategic oversight, provided implementation is ethical, measured and pedagogically sound.

From Policy to Practice: What MATs Are Actually Doing

Shareen Wilkinson, Executive Director of Education at LEO Academy Trust, outlined their structured approach to GenAI adoption, designed specifically for multi-academy environments. The trust has implemented a tiered strategy that recognises the distinct needs and responsibilities of different stakeholder groups:

  • Leadership and management use GenAI to enhance operational efficiency, improve decision-making through data insights and streamline trust-wide documentation.
  • Teachers are supported in reducing planning time, customising resources and improving assessment strategies with AI-assisted tools.
  • Pupils are beginning to explore safe and age-appropriate uses of GenAI, supported by clear guidance and staff oversight to ensure digital literacy and ethical use.

“We started with low-risk areas,” Wilkinson explained, “to see where time could be saved without compromising learning or safety.” The results have been encouraging. Teachers report gaining back several hours a week, while resource quality and adaptability have improved across subjects and key stages.

Key lesson for MATs: A phased, role-specific approach allows for safe experimentation, measurable impact and trust-wide consistency, without a one-size-fits-all rollout.

Empowering Teachers, Not Replacing Them

A strong theme throughout was the role of GenAI as a support mechanism to empower teachers, not replace them or create more challenges for them. “It’s not about teachers working harder,” said Wilkinson. “It’s about teachers working smarter, and having the time to focus on what really matters: the learners.”

The conversation echoed findings from the ‘Teaching with GenAI’ report, which shows that the majority of teachers believe GenAI has real potential to reduce workload. When MATs implement these tools with a clear framework, the benefits can be scaled across schools without losing autonomy or creativity at the local level.

As Carles Vidal from Avallain Lab explained, “AI should never replace educators. It should reduce workload, improve access and protect the human relationships at the heart of learning.”

Key insight: Retention improves when teachers feel supported, not sidelined. AI can ease burnout when it enhances, not replaces, teacher agency.

Ensuring Safety, Alignment and Strategic Fit

Reza Mosavian of TeacherMatic reminded leaders that GenAI implementation is not just about tools but about trust. “Ask the right questions: Who built this? Is it safe? Does it protect our staff and pupils’ data? Does it align with your values as a MAT?”

This aligns closely with Avallain Intelligence, the group’s strategy for ethical AI development in education. With this approach, MATs can implement Avallain’s AI solutions, such as TeacherMatic, our AI toolkit for teachers, both effectively and safely, enhancing teaching and learning without compromising the integrity of the classroom.

For MAT leaders, the message is to focus on safeguarding, GDPR compliance, and curriculum alignment, not on novelty or speed of rollout.

Evaluation First, Adoption Second

The speakers stressed the importance of structured evaluation before adoption. MATs should treat GenAI procurement like any strategic initiative, with clear success criteria.

Reza offered a simple rubric:

  • Does it save staff time?
  • Does it meet the needs of all learners?
  • Is it safe and trustworthy?
  • Can it scale within your trust structure?

To support this process, many MATs are finding success with a digital champion model. As highlighted in the ‘Teaching with GenAI’ report and discussed by both Reza and Shareen during the session, appointing digital champions allows schools to trial tools in context, evaluate their effectiveness and build internal confidence through peer-led engagement.

Reza noted that the most effective champions are teachers still in the classroom, or those with a strong teaching and learning background. “They’re grounded in the day-to-day pressures and can assess AI through a real pedagogical lens,” he said. A peer-led structure not only builds trust, but also ensures feedback is relevant and grounded in actual practice.

He shared the example of a school that piloted GenAI specifically for lesson planning. Teachers trialled tools within a controlled group, giving iterative feedback to refine their use. One major takeaway was the clear time-saving benefit, but equally important was the ability to assess how AI could complement, rather than replace, teachers’ existing methods.

Pilot programmes, staff feedback loops and structured trial periods emerged as crucial components of sustainable GenAI implementation. Most importantly, this collaborative and contextual approach helps to win “hearts and minds” within the organisation, laying the groundwork for long-term success.

Final Thought: Collaboration Is Our Strongest Tool

The briefing concluded with a call to leadership. MATs have a unique opportunity to shape AI’s role in UK education. By collaborating, sharing knowledge and placing ethics at the forefront, trusts can lead this change rather than react to it.

The Avallain Group remains committed to supporting MATs through research, safe tools and professional dialogue, ensuring that GenAI is a partner in progress, not a point of risk.

Explore the Full Report: Teaching with GenAI

Click here to gain deeper insights and access practical recommendations for successful GenAI implementation in the full report.
