Intelligence, whether human or artificial, cannot be determined purely through rational or quantitative measures. It also involves interpreting context, nuance and metaphor, the unpredictable elements of human thought. This piece examines how these aspects affect learning and understanding a language, and the challenges of participating in a community, especially as AI becomes more widely used for teaching and learning.
AI and Why It’s Impossible to Learn or Understand Language
Author: Prof John Traxler, UNESCO Chair, Commonwealth of Learning Chair and Academic Director of the Avallain Lab
St. Gallen, October 29, 2025 – In this piece, we argue that it is impossible to learn, understand or discuss what anyone else says or writes at anything beyond the simplest, most specific and concrete level. This even perhaps applies to people with a shared mother tongue, making conversation, learning, translating and reasoning more difficult than they initially seem, especially when they involve artificial intelligence and computers.
The discussion falls into two halves: the first deals with language as idiom, and the second with language for reasoning. In other words, we are discussing language and learning, and language learning; thus, we are discussing intelligence, artificial and otherwise.
Language as Idiom
AI and the Turing Test
Artificial intelligence is the ongoing effort to develop digital technologies that mimic human intelligence, despite the undefined nature of human intelligence itself. It has passed through various incarnations, such as expert systems and neural nets, and now generative AI, or GenAI, seems finally to be delivering on the promises of 40 or 50 years ago.
Over all this time, there has, however, been a test, the Turing Test, to evaluate AI’s apparent intelligence, revealing insights into both intelligence and language. GenAI, the current incarnation, is in effect pattern matching with a conversational interface, a sophisticated form of autocomplete, completing responses based on the world’s vast digital resources. However, because of this, it can produce ‘hallucinations’, responses that are plausible but wrong, and can also perpetuate harm, bias or misinformation.
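The 'sophisticated autocomplete' idea can be sketched in a few lines. The toy bigram model below simply continues the likeliest pattern seen in a tiny corpus; real large language models are vastly more sophisticated, but the underlying principle of completing responses from observed patterns is similar. The corpus and function names here are illustrative, not from any real system.

```python
# Toy 'autocomplete' by pattern matching: predict the next word from
# bigram counts over a small corpus (illustrative only).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1  # count which word follows which

def autocomplete(word: str) -> str:
    """Return the most frequent continuation seen after `word`."""
    if word not in bigrams:
        return "<unknown>"  # no pattern to match, nothing to say
    return bigrams[word].most_common(1)[0][0]

print(autocomplete("the"))  # -> 'cat' (seen twice, vs 'mat'/'fish' once each)
```

Note that the model can only ever echo its corpus: ask it about a word it has never seen and it has nothing plausible to offer, which is one way of picturing why a model pushed beyond its data produces either silence or 'hallucination'.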
The Turing Test imagines a human, the ‘tester’, able to interact independently with another human and a computer. If the tester cannot tell when he or she is interacting with the human or the computer, then the computer can be said to be ‘intelligent’; it passes the Turing Test.
Expanding the Boundaries of Intelligence
We should, however, consider how this would work with a seemingly intelligent mammal, say a chimpanzee, conversing in American Sign Language, or an extraterrestrial, say ET, the visiting alien scientist. The film Arrival illustrates the possible superiority of other intelligences, their languages and their differences. These, too, might manifest ‘intelligence’ and challenge ours, widening our notions of intelligence and thus what we might expect from AI.
There is an alternative model of what is going on with intelligence, specifically with conversation, translation and learning: the Chinese Room. This thought experiment imagines a person passing words, or perhaps phrases or sentences, called the 'tokens', into the Chinese Room. An operative looks them up in a large dictionary or some similar reference book or 'look-up table'. The operative passes the answer, the translation or the learning out as another 'token', with no intelligence or consciousness seemingly involved, only what is in effect an automaton.
However, it does raise questions about the operative; do they have any taste or ethics? Could they or should they be subject to Asimov’s Three Laws of Robotics? Is such an operative even possible? Is the operative merely another Chinese Room inside the Chinese Room or a way of disguising an algorithm as a human operative? Would the Chinese Room pass the Turing Test?
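The Chinese Room's operative can be pictured as nothing more than table retrieval. The sketch below is a deliberately crude illustration of that point, with a hypothetical three-entry table; the interesting question is what happens at the edges of the table, where the automaton is exposed.

```python
# A minimal sketch of the Chinese Room as a look-up table (illustrative only).
# The 'operative' maps incoming tokens to outgoing tokens with no comprehension.

lookup_table = {
    "你好": "hello",
    "谢谢": "thank you",
    "猫": "cat",
}

def operative(token: str) -> str:
    """Pass a token back out of the room: pure retrieval, no understanding."""
    return lookup_table.get(token, "???")  # unknown tokens expose the automaton

print(operative("你好"))  # -> 'hello'
print(operative("自由"))  # -> '???' (the abstract word 'freedom' is not in the table)
```

Whether a sufficiently large table would pass the Turing Test, and whether passing it would mean anything, is precisely what the thought experiment disputes.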
Human Understanding and the Limits of Machine Interpretation
Incidentally, in the film The Enigma of Kaspar Hauser, about a foundling, a boy with no past, set in Germany in the early nineteenth century, the eponymous hero is asked, ‘How to discern the villager who always tells the truth from the villager who always lies?’. Instead of applying deductive logic, Kaspar offers a simple, childlike answer from his unique perspective: he would ask the villager, ‘Are you a tree frog?’. His innocence allows him to see things differently, and his absurd question and approach might sidestep the issue of formal logic and thus rationality and intelligence. The Turing ‘tester’ just asks, ‘Is it raining tree frogs?’, revealing how a machine may struggle to interpret common sense and the outside world in the way humans do.
What is relevant here, however, is not a generic human ‘tester’ but a human learner wanting to be taught. Could this learner tell the difference between a human teacher and an artificial one, GenAI in this case? It depends, of course, on the learner’s expectations of pedagogy. If the learner expected a didactic or transmissive pedagogy, GenAI could give a very competent lecture, essay, summary or slide deck, ‘hallucinations’ notwithstanding.
If, on the other hand, the learner expected something discursive, something that engaged with them personally and individually, building on what they already knew, correcting their misunderstandings, using a tone and terms familiar to them, then ‘raw’ GenAI would struggle. This is even before considering the added dimension of emotional intelligence, meaning recognising when the learner is tired, frustrated, bored, upset or in need of a comfort break or some social support.
Language for Reasoning
Early AI and Challenges in Language Learning
Let’s draw on two early efforts. PLATO, launched in 1960, was a computer-based learning system using ‘self-paced learning, small cycles of feedback and recorded traces of incremental progress’ (Cope & Kalantzis, 2023:4), showing that simple didactic teaching was possible, however crudely, very early on. Then, in about 1966, ELIZA, one of the earliest natural language processing programs, provided non-directive psychotherapy, that is, psychotherapy led by the client, not by the therapist. This client-led approach, driven by the client’s own problems or constructs, might have translated into non-directive or learner-centred pedagogy; heutagogy, perhaps, or self-directed learning.
So, how does this relate to learning a language? Curiously, GenAI is built on so-called large language models, and the medium for exploring intelligence seems to be conversation, certainly not any IQ test!
Learning a language, even our own mother tongue, from any kind of computer is likely to be tricky, first because computers lack body language, hand gestures and facial expressions.
Plurilingual Societies
Then, in plurilingual societies such as South Africa, or indeed most modern societies, we have code switching, the switching between languages, even within individual sentences. There are also potential problems with language registers, ranging from frozen through formal, consultative and casual to intimate. In a monocultural society, these should be straightforward. However, in multicultural societies, characterised by different norms, speakers may gravitate toward the more formal or the less formal; there can be uncertainty, confusion and upset. These are a kind of ‘cultural dimension’ that we will explore later, suggesting there is no easy correspondence between languages.
Euphemisms, Neologisms and Internet Language
Then we have euphemisms, puns and double entendre, not meaning what they say, and hyperbole and sarcasm, sometimes meaning the opposite of what they say. Furthermore, we have humour in general, but black humour in particular; but why ‘black’? What is it about blackness? We have neologisms, new words from nowhere, sometimes only fleeting, occasionally more durable; skeuomorphs, new meanings from old words; and acronyms, especially those from the internet and World Wide Web. All these pose problems for learners, who need to understand the cultural context and current culture. Similar problems arise for GenAI, which always lags behind human understanding and skims across the surface, missing human nuances.
Community Languages and Cultural Assimilation
We also have subversive, perhaps rebellious, perhaps secretive languages: for example, Polari, the one-time argot of the London gay theatre community, derived partly from Romani, and Cockney rhyming slang, historically from London’s East End and based on a strict mechanism which, for example, gets you from ‘hat’ via ‘tit-for-tat’ to ‘titfer’, or from ‘look’ via ‘butcher’s hook’ to ‘butchers’, so ‘can I have a butchers at your titfer?’.
There is also back slang, which forms a vocabulary from words spelt backwards: in Scotland, ‘Senga’ for Agnes. None of these examples is necessarily accessible, inclusive or open. Two textspeak examples make the same point: Arabish, the messaging language that uses a European keyboard for Arabic sounds, and Mxlish, the one-time language of South African teenagers on the messaging platform MXit, both with enormous footprints.
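The two mechanisms just described can be made concrete. Back slang is purely mechanical, a simple reversal of spelling, whereas rhyming slang needs a culturally shared table of rhymes and clippings that no rule can generate; the tiny tables below are hypothetical sketches of the examples above, not a real lexicon.

```python
# Rhyming slang: word -> rhyming phrase -> clipped first element.
# The mapping is cultural knowledge; it must be listed, not computed.
rhyming = {
    "hat": ("tit-for-tat", "titfer"),
    "look": ("butcher's hook", "butchers"),
}

def rhyming_slang(word: str) -> str:
    """Return the clipped slang form, if the community's table knows it."""
    phrase, clip = rhyming[word]
    return clip

def back_slang(word: str) -> str:
    """Back slang simply reverses the spelling, e.g. Agnes -> Senga."""
    return word[::-1].capitalize()

print(rhyming_slang("hat"))   # -> 'titfer'
print(back_slang("agnes"))    # -> 'Senga'
```

The contrast is the point: a machine can reverse a string forever, but the rhyming table belongs to a community, which is why such languages resist outsiders, human or artificial.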
Each of these, in its own way, is the property of a particular community or culture, perhaps waiting to be appropriated, ridiculed, sanitised or ignored by others, and eventually, perhaps, to be ‘taught’, the kiss of death.
In fact, we could argue that learning these languages is an integral part of acceptance and assimilation into a defined community, in just the same way as talking about differential calculus and only then talking about integral calculus is part of acceptance and assimilation into the community of mathematicians. Our point is that displaying intelligence, acquiring language, being part of a culture, having a conversation and learning a subject are all very closely intertwined and necessarily complex for strangers or chatbots to join in with.
Metaphor and Abstraction
Then we get on to metaphor. In a quarter of an hour of a television drama, I heard ‘black people’, ‘landmark decision’, ‘high art’ and ‘wild goose chase’, none of which was literally true. I listen to ‘The Freewheelin’ Bob Dylan’, safe in the knowledge that Bob Dylan is not a bicycle. I worry about ‘raising money’, knowing this will not involve lifting the money upwards. ‘The Lord is my shepherd’, in the Psalms, does not tell me that I am a sheep. We also get bombarded with the language inherited from Aristotle, of ‘correspondences’, ‘the ship of state’, ‘the king of the jungle’ and ‘the body politic’, whilst thinking the car needs a wash, even though, being inanimate, it has no needs. As a university professor, I have two chairs, neither of which I can actually sit upon, whilst on the news, I hear that the office of the president has been tarnished, though I also hear it has just been redecorated. Confusing, isn’t it?
Parables, such as the ‘Good Samaritan’, from the Gospel of Luke, and the ‘seed falling on stony ground’, from the Gospel of Matthew, are, in fact, just extended metaphors delivered in the hope that the meaning could be inferred by people familiar with the cultural context of their origin. People refer to the Prodigal Son, from the Gospel of Luke, with no idea of the meaning of prodigality. However, such parables are perhaps meaningless to other cultures, those remote from historical Palestine. The same is true of many fables, such as ‘The Hare and the Tortoise’.
However, as all are ripped out of their cultural or historical context, the moral point is needed now to explain the parable or fable, rather than the other way round, as originally intended; nowadays, sowers, samaritans, hares and tortoises are no longer everyday items. They are, in fact, clichés, remarks bereft of meaning, another challenge for language learners and large language models.
While metaphor takes words from the concrete to the abstract, the use of ‘literally’ seems to drag them back again, so perhaps Bob Dylan is literally freewheeling, and money is literally being raised. ‘Literally’ is, however, sometimes used for emphasis and sometimes just used weirdly. Yesterday, I heard a podcaster talking about being ‘literally gobsmacked.’ Did he mean he had been smacked on the gob? Actually? Literally? As someone who is autistic and understands language in a largely concrete way, I find this confusion, uncertainty and ambiguity a daily struggle.
Once we get away from anything as simple and concrete as ‘the cat sat on the mat’ and approach the abstract of love, democracy, freedom, race, virtue and truth, we enter our own small community where some understanding is possible inside, but little is possible outside. These concepts of love, race, democracy, freedom, virtue and truth may all have very different meanings among, say, Marxists, Buddhists, Stoics, Confucians, feminists, humanists and Calvinists, unlike cats and mats. So how can we learn about them and converse about them? And how can our large language models ever engage with them meaningfully, except in a manner reminiscent of the Chinese Room model?
Conclusion
So, the conclusion, so far, is that while it might just be possible to have a meaningful dialogue within a shared culture and mother tongue, especially at the level of simple description and action, there seems little hope of having one with computers.
Perhaps, this reinforces the importance of keeping humans at the centre of teaching and learning. AI, no matter how sophisticated, cannot keep up with the diversity, transience and cultural complexity of language. Responsible human mediation remains essential, and we must recognise that computers will never be fast enough or flexible enough. Owning up to these limits is an ethical response in itself, not just from Avallain but across the educational AI sector and its clients.
However, safeguards like Avallain Intelligence provide a first line of defence. This strategy for ethically and safely implementing AI in education aims to put the human element at the centre. While it cannot solve all the challenges of the evolution of language, ethics or learning, it establishes a framework to ensure that technology remains guided by human understanding, creativity and judgement, enhancing rather than replacing human agency.
This pair of blogs, of which this is the first half, is about language: about how understanding language is tricky for humans and even trickier for computers; it is about the medium, not the message. Understanding this might not stop people from saying or promoting nasty, harmful things, but it might perhaps prevent them from being misunderstood.
About Avallain
For more than two decades, Avallain has enabled publishers, institutions and educators to create and deliver world-class digital education products and programmes. Our award-winning solutions include Avallain Author, an AI-powered authoring tool, Avallain Magnet, a peerless LMS with integrated AI, and TeacherMatic, a ready-to-use AI toolkit created for and refined by educators.
Our technology meets the highest standards with accessibility and human-centred design at its core. Through Avallain Intelligence, our framework for the responsible use of AI in education, we empower our clients to unlock AI’s full potential, applied ethically and safely. Avallain is ISO/IEC 27001:2022 and SOC 2 Type 2 certified and a participant in the United Nations Global Compact.
Find out more at avallain.com
Contact:
Daniel Seuling
VP Client Relations & Marketing
dseuling@avallain.com