AI Can Fail You, and You Need to Understand How That Can Happen

As AI systems become more ubiquitous, policymakers, technology companies, publishers, educators and students are called upon to play essential roles in how these tools are developed and used. In this piece, Dr Helen Beetham explores the risks AI poses to learning, expertise, creativity and educational integrity, challenging current assumptions about AI in education and arguing for the protection of what we understand learning to be.

An interview with Dr Helen Beetham, lecturer, researcher and consultant in digital education, on criticality and AI, conducted by Carles Vidal, MSc in Digital Education, Business Director of Avallain Lab 

The following interview was initially planned to discuss the topic of critical thinking and GenAI in education, drawing on the report Avallain published last June, ‘From the Ground Up’. That text proposes 12 controls for safer, more ethical use of AI in education, and the idea of embedding critical thinking in both the design of these tools and in teaching practices is one of its core guidelines.

To explore these ideas further, we spoke with Dr Helen Beetham, a leading educational researcher and consultant whose current focus is criticality and AI. Right from the start, our conversation went beyond critical thinking as a product design strategy and a necessary learning skill to encompass a broader perspective on criticality.

In the interview that follows, Helen unpacks her views on AI and the risks its adoption poses for societies and educational systems in general, and for educators and students in particular. She points out the problematic nature of foundation models and the agendas driving them, and suggests a range of alternative policies and practices that should be considered to manage these risks.

For those looking for a silver lining, as Helen says, this moment is a great opportunity to think about tech and what we want from it. This is why this conversation is so timely: only by understanding the complexities at stake can we address them and ensure our technologies continue to deliver real value for publishers and educators.

Interview Quick Links: 

  1. Why is it important that all education stakeholders have a critical stance on AI?
  2. Can GenAI have real transformative educational potential?
  3. Can AI models be improved to generate rich, adaptive educational content?
  4. Why might GenAI be counterproductive for learners without foundational knowledge?
  5. Can critical thinking help us develop GenAI tools that reduce risk and foster reflection?
  6. Is it possible to generate tools that prioritise the learning process over the finished product?
  7. What is the future for GenAI in education, and what should we be ready to challenge?

Interview with Dr Helen Beetham

Helen, given your area of research, we would like to address the importance of criticality and critical thinking in relation to GenAI tools, particularly the main risks the educational community faces, how to address them and the opportunities you see in these technologies.

1. Why is it important that the different actors of the educational community develop a critical stance in the face of AI systems?

First, I’m glad you identify that there are different actors with different powers to act. 

Teachers, students, school and university leaders, AI developers and the foundation companies all have different responsibilities, and I wouldn’t expect the same kinds of criticality to apply. For teachers and learners, there are reasons to be critical that concern the learning process, and there are reasons to be critical that concern the systems we depend on to deliver education. 

At the level of learning, paper after research paper has shown that people who use generative AI for significant tasks – reading, writing, coding, design – are not learning to do those tasks, or not in the ways they have previously been done. Retention is poor. Subsequent performance, for example, under exam conditions, is poor. Even expert skills are degraded through persistent use of AI. This is not at all surprising. We know that learning to read and write rewires the brain, and literacy is not a one-and-done skill; it’s something we continue to develop, or that can atrophy if we stop developing it. Arguably, the whole purpose of school is to develop people who can participate in the literate practices that societies value, and university is about developing more specialist literacies such as scientific, legal, technical and so on. When generative AI is used for those tasks, the relevant development of the brain, the understanding, the practice and even the identity is not going to happen. Something else might develop, such as a facility with the AI interface. So we need to look critically at that trade-off.

Another reason to be critical is the nature of the models these technologies rely on. Most accounts of ‘critical’ AI use focus on the outputs, especially the inaccuracies, biases and errors. You can mitigate those issues with post-training data labour, but fundamentally, the data model is not a world model, not even a reliable model of its training data. The errors are not going to go away. So ‘checking the outputs’ seems like an important critical response. But what does that mean? It can only mean checking against other information systems. And what happens when those other information systems are saturated with AI-generated content? The information/media literacy movement encourages critical questions, but they mainly concern people and their motives: who authored this, when, and why, who is disseminating it, what interests are being served, with what designs on your opinions and behaviour and personal data? None of these questions can be asked of AI outputs, or really of any information in AI-saturated systems. ‘Checking the outputs’ of AI requires us to completely reassess what is in circulation as information: what are its sources, authorities and meaning-making processes, and what systems have been involved? It’s not a simple matter of technique.

More concerning to me than the errors in outputs are the effects of stylistic and semantic flattening. Inference tends towards the middle. Everyone starts to sound and read the same. There are a few people, experts and creatives in their field, who are using generativity to push the boundaries of their own practice and good luck to them. But they have not developed in that practice by using generative AI. For learners who do not even know the range of responses that are possible, let alone how to evaluate the outlying and the innovative, the use of AI will always tend to the most standard response. In minority cultural and intellectual fields, the stereotyping effects are even more pronounced. It’s incredibly boring and demotivating, for teachers and students alike, to have rich learning materials reduced to five bullet points and for those bullet points to be expanded again to five paragraphs of entirely predictable student text. We keep asking: what do you think? What in all this material speaks to you? What do you care about? The whole point of education is to help people find the answers to these questions for themselves. I find learners increasingly reluctant to do that, because now there is always a safe answer. Of course, data models can describe different perspectives, provided these are already described in the training corpus. What they can’t do is help learners to develop their own perspective. So a critical stance might be one that asks how this kind of development, if we value it, can still take place. 

A final reason to be critical is the gap between what the AI industry promised for education and what we have actually got. I studied AI in the 1980s, and I’ve worked in education technology, broadly speaking, since the 1990s, so I’ve been around a few hype cycles, but nothing quite like this one. People in education are not enamoured of AI or of the quality of its outputs. They are increasingly concerned about its downsides, especially for learners’ development. But the idea that if you don’t love AI, then you are the problem, the idea that if you don’t inject AI into every aspect of students’ experience, you are failing them, these ideas are pervasive. The GenAI challenge becomes a crisis if educators collude in the magical thinking and the myth-making. When we trust our disciplinary methods to help us understand GenAI, we can be critical in a whole variety of ways. And when GenAI refuses to be understood, because it entails black-box architectures and deliberately obscure commercial practices, this is not a mystery to bow down before but a huge risk to truth-telling and responsible thought.

2. In your opinion, can GenAI tools and AI-based technologies have real transformative educational potential?

If you mean ‘can generative AI be used for positive educational ends?’, then of course, in the hands of a dedicated educator, any material can have learning value. I’ve been in classes where generative AI benchmarks are critiqued, where students research model biases and carbon footprints, and where they do journalistic work on AI companies. I’ve had students query system prompts, learn basic ML algorithms, and try a variety of creative responses to generated outputs. Educators are constantly experimenting and constantly learning. My experience is that the wider the variety of perspectives students have for understanding generative AI, the better chance they have of making a critical response. And it’s possible to support many different perspectives in the classroom.

If you mean ‘can students’ use of generative AI in their own independent study time be good for their learning?’, then I am more sceptical. I could break down the evidence for you, but what I would observe from my own experience is that the most engaged students, the ones that tend to be most thoughtful with generative AI in their learning process, are also the ones that are most concerned about losing skills and critical perspective. They are the students we should be engaging to help us build AI-resilient assessments and learning spaces. But the fact that some students are navigating generative AI thoughtfully doesn’t mean we can fulfil our responsibility to the rest by showing them good examples or preaching about ‘integrity’. Like fast food, ‘fast thought’ is irresistible: it offers compulsive and addictive behaviours in place of nourishment. According to The Brookings Institution, the costs of those behaviours for individual learners are already ‘daunting’.

And lastly, if you mean, ‘can data-based methods produce efficiencies in education systems?’ I’m sure those can be found. But the question of what we are accelerating and why is particularly pressing in education. We don’t ask students to produce assignments because we want more content in the world, but because we want students to learn. Ideally, we don’t ask academics to publish because we want more content in the world either, but because we want more meaningful research. Unfortunately, though, our systems have been set up to reward those proxies for thinking and for research, and GenAI enters those reward systems with what is, again, an irresistible promise to produce those proxies faster and get faster to the rewards. 

People are becoming very aware of the costs. The GenAI moment is an opportunity to think critically about technology and what we want from it. I see big shifts in attitudes to social media, for example, and I think that’s an example of transformational change being driven by reactions to AI. But none of these shifts happens through interactions with GenAI alone, and there is now evidence that the more time people spend with chatbots and avatars, the less critical they are inclined to be.

3. Can models be improved to generate rich educational content that can adapt to diverse contexts and learning needs?

None of the agents I have seen in use or in development has been very impressive at adapting to learners’ contexts and needs. I imagine that retraining has to go beyond extended prompting or RAG injection to get past the sycophancy and information-giving biases that are baked into the foundation models. More fundamentally, learners do not know what they do not know, or why it might be important, so learner-to-chatbot interactions on their own are very unlikely to initiate deep learning. The other approach is to involve teachers in defining the interactions they want learners to have. If teachers can define, in clear enough terms, the learning goals and needs of their class and if they can check, refine and contextualise what comes out of an AI application, I suggest they can probably adapt content and activities in more conventional ways. Many of the teacher-centred applications I’ve seen amount to good practice in lesson planning. But teachers still have to deliver on their plans. Say an AI-designed activity isn’t going as expected in class: do you fire up the AI and have another go? Teachers need to understand their materials if they are going to teach adaptively. So do we hand everything over – lesson plans, curricula, assessment rubrics, teaching materials, student feedback, and all the agency and skills and adaptability that go with really owning those things – do we hand that over to AI companies in return for fractional gains in lesson planning?

On your question about rich content, an activity I’ve shared with students for three years now is to generate images for educational use, based on a prompt from the OpenAI teaching materials. The results are always terrible, especially when compared with the resources you can find with an OER or Wikimedia search. This year, some students pushed back against the generative part of the activity on sustainability grounds. That has been an interesting development. But the positive part of the activity is that it makes students develop pedagogic judgement. The difference between the AI images and those chosen or designed by educators is a space for real learning. I’ve also had the experience of students and colleagues uploading my own materials, and generating podcasts, quizzes and gamified versions, and in one case, an app. The results were unrecognisable to me, so much design thinking and conceptual nuance were lost. But that’s not really the point. Every learner who wants one by now has an AI bot they can use to version content for themselves, whether they want it simplified, gamified, mind-mapped, transposed or translated, or given an anime makeover. There are accessibility gains here that I don’t want to trivialise. But I’m not sure where it leaves learning design. A bad outcome would be for education to disinvest in universal design and accessibility support because ‘learners are doing it with AI’. A worse outcome – just taking this to its logical conclusion – would be for learning design itself to disappear as a profession and as a shared language. If every learner is their own unique microcosm of intellectual needs and sensory preferences, if those needs can be met by ‘designs’ or ‘experiences’ generated on demand from the soup of educational content, what really is the role of the designer? Or the teacher, for that matter?

Behind the apparently empirical question ‘can GenAI applications provide effective learning support?’ are questions that might be a bit more uncomfortable. Such as: should GenAI interactions replace teaching interactions? Should teaching assistants and student support professionals become chatbots, or rather become the data workers who make the chatbots work? Perhaps most insidiously of all, should GenAI replace other students in the learning process? Alexandr Wang, head of Meta’s superintelligence lab, has said he will wait to have kids until they can connect straight into the network mind and bypass all the messy interactions of school. Learners will kick back against further loss of contact with teachers, I suspect, but they are already voting with their feet for chatbot companions over peer learners. And that is really troubling. Social constructivist theory tells us it’s better to learn alongside other learners who are at a similar learning stage to us, who make similar mistakes and discoveries, who can be resources for our learning in their differences from us, despite the frictions and frustrations involved (that also teach us something). Yet we seem to be encouraging learners in the delusion that the perfect other is always available, and it is a chatbot.

4. In your work, you refer to the AI expertise paradox, where if you want to use AI effectively and safely, you need to be an expert, so you can benefit from shortcuts and avoid inaccuracies. However, the same cannot be said for students who are still learning. Can you please elaborate on why GenAI might be counterproductive for those who have not yet built that foundational knowledge?

I’m not sure expert use is necessarily effective and safe. There have been several longitudinal studies now, from MIT, Bloomberg and Stanford, that have found experts are not as productive with AI as they think they are. They are certainly not as productive as their bosses think they should be. It turns out ‘checking for inaccuracies’ and ameliorating losses of quality and context are non-trivial. Experts are slower than novices when they apply AI because they are aware of these issues and have to make judgments about them. What parts of a task might lend themselves to AI efficiencies? How best to realise them? What are the likely impacts on quality, safety and professional values? Experts are finding uses, but only with significant trade-offs, some of which may be visible only at the organisational or sector level. 

From an educational point of view, yes, the worry is that novices never get to develop the judgement and expertise that is needed to work effectively with AI. Hubert Dreyfus’ original critique of the AI project, back in the 1960s and 1970s, was that it misunderstood the nature of expertise. Educators understand it better: they know, for example, that it develops through iterative practice. That’s the point of learning spaces where novices can practise without high costs of failure, where they can get expert feedback, and develop fluency and judgement. All of those developmental processes are lost when learners use generative AI to produce what looks like expertise. At least, it looks like expertise to students, and this is another aspect of the paradox. It looks less like expertise to assessors. But we are working in a system that has previously taken that kind of evidence at face value, and where the time educators have to engage with students’ development has been pared away.

What the expertise paradox allows me to say to students is: I will not fail you for using AI. But AI can fail you, and you need to understand how that can happen.

5. You have also identified ‘Cognitive Dependence’, ‘Deskilling’, ‘Reduced Learning’ and ‘Less Personal Agency’ as potential effects for users of GenAI tools, particularly among non-experts. To mitigate these risks, do you believe that critical thinking can be effectively promoted by developing GenAI tools that foster reflection and scepticism, present alternative perspectives, and highlight that students should not take generated output at face value?

The only remedy for students who avoid practice and engagement is to practise and engage, and to have support in place to do so. Perhaps some aspects of critical thinking might be supported with GenAI tools; I don’t rule it out, particularly when it comes to the critique of AI. But criticality and scepticism are not simple ‘techniques’. I know there has been a movement in media literacy, for example, to ‘inoculate’ young people against deepfakes, which sounds like a nice, simple shot in the arm. But what this inoculation involves is really a series of engagements. Typically, learners will spend time practising and applying diagnostic methods. Then they will create their own media pieces or ‘memes’. And finally, they will strategise to spread their memes, to make them as persuasive and pervasive as possible. Basically, they learn to make clickbait so they can learn not to be so easily baited. There are analogous activities you can do with GenAI, some of which engage learners more deeply with the data structure than others. The ‘inoculation’ task might be to prompt for a particular persona and interaction style, for example, and reflect on the results. It might be vibe coding a simple app and exploring the quality of the code. I think these are productive approaches, but they are not at all simple to implement.

A common remedy proposed for these problems is to teach students ‘good’ prompting strategies. When this involves using a particular template or standard, it seems to me counterproductive. It’s unlikely learners will surface anything interesting or discover the limitations of inference for themselves if prompting is reduced to a fill-the-gaps or cut-and-paste exercise. In fact, a lot of prompt crafting now takes place in the application layer, where interfaces may be even more frictionless than the familiar prompt screen. Larry Page looked forward to an AI search tool that would ‘understand exactly what you wanted before you knew yourself’. For the foundation companies, the best prompt is one that entirely pre-empts the user’s needs. One of the critical exercises I do with students is to have them review chat logs and ask whether they think the user is prompting the chatbot, or the chatbot is prompting the user. But chatbot logs these days are often hidden in the application layer.

The alternative you suggest is that inference is made more effortful, for example, by defining the chatbot persona as a reflective mentor or a Socratic partner. I think this identifies the problem correctly, and it’s one I’ve discussed a lot with colleagues. How to ‘interrupt’ the straight line to the solution and introduce a more developmental path. What I’m not sure about is the possibility of installing these solutions as dialogic techniques within a chatbot interface. As I said in a previous answer, I have not yet found an AI agent that is effective in this mode. Either the moves are generic and stereotyped, or they come from examples in the training data. If there is a generic rule for ‘scepticism’, it can be reproduced in a list of reflective questions. And for a topic-based approach to ‘challenging a student’s logic’, generative AI should not, in my view, be the first resort either. Students learn more from co-teaching other students than they learn from interacting with a chatbot, because they are also doing the thinking involved in following the other person’s logic, noticing assumptions and blind spots, and recognising there are different perspectives on the same problem. Playing both sides is far more developmental. 

The other issue is that learners turn to GenAI for complex reasons. Anxiety is often a big part of it, or the fear of missing out, or a lack of academic confidence. Dealing with these issues requires a deep engagement with learners. Learners need to understand why we ask them to do things (that they might do with GenAI), and we need to explain that better, but they also need good reasons for responding to those tasks in ways that feel challenging and uncertain. By the time learners reach the end of an undergraduate degree in the UK, they might have spent 18 years in an education system that values and measures outcomes. Valuing things that get in the way of the outcome is going to take more than individual encouragement or exhortation. It will require profound change.

Finally, much of the investment in GenAI has gone into interface effects that tend to undermine critical thinking. You can now talk to, vibe with and even engage with data models through wearables and emotion sensors. The effect is to undermine skills of mediation, or what used to be called literacy: conscious and effortful practices of engaging with other people’s thinking, and developing our own. The feeling of effortlessness and immediacy is not just about saving time. It is emotionally beguiling. You need never misunderstand or be misunderstood again. 

Cognitive offloading is natural to human beings – it is one definition of culture – but it is never entirely safe. It creates vulnerabilities and dependencies. For example, we depend on shared signifying systems and tools, cultural records, and whoever controls them. I agree with Musk on this at least, that ‘safeguards’ are always ideological. It’s just that data models are inherently ideological, from their training data through the judgements of data workers to the contents of prompts. There is no way of steering AI models that is neutral. But unlike the cultural forms we have lived with for tens and hundreds of years, how they are ideological is obscure, and how they influence us is unfamiliar. There may come a time when we have no choice but to use GenAI if we want to participate in cultural and intellectual experiences, at least if we want to participate digitally. At that point, it might make sense to align oneself with particular architectures and not with others: Grok culture or Claude culture. But until then, and in the hope that it never comes, the ‘safe’ option I suggest is to keep practising alternative skills, interacting with other archives besides the data archive, maintaining and valuing other media, and nurturing our own embodied memories. Then, at least, there are alternative vantage points on the obscure ideologies and behavioural effects of the data model.

6. You have pointed out that GenAI tools tend to place the value of education on the final output rather than the process. As you put it, when students are asked to create content, it is not because the world needs more content, but because the act of creating is how they acquire knowledge and skills. Do you think it is possible to generate tools that prioritise the learning process over the finished product?

We already have tools and features of tools that support a focus on the learning process. Annotation is one that can be used both for solo reflection and for shared feedback. Document sharing, design spaces and coding environments, e-portfolios, all these are useful too. But you can build a process-oriented learning environment in just about any platform, or from simple open-source tools. What matters is the pedagogic intention. If we want education to be centred on processes of reasoning, and personal development, and on the specialist practices of each subject, we have to invest attention in these things and not in proxies for them, such as grammatical sentences or test scores. As soon as we standardise what we are looking for, that standard can become a data source or a prompt for GenAI. But moving away from standardised assessments and standard proxies for learning will require the kind of transformation I mentioned at the start. It would mean learners working on authentic challenges that arise from a real context – a context that provides its own standard, its own intrinsic feedback on students’ solutions. That might be the school or university and its community, or a placement setting, or a project with stakeholders. More learning would have to happen in shared, live, embodied spaces, and that learning would have to be high-quality if learners are to invest in it rather than in the electronic angel on their shoulder. Educators would have to enable and assess students’ work on its own terms, letting go of many proxies we have relied on before. This would have implications for teaching time and attention, and the places that learning happens, and therefore its costs.

A very small part of that is about the tools that are used. There may be some technical solutions for how learning relationships are mediated and how learning environments are made. I would stick my neck out and say we have all the functionality we need; we just need it to be more modular, more open, cheaper, and more flexible. But there really is no way to automate attention, witnessing, engagement, and care. This is what students need, and when they say they want a chatbot, what I hear is that they aren’t getting enough attention and care from other sources. We have to be very careful about contradicting students’ experiences, since that is a large part of what we are working with, but I do think we can challenge what they say they want from AI. That will involve confronting their fears. They are afraid of using AI. They are afraid of not using AI. Asking students to raise their eyes from these anxieties and think about what kind of learning and working futures they want is always an intense experience, but it is powerful. These deep engagements are what we need, in my view.

It is an irony that the use of generative AI by students has produced a workload crisis for teachers, at least those who are committed to student learning. It is exceptionally hard to read through an AI-generated or partially generated assignment to discern the work students have done, the process they have undertaken, and thus what guidance they need. If you don’t care, of course, it’s a breeze. Set your AI agent onto marking theirs. But if you do care, you know that the solution is to make the process itself the focus. We need teachers and students to build those processes together, and the learning environments that can support them. That would be a lot more exciting to me than asking what workflows GenAI can come up with or what new AI-integrated mega-platforms we need.

7. What is the future for GenAI in education, and what should we be ready to challenge?

I did my fair share of foreseeing early on, and much of what I was saying then now seems like common sense. GenAI hasn’t kept improving with scale, productivity increases have been hard to demonstrate, and the promised educational benefits continue to be elusive.

I’m not an expert on the interface between the AI industry and edtech, in terms of the business models and the distribution of income. It seems to me that a lot of what passes for AI in education is rather shallow customisation in the application layer. At the moment, the foundation companies are very keen to secure subscriptions and use cases from education organisations, and of course, educational content and data. Bespoke educational applications may be a good way to forge those relationships, for now. But the long-term vision for these companies is not AI-enabled education; it’s AI instead of education. It’s something much more like Neuralink, or Josh Dahn’s Synthesis, or Marc Andreessen’s vision of every child having their own dedicated AI tutor from birth, bypassing the need to engage with a shared learning culture. And while I personally think that is a fantasy, a dystopia, I do think the foundation companies will want to monetise everything they have learned from education and edtech in the medium term. To mine education, if you like, in order to undermine education.

As I said at the start, different actors in the educational space have different powers to act and different choices open to them. I would love to see university leaders taking a more critical and ethically grounded position on generative AI. I’d like to see ministries and departments of education employing people who think deeply about these issues, rather than AI company secondments or industry-adjacent think tanks. I think that would make a considerable difference to how GenAI is implemented, or not implemented, in educational contexts.

I share the anxieties educators feel around generative AI, which makes us grasp at terms like ‘implementation’ so we can feel more in control. Generative AI is not being implemented. It was, as its proponents like to say, ‘unleashed’ on users and knowledge systems and cultural archives, every bit as irresponsibly as that sounds. Its developers have quickly become the richest companies in the world, dictating terms to governments, regulators and publishers. Every choice we make about AI in education is made downstream of these events, and in the face of this power. But choices are still possible. The most important, for me, is to tell the truth about GenAI, without hype or magical thinking, trusting our pedagogic and disciplinary methods to support us in that. And when those methods don’t help, to be truthful about our uncertainty. 

We should be ready to challenge developments that extend the black box of not knowing and not being accountable further into our classroom practices. That means, I think, that we should insist on alternative knowledge archives and knowledge practices being available – that do not pass through generative data architectures – to protect what we understand student learning to be. Doing this demands technical ingenuity as well as intellectual commitment, and both are valuable skills in any foreseeable future. Just as we provide alternatives to social media in school and university platforms, where different rules and norms apply, I think we can do the same in relation to GenAI.


About Helen Beetham


Dr Helen Beetham is an experienced consultant, researcher and educator working in the field of digital education in the university sector. Her publications include ‘Rethinking Pedagogy for a Digital Age’ (Routledge, 2006, 2010 and 2019), ‘Rethinking Learning for a Digital Age’ and an edited special issue of ‘Learning, Media and Technology’ (2022). Her current research centres on critical pedagogies of technology and subject specialist pedagogies, in the context of new challenges to critical thinking and humanist epistemology.

For two decades, Helen has advised global universities and international bodies on their digital education strategies, producing influential horizon scanning and research reports for Jisc. Her Digital Capabilities framework is a standard across UK Higher and Health Education, and she contributed to the European Union’s DigCompEdu framework, which incorporates AI and data competencies. An experienced educator, she has developed and taught master’s courses in education and learning design, and currently documents her research via her Substack, Imperfect Offerings.


About Avallain

For more than two decades, Avallain has enabled publishers, institutions and educators to create and deliver world-class digital education products and programmes. Our award-winning solutions include Avallain Author, an AI-powered authoring tool, Avallain Magnet, a peerless LMS with integrated AI, and TeacherMatic, a ready-to-use AI toolkit created for and refined by educators.

Our technology meets the highest standards with accessibility and human-centred design at its core. Through Avallain Intelligence, our framework for the responsible use of AI in education, we empower our clients to unlock AI’s full potential, applied ethically and safely. Avallain is ISO/IEC 27001:2022 and SOC 2 Type 2 certified and a participant in the United Nations Global Compact.

Contact:

Daniel Seuling

VP Client Relations & Marketing

dseuling@avallain.com

What Makes Feedback Meaningful and How Can AI Enhance Teacher-Led Delivery

In the latest Language Teaching Takeoff Webinar, Joanna Szoke, freelance teacher trainer and AI in education specialist, explored what makes feedback truly impactful and demonstrated how the new TeacherMatic ‘Advanced Feedback’ generator can help teachers deliver meaningful, timely feedback at scale.

What Makes Feedback Meaningful and How Can AI Enhance Teacher-Led Delivery

London, April 2026 – In ‘Provide Meaningful, Timely Feedback at Scale with the Power of AI’, Joanna Szoke examined the role feedback plays in learner progress, focusing not just on providing it, but on what makes it truly impactful. She also introduced and demonstrated the new TeacherMatic ‘Advanced Feedback’ generator, showing how it can empower teachers to deliver feedback at scale, save time and use AI in a safe, ethical and teacher-led way.

Moderated by Giada Brisotto, Senior Marketing and Sales Operations Manager at Avallain, the session focused on how feedback should do more than comment on performance. It should motivate, inspire and give learners clear opportunities to improve and progress.

Feed Forward, Not Just Feedback

One of Joanna Szoke’s favourite topics, and a key area of expertise, is feedback and assessment in language teaching. She opened the session by asking an important question: what makes feedback useful?

Joanna reiterated that effective feedback should do more than just review performance: it should help students move forward, supporting progress and building confidence.

She also highlighted the importance of timing and specificity. Feedback is most valuable when learners can still act on it and when it includes clear explanations, relevant examples and practical actions for improvement.

Finally, Joanna suggested that feedback can also come from self-reflection and peer review. This shift to student-centred learning allows for greater ownership and even reduces teacher workload. 

Reducing Workload Without Reducing Quality

Feedback is not only important, but also one of the most time-consuming responsibilities teachers face. Alongside approaches such as self-assessment and peer review, Joanna demonstrated how TeacherMatic can enable teachers to reduce workload while still delivering impactful, effective feedback.

She introduced the new ‘Advanced Feedback’ generator. Designed to support teachers while keeping professional judgement central, it streamlines feedback workflows without compromising quality. Key features include bulk uploads, Cambridge English alignment, customisable criteria, support for handwritten submissions and annotated feedback for text-based work.

With a simple setup process, teachers can create an assignment, upload the brief or paste instructions, then choose criteria-based feedback, annotated feedback or both.

For criteria-based feedback, teachers can select their own criteria or Cambridge English criteria, with options such as Accuracy and Grammar, Vocabulary and Word Choice, Coherence and Cohesion, and Fluency and Communication. Teachers can also select CEFR levels before saving the assignment and inviting submissions.

Feedback at Scale, Teachers in Control

Once assignments are created, teachers can upload one submission or bulk-upload multiple pieces of student work, making it far easier to manage feedback at scale.

Joanna highlighted that efficiency should never come at the expense of responsibility. When using AI to assess or evaluate student work, teachers should be transparent with learners and seek consent before uploading submissions.

She also emphasised that the generator is there to support the feedback process, not replace it, explaining that it should ‘help me with feedback, not produce the entire feedback’, and reinforcing the importance of keeping teachers as active participants throughout the process. Teachers should review outputs, refine responses and make the final professional judgement before anything is shared with students.

Practical Outputs for Teachers and Learners

Joanna then explored the structure of the feedback provided. It is practical, clear and ready to refine.

A dedicated ‘For Teacher’ view provides a more detailed breakdown, including performance against selected criteria, recognised strengths, areas for improvement and a corresponding CEFR level. Teachers also receive a written summary of the submission, alongside suggested next steps to guide future progress.

The ‘For Student’ view uses more targeted language with phrasing such as ‘You can form basic sentences, but check your verb tenses.’ This creates feedback that is more personal and easier for students to act on.

Taking Feedback Further

Even when useful, impactful feedback has been generated, Joanna recognised that it may still need a follow-up activity to reinforce learning, such as a gap-fill. The refine option allows teachers to do this: they can adapt the tone, ask to increase motivation or generate additional tasks tailored to specific learner needs.

For example, teachers can request extra practice activities that target recurring mistakes. This can turn feedback into continued learning rather than a final comment.

She also demonstrated the highly practical option of uploading handwritten PDF submissions, recognising that handwritten work remains common in many teaching contexts and continues to offer value for learners.

Joanna then showcased the power of annotated feedback for text-based submissions, where comments are automatically added directly to the student’s work. These annotations can be edited, removed or expanded with the teacher’s own feedback, creating a fast and flexible way to personalise responses.

When sharing feedback with learners, teachers can export it as a PDF or copy it into a Word document for further editing. As Joanna noted, this allows teachers to retain the human element while benefiting from a more efficient workflow.

Putting Teachers and Feedback at the Centre of the Learning Journey

As Joanna highlighted throughout the session, TeacherMatic is far more than a generic AI tool; it is designed specifically for language teaching workflows. The Language Teaching Edition has been built for language educators, with over 50 purpose-built generators designed to make language teaching faster and more effective.

The new ‘Advanced Feedback’ generator is a clear example of this. It reduces the workload of delivering detailed feedback by empowering teachers to provide timely, meaningful feedback at scale.

Rather than replacing professional judgement, the generator strengthens it. Teachers set the criteria, review outputs, refine responses and decide what is ultimately shared with learners. The result is a more efficient workflow that saves time, supports consistency and places teachers and feedback where they belong, at the centre of the learning journey.

Explore the TeacherMatic Language Teaching Edition

From planning CEFR-aligned lessons and creating high-quality activities to implementing structured feedback workflows and more, the TeacherMatic Language Teaching Edition is built on recognised language teaching methodologies and developed with input from the International House World Organisation, NILE, Eaquals and English UK.

Designed as a safe and ethical AI toolkit for language teachers, it delivers reliability, strong pedagogical alignment and outputs created for use inside and outside the classroom.

Next in the Webinar Series

Make Informed CEFR Alignment Decisions In the Age of AI

🗓 Thursday, 14th May
🕛 12:00 – 12:30 BST (13:00 – 13:30 CEST)

Join award-winning educator Nik Peachey as he introduces the new ‘CEFR Alignment for Teachers: In the Age of AI’ course.

See how to apply CEFR principles in a structured, practical way using TeacherMatic. Learn how to make informed decisions, maintain pedagogical integrity and adapt outputs to different learner contexts while retaining full professional control.



New CEFR Alignment Course Developed in Collaboration with NILE

Avallain has launched ‘CEFR Alignment for Teachers: In the Age of AI’, a new online course for language teachers, developed in collaboration with CEFR specialists Dr Elaine Boyd and Thom Kiddle at Norwich Institute for Language Education (NILE). Available on Avallain Magnet, the course officially launches at IATEFL 2026 and supports teachers in applying CEFR principles to AI-generated and classroom materials with confidence.

New CEFR Alignment Course Developed in Collaboration with NILE

St. Gallen, April 2026 – ‘CEFR Alignment for Teachers: In the Age of AI’, a free, interactive course, is now available on Avallain Magnet, our peerless, AI-integrated learning management system. It will be officially launched at the IATEFL International Conference and Exhibition 2026 (21st–24th April). 

Developed through the shared efforts of the Avallain team and CEFR specialists Dr Elaine Boyd and Thom Kiddle at NILE, it helps language teachers align, evaluate and adapt generated texts, while strengthening their ability to make pedagogically sound decisions for learners at different CEFR levels.

A Framework That Continues to Shape Language Education

In 2001, the Common European Framework of Reference for Languages (CEFR) marked a defining moment in language education, establishing a standard framework for describing language proficiency and achievement. Over the past 25 years, it has had a significant impact across course design, level benchmarking, assessment frameworks and published learning materials.

While the CEFR has been widely used, alignment has not always been done consistently or transparently. In some instances, claims of CEFR alignment are not clearly substantiated or supported by defined principles or practices. This raises important questions about validity and professional accountability, which this course aims to address by deepening understanding and improving alignment decisions.

CEFR Alignment in the Age of AI

The rapid growth and adoption of AI in language education was another key driver behind the creation of ‘CEFR Alignment for Teachers: In the Age of AI’. Teachers can now generate context-specific, personalised learning materials more quickly than ever, creating new opportunities to adapt content to learners’ needs with greater speed and flexibility.

However, as seen in past misuse of the CEFR, the availability of these tools does not in itself ensure that materials are appropriate for a given level. The risk of misalignment remains, particularly where outputs are not evaluated against the descriptors, scales and principles that underpin the framework.

The course addresses this challenge and reinforces the need for informed teacher judgment by strengthening teachers’ knowledge and skills in applying the CEFR. Its aim is to build confident teachers who can make sound decisions and ensure that alignment claims are both pedagogically sound and professionally defensible.

Flexible Learning, Grounded in Practice

During the course, language teachers will gain a broad understanding of the CEFR’s scope, familiarise themselves with specific levels and scales and ultimately deepen their knowledge of its structure.

Delivered on Avallain Magnet, this course is flexible, interactive and self-paced. It will strengthen teachers’ confidence in deciding how to use texts for learners at different CEFR levels and enhance their understanding of how to adapt AI-generated texts and tasks for specific scales. 

As CEFR alignment expert Dr Elaine Boyd explains, ‘This course is designed to really help teachers align the CEFR scales and descriptors with the specific needs of their classes. And the great thing is, teachers can dip in and out of it when they have time and build their skills at their own pace.’

From Understanding to Informed Application

The course provides an overview of the CEFR, introducing its descriptors, their defining features and how one level differs from another.

Through interactive modules, participants will engage with illustrative descriptors, analyse authentic written and listening texts and practise discriminating between descriptors at different levels in the same scale, including the ‘plus levels’. 

David Moxon, Learning Technology Specialist and Content Developer at Avallain, who helped develop and publish the course on Avallain Magnet, explains, ‘While it is important for participants to gain a broad understanding of the CEFR framework, it is equally critical that they engage with it. Interactive exercises, such as benchmarking tasks, will help translate theory into practice. The learning environment also offers the opportunity for teachers to assess their progress throughout the course and evaluate their confidence in a final self-assessment.’

As AI becomes part of everyday language teaching, this course supports teachers in working more effectively with AI-generated content and is designed to complement the use of the TeacherMatic Language Teaching Edition, a trusted AI toolkit that empowers language educators ethically and safely.

The collective aim was not to deny the role of AI, but rather to reinforce the importance of professional judgement and ensure that alignment decisions are informed by context, pedagogy and a clear understanding of the framework.

Reflecting on the course design, Thom Kiddle, NILE Director and CEFR specialist, notes, ‘We really enjoyed designing the course and thinking creatively about how to draw teachers’ focus to the horizontal dimension of the CEFR across all the different modes of communication, and to really engage with the way the individual descriptors are worded and what that means for learner language ability.’

Designed to Support Professional Growth

This course is intended for language teachers who are already familiar with the fundamentals of the CEFR and are looking to deepen their understanding and strengthen their practical application of it. It is also relevant for academic managers, senior teachers, syllabus designers and edtech coordinators involved in curriculum development and learning design.

While no prior knowledge of AI is required, the course recognises the growing role of AI content in language education and supports teachers working with both AI-generated and traditionally developed materials.

Official Launch at IATEFL 2026

From the 21st to the 24th of April, the Avallain team will attend the IATEFL International Conference and Exhibition 2026 in Brighton (UK). This event will bring together English language teaching professionals and enthusiasts from around the world, providing an excellent opportunity for the official launch of ‘CEFR Alignment for Teachers: In the Age of AI’.

The course reflects a joint commitment to an honest and professional approach to working with the CEFR, supporting educators in making sound, evidence-based decisions for learners at every level.


About NILE

NILE is one of the world’s biggest providers of training and development for English language teaching. Based in the UK and working internationally, NILE provides expert-led programmes online and in person, supporting educators, institutions and ministries worldwide. They are regularly involved in the development and implementation of large-scale education reform projects around the world.

NILE is a member of English UK and holds accreditation from the British Council, Eaquals and AQUEDUTO, reflecting its commitment to quality, professional standards and responsible practice.


Avallain Named Among Europe’s Top AI Solutions: Here’s Why Responsible AI in Education Matters

Education Technology Insights Europe has included Avallain in its ‘Top Artificial Intelligence Solutions in Europe’ selection, reflecting the growing importance of responsible and effective AI adoption in digital education.

Avallain Named Among Europe’s Top AI Solutions: Here’s Why Responsible AI in Education Matters

St. Gallen, April 2026 – Education Technology Insights Europe has named Avallain among its ‘Top Artificial Intelligence Solutions in Europe’. The recognition highlights Avallain’s work with publishers, institutions and educators to responsibly integrate AI into digital education, with ethics, safety and practical impact at the core.

As a specialised industry magazine, Education Technology Insights Europe is focused on the evolving education landscape. It supports institutions, administrators and technology leaders in navigating digitally enabled learning environments, with company selections informed by subscriber nominations, editorial research and insights from an industry advisory panel.

Avallain’s inclusion in this list reflects a growing priority across the sector: implementing AI in ways that are not only innovative, but also practical, ethical, safe and aligned with educational goals.

Supporting Safe, Ethical and Human-Centred AI Adoption in Education

For organisations developing and delivering digital education, the challenge is no longer whether to adopt AI, but how to do so in a way that is effective, safe and aligned with educational goals.

This requires more than standalone tools; it demands a structured approach grounded in real teaching, learning and content development needs. By working closely with publishers, schools and educators, Avallain supports the development of AI capabilities that respond directly to classroom realities, curriculum requirements and operational demands.

Avallain Intelligence: A Practical Framework for Publishers, Schools and Educators

At the centre of this approach is Avallain Intelligence, Avallain’s framework for the responsible use of AI in education. It provides organisations with a clear and structured way to adopt AI with confidence while maintaining a strong human-centred focus.

Through Avallain Intelligence, organisations benefit from:

  • AI designed to support and empower publishers, content creators, schools and educators, not replace them.
  • Tools that reduce administrative workload while preserving pedagogical control and creativity.
  • Clear standards for data privacy, security and ethical use.
  • Reduced risk when introducing AI into existing products and programmes.
  • Alignment with regulatory requirements and institutional policies.

This ensures that human expertise remains at the core of digital education, with AI acting as a support layer that enhances quality, efficiency and impact.

Enabling Scalable and Cohesive Digital Education

For publishers and institutions, one of the key challenges is ensuring that AI is not introduced in isolation but integrated across the full learning experience.

Avallain supports this through a connected ecosystem:

  • Avallain Author, our flexible, AI-powered authoring tool
  • Avallain Magnet, our peerless, AI-enhanced learning management system
  • TeacherMatic, Avallain’s AI toolkit designed for and refined by educators

Together, these solutions enable organisations to streamline workflows, reduce complexity and ensure consistency across content, delivery and teaching support.

Supporting Better Outcomes in a Changing EdTech Landscape

As AI becomes a standard component of digital education, organisations are increasingly focused on outcomes such as improving efficiency, maintaining quality and supporting educators without adding unnecessary complexity.

‘Organisations need more than access to AI. They need clarity, control and confidence in how it is applied. Through Avallain Intelligence, we support our clients and partners in implementing AI in ways that are responsible, effective and aligned with their educational objectives, while ensuring that educators and content experts remain at the centre of the process’, said Ursula Suter, Executive Chairwoman and Co-Founder at Avallain.



Use TeacherMatic’s AI Tools to Inspire, Monitor and Motivate in Everyday Teaching

The latest Language Teaching Takeoff Webinar welcomed first-time guest host Pilar Capaul. As a language teacher and ELT content creator, she shared examples from her own lessons to demonstrate how teachers can use the TeacherMatic Language Teaching Edition to monitor understanding and create engaging activities.

Use TeacherMatic’s AI Tools to Inspire, Monitor and Motivate in Everyday Teaching

London, March 2026 – In ‘Inspire, Monitor, Motivate: Practical AI Tools for Everyday Teaching,’ Pilar showcased the ‘Did you do your homework?’ and ‘Inspiration!’ generators, demonstrating how two of her favourite TeacherMatic AI tools can be used to check learner comprehension and create engaging classroom activities. Drawing on examples from her own lessons, she showed how teachers can adapt tasks to suit different learner profiles, topics and levels.

Moderated by Giada Brisotto, Senior Marketing and Sales Operations Manager at Avallain, the session explored how everyday classroom challenges can be approached with greater confidence and new, creative ideas for lessons and activities.

An AI Toolkit for Everyday Language Teaching Tasks

Pilar introduced the TeacherMatic Language Teaching Edition, an AI toolkit she values for the range of practical tasks it supports in everyday teaching. With more than 50 generators designed for language educators, teachers can plan lessons, adapt materials and generate meaningful activities that contextualise language for learners. 

She also highlighted that teachers can select the methodology they want to apply, ensuring that the generated activities and resources align with their preferred teaching approach.

Assessing What Students Really Understood

Homework is an important starting point for any lesson. As students enter the classroom, Pilar wants a quick sense of whether they engaged with the material and understood the key ideas. As she explained during the session, ‘I don’t just want to know if they did it. I want to know if they understood it.’

Simply asking students to raise their hands to confirm they completed a homework task rarely provides this level of insight. Instead, our host demonstrated how teachers can use the ‘Did you do your homework?’ generator to turn homework checks into short activities that reveal what learners have actually understood.

Turning Homework Checks into a Lesson Warm-Up

Using a homework task she had set for an upper-intermediate class studying environmental topics, Pilar illustrated her approach to assessing comprehension. Students were asked to watch a video at home and create a mind map highlighting key information. To check understanding, she uploaded the video transcript to the ‘Did you do your homework?’ generator and asked it to produce three short summaries, only one of which correctly reflected the content.

Pilar tailored the activity to B1 learners with a medium-length output. She also included an optional description of the class: energetic teenagers with short attention spans who are accustomed to fast-paced content on platforms like TikTok. The goal was to create something that would capture their attention immediately, while illustrating how teachers can also adapt content to specific learner needs and different classroom contexts.

Refining for Real Classroom Settings

Below the generated content, teachers can find an answer key. Because teachers often juggle several classes and set many tasks, this resource provides additional reassurance.

While the generated result already provided what was needed to evaluate learner understanding, she decided to push the platform a little further by considering her learner profile more closely. These students may not be particularly engaged by a topic such as pollution, so she refined the results by suggesting ‘add jokes to make it engaging for teenagers.’ Pilar reminded teachers that AI can also be guided in other ways, for example, by asking it to focus on specific grammar points, such as the present simple, to use narrative tenses or simply to make the activity more playful and engaging.

The updated output showed how even small adjustments can make a noticeable difference. Rather than relying on a standard textbook-style activity, she had something tailored to her learners. She added the task to her lesson plan and asked students to identify the correct summary, creating a lively warm-up at the start of the lesson. This activity encourages students to revisit the homework, reflect on what they have learned and discuss the topic together, while also giving the teacher a clear sense of how well they have understood the video.

Finding Inspiration When a Topic Feels Uninteresting

Sometimes teachers need to cover topics that are not immediately engaging. The ‘Inspiration!’ generator enables teachers to quickly and easily make these lessons feel relevant, meaningful and motivating. 

To demonstrate this generator, our host used a group of her own adult learners. These are A2-level students who had studied English before but were returning to it after a break. They had practised the present simple many times and were beginning to feel frustrated, even though they still needed more practice. In this case, the question was: how do we approach the topic differently and make it fresh again?

Creating and Refining Activities for Greater Engagement

Pilar began by describing the learner profile: in her example, a group of busy adults who want to make progress quickly. She then explored the additional settings, selecting the Communicative Language Teaching model so the activities would focus on speaking practice.

The result was a range of classroom ideas connected to the topic ‘Routines around the world’, including matching routines to different cultures, role-play activities based on daily schedules and short quizzes designed to practise question formation. Rather than repeating familiar coursebook exercises, the activities provided new ways to approach the same language point while keeping learners actively involved.

She also illustrated how these ideas can be refined further. When the webinar participants suggested turning the activities into games, she typed ‘include more games’ into the refine box. The regenerated output included additional suggestions, such as board games, creating opportunities for students to practise the language while focusing on interaction and friendly competition.

From Ideas to Real Classroom Use

Throughout the session, Pilar emphasised that the value of these generators lies in how teachers use and adapt the results. She also highlighted the information icon available within each generator, which provides guidance, examples and practical tips for getting the most out of each tool.

Once activities are generated, they can be exported and reused in future lessons. Pilar advised users to save outputs so they can be incorporated into lesson planning, revisited for revision activities or shared with colleagues to see how they work in different classrooms. In this way, the generated ideas become part of a broader teaching process rather than a one-off resource.

By combining quick activity generation with teacher judgement and refinement, the TeacherMatic Language Teaching Edition can support teachers in creating lessons that remain engaging, adaptable and relevant to their learners.

Explore the TeacherMatic Language Teaching Edition

The TeacherMatic Language Teaching Edition provides language educators with practical, safe AI tools for planning lessons, generating classroom activities and developing engaging language learning experiences. Teachers remain in control of every step, reviewing and refining outputs so they reflect their teaching approach, learners and classroom context.

Next in the Webinar Series

Provide Meaningful, Timely Feedback at Scale with the Power of AI

🗓 Thursday, 16th April
🕛 12:00 – 12:30 BST | 13:00 – 13:30 CEST

Join Joanna Szoke, freelance teacher trainer and AI in education specialist, for the next session in the Language Teaching Takeoff Webinar Series as she explores the challenges of delivering meaningful, timely feedback and the role AI can play in supporting this process. 

See the new Advanced Feedback generator in action, designed to support feedback workflows at scale while maintaining teacher oversight.


About Avallain

For more than two decades, Avallain has enabled publishers, institutions and educators to create and deliver world-class digital education products and programmes. Our award-winning solutions include Avallain Author, an AI-powered authoring tool, Avallain Magnet, a peerless LMS with integrated AI, and TeacherMatic, a ready-to-use AI toolkit created for and refined by educators.

Our technology meets the highest standards with accessibility and human-centred design at its core. Through Avallain Intelligence, our framework for the responsible use of AI in education, we empower our clients to unlock AI’s full potential, applied ethically and safely. Avallain is ISO/IEC 27001:2022 and SOC 2 Type 2 certified and a participant in the United Nations Global Compact.

Contact:

Daniel Seuling

VP Client Relations & Marketing

dseuling@avallain.com

Plan a Comprehensive and Impactful Course with TeacherMatic

The latest Language Teaching Takeoff webinar welcomed back educator and edtech specialist Nik Peachey, who explored how the TeacherMatic Language Teaching Edition can support the full cycle of planning: from course design to detailed lesson preparation, through to meaningful lesson wrap-ups that reinforce learning.


London, February 2026 – In ‘Plan Smarter and Teach with Confidence,’ Nik focused on course planning and its often time-intensive components. He demonstrated how teachers, academic managers and directors of studies can use TeacherMatic’s AI generators, including the ‘Scheme of Work / Curriculum Plan’ generator, to support this work while maintaining professional control.

Moderated by Giada Brisotto, Senior Marketing and Sales Operations Manager at Avallain, the session moved beyond high-level planning into the detail: from course design through to fully structured lessons and effective wrap-ups.

Before Planning a Course 

Nik began by acknowledging the time-intensive nature of developing effective courses. He emphasised the importance of reducing repetitive preparation, building clear planning structures and aligning content with learner levels. To support this process, TeacherMatic provides AI tools for each stage of course development, enabling teachers to build structured plans while keeping content aligned with the CEFR.

He also demonstrated how generators can be quickly located using simple filter settings. Users can filter by task or role to surface the most relevant tools and favourite the ones they use most often, making the planning process more efficient.

Before moving into the generators themselves, Nik encouraged participants to consider lesson wrap-ups as part of the planning process. This step is often overlooked but plays an important role in reinforcing learning and supporting retention at the end of each lesson.

Creating a Course Plan

Nik opened the demonstration with the ‘Scheme of Work / Curriculum Plan’ generator, showing how users can plan a course for a specific group of learners. Using the Sustainable Development Goals as the course theme, he defined key topics, set the number of sessions to six and selected a table format at the B1 level. Additional details, such as learner age and optional support materials, were added to personalise the course further.

He also selected a pedagogical model, choosing Task-Based Learning, and showed how course creators can receive guidance on learning needs. The result was a clearly structured scheme of work presented in table form, with six session titles and supporting descriptions. Each session followed a task-based framework with pre-task, main task, post-task and wrap-up stages, and concluded with a review and action plan. 

Building Out Individual Lessons

Once a course plan is in place, each session needs to be developed in greater detail. A lesson outline alone is rarely sufficient, so the focus shifted to how the ‘Lesson Plan’ generator can expand a single session into a fully structured lesson. Nik demonstrated how to define a topic, clarify lesson aims, and set timing and a pedagogical model, all while keeping the lesson aligned with CEFR levels, skills and subscales.

The generated plan followed a clear, task-based structure. It was organised to include an introduction, main activity, language focus and summary, with suggested resources and homework. This provided a detailed foundation that could be refined and adapted, enabling teachers, academic managers and directors of studies to move from outline to delivery with greater confidence, while reducing preparation time. 

Reinforcing Learning as Part of the Plan

The final stage of the workflow focused on lesson wrap-ups. This is an area often overlooked in planning but essential for reinforcing learning and encouraging reflection.

Using the ‘Lesson Wrap-Up’ generator, Nik showed how teachers can set the topic, CEFR level and learner profile, as well as include specific learning needs or supporting materials. The generator then produces a range of structured activities designed to check understanding and prompt reflection. Activities included true-or-false checks, gap fills, discussion prompts and poster creation, which Nik noted was a particularly effective way for learners to reflect while engaging more creatively with the topic.

By building this final stage into the planning process, teachers can close lessons with purpose, allowing learners to review, reflect and retain key language while ensuring that each session connects clearly to the wider course.

From Big Picture to Lesson Reflection

A strong course considers each stage of the teaching process, from the initial structure through to the reinforcement of learning at the end of a lesson. Nik demonstrated how this full workflow can be supported within TeacherMatic, progressing from a course plan to detailed lesson planning and, finally, to lesson wrap-ups that consolidate learning.

With CEFR alignment embedded throughout, teachers can build from the big picture into individual sessions and then use additional generators to create supporting materials. Nik demonstrated how filters, such as ‘Speaking’ and ‘Reading’, can quickly identify relevant tools, enabling teachers to produce resources aligned with lesson objectives. Plans and materials can be saved and shared across a school account, supporting collaboration and reducing duplication. 

Together, this structured flow enables teachers, academic managers and directors of studies to plan with greater clarity, maintain professional control and ensure that each lesson contributes meaningfully to the wider course.

Explore the TeacherMatic Language Teaching Edition

For educators seeking greater clarity and consistency in planning, the TeacherMatic Language Teaching Edition provides CEFR-aligned generators to support course design, lesson development, course materials and lesson wrap-ups, with the flexibility to refine and adapt plans across contexts.

Next in the Webinar Series

Inspire, Monitor, Motivate: Practical AI Tools for Everyday Teaching

🗓 Thursday, 12th March
🕛 12:00 – 12:30 GMT | 13:00 – 13:30 CET

Join first-time guest host Pilar Capaul, language teacher and ELT content creator, for a practical session focused on real classroom use cases. 

Pilar will demonstrate how two TeacherMatic generators can support everyday teaching by drawing on examples from her own lessons. See how the ‘Did you do your homework?’ generator can be used to check understanding and completion, and how the ‘Inspiration!’ generator can spark motivation and engagement.



Make Exam Preparation More Engaging and Effective

The first Language Teaching Takeoff Webinar of the year welcomed AI in education specialist and freelance teacher trainer Joanna Szoke, who explored how teachers can use the TeacherMatic Language Teaching Edition to create dynamic, engaging exam practice.


London, January 2026 – In ‘Create Dynamic and Engaging Exam Practice for Your Students’, Joanna discussed assessment and feedback. She demonstrated how teachers can use the ‘Cambridge Style Exam Prep Generator’ and ‘Worksheet’ generator to produce targeted materials for learners preparing for high-pressure assessments.

Moderated by Giada Brisotto, Senior Marketing and Sales Operations Manager at Avallain, the session reinforced the importance of moving beyond assessment as simply a grade, positioning it instead as an opportunity to support learner progress and give teachers clearer insight into what to reinforce and revisit.

Assessment and Feedback

Joanna began by emphasising the close relationship between assessment and feedback, describing them as a continuous cycle rather than separate classroom tasks. When assessment is used as an ongoing process, it becomes a practical way to identify what learners understand, where they need further support and how teachers can adapt to meet those needs.

Rather than treating assessment as an endpoint, Joanna encouraged teachers to use it as a guide to strengthen learner progress and to ensure that feedback remains purposeful and actionable.

Exam English vs Real-Life English

Exam preparation can easily become focused on format and technique, but meaningful practice also needs to develop transferable communication skills. Joanna stressed the importance of connecting exam tasks to real-life language use. By making this connection, teachers ensure that learners can apply what they practise beyond the assessment setting.

Joanna explained how exam-style activities can be adapted to reflect authentic contexts and learner interests, keeping preparation engaging while still targeting the specific demands of the assessment. This approach supports both exam readiness and broader language development without compromising either.

Cambridge-Style Exam Practice in Action

To bring these ideas into a practical teaching context, Joanna demonstrated the ‘Cambridge Style Exam Prep Generator’ and how language educators can use it to create practice tasks aligned with Cambridge English levels A2 Key, B1 Preliminary, B2 First and C1 Advanced. Depending on the level selected, the generator supports different paper formats, including Reading and Writing at A2 Key, Reading at B1 Preliminary and Reading and Use of English at both B2 First and C1 Advanced.

Joanna highlighted how quickly teachers can generate exam-style materials, then refine them to suit their learners and classroom context. Teachers can adjust the topic, language focus or task demands to create more relevant practice and keep preparation adaptable. She also emphasised that these materials are intended solely for practice. Teachers should use them alongside official past papers and published exam preparation resources, with teacher review and adaptation remaining essential.

Flexible Worksheets for Targeted Practice

To build level-appropriate practice materials that can be adapted to different teaching contexts, Joanna also showcased the ‘Worksheet’ generator. Worksheets are a reliable format for reinforcing learning and checking understanding, particularly during assessment preparation.

The demonstration highlighted how teachers can generate worksheets on almost any topic, select activity types and adjust outputs to reflect learner profiles and specific needs. Teachers can also refine results further, remove suggested answers where appropriate and export content into editable formats for layout changes and added visuals. This flexibility makes it easier to create engaging, targeted practice while keeping teacher review and adaptation central.

Supporting Confident Exam Preparation

Effective exam preparation is not only about measuring performance. It is also an opportunity to strengthen learning through purposeful assessment, timely feedback and targeted practice that reflects real assessment demands.

With CEFR alignment built into the TeacherMatic Language Teaching Edition, teachers can generate level-appropriate materials that support structured preparation and classroom needs. By combining tools such as the ‘Cambridge Style Exam Prep Generator’ and the ‘Worksheet’ generator with professional judgement and refinement, teachers can create engaging practice that supports learner confidence and readiness when it matters most.

Explore the TeacherMatic Language Teaching Edition

Built for language teaching, the TeacherMatic Language Teaching Edition enables teachers to create CEFR-aligned materials for exam preparation, assessment, classroom practice and more, with flexibility to refine outputs for different learners and contexts.

Next in the Webinar Series

Plan Smarter and Teach with Confidence

🗓 Thursday, 12th February
🕛 12:00 – 12:30 GMT | 13:00 – 13:30 CET

Join award-winning educator Nik Peachey as he demonstrates how to use planning generators in the TeacherMatic Language Teaching Edition. See AI tools such as the ‘Scheme of Work/Curriculum Plan’ generator, which are designed to support teachers, academic managers and directors of studies in reducing repetitive preparation and creating structures that can be adapted to any teaching context.



Language Education and Technology in Times of Rapid Change: Ahead of the TISLID Conference

Rapid technological, social and linguistic change is reshaping language education and research. In this piece, Prof John Traxler reflects on Avallain’s collaboration with the TISLID conference series (Technological Innovation for Specialized Linguistic Domains), exploring the limits of traditional, stable frameworks and considering why more adaptive, responsive models are increasingly necessary. This article also highlights the importance of sustained dialogue between researchers and education technology developers in translating research into practice.


Author: Prof John Traxler, UNESCO Chair, Commonwealth of Learning Chair and Academic Director of the Avallain Lab

St. Gallen, January 16, 2026 – Language itself, language learning and digital education are evolving faster than ever, and the three are becoming inextricably mixed as digital technologies, especially AI, become cheaper, easier to use and more widely accessible, and as societies become more global, connected, changeable and mobile. 

This means that relevant research must not only be conducted quickly and effectively, but also disseminated and applied just as quickly, feeding into technical development and pedagogic delivery and extending beyond research communities. The interface between academic research communities and the edtech sector therefore needs to be effective and responsive, but it has its problems. 

The Limits of Traditional Publishing

Publications, meaning books and journals, used to be the gold standard, their trustworthiness and relevance guaranteed by peer review conducted blind by expert reviewers. They are now less widely used because the rapidity of technical, educational and social change means they struggle to keep up, books especially, and because their readership is very limited. 

Research journals have their own problems. Over the last decade, research funders in both the UK and the EU have insisted on ‘open’ publication, meaning journals must be freely available to any interested reader: no paywall, no subscription, no restrictions. This, however, has disrupted publishers’ business model, which previously relied on libraries and readers paying to read. Publishers must now derive their income from writers rather than readers, introducing an APC (article processing charge) of several hundred to several thousand euros or dollars. 

Professional researchers are, of course, still under systemic pressure from their institutions to ‘publish or perish’ in order to raise institutional rankings, and so ‘predatory journals’ have emerged, with dubious credentials and dubious quality assurance, happy to publish very quickly on receipt of the appropriate APC. AI has only amplified these problems, partly because of the rapidly increasing volume of AI research to be published and partly because some of it is probably specious, written by AI itself. This account is a slight simplification, and there are exceptions to each of these assertions, but the general direction of travel is as described.

Responding to Change: Avallain Lab and the Importance of Dialogue

This state of affairs was, incidentally, one of the reasons for establishing the Avallain Lab, namely, creating a more responsive and trustworthy interface between research and the company, and building in expertise and experience as publication becomes less straightforward.

In turn, this shift means that the other medium of dissemination, gatherings such as seminars and conferences, becomes correspondingly more important. 

This leads us to our collaboration with an upcoming conference on shared interests, including language, learning and digital technologies. The conference is part of the TISLID series, ‘Technological Innovation for Specialized Linguistic Domains’, a long-running series hosted by the ATLAS research group, ‘Applying Technology to Languages’, at UNED, Spain’s national distance learning university, based in Madrid. It takes place in Úbeda, Spain, from the 22nd to the 24th of April 2026.

Rethinking Language Teaching and Linguistic Research in a Liquid World

The conference series aims to foster interdisciplinary dialogue among teachers, researchers and professionals on how to rethink language teaching and linguistic research in a liquid world, in Zygmunt Bauman’s sense: a world that is never stable long enough to comprehend and that is characterised by change, uncertainty and digitalisation.

‘Language Research and Education in Fluid Times: The Rise of Adaptive Competences’ is the theme of the next iteration. It focuses on the study, teaching and learning of languages in a world in a constant and vertiginous state of evolution and transformation, of identity as well as of relationships. This world is driven by multilingual needs and conditioned by globalisation, digital technology, mobility and artificial intelligence.

The title aims to suggest how human activity must adapt to unprecedentedly dynamic contexts in which linguistic, cultural, technological and communicative boundaries are increasingly blurred. In these contexts, human beings face uncertainty, diversity and new realities, some unforeseen, many ephemeral, that demand solutions that are both ethical and open, innovative and adaptive, hybrid and transdisciplinary.

The Rise of Adaptive Competence

In response to these conditions, the concept of adaptive competence becomes central. Rooted in soft or transversal skills, adaptive competence encompasses abilities such as cognitive flexibility, communicative resilience, digital and media literacy and intercultural competence. 

The conference reflects a probable paradigm shift in language education and research: one that moves from stable, prescriptive frameworks toward fluid, adaptive models better aligned with the complexities and transformations of contemporary societies. With such a shift, edtech developers and the wider sector clearly need to listen closely and frequently to researchers and their findings. Avallain is pleased to be working with this community of researchers and to be involved in its conference and its publications as part of an ongoing mission to lead the sector in translating research into action.


Responsibly Adopting AI in Language Education

For the final episode of 2025, the Language Teaching Takeoff Webinar Series brought together experts from across language education and edtech to examine how AI can be adopted responsibly in teaching practice.


London, December 2025 – In ‘Transforming Language Teaching with Ethical AI: A Panel Discussion’, educator and edtech consultant Nik Peachey, teacher and ELT content creator Pilar Capaul, teacher trainer and lecturer Joanna Szoke, and Ian Johnstone, VP Partnerships at Avallain, discussed ethical considerations, institutional responsibility and practical ways to integrate AI with confidence.

Moderated by Giada Brisotto, Senior Marketing and Sales Operations Manager at Avallain, the session examined how AI toolkits, such as the TeacherMatic Language Teaching Edition, can be used in teaching practice to improve efficiency without diminishing teacher agency.

The Potential and Advantages of AI in Language Teaching

Opening the discussion, Nik identified time as one of the most persistent challenges for language teachers, from marking and lesson planning to adapting materials for specific classroom contexts. He noted that while coursebooks provide structure, they are often designed for global audiences and may not fully reflect the needs of individual learners.

Nik explained that AI can help teachers adapt and extend materials more efficiently, supporting personalisation without adding complexity. He referenced AI toolkits such as the TeacherMatic Language Teaching Edition, where generators and CEFR-aligned inputs reduce reliance on prompt-writing skills and support differentiation, especially for learners with diverse needs.

From a teacher training perspective, Joanna reinforced this point by highlighting speed and responsiveness as key advantages. She explained how AI tools enable teachers to generate resources for niche teaching contexts and specific learner profiles, allowing educators and trainers to focus more on pedagogy and professional reflection rather than on content production.

AI in the Classroom: Practical Examples that Move Beyond Content Creation

Drawing on classroom experience, Pilar discussed how AI-generated activities can serve clear learning purposes rather than simply producing content.

Using TeacherMatic generators like ‘Did you do your homework?’, she replaces a simple homework check with a diagnostic warm-up that reveals whether learners have truly understood a task, enabling her to decide how the lesson should progress and where support is most needed.

To make reading more purposeful, she uses the ‘Ask an Expert’ generator, which prompts learners to read with intent, question information and evaluate meaning rather than read passively.

The Role of Education Technology Providers in Ethical AI Adoption

Shifting the discussion to institutional responsibility, Ian noted that education technology companies must ensure AI does not begin to lead educational practice. While new capabilities may appear compelling, he stressed that decision-making should remain educator-led, with tools designed to support teaching rather than dictate it.

Ian highlighted the importance of sustained research, classroom piloting and collaboration with educators and institutions to refine how AI is deployed in practice. He also emphasised the role of providers in sharing what they learn through structured guidance and training, empowering teachers and organisations to build confidence, develop informed approaches and navigate the broader shift AI is bringing to language education.

Where Does AI Add the Most Value for Language Teachers?

The benefits of AI depend primarily on what teachers need to achieve. Joanna explained that for planning and administrative work, it can reduce time spent on tasks such as drafting reports or lesson outlines, provided teachers remain attentive to the data they share and treat outputs as a starting point rather than a final version. At the same time, she strongly argued for classroom use, where working with AI alongside students creates opportunities to model critical evaluation, ethical decision-making and responsible use, helping learners understand not just how to use these tools but also how to question them.

Ian reaffirmed that responsibility cannot sit solely with teachers. He added that education technology companies must take an active role in designing safeguards into AI toolkits, using clear interface guidance to discourage inappropriate use and implementing measures that reduce the risk of sensitive data being shared. By embedding these considerations at both the practical and systemic levels, edtech providers can ensure ethical use is built in by design, rather than relying on individual educators to navigate these challenges on their own.

Getting Started with AI in Daily Practice

Nik encouraged teachers to start small and let curiosity guide their first steps, suggesting they focus on a single area, such as planning, feedback or material creation, rather than trying to do everything at once. He advised identifying everyday pain points and using AI as a conversational partner to explore possible approaches. At the same time, Joanna added that teachers should not overcomplicate the process, noting that simple questions and natural interaction are often the most effective way to begin building confidence.

Ethics, Transparency and Authentic Classroom Use

Returning to the ethics question, Ian stressed the importance of preserving the dialogic nature of learning, ensuring that interaction remains a meaningful exchange rather than a one-way output. He explained that TeacherMatic is designed as an educational AI toolkit, with a built-in chat environment and filters that set clear boundaries for what can be shared and generated in a learning context, reducing the risk of inappropriate content or data misuse. 

At an organisational level, Ian highlighted Avallain’s responsibility to underpin this work through ongoing research conducted by a dedicated lab, where academic expertise focuses on ethical frameworks, regulatory developments and the broader implications of AI use, including environmental impact. Together, these layers ensure that safeguards are embedded by design and continuously reviewed as technology evolves.

From a classroom perspective, Pilar examined how authenticity is maintained when AI-generated materials are shaped around real learners. Using the TeacherMatic AI toolkit, she highlighted the use of generators such as ‘Inspiration!’ and ‘Adapt your Content’ to create multiple versions of activities on the same topic. This allows students to work at an appropriate level, feel recognised and engage more confidently, reinforcing that AI-generated materials remain meaningful only when guided by teacher insight and an understanding of learner context.

Assessment, Exam Preparation and the Limits of Automation

Joanna addressed the use of AI in assessment by drawing a clear distinction between formative and summative contexts. For formative assessment, she highlighted the value of AI in generating feedback and action points to support ongoing learning, while emphasising the need for professional judgement. In summative contexts, she noted that although automated scoring can play a role for specific task types, final decisions should remain with the teacher, adding that when working with AI, ‘I will be curious and cautious.’

Building on this, Ian reinforced that generative AI should not be positioned as a decision-maker in summative assessment. He explained that language models form a new understanding each time they evaluate a piece of work and do not draw on the accumulated experience of a trained language teacher. As a result, they can offer multiple, variable interpretations rather than a consistent, auditable evaluation. For summative contexts, he argued, there should always be a role for teacher review and moderation, noting that only rule-based, algorithmic approaches, where assessment criteria are explicitly defined and auditable, may be appropriate for high-stakes decisions.

Looking at day-to-day teaching, Pilar drew on her experience preparing learners for international exams, particularly teenagers who may feel disengaged or under pressure. She explained how the rollout of the TeacherMatic ‘Cambridge Style Exam Prep Generator’ has enabled her to personalise exam-style activities around familiar topics, helping sustain motivation while maintaining relevance. Working in a bilingual setting with learners of varying proficiency, she also described how creating resources on the same content at different levels enabled all students to prepare together while still working at a level that felt appropriate and achievable.

Looking Ahead: Supporting Teachers as AI Tools Evolve

AI toolsets will increasingly become multimodal, enabling teachers to generate audio, images, video and presentations alongside text. Nik noted that this could significantly reduce the time teachers spend searching for suitable media, allowing them to create more stimulating, multimedia-rich lessons and adapt more easily to online or blended learning environments.

Ian expanded on this by placing these developments within a broader roadmap for educational AI. While TeacherMatic already supports the creation of worksheets and lesson plans, he explained that interactive learning experiences are the next step. Drawing on Avallain’s background in interactive content, he outlined how integrating generative capabilities with interactive courseware will enable teachers to deliver more engaging activities and assignments directly in the classroom, rather than treating interactivity as a separate layer.

Joanna emphasised that technology alone is not enough. She stressed the importance of building teacher confidence and critical awareness, encouraging educators to experiment, ask questions and practise with AI tools while remaining alert to hype. Maintaining professional judgement, she argued, means staying attentive to how outputs are generated and preserving a healthy distance between automated suggestions and pedagogical decision-making.

Ethical Adoption as a Shared Responsibility

The value of AI in language education depends on how thoughtfully it is adopted. When pedagogy leads, and professional judgement remains central, AI toolkits, such as TeacherMatic, can empower teachers to manage their workload, design purposeful learning activities and respond more effectively to diverse learner needs.

At the same time, ethical adoption requires shared responsibility. Teachers need space to experiment critically and build confidence, while education technology providers must ensure safeguards, transparency and ongoing research are embedded by design. 

Explore the TeacherMatic Language Teaching Edition

The TeacherMatic Language Teaching Edition is an AI toolkit specifically designed for language educators. Through its purpose-built AI generators, teachers can create activities, support planning, approach assessment and more with greater consistency and control, while reducing time spent on routine tasks.

Next in the Webinar Series

Create Dynamic and Engaging Exam Practice for Your Students

🗓 Thursday, 22nd January
🕛 12:00 – 12:30 GMT | 13:00 – 13:30 CET

The next edition of the Language Teaching Takeoff Webinar Series will welcome Joanna Szoke. A freelance teacher trainer and AI in education specialist, she will open the new year with a practical session focused on exam preparation.

Her first episode will demonstrate the ‘Cambridge Style Exam Prep Generator’ within the TeacherMatic Language Teaching Edition, alongside other generators designed for assessment-focused use. The session will explore how teachers can create engaging exam-style practice, adapt tasks to different learner needs and approach assessment in ways that support confidence and progression.


About Avallain

For more than two decades, Avallain has enabled publishers, institutions and educators to create and deliver world-class digital education products and programmes. Our award-winning solutions include Avallain Author, an AI-powered authoring tool, Avallain Magnet, a peerless LMS with integrated AI, and TeacherMatic, a ready-to-use AI toolkit created for and refined by educators.

Our technology meets the highest standards with accessibility and human-centred design at its core. Through Avallain Intelligence, our framework for the responsible use of AI in education, we empower our clients to unlock AI’s full potential, applied ethically and safely. Avallain is ISO/IEC 27001:2022 and SOC 2 Type 2 certified and a participant in the United Nations Global Compact.

Contact:

Daniel Seuling

VP Client Relations & Marketing

dseuling@avallain.com

When Educators Become AI Designers: Inside Edinburgh’s AI for Teaching Innovation Project

Creating meaningful, impactful AI tools relies on collaboration and real-world testing. The ‘AI for Teaching Innovation’ project at the Edinburgh Futures Institute (University of Edinburgh) applies this principle by involving educators, students and learning technologists in the co-design and classroom testing of AI tools. In this interview, Javier Tejera, Senior Learning Technology and Design Advisor, explains how this approach ensures AI supports authentic learning and teaching.

When Educators Become AI Designers: Inside Edinburgh’s AI for Teaching Innovation Project

An interview with Javier Tejera, Senior Learning Technology and Design Advisor, Edinburgh Futures Institute, University of Edinburgh, conducted by Carles Vidal, MSc in Digital Education, Business Director of Avallain Lab

As part of the Memorandum of Understanding between the Centre for Research in Digital Education (University of Edinburgh) and Avallain AG, signed earlier this year, both institutions are strengthening their collaboration to bridge the worlds of research and industry. The partnership helps the University engage more closely with technological and commercial trends, while supporting Avallain in deepening its research awareness and developing pedagogically rich, ethically grounded learning technologies.

Building on this shared vision, we want to highlight one of the University’s most exciting recent initiatives: the AI for Teaching Innovation project led by Professor Siân Bayne and Javier Tejera from the Edinburgh Futures Institute. Now entering its second phase, the project explores how generative AI can be used to create meaningful, field-specific teaching tools through a co-design process that actively involves professors in defining each app’s scope, refining prompts and, crucially, testing the tools in real educational settings. In its first year, this innovative project delivered ten AI-powered applications across disciplines such as medicine, business administration, law, history and environmental studies.

The results of this initiative are now being presented in different venues and have been unanimously welcomed by educators eager to create meaningful AI tools. At the same time, the project sets an example for edtech companies on how to develop AI apps with genuine value for education.

Interview with Javier Tejera

Javier, at the Edinburgh Futures Institute, you and your team have led the AI for Teaching Innovation project, exploring how generative AI can open new possibilities for teaching and learning. The project supports educators in designing and building AI-driven applications that respond to real pedagogical needs, fostering creative human/machine partnerships and helping academic staff develop confidence and skills in working with AI.

  1. Let’s start with the origin of the project. What prompted the creation of this project, and what key questions or objectives guided your exploration of generative AI’s role in teaching and learning?

When ChatGPT was released, I asked it a couple of questions about a course I was teaching elsewhere. The responses seemed perfectly fine at first sight, but on closer inspection they were, frankly, quite wrong. I thought this had potential as a way to prompt students to think critically about a given text, but that alone wasn’t enough: I also wanted to add a specific style, tone and configuration, or, in other words, to have some degree of control over the AI. It struck me that this could open endless possibilities for creativity in teaching.

This led me to start building small web applications where I could have more control over what the AI generated. As a small pilot, I created a couple of applications that were used in an MBA and an MSc in Heritage here at the University of Edinburgh, and the teachers and students quite liked the experience.  

From this point, we launched the AI for Teaching Innovation project, where we co-create web applications for teaching and learning. Our main idea here is to be creative while moving away from the cycles of hype and disillusionment around AI in education, and to try to explore and understand collectively whether AI might actually help us to teach or not. 

  1. In your view, what makes this project unique compared to other AI-in-education initiatives happening today?

I think it is our commitment to a collaborative, ground-up approach that actively involves teachers, students and learning technologists in the design and implementation process. Many existing products in the AI and education space are developed in isolation, often far removed from the realities of classroom teaching. As someone who works closely with educators on a day-to-day basis, I have a firsthand understanding of their needs and challenges. I can see clearly that many of the current solutions simply do not resonate with teachers or meet their requirements.  

Teachers know very well what will work in their specific contexts. However, it is often difficult (if possible at all) for them to be involved in the design process of educational products. This project tries to bridge that gap. I often see the project as transforming teachers from software consumers into software producers. I think this is the uniqueness of this initiative.

Image courtesy of the AI for Teaching Innovation project, Edinburgh Futures Institute, University of Edinburgh.

  1. You involved lecturers closely from the very beginning. How did their participation in the co-design process shape the direction or outcomes of the project?

Their participation is not just one element of the project; we believe it is its entire foundation. We have a very structured co-design process. It starts with workshops where we reflect on the ‘Big Ideas’ of AI and education while also providing a hands-on space to sketch ideas using UX design activities. From there, the selected ideas move to the next phase, where learning design clinics and discovery meetings polish them before the build. 

What the project team does is facilitate this whole process, but the ideas come from the teachers. When an app is ready, they are its best ambassadors because of the sense of ownership created from the very beginning. 

They come up with ideas we genuinely hadn’t thought about. Now, thanks to them, we have new tools and application patterns that we can adapt to other courses. This participation strongly influences the direction of the project; I would even argue that it is the direction of the project itself. 

  1. The examples you developed, such as simulated stakeholder interviews or clinical case scenarios, seem both creative and practical. Could you share more about the value these tools bring to the classroom and how educators and students have responded to them?

Take the ‘Entrepreneurial Personas’ app, for example, which was co-designed with the Business School and used in an MBA course. It’s designed to help students practise B2C and B2B market research. Students come in with their own business ideas, and the tool helps them challenge those ideas, refine their concepts and discover potential new product features that would be useful for their target market (which they can also customise). 

This was the first application created, and it was fascinating to see that students were obviously learning about business and entrepreneurship, but they were also actively learning about the possibilities and limitations of using AI in a real-world business context. 

It sparked a much richer conversation, covering not only the topics being taught but also the technology itself. We see this ‘dual’ learning (subject-matter expertise and AI) in all the apps. Based on the initial survey data we are gathering, students like the experience. Many report starting out a bit sceptical but ending up quite enjoying it. Along these lines, if there is one common piece of feedback across all the apps, it’s teachers mentioning that students are highly engaged during the learning experience. 

  1. You’ve managed to deliver ten tailor-made, highly specific AI applications in a remarkably short time. Could you tell us more about the development framework and workflow that made this possible?

Speed and flexibility during development are essential given our limited resources, so we decided to build all the project applications in React and keep them highly modular. Each app sits on a set of shared components: conversation modules, feedback areas, saved interactions, scenario branching and so on. This modularity allowed us to rapidly swap features in and out, tailoring each application to what the teachers want. 

Interestingly, all the apps cover very specific use cases, yet we are experiencing a ‘paradox of specificity’: the more closely we tailor each app, the more widely applicable its core patterns become. We also deliberately avoided time-consuming tasks during the build, such as LLM benchmarking, because our competitive advantage isn’t in shaving a few percentage points off model accuracy but in leveraging the learning design and subject-matter expertise provided by the teachers. The real value is the workflow, not the technology: teachers bring domain and teaching expertise, while we bring the scaffolding for rapid iteration and deployment, combined with learning design and a creative but critical approach to AI. 
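The shared-component pattern Javier describes can be pictured as a thin configuration layer over a reusable module set. The sketch below is purely illustrative: all names (`FeatureModule`, `buildApp` and the module labels) are hypothetical and do not reflect the project’s actual codebase.

```typescript
// Illustrative sketch of assembling apps from shared components.
// All identifiers here are hypothetical, not the project's real API.

type FeatureModule =
  | "conversation"       // chat-style interaction with the model
  | "feedback"           // area for model-generated feedback
  | "savedInteractions"  // persist student sessions for later review
  | "scenarioBranching"; // branch the activity on student choices

interface AppConfig {
  name: string;
  systemPrompt: string;     // teacher-authored prompt that constrains the AI
  modules: FeatureModule[]; // shared components swapped in per app
}

// Assemble a course-specific app from the shared component set.
function buildApp(
  name: string,
  systemPrompt: string,
  modules: FeatureModule[],
): AppConfig {
  return { name, systemPrompt, modules };
}

// A new app reuses the same modules with a different teacher-written
// prompt, e.g. a market-research persona activity for an MBA course.
const personas = buildApp(
  "Entrepreneurial Personas",
  "Act as a sceptical target customer for the student's business idea.",
  ["conversation", "feedback", "savedInteractions"],
);
```

The point of the sketch is the division of labour it encodes: the teacher’s contribution lives in the prompt and module selection, while the engineering effort goes into the reusable components rather than per-app builds.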

  1. From a pedagogical perspective, what does the project tell us about the ways AI can meaningfully contribute to improving teaching and learning practices, rather than simply replicating or automating existing ones?

This is a crucial distinction, and it’s at the heart of our project. If you look at the current landscape, most AI-in-education products are built around productivity and efficiency. They’re designed to (supposedly) help teachers grade faster, plan faster, speed up admin tasks and so on. That isn’t really pedagogy; such tools don’t fundamentally change the learning experience. 

We’re moving away from the efficiency-first mindset and asking, ‘How can this technology foster teaching creativity?’ or ‘How can it help us teach differently?’ This is where we think AI gets exciting. It’s not just about replicating a task or making it faster; it’s about exploring forms of active, exploratory, fun, engaging learning. The project shows that when teachers lead the design, they ask for tools that help their students think critically, practise complex skills and engage with the content in a deeper way.

  1. This project seems to challenge the traditional ‘top-down’ model of educational technology development. What are the advantages of an academic-led innovation process?

In the usual procurement model, educators tend to be left out of key decisions and become mere software consumers. Even when teachers are consulted, they rarely get a real say in what gets built or how it works. Choices are limited, and their expertise doesn’t reach the design stage. With this project, we deliberately flipped that situation. Here, educators are brought in from the outset and shape the actual product design. The advantage is obvious: the tools reflect real, lived classroom needs, not generic assumptions. Academic-led innovation means faculty don’t have to settle for solutions built for and by somebody else. They co-create resources that fit their teaching, their students, and their pedagogy and values. This kind of direct involvement brings more enthusiasm and ownership, encourages creative risk-taking and generates ideas that most commercial vendors miss entirely. Ultimately, the result is technology that feels genuinely created for education by educators themselves. 

  1. The AI for Teaching Innovation project has a strong emphasis on educational research. What are the initial results showing? And what other specific research proposals can we expect?

It’s still early, so data collection is only starting now, but the initial feedback is promising. Anecdotally, teachers report that students achieve better results in assessments and feel more comfortable, confident and clear about what is expected of them. But I want to highlight that we are just as interested in the qualitative elements. We’re looking closely at how the experience is perceived by both students and staff; those elements are just as critical as the quantitative data.  

We see this project as a ramp-up for educational innovation and research. It’s not just one single research project; it opens the door to many different angles of research. Because each application is so closely connected to a specific field, we’re seeing the teachers themselves take the lead. They are already showing these ideas at conferences and are planning to write papers for their particular disciplines, applying these pedagogical findings in medicine, business, environmental studies, law, etc. The project’s research output won’t be a single paper from the core team; it will be a collection of discipline-specific studies led by the educators who conceptualise and use the apps. 

  1. Looking ahead, do you imagine this collaborative model of AI app creation evolving in universities and the wider edtech sector?

Indeed, and I think we need more! I certainly believe that teachers have endless ideas for educational products, but they don’t have the space or opportunities to bring them to life. They know what their students actually need. So, risking an over-generalisation here, I think there is a disconnect between teaching contexts and the edtech sector. 

On one side, you have educators with deep subject-matter expertise and on-the-ground pedagogical knowledge; they aren’t developers, nor should they be, as their job is to teach and research within their fields. On the other side, you have the edtech sector, which possesses the technical expertise, development resources and infrastructure to build and scale products, but many of the brilliant, context-specific ideas teachers have don’t reach their design teams.  

So, I truly think that fostering more collaboration can unlock a wealth of creativity that the edtech sector has largely left untapped, and it would be fantastic to move in that direction more seriously as a sector. 

  1. Finally, on a more personal note, what has this experience taught you about the future of human/AI collaboration in education, and what message would you share with educators who are just starting to explore AI in their teaching?

My message to educators would be quite simple: start playing around with AI, but do it critically. It’s easy to be overwhelmed. We are all swimming in a sea of hype, flooded with blanket statements and grand promises about how AI will ‘fix’ or ‘revolutionise’ education. This hype can push people into two camps: uncritical adoption or total rejection. I don’t think either is very helpful. 

For me, being critical is not about rejecting the technology. It means getting your hands dirty and trying things while asking nuanced and difficult questions: Who built this, and what are their values? What is their design context and intended purpose? Does it actually work in my specific context? When they say ‘it works,’ what evidence are they using? 

This critical engagement is how we move forward responsibly. It’s how we, as an educational community, get to shape this technology towards better futures for education, rather than simply accepting the future that is sold to us. So, my advice is to be curious, be cautious and be the one who decides what ‘good’ looks like.



Javier Tejera is a Senior Learning Technology and Design Advisor at the Edinburgh Futures Institute (University of Edinburgh) and also works independently as a digital education consultant. On the one hand, he is interested in innovative, cutting-edge digital technologies for teaching and learning. On the other hand, he is fascinated by low-tech, mobile-first settings and the social contexts in which education operates.

In his role at the University of Edinburgh, he co-leads the AI for Teaching Innovation project together with Professor Siân Bayne. They support and enable teaching innovation through Generative AI by providing course teams with learning design and software development support to build web applications for live teaching.

As a consultant, Javier is currently involved in digital education projects aimed at rural school teachers in Peru and Bolivia. He has previously supported universities in Tanzania, Kenya, Uganda and Nigeria in their transition to digital education.

Javier is a self-taught web developer and holds a BSc (Hons) in Psychopedagogy and an MA in International Development from the University of Santiago de Compostela (Spain), as well as an MSc in Digital Education with Distinction from the University of Edinburgh.

Email: Javier.Tejera@ed.ac.uk


About the CRDE at the University of Edinburgh

The Centre for Research in Digital Education is part of the University of Edinburgh, based in the Edinburgh Futures Institute and the Moray House School of Education and Sport. It undertakes research, teaching and knowledge exchange in areas relating to digital education, including policy, practice, artificial intelligence and education futures.

We work with many partner universities as well as policymakers, the cultural heritage sector, schools and other public and private sector organisations. Our partners value us for our critical approach to learning, teaching and technology in formal and informal education, and for the ways in which we combine our research with world-leading practice in digital education.

Find out more at de.ed.ac.uk

