As AI systems become more ubiquitous, policymakers, technology companies, publishers, educators and students are called to play essential roles in how these tools are developed and used. In this piece, Dr Helen Beetham explores the risks AI poses to learning, expertise, creativity and educational integrity, challenging current assumptions about AI in education and arguing for the protection of what we understand learning to be.
AI Can Fail You, and You Need to Understand How That Can Happen
An interview with Dr Helen Beetham, lecturer, researcher and consultant in digital education, on criticality and AI, conducted by Carles Vidal, MSc in Digital Education, Business Director of Avallain Lab
The following interview was initially planned to discuss the topic of critical thinking and GenAI in education, drawing on the report Avallain published last June, ‘From the Ground Up’. That text proposes 12 controls for safer, more ethical use of AI in education, and one of its core guidelines is the idea of embedding critical thinking both in the design of these tools and in teaching practices.
To explore these ideas further, we spoke with Dr Helen Beetham, a leading educational researcher and consultant whose current work focuses on criticality and AI. Right from the start, our conversation went beyond critical thinking as a product design strategy and a necessary learning skill to encompass a broader perspective on criticality.
In the interview below, Helen unpacks her views on AI and the risks its adoption poses to societies and educational systems in general, and to educators and students in particular. She points out the problematic nature of foundation models and the agendas driving them, and suggests a range of alternative policies and practices for managing these risks.
For those looking for a silver lining: as Helen says, this moment is a great opportunity to think about technology and what we want from it. That is why this conversation is so timely; only by understanding the complexities at stake can we address them and ensure our technologies continue to deliver real value for publishers and educators.
Interview Quick Links:
- Why is it important that all education stakeholders have a critical stance on AI?
- Can GenAI have real transformative educational potential?
- Can AI models be improved to generate rich, adaptive educational content?
- Why might GenAI be counterproductive for learners without foundational knowledge?
- Can critical thinking help us develop GenAI tools that reduce risk and foster reflection?
- Is it possible to generate tools that prioritise the learning process over the finished product?
- What is the future for GenAI in education, and what should we be ready to challenge?
Interview with Dr Helen Beetham
Helen, given your area of research, we would like to address the importance of criticality and critical thinking in relation to GenAI tools, particularly the main risks the educational community faces, how to address them and the opportunities you see in these technologies.
1. Why is it important that the different actors in the educational community develop a critical stance towards AI systems?
First, I’m glad you identify that there are different actors with different powers to act.
Teachers, students, school and university leaders, AI developers and the foundation companies all have different responsibilities, and I wouldn’t expect the same kinds of criticality to apply. For teachers and learners, there are reasons to be critical that concern the learning process, and there are reasons to be critical that concern the systems we depend on to deliver education.
At the level of learning, paper after research paper has shown that people who use generative AI for significant tasks – reading, writing, coding, design – are not learning to do those tasks, or not in the ways they have previously been done. Retention is poor. Subsequent performance, for example, under exam conditions, is poor. Even expert skills are degraded through persistent use of AI. This is not at all surprising. We know that learning to read and write rewires the brain, and literacy is not a one-and-done skill; it’s something we continue to develop, or that can atrophy if we stop developing it. Arguably, the whole purpose of school is to develop people who can participate in the literate practices that societies value, and university is about developing more specialist literacies such as scientific, legal, technical and so on. When generative AI is used for those tasks, the relevant development of the brain, the understanding, the practice and even the identity is not going to happen. Something else might develop, such as a facility with the AI interface. So we need to look critically at that trade-off.
Another reason to be critical is the nature of the models these technologies rely on. Most accounts of ‘critical’ AI use focus on the outputs, especially the inaccuracies, biases and errors. You can mitigate those issues with post-training data labour, but fundamentally, the data model is not a world model, not even a reliable model of its training data. The errors are not going to go away. So ‘checking the outputs’ seems like an important critical response. But what does that mean? It can only mean checking against other information systems. And what happens when those other information systems are saturated with AI-generated content? The information/media literacy movement encourages critical questions, but they mainly concern people and their motives: who authored this, when, and why? Who is disseminating it? What interests are being served, with what designs on your opinions, behaviour and personal data? None of these questions can be asked of AI outputs, or really of any information in AI-saturated systems. ‘Checking the outputs’ of AI requires us to completely reassess what is in circulation as information: what are its sources, authorities and meaning-making processes, and what systems have been involved? It’s not a simple matter of technique.
More concerning to me than the errors in outputs are the effects of stylistic and semantic flattening. Inference tends towards the middle. Everyone starts to sound and read the same. There are a few people, experts and creatives in their fields, who are using generativity to push the boundaries of their own practice – and good luck to them. But they have not developed in that practice by using generative AI. For learners who do not even know the range of responses that are possible, let alone how to evaluate the outlying and the innovative, the use of AI will always tend towards the most standard response. In minority cultural and intellectual fields, the stereotyping effects are even more pronounced. It’s incredibly boring and demotivating, for teachers and students alike, to have rich learning materials reduced to five bullet points, and for those bullet points to be expanded again into five paragraphs of entirely predictable student text. We keep asking: what do you think? What in all this material speaks to you? What do you care about? The whole point of education is to help people find the answers to these questions for themselves. I find learners increasingly reluctant to do that, because now there is always a safe answer. Of course, data models can describe different perspectives, provided these are already described in the training corpus. What they can’t do is help learners to develop their own perspective. So a critical stance might be one that asks how this kind of development, if we value it, can still take place.
A final reason to be critical is the gap between what the AI industry promised for education and what we have actually got. I studied AI in the 1980s, and I’ve worked in education technology, broadly speaking, since the 1990s, so I’ve been around a few hype cycles, but nothing quite like this one. People in education are not enamoured of AI or of the quality of its outputs. They are increasingly concerned about its downsides, especially for learners’ development. But the idea that if you don’t love AI, then you are the problem, and that if you don’t inject AI into every aspect of students’ experience, you are failing them – these ideas are pervasive. The GenAI challenge becomes a crisis if educators collude in the magical thinking and the myth-making. When we trust our disciplinary methods to help us understand GenAI, we can be critical in a whole variety of ways. And when GenAI refuses to be understood, because it entails black-box architectures and deliberately obscure commercial practices, this is not a mystery to bow down before but a huge risk to truth-telling and responsible thought.
2. In your opinion, can GenAI tools and AI-based technologies have real transformative educational potential?
If you mean ‘can generative AI be used for positive educational ends?’, then of course, in the hands of a dedicated educator, any material can have learning value. I’ve been in classes where generative AI benchmarks are critiqued, where students research model biases and carbon footprints, and where they do journalistic work on AI companies. I’ve had students query system prompts, learn basic ML algorithms, and try a variety of creative responses to generated outputs. Educators are constantly experimenting and constantly learning. My experience is that the wider the variety of perspectives students have for understanding generative AI, the better chance they have of making a critical response. And it’s possible to support many different perspectives in the classroom.
If you mean ‘can students’ use of generative AI in their own independent study time be good for their learning?’, then I am more sceptical. I could break down the evidence for you, but what I would observe from my own experience is that the most engaged students, the ones who tend to be most thoughtful with generative AI in their learning process, are also the ones who are most concerned about losing skills and critical perspective. They are the students we should be engaging to help us build AI-resilient assessments and learning spaces. But the fact that some students are navigating generative AI thoughtfully does not mean we can fulfil our responsibility to the rest by showing them good examples or preaching about ‘integrity’. Like fast food, ‘fast thought’ is irresistible: it offers compulsive and addictive behaviours in place of nourishment. According to The Brookings Institution, the costs of those behaviours for individual learners are already ‘daunting’.
And lastly, if you mean ‘can data-based methods produce efficiencies in education systems?’, I’m sure those can be found. But the question of what we are accelerating, and why, is particularly pressing in education. We don’t ask students to produce assignments because we want more content in the world, but because we want students to learn. Ideally, we don’t ask academics to publish because we want more content in the world either, but because we want more meaningful research. Unfortunately, though, our systems have been set up to reward those proxies for thinking and for research, and GenAI enters those reward systems with what is, again, an irresistible promise: to produce those proxies faster and get to the rewards faster.
People are becoming very aware of the costs. The GenAI moment is an opportunity to think critically about technology and what we want from it. I see big shifts in attitudes to social media, for example, and I think that’s an example of transformational change being driven by reactions to AI. But none of these shifts happens through interactions with GenAI alone, and there is now evidence that the more time people spend with chatbots and avatars, the less critical they are inclined to be.
3. Can models be improved to generate rich educational content that can adapt to diverse contexts and learning needs?
None of the agents I have seen in use or in development has been very impressive at adapting to learners’ contexts and needs. I imagine that retraining would have to go beyond extended prompting or RAG injection to get past the sycophancy and information-giving biases that are baked into the foundation models. More fundamentally, learners do not know what they do not know, or why it might be important, so learner-to-chatbot interactions on their own are very unlikely to initiate deep learning. The other approach is to involve teachers in defining the interactions they want learners to have. But if teachers can define, in clear enough terms, the learning goals and needs of their class, and if they can check, refine and contextualise what comes out of an AI application, I suggest they can probably adapt content and activities in more conventional ways. Many of the teacher-centred applications I’ve seen amount to good practice in lesson planning. But teachers still have to deliver on their plans. If an AI-designed activity isn’t going as expected in class, do you fire up the AI and have another go? Teachers need to understand their materials if they are going to teach adaptively. So do we hand everything over – lesson plans, curricula, assessment rubrics, teaching materials, student feedback, and all the agency and skills and adaptability that go with really owning those things – to AI companies in return for fractional gains in lesson planning?
On your question about rich content: an activity I’ve shared with students for three years now is to generate images for educational use, based on a prompt from the OpenAI teaching materials. The results are always terrible, especially when compared with the resources you can find with an OER or Wikimedia search. This year, some students pushed back against the generative part of the activity on sustainability grounds, which has been an interesting development. But the positive part of the activity is that it makes students develop pedagogic judgement. The difference between the AI images and those chosen or designed by educators is a space for real learning. I’ve also had the experience of students and colleagues uploading my own materials and generating podcasts, quizzes and gamified versions, and in one case, an app. The results were unrecognisable to me; so much design thinking and conceptual nuance had been lost. But that’s not really the point. By now, every learner who wants one has an AI bot they can use to version content for themselves, whether they want it simplified, gamified, mind-mapped, transposed or translated, or given an anime makeover. There are accessibility gains here that I don’t want to trivialise. But I’m not sure where it leaves learning design. A bad outcome would be for education to disinvest in universal design and accessibility support because ‘learners are doing it with AI’. A worse outcome – just taking this to its logical conclusion – would be for learning design itself to disappear as a profession and as a shared language. If every learner is their own unique microcosm of intellectual needs and sensory preferences, and if those needs can be met by ‘designs’ or ‘experiences’ generated on demand from the soup of educational content, what really is the role of the designer? Or of the teacher, for that matter?
Behind the apparently empirical question ‘can GenAI applications provide effective learning support?’ are questions that might be a bit more uncomfortable. Such as: should GenAI interactions replace teaching interactions? Should teaching assistants and student support professionals become chatbots, or rather become the data workers who make the chatbots work? Perhaps most insidiously of all, should GenAI replace other students in the learning process? Alexandr Wang, head of Meta’s superintelligence lab, has said he will wait to have kids until they can connect straight into the network mind and bypass all the messy interactions of school. Learners will kick back against further loss of contact with teachers, I suspect, but they are already voting with their feet for chatbot companions over peer learners. And that is really troubling. Social constructivist theory tells us it’s better to learn alongside other learners who are at a similar learning stage to us, who make similar mistakes and discoveries, and who can be resources for our learning in their differences from us, despite the frictions and frustrations involved (which also teach us something). Yet we seem to be encouraging learners in the delusion that the perfect other is always available, and it is a chatbot.
4. In your work, you refer to the AI expertise paradox: to use AI effectively and safely, you need to be an expert who can benefit from shortcuts and spot inaccuracies. However, the same cannot be said for students who are still learning. Can you please elaborate on why GenAI might be counterproductive for those who have not yet built that foundational knowledge?
I’m not sure expert use is necessarily effective and safe. There have been several longitudinal studies now, from MIT, Bloomberg and Stanford, that have found experts are not as productive with AI as they think they are. They are certainly not as productive as their bosses think they should be. It turns out ‘checking for inaccuracies’ and ameliorating losses of quality and context are non-trivial. Experts are slower than novices when they apply AI because they are aware of these issues and have to make judgments about them. What parts of a task might lend themselves to AI efficiencies? How best to realise them? What are the likely impacts on quality, safety and professional values? Experts are finding uses, but only with significant trade-offs, some of which may be visible only at the organisational or sector level.
From an educational point of view, yes, the worry is that novices never get to develop the judgement and expertise that are needed to work effectively with AI. Hubert Dreyfus’ original critique of the AI project, back in the 1960s and 1970s, was that it misunderstood the nature of expertise. Educators understand expertise better – for example, that it develops through iterative practice. That’s the point of learning spaces where novices can practise without high costs of failure, where they can get expert feedback, and develop fluency and judgement. All of those developmental processes are lost when learners use generative AI to produce what looks like expertise. At least, it looks like expertise to students, and this is another aspect of the paradox. It looks less like expertise to assessors. But we are working in a system that has previously taken that kind of evidence at face value, and where the time educators have to engage with students’ development has been pared away.
What the expertise paradox allows me to say to students is: I will not fail you for using AI. But AI can fail you, and you need to understand how that can happen.
5. You have also identified ‘Cognitive Dependence’, ‘Deskilling’, ‘Reduced Learning’ and ‘Less Personal Agency’ as potential effects for users of GenAI tools, particularly among non-experts. To mitigate these risks, do you believe that critical thinking can be effectively promoted by developing GenAI tools that foster reflection and scepticism, present alternative perspectives, and highlight that students should not take generated output at face value?
The only remedy for students who avoid practice and engagement is to practise and engage, and to have support in place to do so. Perhaps some aspects of critical thinking might be supported with GenAI tools; I don’t rule it out, particularly when it comes to the critique of AI. But criticality and scepticism are not simple ‘techniques’. I know there has been a movement in media literacy, for example, to ‘inoculate’ young people against deepfakes, which sounds like a nice, simple shot in the arm. But what this inoculation involves is really a series of engagements. Typically, learners will spend time practising and applying diagnostic methods. Then they will create their own media pieces or ‘memes’. And finally, they will strategise to spread their memes, to make them as persuasive and pervasive as possible. Basically, they learn to make clickbait so they can learn not to be so easily baited. There are analogous activities you can do with GenAI, some of which engage learners more deeply with the data structure than others. The ‘inoculation’ task might be to prompt for a particular persona and interaction style, for example, and reflect on the results. It might be vibe coding a simple app and exploring the quality of the code. I think these are productive approaches, but they are not at all simple to implement.
A common remedy proposed for these problems is to teach students ‘good’ prompting strategies. When this involves using a particular template or standard, it seems to me counterproductive. It’s unlikely learners will surface anything interesting or discover the limitations of inference for themselves if prompting is reduced to a fill-the-gaps or cut-and-paste exercise. In fact, a lot of prompt crafting now takes place in the application layer, where interfaces may be even more frictionless than the familiar prompt screen. Larry Page looked forward to an AI search tool that would ‘understand exactly what you wanted before you knew yourself’. For the foundation companies, the best prompt is one that entirely pre-empts the user’s needs. One of the critical exercises I do with students is to have them review chat logs and ask whether they think the user is prompting the chatbot, or the chatbot is prompting the user. But chatbot logs these days are often hidden in the application layer.
The alternative you suggest is that inference is made more effortful, for example, by defining the chatbot persona as a reflective mentor or a Socratic partner. I think this identifies the problem correctly, and it’s one I’ve discussed a lot with colleagues: how to ‘interrupt’ the straight line to the solution and introduce a more developmental path. What I’m not sure about is the possibility of installing these solutions as dialogic techniques within a chatbot interface. As I said in a previous answer, I have not yet found an AI agent that is effective in this mode. Either the moves are generic and stereotyped, or they come from examples in the training data. If there is a generic rule for ‘scepticism’, it can be reproduced as a list of reflective questions. And for a topic-based approach to ‘challenging a student’s logic’, generative AI should not, in my view, be the first resort either. Students learn more from co-teaching other students than from interacting with a chatbot, because they are also doing the thinking involved in following the other person’s logic, noticing assumptions and blind spots, and recognising that there are different perspectives on the same problem. Playing both sides is far more developmental.
The other issue is that learners turn to GenAI for complex reasons. Anxiety is often a big part of it, or the fear of missing out, or a lack of academic confidence. Dealing with these issues requires a deep engagement with learners. Learners need to understand why we ask them to do things (that they might do with GenAI), and we need to explain that better, but they also need good reasons for responding to those tasks in ways that feel challenging and uncertain. By the time learners reach the end of an undergraduate degree in the UK, they might have spent 18 years in an education system that values and measures outcomes. Valuing things that get in the way of the outcome is going to take more than individual encouragement or exhortation. It will require profound change.
Finally, much of the investment in GenAI has gone into interface effects that tend to undermine critical thinking. You can now talk to, vibe with and even engage with data models through wearables and emotion sensors. The effect is to undermine skills of mediation, or what used to be called literacy: conscious and effortful practices of engaging with other people’s thinking, and developing our own. The feeling of effortlessness and immediacy is not just about saving time. It is emotionally beguiling. You need never misunderstand or be misunderstood again.
Cognitive offloading is natural to human beings – it is one definition of culture – but it is never entirely safe. It creates vulnerabilities and dependencies. For example, we depend on shared signifying systems and tools, on cultural records, and on whoever controls them. I agree with Musk on this at least: ‘safeguards’ are always ideological. It’s just that data models are inherently ideological, from their training data through the judgements of data workers to the contents of prompts. There is no way of steering AI models that is neutral. But unlike the cultural forms we have lived with for tens and hundreds of years, how they are ideological is obscure, and how they influence us is unfamiliar. There may come a time when we have no choice but to use GenAI if we want to participate in cultural and intellectual experiences, at least if we want to participate digitally. At that point, it might make sense to align oneself with particular architectures and not with others: Grok culture or Claude culture. But until then, and in the hope it never comes, the ‘safe’ option I suggest is to keep practising alternative skills, interacting with other archives besides the data archive, maintaining and valuing other media, and nurturing our own embodied memories. Then, at least, there are alternative vantage points on the obscure ideologies and behavioural effects of the data model.
6. You have pointed out that GenAI tools tend to place the value of education on the final output rather than the process. As you put it, when students are asked to create content, it is not because the world needs more content, but because the act of creating is how they acquire knowledge and skills. Do you think it is possible to generate tools that prioritise the learning process over the finished product?
We already have tools and features of tools that support a focus on the learning process. Annotation is one that can be used both for solo reflection and for shared feedback. Document sharing, design spaces and coding environments, e-portfolios, all these are useful too. But you can build a process-oriented learning environment in just about any platform, or from simple open-source tools. What matters is the pedagogic intention. If we want education to be centred on processes of reasoning, and personal development, and on the specialist practices of each subject, we have to invest attention in these things and not in proxies for them, such as grammatical sentences or test scores. As soon as we standardise what we are looking for, that standard can become a data source or a prompt for GenAI. But moving away from standardised assessments and standard proxies for learning will require the kind of transformation I mentioned at the start. It would mean learners working on authentic challenges that arise from a real context – a context that provides its own standard, its own intrinsic feedback on students’ solutions. That might be the school or university and its community, or a placement setting, or a project with stakeholders. More learning would have to happen in shared, live, embodied spaces, and that learning would have to be high-quality if learners are to invest in it rather than in the electronic angel on their shoulder. Educators would have to enable and assess students’ work on its own terms, letting go of many proxies we have relied on before. This would have implications for teaching time and attention, and the places that learning happens, and therefore its costs.
A very small part of that is about the tools that are used. There may be some technical solutions for how learning relationships are mediated and how learning environments are made. I would stick my neck out and say we have all the functionality we need; we just need it to be more modular, more open, cheaper, and more flexible. But there really is no way to automate attention, witnessing, engagement, and care. This is what students need, and when they say they want a chatbot, what I hear is that they aren’t getting enough attention and care from other sources. We have to be very careful about contradicting students’ experiences, since that is a large part of what we are working with, but I do think we can challenge what they say they want from AI. That will involve confronting their fears. They are afraid of using AI. They are afraid of not using AI. Asking students to raise their eyes from these anxieties and think about what kind of learning and working futures they want is always an intense experience, but it is powerful. These deep engagements are what we need, in my view.
It is an irony that the use of generative AI by students has produced a workload crisis for teachers, at least those who are committed to student learning. It is exceptionally hard to read through an AI-generated or partially generated assignment to discern the work students have done, the process they have undertaken, and thus what guidance they need. If you don’t care, of course, it’s a breeze. Set your AI agent onto marking theirs. But if you do care, you know that the solution is to make the process itself the focus. We need teachers and students to build those processes together, and the learning environments that can support them. That would be a lot more exciting to me than asking what workflows GenAI can come up with or what new AI-integrated mega-platforms we need.
7. What is the future for GenAI in education, and what should we be ready to challenge?
I did my fair share of foreseeing early on, and much of what I said then now seems like common sense: GenAI hasn’t kept improving with scale, productivity increases have been hard to demonstrate, and the promised educational benefits continue to be elusive.
I’m not an expert on the interface between the AI industry and edtech, in terms of the business models and the distribution of income. It seems to me that a lot of what passes for AI in education is rather shallow customisation in the application layer. At the moment, the foundation companies are very keen to secure subscriptions and use cases from education organisations, and of course, educational content and data. Bespoke educational applications may be a good way to forge those relationships, for now. But the long-term vision for these companies is not AI-enabled education; it’s AI instead of education. It’s something much more like Neuralink, or Josh Dahn’s Synthesis, or Marc Andreessen’s vision of every child having their own dedicated AI tutor from birth, bypassing the need to engage with a shared learning culture. And while I personally think that is a fantasy, a dystopia, I do think the foundation companies will want to monetise everything they have learned from education and edtech in the medium term. To mine education, if you like, in order to undermine education.
As I said at the start, different actors in the educational space have different powers to act. I would love to see university leaders taking a more critical and ethically grounded position on generative AI. I’d like to see ministries and departments of education employing people who think deeply about these issues, rather than AI company secondments or industry-adjacent think tanks. I think that would make a considerable difference to how GenAI is implemented, or not implemented, in educational contexts.
I share the anxieties educators feel around generative AI, anxieties which make us grasp at terms like ‘implementation’ so we can feel more in control. Generative AI is not being implemented. It was, as its proponents like to say, ‘unleashed’ on users, knowledge systems and cultural archives, every bit as irresponsibly as that sounds. Its developers have quickly become the richest companies in the world, dictating terms to governments, regulators and publishers. Every choice we make about AI in education is made downstream of these events, and in the face of this power. But choices are still possible. The most important, for me, is to tell the truth about GenAI, without hype or magical thinking, trusting our pedagogic and disciplinary methods to support us in that. And when those methods don’t help, to be truthful about our uncertainty.
We should be ready to challenge developments that extend the black box of not knowing and not being accountable further into our classroom practices. That means, I think, insisting that alternative knowledge archives and knowledge practices remain available – ones that do not pass through generative data architectures – to protect what we understand student learning to be. Doing this demands technical ingenuity as well as intellectual commitment, and both are valuable skills in any foreseeable future. Just as we provide alternatives to social media in school and university platforms, where different rules and norms apply, I think we can do the same in relation to GenAI.
About Helen Beetham

Dr Helen Beetham is an experienced consultant, researcher and educator working in the field of digital education in the university sector. Her publications include ‘Rethinking Pedagogy for a Digital Age’ (Routledge, 2006, 2010 and 2019), ‘Rethinking Learning for a Digital Age’ and an edited special issue of ‘Learning, Media and Technology’ (2022). Her current research centres on critical pedagogies of technology and subject specialist pedagogies, in the context of new challenges to critical thinking and humanist epistemology.
For two decades, Helen has advised global universities and international bodies on their digital education strategies, producing influential horizon scanning and research reports for Jisc. Her Digital Capabilities framework is a standard across UK Higher and Health Education, and she contributed to the European Union’s DigCompEdu framework, which incorporates AI and data competencies. An experienced educator, she has developed and taught master’s courses in education and learning design, and currently documents her research via her Substack, Imperfect Offerings.
About Avallain
For more than two decades, Avallain has enabled publishers, institutions and educators to create and deliver world-class digital education products and programmes. Our award-winning solutions include Avallain Author, an AI-powered authoring tool, Avallain Magnet, a peerless LMS with integrated AI, and TeacherMatic, a ready-to-use AI toolkit created for and refined by educators.
Our technology meets the highest standards with accessibility and human-centred design at its core. Through Avallain Intelligence, our framework for the responsible use of AI in education, we empower our clients to unlock AI’s full potential, applied ethically and safely. Avallain is ISO/IEC 27001:2022 and SOC 2 Type 2 certified and a participant in the United Nations Global Compact.
Contact:
Daniel Seuling
VP Client Relations & Marketing