AI in HE: Impressions of Artificial Intelligence in UK Higher Education

Shri Footring and Prof. John Traxler share insights from a series of informal conversations with university professionals who lead and support learning technology initiatives in UK higher education. They discuss how AI is affecting academic practice, from enhancing teaching and learning to addressing concerns about academic integrity and ethics.

Authors: Shri Footring, Governor at Writtle College, ARU, and Prof. John Traxler, UNESCO Chair, Commonwealth of Learning Chair and Academic Director of the Avallain Lab.

Introduction

St. Gallen, March 28, 2025 – Understanding the attitudes and experiences of university staff towards the use of generative AI (GenAI) in their institutions is an important and evolving issue. As this technology spreads and develops, there is curiosity about the new possibilities it might offer, as well as widespread concerns about its use and misuse amongst both staff and students.

We conducted a series of informal conversations to explore the experiences of people whose professional roles involved leading and supporting the use of learning technologies in their institutions. We discussed the GenAI tools and technologies they were using, along with their concerns and how they were addressing them. We also explored their thoughts and ideas about the future. 

The individuals we spoke with were typically working on one or more innovative projects and actively exploring what GenAI could offer in their specific contexts. They were keenly aware of the issues and were involved in writing policies, guidance and training for their organisations. 

This was not a comprehensive survey or investigation but an opportunity to understand what really mattered to our respondents. While the concerns raised were neither universal across the university sector nor exclusive to it, the conversations may give some insights into where any follow-up research should focus.

We are grateful for this opportunity to hear from university professionals. Our interview protocol encouraged participants to discuss the aspects of GenAI that interested them the most. This led to some fascinating, in-depth conversations about successful projects. We gratefully acknowledge their thoughts, which were shared with us on the condition of anonymity. Consequently, there are no direct quotes, only summaries and paraphrases.

The Developing Use of GenAI in Higher Education

The ‘panic’ concerning GenAI that began in mid-2023 has now passed, and universities typically have increasingly mature policies, guidelines and training programmes for students and staff. Some participants referred to a deceptive calm and a general acceptance of GenAI, while one described the impossibility of trying to outrun it.

There was widespread recognition that the capability to understand and use AI technologies effectively was becoming an important graduate attribute in the eyes of future employers. Some participants mentioned that faculties within their institutions, such as healthcare and medicine, were already making extensive use of industry-specific AI technologies.

A few highlighted that teaching staff were experimenting with the new possibilities of GenAI and building them into the curriculum. 

Respondents were divided about the effectiveness of specialist GenAI front ends for teaching staff. Some thought it important that teaching staff develop the skills to do their own prompt engineering. Others welcomed the opportunity to use these tools for, say, quickly and safely generating case studies, supporting student brainstorming or producing podcasts.

All agreed that developing staff awareness, skills and capabilities was an important priority. 

Copilot was the most commonly mentioned tool, presumably because Microsoft was the institutional default and Copilot was already installed, supported and running for staff and students, though only a handful had adopted Microsoft 365 Copilot. Others, such as Gemini, ChatGPT, Claude, Studiosity, Grammarly and NotebookLM, were also used by individual lecturers. However, these were rarely adopted officially at the institutional level, and the burden of procurement procedures often deterred individual lecturers from taking up non-standard technologies or systems.

Opportunities and Challenges

Academic integrity was the most pressing issue. A couple of our participants reported a marked increase in incidents of academic misconduct, driven largely by cases specifically involving AI. One individual said this was a significant area requiring further attention, and that it had become a recurring topic at learning, teaching and assessment committees, as well as at assessment boards.

The national and professional media have recently reported alarming levels of GenAI use by students. Participants noted, however, that GenAI is now pervasive, ubiquitous and unobtrusive, and is not necessarily used for what would constitute outright plagiarism. In today’s litigious climate, plagiarism and academic misconduct have legal, and thus financial, implications, especially if students are expelled from their courses or lose their grades. Regulators are struggling to respond to these less direct uses.

Participants observed that it will be increasingly difficult to define what it means to ‘use AI’ and that there is a need to further characterise what is now meant by ‘academic misconduct’. This echoes wider concerns, in the arts and entertainment as well as academia, about what originality, creativity and intellectual property (IP) now mean. The focus on the technical issues of assessment may come at the expense of considering how university education should evolve to ensure graduates thrive not only in their careers but throughout their lives in a world already increasingly saturated with AI. One participant worried that universities would be failing their students if they graduated with no AI skills, leaving them at a huge disadvantage in the workplace and the job market.

A few participants described AI as becoming ubiquitous, highlighting the need for extensive work on raising awareness about the content it produces. This includes ensuring colleagues understand the dangers of AI, its ethical aspects and how it can and cannot be used.

What might become more problematic is the line between using AI and not using AI, especially as, over time, colleagues will not even be aware that they are using it. With academics and students already relying on tools like Grammarly, and with AI support now a default in search engines and word processors, this is probably already the case.

The environmental impact was a concern for some participants, who noted the tension between the impetus to innovate and experiment with AI and the possible or reported environmental damage. They expressed hope that some environmentally friendly good practices might emerge while acknowledging the ethical challenges involved.

This could be viewed in the wider context of some universities’ ethos, with one participant characterising them as inherently conservative, despite their self-image as dynamic centres of learning focused on research and development. In this view, pedagogic design will not change, stuck not in the 20th century but in the 19th. With AI, this is an unfortunate combination, since another participant spoke of the impossibility of trying to outrun AI while not knowing how to embrace it or how to avoid it.

Hand-written assignments and sit-down exams are hardly forward-looking solutions. Conversely, there was mention of AI-assisted marking as a way to manage the marking and feedback load of large classes. This is part of the grassroots appeal of AI in education: reducing the often onerous teaching loads imposed on lecturers and perhaps allowing them greater focus on pastoral care or research. However, the possible consequences for job security were also stressed.

Recruitment also came up. A couple of participants emphasised the difficulty of distinguishing between applicants when more than half of the cover letters seemed AI-generated. One individual explained that they had resorted to using agencies because of problems with accurate shortlisting.

The Future

Our participants thought it would become increasingly difficult to define what it means to ‘use AI’, and this is probably already the case. Conversely, there will presumably be less focus on individual tools as familiarity spreads. This was echoed in a desire to develop lecturers’ individual AI capabilities rather than rely on ready-made tools. There was also support for in-house teams with significant prompt-engineering expertise to provide specific, tailored solutions for their lecturers.

Many participants highlighted the need to change mindsets and explore new pedagogical possibilities. Leading researchers are already starting to describe a pedagogy called ‘generativism’, a successor to the widely espoused but less frequently enacted constructivism and social constructivism. Whether this will gain much traction amongst a largely conservative pedagogic rank-and-file is a moot point.

Some universities embrace a strongly independent and autonomous rhetoric, demonstrating a flair for innovation and individuality. Consequently, they may develop their own AI tools to enhance pedagogy, much like their predecessors did with the virtual learning environment (VLE) two decades ago. This approach could alleviate concerns regarding data security by maintaining university IP and student data within manageable boundaries.

What is perhaps surprising in the conversations is the absence of concern about the precarious financial state of most UK universities and how this may affect the roles of AI in teaching, learning, assessment and management. Given the conservatism and risk aversion mentioned earlier, alongside constant calls for greater financial efficiency, drives to recruit overseas students and ongoing staffing reductions, the roles of AI could be complex, challenging and problematic.


Avallain’s Commitment to Responsible AI

At Avallain, we recognise both the opportunities and the challenges AI presents in education. Through Avallain Intelligence, our initiative for responsible AI integration, we work to ensure that AI enhances productivity while upholding the principles of ethics and safety. AI should serve as a tool to support educators, not replace them, preserving the human element at the heart of learning.

As AI becomes more embedded in higher education, institutions face difficult questions about its role in pedagogy, assessment and student support. Our work has shown that thoughtful, context-aware AI solutions can help navigate these complexities. It is now more crucial than ever for educators to possess the appropriate expertise to ensure AI transparency and maintain institutional control over data and intellectual property. 

By dedicating ourselves to responsible AI adoption and nurturing relationships with institutions and educational experts, we achieve a balance between AI’s potential and the preservation of academic integrity and ethical responsibility.


About Avallain

At Avallain, we are on a mission to reshape the future of education through technology. We create customisable digital education solutions that empower educators and engage learners around the world. With a focus on accessibility and user-centred design, powered by AI and cutting-edge technology, we strive to make education engaging, effective and inclusive.

Find out more at avallain.com