EU’s Guidelines for Trustworthy AI: A Reliable Framework for Edtech Companies

This post is the first in a series highlighting the most relevant recommendations and regulations on ethics and AI systems produced by international institutions and educational agencies worldwide. Our goal is to provide up-to-date, actionable insights to all stakeholders in the field, including designers, developers and users.

A look into the EU’s ethical recommendations and their possible adaptation to Gen-AI-based educational content creation services.

Author: Carles Vidal, Business Director of the Avallain Lab

Since the release of OpenAI’s ChatGPT, powered by GPT-3.5, in November 2022, the edtech sector has focused its efforts on delivering products and services that leverage the creative potential of large language models (LLMs) to offer personalised and localised learning content to users.

LLMs have prompted the educational content industry to reassess traditional editorial processes and have transformed the way teachers and professors plan, create and distribute classroom content across schools and universities.

The widespread uptake of generative AI (GenAI) technologies in education calls for ensuring that their design, development and use are grounded in a thorough understanding of the ethical implications at stake, a clear risk analysis and the application of the corresponding mitigation strategies.

We start by discussing the work of the High-Level Expert Group on AI (HLEG on AI), appointed by the European Commission in 2018 to support the implementation of the European strategy on AI by providing policy recommendations on AI-related topics. The “Ethics Guidelines for Trustworthy AI” (2019) and the complementary “Assessment List for Trustworthy AI (ALTAI) for Self-Assessment” (2020) are two non-binding texts that can be read as a single framework.

1. Ethics Guidelines for Trustworthy AI

From an AI practitioner’s point of view, the guidelines and the assessment list for trustworthy AI are strategic tools with which companies can build their own policies to ensure the implementation of ethical AI systems. In this sense, the work of the HLEG on AI is presented as a generalist model that can, and should, be adapted to the context of each specific AI system. Additionally, thanks to its holistic approach, the framework addresses not only the technological requirements of AI systems but also considers all actors and processes involved throughout the entire life cycle of the AI.

As the HLEG on AI states, the guidelines’ “foundational ambition” is the achievement of trustworthy AI, which requires AI systems, actors and processes to be “lawful, ethical, and robust”. That said, the authors explicitly exclude legality from the scope of the document, deferring to the corresponding regulations, and focus on the ethical and robustness dimensions of trustworthy AI systems.

The framework is structured around three main conceptual levels, progressing from more abstract to more concrete. At the top level, defining the foundations of trustworthy AI, four “ethical imperatives” are established, to which all AI systems, actors, and processes must adhere:

  1. Respect for Human Autonomy
  2. Prevention of Harm
  3. Fairness 
  4. Explicability

At a second level, the framework introduces a set of seven key requirements for the realisation of trustworthy AI. The list is neither exhaustive nor presented in a hierarchical order. 

  1. Human Agency and Oversight
  2. Technical Robustness and Safety
  3. Privacy and Data Governance 
  4. Transparency
  5. Diversity, Non-discrimination and Fairness
  6. Societal and Environmental Wellbeing
  7. Accountability

The relevance of these key requirements extends beyond the guidelines themselves: they also inform Recital 27 and, implicitly, Article 1 of the EU AI Act, adopted in 2024.

The guidelines suggest a range of technical and non-technical methods for their implementation (e.g., architectures for trustworthy AI, codes of conduct, standardisation, diversity and inclusive design) that actors can use to fulfil these requirements.

Achieving trustworthy AI is an ongoing and iterative process that requires continuous assessment and adaptation of the methods employed to implement key requirements in dynamic environments.

2. Assessment List for Trustworthy AI

The third level of the framework consists of the “Assessment List for Trustworthy AI” (ALTAI), intended to operationalise the key requirements. It is primarily addressed to developers and deployers of AI systems that directly interact with users.

The ALTAI breaks down the key requirements into more concrete categories and provides a range of self-assessment questions for each, aiming to spark reflection on every aspect. Each actor is left to decide on the corresponding mitigating measures.

For example, the key requirement of Diversity, Non-discrimination and Fairness is divided into three subsections:

1) Avoidance of unfair bias

2) Accessibility and Universal Design 

3) Stakeholder participation

In turn, for Avoidance of Unfair Bias, a series of self-assessment questions is proposed, a sample of which is listed below:

  • Did you establish a strategy or a set of procedures to avoid creating or reinforcing unfair bias in the AI system, both regarding the use of input data and the algorithm design? 
  • Did you consider diversity and representativeness of end-users and/or subjects in the data? 
    • Did you test for specific target groups or problematic use cases? 
    • Did you research and use publicly available, state-of-the-art technical tools to improve your understanding of the data, model and performance? 
    • Did you assess and put in place processes to test and monitor for potential biases during the entire life cycle of the AI system (e.g. biases due to limitations stemming from the composition of the data sets used, such as a lack of diversity or non-representativeness)? 
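Teams that adopt the ALTAI often track it as a living checklist that records each question, its answer and any agreed mitigation. As a purely illustrative sketch (the class and field names below are our own assumptions, not part of the ALTAI), the questions above could be recorded and audited programmatically:

```python
from dataclasses import dataclass, field

@dataclass
class AltaiQuestion:
    """One ALTAI-style self-assessment question (wording from the list above)."""
    text: str
    answered_yes: bool = False
    mitigation: str = ""  # mitigating measure the team decided on, if any

@dataclass
class AltaiRequirement:
    """A key requirement broken down into self-assessment questions."""
    name: str
    questions: list[AltaiQuestion] = field(default_factory=list)

    def open_items(self) -> list[AltaiQuestion]:
        """Questions still lacking a 'yes' answer or a documented mitigation."""
        return [q for q in self.questions if not (q.answered_yes or q.mitigation)]

# Two of the sample questions from the subsection "Avoidance of unfair bias"
bias = AltaiRequirement(
    name="Diversity, Non-discrimination and Fairness / Avoidance of unfair bias",
    questions=[
        AltaiQuestion("Did you establish a strategy or a set of procedures to "
                      "avoid creating or reinforcing unfair bias?"),
        AltaiQuestion("Did you consider diversity and representativeness of "
                      "end-users and/or subjects in the data?",
                      answered_yes=True),
    ],
)
print(len(bias.open_items()))  # prints 1: one question still open
```

In line with the guidelines’ iterative spirit, such a record would be revisited and updated continuously rather than completed once.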

The guidelines also suggest that companies incorporate their assessment processes into a governance mechanism, involving both top management and operations. The text even proposes a governance model, describing roles and responsibilities. 

The assessment list is not intended to be exhaustive and follows a generalist (horizontal) approach. The purpose of the HLEG on AI is to provide a set of questions that help all AI-system actors operationalise the more abstract key requirements, and to encourage them to adapt the assessment list to the specific needs of their sector and continuously update it.

In accordance with this vision, and grounded in the same framework, the European Commission published the “Ethical Guidelines on the Use of AI and Data in Teaching and Learning for Educators” in September 2022. This document is a valuable resource for teachers and educators, helping them reflect on AI and critically assess whether the AI systems they use comply with the Key Requirements for Trustworthy AI.

3. Adapting and Implementing the Guidelines

Having analysed the work of the HLEG on AI, we see it as a framework that companies like Avallain, along with other AI-system deployers, can build upon to create an adapted version that ensures the ethical design, development and use of AI tools for the educational content creation community.

To this end, we support the framework’s recommendation to establish a multidisciplinary body within companies to define ethical and robustness standards, identify the corresponding mitigating interventions and ensure their implementation across all areas involved. This governing body should play a crucial role in continuously adapting the company’s ethics and AI strategy to future ethical challenges.

About the Avallain Lab

We established the Avallain Lab in 2023 to be an ethically and pedagogically sound academic resource, providing support to Avallain product designers and partners, as well as the wider e-learning community.

The unit operates under the academic leadership of John Traxler and the business direction of Carles Vidal, with the support of an advisory panel that includes Professor Rose Luckin. This experience and expertise allow us to deliver research-informed technology and experiences for learners and teachers, including in the field of AI.

The Avallain Lab takes a novel approach, acting as the interface between the world’s vast and rapidly evolving research outputs, activities, networks and communities on the one hand, and Avallain’s continued ambition to enhance both the pedagogic and technical dimensions of its products and services with relevant medium-term ideas and longer-term concepts on the other.

The Lab supports Avallain’s trials and workshops, informs internal discussion and draws in external expertise. It is building a library of research publications, contributing to blogs and research papers, and presenting at conferences and webinars. Early work focused on learning analytics and spaced learning; the current focus is artificial intelligence, specifically ethics, pedagogy and their interactions.

About Carles Vidal

Business Director of the Avallain Lab, 
MSc in Digital Education from the University of Edinburgh.

Carles Vidal is an educational technologist with more than twenty years of experience in content publishing, specialising in creating e-learning solutions that empower educators and students in K-12 and other educational stages. His work has included the publishing direction of learning materials aligned with various curricula across Spain and Latin American countries.

About John Traxler

Academic Director of the Avallain Lab, 
FRSA, MBCS, AFIMA, MIET

John Traxler, FRSA, MBCS, AFIMA, MIET, is Professor of Digital Learning, UNESCO Chair in Innovative Informal Digital Learning in Disadvantaged and Development Contexts and Commonwealth of Learning Chair for innovations in higher education. His papers are cited over 11,000 times and Stanford lists him in the top 2% in his discipline. He has written over 40 papers and seven books, and has consulted for a variety of international agencies including UNESCO, ITU, ILO, USAID, DFID, EU, UNRWA, British Council and UNICEF.

About Rose Luckin

Advisory Panellist of the Avallain Lab,
Doctor of Philosophy – PhD, Cognitive Science and AI

Rosemary (Rose) Luckin is Professor of Learner Centred Design at UCL Knowledge Lab, Director of EDUCATE, and author of Machine Learning and Human Intelligence: The Future of Education for the 21st Century (2018). She has also authored and edited numerous academic papers.

Dr Luckin’s work centres on investigating the design and evaluation of educational technology. In addition, she is Specialist Adviser to the UK House of Commons Education Select Committee for its inquiry into the Fourth Industrial Revolution.

Her other positions include: 

  • Co-founder of the Institute for Ethical AI in Education
  • Past President of the International Society for AI in Education
  • A member of the UK Office for Students Horizon Scanning panel
  • Adviser to the AI and Robotics panel of the Topol review into the future of the NHS workforce
  • A member of the European AI Alliance
  • Previous Holder of an International Franqui Chair at KU Leuven