As the EU AI Act emerges as the first legal framework for AI, it provides a unique roadmap for ethical and responsible AI use in education. This landmark legislation is poised to influence educational technology worldwide, making it essential for leaders everywhere to understand its impact and potential ripple effects. In this article, we briefly overview the EU AI Act and how FeedbackFruits is implementing its principles, working toward a future where AI strengthens trust, upholds pedagogy, and responsibly transforms education.
The European Union's Artificial Intelligence Act (AI Act) is a pioneering effort to regulate artificial intelligence, balancing innovation with ethical considerations and safety. As the first comprehensive legal framework for AI worldwide, it seeks to position Europe at the forefront of trustworthy AI development.
The AI Act is part of the EU Commission's initiative to create "A Europe fit for the digital age" for the 2019–2024 term.
One of the most debated legal frameworks in recent years, the EU AI Act followed a long legislative path: the EU Commission published its proposal in April 2021, a political agreement was reached in December 2023, and the Act was published and entered into force on August 1st, 2024. It is expected to be fully applicable and enforceable by August 2nd, 2026, with some exceptions.
The AI Act adopts a risk-based regulatory approach, categorizing AI applications into four distinct risk levels: unacceptable risk (prohibited outright), high risk, limited risk, and minimal risk.
For more details on the AI systems classified under the framework, you can visit this website: https://artificialintelligenceact.eu/high-level-summary/
This risk-based approach is what differentiates the EU AI Act from the GDPR. Instead of defining a list of AI systems that are prohibited or high-risk, the Act offers a systematic approach to decide how much risk an AI system presents to society and individuals.
The implications of the EU AI Act for the education sector are profound. Education is classified under the high-risk category, which means that AI applications in this field must adhere to strict compliance requirements. This classification reflects the significant impact AI can have on students' educational trajectories and their future careers.
Within the education sector, the Act's Annex III lists several high-risk applications, including AI systems that determine admission or access to educational institutions, evaluate learning outcomes, assess the appropriate level of education for an individual, or monitor and detect prohibited student behaviour during tests.
These applications must comply with rigorous safety and documentation standards to mitigate risks. For instance, any AI system used for grading must ensure accuracy and human oversight to maintain fairness and transparency.
One of the most notable aspects of the EU AI Act is the prohibition of emotion inference systems in educational contexts. This includes any AI applications that attempt to interpret students' emotions through biometric data. The rationale behind this prohibition is to protect students' fundamental rights and prevent manipulative practices that could adversely affect their educational experience.
How can we continue to protect the fundamental rights and freedoms of individuals while implementing innovative technologies to improve teaching and learning?
At FeedbackFruits, we are committed to developing AI tools that align with the ethical standards set forth by the EU AI Act. To achieve this, we have established a framework of guiding principles that govern our AI applications. These principles are designed to safeguard the interests of three key stakeholders: students, teachers, and institutional administrators.
Our design principles focus on the following key areas: pedagogy first, transparency, interpretability, the right to object, and data privacy.
Following these principles, we map our products and features onto a risk classification, which we perform internally by breaking the risk assessment down into three criteria.
Below are some examples of how we apply these design principles in our product design:
A feature of our AI solution, Acai, can generate instant, formative feedback on technical aspects of students' academic writing submissions (e.g., grammar, vocabulary, citation, and structure). Under the pedagogy-first principle, this feature is low-risk, since students can choose to follow or ignore the feedback without any consequences. We also apply the transparency and right-to-object principles by clearly explaining the feature and allowing students to opt out of receiving feedback.
In addition, feedback generated by Acai is accompanied by explanations and options for students to object to the given feedback, reflecting the interpretability and right-to-object principle. Not only are students able to understand why a comment was made on their work, but they are also given the freedom to assess the feedback and choose their own actions.
Automated Feedback Coach, another feature of Acai, helps guide students throughout their feedback process by giving timely advice and suggestions.
For this feature, we ensure transparency from the outset with clear explanations of its function and of how student data will be used. Teachers can choose whether to enable the feedback coach, reflecting the right-to-object principle. To safeguard data privacy, we state openly that student data are not used to train the language model.
Read more in our Transparency Note.
Our latest addition to Acai is the Grading Assistant feature, which provides suggestions on feedback and grading of students' work, helping teachers deliver faster, more consistent feedback at scale. Recognizing this as a high-risk application, we minimize potential pitfalls by adhering to our principles: being transparent that AI drives the feature, letting teachers choose whether to use auto-grading, and ensuring data privacy.
When using the feature, a clear explanation accompanies each suggestion, along with the option for the teacher to reject it, reflecting the interpretability and right-to-object principles.
As we navigate the complexities of the EU AI Act, FeedbackFruits is dedicated to implementing these principles in our product development. We believe that AI can transform education positively, but it must be done responsibly. By adhering to the guidelines established by the Act, we aim to foster an environment where AI supports educators and students alike.
We recognize that the landscape of AI in education is continually evolving. To stay ahead, we actively seek feedback from educators and stakeholders to refine our practices and ensure our tools serve their intended purpose. This collaborative approach allows us to adapt to new challenges and maintain compliance with emerging regulations.
We also officially signed the AI Pact Voluntary Pledges, reinforcing our commitment to ethical AI development and deployment in compliance with the EU AI Act. Read more in our press release.
The EU AI Act represents a critical step towards establishing a framework for responsible AI use in education. By understanding its implications and committing to ethical practices, EdTech providers and educational institutions can pave the way for a future where AI strengthens trust, upholds pedagogy, and transforms teaching and learning in responsible and meaningful ways.
FeedbackFruits' ethical AI framework serves as a roadmap for this journey, demonstrating how a proactive, stakeholder-centric approach can unlock the full potential of AI while mitigating the risks. As the education landscape continues to evolve, the lessons learned and the principles established by FeedbackFruits can serve as a guiding light for the entire EdTech ecosystem, ensuring that the integration of AI remains firmly grounded in the values of transparency, interpretability, and responsible innovation.
We encourage you to share your thoughts on the EU AI Act and its impact on education. How do you see AI transforming your educational practices? What concerns do you have regarding its implementation? Let’s work together to create a responsible future for AI in education.