Technology outpaces regulation, as the saying goes. But in the case of AI, government institutions are catching up. In October 2023, the White House released the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. In March 2024, the European Union passed the Artificial Intelligence Act.
While there is some ambiguity in the executive order and the AI Act, both include provisions that will directly affect educational institutions and how they implement AI, inside and outside the classroom.
The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence is an impressive-sounding name for a document that doesn’t contain any legislation. Instead, it directs government agencies, such as the Department of Education, to develop “resources, policies, and guidance regarding AI.”
Higher ed administrators may find it difficult or even impossible to draft or revise an AI policy when they don’t know the specific regulations coming down the pike. But there is a strong hint in the executive order.
The order directs the Secretary of Education to develop policies addressing “safe, responsible, and non-discriminatory uses of AI in education, including the impact AI systems have on vulnerable and underserved communities.” So it’s a good idea to address implicit bias—how to recognize it and how to avoid it—when drafting your own AI policy. (This post from Chapman University summarizes how implicit bias can manifest in AI and what to look out for.)
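To make “recognizing implicit bias” concrete, here is a minimal sketch in Python of a demographic parity check, a common first-pass audit that compares positive-outcome rates across groups. The group labels and decision data below are entirely hypothetical; a real audit would draw on your tool’s actual logs and a broader set of fairness metrics:

```python
# A minimal sketch of one way to surface possible bias in an AI tool's
# output: compare positive-outcome rates across demographic groups
# (a "demographic parity" check). All data here is hypothetical.

from collections import defaultdict

def positive_rates(decisions):
    """decisions: list of (group_label, approved_bool) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            positives[group] += 1
    return {g: round(positives[g] / totals[g], 2) for g in totals}

# Hypothetical audit log from an AI-assisted screening tool
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

print(positive_rates(decisions))  # {'group_a': 0.67, 'group_b': 0.33}
```

A large gap between groups does not by itself prove bias, but it is exactly the kind of signal an AI policy can require administrators and instructors to flag for human review.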
A second issue is regular and substantive interaction (RSI), a criterion that distance education programs must meet to qualify for federal funding. In short, RSI regulations stipulate that interaction between instructors and distance learners must be substantive and occur at predictable intervals. WCET, a nonprofit that focuses on digital learning, notes in a [recent report](https://wcet.wiche.edu/frontiers/2021/08/26/rsi-refresh-sharing-our-best-interpretation-guidance-requirements/) that interaction supplied by AI tools alone “will not meet the statutory requirements for regular and substantive interaction.” Thus AI policies for your institution need to address how to use AI tools in substantive ways (more on this below).
Legislation from the European Union may not seem like a pressing concern for American institutions. However, like the EU’s data privacy laws, the EU AI Act will have a wide-ranging effect, because it applies to any organization whose AI systems reach users in the EU. Since many American universities have campuses on EU soil, enroll European students, or run study abroad programs, it makes sense to pay close attention to EU requirements.
Unlike the vaguer US executive order, the EU AI Act takes an explicitly risk-based approach, defining categories of AI activity and their associated level of risk (minimal, limited, high, and unacceptable).
Education sits squarely in the high-risk category: the Act specifically names AI systems that determine admission, evaluate learning outcomes, or monitor students during exams. Of course, these uses are quite broad by design: since AI is relatively nascent in higher ed, legislators want to cover as many of the possibilities as they can. This may frustrate those who have to draft a policy that complies with the legislation. Remember, however, that these uses of AI aren’t necessarily prohibited; rather, they are subject to “conformity assessments,” meaning higher education institutions should take the necessary steps to demonstrate compliance.
Also, both the EU and the White House have a take on AI in education that could be described as cautious encouragement—the White House’s Fact Sheet on the executive order acknowledges that AI has the potential to “transform education.” Nevertheless, given the uncertainties, what should administrators keep in mind when drafting an AI policy for their own institution?
In universities and other institutions of higher education, policy is usually designed at a high level, for example by Directors of Centers of Teaching and Learning, with input from legal advisors. But an AI policy stands a greater chance of success when it is grounded in a clear picture of how AI is actually used in the classroom and a clear understanding of the ed tech products themselves.
Thus it’s a good idea to include all the stakeholders in your AI policy committee whenever possible: department heads, instructors, and students. Ask them how they’re using AI in the classroom—in curriculum and lesson plans, or to respond to assignments. They should be encouraged to voice their ethical concerns as well: when instructors feel students may be over-reliant on AI, or when students have noticed or experienced implicit bias.
These are big questions, but the idea is to foster an environment where stakeholders are aware of the ethical and regulatory issues and feel comfortable with the technology, so it all can be reflected in a comprehensive AI policy. And as we mentioned, the EU regulations are ambiguous in places, and in the US, they’re just getting started. Nevertheless, we know enough to begin the process of drafting an institutional policy. Here are some suggestions: