57% of higher ed institutions call AI a strategic priority. Only 26% have a formal policy. In this webinar, FeedbackFruits brought together experts from the University of Wisconsin-Oshkosh and FeedbackFruits to unpack what institutions making real progress are doing differently and what leadership keeps missing. Read the key insights and get the full recording below.
What the research says, what's actually working, and what leaders keep missing.
Fifty-seven percent of higher ed institutions call AI a strategic priority. Only 26% have a formal policy. And 80% of faculty still don't know how to apply it in their teaching. Something isn't adding up. In this panel discussion, we spent some time unpacking exactly why.
On April 14th, FeedbackFruits brought together John Bellotti, Instructional Designer at the University of Wisconsin-Oshkosh, and Ewoud de Kok, Founder & CEO of FeedbackFruits, for a frank panel conversation moderated by Chief Strategy Officer Bas Hintemann. The topic: what institutions are actually doing differently when it comes to AI in 2026 and what most are still getting wrong.

One of the starkest findings from the research shared in the webinar is the growing divide between how students are using AI and how prepared institutions are to respond to it.

The central thesis of the webinar was a reframe that resonated strongly with the live audience: institutions don't have an AI problem. They have an adoption and scaling problem.
AI is already spreading, but informally, inconsistently, faculty member by faculty member and department by department. The result is islands of good practice with no knowledge sharing, no governance, and students experiencing wildly uneven course design depending on where they happen to study.
"Leadership is asking us to tackle everything. There's a lot of confusion as to which direction everybody needs to take."
— John Bellotti, Instructional Designer, University of Wisconsin-Oshkosh
The institutions making real progress, the panel found, aren't the ones with the most tools. They're the ones standardizing on fewer, better ones and building repeatable, trustworthy workflows around them.
01 Start small. Standardize fast.
Pick safe, high-value workflows and make them the institutional baseline. Don't let 50 experimental pilots run in parallel without shared learning. Standardization is what turns individual wins into institutional capability.
02 Guardrails are what makes scaling possible, not what prevents it.
Privacy vetting, data oversight, and compliance aren't obstacles to adoption. They're the foundation that makes faculty willing to trust new tools in the first place. At UW-Oshkosh, every tool goes through IT vetting: a process John described as both "good and occasionally frustrating," but ultimately essential.
03 Faculty need to do, not just see.
Telling faculty about a tool, sending a video, writing an article: none of it works as well as putting it in their hands. John described building an AI accessibility bot that initially got almost no adoption. Once he sat with a few instructors and showed it live, the reaction was immediate: "Really, that's all I have to do?" Hands-on experience is what builds adoption.
04 Peer trust beats top-down mandates every time.
Faculty don't adopt tools because an instructional designer recommends them. They adopt them when a respected colleague does. John's analogy was perfect: "I'm the parent telling my kid to do something. But if the coach tells him? That's completely different." Early adopters are your most powerful lever.
One example from the session made the abstract very concrete. An instructor at UW-Oshkosh was assigning pre-class video viewing and getting under 5% completion. After redesigning the activity with interactive video tools, completion reached 97%. The content didn't change, the design did.
In another case, ungraded AI-powered practice exercises for accounting principles were introduced mid-course. Students used them voluntarily. Test scores improved. The best part? The whole activity took 15 minutes to build.
Small design interventions. Big, measurable outcomes.
This is the kind of evidence that travels upward to leadership and sideways to skeptical colleagues, and it's how adoption actually spreads through an institution.
Ewoud raised something the panel felt strongly about: the cognitive load that meaningful AI adoption places on faculty is consistently underestimated by leadership. The efficiency push from AI doesn't reduce the demands on educators; it often intensifies them.
"It's pedagogy of technology. AI is still a technology, continue to think from the pedagogical lens first, before the application."
— Ewoud de Kok, Founder & CEO, FeedbackFruits
The panel's closing argument was one worth sitting with: the digitization of education should lead to more human interaction, not less. AI agents can tutor. They cannot replace the empathy, lived experience, and authentic connection that a human educator brings. Designing for that, deliberately, is where the real opportunity lies.
The panel covered significantly more ground than we can fit here, including what friction points leadership consistently overlooks, how to build an evidence case before anyone above you is paying attention, how to verify student work ownership without punishing students by default, and what ADA compliance requirements mean for course redesign before the April 24th deadline.