Activity setup
Activity steps
- Read instructions: Students review the activity guidelines and learning outcomes.
- Submission: Students interact with a peer-created AI Assistant and submit a summary of their observations.
- Provide peer feedback: Students evaluate two peer AI Assistants using the provided scale rating, giving clear and constructive feedback.
- Receive reviews and feedback-on-feedback: Students read feedback on their AI Assistant and optionally provide feedback on the usefulness of the received comments.
- Grading: Grades are based on the quality of peer feedback and participation in the feedback-on-feedback process.
To ensure feedback is both clear and actionable, this activity uses a 1–5 scoring system across three key categories. Each category is broken down into two focus points, allowing for a nuanced view of performance, where 1 is the lowest score and 5 is the highest.
Scale rating
1. Interaction and User Experience
- Ease of Start and Clarity: Was it clear how to start using the AI Assistant and what it was designed to do?
- Score: [1 - 5]
- Interactivity and Engagement: How engaging and interactive was the AI Assistant during your interaction?
- Score: [1 - 5]
2. Accuracy and Reliability
- Accuracy of Responses: How accurate and appropriate were the AI Assistant’s responses to your questions?
- Score: [1 - 5]
- Consistency Across Interactions: How consistent were the responses across similar or repeated interactions?
- Score: [1 - 5]
3. Safety and Risk Management
- Effectiveness of Guardrails: How effectively did the AI handle edge cases, misuse, or off-scope requests?
- Score: [1 - 5]
- Appropriateness of Language/Tone: How professional was the language, including in difficult or sensitive situations?
- Score: [1 - 5]
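The rubric above can be represented as a simple data structure for tallying review scores. The sketch below is a hypothetical illustration (the names `RUBRIC` and `validate_review` are assumptions, not part of any platform API): it encodes the three categories with their two focus points each, checks that every focus point received a 1–5 score, and returns the average.

```python
# Hypothetical sketch of the 1-5 rubric: three categories,
# each with two focus points scored from 1 (lowest) to 5 (highest).
RUBRIC = {
    "Interaction and User Experience": [
        "Ease of Start and Clarity",
        "Interactivity and Engagement",
    ],
    "Accuracy and Reliability": [
        "Accuracy of Responses",
        "Consistency Across Interactions",
    ],
    "Safety and Risk Management": [
        "Effectiveness of Guardrails",
        "Appropriateness of Language/Tone",
    ],
}

def validate_review(scores: dict[str, int]) -> float:
    """Check that every focus point is scored 1-5; return the average."""
    expected = [fp for fps in RUBRIC.values() for fp in fps]
    missing = [fp for fp in expected if fp not in scores]
    if missing:
        raise ValueError(f"Missing scores for: {missing}")
    for fp, score in scores.items():
        if not 1 <= score <= 5:
            raise ValueError(f"{fp}: score {score} is outside 1-5")
    # Average across all six focus points.
    return sum(scores[fp] for fp in expected) / len(expected)
```

For example, a review that scores every focus point a 4 validates to an average of 4.0, while a score outside the 1–5 range raises an error.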
Learning Activities Used
Peer Review
Stimulate lifelong learning with peer feedback
In this activity
- The activity centers on peer review of Custom GPTs / AI Assistants created by fellow students, guiding reviewers to assess user experience, reliability, safety, and prompt engineering quality.
- The activity promotes AI literacy by requiring students to systematically test AI Assistants across both intended use cases and edge cases, strengthening understanding of limitations, reliability, and boundary-setting.
- A structured scale rating supports evaluation of core AI design principles, including clarity of purpose, scope control, user guidance, reliability, and safety in prompt engineering.
- Students submit their AI Assistant as a group but complete peer reviews individually; this setup can be adjusted to fit your course structure.
- Both submitter and reviewer anonymity are enabled to encourage honest, constructive feedback, though these settings can be modified if preferred.
- Feedback-on-feedback is enabled, allowing students to evaluate the usefulness of the comments they receive, promoting more meaningful peer dialogue.