In our recent workshop, we explored how generative AI can both challenge and enhance students’ evaluative judgment by having them engage directly with its inaccuracies and hallucinations. Participants worked hands-on with tools such as ChatGPT, Claude, Perplexity.ai, and MS Copilot to critically assess AI-generated content, recognize misinformation, and develop strategies for scaffolding evaluative judgment in their teaching.
Through interactive activities, attendees:
- Used AI to generate misinformation and analyzed the sources and reasoning behind inaccuracies.
- Examined bias and hallucinations in AI-generated responses, identifying patterns of error and discussing their implications for student learning.
- Designed AI-focused assignments that help students engage in self- and peer-assessment to build critical thinking skills.
- Explored frameworks like SIFT (Stop, Investigate the source, Find better coverage, Trace claims to the original context) to support students in evaluating AI-generated outputs effectively.
The session highlighted key opportunities and challenges, emphasizing that because generative AI can produce plausible-sounding but unreliable content, students need structured opportunities to critique it and refine their judgment. Scaffolding these experiences through well-designed activities and peer discussions can help students develop essential metacognitive skills.
For continued learning, access the session slides, worksheet, and additional resources below: