GenAI Studio: News, Tools, and Teaching & Learning FAQs
These sixty-minute, weekly sessions – facilitated by technologists and pedagogy experts from the CTLT – are designed for faculty and staff at UBC who are using, or thinking about using, Generative AI tools as part of their teaching, research, or daily work. Each week we discuss the news of the week, highlight a specific tool for use within teaching and learning, and then hold a question-and-answer session for attendees.
The sessions run on Zoom every Wednesday from 1 pm to 2 pm, and you can register for upcoming events on the CTLT Events Website.
News of the Week
Each week we discuss several news items from the Generative AI space over the past seven days. Because this industry moves so quickly, there is usually a flood of AI-adjacent news each week, so we highlight the articles most relevant to the UBC community.
This week in AI, a recent study warns that students relying heavily on AI for essay writing risk accumulating “cognitive debt,” as they bypass essential mental processes required for learning. In a similar vein, Rachel Horst argues that unstructured use of ChatGPT can weaken students’ capacity to think critically, especially when foundational skills are not yet established. MIT researchers add nuance to this debate, showing that AI reduces cognitive effort but does not inherently harm brain function unless misused. Tim Fawns supports this by advocating for pedagogy-first integration of generative tools to ensure meaningful educational outcomes. Responding to these challenges, a Conversation article calls for a national AI literacy strategy in Canada, emphasizing the need for cohesive policy, curriculum, and teacher support. UBC contributes by launching an AI readiness self-assessment, helping students gauge their ethical and practical preparedness. Meanwhile, the New York Times explores how AI is reshaping computer science education, shifting emphasis from coding to systems thinking and ethical analysis. Microsoft outlines its vision for achieving medically aligned superintelligent models to improve clinical outcomes, stressing expert collaboration. Finally, a Nature study argues that while generative AI boosts creative output, it comes with a significantly higher environmental cost than human-generated content – a conclusion that raises questions about sustainable AI deployment, though its methodology has limitations.
Here’s this week’s news:
Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing
A recent paper explores the concept of “cognitive debt,” which arises when students overly rely on AI tools like ChatGPT for essay writing without understanding or critically engaging with the material. The study finds that frequent AI assistance can diminish users’ ability to organize, analyze, and recall information independently. This dependency may lead to surface-level comprehension and reduced skill development over time. The authors argue for more thoughtful integration of AI in education to prevent long-term cognitive erosion.
Review the full research paper on cognitive debt and AI use.
“Your Brain on ChatGPT…” (Rachel Horst, LinkedIn)
In a reflective essay, Rachel Horst warns that using AI tools like ChatGPT before receiving proper guidance can displace rather than enhance learning. She contends that when students skip the foundational work of grappling with complex material, AI can become a shortcut that stunts deeper understanding. Horst advocates for structured pedagogy that frames AI as a supplement—not a substitute—for intellectual effort. Her piece underscores the risks of prematurely deploying generative tools in learning environments.
Read Rachel Horst’s full commentary on AI and cognitive learning.
MIT Study Clarifies: ChatGPT & Brain Health Debate
A viral headline claiming that ChatGPT “rots your brain” misrepresents the more nuanced findings of a recent MIT study. Researchers actually found that while AI can reduce mental effort during tasks, it does not inherently diminish cognitive function if used appropriately. The key variable is how users engage with the AI—passive reliance may impair retention, while active engagement can support learning. The article stresses that the impact of AI on the brain is complex and context-dependent.
Read the MIT study analysis and media correction.
Teaching ChatGPT Use Requires Training (Tim Fawns, LinkedIn)
Tim Fawns argues that learning to use ChatGPT effectively requires explicit instruction and not just open access. Without pedagogical scaffolding, students risk misusing the tool in ways that hinder development and misrepresent learning. He advocates for “pedagogy-first” AI integration, where tool use is shaped by educational objectives and critical reflection. His perspective challenges the assumption that generative AI use is self-evident or naturally beneficial in academic settings.
Explore Tim Fawns’ perspective on structured AI education.
Canada Needs National AI Literacy Strategy (The Conversation)
This article calls for a coordinated, national approach to AI literacy in Canada’s education system. The authors argue that ad hoc exposure to generative tools leaves students unprepared to navigate ethical, academic, and social implications. A formal strategy would involve curriculum design, teacher training, and public engagement to foster critical, equitable AI use. Without it, Canada risks deepening educational inequities and technological misunderstanding.
Learn more about the call for a national AI literacy framework.
UBC Student AI Readiness Assessment Tool
The University of British Columbia has released an online self-assessment tool that helps students gauge their readiness to use AI responsibly in academic work. The tool prompts users to reflect on ethical use, critical evaluation, and awareness of institutional guidelines. It aims to encourage informed and reflective engagement rather than uncritical dependence. This initiative supports AI literacy through self-directed, scaffolded learning.
Try UBC’s AI readiness assessment for students.
How to Teach Computer Science in the AI Era
As AI reshapes both industry and curriculum, computer science educators are rethinking what skills to prioritize. The New York Times reports on shifts toward teaching how to interact with, evaluate, and build on top of AI systems rather than coding alone. Educators are adapting by focusing on conceptual fluency, systems thinking, and ethical awareness. The piece underscores a broader transition from teaching programming languages to cultivating AI fluency.
Read the NYT’s look at computer science education in an AI world.
The Path to Medical Superintelligence
Microsoft outlines its vision for achieving “medical superintelligence” through AI models tailored to healthcare contexts. The strategy emphasizes expert alignment, domain-specific reasoning, and real-time clinical relevance. Rather than a one-size-fits-all model, Microsoft advocates for iterative, medically-grounded development supported by partnerships with health institutions. The goal is to augment clinical decision-making while ensuring safety, transparency, and domain accuracy.
Explore Microsoft’s roadmap for healthcare-focused AI.
AI vs Humans for Carbon Emissions in Writing & Illustration
A recent Nature study compared the carbon emissions of AI-generated and human-produced text and illustrations, concluding that AI workflows typically consume more energy. While the findings raise important concerns about sustainability, the study’s methodology is inconsistent – particularly in its treatment of labor time, task variability, and the baseline assumptions used for emissions calculations. It fails to standardize comparisons between vastly different creative processes, making the results difficult to generalize. As a result, the paper opens a valuable conversation but offers limited actionable insight.
Review the full study and its findings.
Questions and Answers
Each studio ends with a question-and-answer session in which attendees can put questions to the pedagogy experts and technologists who facilitate the sessions. We have published a full FAQ section on this site. If you have other questions about GenAI usage, please get in touch.
Assessment Design using Generative AI
Generative AI is reshaping assessment design, requiring faculty to adapt assignments to maintain academic integrity. The GENAI Assessment Scale guides AI use in coursework – from study aids to full collaboration – helping educators create assessments that balance AI integration with skill development while fostering critical thinking and fairness in learning.
How can I use GenAI in my course?
GenAI offers a wide range of applications within your courses. Below is a detailed table categorizing various use cases, outlining the specific roles they play, their pedagogical benefits, and the potential risks associated with their implementation. A complete breakdown of each use case and the original image can be found here. At […]