GenAI Studio: News, Tools, and Teaching & Learning FAQs
These sixty-minute sessions – held every two weeks and facilitated by Technologists from the Learning Technology Innovation Centre (LTIC) – are designed for faculty and staff at UBC who are using, or thinking about using, Generative AI tools as part of their teaching, research, or daily work. In each session, we discuss recent generative AI news, highlight a specific tool for use within teaching and learning, and then hold a question and answer session for attendees.
They run on Zoom on Wednesdays from 1 pm – 2 pm, and you can register for upcoming events on the CTLT Events website.
News of the Week
In each session we discuss several news items from the Generative AI space over the past 14 days. There’s usually a flood of AI-adjacent news every week – the industry is moving fast – so we highlight the articles most relevant to the UBC community.
This week’s AI news spotlighted major developments in model self-awareness, neural network interpretability, multilingual systems, privacy-enhanced compute, and a powerful open-source reasoning model. Anthropic showed early signs of introspection in large language models (LLMs), and new research revealed separate neural pathways for memory and reasoning in AI neural networks. Meta released a multilingual Automatic Speech Recognition (ASR) system supporting 1,600+ languages, while Google introduced Private AI Compute for secure, privacy-preserving cloud inference. Moonshot AI launched Kimi K2 Thinking, an open-weights reasoning model rivaling top proprietary systems.
At UBC, the Centre for Teaching, Learning and Technology (CTLT) released new AI-in-assessment guidance and announced workshops on using generative AI to enhance course content, clarify teaching purpose, and reflect on teaching impact.
Here’s this week’s news:
[This Week’s Recording Will Be Uploaded Here Once It Has Been Processed]
Anthropic Releases Study on Signs of Introspection in Large Language Models
In a recent study, Anthropic investigated whether large language models like Claude can introspect — that is, monitor and report their own internal neural representations. Using a method called concept injection, the researchers inserted known activation patterns (in this case, the internal activity associated with certain words, such as “bread” and “dust”) into unrelated contexts, and asked the model whether it could detect the injection and identify the injected word. In some cases, Claude correctly recognized these inserted patterns, showing a limited ability to monitor its own internal state. However, this ability remains inconsistent. Deeper research into AI introspection could help make model reasoning more transparent and understandable.
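The basic shape of the concept-injection experiment can be sketched with a toy stand-in. Here, random vectors play the role of a model’s internal activation patterns, and simple vector addition plays the role of injection — the `inject` and `detect` functions, the concept vectors, and the detection threshold are all illustrative assumptions for this sketch, not Anthropic’s actual method or any real model internals:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64  # dimensionality of our toy "activation" vectors

# Hypothetical stand-ins for the internal activation patterns ("concept
# vectors") associated with particular words -- random, not real activations.
concepts = {word: rng.normal(size=DIM) for word in ["bread", "dust", "ocean"]}

def baseline_activations():
    """Activations from an unrelated context (random noise in this toy)."""
    return rng.normal(size=DIM)

def inject(activations, concept, strength=4.0):
    """Concept injection: add a known concept vector into the activations."""
    return activations + strength * concept

def detect(activations, known_concepts, threshold=5.0):
    """Check whether any known concept stands out in the activations.

    Projects the activations onto each concept direction; if the strongest
    projection exceeds the threshold, report that concept, else report None
    (i.e., "no injection detected").
    """
    scores = {
        word: float(np.dot(activations, vec) / np.linalg.norm(vec))
        for word, vec in known_concepts.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > threshold else None

# A clean context should trigger no detection; an injected one should.
print(detect(baseline_activations(), concepts))                    # no concept found
print(detect(inject(baseline_activations(), concepts["bread"]), concepts))
```

In the toy, a random baseline projects only weakly onto any concept direction, while an injected concept produces a large projection along its own direction — loosely mirroring how the study asks whether the model can notice an activation pattern that does not belong to its current context.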
Researchers Find Distinct Pathways for Memory and Reasoning in AI Neural Networks
A recent study by Goodfire AI shows that AI models store “memory” function and “logic” or reasoning function in separate neural regions. The findings suggest that neural networks may organize their internal computations more like distinct modules for memorizing vs. reasoning than previously assumed.
Read the Full Article Here
Meta Releases Open-Source Multilingual Automatic Speech Recognition (ASR) System Supporting 1,600+ Languages
Meta has released a new open-source automatic speech recognition (ASR) system called Omnilingual ASR that supports 1,600+ languages and can extend to over 5,400 languages with zero-shot learning, which allows AI models to recognize new languages without any labeled examples. Built on a massive multilingual dataset, the system aims to reduce language barriers and support underserved languages that are often excluded from digital tools.
Google Introduces Private AI Compute: Cloud-Powered Gemini Models with Built-in Enhanced Privacy
Google has introduced Private AI Compute, a system that lets devices use powerful cloud-based AI models while keeping user data private. It allows tasks too complex for on-device models to run securely in the cloud using encrypted, isolated environments that even Google cannot access.
Release of Kimi K2 Thinking — an Open-Source Thinking Model by Moonshot AI
Kimi K2 Thinking is an open-source reasoning model from Moonshot AI designed to handle complex, multi-step problems and extended planning tasks. The model is released with open weights, so developers and researchers can freely inspect it, run it locally, and build on top of it. Despite being openly released, it has shown performance that rivals, and in some cases surpasses, leading proprietary systems, particularly on agentic reasoning and agentic search and browsing benchmarks.
Read the Full Article Here
Strategies for Managing Generative AI Use in Assignments
UBC’s Centre for Teaching, Learning and Technology’s AI and Assessments resource provides guidance on using generative AI in assignments. It helps instructors design assignments that responsibly incorporate AI or remain resilient to its use, while supporting learning goals and academic integrity.
Read the Full Article Here
Course Design Studio: Using Generative AI to Transform Text into Engaging Visual and Audio Content
UBC’s Centre for Teaching, Learning and Technology is hosting a hands-on, in-person session (Nov 18th, 2025, 12:30 – 1:30 pm) where instructors will learn how to use generative AI tools like Napkin AI and ElevenLabs to turn text into compelling visual and audio content for courses.
Register for the Event Here
Using Generative AI to Clarify the Purpose of Our Teaching
UBC’s Centre for Teaching, Learning and Technology is hosting an interactive workshop (Nov 25th, 2025, 12:00 – 1:30pm), where participants will use generative AI tools alongside frameworks to clarify why a course matters and who it is really for, helping them design more intentional, inclusive, learner-centered teaching.
The 2025 Centre for Teaching, Learning and Technology Winter Institute
The 2025 Winter Institute, hosted by UBC’s Centre for Teaching, Learning & Technology (CTLT), is a series of workshops inviting faculty to reflect on their teaching practices and their impact on student learning. It covers topics like inclusive teaching, relational pedagogy, wellbeing in education, student-faculty partnerships, the use of generative AI in learning, and more.
Questions and Answers
Each studio ends with a question and answer session where attendees can ask questions of the pedagogy experts and technologists who facilitate the sessions. We have published a full FAQ section on this site. If you have other questions about GenAI usage, please get in touch.
Assessment Design using Generative AI
Generative AI is reshaping assessment design, requiring faculty to adapt assignments to maintain academic integrity. The GENAI Assessment Scale guides AI use in coursework, from study aids to full collaboration, helping educators create assessments that balance AI integration with skill development, fostering critical thinking and fairness in learning.
How can I use GenAI in my course?
GenAI offers many applications within your courses. Presented here is a detailed table categorizing various use cases, outlining the specific roles they play, their pedagogical benefits, and the potential risks associated with their implementation. A complete breakdown of each use case and the original image can be found here. At […]