GenAI Studio: News, Tools, and Teaching & Learning FAQs
These sixty-minute weekly sessions – facilitated by Technologists and Pedagogy Experts from the CTLT – are designed for faculty and staff at UBC who are using, or thinking about using, Generative AI tools as part of their teaching, research, or daily work. Each week we discuss the news of the week, highlight a specific tool for use within teaching and learning, and then hold a question and answer session for attendees.
They run on Zoom every Wednesday from 1pm – 2pm and you can register for upcoming events on the CTLT Events Website.
News of the Week
Each week we discuss several news items from the Generative AI space over the past 7 days. There’s usually a flood of AI-adjacent news every week – this industry is moving fast – so we focus on the articles most relevant to the UBC community.
This week, we explored how Anthropic’s new research helps trace the inner workings of language models by visualizing their thought processes; TechCrunch reported a privacy complaint against ChatGPT for generating false claims about a real person; and OpenAI introduced 4o Image Generation, a model that improves prompt fidelity and text rendering in generated images, down to matching specific hex colors. We also discussed DeepMind’s 145-page paper on AGI safety, which raises concerns about unreliable AI outputs but has received mixed reactions; DeepSeek-V3-0324 and Qwen2.5-VL-32B, smaller open-source alternatives to ChatGPT with image-processing capabilities; Mistral-Small-3.1-24B, which pushes the boundaries of lightweight multimodal models; a researcher who ran an LLM on a vintage Mac to explore low-spec AI performance; the Model Context Protocol (MCP), which aims to standardize how apps provide input context to LLMs; and a new jailbreak technique that uses psychological prompts to bypass AI safety filters. Finally, Rich demoed Manus AI, a no-code tool that generates interactive AI visuals and recently completed a task visualizing news updates from top AI companies.
Here’s this week’s news:
Tracing the Thoughts of a Language Model
Anthropic’s latest research visualizes how a language model moves from one prediction to the next, offering transparency into its reasoning process. While LLMs are often described as predicting the next word, this paper shows how the model sometimes diverges when uncertain, leading to potential hallucinations. It also highlights how models approach topics they only partially understand, increasing the risk of generating incorrect information. Read more.
ChatGPT Hit with Privacy Complaint Over Hallucinations
OpenAI faces a privacy complaint after ChatGPT fabricated defamatory information about a real individual. The case draws attention to how even names present in training data can result in harmful outputs when information is sparse. It underscores the ongoing risks of hallucinations in generative models and the privacy implications of AI-generated text. Read more.
Introducing 4o Image Generation by OpenAI
OpenAI launches 4o Image Generation, a major upgrade that improves how faithfully generated images follow a prompt. The model can now reproduce details like specific hex colors and legible text far more precisely, addressing a longstanding weakness of earlier image-generation models. The tool is available via OpenAI’s platform but requires sign-in to access. Read more.
DeepMind’s AGI Safety Paper: A Cautious Look Forward
DeepMind published a 145-page report outlining technical and ethical considerations around Artificial General Intelligence. It presents possible scenarios for AGI development by 2030 and discusses how AI can reinforce itself with inaccurate outputs. Although thorough, the paper has been criticized for its alarmist tone and lack of concrete solutions. Read more.
DeepSeek-V3-0324: An Open-Source Model with Broad Capabilities
DeepSeek-V3 is a large open-source language model offering a free alternative to proprietary tools. While not supported for UBC coursework, it provides users with flexible access to high-quality generative capabilities for experimentation. Read more.
Qwen2.5-VL-32B: Compact and Multimodal
This new release by Qwen is a lighter model capable of interpreting both text and image inputs. Though less powerful than ChatGPT, it’s accessible, efficient, and can run offline—ideal for users looking for privacy or offline deployment. Read more.
Mistral-Small-3.1-24B: Efficient, Multimodal Processing
Mistral-Small-3.1 is a 24B-parameter model that can process both image and text inputs with impressive accuracy for its size. A future GenAI Studios session will dive deeper into how this model can be leveraged for accessible multimodal applications. Read more.
Running LLMs on a 2005 PowerBook?
In a creative experiment, a researcher ran a large language model on a 2005 PowerBook G4, testing the limits of outdated hardware. While extremely slow, the test offered insight into the bare minimum requirements for LLM inference and sparked discussion about accessible AI. Read more.
Standardizing Model Inputs: The Model Context Protocol (MCP)
MCP is a new open standard aimed at improving interoperability across AI applications. It defines how models receive contextual data and allows them to connect seamlessly with various tools and databases. OpenAI and Microsoft are among the first adopters. Read more.
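To make the “contextual data” idea concrete, MCP is built on JSON-RPC 2.0: a client (an app) asks an MCP server to invoke a named tool with structured arguments. The sketch below is illustrative only – the helper and the example tool name are our own, not part of any shipping SDK – but the message shape follows the published protocol:

```python
import json

# MCP exchanges JSON-RPC 2.0 messages. A client requests a tool
# invocation with the "tools/call" method, passing the tool's name
# and its arguments as structured params. This helper just builds
# that message shape; it is not a complete MCP client.
def make_tool_call(request_id, tool_name, arguments):
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical example: asking a server-side tool to search a catalogue.
request = make_tool_call(1, "search_catalogue", {"query": "generative AI"})
print(json.dumps(request, indent=2))
```

Because every tool is described and invoked through the same message format, an app can connect to any MCP-compatible server – a database, a file store, a search index – without custom integration code for each one.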
Gaslighting AI: The Latest Jailbreak Tactic
A new jailbreak method tricks language models into ignoring safety filters by framing requests as fictional future research. This approach, which targets models like Claude 3.7 Sonnet, highlights persistent vulnerabilities in AI alignment and content filtering systems. Read more.
Tool of the Week: Manus AI

What is Manus AI?
Manus AI is a no-code platform designed to help users generate structured, interactive content using large language models. The tool prompts users to describe their task or goal, then generates a customized, often visual, output. It simplifies the process of working with LLMs by guiding users through iterations, asking clarifying questions, and refining outputs based on feedback. Manus is still in beta, and access may need to be requested before use.
How is it used?
In GenAI Studios, we demonstrated how Manus AI can be used to generate a graph-based visualization showing news updates from top AI companies.
What is it used for?
Manus AI is particularly useful for creating explorable visualizations, mock-ups, and prototypes without needing technical skills. Educators, designers, and content creators can use it to transform their ideas into interactive concepts. It’s also helpful for brainstorming, building workflows, or visual storytelling where LLMs can automate the heavy lifting and logic. Try ManusAI
Questions and Answers
Each studio ends with a question and answer session where attendees can put their questions to the pedagogy experts and technologists who facilitate the sessions. We have published a full FAQ section on this site. If you have other questions about GenAI usage, please get in touch.
Assessment Design using Generative AI
Generative AI is reshaping assessment design, requiring faculty to adapt assignments to maintain academic integrity. The GENAI Assessment Scale guides AI use in coursework, from study aids to full collaboration, helping educators create assessments that balance AI integration with skill development, fostering critical thinking and fairness in learning.
How can I use GenAI in my course?
In education, GenAI offers a wide range of applications within your courses. We present a detailed table categorizing various use cases, outlining the role each plays, its pedagogical benefits, and the potential risks of implementation. A complete breakdown of each use case and the original image can be found here. At […]