The University of British Columbia
A.I. In Teaching and Learning

Generative AI Studio December 10th, 2025 – Replay


GenAI Studio: News, Tools, and Teaching & Learning FAQs

December 12, 2025

This week

  • News of the week
  • Tool Showcase
  • FAQs

Register for Our Next Session
Check Out Last Session’s Replay

These sixty-minute, biweekly sessions – facilitated by technologists from the Learning Technology Innovation Centre (LTIC) – are designed for faculty and staff at UBC who are using, or thinking about using, Generative AI tools as part of their teaching, research, or daily work. Every two weeks, we discuss recent generative AI news, highlight a specific tool for use within teaching and learning, and then hold a question-and-answer session for attendees.

They run on Zoom on Wednesdays from 1 pm to 2 pm, and you can register for upcoming sessions on the CTLT Events website.

News of the Week

Each session, we discuss several news items from the Generative AI space over the past 14 days. There’s usually a flood of AI-adjacent news every week – the industry is moving fast – so we highlight the articles most relevant to the UBC community.

This week in AI highlighted progress in open foundation models, agentic coding tools, and efforts to better understand and evaluate general intelligence, alongside growing attention to how people learn with and use AI in everyday contexts. Mistral’s release of Mistral 3 and Devstral 2 emphasized continued momentum toward open, high-performance models and more autonomous developer workflows, while initiatives like ARC-AGI-2, AAIF, and Poetiq’s open solver advanced transparent benchmarking of complex reasoning. Work on voice generation and reflections on responsible LLM use underscored the importance of human judgement as AI systems become more capable. Meanwhile, new AI certification courses and reporting on teenagers’ use of chatbots for mental health support highlighted how generative AI is increasingly shaping education, skills development, and human interaction.

Here’s this week’s news:

VibeVoice: Microsoft’s Frontier Open-Source Text-to-Speech Model

Microsoft recently introduced VibeVoice, an open-source text-to-speech model built for generating long-form, multi-speaker audio. It supports multilingual and cross-lingual speech while maintaining natural flow, speaker consistency, and expressive conversational tone.

Read the Full Article Here


Mistral 3: Mistral AI’s Latest Open AI Model

Mistral AI recently announced Mistral 3, its newest generation of AI models. The update improves multilingual understanding, reasoning, and flexibility, making the models easier to use across everything from small devices to large-scale applications.

Read the Full Article Here


Devstral 2 & Mistral Vibe CLI: Mistral AI’s New Coding Models and CLI Assistant

Mistral AI recently launched Devstral 2, its next-generation open-source coding model family designed to automate and accelerate developer workflows. Alongside it, Mistral Vibe CLI brings a natural-language, terminal-native coding assistant that can explore, edit, and execute changes across your codebase.

Read the Full Article Here


OpenAI’s New AI Skills & Certification Program

OpenAI recently launched its first Certificate Courses to help people build practical, job-ready AI skills, including foundational training inside ChatGPT and a teacher-focused track on Coursera. The program aims to expand access to real-world AI learning and certify millions of learners for careers in the AI-shaped job market.

Read the Full Article Here


Oxide: Responsible LLM Use Guidelines

Oxide Computer released RFD 576, outlining how LLMs should be used responsibly within the company. It emphasizes human judgment, responsibility, rigor, empathy, and teamwork when applying LLMs for tasks like reading, editing, writing, and coding, ensuring AI supports rather than replaces thoughtful work.

Read the Full Article Here


Teenagers & AI Chatbots: Rising Use for Mental Health Support (Contains Distressing Content)

A Guardian report says about one in four UK teens aged 13–17 have turned to AI chatbots like ChatGPT for mental health support, especially where traditional services are seen as intimidating or hard to access. While some young people describe chatbots as non-judgmental and always available, experts warn these tools aren’t a replacement for professional care and highlight potential risks and the need for proper safeguards.

Read the Full Article Here


Agentic AI Foundation: Open-Source AI Agent Protocols

The Agentic AI Foundation is a new organization under the Linux Foundation that’s creating open standards for AI agents (software that can autonomously complete tasks). It brings together three key projects: Model Context Protocol (helps AI connect to different tools and data), Goose (an AI coding assistant), and AGENTS.md (a standard way to give AI agents instructions about your project). AWS, Anthropic, Google, Microsoft, and OpenAI are collaborating as founding members to ensure AI agents develop as open, interconnected tools that work together, rather than as closed systems controlled by individual companies.

Read the Full Article Here
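As a sketch of what one of these standards looks like in practice: an AGENTS.md file is plain Markdown placed at the root of a repository, with no fixed schema – agents simply read it as context before working in the project. The headings and commands below are hypothetical examples for an imagined project, not requirements of the format.

```markdown
# AGENTS.md

Instructions for AI coding agents working in this repository.

## Setup
- Install dependencies with `npm install`.

## Testing
- Run `npm test` before proposing any change; all tests must pass.

## Conventions
- Use TypeScript strict mode.
- Do not add new dependencies without asking a maintainer first.
```

Because the file is ordinary Markdown, the same instructions remain readable to human contributors as well.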


ARC-AGI-2: New AI Reasoning Benchmark for 2025

ARC-AGI-2 is a new AI reasoning benchmark released in 2025 that reveals a major gap in AI capabilities. While humans can solve every task (with an average score of 60%), pure large language models score 0% and advanced AI reasoning systems achieve only single-digit percentages. The benchmark tests capabilities like interpreting what symbols mean (not just how they look), applying multiple rules simultaneously, and adapting rules based on context – areas where AI systems tend to spot surface-level patterns and miss the deeper logic. The test now includes an efficiency metric to measure not just whether AI can solve problems, but at what computational cost, rejecting approaches that rely purely on trial and error and don’t represent true intelligence. The challenge aims to inspire researchers to develop genuinely new approaches rather than simply scaling existing methods.

Read the Full Article Here
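To make the benchmark concrete, ARC-style tasks are small colored grids given as a few input–output "train" pairs plus a "test" input; the solver must infer the hidden transformation and apply it. The toy task below is illustrative only (not from the ARC-AGI-2 dataset), with a deliberately simple hidden rule a real task would make far harder to infer:

```python
# A toy task in the ARC-style JSON structure. The hidden rule here is
# "swap colors 1 and 2"; real ARC-AGI-2 rules are much harder to infer.
task = {
    "train": [
        {"input": [[1, 2], [2, 1]], "output": [[2, 1], [1, 2]]},
        {"input": [[0, 1], [2, 0]], "output": [[0, 2], [1, 0]]},
    ],
    "test": [{"input": [[1, 1], [2, 0]]}],
}

def apply_rule(grid):
    """Hand-coded rule for this toy task: swap colors 1 and 2."""
    swap = {1: 2, 2: 1}
    return [[swap.get(cell, cell) for cell in row] for row in grid]

# A candidate rule must reproduce every train pair before it is
# trusted on the test input.
assert all(apply_rule(p["input"]) == p["output"] for p in task["train"])
print(apply_rule(task["test"][0]["input"]))  # [[2, 2], [1, 0]]
```

The hard part the benchmark measures is, of course, discovering `apply_rule` from the train pairs alone – and, with the new efficiency metric, doing so without brute-force search.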


Poetiq’s State-of-the-Art ARC-AGI Results & Cost Efficiency

Poetiq, an AI startup run by a small research team, achieved record-breaking results on the ARC-AGI reasoning benchmarks by building intelligence on top of existing models like GPT-5.1 and Gemini 3. Their system delivers better accuracy at lower costs than previous solutions and works across multiple AI model families, marking significant progress in AI reasoning capabilities.

Read the Full Article Here

Poetiq’s Open-Source ARC-AGI Solver Implementation


Resonant Computing Manifesto: Human-Centered AI Principles

The Resonant Computing Manifesto proposes a new approach to building AI-powered software. It advocates for AI that enhances human wellbeing and reshapes current technologies that exploit attention and drive alienation. The initiative argues that AI can enable technology to adaptively shape itself to serve individual aspirations instead of forcing people into one-size-fits-all solutions. The manifesto outlines five principles: keeping data private under user control, ensuring software works exclusively for users without hidden agendas, distributing power across platforms, adapting to individual needs, and promoting genuine human connection. The group is developing both tools and crowdsourced implementation guidelines to make this vision of “resonant computing” a reality.

Read the Full Article Here


Questions and Answers

Each studio ends with a question-and-answer session in which attendees can ask questions of the pedagogy experts and technologists who facilitate the sessions. We have published a full FAQ section on this site. If you have other questions about GenAI usage, please get in touch.

  • Assessment Design using Generative AI

    Generative AI is reshaping assessment design, requiring faculty to adapt assignments to maintain academic integrity. The GENAI Assessment Scale guides AI use in coursework, from study aids to full collaboration, helping educators create assessments that balance AI integration with skill development, fostering critical thinking and fairness in learning.

    See the Full Answer

  • How can I use GenAI in my course?

    In education, the integration of GenAI offers a multitude of applications within your courses. Presented is a detailed table categorizing various use cases, outlining the specific roles they play, their pedagogical benefits, and potential risks associated with their implementation. A Complete Breakdown of each use case and the original image can be found here. At […]

    See the Full Answer

This website is licensed under a Creative Commons Attribution-NonCommercial 4.0 International Public License.

Centre for Teaching, Learning and Technology
Irving K. Barber Learning Centre
214 – 1961 East Mall
Vancouver, BC Canada V6T 1Z1
Tel 604 827 0360
Fax 604 822 9826
Website ai.ctlt.ubc.ca
Email ctlt.info@ubc.ca