GenAI Studio: News, Tools, and Teaching & Learning FAQs

These sixty minute, weekly sessions – facilitated by Technologists and Pedagogy Experts from the CTLT – are designed for faculty and staff at UBC who are using, or thinking about using, Generative AI tools as part of their teaching, researching, or daily work. Each week we discuss the news of the week, highlight a specific tool for use within teaching and learning, and then hold a question and answer session for attendees.
They run on Zoom every Wednesday from 1pm – 2pm and you can register for upcoming events on the CTLT Events Website.
News of the Week
Each week we discuss several new items that happened in the Generative AI space over the past 7 days. There’s usually a flood of new AI-adjacent news every week – as this industry is moving so fast – so we highlight news articles which are relevant to the UBC community.
In this week’s tech news, TechCrunch introduces DeepSeek, a series of new open-source AI models from a Chinese startup, highlighting its competitive features that challenge the state of the current AI market. Ollama provides detailed information on DeepSeek-R1, a distilled reasoning AI model that performs similarly to state-of-the-art models. WIRED examines the censorship protocols within DeepSeek’s AI model, discussing how it restricts responses on sensitive topics and methods some users have tried to bypass these limitations. Hugging Face launches the Open-R1 project to fully reproduce DeepSeek-R1’s reasoning model through open-source methods in an effort to enhance transparency and collaboration in AI research. The Allen Institute for AI (Ai2) introduces Tülu 3 405B, an open-source language model with 405 billion parameters that demonstrates superior performance to DeepSeek V3 and GPT-4o across various benchmarks. OpenAI introduces “Deep Research,” an AI tool designed to generate comprehensive reports to streamline data synthesis and analysis, especially in STEM fields. OpenAI announces the o3-mini model, a reasoning model that can be used by free plan users in ChatGPT. Google’s Gemini app integrates advanced reasoning AI models in its 2.0 Pro update that improves coding performance and uses multimodal inputs to better understand world knowledge. Finally, OpenAI publishes a paper discussing the role of reasoning models in counteracting prompt injection attacks.
Here’s this week’s news:
DeepSeek Launches AI Chatbot
DeepSeek, a series of new open-source AI models from a Chinese startup disrupts the competitive AI landscape. Its lineup includes DeepSeek-V3, a large language model (LLM) like ChatGPT and Gemini, and a series of reasoning models, positioning its models as rivals to American and European AI models. DeepSeek has been gaining attention in China and beyond for its models’ extremely low development costs and open-source nature that challenges the leadership of other models that are not open-source. Read more.
DeepSeek Releases a Reasoning Model, DeepSeek-R1
DeepSeek-R1 is an open-source reasoning AI model designed to produce quality results on par with OpenAI-o1. Ollama discusses how DeepSeek distilled the reasoning processes of larger models such as Llama and Qwen into smaller models that perform well on benchmarks, comparable to current state-of-the-art models. The release of DeepSeek-R1 highlights the growing trend of open-source AI development. Read more.
Learn more about the research and development behind DeepSeek-R1 in this paper by DeepSeek’s research team.
WIRED Investigates DeepSeek’s Censorship
WIRED conducts an investigation into DeepSeek AI models, which reveals strict censorship mechanisms embedded within the chatbot. The model filters responses related to topics deemed sensitive by the Chinese government to comply with China’s regulatory guidelines. DeepSeek AI models censor these topics at the application level, so users are only made aware of these restrictions when interacting the model. The model’s built-in biases also highlight the issue of inherent biases in all AI models due to their pre-training and post-training processes. Read more.
Hugging Face Initiates the Open-r1 Project
Hugging Face launches the Open-R1 project to reproduce DeepSeek-R1’s reasoning model through open-source methods. The project seeks to reconstruct the data and training pipeline of DeepSeek-R1, validate its claims, and advance open reasoning models. By doing so, it aims to provide transparency on how reinforcement learning can enhance reasoning and share reproducible insights with the open-source community. Read more.
The Allen Institute for AI Releases Tülu 3
The Allen Institute for AI (Ai2) unveils Tülu 3 405B, an open-source language model comprising 405 billion parameters, marking the first application of fully open post-training recipes to models of this scale. This model employs a novel Reinforcement Learning with Verifiable Rewards (RLVR) approach that enhances its capabilities in tasks such as mathematical problem-solving and instruction following. Benchmark evaluations indicate that Tülu 3 405B achieves competitive or superior performance compared to DeepSeek V3 and GPT-4o in many standard assessments. Smaller versions of the model will need to be released before it can be used for university and individual purposes. Read more.
OpenAI’s Introduces “Deep Research” Initiative
OpenAI introduces “Deep Research,” a tool designed to generate comprehensive analytical reports similar to those produced by human researchers. This initiative aims to streamline data synthesis and provide in-depth insights on complex topics, which is especially useful in STEM fields. Users can ask the “Deep Research” tool to generate a research report on a certain topic, and the tool will compile a report that is comparable to PhD level research reports in some topics. Read more.
Roughly a day after OpenAI released “Deep Research,” Hugging Face released an open-source reproduction of “Deep Research.” Learn more.
OpenAI Releases o3-mini Model
OpenAI unveils o3-mini, a reasoning model that is available for free plan users in ChatGPT. It is a compact model optimized for STEM reasoning and lower computational costs. Additionally, o3-mini has the ability to work with search to provide answers and links to web sources. Read more.
Google’s Gemini Updates to Gemini 2.0 Pro
Google has released the Gemini 2.0 Pro update that introduces enhanced reasoning abilities to its AI models. The update focuses on improving performance in tasks such as coding and other complex queries. Gemini 2.0 Pro accepts multimodal inputs, thus producing a model with a deeper understanding of world knowledge. Read more.
OpenAI Researches Strategies Against Prompt Injection Attacks
OpenAI has published new research on mitigating prompt injection attacks (when hackers use carefully worded prompts to manipulate LLMs into providing inaccurate, sensitive, or harmful information). The paper explores how reasoning models can make AI models more resilient against malicious input manipulations by increasing inference-time compute. Researchers also outline the limitations of using inference-time compute to improve robustness against prompt injection attacks. Read more.
Tool of the Week

Tool of the Week: DeepSeek-r1-Distill-LLama-8B
What is DeepSeek-r1-Distill-Llama-8B?
DeepSeek-R1-Distill-Llama-8B is a distilled version of the DeepSeek-R1 reasoning model, fine-tuned from the Llama3.1-8B-Base model. It is designed to perform complex reasoning tasks, including mathematics and code generation, with a parameter count of 8 billion. This distilled model is part of DeepSeek’s initiative to create efficient, smaller-scale models that maintain high performance.
How is it used?
Users can interact with DeepSeek-R1-Distill-Llama-8B through platforms like Ollama, which provides an interface for running the model locally on your computer. By inputting prompts or questions, the model generates its thinking process while working towards a final answer. Like other reasoning models, DeepSeek-R1-Distill-Llama-8B evaluates its own responses and improves upon them before stating its solution. However please note that distillation techniques may introduce any potential inherent biases from the underlying model into the receiving model, which in turn may change how that model responds to certain prompts.
What is it used for?
DeepSeek-R1-Distill-Llama-8B is utilized for tasks that demand sophisticated reasoning, such as solving mathematical problems, generating code snippets, and understanding complex language queries. Its distilled nature allows for efficient deployment in environments with limited computational resources. The model’s open-source licensing under the MIT License encourages modification and integration into various projects.
Download Ollama here.
Explore the DeepSeek-R1-Distill-Llama-8B model here.
Without a PIA, instructors cannot require students use the tool or service without providing alternatives that do not require use of student private information
Questions and Answers
Each studio ends with a question and answer session whereby attendees can ask questions of the pedagogy experts and technologists who facilitate the sessions. We have published a full FAQ section on this site. If you have other questions about GenAI usage, please get in touch.
-
Assessment Design using Generative AI
Generative AI is reshaping assessment design, requiring faculty to adapt assignments to maintain academic integrity. The GENAI Assessment Scale guides AI use in coursework, from study aids to full collaboration, helping educators create assessments that balance AI integration with skill development, fostering critical thinking and fairness in learning.
-
How can I use GenAI in my course?
In education, the integration of GenAI offers a multitude of applications within your courses. Presented is a detailed table categorizing various use cases, outlining the specific roles they play, their pedagogical benefits, and potential risks associated with their implementation. A Complete Breakdown of each use case and the original image can be found here. At […]