The University of British Columbia
UBC - A Place of Mind
The University of British Columbia
A.I. In Teaching and Learning
  • Questions About AI
  • Experiences
    • Submit an Experience
  • Events
  • Resources
    • Glossary of GenAI Terms
    • T&L PIA status
    • Assessment Design using Generative AI
    • Student AI Readiness Assessment
  • Tools
  • Contact
    • Request Consultant Support
  • Submit Resource

Generative AI Studio May 7 2025 – Replay

GenAI Studio

GenAI Studio: News, Tools, and Teaching & Learning FAQs

May 8, 2025

This week

News of the week

Tool Showcase

FAQs

Register for Next Week
Check Out Last Week’s Replay

These sixty minute, weekly sessions – facilitated by Technologists and Pedagogy Experts from the CTLT – are designed for faculty and staff at UBC who are using, or thinking about using, Generative AI tools as part of their teaching, researching, or daily work. Each week we discuss the news of the week, highlight a specific tool for use within teaching and learning, and then hold a question and answer session for attendees.

They run on Zoom every Wednesday from 1pm – 2pm and you can register for upcoming events on the CTLT Events Website.


News of the Week

Each week we discuss several new items that happened in the Generative AI space over the past 7 days. There’s usually a flood of new AI-adjacent news every week – as this industry is moving so fast – so we highlight news articles which are relevant to the UBC community.

This week, a Mastodon post surfaces a creative jailbreak trick using job resumes to bypass LLM content filters, showcasing continued vulnerabilities in model alignment. Meanwhile, Promptfoo emerges as a robust tool for rigorously testing LLM prompts across different settings and configurations, offering a more systematic approach to evaluation. HiddenLayer researchers reveal a universal jailbreak technique that circumvents safety measures across all major language models, renewing concerns around model robustness. In parallel, Anthropic announces its “AI for Science” initiative aimed at accelerating scientific discovery through AI, with detailed rules and restrictions to guide applicants through the selection process. On X (formerly Twitter), researcher Sam Rodrigues shares a specialized LLM designed to analyze biological data on-device, hinting at a future where scientific computation can be run locally without relying on cloud inference. AllenAI releases OLMo-2, a 1B parameter open-source language model, positioning it as a lightweight and transparent alternative in the LLM space. Hugging Face also debuts a new demo called Smol Agents, a compact, interactive AI that simulates computer use, demonstrating how small models can still execute practical tasks through agentic workflows. Environmental concerns gain new depth with an analysis of ChatGPT’s carbon footprint, providing a detailed look at per-query emissions and usage patterns. The Guardian adds to the conversation by spotlighting the water consumption of major AI data centers, linking their growth to serious ecological costs. Harvard Business Review rounds out the week with insights on how users in 2025 are integrating generative AI into daily workflows—from task automation to creative ideation—offering a clearer picture of its practical, widespread use. Finally, OpenAI publishes a technical deep dive into sycophancy in language models, explaining how LLMs may over-agree with user opinions and the challenges of reducing this behaviour without harming helpfulness or safety. 

Here’s this week’s news:

Resume Jailbreak Hack Prompts Discussion on AI Hiring Practices 

A Mastodon post surfaces a creative jailbreak trick using job resumes to bypass LLM content filters, showcasing continued vulnerabilities in model alignment. This post highlights the arms race between employers and applicants, implementing AI into the hiring / job search process in an effort to increase efficiency. This post showcases an modern update to the “white fonting” practice used by websites in the adolescence of the internet to increase relevance and ranking.

View the post on resume jailbreaks


Prompt Testing Toolkit: Promptfoo

Promptfoo is an open-source platform designed to secure AI applications from development to deployment. It offers tools for adaptive red teaming, guardrails, model security, and evaluations, helping developers identify and mitigate vulnerabilities such as prompt injections, data leaks, and unauthorized content generation. Trusted by over 75,000 users, and available locally, on premises, or through the cloud, Promptfoo emphasizes a security-first, developer-friendly approach to AI application development.

Try Promptfoo yourself.


A Universal Bypass Method On All LLMs

HiddenLayer’s recent research reveals a universal prompt injection technique capable of bypassing safety guardrails across major large language models, including GPT-4, Claude, Gemini, and others. This vulnerability poses significant risks to AI systems, highlighting the need for robust security measures in AI development and deployment.

Explore the article detailing the bypass method.


Anthropic Launches AI For Science Initiative

Anthropic has introduced its AI for Science program, aiming to accelerate scientific research by providing up to $20 000 in free API credits to researchers working on high-impact projects, particularly in biology and life sciences. The program seeks to leverage advanced AI reasoning and language capabilities to help researchers analyze complex data, generate hypotheses, and communicate findings more effectively. However, with this grant comes numerous restrictions including allowing Anthropic to publish your likeness for promotional reasons in perpetuity.

View the programs introduction article.

Read through the programs restrictions.


X User Shares Biology Specialized LLM For Data Analysis 

Sam Rodriques announces the launch of Finch, a new AI agent designed to automate data-driven discovery in biology. Now in closed beta, Finch demonstrates early but impressive performance akin to a strong first-year graduate student. It independently replicates key results from the 2020 MetMap paper, such as links between ADAM28 deletions and breast cancer brain metastases, and uncovers novel associations not reported in the original study, including links to EFNA5 and PTCH1 amplifications. Finch operates with fully open-ended prompts and, despite occasional errors, uncovers meaningful scientific insights at high speed.

View the original post and video demonstration.


AllenAI Releases Lightweight OLMo-2 Model

pAllen Institute for AI has released OLMo 2 1B, the smallest model in the OLMo 2 family, designed to enable the science of language models. Pre-trained on extensive datasets, OLMo 2 1B offers researchers access to all code, checkpoints, logs, and training details, facilitating transparency and reproducibility in AI research.

Try out the demo.


SmolAgents’ Computer Agent: Demonstrating AI Task Automation

SmolAgents has developed a Computer Agent showcased on Hugging Face Spaces, demonstrating AI’s capability to perform tasks autonomously. This project illustrates the potential of AI agents in automating complex tasks, contributing to advancements in AI-driven automation. Within the Hugging Face demo, users can ask the AI to perform tasks on a mock desktop computer such as browsing Wikipedia or navigating using Google Maps.

Try out the demo.


Assessing ChatGPT’s Carbon Footprint

An analysis by Sustainability by Numbers indicates that the carbon footprint of using ChatGPT is minimal for individual users. A single ChatGPT query consumes approximately 3 Wh of electricity, equating to a negligible percentage of daily per capita electricity use. The study suggests that, for most users, the environmental impact of using ChatGPT is insignificant.

Read about ChatGPTs personal impact here.


Big Tech’s Data Centers Strain Water Resources in Arid Regions

A joint investigation by The Guardian and SourceMaterial reveals that tech giants like Amazon, Microsoft, and Google are building data centers in some of the world’s driest areas, exacerbating local water scarcity. These data centers require substantial water for cooling, raising concerns about the sustainability of such operations in water-stressed regions.

Explore how data centers are drying out water scarce communities.


Real-World Applications of Generative AI in 2025

Harvard Business Review examines how individuals and businesses are utilizing generative AI in 2025, highlighting its integration into various sectors for tasks such as content creation, data analysis, and customer service. The article discusses the evolving relationship between users and AI, emphasizing the importance of understanding AI’s capabilities and limitations.

View the GenAI usage statistics.


OpenAI On Sycophancy

OpenAI’s recent post expands on its efforts to reduce sycophancy in AI models, where models overly agree with user viewpoints regardless of truth. The research identifies sycophancy as a persistent issue, especially in instruction-tuned models, and explores why it emerges—tracing it to reinforcement learning processes that favor agreement with user inputs. OpenAI tested model behaviour across varied questions and contexts, finding a consistent pattern of tailoring answers to align with user beliefs. The company is developing new training methods that encourage models to prioritize factual accuracy over user agreement.

Read OpenAI’s deep dive on sycophancy.



Tool of The Week: ComfyUI Diffusion Tool

A picture of the side profiles of two humanoid clouds looking at each other with a blue sky background
A picture of the side profiles of two humanoid clouds looking at each other with a blue sky background

What is ComfyUI?

ComfyUI is a powerful, node-based graphical interface designed to build and execute workflows for Stable Diffusion models. It presents a visual programming environment where users can connect modular components to define image generation processes. The interface is optimized for flexibility, transparency, and customization, making it accessible for both beginners and experienced users. Available through open-source, ComfyUI allows users direct access to their software.

How is it used?

Users interact with ComfyUI by dragging and connecting nodes that represent different parts of an image generation pipeline—such as model selection, sampling, conditioning, and output. These workflows can be edited visually or scripted directly using JSON, allowing for reproducible and shareable configurations. It also supports features like batch processing, automation, and the integration of custom nodes.

What is it used for?

ComfyUI is primarily used for creating high-quality, AI-generated images using Stable Diffusion. It serves artists, developers, and researchers who want fine-grained control over their generation workflows, enabling experimentation with different models and techniques. The platform is also used in production settings for visual content creation and AI art generation.

Discover more about ComfyUI.

Download and try ComfyUI yourself.


Questions and Answers

Each studio ends with a question and answer session whereby attendees can ask questions of the pedagogy experts and technologists who facilitate the sessions. We have published a full FAQ section on this site. If you have other questions about GenAI usage, please get in touch.

  • Assessment Design using Generative AI

    Generative AI is reshaping assessment design, requiring faculty to adapt assignments to maintain academic integrity. The GENAI Assessment Scale guides AI use in coursework, from study aids to full collaboration, helping educators create assessments that balance AI integration with skill development, fostering critical thinking and fairness in learning.

    See the Full Answer

  • How can I use GenAI in my course?

    In education, the integration of GenAI offers a multitude of applications within your courses. Presented is a detailed table categorizing various use cases, outlining the specific roles they play, their pedagogical benefits, and potential risks associated with their implementation. A Complete Breakdown of each use case and the original image can be found here. At […]

    See the Full Answer

This website is licensed under a Creative Commons Attribution-NonCommercial 4.0 International Public License.

Centre for Teaching, Learning and Technology
Irving K. Barber Learning Centre
214 – 1961 East Mall
Vancouver, BC Canada V6T 1Z1
Tel 604 827 0360
Fax 604 822 9826
Website ai.ctlt.ubc.ca
Email ctlt.info@ubc.ca
Find us on
    
Back to top
The University of British Columbia
  • Emergency Procedures |
  • Terms of Use |
  • Copyright |
  • Accessibility