GenAI Studio: News, Tools, and Teaching & Learning FAQs
These sixty-minute weekly sessions – facilitated by technologists and pedagogy experts from the CTLT – are designed for faculty and staff at UBC who are using, or thinking about using, Generative AI tools as part of their teaching, research, or daily work. Each week we discuss the news of the week, highlight a specific tool for use within teaching and learning, and then hold a question-and-answer session for attendees.
They run on Zoom every Wednesday from 1pm – 2pm and you can register for upcoming events on the CTLT Events Website.
News of the Week
Each week we discuss several news items from the Generative AI space over the past seven days. Because this industry is moving so fast, there is usually a flood of AI-adjacent news every week, so we highlight the stories most relevant to the UBC community.
In this week’s tech news, Anthropic unveiled its “Prompt Improver,” which lets users optimize their prompts in the Anthropic Console, while DeepSeek’s new “Deep Think” feature mimics the reasoning capability of OpenAI’s o1-preview model. Maastricht University showcased a Retrieval-Augmented Generation (RAG) virtual assistant to streamline academic support. Research into generative agents demonstrated their ability to simulate realistic human behavior, furthering AI’s applications in interactive environments. In education, Common Sense Education and OpenAI released a comprehensive guide for integrating ChatGPT into K-12 settings, though skepticism persists among educators. Google’s Gemini chatbot gained a memory feature allowing for personalized interactions, and Pennsylvania law enforcement grappled with deepfake abuses, sparking legal and ethical discussions on AI misuse. Together, these developments illustrate both the transformative potential and the challenges of AI across domains.
Here’s this week’s news:
Anthropic’s Prompt Improver Enhances AI Prompting
Anthropic has introduced the “Prompt Improver,” a tool aimed at refining prompt templates for their AI assistant, Claude. It provides a structured methodology involving example identification, XML-tagged templates, enhanced reasoning instructions, and upgraded examples. This tool is designed to optimize AI performance, particularly for tasks requiring complex and detailed reasoning.
See Anthropic’s X (formerly Twitter) post and Anthropic’s user guide.
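To make the “XML-tagged templates” idea concrete, here is a minimal sketch of what such a prompt template can look like. The tag names, helper function, and example task are illustrative assumptions, not Anthropic’s exact output format:

```python
# Hypothetical sketch of an XML-tagged prompt template of the kind a prompt
# improver might produce: task description, worked examples, and explicit
# reasoning instructions, each delimited by tags. Tag names are illustrative.
def build_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    example_blocks = "\n".join(
        f"<example>\n<input>{inp}</input>\n<output>{out}</output>\n</example>"
        for inp, out in examples
    )
    return (
        f"<task>{task}</task>\n"
        f"<examples>\n{example_blocks}\n</examples>\n"
        "<instructions>Think step by step inside <reasoning> tags, "
        "then give the final answer inside <answer> tags.</instructions>\n"
        f"<question>{query}</question>"
    )

prompt = build_prompt(
    "Classify the sentiment of a sentence as positive or negative.",
    [("I loved this course!", "positive"), ("The lecture was dull.", "negative")],
    "The workshop exceeded my expectations.",
)
print(prompt)
```

Delimiting each part of the prompt this way makes it easier for the model to separate instructions from data, which is the core idea behind the structured methodology described above.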
DeepSeek Unveils DeepSeek-R1-Lite-Preview Model
DeepSeek has introduced the R1-Lite-Preview, a reasoning-focused large language model (LLM) that demonstrates performance matching OpenAI’s o1-preview model. Utilizing “chain-of-thought” reasoning, it transparently outlines its problem-solving steps, enhancing accuracy in complex tasks. Notably, R1-Lite-Preview excels in benchmarks like the American Invitational Mathematics Examination (AIME) and MATH, showcasing its advanced reasoning capabilities. Currently, it is accessible through DeepSeek Chat, with plans for broader availability.
Enhancing Academic Support with AI: A Case Study from Maastricht University
Researchers at Maastricht University have developed a virtual assistant utilizing Retrieval-Augmented Generation (RAG) to assist students with academic regulations. This AI-driven tool integrates up-to-date, domain-specific information to provide accurate and contextually relevant responses, addressing challenges such as information overload and the need for precise guidance in academic settings.
Read the full paper and check out the GitHub repository.
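The RAG pattern described above can be sketched with a toy example: retrieve the regulation snippets most relevant to a question, then assemble a prompt that grounds the model in that context. Everything here is an illustrative assumption, not Maastricht’s implementation; a real system would use vector embeddings and an actual LLM call, both omitted:

```python
import re

# Toy document store of academic-regulation snippets (invented examples).
documents = [
    "Students must register for exams at least two weeks in advance.",
    "Resit examinations are scheduled in the last week of each term.",
    "Plagiarism cases are referred to the Board of Examiners.",
]

def tokens(text: str) -> set[str]:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; a real RAG system
    would rank by embedding similarity instead."""
    q = tokens(query)
    return sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

def build_rag_prompt(query: str, docs: list[str]) -> str:
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_rag_prompt("When do I register for exams?", documents)
print(prompt)
```

Because the answer is drawn from retrieved, up-to-date documents rather than the model’s training data, this pattern addresses exactly the accuracy and currency challenges the Maastricht case study describes.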
Simulating Human Behavior with Generative Agents
Last year, researchers introduced “Generative Agents,” computational entities designed to mimic human behavior in interactive applications. These agents perform daily activities, form opinions, and engage in conversations, using a large language model to record experiences, synthesize memories, and plan actions. In a simulated environment inspired by “The Sims,” the agents demonstrated believable individual and social behaviors, such as autonomously organizing and attending a Valentine’s Day party. Recently, a follow-up paper presented results from agents built on two-hour interview transcripts from more than 1,000 human participants. This work highlights the potential of combining large language models with interactive agents to create realistic simulations of human behavior.
Read the original paper and the follow-up paper.
OpenAI Releases Teacher’s Guide to ChatGPT Amid Educator Skepticism
OpenAI has introduced a comprehensive guide, “ChatGPT: K-12 Foundations,” to assist educators in effectively integrating ChatGPT into their teaching practices. The resource provides an overview of ChatGPT’s capabilities, explores its potential applications in the classroom, and addresses ethical considerations and best practices for its use in K-12 education. However, some educators express concerns about potential overreliance on AI, the accuracy of generated content, and the ethical implications of its use in educational settings. This development highlights the ongoing dialogue about the role of AI in education and the need for balanced, informed approaches to its adoption.
Check out the course here and read the full article here.
Google’s Gemini Chatbot Introduces Memory Feature
Google has enhanced its Gemini chatbot with a memory function, enabling it to retain users’ interests and preferences for more personalized interactions. This feature allows users to share details about their work, hobbies, or aspirations, which Gemini utilizes to tailor its responses accordingly. Users can manage this information through the “Saved Info” page, where they can view, edit, or delete data and see when it’s used. Currently, this feature is available to Gemini Advanced subscribers in English.
AI-Generated Deepfake Scandal Leads to Resignations at Pennsylvania School
Lancaster Country Day School in Pennsylvania is facing turmoil after the discovery of AI-generated images depicting female students’ faces on nude bodies. The incident has led to the resignation of school leaders and the expulsion of a juvenile suspect, whose phone was confiscated by authorities. This case underscores the growing misuse of AI in creating illicit content, prompting intensified efforts by U.S. law enforcement to combat such abuses. A new Pennsylvania law criminalizing the creation and dissemination of AI-generated child sexual abuse material is set to take effect soon.
Tool of the Week
Tool of the Week: Undermind
What is Undermind?
Undermind is an AI-powered research assistant designed to assist with locating and analyzing scientific literature. By using advanced language models, Undermind helps users quickly find precise and relevant information from vast amounts of academic papers, making complex queries more manageable.
How is it used?
Researchers interact with Undermind much like they would with a colleague. Users can describe their research questions or topics in natural language, and the AI processes this input to identify, analyze, and summarize key insights from hundreds of papers. Undermind also provides detailed reports with explanations, saving users from manually sifting through countless articles.
What is it used for?
Undermind is designed to save time and enhance productivity for researchers, academics, and professionals working with scientific literature. It simplifies the process of finding relevant studies, identifying key findings, and gaining a deeper understanding of complex topics. With Undermind, researchers can focus more on applying knowledge rather than searching for it.
For additional information, explore Undermind.
Note that without a Privacy Impact Assessment (PIA), instructors cannot require students to use a tool or service unless they provide alternatives that do not require the use of students’ private information.
Questions and Answers
Each studio ends with a question-and-answer session in which attendees can ask questions of the pedagogy experts and technologists who facilitate the sessions. We have published a full FAQ section on this site. If you have other questions about GenAI usage, please get in touch.
Assessment Design using Generative AI
Generative AI is reshaping assessment design, requiring faculty to adapt assignments to maintain academic integrity. The GenAI Assessment Scale guides AI use in coursework, from study aid to full collaboration. It helps educators create assessments that balance AI integration with skill development, fostering critical thinking and fairness in learning.
How can I use GenAI in my course?
In education, GenAI offers a multitude of applications within your courses. Below is a detailed table categorizing various use cases, outlining the specific roles they play, their pedagogical benefits, and the potential risks associated with their implementation. A complete breakdown of each use case and the original image can be found here. At […]