GenAI Studio: News, Tools, and Teaching & Learning FAQs
These sixty-minute weekly sessions – facilitated by technologists and pedagogy experts from the CTLT – are designed for faculty and staff at UBC who are using, or thinking about using, Generative AI tools as part of their teaching, research, or daily work. Each week we discuss the news of the week, highlight a specific tool for use within teaching and learning, and then hold a question and answer session for attendees.
They run on Zoom every Wednesday from 1pm – 2pm, and you can register for upcoming sessions on the CTLT Events website.
News of the Week
Each week we discuss several news items from the Generative AI space over the past 7 days. There's usually a flood of AI-adjacent news every week – the industry is moving fast – so we highlight the articles most relevant to the UBC community.
Here’s this week’s news:
- Microsoft Research has developed Orca-Math, a 7-billion parameter Small Language Model (SLM), by fine-tuning the Mistral 7B model. This tool, aimed at enhancing math education, uses a synthetic dataset and iterative learning to improve problem-solving capabilities, achieving an 86.81% accuracy rate on the GSM8K benchmark.
- The European Parliament has approved the Artificial Intelligence Act, landmark legislation regulating AI systems in the EU. The Act categorizes AI systems into four risk levels and introduces stringent obligations for high-risk applications – including transparency, risk assessment, and human oversight – alongside bans on certain uses such as untargeted facial-recognition scraping and emotion recognition. Effective from May 2025, the Act has received mixed reactions: some laud its potential to build public trust, while others criticize it for prioritizing industry interests over human rights.
- Pace of Innovation in AI: Ethics and Challenges – The rapid advancement in AI, exemplified by new releases like Anthropic’s Claude 3 and Stability AI’s Stable Diffusion 3, raises questions about whether ethical safeguards can keep pace with technological progress. Major organizations are actively addressing these challenges: Google has paused certain AI functionalities over concerns about bias, and Bosch emphasizes responsible AI practices in its ethical guidelines.
- A significant advancement in matrix multiplication, essential for AI applications, has been made by computer scientists. By addressing inefficiencies in existing methods, the new approach substantially increases the speed of multiplying large matrices, a core component of AI models. This could lead to faster, more efficient AI systems, impacting various fields from AI development to environmental sustainability due to reduced computational power needs.
- Cognition Labs has introduced Devin, the first fully autonomous AI software engineer, capable of executing complex engineering tasks, learning over time, and collaborating with human teammates. With advances in long-term reasoning and planning, Devin sets a new benchmark in the SWE-bench coding benchmark, outperforming previous models significantly. Devin’s abilities include building and deploying apps, debugging, and fine-tuning AI models, demonstrating a significant leap in applied AI for software engineering.
Tool of the Week
Each week we demonstrate a Generative AI tool that can be used within teaching and learning. The GenAI space is evolving rapidly, so we demo new tools as well as new ways of using existing ones.
As a reminder, not all of the tools we showcase have successfully been through the Privacy Impact Assessment (PIA) process at UBC.
This week’s Tool of the Week: Ollama
Ollama is a tool that enables users to run Large Language Models (LLMs) locally on their own devices, offering several advantages over traditional cloud-based solutions. One of the primary benefits of using Ollama is enhanced data privacy and security. By processing data locally, users can prevent sensitive information from being transmitted over the internet to remote servers, thus safeguarding their data from potential breaches or unauthorized access.
Another advantage of local LLM execution with Ollama is the potential for reduced latency and faster response times. Since processing occurs directly on the user’s device, the delays associated with data transmission to and from the cloud are eliminated. This improved performance is particularly valuable for real-time applications or in environments where internet connectivity is unreliable or limited.
In addition to these practical benefits, Ollama also democratizes access to advanced AI capabilities. By making it possible for users to run and customize LLMs locally, Ollama opens the door for a wider range of developers, researchers, and hobbyists to experiment with and develop innovative applications. This approach encourages a more diverse range of AI-driven innovations, unconstrained by the computational limitations or usage policies of cloud service providers.
However, it is important to note that running LLMs locally with Ollama might present some challenges. The tool’s performance relies heavily on the user’s hardware capabilities, which could limit the complexity or size of the models that can be efficiently executed on less powerful machines. Despite this limitation, Ollama represents a significant step towards making sophisticated AI tools more accessible and user-centric, addressing concerns related to data privacy and internet connectivity while fostering continued innovation in the rapidly evolving field of AI.
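To make the local-execution idea concrete: once Ollama is installed and serving, it exposes a REST API on the user's own machine (port 11434 by default), and a model such as Mistral can be queried without any data leaving the device. The sketch below is our own illustration of calling that API from Python – the helper names are hypothetical, and it assumes `ollama serve` is running and the model has already been pulled (e.g. with `ollama pull mistral`).

```python
import json
import urllib.request

# Ollama serves a local REST API on this port by default.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False requests a single JSON response rather than a
    stream of partial tokens.
    """
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama model and return its reply.

    All processing happens on the local machine; nothing is sent
    to a remote server.
    """
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example usage (requires a running Ollama server):
#   print(generate("mistral", "Summarize this paragraph for a first-year student."))
```

Because the endpoint is just local HTTP, the same call works from any language or notebook environment, which is part of what makes locally hosted models easy to experiment with in teaching contexts.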
Questions and Answers
Each studio ends with a question and answer session where attendees can ask questions of the pedagogy experts and technologists who facilitate the sessions. We have published a full FAQ section on this site. If you have other questions about GenAI usage, please get in touch.
- Anything LLM – an AI tool that ensures privacy and flexibility, enabling users to run any large language model (LLM) with any document locally on their device, without needing internet connectivity.
- Backyard AI – immersive text adventures and AI-powered chat with customizable characters, enabling interactive stories through a desktop app that supports offline use.