GenAI Studio: News, Tools, and Teaching & Learning FAQs
These sixty-minute weekly sessions – facilitated by technologists and pedagogy experts from the CTLT – are designed for faculty and staff at UBC who are using, or thinking about using, Generative AI tools as part of their teaching, research, or daily work. Each week we discuss the news of the week, highlight a specific tool for use within teaching and learning, and then hold a question-and-answer session for attendees.
They run on Zoom every Wednesday from 1 pm to 2 pm, and you can register for upcoming events on the CTLT Events website.
News of the Week
Each week we discuss several news items from the Generative AI space over the past seven days. There is usually a flood of AI-adjacent news every week – the industry is moving fast – so we highlight the articles most relevant to the UBC community.
Here’s this week’s news:
- Frame, developed by Brilliant Labs, is an innovative, open-source eyewear integrated with AI technology, enabling visual analysis of the surrounding environment and live web search functionality. It features Whisper for language translation and is compatible with the Noa app, enhancing user interaction with the world through advanced learning and navigation tools.
- OpenAI’s Sora is an advanced AI model designed to create videos from text descriptions, capable of rendering complex scenes and characters while maintaining high visual fidelity. Despite its proficiency in visual generation, Sora sometimes faces challenges with physical simulation accuracy and spatial detail consistency, highlighting the ongoing development and refinement in AI-driven content creation.
- The Gemini 1.5 model, developed by Google DeepMind, features a long context window capable of processing up to 1 million tokens, significantly enhancing its ability to handle and recall extensive information. This advancement allows for more complex interactions with the AI, such as analyzing large codebases or processing extended multimedia content, marking a notable progress in AI’s capacity for detailed and prolonged data analysis.
- Groq has developed an innovative AI chip, the Language Processing Unit (LPU), capable of running language models at speeds of up to 500 tokens per second – far faster than models such as Gemini Pro and GPT-3.5 typically run on conventional hardware. The chip's tensor streaming architecture offers improved efficiency and accuracy, particularly beneficial for real-time AI applications.
- BASE TTS, developed by an Amazon team led by Mateusz Łajszczak, is a pioneering text-to-speech model trained on 100K hours of speech data, featuring a billion-parameter Transformer that converts text into speech. It sets a new standard in speech naturalness, using a novel tokenization technique and showing emergent abilities on complex sentences.
- OpenAI’s ChatGPT briefly exhibited AI “hallucinations,” generating nonsensical responses and erratic behavior, as reported by The Verge. This incident underscores the inherent unpredictability and complexity of large language models, despite rapid advancements in AI technology.
- Google has introduced Gemma 2B and 7B, smaller open-source AI models designed for English language tasks, as a part of its AI development strategy distinct from the larger, closed AI model Gemini. Reported by The Verge, these models, while less complex, are noted for their speed and cost-effectiveness, capable of running on standard developer hardware and surpassing larger models in key benchmarks.
- Ollama is now available on Windows in preview, making it possible to pull, run and create large language models in a new native Windows experience. Ollama on Windows includes built-in GPU acceleration, access to the full model library, and serves the Ollama API including OpenAI compatibility.
- The French AI startup Mistral is preparing to launch its next language model, “Mistral Next,” which is now available for testing in direct chat mode in the Chatbot Arena. This upcoming model from Mistral AI, founded by researchers from DeepMind and Meta, is anticipated to be their largest and most capable yet, potentially rivaling GPT-4. Mistral AI has already made significant strides in the open-source LLM scene with its Mixtral 8x7B model, known for its efficiency and performance.
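The Ollama item above notes that the Windows preview serves an OpenAI-compatible API. As a minimal sketch – assuming Ollama's default local address (`http://localhost:11434/v1`) and a locally pulled model named `llama2`, both of which depend on your own install – a chat request can be built in the standard OpenAI chat-completions shape:

```python
import json

# Assumed default base URL for a local Ollama install's OpenAI-compatible API.
OLLAMA_BASE_URL = "http://localhost:11434/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build a chat-completion payload in the OpenAI-compatible shape Ollama serves."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# "llama2" is a placeholder; substitute any model you have pulled with `ollama pull`.
payload = build_chat_request("llama2", "Summarize this week's AI news in one sentence.")
print(json.dumps(payload, indent=2))
```

Sending this payload requires a running Ollama server; any HTTP client (or the official OpenAI Python library pointed at `OLLAMA_BASE_URL`) can POST it to the `/chat/completions` endpoint.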
Tool of the Week
Each week we demonstrate a Generative AI tool that can be used within teaching and learning. The GenAI space is evolving rapidly, and as such we demo new tools or new ways people use those tools.
As a reminder, not all tools we showcase have successfully been through the Privacy Impact Assessment (PIA) process at UBC.
This week’s Tool of the Week: Chatbot Arena / LMSys
Chatbot Arena is an open-source research project from members of LMSYS and UC Berkeley SkyLab that lets you use and compare various open-source Large Language Models side by side, so you can see how each model's responses differ.
Questions and Answers
Each studio ends with a question-and-answer session in which attendees can put questions to the pedagogy experts and technologists who facilitate the sessions. We have published a full FAQ section on this site. If you have other questions about GenAI usage, please get in touch.
What to know about working with GenAI at UBC in September 2024
The LT Hub has compiled some common questions and useful resources for working with GenAI at UBC. Although the landscape is rapidly evolving, this summary gives a snapshot of what to know as we begin the 2024/25 academic year.
Accommodations: What about in-class assessments?
Because there are currently no reliable ways to tell when students are using Generative AI in their at-home or computer-based work, some instructors are considering moving that work into class time. If CFA (Centre for Accessibility) support is desired for minor assessments, consider arranging for the student to […]