Glossary of GenAI Terms
Like most new, popular, large-scale technologies, Generative AI comes with a growing number of terms and acronyms. Below we list common words and phrases associated with Generative AI and provide succinct definitions.
A
Agents
An agent is a computer program or system that is designed to perceive its environment, make decisions and take actions to achieve a specific goal or set of goals. The agent operates autonomously, meaning it is not directly controlled by a human operator.
AGI or Artificial General Intelligence
Artificial General Intelligence (AGI) represents a level of AI development where machines possess the ability to understand, learn, and apply intelligence across a broad range of tasks, mimicking the cognitive abilities of a human being. Unlike most current AI systems, which are designed for specific tasks (narrow AI), AGI can theoretically perform any intellectual task that a human can. It encompasses a wide array of cognitive skills, including reasoning, problem-solving, perception, language understanding, and general knowledge application.
Annotation
Annotation is the process of labelling or tagging data, which is then used to train and fine-tune AI models. This data can be in various forms, such as text, images, or audio. In text-based generative AI, annotation might involve categorizing sentences, identifying parts of speech, or marking sentiment in text snippets. These annotated datasets become the foundational building blocks that enable the AI to learn and understand patterns, contexts, and nuances of the data it is meant to generate or interpret.
See an example of a UBC project that uses AI to annotate data
ASI or Artificial Super Intelligence
Artificial Super Intelligence (ASI) refers to a stage of artificial intelligence that surpasses human intelligence across all fields, including creativity, general wisdom, and problem-solving capabilities. Unlike Artificial General Intelligence (AGI), which aims to match human cognitive abilities, ASI represents an AI that is vastly more advanced than the best human brains in practically every field, including scientific understanding, general knowledge, and social skills.
B
Bias (in Gen AI)
There are two distinct ways of using the term ‘bias’ with regard to Generative AI.
Firstly, and perhaps more commonly, bias refers to a systemic skew or prejudice in the AI model’s output, often reflecting inherent or learned prejudices in the data it was trained on. Bias in AI can manifest in various forms, such as cultural, gender, racial, political, or socioeconomic biases. These biases can lead to AI systems making decisions or generating content that is unfair, stereotypical, or discriminatory.
Secondly, in the technical construction of AI models, particularly neural networks, bias refers to a parameter that is used alongside “weights” to influence the output of a node in the network. While weights determine how much influence an input will have on a node, biases allow for an adjustment to the output independently of its inputs. The bias parameter is essential in tuning a model‘s behaviour, as it provides the flexibility needed for the model to accurately represent complex patterns in the data. Without biases, a neural network might be significantly less capable of fitting diverse and nuanced datasets, limiting its effectiveness and accuracy.
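To make the second, technical sense concrete, here is a minimal sketch of how a weight and a bias combine in a single artificial neuron. All numbers are invented for illustration, and a simple step activation stands in for the functions real networks use.

```python
def neuron(inputs, weights, bias):
    """Weighted sum of inputs, shifted by the bias, then a step
    activation: the neuron 'fires' (1) if the sum is positive, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Same inputs and weights; only the bias differs.
inputs = [0.5, 0.2]
weights = [0.4, 0.6]

print(neuron(inputs, weights, bias=0.0))   # weighted sum is 0.32, so it fires
print(neuron(inputs, weights, bias=-0.5))  # sum shifted to -0.18, so it does not
```

Note how the bias shifts the output independently of the inputs, which is exactly the flexibility described above.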
Bot
In the context of Generative AI, a ‘bot’ (short for robot) typically refers to a software application that is programmed to perform automated tasks. These tasks can range from simple, repetitive activities to more complex functions involving decision-making and interactions with human users. They are often equipped with advanced capabilities such as understanding and generating language, responding to user queries, or creating content based on specific guidelines or prompts.
Certain tools, such as ChatGPT or Poe, allow you to create your own ‘bots’ (called GPTs in ChatGPT).
C
Chat Bot
A chatbot is a software application designed to simulate conversation with human users, especially over the internet. It utilizes techniques from the field of natural language processing (NLP) and sometimes machine learning (ML) to understand and respond to user queries. Chatbots can range from simple, rule-based systems that respond to specific keywords or phrases with pre-defined responses, to more sophisticated AI-driven bots capable of handling complex, nuanced, and context-dependent conversations.
ChatGPT
ChatGPT is an AI language model tool, designed to process and generate human-like text, assisting with diverse academic tasks and communication. As of December 2023, there are two models: a free version, referred to as GPT-3.5, and a paid version, referred to as GPT-4.
Neither version of ChatGPT has passed a Privacy Impact Assessment at UBC and as such cannot be required for use within a course.
Read our brief Intro to ChatGPT
See our list of Gen AI Tools and their use-cases.
Completions
Completions are the output produced by AI in response to a given input or prompt. When a user inputs a prompt, the AI model processes it and generates text that logically follows or completes the given input. These completions are based on the patterns, structures, and information the model has learned during its training phase on vast datasets.
Conversational or Chat AI
Conversational AI or Chat AI refers to the branch of artificial intelligence focused on enabling machines to understand, process, and respond to human language in a natural and conversational manner. This technology underpins chatbots and virtual assistants, which are designed to simulate human-like conversations with users, providing responses that are contextually relevant and coherent. Conversational AI combines elements of natural language processing (NLP), machine learning (ML), and sometimes speech recognition to interpret and engage in dialogue.
E
Embedding
Embeddings are the representation of words, phrases, or even entire documents as vectors in a high-dimensional space. These vectors capture the semantic and syntactic essence of the text, enabling AI to understand and process language in a more nuanced and meaningful way. Embeddings are generated through algorithms that analyze the context in which words appear and understand their relationships and usage patterns.
For example, in a vector space model, words with similar meanings or used in similar contexts are represented by vectors that are close to each other. This allows an AI to recognize synonyms, understand analogies, and grasp subtler aspects of language like sentiment or tone.
F
Few-shot learning
Few-shot learning is a concept in machine learning where the model is designed to learn and make accurate predictions or decisions based on a very limited amount of training data. Traditional machine learning models typically require large datasets to learn effectively. However, few-shot learning techniques enable AI models to generalize from a small number of examples, often just a handful or even a single instance. This approach is especially valuable in situations where collecting large datasets is impractical or impossible, such as specialized academic fields or rare languages.
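With modern LLMs, few-shot behaviour is often elicited simply by including a handful of worked examples in the prompt (sometimes called in-context learning). A sketch of such a prompt, with invented reviews, might look like this:

```python
# A few-shot prompt: the model sees a handful of labelled examples and
# is asked to complete the pattern for a new, unlabelled case.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The lectures were engaging and well organized."
Sentiment: Positive

Review: "The readings were dull and repetitive."
Sentiment: Negative

Review: "I learned a great deal from the labs."
Sentiment:"""

print(few_shot_prompt)
```

The two labelled examples are the "few shots"; the model is expected to generalize the pattern to the final review.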
Fine-tune
Fine-tuning is the process of taking a pre-trained AI model and further training it on a specific, often smaller, dataset to adapt it to particular tasks or requirements. This is relevant in scenarios where a general AI model, trained on varied datasets, needs to be specialized or optimized for specific applications.
A general language model could be fine-tuned with academic papers and texts from a specific discipline to better understand and generate text relevant to that field. This process involves adjusting the model’s parameters slightly so that it better aligns with the nuances and terminologies of the target domain while retaining the broad knowledge it gained during initial training.
Fine-tuning offers a balance between the extensive learning of a large, general model and the specific expertise required for particular tasks.
See also Tuning.
G
Generative AI
Generative AI refers to artificial intelligence systems that can generate new content—such as texts, images, audio, and video—in response to prompts by a user, after being trained on an earlier set of data. Platforms like Dall-E and NightCafe produce digital art and images that appear like photos, and tools like Synthesia allow users to generate videos from text, using AI avatars. Large Language Models (LLMs), on the other hand, generate text by predicting the next word based on patterns learned from vast amounts of data.
See our full description of Generative AI.
See our list of Gen AI Tools and their use-cases.
GPT or Generative Pre-Trained Transformers
Generative Pre-trained Transformers (GPT) are a type of advanced artificial intelligence model primarily used for natural language processing tasks. GPT models are based on the transformer architecture, which allows them to efficiently process and generate human-like text by learning from vast amounts of data. The “pre-trained” aspect refers to the initial extensive training these models undergo on large text corpora, allowing them to understand and predict language patterns. This pre-training equips the GPT models with a broad understanding of language, context, and aspects of world knowledge.
The generative aspect is important to remember: these tools are designed to generate human-like responses, unlike, for example, a Google search, which retrieves existing information.
H
Hallucinations
Besides ‘hallucinate’ being the Cambridge Dictionary’s Word of the Year for 2023, hallucinations are incorrect or misleading results that AI models generate. These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model. The concept of AI hallucinations underscores the need for critical evaluation and verification of AI-generated information, as relying solely on AI outputs without scrutiny could lead to the dissemination of misinformation or flawed analyses.
I
Inference
Inference is the process where a trained AI model applies its learned knowledge to new, unseen data to make predictions, decisions, or generate content. It is essentially the phase where the AI model, after being trained on a large dataset, is now being used in real-world applications. Unlike the training phase, where the model is learning from examples, during inference, the model is utilizing its learned patterns to perform the specific tasks it was designed for.
For example, a language model that has been trained on a vast corpus of text can perform inference by generating a new essay, answering a student’s query, or summarizing a research article.
L
Large Language Model
Large Language Models (LLMs) are artificial intelligence systems specifically designed to understand, generate, and interact with human language on a large scale. These models are trained on enormous datasets comprising a wide range of text sources, enabling them to grasp the nuances, complexities, and varied contexts of natural language. LLMs like GPT (Generative Pre-trained Transformer) use deep learning techniques, particularly transformer architectures, to process and predict text sequences, making them adept at tasks such as language translation, question-answering, content generation, and sentiment analysis.
M
Model
Models are the computational structure and algorithms that enable Generative AI to process data, learn patterns, and perform tasks such as generating text, images, or making decisions. Essentially, it is the core framework that embodies an AI’s learned knowledge and capabilities. A model in AI is created through a process called training, where it is fed large amounts of data and learns to recognize patterns, make predictions, or generate outputs based on that data.
Each model has its specific architecture (such as neural networks) and parameters, which define its abilities and limitations. The quality, diversity, and size of the data used in training also significantly influence a model’s effectiveness and reliability in practical applications.
N
NLP or Natural Language Processing
NLP is a field at the intersection of computer science, artificial intelligence, and linguistics, focused on enabling computers to understand, interpret, and generate human language in a way that is both meaningful and useful. It involves the development of algorithms and systems that can analyze, comprehend, and respond to text or voice data in a manner similar to how humans do.
P
Parameters
Parameters are the internal variables of an AI model that are learned from the training data. These parameters are the core components that define the behaviour of the model and determine how it processes input data to produce output. In a neural network, parameters typically include weights and biases associated with the neurons.
Each neuron in a neural network has a weight assigned to its input, which signifies the importance or influence of that input in the neuron’s overall calculation. The bias is an additional parameter that allows the neuron to adjust its output independently of its input. During the training process, the model adjusts these parameters to minimize the difference between its output and the actual data. The better these parameters are tuned, the more accurately the model can perform its intended task.
When looking at open-source models you may see names that include “7b” or “70b” (e.g. llama2-70b). This refers to the number of parameters, in this case 70 billion.
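A rough sense of where billion-scale counts come from: a single fully connected layer with n inputs and m output neurons has one weight per input-output pair plus one bias per output. A small sketch (the layer sizes are illustrative, not taken from any particular model):

```python
def layer_params(n_inputs, n_outputs):
    """Parameter count of one fully connected layer:
    one weight per input-output pair, plus one bias per output neuron."""
    return n_inputs * n_outputs + n_outputs

# A tiny layer: 2 inputs, 3 neurons -> 6 weights + 3 biases = 9 parameters.
print(layer_params(2, 3))

# A larger, transformer-scale layer: millions of parameters in one layer alone.
print(layer_params(768, 3072))
```

Stacking many such layers is how models reach billions of parameters.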
Prompt
A prompt is the input given to an AI model to initiate or guide its generation process. This input acts as a directive or a set of instructions that the AI uses to produce its output. Prompts are crucial in defining the nature, scope, and specificity of the output generated by the AI system. For instance, in a text-based Generative AI model like GPT (Generative Pre-trained Transformer), a prompt could be a sentence or a question that the model then completes or answers in a coherent and contextually appropriate manner.
View our Promptathon session from July 2023 on ways to improve your prompts.
Prompt Engineering
Prompt engineering in the context of Generative AI refers to the crafting of input prompts to effectively guide AI models, particularly those like Generative Pre-trained Transformers (GPT), in producing specific and desired outputs. This practice involves formulating and structuring prompts to leverage the AI’s understanding and capabilities, thereby optimizing the relevance, accuracy, and quality of the generated content.
See our resource on Prompt Engineering
R
Reinforcement Learning
Reinforcement Learning (RL) is a type of learning algorithm where an agent learns to make decisions by performing actions in an environment to achieve a certain goal. The learning process is guided by feedback in the form of rewards or punishments — positive reinforcement for desired actions and negative reinforcement for undesired actions. The agent learns to maximize its cumulative reward through trial and error, gradually improving its strategy or policy over time.
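The trial-and-error loop described above can be sketched with a toy two-action problem. The reward probabilities, exploration rate, and update rule below are all invented for illustration; real RL systems are far more elaborate.

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

true_rewards = [0.2, 0.8]   # hidden from the agent: action 1 pays off more often
estimates = [0.0, 0.0]      # the agent's learned value estimate for each action
counts = [0, 0]

for step in range(1000):
    if random.random() < 0.1:
        action = random.randrange(2)                      # explore occasionally
    else:
        action = 0 if estimates[0] >= estimates[1] else 1  # exploit best estimate
    # Reward of 1 with the action's hidden probability, else 0.
    reward = 1 if random.random() < true_rewards[action] else 0
    counts[action] += 1
    # Nudge the estimate toward the observed average reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # the agent has learned that action 1 pays off more
```

Through repeated trials and reward feedback alone, the agent's estimates converge toward the hidden reward rates.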
Retrieval Augmented Generation (RAG)
Retrieval Augmented Generation (RAG) is a technique that combines the strengths of both retrieval-based and generative models. In this approach, an AI system first retrieves information from a large dataset or knowledge base and then uses this retrieved data to generate a response or output. Essentially, the RAG model augments the generation process with additional context or information pulled from relevant sources.
For example, using a RAG pipeline, you might provide private research data (without exposing it to third-party tools) and then ask complex questions about it or request analyses of it. Normally, this data wouldn’t be available to a general model, but with a RAG pipeline you can supply custom, private data.
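The retrieve-then-generate flow can be sketched in miniature. Everything here is invented for illustration: the documents, the naive word-overlap "retriever", and the template standing in for the LLM call (real pipelines use embedding-based retrieval and an actual model).

```python
documents = [
    "The library is open from 8am to 10pm on weekdays.",
    "Course registration opens on June 15.",
    "The writing centre offers free tutoring sessions.",
]

def retrieve(question, docs):
    """Toy retriever: return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def generate(question, context):
    """Stand-in for the LLM call: an answer grounded in the retrieved context."""
    return f"Based on the retrieved context ('{context}'), here is an answer to: {question}"

question = "When does course registration open?"
context = retrieve(question, documents)
print(generate(question, context))
```

The key point is the two stages: the generator only sees the question plus whatever the retriever pulled in, which is how private or up-to-date data reaches the model.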
S
Semantic Network
A semantic network, or frame network, is a graphical representation of knowledge that interlinks concepts through their semantic relationships. In these networks, nodes represent concepts or entities, and the edges represent the relationships between these concepts, such as “is a type of,” “has a property of,” or “is part of.” This structure enables the representation of complex interrelationships and hierarchies within a given set of data or knowledge.
Semantic networks can enhance natural language processing capabilities by helping systems understand context and the relationships between different words or phrases.
T
Temperature
Temperature is a setting that controls the randomness of a model’s output. Lower values make the model more deterministic, strongly favouring the most likely next token; higher values flatten the probability distribution over candidate tokens, producing more varied, and sometimes less reliable, output. Many AI tools expose temperature as an adjustable setting.
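Temperature rescales a model’s raw scores before they are turned into next-token probabilities: dividing by a low temperature sharpens the distribution, while a high temperature flattens it. A minimal sketch, using invented scores for three candidate tokens:

```python
import math

def softmax_with_temperature(scores, temperature):
    """Turn raw model scores into probabilities; the temperature divides
    the scores first, sharpening (low T) or flattening (high T) the result."""
    scaled = [s / temperature for s in scores]
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5]  # made-up scores for three candidate tokens

print(softmax_with_temperature(scores, 0.5))  # low T: the top token dominates
print(softmax_with_temperature(scores, 2.0))  # high T: probabilities more even
```

The same scores thus yield near-deterministic output at low temperature and much more varied output at high temperature.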
Tokens
Tokens are the smallest units of data that an AI model processes. In natural language processing (NLP), tokens typically represent words, parts of words (like syllables or sub-words), or even individual characters, depending on the tokenization method used.
Tokenization is the process of converting text into these smaller, manageable units for the AI to analyze and understand.
Tools built on models such as ChatGPT often quote how much a query costs in terms of tokens.
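A deliberately naive illustration of tokenization: splitting on whitespace. Real tokenizers, such as the byte-pair-encoding tokenizers used by GPT models, split text into sub-word units rather than whole words, so actual token counts differ from word counts.

```python
def tokenize(text):
    """Toy word-level tokenizer: lowercase the text and split on whitespace."""
    return text.lower().split()

tokens = tokenize("Generative AI comes with a growing number of terms")
print(tokens)
print(len(tokens), "tokens")
```

Even this crude version shows the idea: the model never sees raw text, only a sequence of discrete units it can count and process.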
Training
Training is the process by which a machine learning model, such as a neural network, learns to perform a specific task. This is achieved by exposing the model to a large set of data, known as the training dataset, and allowing it to iteratively adjust its internal parameters to minimize errors in its output.
During training, the model makes predictions or generates outputs based on its current state. These outputs are then compared to the desired results, and the difference (or error) is used to adjust the model’s parameters. This process is repeated numerous times, with the model gradually improving its accuracy and ability to perform the task. For example, a language model is trained on vast amounts of text so that it learns to understand and generate human-like language.
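The predict-compare-adjust loop above can be shown with the smallest possible "model": a single parameter. The data and learning rate are invented for illustration; the hidden pattern is simply y = 2x.

```python
# Training data: inputs paired with desired outputs (here, y = 2x).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0               # the model's single parameter, before training
learning_rate = 0.05

for epoch in range(200):
    for x, y in data:
        prediction = w * x            # the model's current output
        error = prediction - y        # compare to the desired result
        w -= learning_rate * error * x  # adjust the parameter to reduce error

print(round(w, 3))  # close to 2.0, the pattern hidden in the data
```

Large language models follow the same loop, only with billions of parameters and text prediction in place of this one-number example.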
Transformer
A transformer is a type of architecture in deep learning, a subfield of artificial intelligence (AI). Transformers represent a departure from previous models, which processed data sequentially; instead, they use a mechanism known as ‘self-attention’ to process entire sequences of data (like the sentences in a paragraph) simultaneously. This approach allows transformers to capture complex relationships and dependencies in the data, regardless of their distance within the sequence.
Transformers have the ability to weigh the significance of different parts of the input data. This capacity for handling contextual relationships in language makes them effective for a variety of natural language processing tasks, including but not limited to translation, content generation, and text summarization.
The ‘T’ in ChatGPT (or, generally, GPTs) stands for Transformer.
Read the seminal paper from 2017 on Transformers, titled “Attention Is All You Need”
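The weighting step at the heart of self-attention can be sketched as scaled dot-product scores turned into a probability distribution. The token vectors below are toy numbers, and real transformers apply separate learned query, key, and value projections first.

```python
import math

def attention_weights(query, keys):
    """How much one token attends to each token in the sequence:
    scaled dot-product scores, normalized into probabilities."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Three toy token vectors; the first and third point in similar directions.
tokens = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]

weights = attention_weights(tokens[0], tokens)
print([round(w, 2) for w in weights])  # token 1 attends most to itself and token 3
```

Because every token computes such weights over the whole sequence at once, distant but related words can influence each other directly.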
Tuning
Tuning describes the process of adjusting a pre-trained model to better suit a specific task or set of data. This involves modifying the model’s parameters so that it can more effectively process, understand, and generate information relevant to a particular application. Tuning is different from the initial training phase, where a model learns from a large, diverse dataset. Instead, it focuses on refining the model’s capabilities based on a more targeted dataset or specific performance objectives.
See also Fine-tune.
Z
Zero-Shot Learning
Zero-shot learning describes an approach where an AI model performs tasks it has not explicitly been trained to do. Unlike traditional machine learning methods, which require examples from each class or category they’re expected to handle, zero-shot learning enables the model to generalize from its training and make inferences about new, unseen categories.
This is achieved by training the model to understand and relate abstract concepts or attributes that can be applied broadly. For instance, a model trained in zero-shot learning could categorize animals it has never seen before based on learned attributes like size, habitat, or diet. It infers knowledge about these new categories by relying on its understanding of the relationships and similarities between different concepts.