On-Device Usage: Ollama
Ollama is a tool for running large language models (LLMs) locally on your own device, available for macOS, Linux, and Windows. Instructors at UBC may run this tool for on-device use.
This tool evaluation covers Ollama version 0.3.6, available in August 2024.

This tool evaluation is provided for instructors at UBC for use on their own devices. It is intended to help you make a more informed decision about what generative AI software is available if you wish to interact with large language models on your own device rather than through online third-party tools. Using these tools on your own device can be useful if, for example, you wish to interact with a large language model using personal information or intellectual property, something you are unable to do with third-party online tools unless that tool has specifically been through a Privacy Impact Assessment (PIA).
Without a PIA, instructors cannot require that students use the tool or service without providing alternatives that do not require the use of students' private information.
Tool Evaluation
| Criterion | What it means | Evaluation |
| --- | --- | --- |
| Tool Name | | Ollama |
| Closed vs. Open Source | Whether the tool's source code is publicly available (open source) or not (closed source) | Open source. Repository: https://github.com/ollama/ollama (85,000 stars) |
| Solution Provider | The organization or individual that developed and maintains the tool | Ollama, a Canadian-based startup |
| Regionality of Data | Where the data processed by the tool is stored and handled: locally on the user's device, in the cloud, or both | Locally processed |
| App Analytics & Anonymous Sharing of Prompts | Whether the tool collects usage data and/or allows users to share their prompts anonymously | None appear to be present, or users are not provided a choice |
| Prompt Retention & Encryption | How long the tool retains the prompts users input, and whether those prompts are stored encrypted | The previous 100 lines of prompts are stored, unencrypted; this can be disabled (see Usage Guidelines below) |
| Administrative Capabilities / Enterprise Offering | Whether the tool offers features for managing its use in a large organization, such as user management and usage reporting | None |
| Terms of Use | The legal agreement users must accept, such as the license type and any major restrictions or obligations | MIT License (full use; limitations on liability) |
| OS Support & Installation Method | Which operating systems the tool runs on, and how it is installed | Download from the website/GitHub for Windows, macOS, and Linux (an email address can optionally be provided for update notifications; no clickthrough agreement required) |
| In-App Retrieval Augmented Generation (RAG) Capabilities | Whether the tool can use local files to provide more contextually relevant responses | None |
| Third-Party Integrations Supported | Whether the tool can be connected to other services or platforms | Not directly |
Usage Guidelines
Ollama is a command-line interface (CLI) tool that lets you download large language models and interact with them locally. As it is a CLI tool, you must be familiar – and comfortable – with the command line.
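As a sketch of a typical session, the commands below follow the Ollama CLI as documented in its README; the model name `llama3.1` is only an example, and the final check is written so it runs safely whether or not Ollama is installed:

```shell
# Common Ollama CLI commands (per the Ollama README). The model name
# llama3.1 is an example; any model from the Ollama library can be used.
#
#   ollama pull llama3.1   # download a model
#   ollama run llama3.1    # start an interactive chat session
#   ollama rm llama3.1     # delete a downloaded model to free disk space

# A safe, non-interactive check of your local setup:
if command -v ollama >/dev/null 2>&1; then
  ollama list || echo "ollama is installed but the server is not running"
else
  echo "ollama is not installed or not on PATH"
fi
```

Running `ollama run <model>` the first time will also download that model, so `pull` is optional if you are happy to wait at first launch.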
Note: by default, Ollama saves the last 100 lines of text that you input into the application as an unencrypted text file on your computer. This means that if you enter any personal information or intellectual property as a prompt, it will still be present on your computer after you end your Ollama session. We encourage you to remove the file stored at $HOME/.ollama/history each time you close the application. You can also prevent this file from being created by setting the OLLAMA_NOHISTORY environment variable when you run Ollama, for example
OLLAMA_NOHISTORY=1 ollama run llama3.1
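The cleanup described above can be scripted. This is a minimal sketch using the history file path noted in this document; adjust the path if your installation differs:

```shell
# Delete the unencrypted prompt history after a session (the path is the
# one noted above; adjust if your Ollama configuration differs).
rm -f "$HOME/.ollama/history"

# Or set OLLAMA_NOHISTORY for the whole shell session so the history file
# is never written in the first place:
export OLLAMA_NOHISTORY=1
```

Exporting the variable applies to every `ollama run` invocation in that shell session, which is easier to remember than prefixing each command.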