Setting Up Ollama on Linux and Windows: A Comprehensive Guide

Introduction
Ollama is a local large language model (LLM) runner designed for efficient on-device inference. It provides a simple command-line interface, plus a local HTTP API, for downloading and running open models such as Llama 2 entirely on your own hardware. In this guide, we walk through installing and setting up Ollama on both Linux and Windows, along with some initial usage examples.


1. Prerequisites

  • A computer with at least 8GB of RAM (16GB recommended for larger models)
  • Docker installed (optional; only needed for the container-based setup in section 5)
  • Administrator or root access
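
If you want to confirm these prerequisites before installing, two quick checks on Linux (the Docker check only matters if you plan to follow section 5):

free -h            # shows total and available RAM
docker --version   # confirms Docker is installed (optional)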

2. Installing Ollama on Linux

Step 1: Update Your System

sudo apt update && sudo apt upgrade -y
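
The command above assumes a Debian/Ubuntu-style system. On an RPM-based distribution such as Fedora, the equivalent would be, for example:

sudo dnf upgrade -y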

Step 2: Download Ollama
Ollama for Linux ships as an install script, available from the official Ollama website.

Step 3: Install Ollama
Download and run the install script in one step:

curl -fsSL https://ollama.com/install.sh | sh
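
On most systemd-based distributions the install script also sets Ollama up as a background service; exact behavior varies by version and distribution, so treat this as an optional sanity check:

systemctl status ollama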

Step 4: Verify Installation

ollama --version

Step 5: Run Your First Model

ollama run llama2
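
The first run downloads the model weights, which can be several gigabytes. If you prefer to separate the download from the chat session, you can fetch the weights first; inside the interactive session, type /bye to exit.

ollama pull llama2    # download the weights without starting a chat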

3. Installing Ollama on Windows

Step 1: Download the Installer
Visit Ollama.ai and download the Windows installer.

Step 2: Run the Installer
Double-click the downloaded file and follow the installation prompts.

Step 3: Verify Installation
Open PowerShell and run:

ollama --version

Step 4: Run a Model

ollama run llama2
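
The CLI is identical on Windows, so every command in the rest of this guide works unchanged from PowerShell. In recent Ollama versions you can also check which models are currently loaded into memory with:

ollama ps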

4. Basic Usage Examples

List Available Models:

ollama list
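
Two related housekeeping commands (the model names here are just examples):

ollama rm llama2             # remove a downloaded model and free disk space
ollama cp llama2 my-llama2   # copy a model under a new name, e.g. before customizing it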

Generate Text with a Prompt:

ollama run llama2 "Write a poem about the sea."

Save Model Output to a File:

ollama run llama2 "Summarize the benefits of local LLMs." > output.txt
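
Ollama also exposes a local HTTP API (on port 11434 by default), which is handy for scripting. A minimal sketch using curl, assuming the llama2 model has already been pulled:

curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Summarize the benefits of local LLMs.",
  "stream": false
}'

The reply is a JSON object; the generated text is in its response field.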

5. Using Ollama with Docker (Optional)

Step 1: Pull the Ollama Docker Image:

docker pull ollama/ollama

Step 2: Start the Ollama Server in a Container:

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

The named volume (ollama) keeps downloaded models across container restarts.

Step 3: Run a Model Inside the Container:

docker exec -it ollama ollama run llama2
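
If you have an NVIDIA GPU with the NVIDIA Container Toolkit installed, the server container can be started with GPU access instead (a sketch of the GPU variant; adjust to your own setup):

docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama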

6. Updating Ollama

  • On Linux: Re-run the install script, or update through your package manager if you installed Ollama from a repository.
  • On Windows: Download and install the latest version from the website.

7. Conclusion

Setting up Ollama is straightforward on both Linux and Windows. With local inference capabilities, it opens doors to fast, private, and cost-effective LLM use. Try the provided examples to get started and explore Ollama’s potential. Happy experimenting!
