How to run LLMs locally with Ollama

Large language models (LLMs) have revolutionized the field of natural language processing (NLP), demonstrating exceptional capabilities in understanding and generating human-like text. These models are typically based on transformer architectures; pioneers in the domain include GPT-3 (Generative Pre-trained Transformer 3), alongside the many open models available through platforms such as Hugging Face and OpenAI.

In this Answer, we’ll explore how to run LLMs locally with Ollama, providing you with practical insights into the process.

Running LLMs locally with Ollama

Ollama is a CLI-based tool that facilitates local LLM deployment for models like Llama, Mistral, Orca, Phi, and others. While primarily supported on Linux and macOS, Ollama can be used on Windows via WSL (Windows Subsystem for Linux).

To install Ollama on a Linux-based system, use the following command:

curl https://ollama.ai/install.sh | sh
Command to install Ollama

For macOS and Windows, you can download the installer from the official Ollama site. Once installed, start Ollama with any model you want by using the following command:

ollama run [MODEL NAME]
Command to start Ollama locally

If the specified model hasn’t been downloaded yet, Ollama will automatically fetch all the necessary resources, including the model weights and configuration files, on the first run. Once the download is complete, the terminal displays a >>> prompt, indicating that the model is running and ready for input.
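Once a model is downloaded, you are not limited to the interactive prompt: ollama run also accepts a one-shot prompt as an argument, and Ollama serves a local REST API on port 11434 while it is running. A minimal sketch, assuming the llama3 model has already been pulled:

```shell
# One-shot prompt: prints the model's answer and exits.
ollama run llama3 "Explain transformers in one sentence."

# The same request through Ollama's local REST API (served on port 11434):
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Explain transformers in one sentence.",
  "stream": false
}'
```

With "stream": false the API returns a single JSON object whose response field holds the full completion; omit it to stream tokens as they are generated.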

Open-source LLMs available on Ollama

Here are a few of the open-source LLMs available on Ollama:

| Model            | Parameters | Size  | Download Command   |
|------------------|------------|-------|--------------------|
| Meta's Llama 3   | 8B         | 4.7GB | ollama run llama3  |
| Google's Gemma 2 | 9B         | 5.4GB | ollama run gemma2  |
| Qwen2            | 7B         | 4.4GB | ollama run qwen2   |
| Mistral          | 7B         | 4.1GB | ollama run mistral |
| Phi-3            | 3.8B       | 2.2GB | ollama run phi3    |
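Beyond run, the Ollama CLI includes subcommands for managing downloaded models. A quick sketch, assuming Ollama is installed:

```shell
# List downloaded models and their sizes on disk:
ollama list

# Download a model without opening an interactive session:
ollama pull phi3

# Delete a model to reclaim disk space:
ollama rm phi3
```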

Note: Model compatibility depends on the hardware of your system. Larger models might not be able to run on every system due to their high resource requirements. Ensure your system meets the necessary specifications before attempting to run large models.
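The note above can be checked quickly from the shell before downloading anything. A minimal sketch for Linux systems (macOS ships different tools, such as vm_stat):

```shell
# Rough pre-flight check on Linux: a model's size (see the table above)
# should fit comfortably in available RAM.
free -h | awk '/^Mem:/ {print "RAM available:", $7}'

# Models are stored on disk, so check free space too:
df -h "$HOME" | awk 'NR==2 {print "Disk free:", $4}'
```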

Let’s put our knowledge to the test with a simple question.

Question: What is Ollama?

A) A large language model
B) A CLI-based application for running LLMs
C) An open-source LLM developed by OpenAI
D) A platform for sentiment analysis


Copyright ©2025 Educative, Inc. All rights reserved