
Bruce MacDonald 4819239e73 convert: cli for direct conversion 3 months ago
.github 581a4a5553 ci: fix artifact path prefix for missing windows payloads (#8052) 4 months ago
api 84a2314463 examples: remove codified examples (#8267) 3 months ago
app cf4d7c52c4 win: builtin arm runner (#8039) 4 months ago
auth b732beba6a lint 9 months ago
cmd 4819239e73 convert: cli for direct conversion 3 months ago
convert abfdc4710f all: fix typos in documentation, code, and comments (#7021) 4 months ago
discover 2d33c4e97d discover: remove leading new-line for linter 3 months ago
docs 84a2314463 examples: remove codified examples (#8267) 3 months ago
envconfig 4879a234c4 build: Make target improvements (#7499) 4 months ago
format b732beba6a lint 9 months ago
integration abfdc4710f all: fix typos in documentation, code, and comments (#7021) 4 months ago
llama 1deafd8254 llama: update vendored code to commit 46e3556 (#8308) 3 months ago
llm 1deafd8254 llama: update vendored code to commit 46e3556 (#8308) 3 months ago
macapp 8f805dd74b darwin: restore multiple runners for x86 (#8125) 4 months ago
make 1deafd8254 llama: update vendored code to commit 46e3556 (#8308) 3 months ago
model 8c9fb8eb73 imageproc mllama refactor (#7537) 4 months ago
openai e28f2d4900 openai: return usage as final chunk for streams (#6784) 4 months ago
parser 32bd37adf8 make the modelfile path relative for `ollama create` (#8380) 3 months ago
progress f7e3b9190f cmd: spinner progress for transfer model data (#6100) 8 months ago
readline cb40d60469 chore: upgrade to gods v2 4 months ago
runners 8f805dd74b darwin: restore multiple runners for x86 (#8125) 4 months ago
scripts a72f2dce45 scripts: sign renamed macOS binary (#8131) 4 months ago
server 8bccae4f92 show a more descriptive error in the client if it is newer than the server (#8351) 3 months ago
template c7cb0f0602 image processing for llama3.2 (#6963) 6 months ago
types b1fd7fef86 server: more support for mixed-case model names (#8017) 4 months ago
util cb42e607c5 llm: speed up gguf decoding by a lot (#5246) 10 months ago
version 2c7f956b38 add version 1 year ago
.dockerignore b754f5a6a3 Remove submodule and shift to Go server - 0.4.0 (#7157) 6 months ago
.gitattributes b754f5a6a3 Remove submodule and shift to Go server - 0.4.0 (#7157) 6 months ago
.gitignore 4819239e73 convert: cli for direct conversion 3 months ago
.golangci.yaml 87f0a49fe6 llm: do not silently fail for supplied, but invalid formats (#8130) 4 months ago
.prettierrc.json 8685a5ad18 move .prettierrc.json to root 1 year ago
CONTRIBUTING.md 369479cc30 docs: fix spelling error (#6391) 8 months ago
Dockerfile cdf3a181dc Add CUSTOM_CPU_FLAGS to Dockerfile. (#8284) 3 months ago
LICENSE df5fdd6647 `proto` -> `ollama` 1 year ago
Makefile 8f805dd74b darwin: restore multiple runners for x86 (#8125) 4 months ago
README.md 17fcdea698 readme: move discord link 3 months ago
SECURITY.md 463a8aa273 Create SECURITY.md 9 months ago
go.mod cb40d60469 chore: upgrade to gods v2 4 months ago
go.sum cb40d60469 chore: upgrade to gods v2 4 months ago
main.go b732beba6a lint 9 months ago

README.md

Ollama

Get up and running with large language models.

macOS

Download

Windows

Download

Linux

curl -fsSL https://ollama.com/install.sh | sh

Manual install instructions

Docker

The official Ollama Docker image ollama/ollama is available on Docker Hub.
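A minimal CPU-only setup pulls the image, persists downloaded models in a named volume, and exposes the default port. The volume and container names below are only examples, and GPU support needs extra flags covered in the Docker documentation:

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama run llama3.2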

Libraries

Community

Quickstart

To run and chat with Llama 3.2:

ollama run llama3.2

Model library

Ollama supports a list of models available on ollama.com/library

Here are some example models that can be downloaded:

Model                Parameters  Size    Download
Llama 3.3            70B         43GB    ollama run llama3.3
Llama 3.2            3B          2.0GB   ollama run llama3.2
Llama 3.2            1B          1.3GB   ollama run llama3.2:1b
Llama 3.2 Vision     11B         7.9GB   ollama run llama3.2-vision
Llama 3.2 Vision     90B         55GB    ollama run llama3.2-vision:90b
Llama 3.1            8B          4.7GB   ollama run llama3.1
Llama 3.1            405B        231GB   ollama run llama3.1:405b
Phi 4                14B         9.1GB   ollama run phi4
Phi 3 Mini           3.8B        2.3GB   ollama run phi3
Gemma 2              2B          1.6GB   ollama run gemma2:2b
Gemma 2              9B          5.5GB   ollama run gemma2
Gemma 2              27B         16GB    ollama run gemma2:27b
Mistral              7B          4.1GB   ollama run mistral
Moondream 2          1.4B        829MB   ollama run moondream
Neural Chat          7B          4.1GB   ollama run neural-chat
Starling             7B          4.1GB   ollama run starling-lm
Code Llama           7B          3.8GB   ollama run codellama
Llama 2 Uncensored   7B          3.8GB   ollama run llama2-uncensored
LLaVA                7B          4.5GB   ollama run llava
Solar                10.7B       6.1GB   ollama run solar

[!NOTE] You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

Customize a model

Import from GGUF

Ollama supports importing GGUF models in the Modelfile:

  1. Create a file named Modelfile containing a FROM instruction that points to the local filepath of the model you want to import.

    FROM ./vicuna-33b.Q4_0.gguf
    
  2. Create the model in Ollama

    ollama create example -f Modelfile
    
  3. Run the model

    ollama run example
    

Import from Safetensors

See the guide on importing models for more information.
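As a rough sketch, assuming the Safetensors weights for a supported architecture have already been downloaded to a local directory, the Modelfile points FROM at that directory and the model is created the same way as for GGUF. The path below is only illustrative:

FROM /path/to/safetensors/directory

ollama create example -f Modelfile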

Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the llama3.2 model:

ollama pull llama3.2

Create a Modelfile:

FROM llama3.2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""

Next, create and run the model:

ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.

For more examples, see the examples directory. For more information on working with a Modelfile, see the Modelfile documentation.

CLI Reference

Create a model

ollama create is used to create a model from a Modelfile.

ollama create mymodel -f ./Modelfile

Pull a model

ollama pull llama3.2

This command can also be used to update a local model. Only the diff will be pulled.

Remove a model

ollama rm llama3.2

Copy a model

ollama cp llama3.2 my-model

Multiline input

For multiline input, you can wrap text with """:

>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.

Multimodal models

ollama run llava "What's in this image? /Users/jmorgan/Desktop/smile.png"
The image features a yellow smiley face, which is likely the central focus of the picture.

Pass the prompt as an argument

$ ollama run llama3.2 "Summarize this file: $(cat README.md)"
 Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.

Show model information

ollama show llama3.2
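To print the Modelfile behind a model rather than its summary, the show command also accepts a --modelfile flag (see ollama show --help for the full list of options):

ollama show --modelfile llama3.2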

List models on your computer

ollama list

List which models are currently loaded

ollama ps

Stop a model which is currently running

ollama stop llama3.2

Start Ollama

ollama serve is used when you want to start ollama without running the desktop application.
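By default the server binds to localhost on port 11434. To listen on a different address, set the OLLAMA_HOST environment variable before starting the server; the address below is only an example:

OLLAMA_HOST=0.0.0.0:11434 ollama serve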

Building

See the developer guide
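As a rough sketch, assuming a recent Go toolchain and a CPU-only build, the binary can be compiled from the repository root with the standard Go tooling; the developer guide covers the prerequisites and GPU-enabled builds:

go build .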

Running local builds

Next, start the server:

./ollama serve

Finally, in a separate shell, run a model:

./ollama run llama3.2

REST API

Ollama has a REST API for running and managing models.

Generate a response

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt":"Why is the sky blue?"
}'

Chat with a model

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
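Both endpoints stream a series of JSON objects by default. To receive a single response object instead, set "stream" to false in the request body:

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ],
  "stream": false
}'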

See the API documentation for all endpoints.

Community Integrations

Web & Desktop

Cloud

Terminal

Apple Vision Pro

Database

  • pgai - PostgreSQL as a vector database (Create and search embeddings from Ollama models using pgvector)
  • MindsDB (Connects Ollama models with nearly 200 data platforms and apps)
  • chromem-go with example
  • Kangaroo (AI-powered SQL client and admin tool for popular databases)

Package managers

Libraries

Mobile

  • Enchanted
  • Maid
  • Ollama App (Modern and easy-to-use multi-platform client for Ollama)
  • ConfiChat (Lightweight, standalone, multi-platform, and privacy focused LLM chat interface with optional encryption)

Extensions & Plugins

Supported backends

  • llama.cpp project founded by Georgi Gerganov.

Observability

  • OpenLIT is an OpenTelemetry-native tool for monitoring Ollama Applications & GPUs using traces and metrics.
  • HoneyHive is an AI observability and evaluation platform for AI agents. Use HoneyHive to evaluate agent performance, interrogate failures, and monitor quality in production.