
Latest commit: Patrick Devine · 7550fd1b7f · use a pulsating spinner · 1 year ago

| Name | Commit | Message | Age |
| ---- | ------ | ------- | --- |
| api | bc22d5a38b | no blob response | 1 year ago |
| app | cbfff4f868 | update dependencies in `app/` | 1 year ago |
| cmd | df07e4a097 | remove redundant filename parameter (#1213) | 1 year ago |
| docs | f24741ff39 | Documenting how to view `Modelfile`s (#723) | 1 year ago |
| examples | 4936b5bb37 | add jupyter readme | 1 year ago |
| format | 93a108214c | only show decimal points for smaller file size numbers | 1 year ago |
| llm | a3fcecf943 | only set `main_gpu` if value > 0 is provided | 1 year ago |
| parser | a0c3e989de | deprecate modelfile embed command (#759) | 1 year ago |
| progress | 7550fd1b7f | use a pulsating spinner | 1 year ago |
| readline | f42f3d9b27 | go fmt | 1 year ago |
| scripts | 85e4441c6a | cache docker builds | 1 year ago |
| server | 35c4b5ec16 | calculate hash separately from http request | 1 year ago |
| version | 2c7f956b38 | add version | 1 year ago |
| .dockerignore | 85e4441c6a | cache docker builds | 1 year ago |
| .gitignore | 85e4441c6a | cache docker builds | 1 year ago |
| .gitmodules | 058d0cd04b | silence warm up log | 1 year ago |
| .prettierrc.json | 8685a5ad18 | move .prettierrc.json to root | 1 year ago |
| Dockerfile | 89ba19feca | use Go `1.21.3` in `Dockerfile` | 1 year ago |
| Dockerfile.build | d890890f66 | use lower glibc versions in `Dockerfile.build` | 1 year ago |
| LICENSE | df5fdd6647 | `proto` -> `ollama` | 1 year ago |
| README.md | 2fdf1b5ff8 | add laravel package to README.md (#1208) | 1 year ago |
| go.mod | 01ea6002c4 | replace go-humanize with format.HumanBytes | 1 year ago |
| go.sum | 01ea6002c4 | replace go-humanize with format.HumanBytes | 1 year ago |
| main.go | 7550fd1b7f | use a pulsating spinner | 1 year ago |

README.md


Ollama


Get up and running with large language models locally.

macOS

Download

Windows

Coming soon!

Linux & WSL2

curl https://ollama.ai/install.sh | sh

Manual install instructions

Docker

The official Ollama Docker image ollama/ollama is available on Docker Hub.
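For example, to start the server in a container and then run a model inside it (a minimal CPU-only sketch: the named volume persists downloaded models across restarts, and 11434 is Ollama's default API port):

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama run llama2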

Quickstart

To run and chat with Llama 2:

ollama run llama2

Model library

Ollama supports a list of open-source models, available at ollama.ai/library

Here are some example open-source models that can be downloaded:

| Model | Parameters | Size | Download |
| ----- | ---------- | ---- | -------- |
| Mistral | 7B | 4.1GB | ollama run mistral |
| Llama 2 | 7B | 3.8GB | ollama run llama2 |
| Code Llama | 7B | 3.8GB | ollama run codellama |
| Llama 2 Uncensored | 7B | 3.8GB | ollama run llama2-uncensored |
| Llama 2 13B | 13B | 7.3GB | ollama run llama2:13b |
| Llama 2 70B | 70B | 39GB | ollama run llama2:70b |
| Orca Mini | 3B | 1.9GB | ollama run orca-mini |
| Vicuna | 7B | 3.8GB | ollama run vicuna |

Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.

Customize your own model

Import from GGUF

Ollama supports importing GGUF models in the Modelfile:

  1. Create a file named Modelfile, with a FROM instruction that points to the local file path of the model you want to import.

    FROM ./vicuna-33b.Q4_0.gguf
    
  2. Create the model in Ollama

    ollama create example -f Modelfile
    
  3. Run the model

    ollama run example
    

Import from PyTorch or Safetensors

See the guide on importing models for more information.

Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the llama2 model:

ollama pull llama2

Create a Modelfile:

FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system prompt
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""

Next, create and run the model:

ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.

For more examples, see the examples directory. For more information on working with a Modelfile, see the Modelfile documentation.

CLI Reference

Create a model

ollama create is used to create a model from a Modelfile.
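For example, assuming a Modelfile in the current directory (the model name mymodel is just illustrative):

ollama create mymodel -f ./Modelfile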

Pull a model

ollama pull llama2

This command can also be used to update a local model. Only the diff will be pulled.

Remove a model

ollama rm llama2

Copy a model

ollama cp llama2 my-llama2

Multiline input

For multiline input, you can wrap text with """:

>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.

Pass in prompt as arguments

$ ollama run llama2 "Summarize this file: $(cat README.md)"
 Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.

List models on your computer

ollama list

Start Ollama

ollama serve is used when you want to start ollama without running the desktop application.
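A minimal sketch (by default the server listens on port 11434, the same port used in the REST API examples below):

ollama serve

Once it is running, use the CLI from another terminal or send requests to the HTTP API.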

Building

Install cmake and go (for example, on macOS via Homebrew):

brew install cmake go

Then generate dependencies (this step compiles the bundled llama.cpp, which is why cmake is required) and build:

go generate ./...
go build .

Next, start the server:

./ollama serve

Finally, in a separate shell, run a model:

./ollama run llama2

REST API

Ollama has a REST API for running and managing models. For example, to generate text from a model:

curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt":"Why is the sky blue?"
}'
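By default the response streams back as a series of JSON objects. To get a single response object instead, streaming can be disabled (a sketch, assuming your version supports the stream field):

curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

Other endpoints follow the same pattern; for example, listing local models (the server-side counterpart of ollama list, assuming the /api/tags endpoint):

curl http://localhost:11434/api/tags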

See the API documentation for all endpoints.

Community Integrations

Web & Desktop

Terminal

Libraries

Mobile

  • Maid (Mobile Artificial Intelligence Distribution)

Extensions & Plugins