
Latest commit: b0b7641b80 "add ls alias" by Patrick Devine, 1 year ago

| Path | Commit | Message | Last change |
|------|--------|---------|-------------|
| api | 1f27d7f1b8 | fix stream errors | 1 year ago |
| app | dfceca48a7 | update icons to have different images for bright and dark mode | 1 year ago |
| cmd | b0b7641b80 | add ls alias | 1 year ago |
| docs | 31f0cb7742 | new `Modelfile` syntax | 1 year ago |
| examples | 31f0cb7742 | new `Modelfile` syntax | 1 year ago |
| format | 5bea29f610 | add new list command (#97) | 1 year ago |
| library | 6a19724d5f | remove colon from library modelfiles | 1 year ago |
| llama | 8526e1f5f1 | add llama.cpp mpi, opencl files | 1 year ago |
| parser | d59b164fa2 | add prompt back to parser | 1 year ago |
| progressbar | e4d7f3e287 | vendor in progress bar and change to bytes instead of bibytes (#130) | 1 year ago |
| scripts | 4dd296e155 | build app in publish script | 1 year ago |
| server | 6cea2061ec | windows: fix model pulling | 1 year ago |
| web | e4b2ccfb23 | web: clean up remaining `models.json` usage | 1 year ago |
| .dockerignore | 6292f4b64c | update `Dockerfile` | 1 year ago |
| .gitignore | 7c71c10d4f | fix compilation issue in Dockerfile, remove from `README.md` until ready | 1 year ago |
| .prettierrc.json | 8685a5ad18 | move .prettierrc.json to root | 1 year ago |
| Dockerfile | 7c71c10d4f | fix compilation issue in Dockerfile, remove from `README.md` until ready | 1 year ago |
| LICENSE | df5fdd6647 | `proto` -> `ollama` | 1 year ago |
| README.md | 23a37dc466 | clean up `README.md` | 1 year ago |
| ggml-metal.metal | e64ef69e34 | look for ggml-metal in the same directory as the binary | 1 year ago |
| go.mod | e4d7f3e287 | vendor in progress bar and change to bytes instead of bibytes (#130) | 1 year ago |
| go.sum | e4d7f3e287 | vendor in progress bar and change to bytes instead of bibytes (#130) | 1 year ago |
| main.go | 1775647f76 | continue conversation | 1 year ago |

README.md


Ollama


Note: Ollama is in early preview. Please report any issues you find.

Run, create, and share large language models (LLMs).

Download

  • Download for macOS on Apple Silicon (Intel coming soon)
  • Download for Windows and Linux (coming soon)
  • Build from source

Quickstart

To run and chat with Llama 2, the new model by Meta:

ollama run llama2

Model library

ollama includes a library of open-source models:

| Model | Parameters | Size | Download |
|-------|------------|------|----------|
| Llama2 | 7B | 3.8GB | `ollama pull llama2` |
| Llama2 13B | 13B | 7.3GB | `ollama pull llama2:13b` |
| Orca Mini | 3B | 1.9GB | `ollama pull orca` |
| Vicuna | 7B | 3.8GB | `ollama pull vicuna` |
| Nous-Hermes | 13B | 7.3GB | `ollama pull nous-hermes` |
| Wizard Vicuna Uncensored | 13B | 7.3GB | `ollama pull wizard-vicuna` |

Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.
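As a rough sanity check on the sizes above, a 4-bit quantized model stores about half a byte per parameter (an assumption for this sketch; the actual quantization and file format overhead vary, which is why the listed downloads are somewhat larger):

```shell
# Estimate download size from parameter count, assuming ~0.5 bytes/parameter
# (4-bit quantization). Real files add metadata and vary by format.
for params in 3 7 13; do
  awk -v p="$params" 'BEGIN { printf "%2dB parameters -> ~%.1f GB\n", p, p * 1e9 * 0.5 / 1e9 }'
done
```

The same rule of thumb explains the RAM guidance: the whole model must fit in memory alongside the rest of the system.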

Examples

Run a model

ollama run llama2
>>> hi
Hello! How can I help you today?

Create a custom model

Pull a base model:

ollama pull llama2

Create a Modelfile:

FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system prompt
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""

Next, create and run the model:

ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.

For more examples, see the examples directory.

Pull a model from the registry

ollama pull orca

List local models

ollama list

Model packages

Overview

Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile.
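Concretely, the Modelfile is that package definition: a base to build `FROM`, with configuration layered on top. A minimal sketch (the parameter value and prompt are illustrative, not recommendations):

```
# Base layer: any model already pulled locally can be built on
FROM llama2

# Configuration layer: generation settings baked into the package
PARAMETER temperature 0.8

# Data layer: a system prompt applied to every conversation
SYSTEM """
You are a concise technical assistant.
"""
```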


Building

go build .

To run it, start the server:

./ollama serve &

Finally, run a model!

./ollama run llama2
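With the server running, the same model can also be queried over HTTP via the `api` package. A sketch, assuming the default port 11434 and a `/api/generate` endpoint (both are assumptions, not documented in this README):

```shell
# Ask the local Ollama server for a completion; the response streams
# back as JSON lines. Endpoint and port are assumptions for this sketch.
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
```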