Patrick Devine a0ae700d5d add llama2:13b model to the readme 1 year ago
api 68df36ae50 fix pull 0 bytes on completed layer 1 year ago
app 4c1dc52083 app: create `/usr/local/bin/` if it does not exist 1 year ago
cmd a6d03dd510 Merge pull request #110 from jmorganca/fix-pull-0-bytes 1 year ago
docs 9310ee3967 First stab at a modelfile doc 1 year ago
examples 3e10f902f5 add `mario` example 1 year ago
format 5bea29f610 add new list command (#97) 1 year ago
llama 40c9dc0a31 fix multibyte responses 1 year ago
parser 572fc9099f add license layers to the parser (#116) 1 year ago
scripts 4dd296e155 build app in publish script 1 year ago
server 4ca7c4be1f dont consume reader when calculating digest 1 year ago
web f08c050e57 fix page transitions flickering 1 year ago
.dockerignore 6292f4b64c update `Dockerfile` 1 year ago
.gitignore 7c71c10d4f fix compilation issue in Dockerfile, remove from `README.md` until ready 1 year ago
.prettierrc.json 8685a5ad18 move .prettierrc.json to root 1 year ago
Dockerfile 7c71c10d4f fix compilation issue in Dockerfile, remove from `README.md` until ready 1 year ago
LICENSE df5fdd6647 `proto` -> `ollama` 1 year ago
README.md a0ae700d5d add llama2:13b model to the readme 1 year ago
ggml-metal.metal e64ef69e34 look for ggml-metal in the same directory as the binary 1 year ago
go.mod 5bea29f610 add new list command (#97) 1 year ago
go.sum 5bea29f610 add new list command (#97) 1 year ago
main.go 1775647f76 continue conversation 1 year ago
models.json 5028de2901 update vicuna model 1 year ago

README.md

Ollama

Create, run, and share large language models (LLMs). Ollama bundles a model's weights, configuration, prompts, and more into a single self-contained package that runs anywhere.

Note: Ollama is in early preview. Please report any issues you find.

Download

  • Download for macOS on Apple Silicon (Intel coming soon)
  • Download for Windows and Linux (coming soon)
  • Build from source

Examples

Quickstart

ollama run llama2
>>> hi
Hello! How can I help you today?

Creating a custom model

Create a Modelfile:

FROM llama2
PROMPT """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.

User: {{ .Prompt }}
Mario:
"""

Next, create and run the model:

ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
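The {{ .Prompt }} placeholder in the PROMPT block uses Go template syntax: when you chat with the model, it is replaced with your input before the text is passed to the model. As a rough illustration only (the actual rendering code lives in Ollama's server and may differ), the same substitution can be reproduced with Go's text/template package:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Mirrors the PROMPT block of the Modelfile above. The Prompt field
	// is what the {{ .Prompt }} placeholder refers to.
	tmpl := template.Must(template.New("prompt").Parse(
		"You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.\n\nUser: {{ .Prompt }}\nMario:"))

	// Substituting the user's input ("hi") produces the final prompt text.
	_ = tmpl.Execute(os.Stdout, struct{ Prompt string }{Prompt: "hi"})
}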

Model library

Ollama includes a library of open-source, pre-trained models. More models are coming soon. You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.

Model                      Parameters   Size     Download
Llama2                     7B           3.8GB    ollama pull llama2
Llama2 13B                 13B          7.3GB    ollama pull llama2:13b
Orca Mini                  3B           1.9GB    ollama pull orca
Vicuna                     7B           3.8GB    ollama pull vicuna
Nous-Hermes                13B          7.3GB    ollama pull nous-hermes
Wizard Vicuna Uncensored   13B          7.3GB    ollama pull wizard-vicuna
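
As a back-of-the-envelope check of these sizes, the downloads roughly correspond to 4-bit quantized GGML weights (about 4.5 bits per weight once per-block scale factors are counted), which is how llama.cpp-style runners like the one in the llama directory typically package models. This is an illustrative estimate, not a figure taken from the Ollama source:

package main

import "fmt"

func main() {
	// Assumed average bits per weight for 4-bit GGML quantization,
	// including per-block scales (an approximation for illustration).
	const bitsPerWeight = 4.5

	models := []struct {
		name          string
		billionParams float64
	}{
		{"3B", 3},
		{"7B", 7},
		{"13B", 13},
	}

	for _, m := range models {
		// bits -> bytes -> gigabytes
		gigabytes := m.billionParams * 1e9 * bitsPerWeight / 8 / 1e9
		fmt.Printf("%s model ≈ %.1f GB of weights\n", m.name, gigabytes)
	}
}

The estimates (about 1.7, 3.9, and 7.3 GB) land close to the table; the small differences come from the nominal parameter counts being rounded. They also explain the RAM guidance above: the quantized weights need to fit in memory with headroom left for the rest of the process.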

Building

go build .

To run it, start the server:

./ollama serve &

Finally, run a model!

./ollama run llama2
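
Once the server is running, you can also talk to it over HTTP. The exact routes live in the api and server packages of this repository; the sketch below assumes the defaults used by later Ollama releases (localhost:11434 and a streaming /api/generate endpoint) and simply prints the raw response stream, so treat it as a starting point rather than a reference:

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Assumed request shape: model name plus a prompt, encoded as JSON.
	body := bytes.NewBufferString(`{"model": "llama2", "prompt": "hi"}`)

	// Assumed default address and route; check the server package for the
	// routes actually registered in this version.
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", body)
	if err != nil {
		fmt.Fprintln(os.Stderr, "request failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	// The endpoint is expected to stream newline-delimited JSON; print it
	// as it arrives.
	io.Copy(os.Stdout, resp.Body)
}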