
Patrick Devine 206fab0e15 add license layers to the parser 1 year ago
api 68df36ae50 fix pull 0 bytes on completed layer 1 year ago
app 280fbe8019 app: use `llama2` instead of `orca` 1 year ago
cmd a6d03dd510 Merge pull request #110 from jmorganca/fix-pull-0-bytes 1 year ago
docs 9310ee3967 First stab at a modelfile doc 1 year ago
examples 3e10f902f5 add `mario` example 1 year ago
format 5bea29f610 add new list command (#97) 1 year ago
llama 40c9dc0a31 fix multibyte responses 1 year ago
parser 206fab0e15 add license layers to the parser 1 year ago
scripts 4dd296e155 build app in publish script 1 year ago
server 206fab0e15 add license layers to the parser 1 year ago
web 820e581ad8 web: fix typos and add link to discord 1 year ago
.dockerignore 6292f4b64c update `Dockerfile` 1 year ago
.gitignore 7c71c10d4f fix compilation issue in Dockerfile, remove from `README.md` until ready 1 year ago
.prettierrc.json 8685a5ad18 move .prettierrc.json to root 1 year ago
Dockerfile 7c71c10d4f fix compilation issue in Dockerfile, remove from `README.md` until ready 1 year ago
LICENSE df5fdd6647 `proto` -> `ollama` 1 year ago
README.md d14785738e README typo fix (#106) 1 year ago
ggml-metal.metal e64ef69e34 look for ggml-metal in the same directory as the binary 1 year ago
go.mod 5bea29f610 add new list command (#97) 1 year ago
go.sum 5bea29f610 add new list command (#97) 1 year ago
main.go 1775647f76 continue conversation 1 year ago
models.json 5028de2901 update vicuna model 1 year ago

README.md


Ollama

Create, run, and share large language models (LLMs). Ollama bundles a model's weights, configuration, prompts, and more into self-contained packages that run anywhere.

Note: Ollama is in early preview. Please report any issues you find.

Download

  • Download for macOS on Apple Silicon (Intel coming soon)
  • Download for Windows and Linux (coming soon)
  • Build from source

Examples

Quickstart

ollama run llama2
>>> hi
Hello! How can I help you today?

Creating a custom model

Create a Modelfile:

FROM llama2
PROMPT """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.

User: {{ .Prompt }}
Mario:
"""

Next, create and run the model:

ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
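
The repo's commit history also adds a `list` command for showing the models available locally. It isn't covered in this README yet, so treat the exact invocation below as an assumption:

ollama list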

Model library

Ollama includes a library of open-source, pre-trained models. More models are coming soon.

Model        Parameters  Size    Download
Llama2       7B          3.8GB   ollama pull llama2
Orca Mini    3B          1.9GB   ollama pull orca
Vicuna       7B          3.8GB   ollama pull vicuna
Nous-Hermes  13B         7.3GB   ollama pull nous-hermes
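
To use one of these models, pull it and then run it by name, just like the quickstart above. For example, the smaller Orca Mini model:

ollama pull orca
ollama run orca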

Building

go build .

To run it, start the server:

./ollama server &

Finally, run a model!

./ollama run llama2
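
The CLI talks to this local server over HTTP, so you can also query it directly. As a rough sketch only (the HTTP API is not documented in this README, so the endpoint, port, and request body shown here are assumptions rather than documented behavior), a direct request might look like:

curl http://localhost:11434/api/generate -d '{"model": "llama2", "prompt": "hi"}'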