
Latest commit: Patrick Devine 08b900250f vendor in progress bar and change to bytes instead of bibytes (1 year ago)

| Path | Commit | Message | Last change |
| --- | --- | --- | --- |
| api | 68df36ae50 | fix pull 0 bytes on completed layer | 1 year ago |
| app | dfceca48a7 | update icons to have different images for bright and dark mode | 1 year ago |
| cmd | 08b900250f | vendor in progress bar and change to bytes instead of bibytes | 1 year ago |
| docs | 25f874c030 | Update modelfile.md | 1 year ago |
| examples | ac88ab48d9 | update | 1 year ago |
| format | 5bea29f610 | add new list command (#97) | 1 year ago |
| llama | 40c9dc0a31 | fix multibyte responses | 1 year ago |
| parser | 572fc9099f | add license layers to the parser (#116) | 1 year ago |
| progressbar | 08b900250f | vendor in progress bar and change to bytes instead of bibytes | 1 year ago |
| scripts | 4dd296e155 | build app in publish script | 1 year ago |
| server | 4ca7c4be1f | dont consume reader when calculating digest | 1 year ago |
| web | 9c5572d51f | add discord link back | 1 year ago |
| .dockerignore | 6292f4b64c | update `Dockerfile` | 1 year ago |
| .gitignore | 7c71c10d4f | fix compilation issue in Dockerfile, remove from `README.md` until ready | 1 year ago |
| .prettierrc.json | 8685a5ad18 | move .prettierrc.json to root | 1 year ago |
| Dockerfile | 7c71c10d4f | fix compilation issue in Dockerfile, remove from `README.md` until ready | 1 year ago |
| LICENSE | df5fdd6647 | `proto` -> `ollama` | 1 year ago |
| README.md | 10d502611f | fix discord link in `README.md` | 1 year ago |
| ggml-metal.metal | e64ef69e34 | look for ggml-metal in the same directory as the binary | 1 year ago |
| go.mod | 08b900250f | vendor in progress bar and change to bytes instead of bibytes | 1 year ago |
| go.sum | 08b900250f | vendor in progress bar and change to bytes instead of bibytes | 1 year ago |
| main.go | 1775647f76 | continue conversation | 1 year ago |
| models.json | 5028de2901 | update vicuna model | 1 year ago |

README.md

# Ollama

Create, run, and share large language models (LLMs). Ollama bundles a model’s weights, configuration, prompts, and more into self-contained packages that can run on any machine.

Note: Ollama is in early preview. Please report any issues you find.

## Download

- Download for macOS on Apple Silicon (Intel coming soon)
- Download for Windows and Linux (coming soon)
- Build from source

## Quickstart

To run and chat with Llama 2, the new model by Meta:

```
ollama run llama2
```

## Model library

Ollama includes a library of open-source, pre-trained models. More models are coming soon. You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.

| Model | Parameters | Size | Download |
| --- | --- | --- | --- |
| Llama2 | 7B | 3.8GB | `ollama pull llama2` |
| Llama2 13B | 13B | 7.3GB | `ollama pull llama2:13b` |
| Orca Mini | 3B | 1.9GB | `ollama pull orca` |
| Vicuna | 7B | 3.8GB | `ollama pull vicuna` |
| Nous-Hermes | 13B | 7.3GB | `ollama pull nous-hermes` |
| Wizard Vicuna Uncensored | 13B | 7.3GB | `ollama pull wizard-vicuna` |
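The RAM guidance above can be expressed as a small lookup. This is a convenience sketch using only the numbers stated in this README (the function name and cutoffs are illustrative, not part of Ollama):

```go
package main

import "fmt"

// minRAMGB returns the minimum RAM in GB suggested above for a model
// with the given parameter count in billions (3B -> 8 GB, 7B -> 16 GB,
// 13B -> 32 GB).
func minRAMGB(paramsB int) int {
	switch {
	case paramsB <= 3:
		return 8
	case paramsB <= 7:
		return 16
	default:
		return 32
	}
}

func main() {
	fmt.Println(minRAMGB(7)) // 7B models such as llama2 need at least 16 GB
}
```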

## Examples

### Run a model

```
ollama run llama2
>>> hi
Hello! How can I help you today?
```

### Create a custom character model

Pull a base model:

```
ollama pull orca
```

Create a `Modelfile`:

```
FROM orca
PROMPT """
### System:
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.

### User:
{{ .Prompt }}

### Response:
"""
```

Next, create and run the model:

```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```

For more information on Modelfile syntax, see docs/modelfile.md.
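The `{{ .Prompt }}` placeholder in the `PROMPT` section uses Go's `text/template` syntax. How the substitution behaves can be sketched as follows; this is a minimal illustration of the template mechanism, not Ollama's actual prompt-rendering code, and the `renderPrompt` helper is hypothetical:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderPrompt fills a Modelfile-style prompt template with the user's
// input via text/template, the same syntax used by {{ .Prompt }} above.
func renderPrompt(userInput string) string {
	tmpl := template.Must(template.New("prompt").Parse(
		"### User:\n{{ .Prompt }}\n\n### Response:\n"))
	var buf bytes.Buffer
	// The field name in the data struct must match the template variable.
	_ = tmpl.Execute(&buf, struct{ Prompt string }{Prompt: userInput})
	return buf.String()
}

func main() {
	fmt.Print(renderPrompt("hi"))
}
```

Running this prints the template with `hi` substituted in place of `{{ .Prompt }}`.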

### Pull a model from the registry

```
ollama pull nous-hermes
```

## Building

```
go build .
```

To run it, start the server:

```
./ollama serve &
```

Finally, run a model!

```
./ollama run llama2
```
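Once the server is running, the CLI talks to it over HTTP, so other programs can too. As a sketch only: the endpoint URL, port, and JSON field names below are assumptions for illustration (check the `api` package for the real definitions), and the snippet just builds and prints the request body rather than sending it:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// generateRequest mirrors a plausible JSON body for asking the local
// server to generate text. Field names here are assumptions, not the
// documented API.
type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
}

// encodeRequest marshals the body that a client might POST to an
// assumed endpoint like http://localhost:11434/api/generate.
func encodeRequest(model, prompt string) []byte {
	body, _ := json.Marshal(generateRequest{Model: model, Prompt: prompt})
	return body
}

func main() {
	fmt.Println(string(encodeRequest("llama2", "hi")))
}
```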