Create, run, and share self-contained large language models (LLMs). Ollama bundles a model's weights, configuration, prompts, and more into a single package that runs anywhere.
Note: Ollama is in early preview. Please report any issues you find.
To run and chat with Llama 2:

```
ollama run llama2
>>> hi
Hello! How can I help you today?
```
To customize a model, create a `Modelfile`:
```
FROM llama2
PROMPT """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
User: {{ .Prompt }}
Mario:
"""
```
Next, create and run the model:
```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```
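The `{{ .Prompt }}` placeholder in the `Modelfile` above is Go text/template syntax: whatever you type at the `>>>` prompt is substituted into the template before being sent to the model. Below is a minimal sketch of that substitution; the `PromptData` struct and field name are illustrative assumptions, not Ollama's actual internals.

```go
// Minimal sketch of how a PROMPT template like the Mario example is rendered.
// The PromptData struct is an assumption for illustration only.
package main

import (
	"os"
	"text/template"
)

type PromptData struct {
	Prompt string // the text typed at the >>> prompt
}

func main() {
	const mario = `You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
User: {{ .Prompt }}
Mario:
`
	tmpl := template.Must(template.New("prompt").Parse(mario))
	// Substitute the user's input into the template and print the result.
	if err := tmpl.Execute(os.Stdout, PromptData{Prompt: "hi"}); err != nil {
		panic(err)
	}
}
```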
Ollama includes a library of open-source, pre-trained models. More models are coming soon. You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.
| Model | Parameters | Size | Download |
| --- | --- | --- | --- |
| Llama2 | 7B | 3.8GB | `ollama pull llama2` |
| Llama2 13B | 13B | 7.3GB | `ollama pull llama2:13b` |
| Orca Mini | 3B | 1.9GB | `ollama pull orca` |
| Vicuna | 7B | 3.8GB | `ollama pull vicuna` |
| Nous-Hermes | 13B | 7.3GB | `ollama pull nous-hermes` |
| Wizard Vicuna Uncensored | 13B | 7.3GB | `ollama pull wizard-vicuna` |
To build Ollama from source:

```
go build .
```

To run it, start the server:

```
./ollama serve &
```

Finally, run a model!

```
./ollama run llama2
```
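The CLI talks to the running server over HTTP, so other programs can too. Below is a hedged sketch of calling the server from Go; the port (`11434`), the `/api/generate` path, and the request fields are assumptions based on later versions of Ollama and may differ in this early preview.

```go
// Hedged sketch: POST a prompt to a locally running `ollama serve`.
// The port, endpoint path, and JSON fields are assumptions and may not
// match this early preview's API exactly.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	body, err := json.Marshal(map[string]string{
		"model":  "llama2",
		"prompt": "hi",
	})
	if err != nil {
		panic(err)
	}
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// The server streams its response; print whatever comes back.
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```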