Create, run, and share large language models (LLMs). Ollama bundles a model's weights, configuration, prompts, and more into a self-contained package that runs anywhere.
Note: Ollama is in early preview. Please report any issues you find.
To run and chat with a model:

```
ollama run llama2
>>> hi
Hello! How can I help you today?
```
To customize a model, create a `Modelfile`:

```
FROM llama2
PROMPT """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.

User: {{ .Prompt }}
Mario:
"""
```
Next, create and run the model:
```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```
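The `{{ .Prompt }}` placeholder in the `Modelfile` uses Go's `text/template` syntax and is replaced at generation time with whatever you type at the `>>>` prompt. Here is a minimal, standalone sketch of how such a template renders (illustrative only, not Ollama's internal code):

```go
package main

import (
	"os"
	"text/template"
)

func main() {
	// The same template text as the PROMPT section of the Modelfile above.
	const prompt = `You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.

User: {{ .Prompt }}
Mario:
`
	tmpl := template.Must(template.New("prompt").Parse(prompt))

	// .Prompt is filled with the user's input, e.g. "hi".
	if err := tmpl.Execute(os.Stdout, struct{ Prompt string }{Prompt: "hi"}); err != nil {
		panic(err)
	}
}
```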
Ollama includes a library of open-source, pre-trained models. More models are coming soon.
Model | Parameters | Size | Download
---|---|---|---
Llama2 | 7B | 3.8GB | `ollama pull llama2`
Orca Mini | 3B | 1.9GB | `ollama pull orca`
Vicuna | 7B | 3.8GB | `ollama pull vicuna`
Nous-Hermes | 13B | 7.3GB | `ollama pull nous-hermes`
To build from source:

```
go build .
```

To run it, start the server:

```
./ollama server &
```

Finally, run a model!

```
./ollama run llama2
```
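The server started by `ollama server` also exposes an HTTP API (see the `api` and `server` packages in this repository). As a rough sketch only, assuming the `/api/generate` endpoint and default port `11434` used by later Ollama releases, a streaming request from Go could look like this; check this version's `server` package for the exact routes and request fields:

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Assumed endpoint and request shape; confirm against the server routes in this repo.
	body := bytes.NewBufferString(`{"model": "llama2", "prompt": "hi"}`)
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Responses stream back as one JSON object per line while tokens are generated.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
}
```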