Run, create, and share large language models (LLMs).
Note: Ollama is in early preview. Please report any issues you find.
To run and chat with Llama 2, the new model by Meta:

```
ollama run llama2
```
Ollama supports a list of open-source models, available at ollama.ai/library. Here are some example open-source models that can be downloaded:
| Model                    | Parameters | Size  | Download                        |
| ------------------------ | ---------- | ----- | ------------------------------- |
| Llama2                   | 7B         | 3.8GB | `ollama pull llama2`            |
| Llama2 13B               | 13B        | 7.3GB | `ollama pull llama2:13b`        |
| Llama2 70B               | 70B        | 39GB  | `ollama pull llama2:70b`        |
| Llama2 Uncensored        | 7B         | 3.8GB | `ollama pull llama2-uncensored` |
| Orca Mini                | 3B         | 1.9GB | `ollama pull orca-mini`         |
| Vicuna                   | 7B         | 3.8GB | `ollama pull vicuna`            |
| Nous-Hermes              | 7B         | 3.8GB | `ollama pull nous-hermes`       |
| Nous-Hermes 13B          | 13B        | 7.3GB | `ollama pull nous-hermes:13b`   |
| Wizard Vicuna Uncensored | 13B        | 7.3GB | `ollama pull wizard-vicuna`     |
Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.
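For example, to pull one of the larger variants by tag and chat with it (the same `model:tag` name from the table works with both `pull` and `run`):

```
# Pull a tagged model from the library, then start chatting with it.
ollama pull llama2:13b
ollama run llama2:13b
```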
Running a model drops you into an interactive prompt:

```
ollama run llama2
>>> hi
Hello! How can I help you today?
```
For multiline input, you can wrap text with `"""`:

```
>>> """Hello,
... world!
... """

I'm a basic program that prints the famous "Hello, world!" message to the console.
```
Pull a base model:

```
ollama pull llama2
```

To update a model to the latest version, run `ollama pull llama2` again. The model will be updated if necessary.
Create a `Modelfile`:

```
FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system prompt
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```
Next, create and run the model:

```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```
For more examples, see the examples directory. For more information on creating a Modelfile, see the Modelfile documentation.
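As a rough end-to-end sketch, the same workflow can be scripted from the shell. The model name `assistant` and the parameter values below are illustrative choices for this sketch, not taken from the project's examples:

```
# Write a Modelfile inline and build a custom model from it.
# "assistant" is a hypothetical name chosen for this sketch.
cat > Modelfile <<'EOF'
FROM llama2
# lower temperature for more deterministic answers
PARAMETER temperature 0.7
SYSTEM """
You are a concise assistant. Keep answers short.
"""
EOF

ollama create assistant -f ./Modelfile
ollama run assistant
```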
Pull a model from the registry:

```
ollama pull orca
```

List the models on your machine:

```
ollama list
```
Ollama bundles model weights, configuration, and data into a single package, defined by a `Modelfile`.

To build Ollama from source:

```
go build .
```

To run it, start the server:

```
./ollama serve &
```

Finally, run a model!

```
./ollama run llama2
```
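The server listens on port 11434 by default (the same port used by the API example below). A quick sanity check, assuming the root endpoint answers plain HTTP requests:

```
# Confirm the server is reachable on its default port (11434).
# The exact response body may vary between versions.
curl http://localhost:11434/
```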
See the API documentation for all endpoints.
Ollama has an API for running and managing models. For example, to generate text from a model:
```
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
```
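The response is streamed back as a series of JSON objects, one per line. Here is a minimal sketch of consuming that stream from the shell, assuming `jq` is installed and that each streamed object carries the generated text in a `response` field (an assumption about the response schema, not stated above):

```
# Stream a generation and print just the text as it arrives.
# Assumes each JSON line has a "response" field with the next chunk.
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}' | jq -j '.response // empty'
echo    # final newline
```

Because `jq` processes each JSON object as it arrives, the tokens print incrementally rather than all at once when the generation finishes.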