Ollama

Get up and running with large language models.

macOS

Download

Windows preview

Download

Linux

curl -fsSL https://ollama.com/install.sh | sh

Manual install instructions

Docker

The official Ollama Docker image ollama/ollama is available on Docker Hub.
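
As a minimal sketch (CPU-only; GPU use needs extra flags described in the image's docs), the server can be started in a container and a model run inside it:

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama run llama3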

Libraries

The official client libraries are ollama-python and ollama-js.

Quickstart

To run and chat with Llama 3:

ollama run llama3

Model library

Ollama supports a list of models available on ollama.com/library

Here are some example models that can be downloaded:

| Model              | Parameters | Size  | Download                     |
| ------------------ | ---------- | ----- | ---------------------------- |
| Llama 3            | 8B         | 4.7GB | ollama run llama3            |
| Llama 3            | 70B        | 40GB  | ollama run llama3:70b        |
| Phi 3 Mini         | 3.8B       | 2.3GB | ollama run phi3              |
| Phi 3 Medium       | 14B        | 7.9GB | ollama run phi3:medium       |
| Gemma 2            | 9B         | 5.5GB | ollama run gemma2            |
| Gemma 2            | 27B        | 16GB  | ollama run gemma2:27b        |
| Mistral            | 7B         | 4.1GB | ollama run mistral           |
| Moondream 2        | 1.4B       | 829MB | ollama run moondream         |
| Neural Chat        | 7B         | 4.1GB | ollama run neural-chat       |
| Starling           | 7B         | 4.1GB | ollama run starling-lm       |
| Code Llama         | 7B         | 3.8GB | ollama run codellama         |
| Llama 2 Uncensored | 7B         | 3.8GB | ollama run llama2-uncensored |
| LLaVA              | 7B         | 4.5GB | ollama run llava             |
| Solar              | 10.7B      | 6.1GB | ollama run solar             |

> [!NOTE]
> You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

Customize a model

Import from GGUF

Ollama supports importing GGUF models in the Modelfile:

  1. Create a file named Modelfile, with a FROM instruction pointing to the local filepath of the model you want to import:

    FROM ./vicuna-33b.Q4_0.gguf

  2. Create the model in Ollama:

    ollama create example -f Modelfile

  3. Run the model:

    ollama run example

Import from PyTorch or Safetensors

See the guide on importing models for more information.

Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the llama3 model:

ollama pull llama3

Create a Modelfile:

FROM llama3

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""

Next, create and run the model:

ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.

For more examples, see the examples directory. For more information on working with a Modelfile, see the Modelfile documentation.

CLI Reference

Create a model

ollama create is used to create a model from a Modelfile.

ollama create mymodel -f ./Modelfile

Pull a model

ollama pull llama3

This command can also be used to update a local model. Only the diff will be pulled.

Remove a model

ollama rm llama3

Copy a model

ollama cp llama3 my-model

Multiline input

For multiline input, you can wrap text with """:

>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.

Multimodal models

To use a multimodal model such as LLaVA, include an image path in the prompt:

>>> What's in this image? /Users/jmorgan/Desktop/smile.png
The image features a yellow smiley face, which is likely the central focus of the picture.

Pass the prompt as an argument

$ ollama run llama3 "Summarize this file: $(cat README.md)"
 Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.

Show model information

ollama show llama3
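
ollama show also accepts flags for printing individual components; for example, the --modelfile flag prints the Modelfile the model was created from:

ollama show llama3 --modelfile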

List models on your computer

ollama list

Start Ollama

ollama serve is used when you want to start ollama without running the desktop application.
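
By default the server listens on 127.0.0.1:11434; the bind address can be changed with the OLLAMA_HOST environment variable, for example:

OLLAMA_HOST=0.0.0.0:11434 ollama serve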

Building

See the developer guide

Running local builds

After building Ollama per the developer guide, start the server:

./ollama serve

Finally, in a separate shell, run a model:

./ollama run llama3

REST API

Ollama has a REST API for running and managing models.

Generate a response

curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt":"Why is the sky blue?"
}'
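
Responses stream back as a series of JSON objects by default; setting "stream": false returns a single reply object instead:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'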

Chat with a model

curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'

See the API documentation for all endpoints.
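
Other endpoints follow the same pattern; for example, models available locally can be listed with:

curl http://localhost:11434/api/tags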

Community Integrations

Community integrations fall into the following categories:

  • Web & Desktop
  • Terminal
  • Database
  • Package managers
  • Libraries
  • Mobile
  • Extensions & Plugins

Supported backends

  • llama.cpp project founded by Georgi Gerganov.