
Ollama

Ollama is a tool for running large language models on any machine. It is designed to be easy to use and fast, supporting as many models as possible by selecting the fastest loader available for your platform and model.

Note: this project is a work in progress. Certain models that can be run with ollama are intended for research and/or non-commercial use only.

Install

pip install ollama
docker run ollama/ollama

Quickstart

To run a model, use ollama run:

ollama run orca-mini-3b

You can also run models from Hugging Face:

ollama run huggingface.co/TheBloke/orca_mini_3B-GGML

Or directly via downloaded model files:

ollama run ~/Downloads/orca-mini-13b.ggmlv3.q4_0.bin

Python SDK

Example

import ollama
ollama.generate("orca-mini-3b", "hi")

ollama.generate(model, message)

Generate a completion

ollama.generate("./llama-7b-ggml.bin", "hi")
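Because generate takes the model and the prompt as positional arguments, a batch of prompts can be driven through it with an ordinary loop. A minimal sketch, parameterized over the generate callable so it can be exercised without a model loaded (run_prompts is a hypothetical helper, not part of the SDK):

```python
def run_prompts(generate, model, prompts):
    """Send each prompt to generate() and collect the completions.

    generate is expected to match the ollama.generate(model, prompt)
    signature; pass ollama.generate directly to use the SDK.
    """
    return [generate(model, prompt) for prompt in prompts]
```

With the SDK installed this could be called as run_prompts(ollama.generate, "orca-mini-3b", ["hi", "tell me a joke"]).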

ollama.models()

List available local models

models = ollama.models()
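Assuming models() returns a list of model names (an assumption; check the SDK for the exact return type), the list can be searched with ordinary list operations. find_models below is a hypothetical helper, not part of the SDK:

```python
def find_models(model_names, needle):
    """Return the model names that contain needle, case-insensitively."""
    needle = needle.lower()
    return [name for name in model_names if needle in name.lower()]
```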

ollama.load(model)

Manually load a model for generation

ollama.load("model")

ollama.unload(model)

Unload a model

ollama.unload("model")

ollama.pull(model)

Download a model

ollama.pull("huggingface.co/thebloke/llama-7b-ggml")
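pull and generate compose into a simple download-then-run workflow. A sketch, again parameterized over the SDK callables so the flow is testable without a network (ensure_and_generate is a hypothetical helper, not part of the SDK):

```python
def ensure_and_generate(pull, generate, model, prompt):
    """Download the model, then produce a completion for prompt."""
    pull(model)                     # fetch the model if not already present
    return generate(model, prompt)  # run the prompt against it
```

With the SDK this would be invoked as ensure_and_generate(ollama.pull, ollama.generate, "huggingface.co/thebloke/llama-7b-ggml", "hi").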

Coming Soon

ollama.search("query")

Search for compatible models that Ollama can run

ollama.search("llama-7b")

Documentation