Install prerequisites:
Then build and run Ollama from the root directory of the repository:

```shell
go run . serve
```
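Once the server is up, it can be sanity-checked over its local HTTP API; Ollama listens on port 11434 by default:

```shell
# Query the running server's version over the local API
curl http://localhost:11434/api/version
```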
## macOS (Apple Silicon)

macOS Apple Silicon supports Metal, which is built into the Ollama binary. No additional steps are required.
## macOS (Intel)

Install prerequisites:

- [CMake](https://cmake.org/download/) or `brew install cmake`
Then, configure and build the project:

```shell
cmake -B build
cmake --build build
```
Lastly, run Ollama:

```shell
go run . serve
```
## Windows

Install prerequisites:
Then, configure and build the project:

```shell
cmake -B build
cmake --build build --config Release
```
> [!IMPORTANT]
> Building for ROCm requires additional flags:
>
> ```shell
> cmake -B build -G Ninja -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++
> cmake --build build --config Release
> ```

Lastly, run Ollama:

```shell
go run . serve
```
## Windows (ARM)

Windows ARM does not support additional acceleration libraries at this time. Do not use cmake, simply `go run` or `go build`.

## Linux

Install prerequisites:

- [CMake](https://cmake.org/download/) or `sudo apt install cmake` or `sudo dnf install cmake`
- (Optional) AMD GPU support - [ROCm](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/quick-start.html)
- (Optional) NVIDIA GPU support - [CUDA SDK](https://developer.nvidia.com/cuda-downloads)

> [!IMPORTANT]
> Ensure prerequisites are in `PATH` before running CMake.

Then, configure and build the project:
```shell
cmake -B build
cmake --build build
```
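If the configure step fails to find a toolchain, it can help to first confirm the tools are actually on `PATH`; a minimal sketch (the optional GPU compiler names are assumptions about your stack):

```shell
# Required: CMake must be on PATH for the configure step
command -v cmake || echo "cmake not found"

# Optional GPU toolchains; a missing one just means a CPU-only build
command -v nvcc  || echo "nvcc (CUDA) not on PATH"
command -v hipcc || echo "hipcc (ROCm) not on PATH"
```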
Lastly, run Ollama:
```shell
go run . serve
```
## Docker
```shell
docker build .
```
### ROCm
```shell
docker build --build-arg FLAVOR=rocm .
```
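Once built, the image can be run with the API published on Ollama's default port; a sketch, where the image tag and volume name are illustrative choices, not part of the build output:

```shell
# Tag the build so it can be referenced (tag is hypothetical)
docker build -t ollama-dev .

# Run it, persisting model data in a named volume and exposing port 11434
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama-dev
```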
## Running tests

To run tests, use `go test`:
```shell
go test ./...
```
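`go test` also accepts the standard Go flags for narrowing a run; for example, from the repository root (the package path and test name here are illustrative):

```shell
# Run a single package verbosely
go test ./server/... -v

# Run only tests whose names match a pattern
go test ./... -run TestChat
```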
## Library detection

Ollama looks for acceleration libraries in the following paths relative to the `ollama` executable:

- `./lib/ollama` (Windows)
- `../lib/ollama` (Linux)
- `.` (macOS)
- `build/lib/ollama` (for development)

If the libraries are not found, Ollama will not run with any acceleration libraries.
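As a concrete illustration of those candidate locations (the binary path below is hypothetical, and each platform only consults the entry listed for it):

```shell
# Hypothetical install location of the ollama binary
OLLAMA_BIN=/usr/local/bin/ollama
dir=$(dirname "$OLLAMA_BIN")

# The candidate library directories, resolved relative to that binary
for p in "$dir/lib/ollama" "$dir/../lib/ollama" "$dir" "$dir/build/lib/ollama"; do
  echo "candidate: $p"
done
```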