Install required tools:
Optionally enable debugging and more verbose logging:

```shell
# At build time
export CGO_CFLAGS="-g"

# At runtime
export OLLAMA_DEBUG=1
```
Get the required libraries and build the native LLM code (adjust the job count to your number of processors for a faster build):

```shell
make -j 5
```
Then build ollama:

```shell
go build .
```
Now you can run `ollama`:

```shell
./ollama
```
If you are using Xcode newer than version 14, you may see a warning during `go build` about `ld: warning: ignoring duplicate libraries: '-lobjc'` due to Golang issue https://github.com/golang/go/issues/67799, which can be safely ignored. You can suppress the warning with `export CGO_LDFLAGS="-Wl,-no_warn_duplicate_libraries"`.
Your operating system distribution may already have packages for NVIDIA CUDA. Distro packages are often preferable, but instructions are distro-specific. Please consult distro-specific docs for dependencies if available!
Install `make`, `gcc` and `golang` as well as the NVIDIA CUDA development and runtime packages.
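As an illustrative sketch only (package names and the CUDA install method vary by distro), on a Debian or Ubuntu system installing the base toolchain might look like:

```shell
# Hypothetical Debian/Ubuntu package names - check your distro's docs
sudo apt-get update
sudo apt-get install -y make gcc golang
```

The CUDA development and runtime packages are best installed following NVIDIA's or your distro's own instructions.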
Typically the build scripts will auto-detect CUDA. However, if your Linux distro or installation approach uses unusual paths, you can point the build at the right locations by setting the environment variable `CUDA_LIB_DIR` to the location of the shared libraries and `CUDACXX` to the location of the `nvcc` compiler. You can customize the set of target CUDA architectures by setting `CMAKE_CUDA_ARCHITECTURES` (e.g. `"50;60;70"`).
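For example, a non-standard CUDA install might be described to the build like this (the paths below are illustrative assumptions, not defaults):

```shell
# Hypothetical paths - adjust to match where your CUDA toolkit actually lives
export CUDA_LIB_DIR=/usr/local/cuda/lib64   # CUDA shared libraries
export CUDACXX=/usr/local/cuda/bin/nvcc     # nvcc compiler
export CMAKE_CUDA_ARCHITECTURES="50;60;70"  # limit the target architectures
```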
Then generate dependencies (adjust the job count to your number of processors for a faster build):

```shell
make -j 5
```
Then build the binary:

```shell
go build .
```
Your operating system distribution may already have packages for AMD ROCm and CLBlast. Distro packages are often preferable, but instructions are distro-specific. Please consult distro-specific docs for dependencies if available!
Install CLBlast and ROCm development packages first, as well as `make`, `gcc`, and `golang`.
Typically the build scripts will auto-detect ROCm. However, if your Linux distro or installation approach uses unusual paths, you can point the build at the right locations by setting the environment variable `ROCM_PATH` to the location of the ROCm install (typically `/opt/rocm`) and `CLBlast_DIR` to the location of the CLBlast install (typically `/usr/lib/cmake/CLBlast`). You can also customize the AMD GPU targets by setting `AMDGPU_TARGETS` (e.g. `AMDGPU_TARGETS="gfx1101;gfx1102"`).
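For example, using the typical locations mentioned above (adjust if your install differs):

```shell
# Typical locations per the note above - adjust for your system
export ROCM_PATH=/opt/rocm
export CLBlast_DIR=/usr/lib/cmake/CLBlast
export AMDGPU_TARGETS="gfx1101;gfx1102"
```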
Then generate dependencies (adjust the job count to your number of processors for a faster build):

```shell
make -j 5
```
Then build the binary:

```shell
go build .
```
ROCm requires elevated privileges to access the GPU at runtime. On most distros you can add your user account to the `render` group, or run as root.
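As a sketch, on distros that use the `render` group this usually amounts to (the group name can differ on some systems):

```shell
# Add the current user to the render group; takes effect at next login
sudo usermod -a -G render "$USER"
```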
By default, running `make` will compile a few different variations of the LLM library based on common CPU families and vector math capabilities, including a lowest-common-denominator build which should run on almost any 64 bit CPU, albeit somewhat slowly. At runtime, Ollama will auto-detect the optimal variation to load.
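To see which vector math capabilities your CPU advertises (and hence which variation is likely to be loaded), you can inspect `/proc/cpuinfo` on Linux. This check is only a sketch and covers a single common x86-64 flag:

```shell
# Report whether this CPU supports AVX2; without it, the
# lowest-common-denominator library variation will be used
if grep -qw avx2 /proc/cpuinfo; then
  echo "avx2 supported"
else
  echo "avx2 not supported - expect the slower fallback variation"
fi
```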
Custom CPU settings are not currently supported in the new Go server build but will be added back after we complete the transition.
If you have Docker available, you can build Linux binaries with `./scripts/build_linux.sh`, which has the CUDA and ROCm dependencies included. The resulting binary is placed in `./dist`.
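For example, from the repository root (assuming Docker is installed and running):

```shell
./scripts/build_linux.sh
ls ./dist
```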
The following tools are required as a minimal development environment to build CPU inference support.
Run:

```shell
pacman -S mingw-w64-clang-x86_64-gcc-compat mingw-w64-clang-x86_64-clang make
```

to install the required tools, then add `C:\msys64\clang64\bin` and `c:\msys64\usr\bin` to your environment variable `PATH` in the environment where you will perform the build steps below (e.g. system-wide, account-level, powershell, cmd, etc.).

> [!NOTE]
> Due to bugs in the GCC C++ library for unicode support, Ollama should be built with clang on Windows.
Then, build the `ollama` binary:

```powershell
$env:CGO_ENABLED="1"
make -j 8
go build .
```
The GPU tools require the Microsoft native build tools. To build either CUDA or ROCm, you must first install MSVC via Visual Studio:

- Make sure to select `Desktop development with C++` as a Workload during the Visual Studio install
- Add the location of the native compiler (`cl.exe`) to your `PATH`
To build with CUDA support, install the NVIDIA CUDA toolkit in addition to the common Windows development tools and MSVC described above.
To build with ROCm support, install the AMD HIP SDK in addition to the common Windows development tools and MSVC described above.
The default `Developer PowerShell for VS 2022` may default to x86, which is not what you want. To ensure you get an arm64 development environment, start a plain PowerShell terminal and run:

```powershell
import-module 'C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\Tools\Microsoft.VisualStudio.DevShell.dll'
Enter-VsDevShell -Arch arm64 -vsinstallpath 'C:\Program Files\Microsoft Visual Studio\2022\Community' -skipautomaticlocation
```

You can confirm with `write-host $env:VSCMD_ARG_TGT_ARCH`.
Follow the instructions at https://www.msys2.org/wiki/arm64/ to set up an arm64 msys2 environment. Ollama requires gcc and mingw32-make to compile, which are not currently available on Windows arm64, but a gcc compatibility adapter is available via `mingw-w64-clang-aarch64-gcc-compat`. At a minimum you will need to install the following:

```shell
pacman -S mingw-w64-clang-aarch64-clang mingw-w64-clang-aarch64-gcc-compat mingw-w64-clang-aarch64-make make
```
You will need to ensure your `PATH` includes go, cmake, gcc, and clang mingw32-make (typically `C:\msys64\clangarm64\bin\`) to build ollama from source.