
# llama

This package integrates the llama.cpp library as a Go package and makes it easy to build with tags for different CPU and GPU processors.

Supported:

- CPU
- avx, avx2
- macOS Metal
- Windows CUDA
- Windows ROCm
- Linux CUDA
- Linux ROCm
- Llava

Extra build steps are required for CUDA and ROCm on Windows, since `nvcc` and `hipcc` both require using `msvc` as the host compiler. For these targets, shared libraries are created:

- `ggml_cuda.dll` on Windows or `ggml_cuda.so` on Linux
- `ggml_hipblas.dll` on Windows or `ggml_hipblas.so` on Linux

> [!NOTE]
> It is important that memory is allocated and freed by the same compiler (e.g. entirely by code compiled with `msvc` or `mingw`). Issues from this should be rare, but there are some places where pointers are returned by the CUDA or HIP runtimes and freed elsewhere, causing a crash. In a future change the same runtime should be used in both cases to avoid crashes.

## Building

```shell
go build .
```

### AVX

```shell
go build -tags avx .
```

### AVX2

```shell
# go doesn't recognize `-mfma` as a valid compiler flag
# see https://github.com/golang/go/issues/17895
go env -w "CGO_CFLAGS_ALLOW=-mfma|-mf16c"
go env -w "CGO_CXXFLAGS_ALLOW=-mfma|-mf16c"
go build -tags=avx,avx2 .
```

## Linux

### CUDA

Install the CUDA toolkit v11.3.1:

```shell
make ggml_cuda.so
go build -tags avx,cuda .
```

### ROCm

Install ROCm.

```shell
make ggml_hipblas.so
go build -tags avx,rocm .
```

## Windows

Download w64devkit for a simple MinGW development environment.

### CUDA

Install the CUDA toolkit v11.3.1, then build the CUDA code:

```shell
make ggml_cuda.dll
go build -tags avx,cuda .
```

### ROCm

Install ROCm.

```shell
make ggml_hipblas.dll
go build -tags avx,rocm .
```

## Building runners

```shell
# build all runners for this platform
make -j
```

Vendoring

Ollama currently vendors llama.cpp and ggml through a vendoring model. While we generally strive to contribute changes back upstream to avoid drift, we cary a small set of patches which are applied to the tracking commit. A set of make targets are available to aid developers in updating to a newer tracking commit, or to work on changes.

If you update the vendored code, start by running the following command to establish the tracking llama.cpp repo in the `./vendor/` directory:

```shell
make apply-patches
```

### Updating Base Commit

#### Pin to new base commit

To update to a newer base commit, select the upstream git tag or commit and update `llama/vendoring`.

#### Applying patches

When updating to a newer base commit, the existing patches may not apply cleanly and may require manual merge resolution.

Start by applying the patches. If any of the patches have conflicts, `git am` will stop at the first failure.

```shell
make apply-patches
```

If you see an error message about a conflict, go into the `./vendor/` directory and perform merge resolution with your preferred tool on the patch commit that failed. Save the file(s) and continue the patch series with `git am --continue`. If any additional patches fail, follow the same pattern until the full patch series is applied. Once finished, run the `create-patches` and `sync` targets to ensure everything is up to date.

```shell
make create-patches sync
```

Build and test Ollama, and make any necessary changes to the Go code based on the new base commit. Submit your PR to the Ollama repo.

#### Generating Patches

When working on new fixes or features that impact vendored code, use the following workflow. First get a clean tracking repo with all current patches applied:

```shell
make apply-patches
```

Now edit the upstream native code in the `./vendor/` directory. You do not need to commit every change in order to build; a dirty working tree in the tracking repo is OK while developing. Simply save in your editor, then run the following to refresh the vendored code with your changes, build the backend(s), and build Ollama:

```shell
make sync
make -j 8
go build .
```

> [!IMPORTANT]
> Do NOT run `apply-patches` while you are iterating, as that will reset the tracking repo. It will detect a dirty tree and abort, but if your tree is clean and you accidentally ran this target, use `git reflog` to recover your commit(s).

Iterate until you're ready to submit PRs. Once your code is ready, commit a change in the `./vendor/` directory, then generate the patches for Ollama with:

```shell
make create-patches
```

> [!IMPORTANT]
> Once you have completed this step, it is safe to run `apply-patches`, since your change is preserved in the patches.

In your ./vendor/ directory, create a branch, and cherry-pick the new commit to that branch, then submit a PR upstream to llama.cpp.

Commit the changes in the ollama repo and submit a PR to Ollama, which will include the vendored code update with your change, along with the patches.

After your upstream PR is merged, follow the Updating Base Commit instructions above, but first remove your patch before running `apply-patches`, since the new base commit already contains your change.