llama

This package integrates llama.cpp as a Go package that can be built with Go build tags targeting different CPU and GPU configurations.

Supported:

  • CPU
  • avx, avx2
  • macOS Metal
  • Windows CUDA
  • Windows ROCm
  • Linux CUDA
  • Linux ROCm
  • Llava
  • Parallel Requests

Extra build steps are required for CUDA and ROCm on Windows, since nvcc and hipcc both require msvc as the host compiler. For these targets, small DLLs are built separately:

  • ggml-cuda.dll
  • ggml-hipblas.dll

Note: it's important that memory is allocated and freed by the same compiler (e.g. entirely by code compiled with msvc or mingw). Issues from this should be rare, but there are some places where pointers are returned by the CUDA or HIP runtime and freed elsewhere, causing a crash. In a future change, the same runtime should be used in both cases to avoid crashes.

Building

go build .

AVX

go build -tags avx .

AVX2

# go doesn't recognize `-mfma` as a valid compiler flag
# see https://github.com/golang/go/issues/17895
go env -w "CGO_CFLAGS_ALLOW=-mfma|-mf16c"
go env -w "CGO_CXXFLAGS_ALLOW=-mfma|-mf16c"
go build -tags=avx,avx2 .
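
The avx/avx2 tags work through Go's build constraints: a file guarded by a `//go:build` line is only compiled when the matching tag is passed. As a minimal sketch (the file contents and cgo flags below are illustrative, not this package's actual layout), a tag-gated file might look like:

```go
//go:build avx2

// Compiled only with `go build -tags=avx,avx2`, so one package can
// carry several CPU-specific variants side by side.
package llama

/*
#cgo CFLAGS: -mavx2 -mfma -mf16c
*/
import "C"
```

Flags like `-mfma` and `-mf16c` are why the `go env -w CGO_CFLAGS_ALLOW=...` step above is needed: cgo rejects them unless they match the allow pattern.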

Linux

CUDA

Install the CUDA toolkit v11.3.1, then build libggml-cuda.so:

./build_cuda.sh

Then build the package with the cuda tag:

go build -tags=cuda .

Windows

CUDA

Install the CUDA toolkit v11.3.1, then build ggml-cuda.dll:

./build_cuda.ps1

Then build the package with the cuda tag:

go build -tags=cuda .

ROCm

Install ROCm 5.7.1 and Strawberry Perl.

Then, build ggml-hipblas.dll:

./build_hipblas.sh

Then build the package with the rocm tag:

go build -tags=rocm .

Syncing with llama.cpp

To update this package to the latest llama.cpp code, use the scripts/sync_llama.sh script from the root of this repo:

cd ollama
./scripts/sync_llama.sh ../llama.cpp