# `llama`

This package integrates [llama.cpp](https://github.com/ggerganov/llama.cpp) as a Go package that is easy to build with tags for different CPU features and GPU back ends.

Supported:

- CPU
- avx, avx2
- macOS Metal
- Windows CUDA
- Windows ROCm
- Linux CUDA
- Linux ROCm
- Llava
- Parallel Requests

Extra build steps are required for CUDA and ROCm on Windows, since `nvcc` and `hipcc` both require MSVC as the host compiler. For these targets, small DLLs are built separately:

- `ggml-cuda.dll`
- `ggml-hipblas.dll`

Note: it's important that memory is allocated and freed by the same compiler (e.g. entirely by code compiled with MSVC, or entirely by code compiled with MinGW). Issues from this should be rare, but there are some places where pointers are returned by the CUDA or HIP runtimes and freed elsewhere, causing a crash. A future change should use the same runtime on both sides to avoid these crashes.

## Building

```shell
go build .
```

### AVX

```shell
go build -tags avx .
```
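
These tags work through ordinary cgo build constraints. The sketch below is illustrative only, not the package's actual source: a file guarded by a build tag contributes extra compiler flags, so they apply only when that tag is passed to `go build`.

```go
//go:build avx

package llama

// Illustrative sketch (hypothetical file, not the real one): with
// `go build -tags avx .` this file is compiled and its #cgo directives
// add -mavx when building the package's C/C++ sources; without the tag
// the file is skipped and the baseline CPU build is produced.

/*
#cgo CFLAGS: -mavx
#cgo CXXFLAGS: -mavx
*/
import "C"
```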

### AVX2

```shell
# go doesn't recognize `-mfma` as a valid compiler flag
# see https://github.com/golang/go/issues/17895
go env -w "CGO_CFLAGS_ALLOW=-mfma|-mf16c"
go env -w "CGO_CXXFLAGS_ALLOW=-mfma|-mf16c"
go build -tags=avx,avx2 .
```
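
The `go env -w` settings persist in your Go environment. To confirm they took effect, or to remove them later, read or unset them with `go env`:

```shell
# print the current values
go env CGO_CFLAGS_ALLOW CGO_CXXFLAGS_ALLOW

# unset them when they are no longer needed
go env -u CGO_CFLAGS_ALLOW CGO_CXXFLAGS_ALLOW
```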

### Linux

#### CUDA

Install the CUDA toolkit v11.3.1, then build `libggml-cuda.so`:

```shell
./build_cuda.sh
```
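
If the build picks up an unexpected toolkit, it can help to check which `nvcc` is on your `PATH` and what version it reports before running the script:

```shell
which nvcc
nvcc --version
```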

Then build the package with the `cuda` tag:

```shell
go build -tags=cuda .
```

### Windows

#### CUDA

Install the CUDA toolkit v11.3.1, then build `ggml-cuda.dll`:

```shell
./build_cuda.ps1
```
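
As a sanity check, you can confirm the CUDA compiler is visible from PowerShell and, assuming the script leaves the DLL in this directory (adjust the path if your copy differs), that `ggml-cuda.dll` was produced:

```powershell
nvcc --version
Get-ChildItem ggml-cuda.dll
```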

Then build the package with the `cuda` tag:

```shell
go build -tags=cuda .
```

#### ROCm

Install ROCm 5.7.1 and Strawberry Perl.
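
Both are assumed to be reachable from the shell that runs the build script; a quick check, assuming their installers added them to `PATH`:

```shell
hipcc --version
perl --version
```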

Then build `ggml-hipblas.dll`:

```shell
./build_hipblas.sh
```

Then build the package with the `rocm` tag:

```shell
go build -tags=rocm .
```

## Syncing with llama.cpp

To update this package to the latest llama.cpp code, use the `scripts/sync_llama.sh` script from the root of this repo:

```shell
cd ollama
./scripts/sync_llama.sh ../llama.cpp
```
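
The script takes the path to a local llama.cpp checkout, so clone one first if it isn't already present; any location works as long as the path passed to the script matches (the sibling directory used above is just one option):

```shell
git clone https://github.com/ggerganov/llama.cpp ../llama.cpp
```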