llama

This package integrates the llama.cpp library as a Go package and makes it easy to build it with tags for different CPU and GPU processors.
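In broad strokes, tag selection works through cgo's constraint-prefixed #cgo lines. A minimal sketch of that pattern, with illustrative flag sets rather than the package's actual directives:

package llama

/*
// Hypothetical flag sets for illustration only; the real directives
// live in the package's cgo files.
#cgo CFLAGS: -O3
#cgo avx CFLAGS: -mavx
#cgo avx2 CFLAGS: -mavx2 -mfma -mf16c
#cgo cuda CFLAGS: -DGGML_USE_CUDA
#cgo rocm CFLAGS: -DGGML_USE_HIPBLAS
*/
import "C"

Running go build with -tags avx,cuda then enables only the #cgo lines whose constraints are satisfied.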

Supported:

  • CPU
  • avx, avx2
  • macOS Metal
  • Windows CUDA
  • Windows ROCm
  • Linux CUDA
  • Linux ROCm
  • Llava

Extra build steps are required for CUDA and ROCm on Windows, since nvcc and hipcc both require msvc as the host compiler. For these targets, shared libraries are created (see the loading sketch after this list):

  • ggml_cuda.dll on Windows or ggml_cuda.so on Linux
  • ggml_hipblas.dll on Windows or ggml_hipblas.so on Linux
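How the package binds to these libraries is an internal detail, but a built library can be sanity-checked with a standalone probe. A hypothetical Windows-only example, assuming ggml_cuda.dll is on the DLL search path:

//go:build windows

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Try to load the backend library produced by `make ggml_cuda.dll`.
	dll, err := syscall.LoadDLL("ggml_cuda.dll")
	if err != nil {
		fmt.Println("could not load ggml_cuda.dll:", err)
		return
	}
	defer dll.Release()
	fmt.Println("loaded", dll.Name)
}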

Note: it's important that memory is allocated and freed by the same compiler (e.g. entirely by code compiled with msvc, or entirely by code compiled with mingw). Issues from this should be rare, but there are some places where pointers are returned by the CUDA or HIP runtimes and freed elsewhere, causing a crash. In a future change the same runtime should be used in both cases to avoid crashes.
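A contrived cgo sketch of that hazard (not code from this package); here allocation and free stay in the same runtime, and the comments mark the cross-runtime case that crashes:

package main

/*
#include <stdlib.h>
#include <string.h>

// Stand-in for a function exported by an msvc-built shared library that
// allocates from that library's CRT heap.
static char *backend_message(void) {
	char *s = malloc(6);
	strcpy(s, "hello");
	return s;
}
*/
import "C"

import (
	"fmt"
	"unsafe"
)

func main() {
	p := C.backend_message()
	fmt.Println(C.GoString(p))
	// Safe here: the pointer is returned to the same C runtime that
	// allocated it. If backend_message instead lived in an msvc-built DLL
	// while this file were compiled with mingw, this free would hand the
	// pointer to a different CRT heap and could crash.
	C.free(unsafe.Pointer(p))
}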

Building

go build .

AVX

go build -tags avx .

AVX2

# go doesn't recognize `-mfma` as a valid compiler flag
# see https://github.com/golang/go/issues/17895
go env -w "CGO_CFLAGS_ALLOW=-mfma|-mf16c"
go env -w "CGO_CXXFLAGS_ALLOW=-mfma|-mf16c"
go build -tags=avx,avx2 .

Linux

CUDA

Install the CUDA toolkit v11.3.1:

make ggml_cuda.so
go build -tags avx,cuda .

ROCm

Install ROCm 5.7.1:

make ggml_hipblas.so
go build -tags avx,rocm .

Windows

Download w64devkit for a simple MinGW development environment.

CUDA

Install the CUDA toolkit v11.3.1, then build the CUDA code:

make ggml_cuda.dll
go build -tags avx,cuda .

ROCm

Install ROCm 5.7.1, then build the ROCm code:

make ggml_hipblas.dll
go build -tags avx,rocm .

Building runners

# build all runners for this platform
make -j

Syncing with llama.cpp

To update this package to the latest llama.cpp code, use the sync.sh script:

./sync.sh ../../llama.cpp