
llama

Note: this package is not used in Ollama yet. For now, see the llm package.

This package integrates the llama.cpp library as a Go package and makes it easy to build with tags for different CPU and GPU targets.

Supported:

  • CPU (avx, avx2)
  • macOS Metal
  • Windows CUDA
  • Windows ROCm
  • Linux CUDA
  • Linux ROCm
  • Llava
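
Each target above is selected with a Go build tag, with the matching compiler flags supplied through cgo directives. A hypothetical sketch of the pattern (the file name and directives are illustrative, not copied from this package):

```go
//go:build avx2

// cpu_avx2.go — only compiled when building with `go build -tags avx,avx2 .`
package llama

/*
#cgo CFLAGS: -mavx2 -mfma -mf16c
*/
import "C"
```

Files without a matching build constraint are skipped entirely, so a plain `go build .` produces a baseline CPU binary with none of these flags.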

Extra build steps are required for CUDA and ROCm on Windows, since nvcc and hipcc both require msvc as the host compiler. For these, shared libraries are created:

  • ggml_cuda.dll on Windows or ggml_cuda.so on Linux
  • ggml_hipblas.dll on Windows or ggml_hipblas.so on Linux
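
On the Go side, a build tag can then link against the prebuilt shared library instead of compiling the GPU sources directly under cgo. A hypothetical sketch (directives are illustrative, not this package's actual cgo configuration):

```go
//go:build cuda

// cuda.go — only compiled when building with `go build -tags avx,cuda .`
package llama

/*
#cgo windows LDFLAGS: -L. -lggml_cuda
#cgo linux   LDFLAGS: -L. -lggml_cuda
*/
import "C"
```

This keeps the msvc-compiled (or nvcc/hipcc-compiled) code isolated behind a stable shared-library boundary, while the rest of the package builds with the default Go toolchain.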

Note: it's important that memory is allocated and freed by the same compiler (e.g. entirely by code compiled with msvc or mingw). Issues from this should be rare, but there are some places where pointers are returned by the CUDA or HIP runtimes and freed elsewhere, causing a crash. In a future change, the same runtime should be used in both cases to avoid crashes.

Building

go build .

AVX

go build -tags avx .

AVX2

# go doesn't recognize `-mfma` as a valid compiler flag
# see https://github.com/golang/go/issues/17895
go env -w "CGO_CFLAGS_ALLOW=-mfma|-mf16c"
go env -w "CGO_CXXFLAGS_ALLOW=-mfma|-mf16c"
go build -tags=avx,avx2 .

Linux

CUDA

Install the CUDA toolkit v11.3.1:

make ggml_cuda.so
go build -tags avx,cuda .

ROCm

Install ROCm 5.7.1:

make ggml_hipblas.so
go build -tags avx,rocm .

Windows

Download w64devkit for a simple MinGW development environment.

CUDA

Install the CUDA toolkit v11.3.1 then build the cuda code:

make ggml_cuda.dll
go build -tags avx,cuda .

ROCm

Install ROCm 5.7.1.

make ggml_hipblas.dll
go build -tags avx,rocm .

Building runners

# build all runners for this platform
make -j

Syncing with llama.cpp

To update this package to the latest llama.cpp code, use the sync.sh script:

./sync.sh ../../llama.cpp