# Extern C Server

This directory contains a thin facade layered on top of the llama.cpp server to expose `extern "C"` interfaces, so the functionality can be accessed through direct in-process API calls. The llama.cpp code uses compile-time macros to configure the GPU type along with other settings. During `go generate ./...`, the build generates one or more copies of the llama.cpp `extern "C"` server based on which GPU libraries are detected, supporting multiple GPU types as well as CPU-only operation. The Ollama Go build then embeds these different servers so the appropriate one can be selected for the available GPU and settings at runtime.

If you are making changes to the code in this directory, make sure to disable caching during your `go build` so that your changes are picked up. A typical iteration cycle from the top of the source tree looks like:

```shell
go generate ./... && go build -a .
```