Michael Yang 1e0a669f75 Merge pull request #3682 from ollama/mxyng/quantize-all-the-things 1 year ago
ext_server 44869c59d6 omit prompt and generate settings from final response 1 year ago
generate 8a65717f55 Do not build AVX runners on ARM64 1 year ago
llama.cpp @ 952d03dbea e33d5c2dbc update llama.cpp commit to `952d03d` 1 year ago
patches 1b0e6c9c0e Fix llava models not working after first request (#4164) 1 year ago
filetype.go 01811c176a comments 1 year ago
ggla.go 8b2c10061c refactor tensor query 1 year ago
ggml.go 01811c176a comments 1 year ago
gguf.go 14476d48cc fixes for gguf (#3863) 1 year ago
llm.go 01811c176a comments 1 year ago
llm_darwin_amd64.go 58d95cc9bd Switch back to subprocessing for llama.cpp 1 year ago
llm_darwin_arm64.go 58d95cc9bd Switch back to subprocessing for llama.cpp 1 year ago
llm_linux.go 58d95cc9bd Switch back to subprocessing for llama.cpp 1 year ago
llm_windows.go 058f6cd2cc Move nested payloads to installer and zip file on windows 1 year ago
memory.go 4736391bfb llm: add minimum based on layer size 1 year ago
payload.go 058f6cd2cc Move nested payloads to installer and zip file on windows 1 year ago
server.go 380378cc80 Use our libraries first 1 year ago
status.go 58d95cc9bd Switch back to subprocessing for llama.cpp 1 year ago