Latest commit: 717f7229eb by Jeffrey Morgan, Do not shift context for sliding window models (#5368), 10 months ago
Name                    Commit      Last commit message                                          Last updated
ext_server              717f7229eb  Do not shift context for sliding window models (#5368)       10 months ago
generate                96624aa412  Merge pull request #5072 from dhiltgen/windows_path          10 months ago
llama.cpp @ 7c26775adb  152fc202f5  llm: update llama.cpp commit to `7c26775` (#4896)            10 months ago
patches                 4d311eb731  llm: architecture patch (#5316)                              10 months ago
filetype.go             d6f692ad1a  Add support for IQ1_S, IQ3_S, IQ2_S, IQ4_XS, IQ4_NL (#4322)  11 months ago
ggla.go                 cb42e607c5  llm: speed up gguf decoding by a lot (#5246)                 10 months ago
ggml.go                 de2163dafd  gemma2 graph                                                 10 months ago
ggml_test.go            cb42e607c5  llm: speed up gguf decoding by a lot (#5246)                 10 months ago
gguf.go                 cb42e607c5  llm: speed up gguf decoding by a lot (#5246)                 10 months ago
llm.go                  829ff87bd1  revert tokenize ffi (#4761)                                  11 months ago
llm_darwin_amd64.go     58d95cc9bd  Switch back to subprocessing for llama.cpp                   1 year ago
llm_darwin_arm64.go     58d95cc9bd  Switch back to subprocessing for llama.cpp                   1 year ago
llm_linux.go            58d95cc9bd  Switch back to subprocessing for llama.cpp                   1 year ago
llm_windows.go          058f6cd2cc  Move nested payloads to installer and zip file on windows    1 year ago
memory.go               8e0641a9bf  handle asymmetric embedding KVs                              10 months ago
memory_test.go          cb42e607c5  llm: speed up gguf decoding by a lot (#5246)                 10 months ago
payload.go              b2799f111b  Move libraries out of users path                             10 months ago
server.go               cb42e607c5  llm: speed up gguf decoding by a lot (#5246)                 10 months ago
status.go               58d95cc9bd  Switch back to subprocessing for llama.cpp                   1 year ago
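Several entries above revolve around GGUF model decoding (the commit "llm: speed up gguf decoding by a lot (#5246)" touches ggla.go, gguf.go, ggml_test.go, memory_test.go, and server.go). As a rough illustration only, not the repository's actual gguf.go, the sketch below reads the fixed GGUF header described in the public GGUF specification (4-byte magic "GGUF", then little-endian uint32 version, uint64 tensor count, and uint64 metadata key/value count for GGUF v2 and later). The file name, struct, and function names here are hypothetical.

```go
// gguf_header_sketch.go: a minimal, illustrative GGUF header reader.
// This is NOT ollama's gguf.go; it only shows the fixed header layout
// that a GGUF decoder starts from (GGUF v2+).
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"os"
)

// ggufHeader mirrors the fixed-size fields that follow the magic bytes.
type ggufHeader struct {
	Version     uint32 // 2 or 3 for current GGUF files
	TensorCount uint64 // number of tensor descriptors after the metadata
	MetadataKVs uint64 // number of key/value pairs in the metadata section
}

func readGGUFHeader(path string) (ggufHeader, error) {
	var h ggufHeader

	f, err := os.Open(path)
	if err != nil {
		return h, err
	}
	defer f.Close()

	// The first four bytes must spell "GGUF".
	var magic [4]byte
	if _, err := io.ReadFull(f, magic[:]); err != nil {
		return h, err
	}
	if string(magic[:]) != "GGUF" {
		return h, fmt.Errorf("not a GGUF file: magic %q", magic[:])
	}

	// The remaining header fields are packed little-endian with no padding,
	// so they can be read directly into the struct.
	if err := binary.Read(f, binary.LittleEndian, &h); err != nil {
		return h, err
	}
	return h, nil
}

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: gguf_header_sketch <model.gguf>")
		os.Exit(1)
	}
	h, err := readGGUFHeader(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("version=%d tensors=%d metadata kvs=%d\n", h.Version, h.TensorCount, h.MetadataKVs)
}
```

Everything beyond this header (the metadata key/value section and the tensor descriptors) is type-tagged and variable-length, which is where a real decoder such as the one referenced in #5246 spends its time.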