| Name | Commit | Message | Last change |
| --- | --- | --- | --- |
| ext_server | b9f5e16c80 | Introduce `/api/embed` endpoint supporting batch embedding (#5127) | 9 months ago |
| generate | 283948c83b | Adjust windows ROCm discovery | 9 months ago |
| llama.cpp @ d94c6e0ccb | f8fedbda20 | Update llama.cpp submodule commit to `d94c6e0c` (#5805) | 9 months ago |
| patches | bbf8f102ee | Revert "llm(llama): pass rope factors (#5924)" (#5963) | 9 months ago |
| filetype.go | d6f692ad1a | Add support for IQ1_S, IQ3_S, IQ2_S, IQ4_XS. IQ4_NL (#4322) | 11 months ago |
| ggla.go | cb42e607c5 | llm: speed up gguf decoding by a lot (#5246) | 10 months ago |
| ggml.go | 5a739ff4cb | chatglm graph | 9 months ago |
| ggml_test.go | cb42e607c5 | llm: speed up gguf decoding by a lot (#5246) | 10 months ago |
| gguf.go | 4a565cbf94 | add chat and generate tests with mock runner | 9 months ago |
| llm.go | 10e768826c | fix: quant err message (#5616) | 9 months ago |
| llm_darwin_amd64.go | 58d95cc9bd | Switch back to subprocessing for llama.cpp | 1 year ago |
| llm_darwin_arm64.go | 58d95cc9bd | Switch back to subprocessing for llama.cpp | 1 year ago |
| llm_linux.go | 58d95cc9bd | Switch back to subprocessing for llama.cpp | 1 year ago |
| llm_windows.go | 058f6cd2cc | Move nested payloads to installer and zip file on windows | 1 year ago |
| memory.go | 8e0641a9bf | handle asymmetric embedding KVs | 10 months ago |
| memory_test.go | cb42e607c5 | llm: speed up gguf decoding by a lot (#5246) | 10 months ago |
| payload.go | 0e982bc1f4 | Fix corner cases on tmp cleaner on mac | 10 months ago |
| server.go | a3c20e3f18 | Refine error reporting for subprocess crash | 9 months ago |
| status.go | 4d71c559b2 | fix error detection by limiting model loading error parsing (#5472) | 10 months ago |