| Name | Commit | Last commit message | Last commit date |
|---|---|---|---|
| ext_server | 43799532c1 | Bump llama.cpp to b2474 | 1 year ago |
| generate | dfc6721b20 | add support for libcudart.so for CUDA devices (adds Jetson support) | 1 year ago |
| llama.cpp @ ad3a0505e3 | 8091ef2eeb | Bump llama.cpp to b2527 | 1 year ago |
| patches | 43799532c1 | Bump llama.cpp to b2474 | 1 year ago |
| dyn_ext_server.c | 6c5ccb11f9 | Revamp ROCm support | 1 year ago |
| dyn_ext_server.go | 1b272d5bcd | change `github.com/jmorganca/ollama` to `github.com/ollama/ollama` (#3347) | 1 year ago |
| dyn_ext_server.h | 39928a42e8 | Always dynamically load the llm server library | 1 year ago |
| ggla.go | 0085297928 | refactor readseeker | 1 year ago |
| ggml.go | 0085297928 | refactor readseeker | 1 year ago |
| gguf.go | 1b272d5bcd | change `github.com/jmorganca/ollama` to `github.com/ollama/ollama` (#3347) | 1 year ago |
| llama.go | 1b272d5bcd | change `github.com/jmorganca/ollama` to `github.com/ollama/ollama` (#3347) | 1 year ago |
| llm.go | 1b272d5bcd | change `github.com/jmorganca/ollama` to `github.com/ollama/ollama` (#3347) | 1 year ago |
| payload_common.go | 1b272d5bcd | change `github.com/jmorganca/ollama` to `github.com/ollama/ollama` (#3347) | 1 year ago |
| payload_darwin_amd64.go | 1ffb1e2874 | update llama.cpp submodule to `77d1ac7` (#3030) | 1 year ago |
| payload_darwin_arm64.go | 1b249748ab | Add multiple CPU variants for Intel Mac | 1 year ago |
| payload_linux.go | 6c5ccb11f9 | Revamp ROCm support | 1 year ago |
| payload_test.go | 1b272d5bcd | change `github.com/jmorganca/ollama` to `github.com/ollama/ollama` (#3347) | 1 year ago |
| payload_windows.go | 1b249748ab | Add multiple CPU variants for Intel Mac | 1 year ago |
| utils.go | fccf8d179f | partial decode ggml bin for more info | 1 year ago |