Michael Yang 75a07dd8f7 integrate mllama.cpp to server.cpp 7 months ago
ext_server 75a07dd8f7 integrate mllama.cpp to server.cpp 6 months ago
generate d632e23fba Add Windows arm64 support to official builds (#5712) 7 months ago
llama.cpp @ 8962422b1c 5e2653f9fe llm: update llama.cpp commit to 8962422 (#6618) 8 months ago
patches 7d5e0ff80e add server.cpp and patches 7 months ago
filetype.go d6f692ad1a Add support for IQ1_S, IQ3_S, IQ2_S, IQ4_XS. IQ4_NL (#4322) 11 months ago
ggla.go 6b252918fb update convert test to check result data 9 months ago
ggml.go bf612cd608 Merge pull request #6260 from ollama/mxyng/mem 7 months ago
ggml_test.go cb42e607c5 llm: speed up gguf decoding by a lot (#5246) 10 months ago
gguf.go 6ffb5cb017 add conversion for microsoft phi 3 mini/medium 4k, 128 8 months ago
llm.go d632e23fba Add Windows arm64 support to official builds (#5712) 7 months ago
llm_darwin.go cd5c8f6471 Optimize container images for startup (#6547) 7 months ago
llm_linux.go cd5c8f6471 Optimize container images for startup (#6547) 7 months ago
llm_windows.go dbba73469d runner: Set windows above normal priority (#6905) 7 months ago
memory.go 56318fb365 Improve logging on GPU too small (#6666) 7 months ago
memory_test.go 77903ab8b4 llama3.1 8 months ago
server.go a2d33ee390 linter feeding 7 months ago
status.go 04210aa6dd Catch one more error log 9 months ago