jmorganca  9b5b69c00f  llm: update llama.cpp submodule to `7c26775`  10 months ago

ext_server              fb9cdfa723  Fix server.cpp for the new cuda build macros                        10 months ago
generate                0577af98f4  More parallelism on windows generate                                10 months ago
llama.cpp @ 7c26775adb  9b5b69c00f  llm: update llama.cpp submodule to `7c26775`                        10 months ago
patches                 ce0dc33cb8  llm: patch to fix qwen 2 temporarily on nvidia (#4897)              10 months ago
filetype.go             d6f692ad1a  Add support for IQ1_S, IQ3_S, IQ2_S, IQ4_XS. IQ4_NL (#4322)         11 months ago
ggla.go                 171eb040fc  simplify safetensors reading                                        11 months ago
ggml.go                 6fd04ca922  Improve multi-gpu handling at the limit                             10 months ago
gguf.go                 7bdcd1da94  Revert "Merge pull request #4938 from ollama/mxyng/fix-byte-order"  10 months ago
llm.go                  829ff87bd1  revert tokenize ffi (#4761)                                         11 months ago
llm_darwin_amd64.go     58d95cc9bd  Switch back to subprocessing for llama.cpp                          1 year ago
llm_darwin_arm64.go     58d95cc9bd  Switch back to subprocessing for llama.cpp                          1 year ago
llm_linux.go            58d95cc9bd  Switch back to subprocessing for llama.cpp                          1 year ago
llm_windows.go          058f6cd2cc  Move nested payloads to installer and zip file on windows           1 year ago
memory.go               17df6520c8  Remove mmap related output calc logic                               10 months ago
memory_test.go          6f351bf586  review comments and coverage                                        10 months ago
payload.go              6f351bf586  review comments and coverage                                        10 months ago
server.go               da3bf23354  Workaround gfx900 SDMA bugs                                         10 months ago
status.go               58d95cc9bd  Switch back to subprocessing for llama.cpp                          1 year ago