Josh Yan 24e8292e94 new changes 10 months ago
ext_server d8def1ff94 llm: allow gemma 2 to context shift (#5534) 10 months ago
generate 0bacb30007 Workaround broken ROCm p2p copy 10 months ago
llama.cpp @ 7c26775adb c63b4ecbf7 quantize 10 months ago
patches 571dc61955 Update llama.cpp submodule to `a8db2a9c` (#5530) 10 months ago
filetype.go d6f692ad1a Add support for IQ1_S, IQ3_S, IQ2_S, IQ4_XS. IQ4_NL (#4322) 11 months ago
ggla.go cb42e607c5 llm: speed up gguf decoding by a lot (#5246) 10 months ago
ggml.go de2163dafd gemma2 graph 10 months ago
ggml_test.go cb42e607c5 llm: speed up gguf decoding by a lot (#5246) 10 months ago
gguf.go cb42e607c5 llm: speed up gguf decoding by a lot (#5246) 10 months ago
llm.go 24e8292e94 new changes 10 months ago
llm_darwin_amd64.go 58d95cc9bd Switch back to subprocessing for llama.cpp 1 year ago
llm_darwin_arm64.go 58d95cc9bd Switch back to subprocessing for llama.cpp 1 year ago
llm_linux.go 58d95cc9bd Switch back to subprocessing for llama.cpp 1 year ago
llm_windows.go 058f6cd2cc Move nested payloads to installer and zip file on windows 1 year ago
memory.go 8e0641a9bf handle asymmetric embedding KVs 10 months ago
memory_test.go cb42e607c5 llm: speed up gguf decoding by a lot (#5246) 10 months ago
payload.go 0e982bc1f4 Fix corner cases on tmp cleaner on mac 10 months ago
quantize.diff 24e8292e94 new changes 10 months ago
server.go 9bbddc37a7 Merge pull request #5126 from ollama/mxyng/messages 10 months ago
status.go 4d71c559b2 fix error detection by limiting model loading error parsing (#5472) 10 months ago