Author | Commit | Message | Date
Patrick Devine | 1b272d5bcd | change `github.com/jmorganca/ollama` to `github.com/ollama/ollama` (#3347) | 1 year ago
Michael Yang | 3c4ad0ecab | dyn global | 1 year ago
Bruce MacDonald | 3e22611200 | token repeat limit for prediction requests (#3080) | 1 year ago
Bruce MacDonald | 2f804068bd | warn when json format is expected but not mentioned in prompt (#3081) | 1 year ago
Bruce MacDonald | b80661e8c7 | relay load model errors to the client (#3065) | 1 year ago
Daniel Hiltgen | 6c5ccb11f9 | Revamp ROCm support | 1 year ago
Jeffrey Morgan | 4613a080e7 | update llama.cpp submodule to `66c1968f7` (#2618) | 1 year ago
Daniel Hiltgen | 6680761596 | Shutdown faster | 1 year ago
Jeffrey Morgan | f11bf0740b | use `llm.ImageData` | 1 year ago
Jeffrey Morgan | 2e06ed01d5 | remove unknown `CPPFLAGS` option | 1 year ago
Jeffrey Morgan | a64570dcae | Fix clearing kv cache between requests with the same prompt (#2186) | 1 year ago
Daniel Hiltgen | 3bc28736cd | Merge pull request #2143 from dhiltgen/llm_verbosity | 1 year ago
Daniel Hiltgen | 730dcfcc7a | Refine debug logging for llm | 1 year ago
Daniel Hiltgen | 27a2d5af54 | Debug logging on init failure | 1 year ago
Jeffrey Morgan | 89c4aee29e | Unlock mutex when failing to load model (#2117) | 1 year ago
Daniel Hiltgen | fedd705aea | Mechanical switch from log to slog | 1 year ago
Daniel Hiltgen | 1b249748ab | Add multiple CPU variants for Intel Mac | 1 year ago
Bruce MacDonald | a897e833b8 | do not cache prompt (#2018) | 1 year ago
Daniel Hiltgen | 2ecb247276 | Fix intel mac build | 1 year ago
Daniel Hiltgen | 39928a42e8 | Always dynamically load the llm server library | 1 year ago