| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Sam | e15307fdf4 | feat: add support for flash_attn (#4120) | 11 months ago |
| Michael Yang | 58876091f7 | log clean up | 1 year ago |
| Daniel Hiltgen | 920a4b0794 | Merge remote-tracking branch 'upstream/main' into pr3702 | 1 year ago |
| Michael Yang | 44869c59d6 | omit prompt and generate settings from final response | 1 year ago |
| jmorganca | fcf4d60eee | llm: add back check for empty token cache | 1 year ago |
| Jeffrey Morgan | 18d9a7e1f1 | update llama.cpp submodule to `f364eb6` (#4060) | 1 year ago |
| Daniel Hiltgen | 23d23409a0 | Update llama.cpp (#4036) | 1 year ago |
| ManniX-ITA | c942e4a07b | Fixed startup sequence to report model loading | 1 year ago |
| Jeffrey Morgan | 7c9792a6e0 | Support unicode characters in model path (#3681) | 1 year ago |
| Daniel Hiltgen | 0a0e9f3e0f | Apply 01-cache.diff | 1 year ago |
| Daniel Hiltgen | 58d95cc9bd | Switch back to subprocessing for llama.cpp | 1 year ago |
| Jeffrey Morgan | f5ca7f8c8e | add license in file header for vendored llama.cpp code (#3351) | 1 year ago |
| Daniel Hiltgen | 43799532c1 | Bump llama.cpp to b2474 | 1 year ago |
| Jeffrey Morgan | e95ffc7448 | llama: remove server static assets (#3174) | 1 year ago |
| Daniel Hiltgen | 85129d3a32 | Adapt our build for imported server.cpp | 1 year ago |
| Daniel Hiltgen | 9ac6440da3 | Import server.cpp as of b2356 | 1 year ago |