Commit History

Author SHA1 Message Date
Jeffrey Morgan e04c7012c2 update llama.cpp submodule to `1e6f6554` (#6208) 9 months ago
royjhan 86b907f82a sort batch results (#6189) 9 months ago
royjhan 1b44d873e7 Add Metrics to `api/embed` response (#5709) 9 months ago
Jeffrey Morgan 68ee42f995 update llama.cpp submodule to `6eeaeba1` (#6039) 9 months ago
Daniel Hiltgen e12fff8810 Enable windows error dialog for subprocess startup 9 months ago
royjhan b9f5e16c80 Introduce `/api/embed` endpoint supporting batch embedding (#5127) 9 months ago
Jeffrey Morgan d8def1ff94 llm: allow gemma 2 to context shift (#5534) 10 months ago
Jeffrey Morgan 0e09c380fc llm: print caching notices in debug only (#5533) 10 months ago
Jeffrey Morgan d89454de80 Use slot with cached prompt instead of least recently used (#5492) 10 months ago
royjhan 3b5a4a77f3 Return Correct Prompt Eval Count Regardless of Cache Prompt (#5371) 10 months ago
Jeffrey Morgan 717f7229eb Do not shift context for sliding window models (#5368) 10 months ago
Michael Yang 9d91e5e587 remove confusing log message 10 months ago
Daniel Hiltgen fb9cdfa723 Fix server.cpp for the new cuda build macros 11 months ago
Jeffrey Morgan ead259d877 llm: fix seed value not being applied to requests (#4986) 10 months ago
Jeffrey Morgan 34f142797a llm: always add bos token to prompt (#4941) 10 months ago
Michael Yang 829ff87bd1 revert tokenize ffi (#4761) 11 months ago
Michael Yang de781b37c8 rm unused infill 11 months ago
Michael Yang 3e21799377 rm unused system prompt 11 months ago
Michael Yang 26a00a0410 use ffi for tokenizing/detokenizing 11 months ago
Michael Yang 714adb8bd1 bump (#4597) 11 months ago
Daniel Hiltgen b37b496a12 Wire up load progress 11 months ago
Sam e15307fdf4 feat: add support for flash_attn (#4120) 11 months ago
Michael Yang 58876091f7 log clean up 11 months ago
Daniel Hiltgen 920a4b0794 Merge remote-tracking branch 'upstream/main' into pr3702 1 year ago
Michael Yang 44869c59d6 omit prompt and generate settings from final response 1 year ago
jmorganca fcf4d60eee llm: add back check for empty token cache 1 year ago
Jeffrey Morgan 18d9a7e1f1 update llama.cpp submodule to `f364eb6` (#4060) 1 year ago
Daniel Hiltgen 23d23409a0 Update llama.cpp (#4036) 1 year ago
ManniX-ITA c942e4a07b Fixed startup sequence to report model loading 1 year ago
Jeffrey Morgan 7c9792a6e0 Support unicode characters in model path (#3681) 1 year ago