Commit History

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Bruce MacDonald | f2ba1311aa | improve vram safety with 5% vram memory buffer (#724) | 1 year ago |
| Bruce MacDonald | 5d22319a2c | rename server subprocess (#700) | 1 year ago |
| Bruce MacDonald | 9e2de1bd2c | increase streaming buffer size (#692) | 1 year ago |
| Michael Yang | c02c0cd483 | starcoder | 1 year ago |
| Bruce MacDonald | b1f7123301 | clean up num_gpu calculation code (#673) | 1 year ago |
| Bruce MacDonald | 1fbf3585d6 | Relay default values to llama runner (#672) | 1 year ago |
| Bruce MacDonald | 9771b1ec51 | windows runner fixes (#637) | 1 year ago |
| Michael Yang | f40b3de758 | use int64 consistently | 1 year ago |
| Bruce MacDonald | 86279f4ae3 | unbound max num gpu layers (#591) | 1 year ago |
| Bruce MacDonald | 4cba75efc5 | remove tmp directories created by previous servers (#559) | 1 year ago |
| Bruce MacDonald | 1255bc9b45 | only package 11.8 runner | 1 year ago |
| Bruce MacDonald | 4e8be787c7 | pack in cuda libs | 1 year ago |
| Bruce MacDonald | 66003e1d05 | subprocess improvements (#524) | 1 year ago |
| Bruce MacDonald | 2540c9181c | support for packaging in multiple cuda runners (#509) | 1 year ago |
| Michael Yang | 7dee25a07f | fix falcon decode | 1 year ago |
| Bruce MacDonald | f221637053 | first pass at linux gpu support (#454) | 1 year ago |
| Bruce MacDonald | 09dd2aeff9 | GGUF support (#441) | 1 year ago |
| Bruce MacDonald | 42998d797d | subprocess llama.cpp server (#401) | 1 year ago |
| Quinn Slack | f4432e1dba | treat stop as stop sequences, not exact tokens (#442) | 1 year ago |
| Michael Yang | 5ca05c2e88 | fix ModelType() | 1 year ago |
| Michael Yang | a894cc792d | model and file type as strings | 1 year ago |
| Bruce MacDonald | 4b2d366c37 | Update llama.go | 1 year ago |
| Bruce MacDonald | 56fd4e4ef2 | log embedding eval timing | 1 year ago |
| Jeffrey Morgan | 22885aeaee | update `llama.cpp` to `f64d44a` | 1 year ago |
| Michael Yang | 6de5d032e1 | implement loading ggml lora adapters through the modelfile | 1 year ago |
| Michael Yang | fccf8d179f | partial decode ggml bin for more info | 1 year ago |