Commit History

Author SHA1 Message Date
Michael Yang 829ff87bd1 revert tokenize ffi (#4761) 11 months ago
Jeffrey Morgan 763bb65dbb use `int32_t` for call to tokenize (#4738) 11 months ago
Michael Yang bf54c845e9 vocab only 11 months ago
Michael Yang 26a00a0410 use ffi for tokenizing/detokenizing 11 months ago
Michael Yang 01811c176a comments 1 year ago
Michael Yang 9685c34509 quantize any fp16/fp32 model 1 year ago
Hernan Martinez 86e67fc4a9 Add import declaration for windows,arm64 to llm.go 1 year ago
Michael Yang 9502e5661f cgo quantize 1 year ago
Daniel Hiltgen 58d95cc9bd Switch back to subprocessing for llama.cpp 1 year ago
Michael Yang 91b3e4d282 update memory calcualtions 1 year ago
Michael Yang d338d70492 refactor model parsing 1 year ago
Patrick Devine 1b272d5bcd change `github.com/jmorganca/ollama` to `github.com/ollama/ollama` (#3347) 1 year ago
Jeffrey Morgan f9cd55c70b disable gpu for certain model architectures and fix divide-by-zero on memory estimation 1 year ago
Daniel Hiltgen 6c5ccb11f9 Revamp ROCm support 1 year ago
Daniel Hiltgen a1dfab43b9 Ensure the libraries are present 1 year ago
Jeffrey Morgan 4458efb73a Load all layers on `arm64` macOS if model is small enough (#2149) 1 year ago
Daniel Hiltgen fedd705aea Mechanical switch from log to slog 1 year ago
Michael Yang eaed6f8c45 add max context length check 1 year ago
Daniel Hiltgen 7427fa1387 Fix up the CPU fallback selection 1 year ago
Daniel Hiltgen de2fbdec99 Merge pull request #1819 from dhiltgen/multi_variant 1 year ago
Michael Yang f4f939de28 Merge pull request #1552 from jmorganca/mxyng/lint-test 1 year ago
Daniel Hiltgen 39928a42e8 Always dynamically load the llm server library 1 year ago
Daniel Hiltgen d88c527be3 Build multiple CPU variants and pick the best 1 year ago
Jeffrey Morgan ab6be852c7 revisit memory allocation to account for full kv cache on main gpu 1 year ago
Daniel Hiltgen 8da7bef05f Support multiple variants for a given llm lib type 1 year ago
Jeffrey Morgan b24e8d17b2 Increase minimum CUDA memory allocation overhead and fix minimum overhead for multi-gpu (#1896) 1 year ago
Michael Yang f921e2696e typo 1 year ago
Jeffrey Morgan f387e9631b use runner if cuda alloc won't fit 1 year ago
Jeffrey Morgan cb534e6ac2 use 10% vram overhead for cuda 1 year ago
Jeffrey Morgan 58ce2d8273 better estimate scratch buffer size 1 year ago