Commit History

| Author | SHA1 | Message | Date |
|---|---|---|---|
| jmorganca | 01ccbc07fe | replace static build in `llm` | 1 year ago |
| Daniel Hiltgen | 0b03b9c32f | llm: Align cmake define for cuda no peer copy (#6455) | 8 months ago |
| Daniel Hiltgen | a017cf2fea | Split rocm back out of bundle (#6432) | 9 months ago |
| Daniel Hiltgen | 88bb9e3328 | Adjust layout to bin+lib/ollama | 9 months ago |
| Daniel Hiltgen | d470ebe78b | Add Jetson cuda variants for arm | 11 months ago |
| Daniel Hiltgen | c7bcb00319 | Wire up ccache and pigz in the docker based build | 9 months ago |
| Daniel Hiltgen | 74d45f0102 | Refactor linux packaging | 10 months ago |
| Jeffrey Morgan | efbf41ed81 | llm: dont link cuda with compat libs (#5621) | 10 months ago |
| Jeffrey Morgan | 4e262eb2a8 | remove `GGML_CUDA_FORCE_MMQ=on` from build (#5588) | 10 months ago |
| Daniel Hiltgen | 0bacb30007 | Workaround broken ROCm p2p copy | 10 months ago |
| Jeffrey Morgan | 4607c70641 | llm: add `-DBUILD_SHARED_LIBS=off` to common cpu cmake flags (#5520) | 10 months ago |
| Jeffrey Morgan | 2cc854f8cb | llm: fix missing dylibs by restoring old build behavior on Linux and macOS (#5511) | 10 months ago |
| Jeffrey Morgan | 8f8e736b13 | update llama.cpp submodule to `d7fd29f` (#5475) | 10 months ago |
| Daniel Hiltgen | b0930626c5 | Add back lower level parallel flags | 11 months ago |
| Jeffrey Morgan | 152fc202f5 | llm: update llama.cpp commit to `7c26775` (#4896) | 11 months ago |
| Daniel Hiltgen | ab8c929e20 | Add ability to skip oneapi generate | 11 months ago |
| Daniel Hiltgen | 646371f56d | Merge pull request #3278 from zhewang1-intc/rebase_ollama_main | 11 months ago |
| Wang,Zhe | fd5971be0b | support ollama run on Intel GPUs | 11 months ago |
| Daniel Hiltgen | c48c1d7c46 | Port cuda/rocm skip build vars to linux | 1 year ago |
| Roy Yang | 5f73c08729 | Remove trailing spaces (#3889) | 1 year ago |
| Daniel Hiltgen | cc5a71e0e3 | Merge pull request #3709 from remy415/custom-gpu-defs | 1 year ago |
| Jeremy | 440b7190ed | Update gen_linux.sh | 1 year ago |
| Jeremy | 52f5370c48 | add support for custom gpu build flags for llama.cpp | 1 year ago |
| Jeremy | 7c000ec3ed | adds support for OLLAMA_CUSTOM_GPU_DEFS to customize GPU build flags | 1 year ago |
| Jeremy | 8aec92fa6d | rearranged conditional logic for static build, dockerfile updated | 1 year ago |
| Jeremy | 70261b9bb6 | move static build to its own flag | 1 year ago |
| Blake Mizerany | 1524f323a3 | Revert "build.go: introduce a friendlier way to build Ollama (#3548)" (#3564) | 1 year ago |
| Blake Mizerany | fccf3eecaa | build.go: introduce a friendlier way to build Ollama (#3548) | 1 year ago |
| Jeffrey Morgan | 63efa075a0 | update generate scripts with new `LLAMA_CUDA` variable, set `HIP_PLATFORM` to avoid compiler errors (#3528) | 1 year ago |
| Daniel Hiltgen | 58d95cc9bd | Switch back to subprocessing for llama.cpp | 1 year ago |