Commit History

| Author | SHA1 | Message | Commit Date |
|---|---|---|---|
| Daniel Hiltgen | 0b03b9c32f | llm: Align cmake define for cuda no peer copy (#6455) | 8 months ago |
| Daniel Hiltgen | a017cf2fea | Split rocm back out of bundle (#6432) | 8 months ago |
| Daniel Hiltgen | 88bb9e3328 | Adjust layout to bin+lib/ollama | 8 months ago |
| Daniel Hiltgen | d470ebe78b | Add Jetson cuda variants for arm | 11 months ago |
| Daniel Hiltgen | c7bcb00319 | Wire up ccache and pigz in the docker based build | 8 months ago |
| Daniel Hiltgen | 74d45f0102 | Refactor linux packaging | 10 months ago |
| Jeffrey Morgan | efbf41ed81 | llm: dont link cuda with compat libs (#5621) | 9 months ago |
| Jeffrey Morgan | 4e262eb2a8 | remove `GGML_CUDA_FORCE_MMQ=on` from build (#5588) | 9 months ago |
| Daniel Hiltgen | 0bacb30007 | Workaround broken ROCm p2p copy | 10 months ago |
| Jeffrey Morgan | 4607c70641 | llm: add `-DBUILD_SHARED_LIBS=off` to common cpu cmake flags (#5520) | 10 months ago |
| Jeffrey Morgan | 2cc854f8cb | llm: fix missing dylibs by restoring old build behavior on Linux and macOS (#5511) | 10 months ago |
| Jeffrey Morgan | 8f8e736b13 | update llama.cpp submodule to `d7fd29f` (#5475) | 10 months ago |
| Daniel Hiltgen | b0930626c5 | Add back lower level parallel flags | 10 months ago |
| Jeffrey Morgan | 152fc202f5 | llm: update llama.cpp commit to `7c26775` (#4896) | 10 months ago |
| Daniel Hiltgen | ab8c929e20 | Add ability to skip oneapi generate | 11 months ago |
| Daniel Hiltgen | 646371f56d | Merge pull request #3278 from zhewang1-intc/rebase_ollama_main | 11 months ago |
| Wang,Zhe | fd5971be0b | support ollama run on Intel GPUs | 11 months ago |
| Daniel Hiltgen | c48c1d7c46 | Port cuda/rocm skip build vars to linux | 11 months ago |
| Roy Yang | 5f73c08729 | Remove trailing spaces (#3889) | 1 year ago |
| Daniel Hiltgen | cc5a71e0e3 | Merge pull request #3709 from remy415/custom-gpu-defs | 1 year ago |
| Jeremy | 440b7190ed | Update gen_linux.sh | 1 year ago |
| Jeremy | 52f5370c48 | add support for custom gpu build flags for llama.cpp | 1 year ago |
| Jeremy | 7c000ec3ed | adds support for OLLAMA_CUSTOM_GPU_DEFS to customize GPU build flags | 1 year ago |
| Jeremy | 8aec92fa6d | rearranged conditional logic for static build, dockerfile updated | 1 year ago |
| Jeremy | 70261b9bb6 | move static build to its own flag | 1 year ago |
| Blake Mizerany | 1524f323a3 | Revert "build.go: introduce a friendlier way to build Ollama (#3548)" (#3564) | 1 year ago |
| Blake Mizerany | fccf3eecaa | build.go: introduce a friendlier way to build Ollama (#3548) | 1 year ago |
| Jeffrey Morgan | 63efa075a0 | update generate scripts with new `LLAMA_CUDA` variable, set `HIP_PLATFORM` to avoid compiler errors (#3528) | 1 year ago |
| Daniel Hiltgen | 58d95cc9bd | Switch back to subprocessing for llama.cpp | 1 year ago |
| Jeremy | dfc6721b20 | add support for libcudart.so for CUDA devices (adds Jetson support) | 1 year ago |