View source

update go to 1.22 in other places (#2975)

Jeffrey Morgan 1 year ago
parent
commit
d481fb3cc8
3 changed files with 20 additions and 21 deletions
  1. + 7 - 7
      .github/workflows/test.yaml
  2. + 1 - 1
      Dockerfile
  3. + 12 - 13
      docs/development.md

+ 7 - 7
.github/workflows/test.yaml

@@ -21,7 +21,7 @@ jobs:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
-          go-version: '1.21'
+          go-version: '1.22'
          cache: true
      - run: go get ./...
      - run: go generate -x ./...
@@ -46,7 +46,7 @@ jobs:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v4
        with:
-          go-version: '1.21'
+          go-version: '1.22'
          cache: true
      - run: go get ./...
      - run: |
@@ -76,7 +76,7 @@ jobs:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v4
        with:
-          go-version: '1.21'
+          go-version: '1.22'
          cache: true
      - run: go get ./...
      - run: |
@@ -103,14 +103,14 @@ jobs:
    runs-on: ${{ matrix.os }}
    env:
      GOARCH: ${{ matrix.arch }}
-      CGO_ENABLED: "1"
+      CGO_ENABLED: '1'
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: recursive
      - uses: actions/setup-go@v5
        with:
-          go-version: '1.21'
+          go-version: '1.22'
          cache: false
      - run: |
          mkdir -p llm/llama.cpp/build/linux/${{ matrix.arch }}/stub/lib/
@@ -140,14 +140,14 @@ jobs:
    runs-on: ${{ matrix.os }}
    env:
      GOARCH: ${{ matrix.arch }}
-      CGO_ENABLED: "1"
+      CGO_ENABLED: '1'
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: recursive
      - uses: actions/setup-go@v5
        with:
-          go-version: '1.21'
+          go-version: '1.22'
          cache: true
      - run: go get
      - uses: actions/download-artifact@v4

+ 1 - 1
Dockerfile

@@ -1,4 +1,4 @@
-ARG GOLANG_VERSION=1.21.3
+ARG GOLANG_VERSION=1.22.1
 ARG CMAKE_VERSION=3.22.1
 ARG CUDA_VERSION=11.3.1
 

+ 12 - 13
docs/development.md

@@ -3,7 +3,7 @@
 Install required tools:
 
 - cmake version 3.24 or higher
-- go version 1.21 or higher
+- go version 1.22 or higher
 - gcc version 11.4.0 or higher
 
 ```bash
@@ -42,15 +42,15 @@ Now you can run `ollama`:
 
 #### Linux CUDA (NVIDIA)
 
-*Your operating system distribution may already have packages for NVIDIA CUDA. Distro packages are often preferable, but instructions are distro-specific. Please consult distro-specific docs for dependencies if available!*
+_Your operating system distribution may already have packages for NVIDIA CUDA. Distro packages are often preferable, but instructions are distro-specific. Please consult distro-specific docs for dependencies if available!_
 
 Install `cmake` and `golang` as well as [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads)
-development and runtime packages. 
+development and runtime packages.
 
 Typically the build scripts will auto-detect CUDA, however, if your Linux distro
 or installation approach uses unusual paths, you can specify the location by
 specifying an environment variable `CUDA_LIB_DIR` to the location of the shared
-libraries, and `CUDACXX` to the location of the nvcc compiler.  You can customize
+libraries, and `CUDACXX` to the location of the nvcc compiler. You can customize
 set set of target CUDA architectues by setting `CMAKE_CUDA_ARCHITECTURES` (e.g. "50;60;70")
 
 Then generate dependencies:
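As an illustrative sketch of the overrides described in this hunk (the paths below are assumptions for a typical CUDA install, not taken from this diff), the environment variables might be set like so:

```shell
# Hypothetical locations -- adjust to where your distro installs CUDA.
export CUDA_LIB_DIR=/usr/local/cuda/lib64
export CUDACXX=/usr/local/cuda/bin/nvcc

# Optionally restrict the build to specific CUDA architectures.
export CMAKE_CUDA_ARCHITECTURES="50;60;70"

go generate ./...
go build .
```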
@@ -67,7 +67,7 @@ go build .
 
 #### Linux ROCm (AMD)
 
-*Your operating system distribution may already have packages for AMD ROCm and CLBlast. Distro packages are often preferable, but instructions are distro-specific. Please consult distro-specific docs for dependencies if available!*
+_Your operating system distribution may already have packages for AMD ROCm and CLBlast. Distro packages are often preferable, but instructions are distro-specific. Please consult distro-specific docs for dependencies if available!_
 
 Install [CLBlast](https://github.com/CNugteren/CLBlast/blob/master/doc/installation.md) and [ROCm](https://rocm.docs.amd.com/en/latest/deploy/linux/quick_start.html) development packages first, as well as `cmake` and `golang`.
 
@@ -75,7 +75,7 @@ Typically the build scripts will auto-detect ROCm, however, if your Linux distro
 or installation approach uses unusual paths, you can specify the location by
 specifying an environment variable `ROCM_PATH` to the location of the ROCm
 install (typically `/opt/rocm`), and `CLBlast_DIR` to the location of the
-CLBlast install (typically `/usr/lib/cmake/CLBlast`).  You can also customize
+CLBlast install (typically `/usr/lib/cmake/CLBlast`). You can also customize
 the AMD GPU targets by setting AMDGPU_TARGETS (e.g. `AMDGPU_TARGETS="gfx1101;gfx1102"`)
 
 ```
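A minimal sketch of the ROCm overrides this hunk mentions, using the "typical" paths the text itself gives (everything else here is an assumption, not part of this diff):

```shell
# Typical defaults per the docs -- adjust for your installation.
export ROCM_PATH=/opt/rocm
export CLBlast_DIR=/usr/lib/cmake/CLBlast

# Optionally restrict the build to specific AMD GPU targets.
export AMDGPU_TARGETS="gfx1101;gfx1102"

go generate ./...
go build .

# Runtime GPU access: add your user to the render group (or run as root).
sudo usermod -aG render "$USER"
```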
@@ -88,17 +88,17 @@ Then build the binary:
 go build .
 ```
 
-ROCm requires elevated privileges to access the GPU at runtime.  On most distros you can add your user account to the `render` group, or run as root.
+ROCm requires elevated privileges to access the GPU at runtime. On most distros you can add your user account to the `render` group, or run as root.
 
 #### Advanced CPU Settings
 
 By default, running `go generate ./...` will compile a few different variations
 of the LLM library based on common CPU families and vector math capabilities,
 including a lowest-common-denominator which should run on almost any 64 bit CPU
-somewhat slowly.  At runtime, Ollama will auto-detect the optimal variation to
-load.  If you would like to build a CPU-based build customized for your
+somewhat slowly. At runtime, Ollama will auto-detect the optimal variation to
+load. If you would like to build a CPU-based build customized for your
 processor, you can set `OLLAMA_CUSTOM_CPU_DEFS` to the llama.cpp flags you would
-like to use.  For example, to compile an optimized binary for an Intel i9-9880H,
+like to use. For example, to compile an optimized binary for an Intel i9-9880H,
 you might use:
 
 ```
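The hunk above cuts off before the actual example, so the exact flags are not shown in this diff; as an illustrative assumption, `OLLAMA_CUSTOM_CPU_DEFS` set to common llama.cpp CMake vector-math flags for an AVX2-class CPU might look like:

```shell
# Assumed llama.cpp flags for an AVX2/FMA-capable CPU -- not taken from this diff.
OLLAMA_CUSTOM_CPU_DEFS="-DLLAMA_AVX=on -DLLAMA_AVX2=on -DLLAMA_F16C=on -DLLAMA_FMA=on" go generate ./...
go build .
```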
@@ -108,8 +108,7 @@ go build .
 
 #### Containerized Linux Build
 
-If you have Docker available, you can build linux binaries with `./scripts/build_linux.sh` which has the CUDA and ROCm dependencies included.  The resulting binary is placed in `./dist`
-
+If you have Docker available, you can build linux binaries with `./scripts/build_linux.sh` which has the CUDA and ROCm dependencies included. The resulting binary is placed in `./dist`
 
 ### Windows
 
@@ -118,7 +117,7 @@ Note: The windows build for Ollama is still under development.
 Install required tools:
 
 - MSVC toolchain - C/C++ and cmake as minimal requirements
-- go version 1.21 or higher
+- go version 1.22 or higher
 - MinGW (pick one variant) with GCC.
   - <https://www.mingw-w64.org/>
   - <https://www.msys2.org/>