
README instructions and build fixes

Jannik Streidl 1 year ago
parent
current commit
33ad2381aa
2 changed files with 60 additions and 3 deletions
  1. README.md (+59, -0)
  2. backend/config.py (+1, -3)

+ 59 - 0
README.md

@@ -113,6 +113,65 @@ Don't forget to explore our sibling project, [Open WebUI Community](https://open
 
 - After installation, you can access Open WebUI at [http://localhost:3000](http://localhost:3000). Enjoy! 😄
 
+- **If you want to customize your build with additional args**, use the following commands:
+
+  > [!NOTE]  
+  > If you only want to use Open WebUI with Ollama included or with CUDA acceleration, it's recommended to use our official images with the tags `:cuda` or `:with-ollama`.
+  > If you want a combination of both, or more customisation options such as a different embedding model and/or CUDA version, you need to build the image yourself following the instructions below.
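+
+  For reference, pulling one of those prebuilt images is a plain `docker pull` (shown here with the `:cuda` tag mentioned above; this assumes the tag is published alongside the main image):
+
+  ```bash
+  docker pull ghcr.io/open-webui/open-webui:cuda
+  ```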
+
+  **For the build:**
+
+  ```bash
+  docker build -t open-webui .
+  ```
+
+  Optional build args (add them to the `docker build` command above as needed), for example:
+
+  ```bash
+  --build-arg="USE_EMBEDDING_MODEL=intfloat/multilingual-e5-large"
+  ```
+
+  For "intfloat/multilingual-e5-large" custom embedding model (default is all-MiniLM-L6-v2), only works with [sentence transforer models](https://huggingface.co/models?library=sentence-transformers). Current [Leaderbord](https://huggingface.co/spaces/mteb/leaderboard) of embedding models.
+
+  ```bash
+  --build-arg="USE_OLLAMA=true"
+  ```
+
+  Includes Ollama in the image.
+
+  ```bash
+  --build-arg="USE_CUDA=true"
+  ```
+
+  Enables CUDA acceleration for the embedding and Whisper models.
+
+  > [!NOTE]
+  > You need to install the [Nvidia CUDA container toolkit](https://docs.nvidia.com/dgx/nvidia-container-runtime-upgrade/) on your machine so that Docker can expose your NVIDIA GPU to containers. This only works on Linux; on Windows, use WSL!
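+
+  A quick way to confirm that Docker can see your GPU before building is to run `nvidia-smi` inside a CUDA base image (the image tag here is only an example; use any CUDA base image available to you):
+
+  ```bash
+  docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
+  ```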
+
+  ```bash
+  --build-arg="USE_CUDA_VER=cu117"
+  ```
+
+  Uses CUDA 11 (the default is CUDA 12).
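+
+  As a complete example, a build combining several of these args (the values are illustrative; adjust them to your needs) could look like this:
+
+  ```bash
+  docker build \
+    --build-arg="USE_OLLAMA=true" \
+    --build-arg="USE_CUDA=true" \
+    --build-arg="USE_CUDA_VER=cu117" \
+    --build-arg="USE_EMBEDDING_MODEL=intfloat/multilingual-e5-large" \
+    -t open-webui .
+  ```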
+
+  **To run the image:**
+
+  - **If you DID NOT use the `USE_CUDA=true` build arg**, use this command to run the image you just built:
+
+  ```bash
+  docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui --restart always open-webui
+  ```
+
+  - **If you DID use the `USE_CUDA=true` build arg**, use this command:
+
+  ```bash
+  docker run --gpus all -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui --restart always open-webui
+  ```
+
+  - After installation, you can access Open WebUI at [http://localhost:3000](http://localhost:3000). Enjoy! 😄
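+
+  To check that the container started cleanly, plain Docker commands are enough:
+
+  ```bash
+  docker ps --filter name=open-webui
+  docker logs -f open-webui
+  ```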
+
 #### Open WebUI: Server Connection Error
 
 If you're experiencing connection issues, it's often due to the WebUI docker container not being able to reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434) inside the container. Use the `--network=host` flag in your docker command to resolve this. Note that the port changes from 3000 to 8080, resulting in the link: `http://localhost:8080`.
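 
 As an example, the standard run command rewritten with host networking could look like this (the `OLLAMA_BASE_URL` override is optional and shown for clarity; adjust volume and image names to match your setup):
 
 ```bash
 docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
 ```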

+ 1 - 3
backend/config.py

@@ -255,7 +255,6 @@ OLLAMA_BASE_URL = os.environ.get("OLLAMA_BASE_URL", "")
 K8S_FLAG = os.environ.get("K8S_FLAG", "")
 USE_OLLAMA_DOCKER = os.environ.get("USE_OLLAMA_DOCKER", "false")
 
-
 if OLLAMA_BASE_URL == "" and OLLAMA_API_BASE_URL != "":
     OLLAMA_BASE_URL = (
         OLLAMA_API_BASE_URL[:-4]
@@ -264,14 +263,13 @@ if OLLAMA_BASE_URL == "" and OLLAMA_API_BASE_URL != "":
     )
 
 if ENV == "prod":
-    if OLLAMA_BASE_URL == "/ollama":
+    if OLLAMA_BASE_URL == "/ollama" and not K8S_FLAG:
         if USE_OLLAMA_DOCKER.lower() == "true":
             # if you use all-in-one docker container (Open WebUI + Ollama) 
             # with the docker build arg USE_OLLAMA=true (--build-arg="USE_OLLAMA=true") this only works with http://localhost:11434
             OLLAMA_BASE_URL = "http://localhost:11434"
         else:    
             OLLAMA_BASE_URL = "http://host.docker.internal:11434"
-
     elif K8S_FLAG:
         OLLAMA_BASE_URL = "http://ollama-service.open-webui.svc.cluster.local:11434"