@@ -120,40 +120,76 @@ Don't forget to explore our sibling project, [Open WebUI Community](https://open
> [!TIP]
> If you wish to utilize Open WebUI with Ollama included or CUDA acceleration, we recommend utilizing our official images tagged with either `:cuda` or `:ollama`. To enable CUDA, you must install the [Nvidia CUDA container toolkit](https://docs.nvidia.com/dgx/nvidia-container-runtime-upgrade/) on your Linux/WSL system.
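+
+For example, on a Debian/Ubuntu host the toolkit can be installed and wired into Docker roughly as follows (a minimal sketch, assuming the NVIDIA apt repository is already configured; follow the NVIDIA docs linked above for your distro):
+
+```bash
+# Install the toolkit (assumes the NVIDIA apt repository is already set up)
+sudo apt-get install -y nvidia-container-toolkit
+# Register the NVIDIA runtime with Docker and restart the daemon
+sudo nvidia-ctk runtime configure --runtime=docker
+sudo systemctl restart docker
+```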

-**If Ollama is on your computer**, use this command:
+### Installation with Default Configuration

-```bash
-docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
-```
+- **If Ollama is on your computer**, use this command:

-**If Ollama is on a Different Server**, use this command:
+ ```bash
+ docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
+ ```

-To connect to Ollama on another server, change the `OLLAMA_BASE_URL` to the server's URL:
+- **If Ollama is on a Different Server**, use this command:

-```bash
-docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
-```
+ To connect to Ollama on another server, change the `OLLAMA_BASE_URL` to the server's URL:

-After installation, you can access Open WebUI at [http://localhost:3000](http://localhost:3000). Enjoy! 😄
+ ```bash
+ docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
+ ```

-#### Open WebUI: Server Connection Error
+- **To run Open WebUI with Nvidia GPU support**, use this command:

-If you're experiencing connection issues, it’s often due to the WebUI docker container not being able to reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434) inside the container . Use the `--network=host` flag in your docker command to resolve this. Note that the port changes from 3000 to 8080, resulting in the link: `http://localhost:8080`.
+ ```bash
+ docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
+ ```

-**Example Docker Command**:
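+If you pointed `OLLAMA_BASE_URL` at another server, a quick way to confirm that Ollama is reachable from the machine running Open WebUI is to query its API (a sketch, assuming Ollama listens on its default port 11434; substitute your server's address):
+
+```bash
+# Should return a JSON list of the models available on that Ollama server
+curl http://example.com:11434/api/tags
+```
+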
+### Installation for OpenAI API Usage Only

-```bash
-docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
-```
+- **If you're only using OpenAI API**, use this command:
+
+ ```bash
+ docker run -d -p 3000:8080 -e OPENAI_API_KEY=your_secret_key -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
+ ```
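+
+If your provider exposes an OpenAI-compatible API, you can point Open WebUI at it by also setting the `OPENAI_API_BASE_URL` environment variable (a sketch; the base URL below is a placeholder for your provider's endpoint):
+
+```bash
+docker run -d -p 3000:8080 -e OPENAI_API_BASE_URL=https://api.example.com/v1 -e OPENAI_API_KEY=your_secret_key -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
+```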
+
+### Installing Open WebUI with Bundled Ollama Support
+
+This installation method uses a single container image that bundles Open WebUI with Ollama, allowing for a streamlined setup via a single command. Choose the appropriate command based on your hardware setup:
+
+- **With GPU Support**:
+ Utilize GPU resources by running the following command:
+
+ ```bash
+ docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
+ ```
+
+- **For CPU Only**:
+ If you're not using a GPU, use this command instead:
+
+ ```bash
+ docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
+ ```
+
+Both commands bundle Open WebUI and Ollama in a single, hassle-free installation, ensuring that you can get everything up and running swiftly.
+
+After installation, you can access Open WebUI at [http://localhost:3000](http://localhost:3000). Enjoy! 😄
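+
+If the page doesn't load, you can confirm the container is up and inspect its startup logs with standard Docker commands:
+
+```bash
+# Confirm the container is running
+docker ps --filter name=open-webui
+# Follow the startup logs
+docker logs -f open-webui
+```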

### Other Installation Methods

-We offer various installation alternatives, including non-Docker methods, Docker Compose, Kustomize, and Helm. Visit our [Open WebUI Documentation](https://docs.openwebui.com/getting-started/) or join our [Discord community](https://discord.gg/5rJgQTnV4s) for comprehensive guidance.
+We offer various installation alternatives, including non-Docker native installation methods, Docker Compose, Kustomize, and Helm. Visit our [Open WebUI Documentation](https://docs.openwebui.com/getting-started/) or join our [Discord community](https://discord.gg/5rJgQTnV4s) for comprehensive guidance.
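+
+As a rough sketch of the native route, Open WebUI is also distributed as a Python package (check the documentation above for the currently supported Python version before relying on this):
+
+```bash
+pip install open-webui
+open-webui serve
+```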

### Troubleshooting

Encountering connection issues? Our [Open WebUI Documentation](https://docs.openwebui.com/troubleshooting/) has got you covered. For further assistance and to join our vibrant community, visit the [Open WebUI Discord](https://discord.gg/5rJgQTnV4s).

+#### Open WebUI: Server Connection Error
+
+If you're experiencing connection issues, it’s often because the WebUI Docker container cannot reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434) from inside the container. Use the `--network=host` flag in your Docker command to resolve this. Note that the port changes from 3000 to 8080, so the link becomes `http://localhost:8080`.
+
+**Example Docker Command**:
+
+```bash
+docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
+```
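+
+Before changing the network mode, you can also confirm that Ollama itself is listening on the host (assuming Ollama's default port 11434):
+
+```bash
+# Prints "Ollama is running" when the server is reachable
+curl http://127.0.0.1:11434
+```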
+
### Keeping Your Docker Installation Up-to-Date

In case you want to update your local Docker installation to the latest version, you can do it with [Watchtower](https://containrrr.dev/watchtower/):