
Open WebUI 👋


Open WebUI is an extensible, feature-rich, and user-friendly self-hosted AI platform designed to operate entirely offline. It supports various LLM runners like Ollama and OpenAI-compatible APIs, with a built-in inference engine for RAG, making it a powerful AI deployment solution.

Open WebUI Demo

[!TIP]
Looking for an Enterprise Plan? Speak with Our Sales Team Today!

Get enhanced capabilities, including custom theming and branding, Service Level Agreement (SLA) support, Long-Term Support (LTS) versions, and more!

For more information, be sure to check out our Open WebUI Documentation.

Key Features of Open WebUI ⭐

  • 🚀 Effortless Setup: Install seamlessly using Docker or Kubernetes (kubectl, kustomize or helm) for a hassle-free experience with support for both :ollama and :cuda tagged images.

  • 🤝 Ollama/OpenAI API Integration: Effortlessly integrate OpenAI-compatible APIs for versatile conversations alongside Ollama models. Customize the OpenAI API URL to link with LMStudio, GroqCloud, Mistral, OpenRouter, and more.

  • 🛡️ Granular Permissions and User Groups: By allowing administrators to create detailed user roles and permissions, we ensure a secure user environment. This granularity not only enhances security but also allows for customized user experiences, fostering a sense of ownership and responsibility amongst users.

  • 📱 Responsive Design: Enjoy a seamless experience across Desktop PC, Laptop, and Mobile devices.

  • 📱 Progressive Web App (PWA) for Mobile: Enjoy a native app-like experience on your mobile device with our PWA, providing offline access on localhost and a seamless user interface.

  • ✒️🔢 Full Markdown and LaTeX Support: Elevate your LLM experience with comprehensive Markdown and LaTeX capabilities for enriched interaction.

  • 🎤📹 Hands-Free Voice/Video Call: Experience seamless communication with integrated hands-free voice and video call features, allowing for a more dynamic and interactive chat environment.

  • 🛠️ Model Builder: Easily create Ollama models via the Web UI. Create and add custom characters/agents, customize chat elements, and import models effortlessly through Open WebUI Community integration.

  • 🐍 Native Python Function Calling Tool: Enhance your LLMs with built-in code editor support in the tools workspace. Bring Your Own Function (BYOF) by simply adding your pure Python functions, enabling seamless integration with LLMs.

  • 📚 Local RAG Integration: Dive into the future of chat interactions with groundbreaking Retrieval Augmented Generation (RAG) support. This feature seamlessly integrates document interactions into your chat experience. You can load documents directly into the chat or add files to your document library, effortlessly accessing them using the # command before a query.

  • 🔍 Web Search for RAG: Perform web searches using providers like SearXNG, Google PSE, Brave Search, serpstack, serper, Serply, DuckDuckGo, TavilySearch, SearchApi, and Bing, and inject the results directly into your chat experience.

  • 🌐 Web Browsing Capability: Seamlessly integrate websites into your chat experience using the # command followed by a URL. This feature allows you to incorporate web content directly into your conversations, enhancing the richness and depth of your interactions.

  • 🎨 Image Generation Integration: Seamlessly incorporate image generation capabilities using options such as AUTOMATIC1111 API or ComfyUI (local), and OpenAI's DALL-E (external), enriching your chat experience with dynamic visual content.

  • ⚙️ Many Models Conversations: Effortlessly engage with various models simultaneously, harnessing their unique strengths for optimal responses. Enhance your experience by leveraging a diverse set of models in parallel.

  • 🔐 Role-Based Access Control (RBAC): Ensure secure access with restricted permissions; only authorized individuals can access your Ollama, and exclusive model creation/pulling rights are reserved for administrators.

  • 🌐🌍 Multilingual Support: Experience Open WebUI in your preferred language with our internationalization (i18n) support. Join us in expanding our supported languages! We're actively seeking contributors!

  • 🧩 Pipelines, Open WebUI Plugin Support: Seamlessly integrate custom logic and Python libraries into Open WebUI using Pipelines Plugin Framework. Launch your Pipelines instance, set the OpenAI URL to the Pipelines URL, and explore endless possibilities. Examples include Function Calling, User Rate Limiting to control access, Usage Monitoring with tools like Langfuse, Live Translation with LibreTranslate for multilingual support, Toxic Message Filtering and much more.

  • 🌟 Continuous Updates: We are committed to improving Open WebUI with regular updates, fixes, and new features.

Want to learn more about Open WebUI's features? Check out our Open WebUI documentation for a comprehensive overview!

🔗 Also Check Out Open WebUI Community!

Don't forget to explore our sibling project, Open WebUI Community, where you can discover, download, and explore customized Modelfiles. Open WebUI Community offers a wide range of exciting possibilities for enhancing your chat interactions with Open WebUI! 🚀

How to Install 🚀

Installation via Python pip 🐍

Open WebUI can be installed using pip, the Python package installer. Before proceeding, ensure you're using Python 3.11 to avoid compatibility issues.

  1. Install Open WebUI: Open your terminal and run the following command to install Open WebUI:

    pip install open-webui
    
  2. Running Open WebUI: After installation, you can start Open WebUI by executing:

    open-webui serve
    

This will start the Open WebUI server, which you can access at http://localhost:8080
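
If you prefer to keep Open WebUI isolated from your system Python packages, one option is to install it inside a dedicated Python 3.11 virtual environment first. A minimal sketch, assuming python3.11 is already available on your PATH (the environment path is just an example):

# Create and activate an isolated Python 3.11 environment
python3.11 -m venv ~/open-webui-env
source ~/open-webui-env/bin/activate

# Install and start Open WebUI inside that environment
pip install open-webui
open-webui serve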

Quick Start with Docker 🐳

[!NOTE]
Please note that for certain Docker environments, additional configurations might be needed. If you encounter any connection issues, our detailed guide on Open WebUI Documentation is ready to assist you.

[!WARNING] When using Docker to install Open WebUI, make sure to include the -v open-webui:/app/backend/data volume mapping in your Docker command. This step is crucial as it ensures your database is properly mounted and prevents any loss of data.

[!TIP]
If you wish to utilize Open WebUI with Ollama included or CUDA acceleration, we recommend utilizing our official images tagged with either :cuda or :ollama. To enable CUDA, you must install the Nvidia CUDA container toolkit on your Linux/WSL system.
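
The exact CUDA toolkit setup depends on your distribution; as a rough sketch for Debian/Ubuntu systems (and assuming NVIDIA's apt repository has already been added as described in NVIDIA's own documentation), the final steps typically look like this:

# Install the NVIDIA Container Toolkit and register the NVIDIA runtime with Docker
# (assumes NVIDIA's package repository is already configured per their docs)
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker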

Installation with Default Configuration

  • If Ollama is on your computer, use this command:

    docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
    
  • If Ollama is on a Different Server, use this command:

    To connect to Ollama on another server, change the OLLAMA_BASE_URL to the server's URL:

    docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

  • To run Open WebUI with Nvidia GPU support, use this command:

    docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
    

Installation for OpenAI API Usage Only

  • If you're only using OpenAI API, use this command:

    docker run -d -p 3000:8080 -e OPENAI_API_KEY=your_secret_key -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
    
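
Open WebUI also reads an OPENAI_API_BASE_URL environment variable, so the same pattern can point at an OpenAI-compatible provider instead of api.openai.com. A hedged sketch (the URL below is a placeholder; substitute your provider's endpoint and key):

docker run -d -p 3000:8080 -e OPENAI_API_BASE_URL=https://api.example-provider.com/v1 -e OPENAI_API_KEY=your_secret_key -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main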

Installing Open WebUI with Bundled Ollama Support

This installation method uses a single container image that bundles Open WebUI with Ollama, allowing for a streamlined setup via a single command. Choose the appropriate command based on your hardware setup:

  • With GPU Support: Utilize GPU resources by running the following command:

    docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
    
  • For CPU Only: If you're not using a GPU, use this command instead:

    docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
    

Both commands facilitate a built-in, hassle-free installation of both Open WebUI and Ollama, ensuring that you can get everything up and running swiftly.

After installation, you can access Open WebUI at http://localhost:3000. Enjoy! 😄
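
If the page doesn't load, a quick way to check on the container is with standard Docker commands (the container name open-webui assumes you used the commands above unchanged):

# Confirm the container is running, then follow its startup logs
docker ps --filter name=open-webui
docker logs -f open-webui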

Other Installation Methods

We offer various installation alternatives, including non-Docker native installation methods, Docker Compose, Kustomize, and Helm. Visit our Open WebUI Documentation or join our Discord community for comprehensive guidance.

Troubleshooting

Encountering connection issues? Our Open WebUI Documentation has got you covered. For further assistance and to join our vibrant community, visit the Open WebUI Discord.

Open WebUI: Server Connection Error

If you're experiencing connection issues, it's often because the WebUI Docker container cannot reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434) from inside the container. Use the --network=host flag in your Docker command to resolve this. Note that the port changes from 3000 to 8080, so the link becomes: http://localhost:8080.

Example Docker Command:

docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Keeping Your Docker Installation Up-to-Date

If you want to update your local Docker installation to the latest version, you can do so with Watchtower:

docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui

In the last part of the command, replace open-webui with your container name if it is different.
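
If you'd rather not run Watchtower by hand each time, it can also be left running in the background to poll for new images on an interval; a minimal sketch (the one-hour interval is just an example):

docker run -d --name watchtower -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --interval 3600 open-webui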

Check our Updating Guide available in our Open WebUI Documentation.

Using the Dev Branch 🌙

[!WARNING] The :dev branch contains the latest unstable features and changes. Use it at your own risk as it may have bugs or incomplete features.

If you want to try out the latest bleeding-edge features and are okay with occasional instability, you can use the :dev tag like this:

docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui --add-host=host.docker.internal:host-gateway --restart always ghcr.io/open-webui/open-webui:dev

Offline Mode

If you are running Open WebUI in an offline environment, you can set the HF_HUB_OFFLINE environment variable to 1 to prevent attempts to download models from the internet.

export HF_HUB_OFFLINE=1
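
When running under Docker, the same variable can be passed into the container with -e, for example (other flags as in the commands above):

docker run -d -p 3000:8080 -e HF_HUB_OFFLINE=1 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main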

What's Next? 🌟

Discover upcoming features on our roadmap in the Open WebUI Documentation.

License 📜

This project is licensed under the BSD-3-Clause License - see the LICENSE file for details. 📄

Support 💬

If you have any questions, suggestions, or need assistance, please open an issue or join our Open WebUI Discord community to connect with us! 🤝

Star History

Star History Chart: https://api.star-history.com/svg?repos=open-webui/open-webui&type=Date


Created by Timothy Jaeryang Baek - Let's make Open WebUI even more amazing together! 💪