
Refactor docker-compose configuration for modularity

Split the original docker-compose.yml into three separate files for enhanced modularity and ease of use. Created docker-compose.api.yml for API exposure configuration and docker-compose.gpu.yml for GPU support. This change simplifies the management of different deployment environments and configurations, making it easier to enable or disable specific features such as GPU support and API access without modifying the main docker-compose file.
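The split relies on Docker Compose's multi-file merge behavior: each additional `-f` file is layered over the previous ones, so the overlays only add keys to the `ollama` service defined in the base file. A sketch of the resulting workflows, using the file names introduced in this commit (the `config` invocation is a standard way to preview the merged result without starting anything):

```shell
# Base stack only: no GPU reservation, API not published outside the stack
docker compose up -d --build

# Base plus both overlays; later -f files are merged over earlier ones
docker compose -f docker-compose.yml \
    -f docker-compose.gpu.yml \
    -f docker-compose.api.yml \
    up -d --build

# Print the merged configuration for inspection instead of deploying it
docker compose -f docker-compose.yml -f docker-compose.gpu.yml config
```

Either overlay can also be applied on its own, since each one touches a disjoint part of the service definition.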
icervero committed 1 year ago · commit 9bbae0e25a
4 changed files with 28 additions and 13 deletions
  1. README.md (+8, -1)
  2. docker-compose.api.yml (+7, -0)
  3. docker-compose.gpu.yml (+13, -0)
  4. docker-compose.yml (+0, -12)

README.md (+8, -1)

@@ -75,7 +75,14 @@ If you don't have Ollama installed yet, you can use the provided Docker Compose
 docker compose up -d --build
 ```
 
-This command will install both Ollama and Ollama Web UI on your system. Ensure to modify the `compose.yaml` file for GPU support and Exposing Ollama API outside the container stack if needed.
+This command will install both Ollama and Ollama Web UI on your system.
+Enable GPU support or expose the Ollama API outside the container stack with the following command:
+```bash
+docker compose -f docker-compose.yml \
+   -f docker-compose.gpu.yml \
+   -f docker-compose.api.yml \
+   up -d --build
+```
 
 ### Installing Ollama Web UI Only
 

docker-compose.api.yml (+7, -0)

@@ -0,0 +1,7 @@
+version: '3.6'
+
+services:
+  ollama:
+    # Expose Ollama API outside the container stack
+    ports:
+      - 11434:11434

docker-compose.gpu.yml (+13, -0)

@@ -0,0 +1,13 @@
+version: '3.6'
+
+services:
+  ollama:
+    # GPU support
+    deploy:
+      resources:
+        reservations:
+          devices:
+            - driver: nvidia
+              count: 1
+              capabilities:
+                - gpu

docker-compose.yml (+0, -12)

@@ -2,20 +2,8 @@ version: '3.6'
 
 services:
   ollama:
-    # Uncomment below for GPU support
-    # deploy:
-    #   resources:
-    #     reservations:
-    #       devices:
-    #         - driver: nvidia
-    #           count: 1
-    #           capabilities:
-    #             - gpu
     volumes:
       - ollama:/root/.ollama
-    # Uncomment below to expose Ollama API outside the container stack
-    # ports:
-    #   - 11434:11434
     container_name: ollama
     pull_policy: always
     tty: true