
Correct the kubernetes terminology (#3843)

* add details on kubernetes deployment and separate the testing process

* Update examples/kubernetes/README.md

Thanks for suggesting this change; I agree with you, and let's make this project better together!

Co-authored-by: JonZeolla <Zeolla@gmail.com>

---------

Co-authored-by: QIN Mélony <MQN1@dsone.3ds.com>
Co-authored-by: JonZeolla <Zeolla@gmail.com>
Mélony QIN, 1 year ago
Commit 3f71ba406a
1 file changed, 14 insertions and 12 deletions

examples/kubernetes/README.md (+14, -12)


@@ -7,12 +7,24 @@
 
 ## Steps
 
-1. Create the Ollama namespace, daemon set, and service
+1. Create the Ollama namespace, deployment, and service
 
    ```bash
    kubectl apply -f cpu.yaml
    ```
 
+## (Optional) Hardware Acceleration
+
+Hardware acceleration in Kubernetes requires NVIDIA's [`k8s-device-plugin`](https://github.com/NVIDIA/k8s-device-plugin), which is deployed in Kubernetes as a DaemonSet. Follow the link for more details.
+
+Once configured, create a GPU-enabled Ollama deployment.
+
+```bash
+kubectl apply -f gpu.yaml
+```
+
+## Test
+
 1. Port forward the Ollama service to connect and use it locally
 
    ```bash
@@ -23,14 +35,4 @@
 
    ```bash
    ollama run orca-mini:3b
-   ```
-
-## (Optional) Hardware Acceleration
-
-Hardware acceleration in Kubernetes requires NVIDIA's [`k8s-device-plugin`](https://github.com/NVIDIA/k8s-device-plugin). Follow the link for more details.
-
-Once configured, create a GPU enabled Ollama deployment.
-
-```bash
-kubectl apply -f gpu.yaml
-```
+   ```
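
For context on the manifests the diff applies (`cpu.yaml` / `gpu.yaml`), a minimal sketch of what a GPU-enabled Ollama Deployment might look like. This is hypothetical, not the repository's actual `gpu.yaml`; it assumes the NVIDIA device plugin is installed and advertises the `nvidia.com/gpu` resource, and uses Ollama's default port 11434.

```yaml
# Hypothetical sketch of a GPU-enabled Ollama Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama
  namespace: ollama
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama
  template:
    metadata:
      labels:
        app: ollama
    spec:
      containers:
        - name: ollama
          image: ollama/ollama:latest
          ports:
            - containerPort: 11434
          resources:
            limits:
              # Scheduled only onto nodes where the NVIDIA
              # k8s-device-plugin DaemonSet exposes GPUs.
              nvidia.com/gpu: 1
```

Requesting `nvidia.com/gpu` in the resource limits is what distinguishes a GPU deployment from the CPU one; without the device plugin DaemonSet running, pods with this request stay Pending.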