Browse Source

add some missing code directives in docs (#664)

Jiayu Liu 1 year ago
parent
commit
4fc10acce9
4 changed files with 24 additions and 25 deletions
  1. docs/development.md (+4 -4)
  2. docs/faq.md (+2 -3)
  3. docs/linux.md (+8 -8)
  4. docs/modelfile.md (+10 -10)

+ 4 - 4
docs/development.md

@@ -10,25 +10,25 @@ Install required tools:
 - go version 1.20 or higher
 - gcc version 11.4.0 or higher
 
-```
+```bash
 brew install go cmake gcc
 ```
 
 Get the required libraries:
 
-```
+```bash
 go generate ./...
 ```
 
 Then build ollama:
 
-```
+```bash
 go build .
 ```
 
 Now you can run `ollama`:
 
-```
+```bash
 ./ollama
 ```
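
For reference, the three steps above can be chained into a single command; a minimal sketch, assuming a working Go toolchain and the repository root as the current directory:

```bash
# generate required libraries, build the binary, then run it
go generate ./... && go build . && ./ollama
```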
 

+ 2 - 3
docs/faq.md

@@ -2,13 +2,13 @@
 
 ## How can I expose the Ollama server?
 
-```
+```bash
 OLLAMA_HOST=0.0.0.0:11435 ollama serve
 ```
 
 By default, Ollama allows cross origin requests from `127.0.0.1` and `0.0.0.0`. To support more origins, you can use the `OLLAMA_ORIGINS` environment variable:
 
-```
+```bash
 OLLAMA_ORIGINS=http://192.168.1.1:*,https://example.com ollama serve
 ```
 
@@ -16,4 +16,3 @@ OLLAMA_ORIGINS=http://192.168.1.1:*,https://example.com ollama serve
 
 * macOS: Raw model data is stored under `~/.ollama/models`.
 * Linux: Raw model data is stored under `/usr/share/ollama/.ollama/models`
-
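
Once the server is exposed this way, a quick reachability check from another machine might look like the following, assuming the non-default port from the example above and Ollama's `/api/tags` model-listing endpoint:

```bash
# list locally available models over the exposed interface
curl http://<server-ip>:11435/api/tags
```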

+ 8 - 8
docs/linux.md

@@ -2,7 +2,7 @@
 
 > Note: A one line installer for Ollama is available by running:
 >
-> ```
+> ```bash
 > curl https://ollama.ai/install.sh | sh
 > ```
 
@@ -10,7 +10,7 @@
 
 Ollama is distributed as a self-contained binary. Download it to a directory in your PATH:
 
-```
+```bash
 sudo curl -L https://ollama.ai/download/ollama-linux-amd64 -o /usr/bin/ollama
 sudo chmod +x /usr/bin/ollama
 ```
@@ -19,13 +19,13 @@ sudo chmod +x /usr/bin/ollama
 
 Start Ollama by running `ollama serve`:
 
-```
+```bash
 ollama serve
 ```
 
 Once Ollama is running, run a model in another terminal session:
 
-```
+```bash
 ollama run llama2
 ```
 
@@ -35,7 +35,7 @@ ollama run llama2
 
 Verify that the drivers are installed by running the following command, which should print details about your GPU:
 
-```
+```bash
 nvidia-smi
 ```
 
@@ -43,7 +43,7 @@ nvidia-smi
 
 Create a user for Ollama:
 
-```
+```bash
 sudo useradd -r -s /bin/false -m -d /usr/share/ollama ollama
 ```
 
@@ -68,7 +68,7 @@ WantedBy=default.target
 
 Then start the service:
 
-```
+```bash
 sudo systemctl daemon-reload
 sudo systemctl enable ollama
 ```
@@ -77,7 +77,7 @@ sudo systemctl enable ollama
 
 To view logs of Ollama running as a startup service, run:
 
-```
+```bash
 journalctl -u ollama
 ```
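
Note that `systemctl enable` only registers the service for startup; on a systemd-based distribution you would typically also start it right away and confirm it is running, for example:

```bash
# start the service now and check its status
sudo systemctl start ollama
systemctl status ollama
```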
 

+ 10 - 10
docs/modelfile.md

@@ -44,7 +44,7 @@ INSTRUCTION arguments
 
 An example of a model file creating a mario blueprint:
 
-```
+```modelfile
 FROM llama2
 # sets the temperature to 1 [higher is more creative, lower is more coherent]
 PARAMETER temperature 1
@@ -70,13 +70,13 @@ More examples are available in the [examples directory](../examples).
 
 The FROM instruction defines the base model to use when creating a model.
 
-```
+```modelfile
 FROM <model name>:<tag>
 ```
 
 #### Build from llama2
 
-```
+```modelfile
 FROM llama2
 ```
 
@@ -85,7 +85,7 @@ A list of available base models:
 
 #### Build from a bin file
 
-```
+```modelfile
 FROM ./ollama-model.bin
 ```
 
@@ -95,7 +95,7 @@ This bin file location should be specified as an absolute path or relative to th
 
 The EMBED instruction is used to add embeddings of files to a model. This is useful for adding custom data that the model can reference when generating an answer. Note that currently only text files are supported, formatted with each line as one embedding.
 
-```
+```modelfile
 FROM <model name>:<tag>
 EMBED <file path>.txt
 EMBED <different file path>.txt
@@ -106,7 +106,7 @@ EMBED <path to directory>/*.txt
 
 The `PARAMETER` instruction defines a parameter that can be set when the model is run.
 
-```
+```modelfile
 PARAMETER <parameter> <parametervalue>
 ```
 
@@ -142,7 +142,7 @@ PARAMETER <parameter> <parametervalue>
 | `{{ .Prompt }}` | The incoming prompt, this is not specified in the model file and will be set based on input.                 |
 | `{{ .First }}`  | A boolean value used to render specific template information for the first generation of a session.          |
 
-```
+```modelfile
 TEMPLATE """
 {{- if .First }}
 ### System:
@@ -162,7 +162,7 @@ SYSTEM """<system message>"""
 
 The `SYSTEM` instruction specifies the system prompt to be used in the template, if applicable.
 
-```
+```modelfile
 SYSTEM """<system message>"""
 ```
 
@@ -170,7 +170,7 @@ SYSTEM """<system message>"""
 
 The `ADAPTER` instruction specifies the LoRA adapter to apply to the base model. The value of this instruction should be an absolute path or a path relative to the Modelfile and the file must be in a GGML file format. The adapter should be tuned from the base model otherwise the behaviour is undefined.
 
-```
+```modelfile
 ADAPTER ./ollama-lora.bin
 ```
 
@@ -178,7 +178,7 @@ ADAPTER ./ollama-lora.bin
 
 The `LICENSE` instruction allows you to specify the legal license under which the model used with this Modelfile is shared or distributed.
 
-```
+```modelfile
 LICENSE """
 <license text>
 """