@@ -28,7 +28,7 @@ A model file is the blueprint to create and share models with Ollama.

The format of the `Modelfile`:

-```modelfile
+```
# comment
INSTRUCTION arguments
```
@@ -49,7 +49,7 @@ INSTRUCTION arguments

An example of a `Modelfile` creating a mario blueprint:

-```modelfile
+```
FROM llama3.2
# sets the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1
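
A blueprint like this is then built into a named model and run with the Ollama CLI. The commands below are a minimal sketch of that workflow; the model name `mario` and the Modelfile path are illustrative:

```shell
# Build a model named "mario" from the Modelfile in the current directory
ollama create mario -f ./Modelfile

# Chat with the resulting model
ollama run mario
```
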
@@ -69,24 +69,30 @@ To use this:

To view the Modelfile of a given model, use the `ollama show --modelfile` command.

- ```bash
- > ollama show --modelfile llama3.2
- # Modelfile generated by "ollama show"
- # To build a new Modelfile based on this one, replace the FROM line with:
- # FROM llama3.2:latest
- FROM /Users/pdevine/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29
- TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
-
- {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
+```shell
+ollama show --modelfile llama3.2
+```

- {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
+> **Output**:
+>
+> ```
+> # Modelfile generated by "ollama show"
+> # To build a new Modelfile based on this one, replace the FROM line with:
+> # FROM llama3.2:latest
+> FROM /Users/pdevine/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29
+> TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
+>
+> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
+>
+> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
+>
+> {{ .Response }}<|eot_id|>"""
+> PARAMETER stop "<|start_header_id|>"
+> PARAMETER stop "<|end_header_id|>"
+> PARAMETER stop "<|eot_id|>"
+> PARAMETER stop "<|reserved_special_token"
+> ```

- {{ .Response }}<|eot_id|>"""
- PARAMETER stop "<|start_header_id|>"
- PARAMETER stop "<|end_header_id|>"
- PARAMETER stop "<|eot_id|>"
- PARAMETER stop "<|reserved_special_token"
- ```

## Instructions

@@ -94,13 +100,13 @@ To view the Modelfile of a given model, use the `ollama show --modelfile` comman

The `FROM` instruction defines the base model to use when creating a model.

-```modelfile
+```
FROM <model name>:<tag>
```

#### Build from existing model

-```modelfile
+```
FROM llama3.2
```

@@ -111,7 +117,7 @@ Additional models can be found at:

#### Build from a Safetensors model

-```modelfile
+```
FROM <model directory>
```

@@ -125,7 +131,7 @@ Currently supported model architectures:

#### Build from a GGUF file

-```modelfile
+```
FROM ./ollama-model.gguf
```

@@ -136,7 +142,7 @@ The GGUF file location should be specified as an absolute path or relative to th

The `PARAMETER` instruction defines a parameter that can be set when the model is run.

-```modelfile
+```
PARAMETER <parameter> <parametervalue>
```

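As a concrete illustration of the syntax above, a Modelfile might set a couple of common parameters; the values here are examples only, not recommendations:

```
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
```
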
@@ -183,7 +189,7 @@ TEMPLATE """{{ if .System }}<|im_start|>system

The `SYSTEM` instruction specifies the system message to be used in the template, if applicable.

-```modelfile
+```
SYSTEM """<system message>"""
```

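A minimal illustration of the syntax, assuming a persona-style prompt along the lines of the mario blueprint shown earlier:

```
SYSTEM """You are Mario from Super Mario Bros, acting as an assistant."""
```
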
@@ -193,7 +199,7 @@ The `ADAPTER` instruction specifies a fine tuned LoRA adapter that should apply

#### Safetensor adapter

-```modelfile
+```
ADAPTER <path to safetensor adapter>
```

@@ -204,7 +210,7 @@ Currently supported Safetensor adapters:

#### GGUF adapter

-```modelfile
+```
ADAPTER ./ollama-lora.gguf
```

@@ -212,7 +218,7 @@ ADAPTER ./ollama-lora.gguf

The `LICENSE` instruction allows you to specify the legal license under which the model used with this Modelfile is shared or distributed.

-```modelfile
+```
LICENSE """
<license text>
"""
@@ -222,7 +228,7 @@ LICENSE """

The `MESSAGE` instruction allows you to specify a message history for the model to use when responding. Use multiple iterations of the MESSAGE command to build up a conversation which will guide the model to answer in a similar way.

-```modelfile
+```
MESSAGE <role> <message>
```

@@ -237,7 +243,7 @@ MESSAGE <role> <message>

#### Example conversation

-```modelfile
+```
MESSAGE user Is Toronto in Canada?
MESSAGE assistant yes
MESSAGE user Is Sacramento in Canada?