@@ -46,7 +46,7 @@ Advanced parameters (optional):
- `format`: the format to return a response in. Currently the only accepted value is `json`
- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `system`: system message (overrides what is defined in the `Modelfile`)
-- `template`: the full prompt or prompt template (overrides what is defined in the `Modelfile`)
+- `template`: the prompt template to use (overrides what is defined in the `Modelfile`)
- `context`: the context parameter returned from a previous request to `/generate`; this can be used to keep a short conversational memory
- `stream`: if `false` the response will be returned as a single response object, rather than a stream of objects
- `raw`: if `true` no formatting will be applied to the prompt. You may choose to use the `raw` parameter if you are specifying a full templated prompt in your request to the API.
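The advanced parameters above can be combined in a single request body. A minimal sketch in Python, assuming a `POST` to the `/api/generate` endpoint described in these docs (the model name `llama2` and the prompt are placeholders, not values from the source):

```python
import json

# Request body for POST /api/generate using the advanced parameters
# listed above. "llama2" is a placeholder model name.
body = {
    "model": "llama2",
    "prompt": "Why is the sky blue?",
    "format": "json",                          # only accepted value is "json"
    "options": {"temperature": 0.8},           # Modelfile-style parameters
    "system": "You are a concise assistant.",  # overrides the Modelfile system message
    "stream": False,                           # one response object instead of a stream
}

payload = json.dumps(body)
print(payload)
```

Omitting any of these keys leaves the corresponding `Modelfile` or server default in effect.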
@@ -377,7 +377,7 @@ Advanced parameters (optional):

- `format`: the format to return a response in. Currently the only accepted value is `json`
- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
-- `template`: the full prompt or prompt template (overrides what is defined in the `Modelfile`)
+- `template`: the prompt template to use (overrides what is defined in the `Modelfile`)
- `stream`: if `false` the response will be returned as a single response object, rather than a stream of objects

### Examples
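For instance, the `stream` flag changes the response shape: by default the endpoint emits newline-delimited JSON objects, while `stream: false` returns a single object. A sketch of the client-side difference in Python (the response lines below are illustrative stand-ins, not captured server output):

```python
import json

# Illustrative newline-delimited JSON, shaped like a streaming
# /api/generate response (fields abbreviated; not real server output).
streamed = (
    '{"response": "The", "done": false}\n'
    '{"response": " sky", "done": false}\n'
    '{"response": " is blue.", "done": true}\n'
)

# With "stream": true (the default), concatenate the "response" fragments
# until an object with "done": true arrives.
parts = [json.loads(line) for line in streamed.splitlines()]
text = "".join(p["response"] for p in parts)
assert parts[-1]["done"]

# With "stream": false, the whole reply arrives as one object.
single = '{"response": "The sky is blue.", "done": true}'
assert json.loads(single)["response"] == text
print(text)
```

Either way the client ends up with the same text; streaming simply lets it render tokens as they arrive.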