@@ -12,7 +12,6 @@
- [Push a Model](#push-a-model)
- [Generate Embeddings](#generate-embeddings)
-

## Conventions

### Model names

@@ -40,12 +39,13 @@ Generate a response for a given prompt with a provided model. This is a streamin
- `model`: (required) the [model name](#model-names)
- `prompt`: the prompt to generate a response for

-Advanced parameters:
+Advanced parameters (optional):

- `options`: additional model parameters listed in the documentation for the [Modelfile](./modelfile.md#valid-parameters-and-values) such as `temperature`
- `system`: system prompt (overrides what is defined in the `Modelfile`)
- `template`: the full prompt or prompt template (overrides what is defined in the `Modelfile`)
- `context`: the context parameter returned from a previous request to `/generate`; this can be used to keep a short conversational memory
+- `stream`: if `false`, the response will be returned as a single response object rather than a stream of objects

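A minimal sketch of how a client might consume the streaming default described above, assembling the final text from newline-delimited JSON chunks (the `assemble_stream` helper and the sample chunks are illustrative, not part of the API):

```python
import json

def assemble_stream(ndjson_text: str) -> str:
    """Concatenate the `response` fragments from a streamed /api/generate reply."""
    parts = []
    for line in ndjson_text.splitlines():
        if not line.strip():
            continue
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

# Illustrative chunks in the shape described above (not real server output).
sample = "\n".join([
    '{"model": "llama2:7b", "response": "The sky ", "done": false}',
    '{"model": "llama2:7b", "response": "is blue.", "done": true}',
])
print(assemble_stream(sample))  # -> The sky is blue.
```

With `stream` set to `false`, no assembly is needed: the single response object carries the full text.
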
### Request

@@ -80,6 +80,7 @@ The final response in the stream also includes additional data about the generat
- `eval_count`: number of tokens in the response
- `eval_duration`: time in nanoseconds spent generating the response
- `context`: an encoding of the conversation used in this response; this can be sent in the next request to keep a conversational memory
+- `response`: empty if the response was streamed; if not streamed, this will contain the full response

To calculate how fast the response is generated in tokens per second (token/s), divide `eval_count` by `eval_duration` and multiply by 10^9 (`eval_duration` is in nanoseconds).

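Since `eval_duration` is reported in nanoseconds, the division needs a unit conversion; a small sketch (the function name and sample numbers are made up):

```python
def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """tokens/s = tokens generated divided by seconds spent generating."""
    return eval_count / (eval_duration_ns / 1e9)

# Made-up example: 120 tokens generated over 2.5 seconds of eval time.
print(tokens_per_second(120, 2_500_000_000))  # -> 48.0
```
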
@@ -87,6 +88,7 @@ To calculate how fast the response is generated in tokens per second (token/s),
{
  "model": "llama2:7b",
  "created_at": "2023-08-04T19:22:45.499127Z",
+  "response": "",
  "context": [1, 2, 3],
  "done": true,
  "total_duration": 5589157167,

@@ -112,6 +114,7 @@ Create a model from a [`Modelfile`](./modelfile.md)
- `name`: name of the model to create
- `path`: path to the Modelfile
+- `stream`: (optional) if `false`, the response will be returned as a single response object rather than a stream of objects

### Request

@@ -179,7 +182,7 @@ Show details about a model including modelfile, template, parameters, license, a
### Request

-```shell
+```shell
curl http://localhost:11434/api/show -d '{
  "name": "llama2:7b"
}'

@@ -189,10 +192,10 @@ curl http://localhost:11434/api/show -d '{

```json
{
-"license": "<contents of license block>",
-"modelfile": "# Modelfile generated by \"ollama show\"\n# To build a new Modelfile based on this one, replace the FROM line with:\n# FROM llama2:latest\n\nFROM /Users/username/.ollama/models/blobs/sha256:8daa9615cce30c259a9555b1cc250d461d1bc69980a274b44d7eda0be78076d8\nTEMPLATE \"\"\"[INST] {{ if and .First .System }}<<SYS>>{{ .System }}<</SYS>>\n\n{{ end }}{{ .Prompt }} [/INST] \"\"\"\nSYSTEM \"\"\"\"\"\"\nPARAMETER stop [INST]\nPARAMETER stop [/INST]\nPARAMETER stop <<SYS>>\nPARAMETER stop <</SYS>>\n",
-"parameters": "stop [INST]\nstop [/INST]\nstop <<SYS>>\nstop <</SYS>>",
-"template": "[INST] {{ if and .First .System }}<<SYS>>{{ .System }}<</SYS>>\n\n{{ end }}{{ .Prompt }} [/INST] "
+  "license": "<contents of license block>",
+  "modelfile": "# Modelfile generated by \"ollama show\"\n# To build a new Modelfile based on this one, replace the FROM line with:\n# FROM llama2:latest\n\nFROM /Users/username/.ollama/models/blobs/sha256:8daa9615cce30c259a9555b1cc250d461d1bc69980a274b44d7eda0be78076d8\nTEMPLATE \"\"\"[INST] {{ if and .First .System }}<<SYS>>{{ .System }}<</SYS>>\n\n{{ end }}{{ .Prompt }} [/INST] \"\"\"\nSYSTEM \"\"\"\"\"\"\nPARAMETER stop [INST]\nPARAMETER stop [/INST]\nPARAMETER stop <<SYS>>\nPARAMETER stop <</SYS>>\n",
+  "parameters": "stop [INST]\nstop [/INST]\nstop <<SYS>>\nstop <</SYS>>",
+  "template": "[INST] {{ if and .First .System }}<<SYS>>{{ .System }}<</SYS>>\n\n{{ end }}{{ .Prompt }} [/INST] "
}
```

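The `parameters` string above packs one `PARAMETER`-style entry per line. A sketch of how a client might split it into name/value pairs (the helper is illustrative, not part of the API):

```python
def parse_parameters(parameters: str) -> list[tuple[str, str]]:
    """Split each newline-separated entry into (name, value)."""
    pairs = []
    for line in parameters.splitlines():
        name, _, value = line.partition(" ")
        pairs.append((name, value))
    return pairs

params = "stop [INST]\nstop [/INST]\nstop <<SYS>>\nstop <</SYS>>"
print(parse_parameters(params))
# -> [('stop', '[INST]'), ('stop', '[/INST]'), ('stop', '<<SYS>>'), ('stop', '<</SYS>>')]
```
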
@@ -245,6 +248,7 @@ Download a model from the ollama library. Cancelled pulls are resumed from where
- `name`: name of the model to pull
- `insecure`: (optional) allow insecure connections to the library. Only use this if you are pulling from your own library during development.
+- `stream`: (optional) if `false`, the response will be returned as a single response object rather than a stream of objects

### Request

@@ -275,7 +279,8 @@ Upload a model to a model library. Requires registering for ollama.ai and adding
### Parameters

- `name`: name of the model to push in the form of `<namespace>/<model>:<tag>`
-- `insecure`: (optional) allow insecure connections to the library. Only use this if you are pushing to your library during development.
+- `insecure`: (optional) allow insecure connections to the library. Only use this if you are pushing to your library during development.
+- `stream`: (optional) if `false`, the response will be returned as a single response object rather than a stream of objects

### Request

@@ -290,15 +295,16 @@ curl -X POST http://localhost:11434/api/push -d '{
Streaming response that starts with:

```json
-{"status":"retrieving manifest"}
+{ "status": "retrieving manifest" }
```

and then:

```json
{
-"status":"starting upload","digest":"sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711ab",
-"total":1928429856
+  "status": "starting upload",
+  "digest": "sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711ab",
+  "total": 1928429856
}
```

|
|
|
|
|
@@ -306,9 +312,10 @@ Then there is a series of uploading responses:
|
|
|
|
|
|
```json
|
|
|
{
|
|
|
-"status":"starting upload",
|
|
|
-"digest":"sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711ab",
|
|
|
-"total":1928429856}
|
|
|
+ "status": "starting upload",
|
|
|
+ "digest": "sha256:bc07c81de745696fdf5afca05e065818a8149fb0c77266fb584d9b2cba3711ab",
|
|
|
+ "total": 1928429856
|
|
|
+}
|
|
|
```
|
|
|
|
|
|
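A client reading this stream might collapse the repeated upload objects into one line per distinct status. A sketch using only the fields shown in the examples above (`status`, `digest`, `total`); the sample stream is illustrative, not real server output:

```python
import json

def distinct_statuses(ndjson_text: str) -> list[str]:
    """Report each distinct (status, digest) pair once, in order of first appearance."""
    seen = set()
    out = []
    for line in ndjson_text.splitlines():
        obj = json.loads(line)
        key = (obj["status"], obj.get("digest", ""))
        if key not in seen:
            seen.add(key)
            out.append(obj["status"])
    return out

# Illustrative stream shaped like the push responses above.
stream = "\n".join([
    '{"status": "retrieving manifest"}',
    '{"status": "starting upload", "digest": "sha256:bc07", "total": 1928429856}',
    '{"status": "starting upload", "digest": "sha256:bc07", "total": 1928429856}',
])
print(distinct_statuses(stream))  # -> ['retrieving manifest', 'starting upload']
```
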
Finally, when the upload is complete: