small `README.md` tweaks

Jeffrey Morgan 1 year ago
commit 7e6fd7b457
1 changed file with 13 additions and 13 deletions

README.md

@@ -18,23 +18,23 @@ ollama.generate("./llama-7b-ggml.bin", "hi")
 
 ## Reference
 
-### `ollama.load`
+### `ollama.generate(model, message)`
 
-Load a model for generation
+Generate a completion
 
 ```python
-ollama.load("model name")
+ollama.generate("./llama-7b-ggml.bin", "hi")
 ```
 
-### `ollama.generate("message")`
+### `ollama.load(model)`
 
-Generate a completion
+Load a model for generation
 
 ```python
-ollama.generate(model, "hi")
+ollama.load("model name")
 ```
 
-### `ollama.models`
+### `ollama.models()`
 
 List available local models
 
@@ -42,13 +42,13 @@ List available local models
 models = ollama.models()
 ```
 
-### `ollama.serve`
+### `ollama.serve()`
 
 Serve the ollama http server
 
-## Cooing Soon
+## Coming Soon
 
-### `ollama.pull`
+### `ollama.pull("model")`
 
 Download a model
 
@@ -56,7 +56,7 @@ Download a model
 ollama.pull("huggingface.co/thebloke/llama-7b-ggml")
 ```
 
-### `ollama.import`
+### `ollama.import("file")`
 
 Import a model from a file
 
@@ -64,7 +64,7 @@ Import a model from a file
 ollama.import("./path/to/model")
 ```
 
-### `ollama.search`
+### `ollama.search("query")`
 
 Search for compatible models that Ollama can run
 
@@ -74,7 +74,7 @@ ollama.search("llama-7b")
 
 ## Future CLI
 
-In the future, there will be an easy CLI for testing out models
+In the future, there will be an easy CLI for running models
 
 ```
 ollama run huggingface.co/thebloke/llama-7b-ggml