reorganize `README.md` files

Jeffrey Morgan, 1 year ago
commit e1388938d4
2 changed files with 64 additions and 28 deletions
  1. README.md (+47 -25)
  2. desktop/README.md (+17 -3)

README.md (+47 -25)

@@ -1,41 +1,56 @@
 # Ollama
 
-The easiest way to run ai models.
+Run AI models locally.
 
-## Download
+_Note: this project is a work in progress. The features below are still in development._
 
-- [macOS](https://ollama.ai/download/darwin_arm64) (Apple Silicon)
-- macOS (Intel – Coming soon)
-- Windows (Coming soon)
-- Linux (Coming soon)
+**Features**
 
-## Python SDK
+- Run models locally on macOS (Windows, Linux and other platforms coming soon)
+- Ollama uses the fastest loader available for your platform and model (e.g. llama.cpp; Core ML and other loaders coming soon)
+- Import models from local files
+- Find and download models on Hugging Face and other sources (coming soon)
+- Support for running and switching between multiple models at a time (coming soon)
+- Native desktop experience (coming soon)
+- Built-in memory (coming soon)
+
+## Install
 
 ```
 pip install ollama
 ```
 
-### Python SDK quickstart
+## Quickstart
 
-```python
-import ollama
-ollama.generate("./llama-7b-ggml.bin", "hi")
 ```
+% ollama run huggingface.co/TheBloke/orca_mini_3B-GGML
+Pulling huggingface.co/TheBloke/orca_mini_3B-GGML...
+Downloading [================>          ] 66.67% (2/3) 30.2MB/s
 
-### `ollama.generate(model, message)`
+...
+...
+...
 
-Generate a completion
+> Hello
+
+Hello, how may I help you?
+```
+
+## Python SDK
+
+### Example
 
 ```python
+import ollama
 ollama.generate("./llama-7b-ggml.bin", "hi")
 ```
 
-### `ollama.load(model)`
+### `ollama.generate(model, message)`
 
-Load a model for generation
+Generate a completion
 
 ```python
-ollama.load("model")
+ollama.generate("./llama-7b-ggml.bin", "hi")
 ```
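+
+As a rough sketch, assuming `generate` returns the completion text for the prompt (an assumption; the return type of this work-in-progress API may differ):
+
+```python
+import ollama
+
+# Hypothetical usage: capture and print the completion.
+# Assumes generate returns a string; the real return type may differ.
+completion = ollama.generate("./llama-7b-ggml.bin", "hi")
+print(completion)
+```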
 
 ### `ollama.models()`
@@ -58,6 +73,22 @@ Add a model by importing from a file
 ollama.add("./path/to/model")
 ```
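+
+As a hedged follow-up, assuming `models()` returns a list of locally available model names (the return shape is an assumption, not documented above):
+
+```python
+import ollama
+
+# Hypothetical: import a local model file, then confirm it appears.
+ollama.add("./path/to/model")
+print(ollama.models())  # assumed: a list of model names
+```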
 
+### `ollama.load(model)`
+
+Manually load a model for generation
+
+```python
+ollama.load("model")
+```
+
+### `ollama.unload(model)`
+
+Unload a model
+
+```python
+ollama.unload("model")
+```
+
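+A minimal end-to-end sketch, assuming `load` and `unload` accept the same model path that `generate` does (an assumption, not stated above):
+
+```python
+import ollama
+
+model = "./llama-7b-ggml.bin"
+ollama.load(model)             # assumed: keeps the model resident between calls
+ollama.generate(model, "hi")
+ollama.unload(model)           # free memory when finished
+```
+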
 ## Coming Soon
 
 ### `ollama.pull(model)`
@@ -76,15 +107,6 @@ Search for compatible models that Ollama can run
 ollama.search("llama-7b")
 ```
 
-## Future CLI
-
-In the future, there will be an `ollama` CLI for running models on servers, in containers or for local development environments.
-
-```
-ollama generate huggingface.co/thebloke/llama-7b-ggml "hi"
-> Downloading [================>          ] 66.67% (2/3) 30.2MB/s
-```
-
 ## Documentation
 
 - [Development](docs/development.md)

desktop/README.md (+17 -3)

@@ -1,18 +1,32 @@
 # Desktop
 
-The Ollama desktop experience
+The Ollama desktop experience. This is an experimental, easy-to-use app for running models with [`ollama`](https://github.com/jmorganca/ollama).
+
+## Download
+
+- [macOS](https://ollama.ai/download/darwin_arm64) (Apple Silicon)
+- macOS (Intel – Coming soon)
+- Windows (Coming soon)
+- Linux (Coming soon)
 
 ## Running
 
-In the background run the `ollama.py` [development](../docs/development.md) server:
+In the background, run the Ollama server `ollama.py`:
 
 ```
 python ../ollama.py serve --port 7734
 ```
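+
+Optionally, before starting the app, verify the server is reachable with a quick Python check (hypothetical, not part of Ollama):
+
+```python
+import socket
+
+# Probe port 7734 to confirm the ollama.py server is accepting connections.
+try:
+    with socket.create_connection(("127.0.0.1", 7734), timeout=2):
+        print("ollama server is listening on port 7734")
+except OSError:
+    print("server not reachable; is it running?")
+```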
 
-Then run the desktop app:
+Then run the desktop app with `npm start`:
 
 ```
 npm install
 npm start
 ```
+
+## Coming soon
+
+- Browse the latest available models on Hugging Face and other sources
+- Keep track of previous conversations with models
+- Switch between models
+- Connect to remote Ollama servers to run models