
`import.md`: formatting and spelling

Jeffrey Morgan, 1 year ago
parent · current commit c416087339
1 file changed, 7 insertions and 7 deletions

docs/import.md · +7 −7

@@ -1,6 +1,6 @@
 # Import a model
 
-This guide walks through creating an Ollama model from an existing model on HuggingFace from PyTorch, Safetensors or GGUF. It optionally covers pushing the model to [ollama.ai](https://ollama.ai/library).
+This guide walks through importing a PyTorch, Safetensors or GGUF model from a HuggingFace repo to Ollama.
 
 ## Supported models
 
@@ -11,7 +11,7 @@ Ollama supports a set of model architectures, with support for more coming soon:
 - GPT-NeoX
 - BigCode
 
-To view a model's architecture, check its `config.json` file. You should see an entry under `architecture` (e.g. `LlamaForCausalLM`).
+To view a model's architecture, check the `config.json` file in its HuggingFace repo. You should see an entry under `architectures` (e.g. `LlamaForCausalLM`).
 
 ## Importing
 
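As a quick sketch of that check from the command line (the stand-in file here is illustrative; in a cloned repo you would grep the real `config.json`):

```shell
# Stand-in config.json; a real repo's file has many more keys.
printf '{ "architectures": ["LlamaForCausalLM"] }\n' > config.json

# The actual check inside a cloned repo:
grep '"architectures"' config.json
# → { "architectures": ["LlamaForCausalLM"] }
```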
@@ -23,7 +23,7 @@ git clone https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1
 cd Mistral-7B-Instruct-v0.1
 ```
 
-### Step 2: Convert and quantize
+### Step 2: Convert and quantize (PyTorch and Safetensors)
 
 A [Docker image](https://hub.docker.com/r/ollama/quantize) with the tooling required to convert and quantize models is available.
 
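One plausible invocation of that image (a sketch: the `/model` mount point and `-q` flag are assumptions, so check the image's documentation on Docker Hub for the exact interface):

```shell
# Run from inside the cloned model repo; mounts it into the container
# and requests 4-bit (q4_0) quantization. Assumed flags, not canonical.
docker run --rm -v "$(pwd)":/model ollama/quantize -q q4_0 /model
```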
@@ -55,7 +55,7 @@ FROM ./q4_0.bin
 TEMPLATE "[INST] {{ .Prompt }} [/INST]"
 ```
 
-### Step 4: Create an Ollama model
+### Step 4: Create the Ollama model
 
 Finally, create a model from your `Modelfile`:
 
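The create command itself falls outside this diff's context lines; as a sketch, assuming the model name `example` used in the surrounding steps:

```shell
# Build an Ollama model named "example" from the Modelfile in the current directory.
ollama create example -f Modelfile
```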
@@ -69,12 +69,12 @@ Next, test the model with `ollama run`:
 ollama run example "What is your favourite condiment?"
 ```
 
-### Step 5: Publish your model (optional - in alpha)
+### Step 5: Publish your model (optional – early alpha)
 
 Publishing models is in early alpha. If you'd like to publish your model to share with others, follow these steps:
 
 1. Create [an account](https://ollama.ai/signup)
-2. Ollama uses SSH keys similar to Git. Find your public key with `cat ~/.ollama/id_ed25519.pub` and copy it to your clipboard.
+2. Run `cat ~/.ollama/id_ed25519.pub` to view your Ollama public key. Copy this to the clipboard.
 3. Add your public key to your [Ollama account](https://ollama.ai/settings/keys)
 
 Next, copy your model to your username's namespace:
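The copy can be done with `ollama cp`, keeping the `<your username>` placeholder used throughout this doc:

```shell
# Copy the local model into your user namespace ahead of pushing.
ollama cp example <your username>/example
```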
@@ -89,7 +89,7 @@ Then push the model:
 ollama push <your username>/example
 ```
 
-After publishing, your model will be available at `https://ollama.ai/<your username>/example`
+After publishing, your model will be available at `https://ollama.ai/<your username>/example`.
 
 ## Quantization reference