
remove mention of gpt-neox in import (#1381)

Signed-off-by: Matt Williams <m@technovangelist.com>
Matt Williams, 1 year ago
parent commit f1ef3f9947
1 file changed, 0 additions and 4 deletions
      docs/import.md

docs/import.md (+0 −4)

@@ -43,7 +43,6 @@ Ollama supports a set of model architectures, with support for more coming soon:
 
 - Llama & Mistral
 - Falcon & RW
-- GPT-NeoX
 - BigCode
 
 To view a model's architecture, check the `config.json` file in its HuggingFace repo. You should see an entry under `architectures` (e.g. `LlamaForCausalLM`).
@@ -184,9 +183,6 @@ python convert.py <path to model directory>
 # FalconForCausalLM
 python convert-falcon-hf-to-gguf.py <path to model directory>
 
-# GPTNeoXForCausalLM
-python convert-gptneox-hf-to-gguf.py <path to model directory>
-
 # GPTBigCodeForCausalLM
 python convert-starcoder-hf-to-gguf.py <path to model directory>
 ```