# runners
Ollama uses a subprocess model, running one or more child processes to load the LLM. On some platforms (non-containerized Linux, macOS) these executables are carried as payloads inside the main executable via the ../build package. Extraction and discovery of these runners at runtime is implemented in this package. This package also provides the abstraction used to communicate with these subprocesses.
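
For illustration, here is a minimal sketch of the flow described above: write an embedded runner payload to disk, launch it as a child process, and poll it over HTTP until it is ready. The function names, the `--port` flag, and the `/health` path are assumptions for this example only, not this package's actual API.

```go
// Package runnersketch is a hypothetical illustration of the subprocess model:
// extract a runner binary, start it, and wait for it to answer over HTTP.
// It is not the real runners package API.
package runnersketch

import (
	"fmt"
	"net/http"
	"os"
	"os/exec"
	"path/filepath"
	"time"
)

// ExtractAndStart writes the runner payload to a scratch directory, marks it
// executable, and launches it as a subprocess listening on the given port.
func ExtractAndStart(payload []byte, dir, name string, port int) (*exec.Cmd, error) {
	bin := filepath.Join(dir, name)
	if err := os.WriteFile(bin, payload, 0o755); err != nil {
		return nil, fmt.Errorf("extract runner: %w", err)
	}

	// --port is an assumed flag for the example; real runners may differ.
	cmd := exec.Command(bin, "--port", fmt.Sprint(port))
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Start(); err != nil {
		return nil, fmt.Errorf("start runner: %w", err)
	}
	return cmd, nil
}

// WaitReady polls an assumed /health endpoint on the subprocess until it
// responds with 200 OK, giving up once the timeout elapses.
func WaitReady(port int, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	url := fmt.Sprintf("http://127.0.0.1:%d/health", port)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("runner on port %d not ready after %s", port, timeout)
}
```

In this sketch the parent process owns the extracted binary's lifetime: it starts the child with `exec.Cmd` and treats readiness as a separate step, which mirrors the general pattern of discovering, launching, and then communicating with a runner subprocess.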