To upgrade Ollama, run the installation process again. On the Mac, click the Ollama icon in the menubar and choose the restart option if an update is available.
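On Linux, for example, the latest release can usually be pulled by simply re-running the install script:
curl -fsSL https://ollama.com/install.sh | sh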
Review the Troubleshooting docs for more about using logs.
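As a quick example, on a Linux install managed by systemd the server logs can usually be viewed with journalctl (assuming the default ollama unit name):
journalctl -e -u ollama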
The Ollama server can be configured with environment variables.
If Ollama is run as a macOS application, environment variables should be set using launchctl:
For each environment variable, call launchctl setenv:
launchctl setenv OLLAMA_HOST "0.0.0.0"
Restart the Ollama application.
If Ollama is run as a systemd service, environment variables should be set using systemctl:
Edit the systemd service by calling systemctl edit ollama.service. This will open an editor.
For each environment variable, add an Environment line under the [Service] section:
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
Save and exit.
Reload systemd and restart Ollama:
systemctl daemon-reload
systemctl restart ollama
Ollama binds to 127.0.0.1 on port 11434 by default. Change the bind address with the OLLAMA_HOST environment variable.
Refer to the section above for how to set environment variables on your platform.
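As a minimal sketch, the variable can also be set inline when starting the server manually; the port shown is simply the default:
OLLAMA_HOST=0.0.0.0:11434 ollama serve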
Ollama allows cross-origin requests from 127.0.0.1 and 0.0.0.0 by default. Additional origins can be configured with OLLAMA_ORIGINS.
Refer to the section above for how to set environment variables on your platform.
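For illustration, assuming a hypothetical web app origin of https://app.example.com, the variable could be set like this when starting the server manually:
OLLAMA_ORIGINS=https://app.example.com ollama serve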
By default, models are stored in:
macOS: ~/.ollama/models
Linux: /usr/share/ollama/.ollama/models
If a different directory needs to be used, set the environment variable OLLAMA_MODELS to the chosen directory.
Refer to the section above for how to set environment variables on your platform.
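As a sketch for a systemd install, the variable can be added to the [Service] section described above; the path /data/ollama/models is only a placeholder:
[Service]
Environment="OLLAMA_MODELS=/data/ollama/models"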
No, Ollama runs entirely locally, and conversation data will never leave your machine.
There is already a large collection of plugins available for VSCode as well as other editors that leverage Ollama. See the list of extensions & plugins at the bottom of the main repository readme.
Ollama is compatible with proxy servers if HTTP_PROXY or HTTPS_PROXY is configured. When using either variable, ensure it is set where ollama serve can access the value. When using HTTPS_PROXY, ensure the proxy certificate is installed as a system certificate. Refer to the section above for how to set environment variables on your platform.
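For example, on a systemd install the proxy could be configured alongside the other variables; proxy.example.com is a placeholder address:
[Service]
Environment="HTTPS_PROXY=https://proxy.example.com"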
The Ollama Docker container image can be configured to use a proxy by passing -e HTTPS_PROXY=https://proxy.example.com when starting the container.
Alternatively, the Docker daemon can be configured to use a proxy. Instructions are available for Docker Desktop on macOS, Windows, and Linux, and Docker daemon with systemd.
Ensure the certificate is installed as a system certificate when using HTTPS. This may require a new Docker image when using a self-signed certificate.
FROM ollama/ollama
COPY my-ca.pem /usr/local/share/ca-certificates/my-ca.crt
RUN update-ca-certificates
Build and run this image:
docker build -t ollama-with-ca .
docker run -d -e HTTPS_PROXY=https://my.proxy.example.com -p 11434:11434 ollama-with-ca
The Ollama Docker container can be configured with GPU acceleration in Linux or Windows (with WSL2). This requires the nvidia-container-toolkit. See ollama/ollama for more details.
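As a hedged example, once the toolkit is installed a GPU-enabled container can typically be started with the --gpus flag; the volume and container names are only common defaults:
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama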
GPU acceleration is not available for Docker Desktop in macOS due to the lack of GPU passthrough and emulation.
Networking in the Windows 10 WSL2 virtual machine can be slow. This can impact both installing Ollama and downloading models.
Open Control Panel > Networking and Internet > View network status and tasks and click on Change adapter settings in the left panel. Find the vEthernet (WSL) adapter, right click it and select Properties.
Click on Configure and open the Advanced tab. Search through the properties until you find Large Send Offload Version 2 (IPv4) and Large Send Offload Version 2 (IPv6). Disable both of these properties.