To view Ollama's server logs:
On macOS:
cat ~/.ollama/logs/server.log
On Linux:
journalctl -u ollama
If you're running ollama serve directly, the logs will be printed to the console.
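To follow the logs live rather than printing them once, you can use standard follow options (nothing Ollama-specific):
On macOS:
tail -f ~/.ollama/logs/server.log
On Linux:
journalctl -u ollama -f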
Ollama binds to 127.0.0.1 on port 11434 by default. Change the bind address with the OLLAMA_HOST environment variable.
On macOS:
OLLAMA_HOST=0.0.0.0:11435 ollama serve
On Linux:
Create a systemd drop-in directory and set Environment=OLLAMA_HOST:
mkdir -p /etc/systemd/system/ollama.service.d
echo '[Service]' >>/etc/systemd/system/ollama.service.d/environment.conf
echo 'Environment="OLLAMA_HOST=0.0.0.0:11434"' >>/etc/systemd/system/ollama.service.d/environment.conf
Reload systemd and restart Ollama:
systemctl daemon-reload
systemctl restart ollama
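To confirm the new bind address is reachable, you can query the server from another machine on the network; the IP below is only a placeholder for the host running Ollama, and the port should match the one you configured:
curl http://192.168.1.10:11434/
If everything is working, the server should respond with a short "Ollama is running" message.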
Ollama allows cross-origin requests from 127.0.0.1 and 0.0.0.0 by default. Add additional origins with the OLLAMA_ORIGINS environment variable:
On macOS:
OLLAMA_ORIGINS=http://192.168.1.1:*,https://example.com ollama serve
On Linux:
echo 'Environment="OLLAMA_ORIGINS=http://129.168.1.1:*,https://example.com"' >>/etc/systemd/system/ollama.service.d/environment.conf
Reload systemd and restart Ollama:
systemctl daemon-reload
systemctl restart ollama
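To check whether an origin is accepted, you can send a CORS preflight request with curl and look for an Access-Control-Allow-Origin header in the response; the origin below is an example and should be one you added to OLLAMA_ORIGINS:
curl -i -X OPTIONS http://localhost:11434/api/tags -H "Origin: https://example.com" -H "Access-Control-Request-Method: GET"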
Models are stored in the following locations:
On macOS:
~/.ollama/models
On Linux:
/usr/share/ollama/.ollama/models
Below the models directory you will find a structure similar to the following:
.
├── blobs
└── manifests
└── registry.ollama.ai
├── f0rodo
├── library
├── mattw
└── saikatkumardey
There is a manifests/registry.ollama.ai/namespace path. In the example above, the user has downloaded models from the official library, as well as from the f0rodo, mattw, and saikatkumardey namespaces. Within each of those directories, you will find a directory for each model downloaded, and inside it a file for each tag. Each tag file is the manifest for the model.
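To see those tag files on disk, you can list everything under the manifests directory (shown here with the default macOS path; substitute your models directory on Linux):
find ~/.ollama/models/manifests -type f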
The manifest lists all the layers used in this model. You will see a media type for each layer, along with a digest. That digest corresponds with a file in the models/blobs directory.
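As an illustration, you can print a manifest and list its layers with jq; the model and tag below (library/llama2, latest) are only examples of something you may have pulled, and the mediaType/digest field names assume the OCI-style layout described above:
cat ~/.ollama/models/manifests/registry.ollama.ai/library/llama2/latest | jq '.layers[] | {mediaType, digest}'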
To modify where models are stored, you can use the OLLAMA_MODELS environment variable. Note that on Linux this means defining OLLAMA_MODELS in a drop-in file under /etc/systemd/system/ollama.service.d, reloading systemd, and restarting the ollama service.
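As a sketch on Linux, reusing the drop-in file created earlier (the /data/ollama/models path is only a placeholder, and the directory must be readable and writable by the user the ollama service runs as):
echo 'Environment="OLLAMA_MODELS=/data/ollama/models"' >>/etc/systemd/system/ollama.service.d/environment.conf
systemctl daemon-reload
systemctl restart ollama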
Ollama does not send your prompts or responses anywhere. Anything you do with Ollama, such as generating a response from the model, stays with you. We don't collect any data about how you use the model. You are always in control of your own data.
There is already a large collection of plugins available for VSCode as well as other editors that leverage Ollama. You can see the list of extensions & plugins at the bottom of the main repository readme.
Ollama is compatible with proxy servers if HTTP_PROXY or HTTPS_PROXY is configured. When using either variable, ensure it is set where ollama serve can access the value. When using HTTPS_PROXY, ensure the proxy certificate is installed as a system certificate.
On macOS:
HTTPS_PROXY=http://proxy.example.com ollama serve
On Linux:
echo 'Environment="HTTPS_PROXY=https://proxy.example.com"' >>/etc/systemd/system/ollama.service.d/environment.conf
Reload systemd and restart Ollama:
systemctl daemon-reload
systemctl restart ollama
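To confirm the service actually sees the variable, you can ask systemd which environment it passes to the unit:
systemctl show --property=Environment ollama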
The Ollama Docker container image can be configured to use a proxy by passing -e HTTPS_PROXY=https://proxy.example.com when starting the container.
Alternatively, the Docker daemon can be configured to use a proxy. Instructions are available for Docker Desktop on macOS, Windows, and Linux, and for the Docker daemon with systemd.
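For the systemd case, a minimal sketch is a drop-in for the docker service itself, mirroring the Ollama drop-in above (the proxy URL is an example):
mkdir -p /etc/systemd/system/docker.service.d
echo '[Service]' >>/etc/systemd/system/docker.service.d/http-proxy.conf
echo 'Environment="HTTPS_PROXY=https://proxy.example.com"' >>/etc/systemd/system/docker.service.d/http-proxy.conf
systemctl daemon-reload
systemctl restart docker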
Ensure the certificate is installed as a system certificate when using HTTPS. This may require a new Docker image when using a self-signed certificate.
FROM ollama/ollama
COPY my-ca.pem /usr/local/share/ca-certificates/my-ca.crt
RUN update-ca-certificates
Build and run this image:
docker build -t ollama-with-ca .
docker run -d -e HTTPS_PROXY=https://my.proxy.example.com -p 11434:11434 ollama-with-ca
The Ollama Docker container can be configured with GPU acceleration on Linux or Windows (with WSL2). This requires the nvidia-container-toolkit. See ollama/ollama for more details.
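For example, once the toolkit is installed, a container can be started with access to all GPUs; the volume and container name below are just common conventions:
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama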
GPU acceleration is not available for Docker Desktop on macOS due to the lack of GPU passthrough and emulation.