Welcome to the Ollama Windows preview.
No more WSL required!
Ollama now runs as a native Windows application, including NVIDIA and AMD Radeon GPU support.
After installing Ollama Windows Preview, Ollama will run in the background and the `ollama` command line is available in `cmd`, `powershell`, or your favorite terminal application. As usual, the Ollama API will be served on `http://localhost:11434`.
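For example, once the background service is running you can pull and chat with a model from any terminal (`llama3.2` below is just an example model name; substitute any model you like):

```powershell
# Download the model if needed and start an interactive session
ollama run llama3.2

# List the models installed locally
ollama list
```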
As this is a preview release, you should expect a few bugs here and there. If you run into a problem, you can reach out on Discord or file an issue. Logs will often be helpful in diagnosing the problem (see Troubleshooting below).
Ollama uses unicode characters for progress indication, which may render as unknown squares in some older terminal fonts in Windows 10. If you see this, try changing your terminal font settings.
The Ollama install does not require Administrator, and installs in your home directory by default. You'll need at least 4GB of space for the binary install. Once you've installed Ollama, you'll need additional space for storing the Large Language models, which can be tens to hundreds of GB in size. If your home directory doesn't have enough space, you can change where the binaries are installed, and where the models are stored.
To install the Ollama application in a location different than your home directory, start the installer with the following flag:

```
OllamaSetup.exe /DIR="d:\some\location"
```
To change where Ollama stores the downloaded models instead of using your home directory, set the environment variable `OLLAMA_MODELS` in your user account.

1. Start the Settings (Windows 11) or Control Panel (Windows 10) application and search for environment variables.
2. Click on Edit environment variables for your account.
3. Edit or create a new variable for your user account for `OLLAMA_MODELS`, pointing to where you want the models stored.
4. Click OK/Apply to save.

If Ollama is already running, quit the tray application and relaunch it from the Start menu, or from a new terminal started after you saved the environment variables.
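If you prefer the command line, the same per-user setting can be made from PowerShell (the model directory below is only an example path):

```powershell
# Persist OLLAMA_MODELS for the current user account (example path)
[Environment]::SetEnvironmentVariable("OLLAMA_MODELS", "D:\ollama\models", "User")
```

Remember to restart Ollama afterwards, as described above, so it picks up the new value.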
Here's a quick example showing API access from `powershell`:

```powershell
(Invoke-WebRequest -method POST -Body '{"model":"llama3.2", "prompt":"Why is the sky blue?", "stream": false}' -uri http://localhost:11434/api/generate ).Content | ConvertFrom-Json
```
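With `"stream": false` the generate endpoint returns a single JSON object, so a small variation (the variable name is illustrative) makes it easy to print just the generated text:

```powershell
# Capture the parsed response and print only the generated text
$reply = (Invoke-WebRequest -Method POST `
    -Body '{"model":"llama3.2", "prompt":"Why is the sky blue?", "stream": false}' `
    -Uri http://localhost:11434/api/generate).Content | ConvertFrom-Json
$reply.response
```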
While we're in preview, `OLLAMA_DEBUG` is always enabled, which adds a "view logs" menu item to the app, and increases logging for the GUI app and server.
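If you'd rather follow the server log from a terminal than use the menu item, something like the following works, assuming the log file is named `server.log` in the log directory listed below:

```powershell
# Tail the Ollama server log (filename assumed; see the file locations below)
Get-Content "$env:LOCALAPPDATA\Ollama\server.log" -Tail 50 -Wait
```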
Ollama on Windows stores files in a few different locations. You can view them in the explorer window by hitting `<cmd>+R` and typing in:

- `explorer %LOCALAPPDATA%\Ollama` contains logs, and downloaded updates
- `explorer %LOCALAPPDATA%\Programs\Ollama` contains the binaries (the installer adds this to your user PATH)
- `explorer %HOMEPATH%\.ollama` contains models and configuration
- `explorer %TEMP%` contains temporary executable files in one or more `ollama*` directories

The Ollama Windows installer registers an Uninstaller application. Under Add or remove programs in Windows Settings, you can uninstall Ollama.
> [!NOTE]
> If you have changed the `OLLAMA_MODELS` location, the installer will not remove your downloaded models.
The easiest way to install Ollama on Windows is to use the `OllamaSetup.exe` installer. It installs in your account without requiring Administrator rights.
We update Ollama regularly to support the latest models, and this installer will
help you keep up to date.
If you'd like to install or integrate Ollama as a service, a standalone `ollama-windows-amd64.zip` zip file is available containing only the Ollama CLI and GPU library dependencies for Nvidia and AMD. This allows for embedding Ollama in existing applications, or running it as a system service via `ollama serve` with tools such as NSSM.
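As a rough sketch (the service name and install path below are assumptions, and NSSM must already be on your PATH), registering the standalone binary as a service might look like this:

```powershell
# Register "ollama serve" as a Windows service using NSSM (example paths)
nssm install Ollama "C:\ollama\ollama.exe" serve
nssm set Ollama AppDirectory "C:\ollama"
nssm start Ollama
```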