**Setup**

Build the whisper.cpp server:

```shell
make {/path/to/whisper.cpp/server}
```

Then set `whisperServer` in `routes.go` with the path to the server binary.

**CLI**

```shell
./ollama run llama3 [PROMPT] --speech
./ollama run llama3 --speech
```

Note: the CLI uses the default whisper model.
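If you do not already have a whisper.cpp checkout, here is a minimal sketch for producing the server binary and a ggml model (illustrative only; this branch appears to vendor whisper.cpp under `llm/whisper.cpp`, so adjust paths to your tree):

```shell
# Illustrative sketch: fetch whisper.cpp, download a ggml model, and
# build the bundled HTTP server example that whisperServer points to.
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
bash ./models/download-ggml-model.sh base.en   # fetches ggml-base.en.bin
make server                                    # builds the server example
```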
**`/api/generate` parameters**

- `speech` (required):
  - `audio` (required): path to the audio file
  - `model` (optional): path to the whisper model; uses the default if null
  - `transcribe` (optional): if true, transcribes the audio file and returns the transcription
  - `keep_alive` (optional): sets how long the whisper model stays loaded in memory (default: `5m`)
- `prompt` (optional): if not null, passed to the language model together with the transcribed audio

**Example: transcription only**

```shell
curl http://localhost:11434/api/generate -d '{
"speech": {
"model": "/Users/royhan-ollama/.ollama/whisper/ggml-base.en.bin",
"audio": "/Users/royhan-ollama/ollama/llm/whisper.cpp/samples/jfk.wav",
"transcribe": true,
"keep_alive": "1m"
},
"stream": false
}' | jq
```
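With `"transcribe": true` and no `prompt`, the call should return the transcription itself rather than a completion. Assuming it is surfaced in the standard `response` field of `/api/generate`, extracting just the text looks like:

```shell
# Assumption: the transcription comes back in the usual "response"
# field of /api/generate; jq -r strips the JSON quoting.
curl -s http://localhost:11434/api/generate -d '{
  "speech": {
    "model": "/Users/royhan-ollama/.ollama/whisper/ggml-base.en.bin",
    "audio": "/Users/royhan-ollama/ollama/llm/whisper.cpp/samples/jfk.wav",
    "transcribe": true
  },
  "stream": false
}' | jq -r '.response'
```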
**Example: prompt plus transcribed audio**

```shell
curl http://localhost:11434/api/generate -d '{
"model": "llama3",
"prompt": "What do you think about this quote?",
"speech": {
"model": "/Users/royhan-ollama/.ollama/whisper/ggml-base.en.bin",
"audio": "/Users/royhan-ollama/ollama/llm/whisper.cpp/samples/jfk.wav",
"keep_alive": "1m"
},
"stream": false
}' | jq
```
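whisper.cpp expects 16-bit, 16 kHz mono WAV input (the bundled `jfk.wav` sample above is already in that format). A hedged conversion sketch with ffmpeg, where `input.m4a` stands in for your own recording:

```shell
# Resample any recording to 16 kHz mono 16-bit PCM, the format
# whisper.cpp decodes; input.m4a is a placeholder for your file.
ffmpeg -i input.m4a -ar 16000 -ac 1 -c:a pcm_s16le speech.wav
```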
**`/api/chat` parameters**

- `model` (required): the language model to chat with
- `speech` (optional):
  - `model` (optional): path to the whisper model; uses the default if null
  - `keep_alive` (optional): sets how long the whisper model stays loaded in memory (default: `5m`)
- `run_speech` (optional): either this flag must be true or `speech` must be passed in for speech mode to run
- `messages`/`message`/`audio` (required): path to the audio file, set on an individual message inside `messages`

**Example**

```shell
curl http://localhost:11434/api/chat -d '{
"model": "llama3",
"speech": {
"model": "/Users/royhan-ollama/.ollama/whisper/ggml-base.en.bin",
"keep_alive": "10m"
},
"messages": [
{
"role": "system",
"content": "You are a Canadian Nationalist"
},
{
"role": "user",
"content": "What do you think about this quote?",
"audio": "/Users/royhan-ollama/ollama/llm/whisper.cpp/samples/jfk.wav"
}
],
"stream": false
}' | jq
```
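For `/api/chat`, a non-streaming reply normally carries the text under `message.content`. Assuming speech mode preserves that response shape, pulling out just the assistant's answer looks like:

```shell
# Assumption: speech mode keeps the standard /api/chat response shape,
# with the reply text at .message.content.
curl -s http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "speech": { "model": "/Users/royhan-ollama/.ollama/whisper/ggml-base.en.bin" },
  "messages": [
    {
      "role": "user",
      "content": "What do you think about this quote?",
      "audio": "/Users/royhan-ollama/ollama/llm/whisper.cpp/samples/jfk.wav"
    }
  ],
  "stream": false
}' | jq -r '.message.content'
```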