Baptiste Jamin 59241c5bee
server: add logprobs and top_logprobs support to Ollama's API (#12899)
Adds logprobs support to Ollama's API, including Ollama's
OpenAI-compatible API. When the new 'logprobs' boolean parameter is set
in a request, Ollama returns the log probability of each generated token.
An integer 'top_logprobs' parameter, with values up to 20, can also be
specified; when set, the API additionally returns that many of the most
likely tokens at each token position.

Co-authored-by: Baptiste Jamin <baptiste@crisp.chat>
2025-11-11 08:49:50 -08:00
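
For illustration, a request against the OpenAI-compatible chat endpoint might look like the sketch below; the model name is a placeholder, and the parameter shape follows the OpenAI convention described in the commit message.

```shell
# Sketch only: model name is a placeholder; logprobs/top_logprobs
# follow the OpenAI-compatible parameter shape described above.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "hi"}],
    "logprobs": true,
    "top_logprobs": 5
  }'
```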


# runner

Note: this is a work in progress

A minimal runner for loading a model and running inference via an HTTP server.

```shell
./runner -model <model binary>
```
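
For example (the model path here is a placeholder, and the -port flag is an assumption; 8080 matches the port used in the examples below):

```shell
# Placeholder model path; -port is assumed to select the listen port (default 8080).
./runner -model ./model.gguf -port 8080
```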

## Completion

```shell
curl -X POST -H "Content-Type: application/json" -d '{"prompt": "hi"}' http://localhost:8080/completion
```
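
Since #12899 adds logprobs support to the runner servers as well, a request for token log probabilities might look like this sketch (the placement of the fields in the runner's request body is an assumption):

```shell
# Sketch: assumes the runner accepts the logprobs/top_logprobs fields
# added in #12899 at the top level of the completion request.
curl -X POST -H "Content-Type: application/json" \
  -d '{"prompt": "hi", "logprobs": true, "top_logprobs": 5}' \
  http://localhost:8080/completion
```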

## Embeddings

```shell
curl -X POST -H "Content-Type: application/json" -d '{"prompt": "turn me into an embedding"}' http://localhost:8080/embedding
```
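
For longer inputs, the JSON body can be read from a file instead of being passed inline, using curl's standard @file syntax:

```shell
# request.json contains e.g. {"prompt": "turn me into an embedding"}
curl -X POST -H "Content-Type: application/json" \
  -d @request.json http://localhost:8080/embedding
```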