OurBigBook About$ Donate
 Sign in Sign up

ollama-expect

Usage:
./ollama-expect <model> <prompt>
e.g.:
./ollama-expect llama3.2 'What is quantum field theory?'
This generates 100 tokens for the given prompt with the given model.
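The script body is not shown here, but given the name and the parent topic (setting an Ollama parameter on the CLI), a minimal sketch of how such a wrapper could be written with expect follows. The ">>> " prompt pattern, the fixed num_predict value of 100 and the /bye exit are illustrative assumptions, not the actual source:

    #!/usr/bin/expect -f
    # Hypothetical sketch: drive `ollama run` interactively and cap
    # generation at 100 tokens via the REPL's /set parameter command.
    if {[llength $argv] != 2} {
        puts "Usage: ./ollama-expect <model> <prompt>"
        exit 1
    }
    set timeout -1
    set model  [lindex $argv 0]
    set prompt [lindex $argv 1]
    spawn ollama run $model
    expect ">>> "
    send "/set parameter num_predict 100\r"
    expect ">>> "
    send -- "$prompt\r"
    expect ">>> "
    send "/bye\r"
    expect eof

Waiting for the ">>> " REPL prompt after each send keeps the interaction synchronous: expect blocks until the model has finished streaming its answer before /bye is sent.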
Benchmarks:
  • P14s: 4.8 s, CPU only: ~21 tokens/s. For comparison, using the Vulkan backend of llama.cpp gave ~23 tokens/s
  • P51: 9.6 s, using its Nvidia GPU: ~10 tokens/s
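As a sanity check, these rates match dividing the fixed 100-token output by the quoted wall times, assuming those times cover only the generation phase: 100 tokens / 4.8 s ≈ 21 tokens/s on the P14s and 100 tokens / 9.6 s ≈ 10 tokens/s on the P51.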
