Updates and recent posts about Ollama.
Ollama is an open source tool for running large language models locally on your own machine. It packages model weights, configuration, and a runtime into a single binary with a simple CLI, letting developers pull and run models like Llama, Mistral, or Qwen with one command (`ollama run <model>`). It exposes an HTTP API compatible with parts of the OpenAI spec, which makes it easy to swap into existing tooling. Ollama is one of the most popular entry points for local LLM inference, particularly on macOS and Linux developer machines.
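The HTTP API mentioned above can be sketched with nothing but the Python standard library. The snippet below targets Ollama's native `/api/generate` endpoint at the default address `http://localhost:11434`; the model name `llama3` is only a placeholder, and the `generate` helper is an illustrative wrapper, not part of Ollama itself.

```python
import json
import urllib.request

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's native /api/generate endpoint."""
    return {
        "model": model,    # any model you have pulled, e.g. "llama3"
        "prompt": prompt,
        "stream": False,   # request one complete JSON reply instead of a stream
    }

def generate(model: str, prompt: str,
             host: str = "http://localhost:11434") -> str:
    """POST a prompt to a local Ollama server and return the generated text.

    Assumes an Ollama server is already running on `host`.
    """
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `stream` left at its default of true, the server instead returns newline-delimited JSON chunks; setting it to false keeps the client to a single read, which is the simpler choice for a first experiment.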