
Updates and recent posts about Ollama.
Activity
@koukibadr started using Azure Pipelines, 1 week, 3 days ago.
@koukibadr started using Amazon S3, 1 week, 3 days ago.
@ravikyada started using Kubernetes, 1 week, 3 days ago.
@ravikyada started using Jenkins, 1 week, 3 days ago.
@ravikyada started using Grafana, 1 week, 3 days ago.
@ravikyada started using Docker, 1 week, 3 days ago.
@ravikyada started using Amazon Web Services, 1 week, 3 days ago.
Link
@varbear shared a link, 1 week, 5 days ago
FAUN.dev()

Why are top university websites serving p0rn? It comes down to shoddy housekeeping.

Researcher Alex Shakhov found scammers commandeering stale CNAME records. They hijack university subdomains (e.g. berkeley.edu, columbia.edu, washu.edu) and serve p0rn and scam pages. Shakhov found hundreds of abused subdomains across at least 34 universities. He counted thousands of hijacked pages indexed.. read more
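The failure mode described above can be illustrated with a toy audit: given a map of subdomain CNAME targets and the set of targets still under the owner's control, flag the dangling ones an attacker could re-register. The domain names and data here are hypothetical; a real audit would resolve DNS records live rather than work from a static map.

```python
def find_dangling_cnames(cname_map, active_targets):
    """Return subdomains whose CNAME target is no longer registered/active.

    cname_map: {subdomain: cname_target}
    active_targets: set of targets still under the owner's control.
    """
    return sorted(
        sub for sub, target in cname_map.items()
        if target not in active_targets
    )

# Hypothetical example: one record points at a decommissioned SaaS
# hostname that a scammer could claim and serve content from.
records = {
    "events.example.edu": "events.legacy-saas.example.com",
    "www.example.edu": "cdn.example.com",
}
still_registered = {"cdn.example.com"}
print(find_dangling_cnames(records, still_registered))  # ['events.example.edu']
```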

Link
@varbear shared a link, 1 week, 5 days ago
FAUN.dev()

PostgreSQL MVCC, Byte by Byte

PostgreSQL's MVCC stores two 32-bit XIDs per tuple - xmin and xmax. The transaction snapshot decides visibility per tuple. Updates append new tuples and mark the old with xmax. VACUUM reclaims versions only when no active snapshot can see them. Long-running REPEATABLE READ snapshots pin versions and cause b.. read more
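The visibility rule the summary describes can be sketched as a few lines of Python: each tuple version carries xmin (the transaction that created it) and xmax (the transaction that deleted or superseded it), and a snapshot sees a version only if its creator committed before the snapshot and its deleter did not. This is a deliberate simplification of PostgreSQL's actual rules (no subtransactions, in-progress XIDs, or hint bits).

```python
def tuple_visible(xmin, xmax, snapshot_xid, committed):
    """Simplified MVCC visibility check: a snapshot taken at snapshot_xid
    sees a tuple version if its creator (xmin) committed before the
    snapshot, and its deleter (xmax) is absent or committed afterwards."""
    created = xmin in committed and xmin < snapshot_xid
    deleted = xmax is not None and xmax in committed and xmax < snapshot_xid
    return created and not deleted

# An UPDATE appends a new version and stamps the old version's xmax.
old_version = dict(xmin=100, xmax=105)   # row before the update
new_version = dict(xmin=105, xmax=None)  # row after the update
committed = {100, 105}

# A snapshot taken at xid 103 (before the update committed) still sees
# the old version -- this is why long-running REPEATABLE READ snapshots
# pin old versions and block VACUUM from reclaiming them.
print(tuple_visible(**old_version, snapshot_xid=103, committed=committed))  # True
print(tuple_visible(**new_version, snapshot_xid=103, committed=committed))  # False
```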

Link
@varbear shared a link, 1 week, 5 days ago
FAUN.dev()

The AWS Lambda 'Kiss of Death'

A Galera writer node froze after InnoDB undo history ballooned. Pooled AWS Lambda connections left transactions open and pinned MVCC read views. The team killed stalled sessions, enabled innodb_undo_log_truncate, and capped innodb_max_undo_log_size. They also set session transaction_isolation=READ-COMMITTE.. read more
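The settings the summary names can be sketched as a my.cnf fragment. The values here are illustrative, not the team's actual numbers, and note that innodb_undo_log_truncate requires undo tablespaces separate from the system tablespace.

```ini
[mysqld]
# Let purge shrink undo tablespaces once old history is reclaimed.
innodb_undo_log_truncate = ON
# Truncate an undo tablespace once it exceeds this size (1 GiB here,
# which is the default; the cap to use depends on workload).
innodb_max_undo_log_size = 1073741824
# READ-COMMITTED takes a fresh snapshot per statement, so idle pooled
# connections pin far less undo history than REPEATABLE-READ's
# per-transaction read views.
transaction_isolation = READ-COMMITTED
```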

Ollama is an open source tool for running large language models locally on your own machine. It packages model weights, configuration, and a runtime into a single binary with a simple CLI, letting developers pull and run models like Llama, Mistral, or Qwen with one command (`ollama run <model>`). It exposes an HTTP API compatible with parts of the OpenAI spec, which makes it easy to swap into existing tooling. Ollama is one of the most popular entry points for local LLM inference, particularly on macOS and Linux developer machines.
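As a minimal sketch of using that HTTP API from Python with only the standard library: the helper below targets Ollama's native `/api/generate` endpoint on the default port 11434. The model name (`llama3`) is an assumption and may differ on your machine; calling `generate` requires a running `ollama serve` and a previously pulled model.

```python
import json
import urllib.request

OLLAMA_HOST = "http://localhost:11434"  # Ollama's default listen address

def build_generate_request(model, prompt, host=OLLAMA_HOST):
    """Build a POST request for Ollama's native /api/generate endpoint.
    stream=False asks for a single JSON object instead of a chunk stream."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        f"{host}/api/generate",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def generate(model, prompt):
    """Send the request and return the model's text. Requires a running
    `ollama serve` and a pulled model (e.g. `ollama pull llama3`)."""
    with urllib.request.urlopen(build_generate_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]
```

With the server up, `generate("llama3", "Why is the sky blue?")` returns the completion text; the same server also exposes an OpenAI-style `/v1/chat/completions` endpoint for tooling that already speaks that API.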