Updates and recent posts about Pelagia
@varbear shared a link on FAUN.dev(), 5 months, 2 weeks ago

Partitions, Sharding, and Split-for-Heat in DynamoDB

DynamoDB starts to grumble when a single partition gets hit with more than 1,000 WCU. To dodge throttling, writes need to fan out across shards. Recommended move: start with 10 logical shards. Watch CloudWatch metrics. Dial N up or down. Let burst and adaptive capacity buy you breathing room - until Split-for…
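
The write-sharding move the summary describes is simple enough to sketch. Below is a minimal, hedged example using boto3; the table name "events", the "pk" key schema, the shard count, and the put_event helper are illustrative assumptions, not details from the article:

```python
import random

import boto3

# Assumed table: partition key "pk" (string); adjust names to your schema.
table = boto3.resource("dynamodb").Table("events")

NUM_SHARDS = 10  # the "start with 10 logical shards" starting point; tune via CloudWatch


def put_event(base_key: str, item: dict) -> None:
    """Spread writes for a hot key across NUM_SHARDS logical shards."""
    shard = random.randrange(NUM_SHARDS)  # pick a shard suffix at random
    table.put_item(Item={"pk": f"{base_key}#{shard}", **item})


# The trade-off: reads fan out too. A query for base_key must hit all
# NUM_SHARDS shard keys and merge the results client-side.
```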

@varbear shared a link on FAUN.dev(), 5 months, 2 weeks ago

How to Get Developers in Your Team to Contribute to Your Test Automation

A fresh blog post dives into how to get devs pulling their weight on test automation - not as extra credit, but as part of shipping code. The playbook: tie automation work straight to the definition of done, clear up who owns what, and stop pretending delivery pressure is a mystery. The big idea? Most…

@kaptain shared a link on FAUN.dev(), 5 months, 2 weeks ago

In-place Pod resizing in Kubernetes: How it works and how to use it

Kubernetes 1.33 and 1.34 take in-place Pod resource updates from beta to battle-ready. You can now tweak CPU and memory on the fly - no Pod restarts needed. It's on by default. What’s new: memory downsizing with guardrails, kubelet metrics that actually tell you what’s going on, and smarter retries th…
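
For flavor, here is roughly what an in-place resize looks like on the wire, as a hedged Python sketch. The update goes through the Pod's resize subresource (what kubectl's --subresource resize targets); the API server address, token handling, and pod/container names are all placeholders, so verify the details against your cluster version:

```python
import requests

API = "https://127.0.0.1:6443"   # placeholder API server address
TOKEN = "..."                    # a token allowed to patch pods/resize
POD, NS, CONTAINER = "web-0", "default", "app"  # placeholders

# Strategic-merge patch that bumps CPU in place, without restarting the Pod.
patch = {"spec": {"containers": [{
    "name": CONTAINER,
    "resources": {"requests": {"cpu": "900m"}, "limits": {"cpu": "1"}},
}]}}

resp = requests.patch(
    f"{API}/api/v1/namespaces/{NS}/pods/{POD}/resize",  # the resize subresource
    json=patch,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/strategic-merge-patch+json",
    },
    verify=False,  # demo only; point at your cluster CA instead
)
resp.raise_for_status()
print(resp.json()["spec"]["containers"][0]["resources"])
```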

@kaptain shared a link on FAUN.dev(), 5 months, 2 weeks ago

KubeCon North America 2025 Recap: Federation and…

HAProxy just dropped Universal Mesh, a fresh spin on service mesh design. Forget the per-service sidecars - this model plants high-speed gateways at the network edges instead. Result? Lighter by 30–50% on resources, easier to upgrade, and way less hassle routing traffic across Kubernetes, VMs, and cl…

@kaptain shared a link on FAUN.dev(), 5 months, 2 weeks ago

Ingress NGINX Is Retiring. Here’s Your Path Forward with HAProxy

The Ingress NGINX project is riding off into the sunset by March 2026. Time to pick a new horse. One strong contender: the HAProxy Kubernetes Ingress Controller. It matches feature-for-feature, comes with deeper observability, and reloads configs without taking your cluster offline. HAProxy’s not stopp…

@kaptain shared a link on FAUN.dev(), 5 months, 2 weeks ago

udwall: A Tool for Making UFW and Docker Play Nice With Each Other

Hexmos dropped udwall, a declarative firewall manager that finally makes UFW and Docker play nice. Docker’s notorious for bulldozing past UFW rules via iptables. udwall patches that hole. It syncs rules across both, auto-reconciles changes, backs up configs, and plugs cleanly into Ansible. No more duct-ta…

@kaptain shared a link on FAUN.dev(), 5 months, 2 weeks ago

Developers don’t care about Kubernetes clusters

Most cloud-native tools obsess over clusters. Not developers. That means poor support for things like promoting code between environments or deploying by feature - not just by repo. The author pushes for a better way: platforms that hide the Kubernetes mess and tame CI/CD. Think feature-driven deplo…

@kaptain shared a link on FAUN.dev(), 5 months, 2 weeks ago

You Want Microservices—But Do You Need Them?

Amazon Prime Video ditched its pricey microservices maze and rebuilt as a single-process monolith, cutting ops costs by 90%. No big press release. Just results. Same move from Twilio Segment. And Shopify. Both pulled their tangled systems back into modular monoliths - cleaner, faster, easier to test, a…

@kaptain shared a link on FAUN.dev(), 5 months, 2 weeks ago

Kubernetes Configuration Good Practices

Stripped down and sharp, the blog lays out Kubernetes config best practices: keep YAML manifests in version control, use Deployments (not raw Pods), and label like you mean it - semantically, not just alphabet soup. It digs into sneaky pain points too, like how YAML mangles booleans (yes ≠ true), and…
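
The YAML-boolean gotcha is easy to demo in a few lines of Python with PyYAML (my example, not the blog's):

```python
import yaml  # PyYAML follows YAML 1.1 boolean rules

print(yaml.safe_load("enabled: yes"))    # {'enabled': True}   'yes' parses as a bool
print(yaml.safe_load("enabled: 'yes'"))  # {'enabled': 'yes'}  quoting keeps the string
print(yaml.safe_load("country: NO"))     # {'country': False}  the classic Norway problem
```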

@kaptain shared a link on FAUN.dev(), 5 months, 2 weeks ago

The Grafana trust problem

Grafana’s been busy clearing the shelves. Grafana Agent, Agent Flow, and OnCall? All deprecated. The replacement: Grafana Alloy - a one-stop observability agent that handles logs, metrics, traces, and OTEL without flinching. Meanwhile, Mimir 3.0 ships with a Kafka-powered ingestion pipeline. More scalabili…

Pelagia is a Kubernetes controller that provides all-in-one management for Ceph clusters installed by Rook. It delivers two main features:

Aggregates all Rook Custom Resources (CRs) into a single CephDeployment resource, simplifying the management of Ceph clusters.
Provides automated lifecycle management (LCM) of Rook Ceph OSD nodes for bare-metal clusters. Automated LCM is driven by a dedicated CephOsdRemoveTask resource.

In short, Pelagia is designed to simplify the management of Rook-installed Ceph clusters in Kubernetes.

As heavy Rook users, we had dozens of Rook CRs to manage, so one day we decided to create a single resource that would aggregate all Rook CRs and deliver a smoother LCM experience. This is how Pelagia was born.

Pelagia supports almost all of the Rook CR APIs, including CephCluster, CephBlockPool, CephFilesystem, CephObjectStore, and others, aggregating them into a single specification. We continuously work on improving Pelagia's API, adding new features, and enhancing existing ones.
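
For illustration only, creating that single specification from code might look like the sketch below. The API group, version, plural, and the empty spec are placeholders I have made up; take the real values and fields from Pelagia's CRDs:

```python
from kubernetes import client, config

config.load_kube_config()

# Placeholder group/version/plural -- not Pelagia's actual API identifiers.
ceph_deployment = {
    "apiVersion": "lcm.example.com/v1alpha1",
    "kind": "CephDeployment",
    "metadata": {"name": "ceph-cluster", "namespace": "pelagia"},
    "spec": {},  # aggregated CephCluster/CephBlockPool/... settings live here
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="lcm.example.com", version="v1alpha1",
    namespace="pelagia", plural="cephdeployments", body=ceph_deployment,
)
```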

Pelagia collects the Ceph cluster state and the statuses of all Rook CRs into a single CephDeploymentHealth CR. This resource highlights Ceph cluster and Rook API issues, if any.
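
Reading the aggregated health back is then a plain custom-object GET, sketched here with the same placeholder API identifiers as above:

```python
from kubernetes import client, config

config.load_kube_config()

# Placeholder group/version/plural -- take the real ones from Pelagia's CRDs.
health = client.CustomObjectsApi().get_namespaced_custom_object(
    group="lcm.example.com", version="v1alpha1",
    namespace="pelagia", plural="cephdeploymenthealths", name="ceph-cluster",
)
print(health.get("status", {}))  # aggregated Ceph cluster + Rook CR statuses
```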

Another important feature we implemented in Pelagia is the automated lifecycle management of Rook Ceph OSD nodes for bare-metal clusters. It is delivered by the CephOsdRemoveTask resource, which automates removing OSD disks and nodes from the cluster. We use this feature in our everyday day-2 operations.
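
A similarly hedged sketch of kicking off an OSD removal task follows; the spec is left empty on purpose because the real CephOsdRemoveTask fields should come from Pelagia's CRD, not be guessed here:

```python
from kubernetes import client, config

config.load_kube_config()

# Placeholder group/version/plural and an empty spec -- consult Pelagia's CRDs.
task = {
    "apiVersion": "lcm.example.com/v1alpha1",
    "kind": "CephOsdRemoveTask",
    "metadata": {"name": "remove-node-3", "namespace": "pelagia"},
    "spec": {},  # which OSD disks/nodes to remove goes here, per the CRD schema
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="lcm.example.com", version="v1alpha1",
    namespace="pelagia", plural="cephosdremovetasks", body=task,
)
```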