Updates and recent posts about Pelagia
@varbear shared a link, 2 days, 14 hours ago
FAUN.dev()

Confessions of a Software Developer: No More Self-Censorship

A mid-career dev hits pause after ten years in the game - realizing core skills like polymorphism, SQL, and automated testing never quite clicked. Leadership roles, shipping products, mentoring junior devs - none of it filled those gaps. They'd been writing C#/.NET for a while too. Not out of love, just…

@varbear shared a link, 2 days, 14 hours ago
FAUN.dev()

Building a Blockchain in Go: From 'Hello, Block' to 10,000 TPS

A new Go tutorial shows how to build a lean, fast blockchain - clocking ~10,000 TPS - without the usual bloat. It covers the full stack: P2P networking, custom consensus, and proper state management. No unbounded mempools. No missing snapshots. Just a chain that actually runs, benchmarked on real machines…
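
Not the tutorial's code, but the "Hello, Block" starting point it builds from is easy to sketch in plain Go - blocks chained together by SHA-256 hashes. An illustrative sketch only:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"time"
)

// Block is a minimal hash-linked block: each block commits to the previous
// block's hash, which is what makes the chain tamper-evident.
type Block struct {
	Index     int
	Timestamp int64
	Data      string
	PrevHash  string
	Hash      string
}

// hashBlock derives the block hash from its contents and the previous hash.
func hashBlock(b Block) string {
	record := fmt.Sprintf("%d%d%s%s", b.Index, b.Timestamp, b.Data, b.PrevHash)
	sum := sha256.Sum256([]byte(record))
	return hex.EncodeToString(sum[:])
}

// newBlock appends a block on top of the previous one.
func newBlock(prev Block, data string) Block {
	b := Block{
		Index:     prev.Index + 1,
		Timestamp: time.Now().Unix(),
		Data:      data,
		PrevHash:  prev.Hash,
	}
	b.Hash = hashBlock(b)
	return b
}

func main() {
	genesis := Block{Index: 0, Timestamp: time.Now().Unix(), Data: "Hello, Block"}
	genesis.Hash = hashBlock(genesis)

	next := newBlock(genesis, "second block")
	fmt.Println(genesis.Hash)
	fmt.Println(next.Hash)
}
```

Everything past this - P2P networking, consensus, bounded mempools, snapshots - is what the tutorial layers on top to reach its ~10,000 TPS figure.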

@varbear shared a link, 2 days, 14 hours ago
FAUN.dev()

Inside the GitHub Infrastructure Powering North Korea’s Contagious Interview npm Attacks

The Socket Threat Research Team has been following North Korea’s Contagious Interview operation as it targets blockchain and Web3 developers through fake job interviews. The campaign has added at least 197 malicious npm packages and over 31,000 downloads since last report, showcasing the adaptability of…

@varbear shared a link, 2 days, 14 hours ago
FAUN.dev()

Before You Push: Implementing Quality Gates in Your Software Project

This post discusses best practices for automated testing in software engineering, including unit tests and integration tests for databases, APIs, and emulators. It also covers end-to-end tests using tools like Cypress, Appium, Postman, and more. Additionally, it highlights the importance of environment…
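
As a generic illustration of the cheapest gate in that stack (not code from the post), a Go unit test like this runs on every push before anything heavier kicks in:

```go
package pricing

import "testing"

// ApplyDiscount is the unit under test: a tiny pure function that is cheap
// to verify on every push.
func ApplyDiscount(price, percent float64) float64 {
	return price - price*percent/100
}

// TestApplyDiscount is a table-driven test, the idiomatic Go way to cover
// several cases in a single quality gate.
func TestApplyDiscount(t *testing.T) {
	cases := []struct {
		name    string
		price   float64
		percent float64
		want    float64
	}{
		{"no discount", 100, 0, 100},
		{"ten percent", 100, 10, 90},
		{"full discount", 80, 100, 0},
	}
	for _, c := range cases {
		if got := ApplyDiscount(c.price, c.percent); got != c.want {
			t.Errorf("%s: got %v, want %v", c.name, got, c.want)
		}
	}
}
```

Wired into CI as `go test ./...`, it fails the build before the integration and end-to-end suites even start.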

@varbear shared a link, 2 days, 14 hours ago
FAUN.dev()

Partitions, Sharding, and Split-for-Heat in DynamoDB

DynamoDB starts to grumble when a single partition gets hit with more than 1,000 WCU. To dodge throttling, writes need to fan out across shards. Recommended move: start with 10 logical shards. Watch CloudWatch metrics. Dial N up or down. Let burst and adaptive capacity buy you breathing room - until Split-for-Heat…
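
The fan-out itself is just key construction: suffix the hot partition key with a calculated shard number. A minimal Go sketch, where the attribute names and the shard count of 10 are illustrative assumptions, not something prescribed by the article:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardCount is the N to start with; tune it by watching CloudWatch
// throttling and consumed-WCU metrics.
const shardCount = 10

// shardedKey spreads writes for a single hot partition key (a date, a
// tenant, ...) across shardCount logical shards by deriving the suffix
// from another attribute of the item. The suffix is deterministic, so
// point reads can recompute it.
func shardedKey(hotKey, itemID string) string {
	h := fnv.New32a()
	h.Write([]byte(itemID))
	return fmt.Sprintf("%s#%d", hotKey, h.Sum32()%shardCount)
}

func main() {
	// Two items under the same hot key most likely land on different
	// logical shards, so their writes hit different DynamoDB partitions.
	fmt.Println(shardedKey("2025-12-01", "order-1234"))
	fmt.Println(shardedKey("2025-12-01", "order-5678"))
}
```

Reads that need the whole hot key then query all 10 suffixes and merge the results - the usual price of write sharding.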

@varbear shared a link, 2 days, 14 hours ago
FAUN.dev()

How to Get Developers in Your Team to Contribute to Your Test Automation

A fresh blog post dives into how to get devs pulling their weight on test automation - not as extra credit, but as part of shipping code. The playbook: tie automation work straight to the definition of done, clear up who owns what, and stop pretending delivery pressure is a mystery. The big idea? Most…

@varbear shared a link, 2 days, 14 hours ago
FAUN.dev()

Building Mac Farm: Running 2000+ iOS Pipelines Daily

At Trendyol, they run over 2,000 iOS pipelines daily across 130 Mac machines, executing 50,000+ unit tests and 10,000+ UI tests for their iOS apps. The team initiated a mobile CI transformation to address the challenges of scale and performance as their team grew and AI usage increased. They built a macOS…

@kaptain shared a link, 2 days, 14 hours ago
FAUN.dev()

In-place Pod resizing in Kubernetes: How it works and how to use it

Kubernetes 1.33 and 1.34 take in-place Pod resource updates from beta to battle-ready. You can now tweak CPU and memory on the fly - no Pod restarts needed. It's on by default. What’s new: memory downsizing with guardrails, kubelet metrics that actually tell you what’s going on, and smarter retries that…
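
In practice a resize is a patch against the Pod's resize subresource. A minimal client-go sketch, assuming a 1.33+ cluster with the feature available; the namespace, pod, and container names here are made up:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (in-cluster config would
	// work the same way).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Bump CPU for the "app" container of pod "web-0" without a restart
	// by patching the "resize" subresource instead of the main pod spec.
	patch := []byte(`{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"500m"},"limits":{"cpu":"1"}}}]}}`)
	pod, err := clientset.CoreV1().Pods("default").Patch(
		context.TODO(), "web-0",
		types.StrategicMergePatchType, patch,
		metav1.PatchOptions{}, "resize",
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("patched; desired CPU request:", pod.Spec.Containers[0].Resources.Requests.Cpu())
}
```

Whether the kubelet can apply the change in place - and what happens when memory is downsized - is what the guardrails and retry behavior described in the post are about.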

@kaptain shared a link, 2 days, 14 hours ago
FAUN.dev()

KubeCon North America 2025 Recap: Federation and

HAProxy just dropped Universal Mesh, a fresh spin on service mesh design. Forget the per-service sidecars - this model plants high-speed gateways at the network edges instead. Result? Lighter by 30–50% on resources, easier to upgrade, and way less hassle routing traffic across Kubernetes, VMs, and cloud…

@kaptain shared a link, 2 days, 14 hours ago
FAUN.dev()

Ingress NGINX Is Retiring. Here’s Your Path Forward with HAProxy

The Ingress NGINX project is riding off into the sunset by March 2026. Time to pick a new horse. One strong contender: the HAProxy Kubernetes Ingress Controller. It matches feature-for-feature, comes with deeper observability, and reloads configs without taking your cluster offline. HAProxy’s not stopping…

Pelagia is a Kubernetes controller that provides all-in-one management for Ceph clusters installed by Rook. It delivers two main features:

Aggregates all Rook Custom Resources (CRs) into a single CephDeployment resource, simplifying the management of Ceph clusters.
Provides automated lifecycle management (LCM) of Rook Ceph OSD nodes for bare-metal clusters, driven by the dedicated CephOsdRemoveTask resource.

Pelagia is designed to simplify the management of Rook-installed Ceph clusters in Kubernetes.

As heavy Rook users, we had dozens of Rook CRs to manage, so one day we decided to create a single resource that aggregates them all and delivers a smoother LCM experience. This is how Pelagia was born.

It supports almost all of the Rook CR APIs, including CephCluster, CephBlockPool, CephFilesystem, CephObjectStore, and others, aggregating them into a single specification. We continuously work on improving Pelagia's API, adding new features, and enhancing existing ones.
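
For a sense of what the aggregation looks like from the client side, here is a minimal sketch that lists CephDeployment objects with the client-go dynamic client. The group and version below are assumptions made for illustration - check `kubectl api-resources` on a cluster running Pelagia for the real ones:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GroupVersionResource is assumed here purely for illustration; use
	// whatever group/version your Pelagia installation actually serves.
	gvr := schema.GroupVersionResource{
		Group:    "lcm.mirantis.com", // assumption
		Version:  "v1alpha1",         // assumption
		Resource: "cephdeployments",
	}

	list, err := client.Resource(gvr).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, item := range list.Items {
		// Each item is the single spec that Pelagia expands into the
		// underlying Rook CRs (CephCluster, CephBlockPool, ...).
		fmt.Println(item.GetNamespace(), item.GetName())
	}
}
```

The same pattern applies if you want to read the aggregated status from the CephDeploymentHealth resource described below.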

Pelagia collects the Ceph cluster state and the statuses of all Rook CRs into a single CephDeploymentHealth CR. This resource highlights Ceph cluster and Rook API issues, if any.

Another important feature we implemented in Pelagia is automated lifecycle management of Rook Ceph OSD nodes for bare-metal clusters. It is delivered by the CephOsdRemoveTask resource, which automates removing OSD disks and nodes from the cluster. We use it in our everyday day-2 operations.
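
As a rough sketch of how such a task could be submitted programmatically - the group, version, namespace, and spec fields below are illustrative assumptions, not Pelagia's actual schema - the dynamic client works the same way as above:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Group, version, namespace, and the spec layout are assumptions for
	// illustration only; the real schema is defined by Pelagia's CRD.
	gvr := schema.GroupVersionResource{
		Group:    "lcm.mirantis.com", // assumption
		Version:  "v1alpha1",         // assumption
		Resource: "cephosdremovetasks",
	}

	task := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": gvr.Group + "/" + gvr.Version,
		"kind":       "CephOsdRemoveTask",
		"metadata": map[string]interface{}{
			"name":      "remove-worker-3-osds", // illustrative name
			"namespace": "pelagia",              // assumption
		},
		"spec": map[string]interface{}{
			// Hypothetical field: which node's OSDs to drain and remove.
			"nodes": []interface{}{"worker-3"},
		},
	}}

	created, err := client.Resource(gvr).Namespace("pelagia").Create(
		context.TODO(), task, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.GetName())
}
```

The controller then carries out the OSD removal described above.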