
Updates and recent posts about Pelagia.
@faun shared a link, 3 weeks, 3 days ago

Writing Load Balancer From Scratch In 250 Line of Code

A developer rolled out a fully working **Go load balancer** with a clean **Round Robin** setup—and hooks for dropping in smarter strategies like **Least Connection** or **IP Hash**. Backend servers live in a custom server pool. Swapping balancing logic? Just plug into the interface...
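
The article's implementation is in Go; purely as an illustration of the same idea (a pluggable strategy interface with Round Robin as the default), here is a minimal Python sketch with made-up backend addresses:

```python
import itertools
from typing import Iterable, Protocol


class Strategy(Protocol):
    """Anything that can pick the next backend, e.g. least-connection or IP hash."""

    def next_backend(self) -> str: ...


class RoundRobin:
    """Hand out backends from a fixed pool in circular order."""

    def __init__(self, backends: Iterable[str]) -> None:
        self._cycle = itertools.cycle(list(backends))

    def next_backend(self) -> str:
        return next(self._cycle)


pool: Strategy = RoundRobin(["http://10.0.0.1:8080", "http://10.0.0.2:8080"])
for _ in range(4):
    print(pool.next_backend())  # alternates between the two backends
```

Swapping in a smarter strategy is then just a matter of providing another object with the same `next_backend` method.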

@faun shared a link, 3 weeks, 3 days ago

Privacy for subdomains: the solution

A two-container setup using **acme.sh** gets Let's Encrypt certs running on a Synology NAS—thanks, Docker. No built-in Certbot support? No problem. Cloudflare DNS API token handles auth. Scheduled tasks handle renewal...

@faun shared a link, 3 weeks, 3 days ago

Uncommon Uses of Common Python Standard Library Functions

A fresh guide gives old Python friends a second look—turns out, tools like **itertools.groupby**, **zip**, **bisect**, and **heapq** aren’t just standard; they’re slick solutions to real problems. Think run-length encoding, matrix transposes, or fast, sorted inserts without bringing in another depen..
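
For a quick taste of the patterns the guide covers, here are a few self-contained snippets (standard library only):

```python
import bisect
import heapq
from itertools import groupby

# Run-length encoding: groupby collapses consecutive duplicates.
print([(ch, len(list(grp))) for ch, grp in groupby("aaabbc")])  # [('a', 3), ('b', 2), ('c', 1)]

# Matrix transpose: zip(*rows) pairs up columns.
print([list(col) for col in zip(*[[1, 2, 3], [4, 5, 6]])])  # [[1, 4], [2, 5], [3, 6]]

# Fast insert into an already-sorted list.
scores = [10, 20, 30]
bisect.insort(scores, 25)
print(scores)  # [10, 20, 25, 30]

# Smallest k items without sorting everything.
print(heapq.nsmallest(2, [7, 3, 9, 1]))  # [1, 3]
```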

@faun shared a link, 3 weeks, 3 days ago

Authentication Explained: When to Use Basic, Bearer, OAuth2, JWT & SSO

Modern apps don’t just check passwords—they rely on **API tokens**, **OAuth**, and **Single Sign-On (SSO)** to know who’s knocking before they open the door...
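
As a small illustration of the first two schemes, here is how Basic and Bearer Authorization headers are typically constructed (the credentials and token below are placeholders):

```python
import base64


def basic_auth_header(user: str, password: str) -> dict:
    """HTTP Basic: base64 of 'user:password' sent on every request."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}


def bearer_auth_header(token: str) -> dict:
    """HTTP Bearer: an opaque or JWT token, typically issued by an OAuth2/SSO provider."""
    return {"Authorization": f"Bearer {token}"}


print(basic_auth_header("alice", "s3cret"))             # placeholder credentials
print(bearer_auth_header("eyJhbGciOiJIUzI1NiJ9.x.y"))   # placeholder token
```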

@faun shared a link, 3 weeks, 3 days ago

Becoming a Research Engineer at a Big LLM Lab - 18 Months of Strategic Career Development

To land a role at a big LLM lab like Mistral, mix efficient **tactical** moves (like LeetCode practice) with **strategic** plays, like building a powerful portfolio and a solid network. Balance is key; aim to impress and prepare well without overlooking the power of strategy in shaping a successful career...

@faun shared a link, 3 weeks, 3 days ago

Jupyter Agents: training LLMs to reason with notebooks

Hugging Face dropped an open pipeline and dataset for training small models—think **Qwen3-4B**—into sharp **Jupyter-native data science agents**. They pulled curated Kaggle notebooks, whipped up synthetic QA pairs, added lightweight **scaffolding**, and went full fine-tune. Net result? A **36% jump ..
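
Not Hugging Face's actual pipeline, but a toy sketch of the execute-and-feed-back loop that notebook-style scaffolding implies: run a model-generated cell, capture its output, and hand it back to the model.

```python
import contextlib
import io


def run_cell(code: str, namespace: dict) -> str:
    """Execute one generated 'cell' and capture what it prints.

    exec() is only acceptable here because the code is our own toy example;
    real agents sandbox generated code.
    """
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, namespace)
    return buffer.getvalue()


shared_ns: dict = {}
output = run_cell("import statistics\nprint(statistics.mean([1, 2, 3, 4]))", shared_ns)
print(output)  # in an agent loop, this output would be appended to the model's context
```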

@faun shared a link, 3 weeks, 3 days ago

Building a Natural Language Interface for Apache Pinot with LLM Agents

MiQ plugged **Google’s Agent Development Kit** into their stack to spin up **LLM agents** that turn plain English into clean, validated SQL. These agents speak directly to **Apache Pinot**, firing off real-time queries without the usual parsing pain. Behind the scenes, it’s a slick handoff: NL2SQL ..
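
As a rough sketch of the final hop (not MiQ's code), assuming a Pinot broker on localhost:8099 exposing the standard /query/sql endpoint and a made-up table, validated SQL could be fired off like this:

```python
import requests

PINOT_BROKER = "http://localhost:8099"  # assumption: default broker port


def run_validated_query(sql: str) -> dict:
    """Reject anything that is not a plain SELECT, then query the broker."""
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("only SELECT statements are allowed")
    resp = requests.post(f"{PINOT_BROKER}/query/sql", json={"sql": sql}, timeout=10)
    resp.raise_for_status()
    return resp.json()


# 'impressions' and 'campaign' are made-up table/column names.
print(run_validated_query("SELECT campaign, COUNT(*) FROM impressions GROUP BY campaign LIMIT 5"))
```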

@faun shared a link, 3 weeks, 3 days ago

The productivity paradox of AI coding assistants

A July 2025 METR trial dropped a twist: seasoned devs using Cursor with Claude 3.5/3.7 moved **19% slower** while thinking they were **20% faster**. Chalk it up to AI-induced confidence inflation. Faros AI tracked over **10,000 developers**. More AI didn’t mean more done. It meant more juggling, ..

@faun shared a link, 3 weeks, 3 days ago

Inside NVIDIA GPUs: Anatomy of high performance matmul kernels

NVIDIA Hopper packs serious architectural tricks. At the core: **Tensor Memory Accelerator (TMA)**, **tensor cores**, and **swizzling**—the trio behind async, cache-friendly matmul kernels that flirt with peak throughput. But folks aren't stopping at cuBLAS. They're stacking new tactics: **warp-gro..

@faun shared a link, 3 weeks, 3 days ago

5 Free AI Courses from Hugging Face

Hugging Face just rolled out a sharp set of free AI courses. Real topics, real tools—think **AI agents, LLMs, diffusion models, deep RL**, and more. It’s hands-on from the jump, packed with frameworks like LangGraph, Diffusers, and Stable Baselines3. You don’t just read about models—you build ‘em i..

Pelagia is a Kubernetes controller that provides all-in-one management for Ceph clusters installed by Rook. It delivers two main features:

- Aggregates all Rook Custom Resources (CRs) into a single CephDeployment resource, simplifying the management of Ceph clusters.
- Provides automated lifecycle management (LCM) of Rook Ceph OSD nodes for bare-metal clusters. Automated LCM is managed by the dedicated CephOsdRemoveTask resource.

Pelagia is designed to simplify the management of Rook-installed Ceph clusters in Kubernetes.

As committed Rook users, we had dozens of Rook CRs to manage, so one day we decided to create a single resource that would aggregate all of them and deliver a smoother LCM experience. This is how Pelagia was born.

It supports almost all of the Rook CR APIs, including CephCluster, CephBlockPool, CephFilesystem, CephObjectStore, and others, aggregating them into a single specification. We continuously work on improving Pelagia's API, adding new features, and enhancing existing ones.
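
As a sketch of what driving this API from code could look like, using the Kubernetes Python client; the API group, version, resource plural, namespace, and spec contents below are placeholders rather than Pelagia's documented schema:

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# Placeholders: substitute Pelagia's actual API group/version and your namespace.
GROUP, VERSION, NAMESPACE = "lcm.example.org", "v1alpha1", "ceph-lcm"

ceph_deployment = {
    "apiVersion": f"{GROUP}/{VERSION}",
    "kind": "CephDeployment",
    "metadata": {"name": "ceph-cluster"},
    # The spec aggregates what would otherwise be separate Rook CRs
    # (CephCluster, CephBlockPool, CephFilesystem, CephObjectStore, ...).
    "spec": {},  # illustrative only; consult Pelagia's CephDeployment reference
}

api.create_namespaced_custom_object(GROUP, VERSION, NAMESPACE, "cephdeployments", ceph_deployment)
```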

Pelagia collects the Ceph cluster state and the statuses of all Rook CRs into a single CephDeploymentHealth CR. This resource highlights Ceph cluster and Rook API issues, if any.
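
Reading that aggregated status could look like the following sketch (same placeholder group, version, and namespace as above):

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# Same placeholder group/version/namespace as in the previous sketch.
health = api.get_namespaced_custom_object(
    "lcm.example.org", "v1alpha1", "ceph-lcm", "cephdeploymenthealths", "ceph-cluster"
)
print(health.get("status", {}))  # aggregated Ceph cluster state and Rook CR statuses
```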

Another important thing we implemented in Pelagia is the automated lifecycle management of Rook Ceph OSD nodes for bare-metal clusters. This feature is delivered by the CephOsdRemoveTask resource, which automates the removal of OSD disks and nodes from the cluster. We use this feature in our day-2 operations routine.
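
A removal task could be created along the same lines; again, the group, version, plural, and spec fields are illustrative placeholders, not Pelagia's documented schema:

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# Placeholders again: group/version/plural and the spec contents are illustrative.
task = {
    "apiVersion": "lcm.example.org/v1alpha1",
    "kind": "CephOsdRemoveTask",
    "metadata": {"name": "remove-osds-worker-3"},
    "spec": {},  # which OSDs or nodes to remove is defined by Pelagia's task schema
}

api.create_namespaced_custom_object(
    "lcm.example.org", "v1alpha1", "ceph-lcm", "cephosdremovetasks", task
)
```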