Pelagia is a Kubernetes controller that provides all-in-one management for Ceph clusters installed by Rook. It delivers two main features:

Aggregates all Rook Custom Resources (CRs) into a single CephDeployment resource, simplifying the management of Ceph clusters.
Provides automated lifecycle management (LCM) of Rook Ceph OSD nodes for bare-metal clusters. Automated LCM is handled by the dedicated CephOsdRemoveTask resource.
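To give a feel for the aggregation idea, a CephDeployment might look roughly like the sketch below. This is a hypothetical illustration only: the API group, version, and field names are assumptions, not the verified CRD schema, so consult the Pelagia API reference for the real specification.

```yaml
# Hypothetical sketch of a CephDeployment resource.
# apiVersion, namespace, and all spec field names are illustrative
# assumptions, not the verified Pelagia CRD schema.
apiVersion: lcm.mirantis.com/v1alpha1   # assumed API group/version
kind: CephDeployment
metadata:
  name: ceph-cluster
  namespace: pelagia                    # assumed namespace
spec:
  nodes:                                # Ceph nodes, their roles and devices
    - name: worker-0
      roles: [mon, mgr]
    - name: worker-1
      roles: [osd]
      devices:
        - name: /dev/sdb
  pools:                                # aggregated CephBlockPool definitions
    - name: kubernetes
      replicated:
        size: 3
```

The point of the single resource is that nodes, pools, filesystems, and object stores live in one spec instead of being spread across separate Rook CRs.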

In short, Pelagia is designed to simplify the management of Rook-installed Ceph clusters on Kubernetes.

Being solid Rook users, we had dozens of Rook CRs to manage. Thus, one day we decided to create a single resource that would aggregate all Rook CRs and deliver a smoother LCM experience. This is how Pelagia was born.

It supports almost all of the Rook CR APIs, including CephCluster, CephBlockPool, CephFilesystem, CephObjectStore, and others, aggregating them into a single specification. We continuously work on improving Pelagia's API, adding new features, and enhancing existing ones.

Pelagia collects the Ceph cluster state and the statuses of all Rook CRs into a single CephDeploymentHealth CR. This resource highlights Ceph cluster and Rook API issues, if any.
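A CephDeploymentHealth status might look something like the following sketch. Again, the field names and API group are illustrative assumptions rather than the verified schema; only the general shape (Ceph health plus Rook CR statuses rolled into one status) reflects what the text above describes.

```yaml
# Hypothetical CephDeploymentHealth sketch; apiVersion and status
# field names are assumptions, not the verified schema.
apiVersion: lcm.mirantis.com/v1alpha1   # assumed API group/version
kind: CephDeploymentHealth
metadata:
  name: ceph-cluster
status:
  state: Ok                 # assumed overall verdict field
  ceph:
    health: HEALTH_OK       # Ceph's own health status (HEALTH_OK/WARN/ERR)
  rookCephObjects:
    cephCluster: Ready      # assumed per-CR status rollup
    cephBlockPools: Ready
```

Operators can then watch one resource for cluster health instead of polling each Rook CR individually.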

Another important feature we implemented in Pelagia is the automated lifecycle management of Rook Ceph OSD nodes for bare-metal clusters. It is delivered by the CephOsdRemoveTask resource, which automates the process of removing OSD disks and nodes from the cluster. We use this feature daily in our day-2 operations.
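As a rough illustration, requesting the removal of a single OSD disk could be expressed as a CephOsdRemoveTask like the sketch below. The API group and spec fields here are hypothetical assumptions for illustration; the actual task schema is defined by Pelagia's CRDs.

```yaml
# Hypothetical CephOsdRemoveTask sketch; apiVersion and spec field
# names are illustrative assumptions, not the verified schema.
apiVersion: lcm.mirantis.com/v1alpha1   # assumed API group/version
kind: CephOsdRemoveTask
metadata:
  name: remove-worker-1-sdb
  namespace: pelagia                    # assumed namespace
spec:
  nodes:
    worker-1:                           # node hosting the OSD
      cleanupByDevice:
        - device: /dev/sdb              # OSD disk to remove and clean up
```

Once applied, the controller is responsible for the usual manual sequence (marking the OSD out, waiting for rebalance, purging it, and cleaning the disk), which is exactly the day-2 routine the resource automates.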