Helm in Practice

Designing, Deploying, and Operating Kubernetes Applications at Scale


Preface

In the early days of Kubernetes, shortly after Google open-sourced it in June 2014, it became clear almost immediately that users needed more than the core API. While Kubernetes provided the primitives for orchestration (Pods, Services, and later Deployments), it lacked a cohesive way to manage the lifecycle of an application as a single entity.

At the time, developers and operators resorted to manually creating and managing YAML files for each application, each Kubernetes resource, each environment, and each deployment scenario.

In the first year of Kubernetes' life (v1.0 to v1.2), engineers managed applications by applying individual manifest files. This worked for simple microservices, but as applications grew, a single logical service often required five or six different Kubernetes objects: a Deployment, a Service, an Ingress, a ConfigMap, and perhaps a PersistentVolumeClaim.

Today, the number of resources per application has only increased. CRDs (Custom Resource Definitions) were introduced, and automation tools proliferated, from autoscalers to Prometheus exporters and service meshes. Over time, managing these resources manually became not only complicated but error-prone.

The transition from manual YAML management to specialized deployment tools was a turning point in the ecosystem. Many tools emerged to fill this lacuna. The pain stemmed from complexity and repeatability, and the solution was, unsurprisingly, abstraction and automation. Kubernetes, without an additional layer for application management, was simply not sufficient. This is not to say that Kubernetes was not powerful; rather, as I often say, the cloud-native world looks a lot like the Unix philosophy: "Do one thing and do it well." Kubernetes is a powerful container orchestrator, but it was not designed to be an application lifecycle manager. This manifested in two main problems:

1) The complexity and repeatability problem: There is no native way to reuse the same YAML for "Staging" and "Production" without manually editing values like replica counts or image tags. Engineers found themselves writing complex sed or awk scripts, or using Kubernetes' imperative patch commands, just to update a version number across multiple resources (see the sketch after this list).

2) The rollback problem: Kubernetes keeps no history of an application as a whole. A Deployment can undo its own rollout, but there is no single command to return an entire application, with all of its ConfigMaps, Services, and other resources, to a known-good previous state. Rolling back meant digging old manifests out of version control and reapplying them by hand.
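To make the first problem concrete, here is a minimal sketch of the kind of script teams relied on before dedicated tooling existed. The directory layout, registry, and application name are hypothetical; only sed and the standard kubectl commands are real.

```bash
#!/usr/bin/env bash
# Bump an application's image tag across every manifest and reapply.
# Hypothetical layout: all YAML for the app lives under k8s/<env>/.
set -euo pipefail

ENV="${1:?usage: bump.sh <env> <new-tag>}"   # e.g. staging or production
NEW_TAG="${2:?usage: bump.sh <env> <new-tag>}"

# Rewrite the image tag in place in every manifest for this environment.
# (GNU sed shown; on macOS, sed -i requires an explicit suffix argument.)
sed -i "s|\(image: registry.example.com/myapp:\).*|\1${NEW_TAG}|" k8s/"${ENV}"/*.yaml

# Reapply everything and hope nothing else drifted in the meantime.
kubectl apply -f k8s/"${ENV}"/
```

Every team ended up with a slightly different variant of this script, and none of them recorded what was deployed before the change, which is precisely the rollback problem described above.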
