Understanding Resource Management in Kubernetes
From Containers to Pods: Understanding Resource Management
If you’ve ever worked with standalone Docker containers, you probably know the trouble that comes from not setting CPU and memory limits on resource-hungry containers. If you haven’t, you might one day wonder why your machine has suddenly slowed to a crawl.
When containers on a host use too much memory, the Linux kernel may trigger an Out-of-Memory (OOM) event and start killing processes to reclaim memory — sometimes even the Docker daemon itself. Docker reduces that risk by lowering the daemon’s OOM priority, but this protection doesn’t extend to your running containers, which remain the first to go. CPU poses a similar problem: an unconstrained container can monopolize CPU time and starve every other process on the host.
To avoid this, it's best to control how much of the host’s resources each container can use.
For example:
# Limit the container to 50% of a single CPU core
docker run --cpus="0.5" nginx
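Memory can be capped the same way. The commands below are a sketch using standard `docker run` flags; the `nginx` image is just a placeholder for any workload:

```shell
# Cap the container at 256 MB of RAM. If processes inside exceed
# this, the kernel's OOM killer terminates them within the container
# instead of hunting for victims across the whole host.
docker run --memory="256m" nginx

# CPU and memory limits are commonly combined in a single run
docker run --cpus="0.5" --memory="256m" nginx
```

Setting both limits bounds the blast radius of a misbehaving container: it can exhaust its own allowance, but not the host’s.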
