@cloudgeek7 ・ Feb 03, 2022 ・ 2 min read ・ 589 views ・ Originally posted on medium.com
A brief history of Containers
Containers have been around since the 1970s for creating an isolated environment where applications and services can run without interfering with other processes. Modern containers grew out of Linux kernel isolation primitives such as chroot, namespaces, and cgroups (control groups). The release of Docker in 2013 popularised containers for the masses. Docker packages software into standardised units called containers with everything the software needs to run, including libraries, system tools, code, and runtime.
Origin of Kubernetes
Around 2003–2004, Google developed an internal "run everything in containers" mechanism called Borg, the predecessor to Kubernetes. In 2015, Kubernetes (K8s) 1.0 was released and quickly became the accepted container orchestration standard. Kubernetes is an open-source container orchestration engine for automating the deployment, scaling, and management of containerised applications. Kubernetes is Greek for pilot or helmsman, hence the steering wheel in the Kubernetes logo.
The classic challenges with Kubernetes
Architecture and technology innovation leaders invest in container platform tools to improve productivity and agility and reduce technical debt. And while Kubernetes is clearly a popular platform for building cloud-native applications, the Cloud Native Computing Foundation (CNCF) has identified several factors, such as culture and skills shortages, that give rise to challenges around security, complexity, and monitoring. Further, many enterprises lack the mature DevOps practices needed to operationalise and succeed with large-scale deployments.
According to Red Hat, misconfiguration is the top reason for Kubernetes-related security incidents, and 29% of those surveyed said their biggest concern about their company’s container strategy was a lack of investment in container security.
What is Amazon EKS?
Diagram of the EKS architecture
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed container service for running and scaling Kubernetes applications in the cloud or on-premises. The benefits of having Kubernetes on Amazon EKS include reduced maintenance overhead and ease of integration with AWS services.
How Amazon EKS helps security
Due to the nature of the public cloud, data protection measures are paramount. AWS services like Key Management Service (KMS) help encrypt persistent data used in EKS clusters, such as the EBS volumes attached to EKS worker nodes.
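As a concrete example of KMS in an EKS cluster, EKS can use a customer-managed KMS key for envelope encryption of Kubernetes secrets through the cluster's encryptionConfig. Below is a minimal sketch of that payload; the region, account ID, and key ID in the ARN are hypothetical placeholders:

```python
# Sketch: the encryptionConfig block accepted when creating an EKS cluster,
# enabling envelope encryption of Kubernetes secrets with a KMS key.
# The KMS key ARN below is a hypothetical placeholder, not a real key.
import json

kms_key_arn = "arn:aws:kms:eu-central-1:111122223333:key/example-key-id"

encryption_config = [
    {
        "resources": ["secrets"],  # EKS encrypts Kubernetes 'secrets' resources
        "provider": {"keyArn": kms_key_arn},
    }
]

# With boto3, this structure would be passed as the encryptionConfig
# parameter of eks.create_cluster(...).
print(json.dumps(encryption_config, indent=2))
```

Note that EBS volume encryption for worker nodes is configured separately, on the EC2/launch-template side, rather than through the cluster's encryptionConfig.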
With Identity and Access Management (IAM) identity-based policies, you can specify allowed or denied actions and resources, as well as the conditions under which those actions are permitted. Amazon EKS defines its own set of actions, resources, and condition keys for use in these policies.
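As an illustration, a minimal identity-based policy might allow read-only EKS actions, with a tag-based condition restricting which clusters can be described. The account ID, region, and tag values below are hypothetical placeholders, not recommendations:

```python
# Sketch of an IAM identity-based policy granting read-only EKS access.
# Account ID, region, and the environment tag are hypothetical placeholders.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListAllClusters",
            "Effect": "Allow",
            "Action": "eks:ListClusters",
            "Resource": "*",  # ListClusters does not support resource-level scoping
        },
        {
            "Sid": "DescribeTaggedClusters",
            "Effect": "Allow",
            "Action": "eks:DescribeCluster",
            "Resource": "arn:aws:eks:eu-central-1:111122223333:cluster/*",
            # Condition: only clusters tagged environment=dev (illustrative)
            "Condition": {"StringEquals": {"aws:ResourceTag/environment": "dev"}},
        },
    ],
}

print(json.dumps(policy, indent=2))
```

The second statement shows both resource-level scoping (the cluster ARN pattern) and a condition key in one place; in practice you would tailor the actions and conditions to your own access model.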
We have adopted the Security pillar principles of the AWS Well-Architected Framework for our managed EKS service, which helps you meet your business and regulatory requirements by following current AWS recommendations.
The cloud-native managed EKS service includes the following:
Amazon Elastic Kubernetes Service (EKS) — K8s Cluster provisioning
AWS Key Management Service (KMS) — Security — encryption of data at rest
Terraform — Infrastructure as Code (IaC) — Automation for EKS Landing zone
Identity and Access Management (IAM) — Fine-grained access control
Amazon CloudWatch / OpenSearch — Logging
Amazon Managed Service for Prometheus — Metric collection
Amazon Managed Service for Grafana — Monitoring/Dashboards
For more details, please refer to: https://www.t-systems.com/de/en/newsroom/expert-blogs/managed-eks-service-solves-kubernetes-challenges-484860