The Origins of Kubernetes

It was 2016, and Karl Isenberg was on center stage: "Container Orchestration Wars," he said.

The stage was set for the orchestration race.

Armed to the teeth, the contenders at the time were Mesos (with DC/OS), Kubernetes, Nomad, and Docker Swarm, among others.

Behind each technology stood a company: Google, HashiCorp, Docker Inc., and Apache. Each one was determined to win the day and add a feather to its cap.

The global application container market was expected to grow from 1.2 billion USD in 2018 to 4.98 billion USD by 2023, at a compound annual growth rate of 32.9% during the forecast period.

Which orchestration system would win the war? Who would claim the lion's share?

Would the winner take all? Or would there be multiple winners?

It all seems obvious now: Elvis has left the building, and some technologies couldn't cut the mustard. But back then, this topic was a hot potato in the computing world.

Docker and containers had already entered the mainstream conversation. Docker Inc. pulled a rabbit out of the hat: with its containerization technology, it solved a puzzle that most other container systems had only partially solved.

However, there were still open questions: "How do we use it in production?", "How do we automate it?", and "How do we spread containers across virtual machines?"

I'm your host, Kassandra Russel, and today you get a free ticket to travel back to the 1950s to discover the first containers, then forward to the 1970s, and so on until the present day.

We are going to walk through the fascinating history of containerization and discover how it has evolved. We will talk about container orchestration systems, Docker and the problems it solved, and why Docker and containers became such a big deal, and finally we'll wrap up with the history of Kubernetes.

Before we dig deeper into container orchestration and why there was a race for its market share, let's go back to 1979, when we started using the "change root" jail, better known as the chroot jail, considered one of the first containerization technologies.

In a nutshell, a chroot jail lets you isolate a process and its children from the rest of the operating system by changing their apparent root directory. However, it can easily be escaped and was never intended as a security mechanism. That all changed with the introduction of FreeBSD Jail.
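To make the idea concrete, here is a minimal sketch of a chroot jail from the shell. The /tmp/jail path and the busybox binary are assumptions for illustration, and the chroot call itself requires root:

```shell
# Build a minimal root filesystem for the jail (illustrative path).
mkdir -p /tmp/jail/bin

# Copy in a statically linked shell, e.g. busybox (assumed to be present):
#   cp /bin/busybox /tmp/jail/bin/sh
# Confine a shell to the jail (requires root). Inside, /tmp/jail appears as /:
#   chroot /tmp/jail /bin/sh
```

A process inside a plain chroot can still break out in several well-known ways, which is exactly why it was never meant as a security boundary.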

FreeBSD Jail not only isolates processes but also binds them to a particular filesystem. But why? Why would someone need this isolation?

There are many reasons.

Take, for example, an FTP administrator who needs isolated environments for different users within the system. Or simply take a user who wants their own home folder on a shared computer.

This marked the start of containerization.

It was 1952, and Malcolm McLean was developing plans to carry his company's trucks on ships along the US Atlantic coast, from North Carolina to New York. However, he soon noticed a lot of vacant space on the ships due to the shape and size of the trucks: their irregular chassis meant fewer of them could fit on board.

Instead of loading whole trucks, Malcolm came up with the best thing since sliced bread: using just the cargo part of the truck, which was rectangular. Far more of these containers could fit on a ship.

In the 1950s, most cargo was loaded and unloaded by hand by longshoremen, which was expensive. With this new system, cargo could be moved efficiently. Fortunately for us, it did not stop there.

He championed standardization, and his efforts were rewarded with patents. Best of all, he made his patents available royalty-free to the International Organization for Standardization.

This started the worldwide shipping containerization boom that we still benefit from today.

How does Malcolm's achievement in the shipping industry relate to containerization in the computing industry?

Both went through a rough patch and faced a million and one problems, above all a painful lack of efficiency.

In both industries, containerization solved the same underlying problem. After all, as we say, "a trouble shared is a trouble halved."

Operating-system-level virtualization came to Linux in 2001 with Linux-VServer, which combined a chroot-like mechanism with "security contexts" to provide a virtualization solution. It is more advanced than a simple chroot and lets you run multiple Linux distributions on a single machine.

In 2004, Sun Microsystems, later acquired by Oracle, released Solaris Containers, a similar operating-system-level virtualization technology for x86 and SPARC processors.

A Solaris Container is actually a combination of system resource controls and the boundary separation provided by what Solaris calls "zones."

In a similar vein, OpenVZ, another operating-system-level virtualization technology, solved the same problems on Linux.

Many companies started selling virtual private servers built on top of these two technologies. The containerized nature of these servers was a testament to the success of containerization.

The only problem at that time was that Linux-VServer and OpenVZ required patching the kernel to add the control mechanisms used to create isolated containers.

In 2007, Google released cgroups, a mechanism that limits and isolates the resource usage (CPU, memory, disk I/O, and network) of a collection of processes. Unlike its predecessors, cgroups was merged into the mainline Linux kernel.
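You can see cgroups at work on any Linux box without extra tooling. The limit-setting lines below are a sketch that assumes root and the cgroup v2 unified hierarchy; the "demo" group name is made up for illustration:

```shell
# Every Linux process belongs to cgroups; membership is visible in /proc.
cat /proc/self/cgroup

# With root and cgroup v2 mounted, limiting resources is just file writes:
#   mkdir /sys/fs/cgroup/demo
#   echo 100M > /sys/fs/cgroup/demo/memory.max    # cap memory at 100 MiB
#   echo $$   > /sys/fs/cgroup/demo/cgroup.procs  # move this shell into it
```

That filesystem interface is exactly what container runtimes drive under the hood when you ask for a memory or CPU limit.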

In 2008, LXC (Linux Containers) was released. LXC leveraged kernel functionality, namely cgroups and namespaces, to create containers. Later on, Docker was built on top of LXC.

Then, in 2013, Cloud Foundry created Warden, an API to manage isolated, ephemeral, and resource-controlled environments. In its first versions, Warden used LXC.

In 2013, the first version of Docker was introduced. Like OpenVZ and Solaris Containers, it performs operating-system-level virtualization.

In 2014, Google introduced "Let Me Contain That For You" (lmctfy), the open-source version of Google's container stack, which provides Linux application containers.

Later on, Google engineers collaborated with Docker on libcontainer, porting the core concepts and abstractions of "Let Me Contain That For You" to it. As a result, the project is no longer actively developed.

"Let Me Contain That For You" runs applications in isolated environments on the same kernel, without patching it, since it uses cgroups, namespaces, and other Linux kernel features.

Google is a leader in the container industry. Everything at Google runs on containers.

According to The Register, more than 2 billion containers run on Google infrastructure every week.

Two billion! Yes. If that number is too big to grasp, imagine that for every second of every minute of every hour of every day, Google fires up about 3,300 containers... and that was in 2014.

If all these existing container technologies solved the problems of isolation, portability, and efficiency, what sets Docker apart? How did Docker become popular?

"Our goal is to simplify the complexity of technology. Docker was able to take this ingenious concept of a software container and make it simple and make it accessible."

Steve Singh, CEO of Docker

Before Docker, you needed to know a lot about Linux internals to run containers, so the barrier to entry was high.

Docker revolutionized the way we worked with containers. In two words, it made things "simple" and "accessible" to developers.

Container operations went through high-level APIs that abstracted away complicated kernel concepts.

Initially, Docker didn't reinvent the wheel: it used LXC to control low-level Linux kernel primitives. These primitives, such as namespaces, cgroups, Netlink, and Netfilter, were later controlled by libcontainer.

The Docker Engine was then used to translate the high-level API commands to LXC and, later, to libcontainer. In 2015, the company announced runC: a lightweight, portable container runtime.

runC is basically a small command-line tool that leverages libcontainer directly, without going through the Docker Engine. The project is, in fact, a step towards container standardization, and it was donated to the Open Container Initiative (OCI).
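To give a feel for what runC consumes, here is a sketch. An OCI bundle is just a directory holding a root filesystem and a config.json; the runc invocations assume the runc binary is installed, so they are shown as comments:

```shell
# An OCI bundle: a directory with a root filesystem and a config.json.
mkdir -p bundle/rootfs

# Generate a default config.json (assumes runc is installed):
#   (cd bundle && runc spec)
# After populating rootfs/ (e.g. from an exported image), start a container
# named "demo" (requires root or a rootless setup):
#   (cd bundle && runc run demo)
```

Everything a container needs, in other words, fits in one self-describing directory, which is what makes the format easy to standardize.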

In reality, libcontainer was not abandoned; it was moved into the runC repository.

After donating the runC project to the OCI, Docker started using containerd in 2016 as a container runtime that interfaces with runC, the underlying low-level runtime.

containerd has full support for starting OCI bundles and managing their lifecycle. It uses runC to run containers but also implements higher-level features such as image management and high-level APIs.

In the same year, Docker Inc. broke containerd out of the Docker Engine, and it was later donated to the Cloud Native Computing Foundation as an independent tool.

If you are interested in the technical details, you can google "The Missing Introduction to containerization," and you'll find a story with the same title published on our Medium publication.

In conclusion, Docker made the user experience simple and the learning curve gentle, and it standardized the way we work with containers. It did this by building decoupled abstractions on top of the kernel's primitives.

The container ecosystem started out as monolithic tools built on top of, and inspired by, existing technologies. This made industry-wide adoption hard in the beginning. However, as time passed and through active community development, container technologies became more widespread and ubiquitous.

Containers opened the gates to other paradigms like distributed systems and multicloud. Many schools of thought defined both terms in different ways, but the definitions have converged over time.

In the Kaptain topic of the FAUN newsletter, we share the most interesting stories from the containerization and orchestration landscape. If you want the best news, tutorials, and stories each week, don't forget to join us by visiting FAUN.dev.

"So let me get this straight. You want to build an external version of the Borg task scheduler. One of our most important competitive advantages. The one we don't even talk about externally. And, on top of that, you want to open source it?"

It was the summer of 2013 at Google. Urs Hölzle was sitting in a room with Joe Beda, Brendan Burns, and Craig McLuckie, who were pitching the idea of an open-source container orchestration system to him.

Before all of this happened, Google was already running billions of containers using Borg, an in-house orchestration framework. Borg made efficient use of Google's resources by co-locating containers on shared machines. Later on, Omega was created as a successor to Borg.

The idea was initially rejected. However, one fateful day, on a Google shuttle ride to campus, Craig happened to run into Eric Brewer, the VP of Google Cloud. Craig pitched the idea as a way to optimize infrastructure efficiency within Google Cloud, and the project was green-lit.

In keeping with the Star Trek Borg theme, the project was initially called "Seven of Nine"; this is why the Kubernetes logo has seven sides. Brian Grant and Tim Hockin then joined the project, and it was officially announced by Google in mid-2014.

With Kubernetes getting widespread adoption, progress in the container world seemed inevitable.

On the DevOps Fauncast, we tell you the story behind the stories of technologies we love, like Kubernetes. In upcoming episodes, we are going to tell you more about this orchestration framework: stories from the early days of orchestration and Kubernetes, along with discussions of its technical aspects.

Don't forget to follow @joinFAUN on Twitter. You can also join our online community by visiting faun.dev/join.

If you want to reach us, you can also use our email community@faun.dev.

If you love the DevOps Fauncast, we'd love for you to subscribe, rate, and give a review on iTunes.

Until next time!


Aymen El Amri

Founder, FAUN

@eon01
Founder of FAUN, author, maker, trainer, and polymath software engineer (DevOps, CloudNative, CloudComputing, Python, NLP)