The Missing Introduction to Containerization
We Are Made by History
Docker is one of the most widely adopted container platforms today. While its first release dates back to 2013, the underlying ideas of process isolation and lightweight containment predate Docker by several decades.
To understand where modern containers come from, it's useful to rewind to 1979, when Unix introduced the chroot mechanism. From there, a series of technologies progressively expanded the concepts of isolation, resource control, and operating-system-level virtualization. Tracing this evolution provides valuable context, helping explain not only when these concepts appeared, but why modern container platforms like Docker are designed the way they are today.
It all started with the chroot jail. The chroot system call was introduced during the development of Version 7 Unix in 1979. Chroot, short for change root, is often cited as an early precursor to containerization, although it provides only filesystem isolation. It allows a process and its children to see a restricted view of the filesystem, separate from the rest of the operating system.
The chroot mechanism works by changing the root directory of a process, creating an apparently isolated environment. However, it was never designed as a security boundary. Root processes can escape chroot jails, device files can be misused, and isolation is limited to the filesystem while the kernel is fully shared. While chroot remains useful for specific tasks, it is not suitable for running untrusted workloads. Modern containerization and virtualization technologies provide significantly stronger isolation guarantees.
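To make the mechanism concrete, here is a minimal C sketch of a chroot jail. It assumes a prepared directory (the /srv/jail path is hypothetical) that already contains a shell and its libraries, and it must run as root:

```c
/* Minimal chroot sketch. The /srv/jail path is hypothetical and must
 * already contain a usable shell and its libraries (e.g. a copied
 * busybox). Must run as root. */
#define _DEFAULT_SOURCE   /* for the chroot(2) declaration in glibc */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    const char *new_root = "/srv/jail";   /* hypothetical jail directory */

    if (chroot(new_root) != 0) {          /* change this process's root */
        perror("chroot");
        return EXIT_FAILURE;
    }
    if (chdir("/") != 0) {                /* move into the new root */
        perror("chdir");
        return EXIT_FAILURE;
    }
    /* From here on, "/" refers to /srv/jail for this process and its
     * children. */
    execl("/bin/sh", "sh", NULL);
    perror("execl");                      /* reached only on failure */
    return EXIT_FAILURE;
}
```

The chdir("/") call matters: without it, the process's working directory would remain outside the new root, which is one of the classic ways a chroot jail leaks.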
FreeBSD Jails were introduced in the FreeBSD operating system in 2000 to address the limitations of chroot. In addition to filesystem isolation, FreeBSD Jails isolate process trees, users, and networking, providing a much stronger and more secure containment model than chroot alone.
Linux VServer appeared in 2001, introducing operating-system-level virtualization capabilities to the Linux kernel through the use of security contexts and process isolation. It allows multiple isolated virtual private servers to run on a single Linux kernel while sharing the same hardware resources. Each virtual server behaves much like a standalone system, with its own users, services, and process space.
Initially developed by Jacques Gelinas, Linux VServer proposed a soft partitioning model based on security contexts, enabling efficient isolation without the overhead of full virtual machines. The implementation paper abstract states:
A soft partitioning concept based on Security Contexts which permits the creation of many independent Virtual Private Servers (VPS) that run simultaneously on a single physical server at full speed, efficiently sharing hardware resources.
A VPS provides an almost identical operating environment to a conventional Linux server. All services, such as SSH, mail, web, and databases, can be started on such a VPS, without (or, in special cases, with only minimal) modification, just like on any real server.
Each virtual server has its own user account database and root password, and is isolated from other virtual servers, except that they share the same hardware resources.
Note that Linux VServer is not related to the Linux Virtual Server project, which focuses on network load balancing and failover rather than system isolation.
In February 2004, Sun Microsystems introduced Solaris Containers, also known as Solaris Zones, as a native operating-system-level virtualization feature of Solaris. Solaris Containers provided strong isolation while allowing multiple workloads to share the same kernel and supported both SPARC and x86 architectures.
OpenVZ followed in 2005 as another Linux-based operating-system-level virtualization technology. Like Linux VServer, it enabled multiple isolated environments to run on a single kernel and became popular among hosting providers for offering virtual private servers. Because containers share the host kernel, OpenVZ and similar technologies cannot run guests that require different kernel versions or architectures.
Both Linux VServer and OpenVZ relied on kernel patching to implement isolation and resource control mechanisms. While many of these ideas later influenced upstream Linux development, OpenVZ itself remained based on out-of-tree kernel patches for much of its history.
In 2007, Google introduced control groups, or cgroups. Unlike the out-of-tree OpenVZ patches, Google's implementation was merged into the mainline Linux kernel (with version 2.6.24 in 2008). cgroups provide fine-grained control over CPU, memory, disk I/O, and network usage for groups of processes and form a core building block of modern container runtimes.
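Under cgroup v2, this control is exposed as an ordinary filesystem: creating a group is a mkdir, and limits are plain files. The following sketch, assuming the unified hierarchy is mounted at /sys/fs/cgroup and using a hypothetical group named "demo", caps a shell and all of its descendants at 256 MiB of memory:

```c
/* Sketch: place the current process in a new cgroup (v2) and cap its
 * memory at 256 MiB. Assumes the unified cgroup v2 hierarchy is
 * mounted at /sys/fs/cgroup and that we run as root; the "demo"
 * group name is hypothetical. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

static int write_file(const char *path, const char *value) {
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    fputs(value, f);
    fclose(f);
    return 0;
}

int main(void) {
    char pid[32];

    /* Creating a cgroup is just creating a directory in the
     * cgroup filesystem. */
    if (mkdir("/sys/fs/cgroup/demo", 0755) != 0) {
        perror("mkdir");
        return EXIT_FAILURE;
    }

    /* Set a memory limit for every process in the group (256 MiB). */
    if (write_file("/sys/fs/cgroup/demo/memory.max", "268435456") != 0)
        return EXIT_FAILURE;

    /* Move ourselves into the group; children inherit membership. */
    snprintf(pid, sizeof(pid), "%d", getpid());
    if (write_file("/sys/fs/cgroup/demo/cgroup.procs", pid) != 0)
        return EXIT_FAILURE;

    /* The shell and anything it spawns are now subject to the limit:
     * allocations beyond 256 MiB are reclaimed or OOM-killed. */
    execl("/bin/sh", "sh", NULL);
    perror("execl");
    return EXIT_FAILURE;
}
```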
In 2008, the first version of LXC, Linux Containers, was released. LXC combines Linux namespaces and cgroups to provide lightweight operating-system-level virtualization without requiring kernel patches, making containers more accessible on standard Linux distributions.
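Namespaces are the other half of that combination: each CLONE_NEW* flag gives a child process its own view of one kernel resource. The sketch below is a rough illustration of the primitive, not how LXC is actually implemented; it uses clone(2) to start a shell in fresh PID, mount, UTS, and network namespaces (the "container" hostname is illustrative, and the program must run as root):

```c
/* Sketch of the kernel primitive LXC builds on: clone(2) with
 * namespace flags starts a child in its own PID, mount, UTS, and
 * network namespaces. Must run as root. */
#define _GNU_SOURCE        /* for clone(2) and the CLONE_NEW* flags */
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];  /* stack for the cloned child */

static int child(void *arg) {
    (void)arg;
    /* The new hostname is visible only inside this UTS namespace. */
    sethostname("container", 9);
    /* In the new PID namespace, this shell sees itself as PID 1. */
    execl("/bin/sh", "sh", NULL);
    perror("execl");
    return EXIT_FAILURE;
}

int main(void) {
    int flags = CLONE_NEWPID | CLONE_NEWNS | CLONE_NEWUTS | CLONE_NEWNET;

    /* clone(2) takes the top of the child's stack, since it grows down. */
    pid_t pid = clone(child, child_stack + sizeof(child_stack),
                      flags | SIGCHLD, NULL);
    if (pid < 0) {
        perror("clone");
        return EXIT_FAILURE;
    }
    waitpid(pid, NULL, 0);   /* wait for the "container" to exit */
    return EXIT_SUCCESS;
}
```

Real container runtimes layer cgroup limits, a pivoted root filesystem, and virtual network devices on top of these namespaces; the shell above, for instance, starts with only an inactive loopback interface in its fresh network namespace.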