Helm in Practice

Designing, Deploying, and Operating Kubernetes Applications at Scale

Setting up the Environment

The Infrastructure

For this guide, we need a workspace server with some essential tools installed. This can be your local machine, but for consistency while following along, we recommend using a dedicated server as your workspace. We will use Ubuntu 25.10 as the operating system for this server.

On this workspace server, we will set up everything required for our development tasks, and from it we will deploy our application to a Kubernetes cluster.

Our cluster will be composed of two nodes: a master and a worker node. We will use the same operating system, Ubuntu 25.10, for both nodes, and the cluster will use K3s as the Kubernetes distribution.

We are going to use DigitalOcean to create the infrastructure, but you can substitute any other cloud provider, including private platforms like OpenStack or VMware. We chose DigitalOcean for its ease of use (you can use my referral link to get $200 in free credit for 60 days on DigitalOcean).

As a summary, here is the infrastructure we will set up:

  • Workspace Server: Ubuntu 25.10, hostname workspace
  • Kubernetes Cluster:
      • Master Node: Ubuntu 25.10 with K3s (server mode), hostname master
      • Worker Node: Ubuntu 25.10 with K3s (agent mode), hostname worker

If you want to create these servers manually, you can skip the next section. Otherwise, we will provide a Terraform configuration to automate the creation.

Install zip, unzip, and jq on your local machine, and install Terraform.

# Install zip, unzip, and jq
apt update && apt install zip unzip jq -y

# Set the Terraform version
TERRAFORM_VERSION="1.10.3"
TERRAFORM_ZIP="terraform_${TERRAFORM_VERSION}_linux_amd64.zip"
TERRAFORM_URL="https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/${TERRAFORM_ZIP}"

# Download and extract the Terraform binary
curl -LO $TERRAFORM_URL
unzip $TERRAFORM_ZIP
mv terraform /usr/local/bin/
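
A quick check that the tools are available on your PATH (version numbers will vary):

# Confirm the installed tool versions
terraform version
jq --version
unzip -v | head -n 1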

Create a directory where we will store some initial files.

# First of all, choose a project name; the files we will use are stored under a directory with this name.
PROJECT_NAME="learning-helm"

# Create the folder structure
mkdir -p $PROJECT_NAME

Generate the SSH keys. We will use the same key for all the servers.

# Create a unique name for the SSH key to avoid conflicts
# with other keys in your ~/.ssh directory
# Make sure you are not overwriting an existing key
SSH_UNIQUE_NAME="$HOME/.ssh/$PROJECT_NAME"

# Generate the key pair (public and private)
# The `<<< y` answers the overwrite prompt, so existing files with this name will be replaced
ssh-keygen -t rsa \
    -b 4096 \
    -C "$PROJECT_NAME" \
    -f $SSH_UNIQUE_NAME -N "" \
    <<< y

# add the key to the ssh-agent
ssh-add $SSH_UNIQUE_NAME
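
To confirm the key pair was created and loaded into the agent:

# The agent should list the new key's fingerprint
ssh-add -l

# Both the private and the public key file should exist
ls -l "$SSH_UNIQUE_NAME" "$SSH_UNIQUE_NAME.pub"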

Export the DigitalOcean token, as well as other variables that we will use later when creating the servers using Terraform.

# Export the DigitalOcean token.
# Get one here: https://cloud.digitalocean.com/account/api/tokens
export DIGITALOCEAN_TOKEN="[CHANGE_ME]"

# Choose the best region for you.
# More options here: https://www.digitalocean.com/docs/platform/availability-matrix/
export DIGITALOCEAN_REGION="fra1"

# I recommend using Ubuntu 25.10 for this project.
export DIGITALOCEAN_IMAGE="ubuntu-25-10-x64"

# SSH key variables
export DIGITALOCEAN_SSH_KEY_NAME="$SSH_UNIQUE_NAME"
export DIGITALOCEAN_SSH_PUBLIC_KEY_PATH="$SSH_UNIQUE_NAME.pub"
export DIGITALOCEAN_SSH_PRIVATE_KEY_PATH="$SSH_UNIQUE_NAME"

# VPC variables.
# You can use the default VPC or create a new one.
# Use doctl to get the VPC UUID (`doctl vpcs list`); see the lookup example after this block
export DIGITALOCEAN_VPC_UUID="[CHANGE_ME]"
export DIGITALOCEAN_PROJECT_NAME="$PROJECT_NAME"

# Workspace cluster variables
export DIGITALOCEAN_WORKSPACE_VM_NAME="workspace"
# Change the size if needed but default is fine for most cases
export DIGITALOCEAN_WORKSPACE_VM_SIZE="s-2vcpu-4gb"

# Workload cluster variables
export DIGITALOCEAN_WORKLOAD_VMS_NAMES='["master", "worker"]'
# Change the size if needed but default is fine for most cases
export DIGITALOCEAN_WORKLOAD_VMS_SIZE="s-2vcpu-4gb"
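
If you have doctl installed and authenticated, the sketch below is one way to look up the UUID of the default VPC in your region; otherwise, copy the UUID from the DigitalOcean control panel.

# List VPCs as JSON and pick the default VPC in the chosen region
doctl vpcs list --output json \
  | jq -r --arg region "$DIGITALOCEAN_REGION" \
      '.[] | select(.region == $region and .default == true) | .id'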

Create a Terraform variable file to store all the variables we will use in our Terraform script.

# Create a Terraform variable file.
cat << EOF > $PROJECT_NAME/variables.tf
variable "region" {
  default = "${DIGITALOCEAN_REGION}"
}
variable "image" {
  default = "${DIGITALOCEAN_IMAGE}"
}
variable "vpc_uuid" {
  default = "${DIGITALOCEAN_VPC_UUID}"
}
variable "workspace_vm_size" {
  default = "${DIGITALOCEAN_WORKSPACE_VM_SIZE}"
}
variable "workspace_vm_name" {
  default = "${DIGITALOCEAN_WORKSPACE_VM_NAME}"
}
variable "workload_vms_size" {
  default = "${DIGITALOCEAN_WORKLOAD_VMS_SIZE}"
}
variable "workload_vms_names" {
  default = ${DIGITALOCEAN_WORKLOAD_VMS_NAMES}
}
variable "project_name" {
  default = "${DIGITALOCEAN_PROJECT_NAME}"
}
variable "ssh_key_name" {
  default = "${DIGITALOCEAN_SSH_KEY_NAME}"
}
variable "ssh_public_key_path" {
  default = "${DIGITALOCEAN_SSH_PUBLIC_KEY_PATH}"
}
variable "ssh_private_key_path" {
  default = "${DIGITALOCEAN_SSH_PRIVATE_KEY_PATH}"
}
EOF
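
Because the heredoc delimiter is unquoted, the shell expands the exported variables when the file is written. A quick check that nothing was left empty or at its placeholder value:

# Review the generated file and flag placeholders or empty defaults
cat $PROJECT_NAME/variables.tf
grep -nE 'CHANGE_ME|default = ""' $PROJECT_NAME/variables.tf && echo "Fix the values above before continuing."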

Let's move on to creating the Terraform script that will launch our infrastructure.

cat << EOF > $PROJECT_NAME/main.tf
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

# Create a DigitalOcean project
resource "digitalocean_project" "learning_helm_project" {
  name        = var.project_name
  description = "Project for learning"
  purpose     = "Development"
  environment = "Development"
}

data "digitalocean_project" "project" {
  depends_on = [digitalocean_project.learning_helm_project]
  name       = var.project_name
}

# Resource: Define the SSH key to be used for all VMs
resource "digitalocean_ssh_key" "default_ssh_key" {
  name       = var.ssh_key_name
  public_key = file(var.ssh_public_key_path)
}

# Resource: Workload VMs
resource "digitalocean_droplet" "workload_vms" {
  for_each = { for name in var.workload_vms_names : name => name }

  image      = var.image
  name       = each.value
  region     = var.region
  size       = var.workload_vms_size
  ssh_keys   = [digitalocean_ssh_key.default_ssh_key.id]
  monitoring = false
  vpc_uuid   = var.vpc_uuid

  connection {
    agent       = false
    type        = "ssh"
    user        = "root"
    private_key = file(var.ssh_private_key_path)
    host        = self.ipv4_address
    timeout     = "5m"
  }
}

# Resource: Workspace VM
resource "digitalocean_droplet" "workspace_vm" {
  image      = var.image
  name       = var.workspace_vm_name
  region     = var.region

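
With variables.tf and main.tf in place, it is worth initializing the working directory and validating the configuration before applying anything. A minimal sanity check, assuming you run it from the directory that contains the learning-helm folder and that DIGITALOCEAN_TOKEN is still exported:

# Download the DigitalOcean provider plugin and validate the configuration
terraform -chdir=$PROJECT_NAME init
terraform -chdir=$PROJECT_NAME validate

# Preview the droplets, SSH key, and project that Terraform would create
terraform -chdir=$PROJECT_NAME plan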