End-to-End Kubernetes with Rancher, RKE2, K3s, Fleet, Longhorn, and NeuVector

The full journey from nothing to production

Using Fleet for GitOps Workflows: A Practical Guide

Deploying Applications with Fleet

Prerequisites

Now that we have a good understanding of the Fleet architecture and workflow, let's put it into practice by deploying an application. At this stage, we have:

  • Our RKE2 cluster up and running. It consists of a single control plane node and a single worker node.
  • A Gitea repository with the application code. It's accessible at https://gitea.$WORKSPACE_PUBLIC_IP.sslip.io/gitea_admin/todo-app-repository.
  • A Harbor registry to store the application images.
  • Both Harbor and Gitea are secured with self-signed certificates. We have to deal with this when configuring the GitOps workflow.
  • Fleet will use Gitea as the source of truth for the application manifests and will create a Bundle to deploy the application to the RKE2 cluster.
  • The RKE2 cluster will pull the image from the Harbor registry and apply the bundle to deploy the application.
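
To make the GitOps part of this picture concrete, here is a minimal sketch of the Fleet GitRepo resource that ties the Gitea repository to the cluster. The resource name, branch, and path are assumptions at this stage; we will create the real GitRepo later in this guide:

apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: todo-app                  # assumed name
  namespace: fleet-default        # namespace Rancher uses for downstream clusters
spec:
  repo: https://gitea.$WORKSPACE_PUBLIC_IP.sslip.io/gitea_admin/todo-app-repository
  branch: main                    # assumed branch
  paths:
    - kube                        # folder that will hold the Kubernetes manifests
  # Because Gitea uses a self-signed certificate, the GitRepo will also need
  # the CA (for example via the caBundle field) when we configure it for real.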

Since our Harbor registry is secured with a self-signed certificate, we need to ensure that the RKE2 cluster can access it to pull the image. We have some steps to follow before deploying the application, on each node in the RKE2 cluster:

  • Download the Harbor certificate and add it to accepted certificates.
  • Configure the registries.yaml file so that the container runtime (containerd) can pull images from the Harbor registry. This file (located at /etc/rancher/rke2/registries.yaml) references the certificate file for the registry (which we can download from https://harbor.$WORKSPACE_PUBLIC_IP.sslip.io/api/v2.0/systeminfo/getcert). This is a requirement for RKE2; without it, image pulls from the registry will fail.

We can run the following script from the workspace server (or any other machine where WORKLOAD_CONTROLPLANE_01_PUBLIC_IP and the other IPs are exported) to configure the nodes in the RKE2 cluster:

#!/bin/bash

# Variables
HARBOR_URL="harbor.$WORKSPACE_PUBLIC_IP.sslip.io"
CA_CERT_PATH="/etc/ssl/certs/$HARBOR_URL/ca.crt"
REGISTRIES_YAML="/etc/rancher/rke2/registries.yaml"
HARBOR_USER="admin"
HARBOR_PASSWORD="p@ssword"

# Define control plane and worker nodes
CONTROL_PLANE_NODES=(
  "$WORKLOAD_CONTROLPLANE_01_PUBLIC_IP"
  # Add more control plane IPs here if needed
)

WORKER_NODES=(
  "$WORKLOAD_NODE_01_PUBLIC_IP"
  # Add more worker IPs here if needed
)

# Function to configure a node
configure_node() {
  local node_ip=$1
  local service=$2

  # Variables are expanded locally before the commands are sent to the node
  ssh root@$node_ip << EOF
    # Download the Harbor CA certificate and refresh the system trust store
    mkdir -p "/etc/ssl/certs/$HARBOR_URL"
    curl -kL "https://$HARBOR_URL/api/v2.0/systeminfo/getcert" -o "$CA_CERT_PATH"
    update-ca-certificates
    # Point containerd at the Harbor credentials and CA via registries.yaml
    cat << EOT > "$REGISTRIES_YAML"
configs:
  $HARBOR_URL:
    auth:
      username: $HARBOR_USER
      password: $HARBOR_PASSWORD
    tls:
      ca_file: $CA_CERT_PATH
EOT
    systemctl restart $service
EOF
}

# Configure control plane nodes
echo "Configuring control plane nodes..."
for node in "${CONTROL_PLANE_NODES[@]}"; do
  echo "Configuring control plane node: $node"
  configure_node "$node" "rke2-server"
done

# Configure worker nodes
echo "Configuring worker nodes..."
for node in "${WORKER_NODES[@]}"; do
  echo "Configuring worker node: $node"
  configure_node "$node" "rke2-agent"
done

echo "Configuration completed."

Note that we are not going to run our application Pod on the control plane node (we will use a nodeSelector to schedule it on the worker node, $WORKLOAD_NODE_01_PUBLIC_IP), so the registries.yaml file is, in reality, only needed on the worker node. However, it's a worthwhile exercise to configure it on all nodes for consistency.
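
For reference, that scheduling constraint is just a nodeSelector in the Deployment's Pod template. A minimal sketch, assuming the worker node carries a label such as role=worker (the label is an assumption; any unique node label, e.g. kubernetes.io/hostname, works):

# Fragment of the Deployment's Pod template spec (label key/value are assumptions)
spec:
  template:
    spec:
      nodeSelector:
        role: worker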

Double-check that the script has been executed successfully on all nodes:

ssh root@$WORKLOAD_CONTROLPLANE_01_PUBLIC_IP "cat /etc/rancher/rke2/registries.yaml"
ssh root@$WORKLOAD_NODE_01_PUBLIC_IP "cat /etc/rancher/rke2/registries.yaml"

You should see the content of the registries.yaml file with a valid configuration like this:

configs:
  harbor.<WORKSPACE_PUBLIC_IP>.sslip.io:
    auth:
      username: admin
      password: p@ssword
    tls:
      ca_file: /etc/ssl/certs/harbor.<WORKSPACE_PUBLIC_IP>.sslip.io/ca.crt

<WORKSPACE_PUBLIC_IP> should be replaced with the actual public IP address of the workspace server by the script.
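
Optionally, we can also confirm that containerd on the worker node can authenticate and pull from Harbor, using the crictl binary bundled with RKE2. The image path below is hypothetical; substitute an image that actually exists in your Harbor project:

# On the worker node: pull a test image through containerd with RKE2's bundled crictl
# (replace the variables with their literal values if they are not exported on the node)
ssh root@$WORKLOAD_NODE_01_PUBLIC_IP
export CRI_CONFIG_FILE=/var/lib/rancher/rke2/agent/etc/crictl.yaml
/var/lib/rancher/rke2/bin/crictl pull harbor.$WORKSPACE_PUBLIC_IP.sslip.io/library/todo-app:latest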


Since our application uses SQLite as its database, we will create a PersistentVolumeClaim to store the data on the RKE2 cluster. However, the cluster doesn't have any StorageClass available yet:

# SSH into the RKE2 CP
ssh root@$WORKLOAD_CONTROLPLANE_01_PUBLIC_IP

# Get storage classes
kubectl get storageclass

# ==> output: No resources found

ℹ️ A StorageClass is a cluster resource that defines the type of volumes that can be requested. A PersistentVolumeClaim (PVC) is a request for storage by a user: just as Pods consume node resources, PVCs consume PersistentVolumes (PVs). A PVC references a StorageClass to define the type of volume it needs.

Hence the need to create a StorageClass beforehand. We have multiple choices, but for this guide, we are going to use a local storage class.

ℹ️ Local storage is storage that is directly attached to the node where the pod is running. It is not suitable for most production workloads, but this is an easy way to get started.

To store data locally, we could use the official Local Persistent Volume Static Provisioner, but since we're already using Rancher projects, we will use the Rancher Local Path Provisioner.

Even though it still has some experimental features at the time of writing, the Local Path Provisioner is the better choice when you want dynamic provisioning while keeping the simplicity of the Local Persistent Volume Static Provisioner.

Rancher Local Path Provisioner automatically creates PersistentVolumes when a PersistentVolumeClaim is made. This is useful when you don't want to pre-create volumes, as opposed to Kubernetes' Local Persistent Volume Static Provisioner, which doesn't support dynamic provisioning. Also, the Local Path Provisioner is easy to configure and integrates seamlessly with Rancher.
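
In practice, that means a claim like the following sketch is all we will have to write once the provisioner is installed; the provisioner then creates a matching node-local PersistentVolume on the node where the consuming Pod is scheduled (the claim name and size are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: todo-app-data            # assumed name
spec:
  accessModes:
    - ReadWriteOnce              # a local volume is tied to a single node
  storageClassName: local-path   # StorageClass created by the Local Path Provisioner
  resources:
    requests:
      storage: 1Gi               # assumed size for the SQLite database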

Here is a quick comparative table:

| Feature | Kubernetes Local Persistent Volume Static Provisioner | Rancher Local Path Provisioner |
| --- | --- | --- |
| Provisioning Model | Admin creates PVs manually, then user creates PVC, then Kubernetes binds them | User creates PVC, then provisioner creates PV automatically, then Kubernetes binds them |
| Dynamic Provisioning | No - Admin must manually create directories on nodes and write PV YAML for each volume before users can claim them | Yes - PVs and directories created automatically when user creates PVC |
| Operational Overhead | Higher - requires manual intervention for each new volume | Low - self-service model for developers |
| Flexibility | Limited - fixed pool of pre-created volumes | High - creates volumes on-demand |
| Volume Management | Manual - admin must SSH to nodes, create directories, write and apply PV YAML for each volume | Automatic - provisioner handles everything when PVC is created |
| Production Ready | Yes - stable and mature | Experimental features, but stable core |
| Storage Location | Pre-defined discovery directories | /opt/local-path-provisioner (configurable) |
| Node Affinity | Yes - enforced automatically | Yes - enforced automatically |
| Multi-Node Support | Yes - different PVs per node | Yes - different PVs per node |
| Volume Expansion | No | No |
| Snapshots | No | No |
| Data Persistence | Yes - data survives pod deletion | Yes - data survives pod deletion |
| Default StorageClass | Must be set manually | Can be set as default |
| Rancher Integration | Standard Kubernetes resource | Native Rancher integration |
| GitHub Repository | kubernetes-sigs/sig-storage-local-static-provisioner | rancher/local-path-provisioner |
| Best For | Production workloads requiring strict control over storage | Dev/test/simple workloads needing dynamic local storage |

Important: Both solutions store data locally on nodes. If a node fails, the data on that node is lost. For production workloads requiring high availability, consider distributed storage solutions like Longhorn, Ceph, or cloud-provider storage classes.

To install Rancher Local Path Provisioner, we need to run the following commands from the control plane node of the RKE2 cluster (the cluster where we are going to deploy the application):

# SSH into the RKE2 CP
ssh root@$WORKLOAD_CONTROLPLANE_01_PUBLIC_IP

# Add the Rancher Local Path Provisioner
cd /tmp
git clone https://github.com/rancher/local-path-provisioner.git
cd local-path-provisioner

# Checkout the version 0.0.30
git checkout v0.0.30

# Deploy the Local Path Provisioner
kubectl apply -f deploy/local-path-storage.yaml

# Check the storage classes
kubectl get sc

The following resources should be created now:

  • namespace/local-path-storage: The namespace
  • serviceaccount/local-path-provisioner-service-account: The service account
  • role.rbac.authorization.k8s.io/local-path-provisioner-role: The role
  • clusterrole.rbac.authorization.k8s.io/local-path-provisioner-role: The cluster role
  • rolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind: The role binding
  • clusterrolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind: The cluster role binding
  • deployment.apps/local-path-provisioner: The deployment of the provisioner
  • storageclass.storage.k8s.io/local-path: The storage class
  • configmap/local-path-config: The configuration of the provisioner

Note that we could have used the Rancher UI to add these resources, but the command line is faster for this kind of one-off resource creation.
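
Before moving on, it's worth checking that the provisioner Pod is running. Optionally, local-path can also be marked as the default StorageClass (as noted in the table above), so that PVCs that omit storageClassName are still provisioned:

# Check that the provisioner is up
kubectl -n local-path-storage get pods

# Optional: make local-path the default StorageClass
kubectl patch storageclass local-path \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'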

Now, let's move to the next step: from the workspace server, we are going to create the Kubernetes manifests for the application, store them in a kube folder, and add them to the Gitea repository. The manifests will include:

  • A registry Secret to pull the application image from the Harbor registry. Even though RKE2 already has the registry configuration, we define this Secret in the manifests so the deployment can be used on any Kubernetes cluster.
  • A Deployment for the application
  • A Service to expose the application (ClusterIP)
  • An Ingress to access the application
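
As a rough preview of what will go into that folder, the Service and Ingress typically look like the sketch below. The application name, ports, and hostname are assumptions here; the real manifests are written in the next steps:

# Sketch only: names, ports, and host are assumptions
apiVersion: v1
kind: Service
metadata:
  name: todo-app
spec:
  type: ClusterIP
  selector:
    app: todo-app
  ports:
    - port: 80
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: todo-app
spec:
  ingressClassName: nginx                        # RKE2 ships ingress-nginx by default
  rules:
    - host: todo.$WORKSPACE_PUBLIC_IP.sslip.io   # assumed hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: todo-app
                port:
                  number: 80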

Start by creating the kube folder:

# SSH into the workspace server
ssh root@$WORKSPACE_PUBLIC_IP

# Create a folder:
mkdir -p $HOME/todo/app/kube

To create the registry Secret, we need to run the following commands:

# Harbor credentials for the registry Secret
USERNAME=admin
PASSWORD=p@ssword

# Build the Docker config JSON and base64-encode it for use in a Secret manifest
DOCKER_CONFIG_JSON=$(echo -n "{\"auths\":{\"harbor.$WORKSPACE_PUBLIC_IP.sslip.io\":{\"username\":\"$USERNAME\",\"password\":\"$PASSWORD\",\"auth\":\"$(echo -n $USERNAME:$PASSWORD | base64)\"}}}" | base64 -w0)
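
The double base64 encoding is intentional: the inner one produces the auth field of the Docker config, and the outer one lets the whole value be dropped into the data section of a Secret manifest. Here is a sketch of how the variable would typically be used; the file name and Secret name are assumptions:

# Write the registry Secret manifest into the kube folder (file and Secret names are assumptions)
cat << EOF > $HOME/todo/app/kube/registry-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-registry-secret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: $DOCKER_CONFIG_JSON
EOF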
