Using Fleet for GitOps Workflows: A Practical Guide
Deploying Applications with Fleet
Prerequisites
Now that we have a good understanding of the Fleet architecture and workflow, let's put it into practice by deploying an application. At this stage, we have:
- Our RKE2 cluster up and running. It consists of a single control plane node and a single worker node.
- A Gitea repository with the application code, accessible at https://gitea.$WORKSPACE_PUBLIC_IP.sslip.io/gitea_admin/todo-app-repository.
- A Harbor registry to store the application images.
- Both Harbor and Gitea are secured with self-signed certificates. We have to deal with this when configuring the GitOps workflow.
- Fleet will use Gitea as the source of truth for the application manifests and will create a Bundle to deploy the application to the RKE2 cluster (a sketch of the corresponding GitRepo resource follows this list).
- The RKE2 cluster will pull the image from the Harbor registry and apply the bundle to deploy the application.
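To make the last two points concrete, here is a rough sketch of the Fleet GitRepo resource that ties Gitea and the RKE2 cluster together. This is for orientation only: the namespace, branch, paths, and secret name below are assumptions, and the $WORKSPACE_PUBLIC_IP placeholder has to be substituted with the real value before applying anything.

```yaml
# Hypothetical sketch of a Fleet GitRepo pointing at our Gitea repository.
# Namespace, branch, paths, and secret name are assumptions for illustration.
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: todo-app
  namespace: fleet-default              # namespace Fleet watches for downstream clusters
spec:
  repo: https://gitea.$WORKSPACE_PUBLIC_IP.sslip.io/gitea_admin/todo-app-repository
  branch: main                          # assumed branch name
  paths:
    - kube                              # folder holding the Kubernetes manifests
  clientSecretName: gitea-creds         # hypothetical Secret with the Gitea credentials
  insecureSkipTLSVerify: true           # or provide the Gitea CA via caBundle instead
```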
Since our Harbor registry is secured with a self-signed certificate, we need to make sure the RKE2 cluster trusts it in order to pull the image. Before deploying the application, we have to perform the following steps on each node of the RKE2 cluster:
- Download the Harbor certificate and add it to the node's trusted certificates.
- Configure the `registries.yaml` file (located in `/etc/rancher/rke2/registries.yaml`) so that the container runtime (containerd) can pull images from the Harbor registry. This file contains the path to the certificate file for the registry, which we can download from https://harbor.$WORKSPACE_PUBLIC_IP.sslip.io/api/v2.0/systeminfo/getcert. This is a requirement for RKE2: without it, the nodes will not be able to pull images from the registry.
We can run the following script from the workspace server (or any other machine where WORKLOAD_CONTROLPLANE_01_PUBLIC_IP and the other IPs are exported) to configure the nodes in the RKE2 cluster:
```bash
#!/bin/bash

# Variables
HARBOR_URL="harbor.$WORKSPACE_PUBLIC_IP.sslip.io"
CA_CERT_PATH="/etc/ssl/certs/$HARBOR_URL/ca.crt"
REGISTRIES_YAML="/etc/rancher/rke2/registries.yaml"
HARBOR_USER="admin"
HARBOR_PASSWORD="p@ssword"

# Define control plane and worker nodes
CONTROL_PLANE_NODES=(
  "$WORKLOAD_CONTROLPLANE_01_PUBLIC_IP"
  # Add more control plane IPs here if needed
)
WORKER_NODES=(
  "$WORKLOAD_NODE_01_PUBLIC_IP"
  # Add more worker IPs here if needed
)

# Configure a single node: download the Harbor CA certificate,
# write registries.yaml, and restart the RKE2 service.
configure_node() {
  local node_ip=$1
  local service=$2
  ssh root@$node_ip << EOF
# Download the Harbor CA certificate
mkdir -p "/etc/ssl/certs/$HARBOR_URL"
curl -kL "https://$HARBOR_URL/api/v2.0/systeminfo/getcert" -o "$CA_CERT_PATH"
update-ca-certificates
# Point containerd at the Harbor registry credentials and CA certificate
cat << EOT > "$REGISTRIES_YAML"
configs:
  $HARBOR_URL:
    auth:
      username: $HARBOR_USER
      password: $HARBOR_PASSWORD
    tls:
      ca_file: $CA_CERT_PATH
EOT
# Restart the RKE2 service so the new registry configuration is picked up
systemctl restart $service
EOF
}

# Configure control plane nodes
echo "Configuring control plane nodes..."
for node in "${CONTROL_PLANE_NODES[@]}"; do
  echo "Configuring control plane node: $node"
  configure_node "$node" "rke2-server"
done

# Configure worker nodes
echo "Configuring worker nodes..."
for node in "${WORKER_NODES[@]}"; do
  echo "Configuring worker node: $node"
  configure_node "$node" "rke2-agent"
done

echo "Configuration completed."
```
Note that we are not going to run our application Pod on the control plane node: we will use a nodeSelector to schedule it on the worker node ($WORKLOAD_NODE_01_PUBLIC_IP). Therefore, the `registries.yaml` file is, in reality, only needed on the worker node. However, it's a worthwhile exercise to configure it on all nodes for consistency.
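For reference, here is a minimal sketch of what such a nodeSelector looks like inside the Deployment's pod template. The kubernetes.io/hostname label is a standard node label, but the node name shown is an assumption; check the real one with kubectl get nodes:

```yaml
# Fragment of a Deployment's pod template (not a complete manifest):
# the nodeSelector pins the Pod to the worker node.
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: workload-node-01   # assumed node name
```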
Double-check that the script has been executed successfully on all nodes:

```bash
ssh root@$WORKLOAD_CONTROLPLANE_01_PUBLIC_IP "cat /etc/rancher/rke2/registries.yaml"
ssh root@$WORKLOAD_NODE_01_PUBLIC_IP "cat /etc/rancher/rke2/registries.yaml"
```
You should see the content of the registries.yaml file with a valid configuration like the following, where <WORKSPACE_PUBLIC_IP> has been replaced with the actual public IP address of the workspace server by the script:

```yaml
configs:
  harbor.<WORKSPACE_PUBLIC_IP>.sslip.io:
    auth:
      username: admin
      password: p@ssword
    tls:
      ca_file: /etc/ssl/certs/harbor.<WORKSPACE_PUBLIC_IP>.sslip.io/ca.crt
```
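As an optional extra check (not part of the original steps), we can confirm from the worker node that the downloaded CA certificate is accepted when talking to Harbor, using Harbor's unauthenticated health endpoint:

```bash
# Optional sanity check: query Harbor's health API from the worker node using
# the CA certificate we just downloaded. A JSON health report means TLS is fine.
ssh root@$WORKLOAD_NODE_01_PUBLIC_IP \
  "curl --cacert /etc/ssl/certs/harbor.$WORKSPACE_PUBLIC_IP.sslip.io/ca.crt \
        https://harbor.$WORKSPACE_PUBLIC_IP.sslip.io/api/v2.0/health"
```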
Since our application uses SQLite as its database, we will create a PersistentVolumeClaim to persist the data on the RKE2 cluster. However, we don't have any StorageClass available yet:

```bash
# SSH into the RKE2 CP
ssh root@$WORKLOAD_CONTROLPLANE_01_PUBLIC_IP

# Get storage classes
kubectl get storageclass
# ==> output: No resources found
```
ℹ️ The `StorageClass` is a resource in the cluster that defines the type of volumes that can be requested. A `PersistentVolumeClaim` (PVC) is a request for storage by a user. It is similar to a Pod: Pods consume node resources, and PVCs consume `PersistentVolumes` (PVs). A PVC needs a `StorageClass` to define the type of volume it needs.
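To make this relationship concrete, here is a minimal, hypothetical PVC that requests storage from a StorageClass named local-path, which is the class we are about to create below; the claim name and size are illustrative only:

```yaml
# Hypothetical example: a PVC requesting 1Gi from the "local-path" StorageClass.
# The provisioner behind that class creates the matching PersistentVolume on demand.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: todo-app-data          # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
```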
Hence the need to create a StorageClass beforehand. We have multiple choices, but for this guide, we are going to use a local storage class.
ℹ️ Local storage is storage that is directly attached to the node where the pod is running. It is not suitable for most production workloads, but this is an easy way to get started.
To store data locally, we could use the official Local Persistent Volume Static Provisioner, but since we're using Rancher projects, we will use the Rancher Local Path Provisioner.
Even though it has some experimental features at the time of writing, the Local Path Provisioner is a better choice when you want dynamic provisioning while keeping the simplicity of the Local Persistent Volume Static Provisioner.
Rancher Local Path Provisioner automatically creates PersistentVolumes when a PersistentVolumeClaim is made. This is useful when you don't want to pre-create volumes, as opposed to Kubernetes' Local Persistent Volume Static Provisioner, which doesn't support dynamic provisioning. Also, the Local Path Provisioner is easy to configure and integrates seamlessly with Rancher.
Here is a quick comparative table:
| Feature | Kubernetes Local Persistent Volume Static Provisioner | Rancher Local Path Provisioner |
|---|---|---|
| Provisioning Model | Admin creates PVs manually, then user creates PVC, then Kubernetes binds them | User creates PVC, then provisioner creates PV automatically, then Kubernetes binds them |
| Dynamic Provisioning | No - Admin must manually create directories on nodes and write PV YAML for each volume before users can claim them | Yes - PVs and directories created automatically when user creates PVC |
| Operational Overhead | Higher - requires manual intervention for each new volume | Low - self-service model for developers |
| Flexibility | Limited - fixed pool of pre-created volumes | High - creates volumes on-demand |
| Volume Management | Manual - admin must SSH to nodes, create directories, write and apply PV YAML for each volume | Automatic - provisioner handles everything when PVC is created |
| Production Ready | Yes - stable and mature | Experimental features, but stable core |
| Storage Location | Pre-defined discovery directories | /opt/local-path-provisioner (configurable) |
| Node Affinity | Yes - enforced automatically | Yes - enforced automatically |
| Multi-Node Support | Yes - different PVs per node | Yes - different PVs per node |
| Volume Expansion | No | No |
| Snapshots | No | No |
| Data Persistence | Yes - data survives pod deletion | Yes - data survives pod deletion |
| Default StorageClass | Must be set manually | Can be set as default |
| Rancher Integration | Standard Kubernetes resource | Native Rancher integration |
| GitHub Repository | kubernetes-sigs/sig-storage-local-static-provisioner | rancher/local-path-provisioner |
| Best For | Production workloads requiring strict control over storage | Dev/test/simple workloads needing dynamic local storage |
Important: Both solutions store data locally on nodes. If a node fails, the data on that node is lost. For production workloads requiring high availability, consider distributed storage solutions like Longhorn, Ceph, or cloud-provider storage classes.
To install the Rancher Local Path Provisioner, we need to run the following commands from the control plane node of the RKE2 cluster (the cluster where we are going to deploy the application):

```bash
# SSH into the RKE2 CP
ssh root@$WORKLOAD_CONTROLPLANE_01_PUBLIC_IP

# Clone the Rancher Local Path Provisioner repository
cd /tmp
git clone https://github.com/rancher/local-path-provisioner.git
cd local-path-provisioner

# Check out version 0.0.30
git checkout v0.0.30

# Deploy the Local Path Provisioner
kubectl apply -f deploy/local-path-storage.yaml

# Check the storage classes
kubectl get sc
```
The following resources should be created now:
- namespace/local-path-storage: The namespace
- serviceaccount/local-path-provisioner-service-account: The service account
- role.rbac.authorization.k8s.io/local-path-provisioner-role: The role
- clusterrole.rbac.authorization.k8s.io/local-path-provisioner-role: The cluster role
- rolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind: The role binding
- clusterrolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind: The cluster role binding
- deployment.apps/local-path-provisioner: The deployment of the provisioner
- storageclass.storage.k8s.io/local-path: The storage class
- configmap/local-path-config: The configuration of the provisioner
Note that we could have used the Rancher UI to add these resources, but we chose the command line for this resource creation as it's faster.
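As noted in the comparison table, the local-path class can be set as the cluster default. This is optional and not something the guide relies on, but if you want PVCs that omit storageClassName to be provisioned automatically, a quick way to do it is:

```bash
# Optional: mark the local-path StorageClass as the cluster default
kubectl patch storageclass local-path \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

# Verify: local-path should now be listed with "(default)" next to its name
kubectl get sc
```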
Now, let's move to the next step: From the workspace server, we are going to create the Kubernetes manifests for the application and add them to the Gitea repository. The manifests will include:
- A registry Secret to pull the application image from the Harbor registry. Even though RKE2 already has the registry configuration, we will define this Secret in the manifests so that the deployment can be used in any Kubernetes cluster.
- A Deployment for the application
- A Service to expose the application (ClusterIP)
- An Ingress to access the application
- We will also store these definitions in the `kube` folder in the Gitea repository.
Start by creating the `kube` folder:

```bash
# SSH into the workspace server
ssh root@$WORKSPACE_PUBLIC_IP

# Create the kube folder
mkdir -p $HOME/todo/app/kube
```
To create the registry secret, we need to run the following commands:
```bash
# Create a secret for the registry:
USERNAME=admin
PASSWORD=p@ssword

# Build the Docker config JSON used by the registry secret
DOCKER_CONFIG_JSON=$(echo -n "{\"auths\":{\"harbor.$WORKSPACE_PUBLIC_IP.sslip.io\":{\"username\":\"$USERNAME\",\"password\":\"$PASSWORD\",\"auth\":\"$(echo -n $USERNAME:$PASSWORD | base64)\"}}}")
```
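The remainder of this step is not shown here, but just to sketch where this value is heading: a kubernetes.io/dockerconfigjson Secret built from DOCKER_CONFIG_JSON could be written into the kube folder roughly like this. The manifest file name and the secret name (harbor-registry-secret) are assumptions, not the guide's actual values:

```bash
# Hypothetical sketch: write a registry Secret manifest into the kube folder.
# The file name and the secret name are assumptions.
cat << EOF > $HOME/todo/app/kube/registry-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-registry-secret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: $(echo -n "$DOCKER_CONFIG_JSON" | base64 -w0)
EOF
```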