GitOps with Helm and Argo CD
Argo CD: Managing Different Environments
The Hub and Spoke Model
In general, the Hub and Spoke model is a network topology in which a central component (the Hub) acts as the single point of connection, control, and management for multiple peripheral components (the Spokes).
In the context of Argo CD and Kubernetes, the Hub and Spoke model refers to a setup in which Argo CD serves as the central management point (the Hub) for multiple Kubernetes clusters (the Spokes).
Imagine you have several Kubernetes clusters for different environments, such as development, staging, and production. Instead of installing and managing Argo CD separately on each cluster, you can set up a single instance of Argo CD (the Hub) that connects to and manages all these clusters (the Spokes).
For example, each of your development, staging, and production environments can be backed by its own Kubernetes cluster. You can install Argo CD on one of these clusters or on a separate management cluster (a dedicated Hub), and from that central instance deploy and manage applications across all the other clusters.
As a reminder, to add a cluster to Argo CD, use the following command:
# Set this to the kubeconfig context name of the cluster to add
export cluster=
argocd cluster add $cluster -y
If we have a production cluster called production and a staging cluster called staging, we can add them to Argo CD using the following commands:
export cluster=production
argocd cluster add $cluster -y
export cluster=staging
argocd cluster add $cluster -y
In the Argo CD UI, you can see the registered clusters in the Settings tab. Alternatively, you can list them from the CLI:
argocd cluster list
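Clusters can also be registered declaratively instead of through the CLI: Argo CD picks up any Secret in its own namespace labeled argocd.argoproj.io/secret-type: cluster. The manifest below is only a sketch; the placeholder server URL, token, and CA data would come from a service account you create on the spoke cluster:
---
apiVersion: v1
kind: Secret
metadata:
  name: staging-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: staging
  # <staging-api-server> is a placeholder for the spoke cluster's API server
  server: https://<staging-api-server>:6443
  config: |
    {
      "bearerToken": "<service-account-token>",
      "tlsClientConfig": {
        "caData": "<base64-encoded-ca-certificate>"
      }
    }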
Export the API server URL of the cluster you want to target, depending on the environment to which you want to deploy the application.
export CLUSTER=
# Example if you want to deploy to the cluster where Argo CD is running
# export CLUSTER=https://kubernetes.default.svc
Then, deploy the application to that cluster using an Application manifest like this:
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: flask-app
  namespace: argocd
spec:
  destination:
    namespace: flask-app
    # $CLUSTER is the API server URL of the cluster
    # you want to deploy the application to
    server: $CLUSTER
  project: default
  source:
    repoURL: https://github.com/$GITHUB_USERNAME/argocd-helm-example-bis
    targetRevision: main
    path: flask-app-helm
    helm:
      valueFiles:
        - values.yaml
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
      allowEmpty: true
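The manifest contains $CLUSTER and $GITHUB_USERNAME placeholders, so they need to be substituted before the file is applied. One possible way to do this, assuming the manifest is saved as flask-app.yaml (a hypothetical filename), is to render it with envsubst:
# Render the placeholders and apply the Application to the hub cluster
# (flask-app.yaml is a hypothetical name for the manifest above)
export GITHUB_USERNAME=<your-github-username>
export CLUSTER=https://kubernetes.default.svc
envsubst < flask-app.yaml | kubectl apply -f -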
You can also create a different values file for each environment. For example, you can create a values-production.yaml file and a values-staging.yaml file (a sketch of what they might contain follows the manifest below). Then use:
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: flask-app
  namespace: argocd
spec:
  destination:
    namespace: flask-app
    # $CLUSTER is the API server URL of the cluster
    # you want to deploy the application to
    server: $CLUSTER
  project: default
  source:
    repoURL: https://github.com/$GITHUB_USERNAME/argocd-helm-example-bis
    targetRevision: main
    path: flask-app-helm
    helm:
      valueFiles:
        - values-production.yaml
        # or values-staging.yaml ...etc
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
      allowEmpty: true
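As a rough illustration, the per-environment files might differ only in the values that change between environments. The fields below are hypothetical and depend on what the flask-app-helm chart actually exposes:
# values-staging.yaml (hypothetical fields)
replicaCount: 1
image:
  tag: latest

# values-production.yaml (hypothetical fields)
replicaCount: 3
image:
  tag: v1.0.0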
There's a better way to achieve this using ApplicationSet. We will explore this topic next.
Managing Multiple Clusters Using Argo CD
Let's explore the concept of managing multiple clusters using a single Argo CD instance.
Before proceeding, we need to have two clusters added to Argo CD: one for production and one for staging.
In this guide, we already created a K3s cluster with two nodes. We are going to remove the worker node from the cluster and initialize a new K3s cluster on that node to have two clusters. You are free to launch a new cluster using any method you prefer.
# Source the variables containing the IP addresses
# of our worker node
source ~/learning-helm/variables.sh
# SSH into the worker node
ssh root@$WORKER_PRIVATE_IP
# On the worker node, uninstall K3s
k3s-agent-uninstall.sh
# Change the hostname from worker to master-staging
# to avoid confusion
hostnamectl set-hostname master-staging
# Reload the shell to apply the changes
exec $SHELL -l
# Initialize a new K3s cluster on the worker node
# Start by getting the external IP of the node
EXTERNAL_IP=$(curl -s http://ifconfig.me)
# Remove the old k3s.yaml file if it exists
rm -f /etc/rancher/k3s/k3s.yaml
# Install K3s
curl -sfL https://get.k3s.io | \
INSTALL_K3S_VERSION="v1.33.5+k3s1" sh -s - \
server \
--write-kubeconfig-mode '0644' \
--node-external-ip "$EXTERNAL_IP"
# Verify the installation
kubectl get nodes
# SSH back into the master node
ssh root@$MASTER_PRIVATE_IP
# On the master node, remove the worker
kubectl delete node worker
Let's also copy the kubeconfig file from the master-staging node to the workspace server where the Argo CD CLI is installed.
# SSH into the workspace
ssh root@$WORKSPACE_PRIVATE_IP
# Copy the kubeconfig file from the master-staging node
scp root@$WORKER_PRIVATE_IP:/etc/rancher/k3s/k3s.yaml \
~/.kube/config-staging
# Change the 127.0.0.1 to the real IP address of the master-staging node
sed -i "s/server: https:\/\/127.0.0.1:6443/server: https:\/\/$WORKER_PUBLIC_IP:6443/" \
~/.kube/config-staging
# To avoid confusion, since both kubeconfig contexts are named "default",
# we will rename them to "master" and "master-staging"
KUBECONFIG=~/.kube/config \
kubectl config rename-context default master
KUBECONFIG=~/.kube/config-staging \
kubectl config rename-context default master-staging
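Optionally, now that the contexts have distinct names, the two files could also be merged into a single kubeconfig so you can switch clusters with kubectl config use-context. This is just a convenience and not required for the rest of the guide; config-merged is a hypothetical filename:
# Merge both kubeconfig files into one (optional)
KUBECONFIG=~/.kube/config:~/.kube/config-staging \
  kubectl config view --flatten > ~/.kube/config-merged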
Test both clusters:
# For the production cluster
KUBECONFIG=~/.kube/config kubectl get nodes
# For the staging cluster
KUBECONFIG=~/.kube/config-staging kubectl get nodes
At this stage, we have two single-node K3s clusters: one for production and one for staging.
Let's go back to Argo CD and add the new cluster to it:
# Add the staging cluster to Argo CD
export cluster=master-staging
KUBECONFIG=~/.kube/config-staging argocd cluster add $cluster -y
# Verify both clusters are added
argocd cluster list
# To avoid typing the clusters' API server URLs every time,
# we can export them as environment variables
echo "export PRODUCTION_CLUSTER=https://kubernetes.default.svc"Helm in Practice