
ArgoCD Helm Charts Installation for Multi-Cluster Kubernetes

Reviewed by Opsio Engineering Team
Johan Carlsson

Country Manager, Sweden

AI, DevOps, Security, and Cloud Solutioning. 12+ years leading enterprise cloud transformation across Scandinavia

Why Multi-Cluster ArgoCD Matters for Enterprise Kubernetes

As Kubernetes adoption matures, single-cluster GitOps pipelines quickly become a bottleneck. Production, staging, and development environments need isolation. Regional deployments must satisfy data residency requirements. Disaster recovery demands parallel cluster topology. When organisations attempt to address these needs without a deliberate multi-cluster strategy, the result is a tangle of independently maintained ArgoCD instances, divergent Helm values files, and no single plane of visibility.

ArgoCD's native multi-cluster model solves this by allowing one or more centralised control-plane instances to manage application delivery across any number of registered target clusters. Helm charts provide the packaging layer that ensures application configuration is version-controlled, parameterised, and reproducible. Together, they form the operational backbone of a mature GitOps platform. This article explains how to install ArgoCD using Helm, register multiple target clusters, structure your chart repositories, and avoid the architectural mistakes that are most common in mid-market and enterprise environments.

ArgoCD Installation Models: Choosing the Right Architecture

ArgoCD supports two primary installation modes. Understanding the distinction before reaching for Helm is essential, because the mode you choose determines your RBAC topology, your upgrade complexity, and your blast radius in a failure scenario.

Multi-Tenant Installation

This is the most common enterprise pattern. A single ArgoCD control plane, installed in a dedicated namespace (typically argocd), manages applications deployed to the same cluster and to externally registered clusters. All ArgoCD components are present: the API server, repository server, application controller, Redis, and the ApplicationSet controller. RBAC policies govern which teams can manage which applications and which target clusters.

Core Installation

The core installation omits the API server, the UI, and the Dex SSO component. It is suitable for scenarios where a cluster operates as a pure GitOps leaf node, reconciling resources without exposing a management interface. This pattern is common in hub-and-spoke architectures where a central hub cluster hosts the full multi-tenant installation and each spoke runs only the core components — or no ArgoCD components at all, relying entirely on the hub's application controller to push workloads via the Kubernetes API.

Hub-and-Spoke vs. Federated Instances

A hub-and-spoke topology uses one ArgoCD instance to manage N target clusters. A federated topology runs one ArgoCD instance per cluster or per region and uses ApplicationSets or an external orchestration layer to keep them consistent. Hub-and-spoke is simpler to operate and monitor; federated deployments are more resilient to control-plane outages but introduce configuration drift risk. For most mid-market organisations, hub-and-spoke with a highly available ArgoCD installation is the right starting point.


Installing ArgoCD with Helm: Step-by-Step

The official Argo CD Helm chart is maintained in the argoproj/argo-helm repository. The following procedure assumes a hub cluster is already running, kubectl and helm (version 3.x) are configured, and the ArgoCD CLI has been installed locally.

1. Add the Argo Helm Repository

Add the repository and update the local index:

  • helm repo add argo https://argoproj.github.io/argo-helm
  • helm repo update

2. Create the Namespace and Install

Create a dedicated namespace and install the chart with a custom values file. Avoid relying on default values in production — parameterise everything that may differ between environments.

  • kubectl create namespace argocd
  • helm install argocd argo/argo-cd --namespace argocd --values values-argocd.yaml

Key values to override in values-argocd.yaml for a production-grade deployment include: enabling high-availability mode for the application controller and Redis, configuring resource requests and limits, enabling metrics endpoints for Prometheus scraping, and making a deliberate TLS choice for the API server. Set the server's insecure flag to true only when TLS is terminated at the Ingress layer in front of ArgoCD; leave it at the default false when TLS passes through to the argocd-server pod.
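
A hedged starting point for values-argocd.yaml covering those overrides is sketched below. The key paths follow the argoproj/argo-helm chart's values schema, which changes between chart versions, so verify each key against the values.yaml of the exact chart version you install:

    # values-argocd.yaml: production-oriented overrides (a sketch, not exhaustive)
    redis-ha:
      enabled: true            # HA Redis instead of the single-replica default
    controller:
      replicas: 2              # application controller high availability
      metrics:
        enabled: true          # expose /metrics for Prometheus scraping
      resources:
        requests:
          cpu: 500m
          memory: 1Gi
        limits:
          memory: 2Gi
    repoServer:
      replicas: 2
      metrics:
        enabled: true
    server:
      replicas: 2
      metrics:
        enabled: true
    configs:
      params:
        # true only because this example assumes TLS is terminated at the
        # Ingress in front of ArgoCD; leave false for TLS passthrough
        server.insecure: true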

3. Register Target Clusters

After ArgoCD is running on the hub, register each spoke cluster using the ArgoCD CLI. This creates a Secret in the argocd namespace containing the target cluster's API server URL, CA certificate, and a service account token with the permissions ArgoCD requires on that cluster.

  • Log in to the ArgoCD API server: argocd login <ARGOCD_SERVER>
  • Switch kubectl context to each target cluster and register it: argocd cluster add <CONTEXT_NAME>
  • Verify registration: argocd cluster list

In automated pipelines — for example, when clusters are provisioned by Terraform — this step is best handled by a post-provisioning script or by generating the cluster Secret declaratively via a Terraform kubernetes_secret resource, avoiding any manual CLI interaction.
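
The declarative equivalent, whether rendered by a Terraform kubernetes_secret resource or delivered through a secrets operator, is a Secret carrying the argocd.argoproj.io/secret-type: cluster label. A minimal sketch with placeholder values, which must come from your secrets tooling rather than plain Git:

    apiVersion: v1
    kind: Secret
    metadata:
      name: production-eu                          # illustrative cluster name
      namespace: argocd
      labels:
        argocd.argoproj.io/secret-type: cluster    # how ArgoCD discovers clusters
    type: Opaque
    stringData:
      name: production-eu
      server: https://<SPOKE_API_SERVER_URL>
      config: |
        {
          "bearerToken": "<SERVICE_ACCOUNT_TOKEN>",
          "tlsClientConfig": {
            "caData": "<BASE64_ENCODED_CA_CERTIFICATE>"
          }
        }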

4. Deploy Applications via ApplicationSets

For multi-cluster use, the ApplicationSet controller is indispensable. Rather than defining one Application resource per cluster, an ApplicationSet uses generators — such as the cluster generator or the list generator — to produce Application resources dynamically. A single ApplicationSet manifest can target all registered clusters, inject per-cluster Helm values overrides, and automatically add or remove Applications as clusters are registered or deregistered.
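
A sketch of that pattern using the cluster generator follows. The repository URL, chart path, and per-cluster values layout are illustrative (they anticipate the mono-repo structure described in the next section), and referencing values files outside the chart directory requires a reasonably recent ArgoCD release, so verify support in yours:

    apiVersion: argoproj.io/v1alpha1
    kind: ApplicationSet
    metadata:
      name: payments
      namespace: argocd
    spec:
      generators:
        - clusters: {}               # one Application per registered cluster
      template:
        metadata:
          name: 'payments-{{name}}'  # cluster name injected by the generator
        spec:
          project: default
          source:
            repoURL: https://github.com/example/gitops-mono-repo.git
            targetRevision: main
            path: charts/payments
            helm:
              valueFiles:
                # per-cluster override, relative to the chart path
                - '../../clusters/{{name}}/payments-values.yaml'
          destination:
            server: '{{server}}'     # API server URL from the cluster Secret
            namespace: payments
          syncPolicy:
            automated:
              prune: true
              selfHeal: true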

Managing Helm Chart Dependencies in a Mono-Repo

A common pattern in enterprise environments is the mono-repo: all Kubernetes manifests and Helm values files for all applications and all clusters reside in a single Git repository. This simplifies access control and change auditing but introduces Helm dependency management complexity.

Chart Structure for Multi-Cluster Deployments

A well-structured mono-repo separates base charts (the reusable Helm chart definitions) from environment overlays (per-cluster or per-environment values files). A representative directory layout looks like this:

  • charts/ — base Helm charts, one subdirectory per application
  • clusters/production-eu/ — values files and ApplicationSet overrides for the EU production cluster
  • clusters/production-us/ — values files for the US production cluster
  • clusters/staging/ — values files for the shared staging cluster
  • argocd/ — ArgoCD Application and ApplicationSet manifests, also version-controlled

ArgoCD references the mono-repo at a specific Git revision, resolving Helm dependencies at sync time. Pinning chart dependencies in Chart.lock files and committing those lock files to Git ensures deterministic builds across all clusters regardless of when a sync is triggered.
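
Concretely, that means declaring exact dependency versions in each chart's Chart.yaml and committing the Chart.lock that helm dependency update produces. A minimal sketch with an illustrative dependency:

    # charts/payments/Chart.yaml (illustrative application chart)
    apiVersion: v2
    name: payments
    version: 1.4.0
    dependencies:
      - name: postgresql
        version: 12.5.8              # pin an exact version, never a range
        repository: https://charts.bitnami.com/bitnami
    # Run `helm dependency update charts/payments` and commit the resulting
    # Chart.lock so every cluster resolves identical dependency versions.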

Single Revision for All Clusters

A disciplined approach requires that all clusters consume charts from the same Git revision within a promotion pipeline. A change is committed to a feature branch, validated in the staging cluster by ArgoCD's diff and dry-run mechanisms, then merged to main and automatically promoted to production clusters in sequence. This eliminates the version skew problem where production and staging silently diverge over time.

Evaluation Criteria: Selecting the Right Multi-Cluster GitOps Pattern

Not every organisation has the same operational requirements. The comparison below summarises the key evaluation dimensions across the three principal patterns: hub-and-spoke (a single ArgoCD), federated (one ArgoCD per cluster), and core installations on leaf nodes.

  • Operational complexity: low for hub-and-spoke (one control plane to upgrade and monitor); high for federated (N instances to maintain); medium for core installations (the hub is full, the leaves are minimal).
  • Blast radius on control-plane failure: high for hub-and-spoke (all clusters lose GitOps reconciliation); low for federated (failure is isolated per cluster); medium for core installations (leaves continue running their last-known state).
  • Visibility and auditing: excellent for hub-and-spoke (a single UI and audit log); poor for federated without additional aggregation tooling; good for core installations if the hub is the management plane.
  • Network requirements: the hub-and-spoke hub must reach all spoke API servers; each federated instance reaches only its own cluster; a core-installation hub must likewise reach the spoke API servers.
  • Suitability for air-gapped clusters: hub-and-spoke requires VPN or private connectivity; federated suits them well, since each instance is local; core installations are moderate, depending on hub reachability.
  • ApplicationSet support: full for hub-and-spoke, which is the recommended approach; partial for federated, which requires external synchronisation; full on a core-installation hub and not applicable on the leaves.

Common Pitfalls in Multi-Cluster ArgoCD Deployments

Organisations that have gone through this process — or that have inherited poorly designed GitOps platforms — consistently encounter the same failure patterns. Avoiding them saves significant remediation effort.

  • Storing cluster Secrets in plain Git: The cluster registration Secrets created by argocd cluster add contain sensitive bearer tokens. These must never be committed to Git in plaintext. Use Sealed Secrets, External Secrets Operator with a secrets manager such as AWS Secrets Manager, or a Vault-backed dynamic secrets solution.
  • Neglecting RBAC on the ArgoCD API server: In a multi-tenant ArgoCD instance, every registered cluster and every application is visible to all authenticated users by default. Define ArgoCD RBAC policies that restrict teams to their own projects and clusters from the outset (a policy sketch follows this list). Retrofitting RBAC onto a running instance with many users is disruptive.
  • Using --set flags instead of values files: Helm --set arguments are not version-controlled and are invisible in Git history. All configuration must live in values files committed to the repository.
  • Ignoring resource health checks: ArgoCD's Synced status does not mean Healthy. Configure custom health checks for CRDs and application-specific resources. Without them, a failed deployment can appear green in the ArgoCD UI.
  • Running without high-availability mode in production: The default Helm chart values deploy single-replica components. A hub cluster running a single application controller pod is a single point of failure for GitOps reconciliation across every registered cluster. Enable HA mode in the values file before the first production deployment.
  • Omitting network policy between ArgoCD components: ArgoCD's internal components communicate over well-known ports. Enforce Kubernetes NetworkPolicies to restrict which pods can reach the Redis instance and the repository server, limiting the lateral movement surface if a component is compromised (see the NetworkPolicy sketch after this list).
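
To make the RBAC pitfall concrete, below is a minimal sketch of a project-scoped policy in the argocd-rbac-cm ConfigMap; the same content can be supplied through the Helm chart's configs.rbac values. The role, project, group, and cluster names are illustrative placeholders, not defaults:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-rbac-cm
      namespace: argocd
      labels:
        app.kubernetes.io/part-of: argocd
    data:
      # Authenticated users without an explicit role get read-only access
      policy.default: role:readonly
      policy.csv: |
        # The payments team may manage applications only within its project
        p, role:payments-team, applications, *, payments-project/*, allow
        # ...and may inspect, but not modify, its designated cluster
        p, role:payments-team, clusters, get, https://prod-eu.example.internal, allow
        # Map an SSO group from the identity provider onto the role
        g, payments-engineers, role:payments-team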
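
For the network-policy pitfall, here is a minimal sketch restricting ingress to ArgoCD's Redis to the components that legitimately need it. The pod labels assume the defaults applied by the argo-helm chart; verify them against a live installation (kubectl get pods -n argocd --show-labels) before enforcing, and extend the policy to the HAProxy and Sentinel pods if redis-ha is enabled:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: argocd-redis-ingress
      namespace: argocd
    spec:
      podSelector:
        matchLabels:
          app.kubernetes.io/name: argocd-redis
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app.kubernetes.io/name: argocd-server
            - podSelector:
                matchLabels:
                  app.kubernetes.io/name: argocd-repo-server
            - podSelector:
                matchLabels:
                  app.kubernetes.io/name: argocd-application-controller
          ports:
            - protocol: TCP
              port: 6379             # default Redis port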

How Opsio Delivers Multi-Cluster ArgoCD Implementations

Opsio is an AWS Advanced Tier Services Partner with AWS Migration Competency, a Microsoft Partner, and a Google Cloud Partner. Its engineering team holds CKA and CKAD certifications, and all production Kubernetes work is backed by a 24/7 NOC operating under a 99.9% uptime SLA. The Bangalore delivery centre is ISO 27001 certified. With more than 3,000 projects completed since 2022 and over 50 certified engineers, Opsio has accumulated direct operational experience with the multi-cluster GitOps patterns described in this article across Nordic enterprise and mid-market clients.

A typical Opsio multi-cluster ArgoCD engagement covers the following scope:

  • Architecture design: Selection of hub-and-spoke vs. federated topology based on the client's network constraints, compliance requirements, and team structure. For clients subject to Nordic data residency obligations, cluster boundaries and ArgoCD hub placement are designed explicitly to keep data within the required jurisdictions.
  • Infrastructure provisioning: Terraform modules provision EKS, GKE, or AKS clusters and register them with the ArgoCD hub automatically at creation time, eliminating manual CLI steps from the operational workflow.
  • Helm chart repository design: Opsio engineers structure mono-repo or multi-repo layouts, enforce Chart.lock discipline, and implement promotion pipelines that progress changes from staging to production with mandatory diff approval gates.
  • Security hardening: Cluster Secrets are managed through External Secrets Operator backed by AWS Secrets Manager or Azure Key Vault. ArgoCD RBAC policies are aligned to the client's existing identity provider via OIDC/Dex integration. NetworkPolicies restrict intra-namespace communication. For AWS-based clusters, GuardDuty findings are surfaced into the 24/7 NOC monitoring stack.
  • Observability integration: ArgoCD metrics endpoints are scraped by Prometheus and visualised in Grafana dashboards covering sync status, reconciliation latency, and error rates across all registered clusters.
  • Ongoing operations: The 24/7 NOC monitors ArgoCD health continuously. Velero is deployed for cluster-state backup. Upgrade cycles for ArgoCD itself are managed through the same Helm-based GitOps pipeline, ensuring that the platform managing deployments is itself managed declaratively.

Organisations moving from ad-hoc Kubernetes management to a structured GitOps platform typically reduce deployment cycle times significantly and eliminate the configuration drift that accumulates when clusters are managed by hand. Opsio's model keeps the entire configuration surface — ArgoCD installation, cluster registration, application definitions, and Helm values — in Git, auditable, and recoverable. For Nordic enterprises and mid-market organisations that require ISO 27001-aligned processes and always-on operational coverage, this combination of technical rigour and certified delivery capacity makes a material operational difference.

About the Author

Johan Carlsson

Country Manager, Sweden at Opsio

AI, DevOps, Security, and Cloud Solutioning. 12+ years leading enterprise cloud transformation across Scandinavia

Editorial standards: This article was written by a certified practitioner and peer-reviewed by our engineering team. We update content quarterly to ensure technical accuracy. Opsio maintains editorial independence — we recommend solutions based on technical merit, not commercial relationships.