February 7, 2026 · 14 min read

GitOps: ArgoCD + FluxCD, Better Together

Why we use two GitOps tools on every managed Kubernetes cluster: FluxCD for platform operations, ArgoCD for customer workloads. The separation of concerns that makes both sides happy.

By Jan Fuhrer

"ArgoCD or FluxCD?" is one of the most common questions we get from customers evaluating our managed Kubernetes platform. Our answer tends to surprise people: we use both. On every cluster.

This is not indecision. It is a deliberate architecture choice that solves a real problem: how do you cleanly separate what the platform team manages from what application teams manage, on the same cluster, without stepping on each other?

We have been running this dual-GitOps setup across our fleet since 2022. This post explains why, how it works, and why trying to do everything with one tool creates problems you do not see until it is too late.

The problem with one tool

Most organizations start with a single GitOps tool. It manages everything: platform components, application deployments, cluster addons, CRDs. One ArgoCD instance, one big repo (or a few repos), and one team that owns it all.

This works until the organization grows. Then you hit the questions:

  • Your platform team needs to upgrade Cilium. Your application team is mid-deploy. Whose sync wins?
  • A developer wants to see why their deployment is stuck. They open the GitOps UI and see 200 applications, most of which are platform internals they should not touch.
  • Someone accidentally syncs a platform component from the application UI. The observability stack goes down.
  • You want to give team-scoped access to the GitOps dashboard. But the platform components and application components are managed by the same tool, so RBAC gets complicated fast.

The root cause: platform operations and application delivery have fundamentally different requirements, audiences, and change cadences. Trying to serve both with one tool forces compromises on both sides.

Our architecture: FluxCD for platform, ArgoCD for customers

FluxCD (managed by Natron)

  • Cilium CNI
  • cert-manager
  • Prometheus + Grafana + Loki
  • Alertmanager
  • Kyverno policies
  • Velero backups
  • External Secrets Operator
  • Blackbox Exporter
  • Ingress Controller
  • Git source: natron-internal/platform-config

ArgoCD (managed by the customer)

  • Application deployments
  • Helm releases & Kustomize
  • Environment promotion (dev/staging/prod)
  • Deployment status & sync state
  • Rollbacks & manual syncs
  • Team-scoped projects & RBAC
  • Git source: customer-org/application-deployments

We split responsibilities cleanly. Each tool manages a different layer of the cluster, from a different Git source, owned by a different team.

FluxCD: the platform layer

FluxCD manages everything that Natron is responsible for. This is the infrastructure and platform services that make the cluster production-ready:

  • Cilium CNI and network policies
  • cert-manager and TLS automation
  • The full observability stack (Prometheus, Grafana, Loki, Alertmanager)
  • Velero backups
  • Kyverno policy engine
  • External Secrets Operator
  • Ingress controller
  • Blackbox Exporter

FluxCD reconciles from our internal Git repository. Customers never see this repo. They do not need to. When we push a Cilium upgrade or update an Alertmanager config, FluxCD detects the change and reconciles silently. No UI, no manual sync, no notification to the customer.

This is intentional. FluxCD is designed for infrastructure automation. It is CLI-native, event-driven, and does not need a web interface. Platform engineers operate through Git and kubectl, not through dashboards. FluxCD fits this model perfectly.
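As a rough sketch, the platform layer boils down to a Flux GitRepository source plus a Kustomization that reconciles from it. The repo URL, path, secret name, and intervals below are illustrative placeholders, not our actual configuration:

```yaml
# Illustrative Flux source pointing at an internal platform repo
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: platform-config
  namespace: flux-system
spec:
  interval: 1m
  url: ssh://git@git.example.com/natron-internal/platform-config
  ref:
    branch: main
  secretRef:
    name: platform-config-deploy-key  # read-only deploy key
---
# Kustomization that reconciles the platform layer from that source
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: platform
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: platform-config
  path: ./clusters/production
  prune: true  # resources removed from Git are removed from the cluster
```

Everything from here on is a Git commit: no dashboard, no manual step, no customer-visible surface.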

ArgoCD: the application layer

ArgoCD manages everything the customer deploys. Their microservices, APIs, web applications, workers, CronJobs. Each team gets a scoped ArgoCD project with RBAC, and sees only their own applications.

ArgoCD is the right tool here because:

  • Developers need a UI. They want to see sync status, view diffs before syncing, trigger rollbacks. ArgoCD's web interface makes this self-service.
  • Teams need isolation. ArgoCD's AppProject model gives each team a scoped view. Team A cannot see or sync Team B's applications.
  • Environment promotion needs visibility. Moving from dev to staging to production is a workflow that benefits from a visual diff and manual approval gates.

The customer's Git repositories are the source of truth for their applications. They own these repos, their CI pipelines push to them, and ArgoCD syncs from them.
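A minimal sketch of what a team-scoped AppProject can look like. The project name, Git host, namespace pattern, and SSO group are illustrative assumptions:

```yaml
# Illustrative AppProject scoping one team to its own repo and namespaces
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a
  namespace: argocd
spec:
  description: Team A application delivery
  sourceRepos:
    - https://git.example.com/customer-org/application-deployments.git
  destinations:
    - namespace: team-a-*          # only this team's namespaces
      server: https://kubernetes.default.svc
  clusterResourceWhitelist: []      # no cluster-scoped resources for app teams
  roles:
    - name: developers
      policies:
        - p, proj:team-a:developers, applications, sync, team-a/*, allow
      groups:
        - team-a-developers         # mapped from SSO/OIDC
```

With a project like this, Team A's developers can sync only applications inside the project, targeting only their own namespaces; platform internals are simply not visible.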

Why this separation matters

  • Customer applications (ArgoCD, customer team): microservices, APIs, web apps, workers, CronJobs
  • Platform services (FluxCD, Natron): observability, security, networking, backups, policies
  • Cluster infrastructure (FluxCD, Natron): CNI, CSI, node config, cluster addons, CRDs

The layered model creates a clean boundary:

Natron manages (FluxCD):

  • Cluster upgrades
  • CNI + networking
  • Observability stack
  • Backup + restore
  • TLS certificates
  • Policy engine
  • Secret sync
  • Node management

Customer manages (ArgoCD):

  • App deployments
  • Release promotion
  • Config & secrets
  • Scaling decisions
  • Feature flags
  • CI pipelines
  • App monitoring
  • Rollbacks

This boundary is not just organizational. It is technical:

Different change cadences. Platform components change weekly or monthly (security patches, version upgrades). Applications change daily or hourly. Mixing them in one reconciliation loop means platform changes can block application deploys and vice versa.

Different blast radii. A bad platform change (broken Cilium config) affects every workload on the cluster. A bad application change affects one team. These need different rollback strategies, different testing approaches, and different approval gates.

Different access models. Platform changes go through Natron's internal review. Application changes go through the customer's PR process. Different repos, different reviewers, different merge policies. One tool cannot enforce both without becoming overly complex.

Different failure modes. If FluxCD goes down, platform components stop reconciling but applications keep running. If ArgoCD goes down, applications stop syncing but the platform stays healthy. Neither failure takes down both layers.

How it works in practice

  • Natron updates the platform: platform repo updated → FluxCD detects the change → reconciles silently
  • Customer deploys an application: app repo updated → ArgoCD shows the diff → team syncs via the UI

Scenario 1: Natron upgrades Prometheus. We update the HelmRelease version in our platform repo. FluxCD detects the change within 60 seconds. It runs helm upgrade with the new chart version. Prometheus restarts with zero customer impact. The customer does not see a notification, does not need to approve anything, and does not even know it happened. Their Grafana dashboards keep working.
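The change itself is a one-line version bump in a Flux HelmRelease, roughly like this. The chart name, versions, and repository reference are illustrative:

```yaml
# Illustrative HelmRelease: bumping spec.chart.spec.version triggers the upgrade
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: kube-prometheus-stack
  namespace: monitoring
spec:
  interval: 10m
  chart:
    spec:
      chart: kube-prometheus-stack
      version: "58.1.0"   # the one line that changes in the upgrade commit
      sourceRef:
        kind: HelmRepository
        name: prometheus-community
        namespace: flux-system
```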

Scenario 2: Customer deploys a new version of their API. The developer merges a PR that updates the image tag in their deployment manifest. ArgoCD detects the change and shows it in the UI as "OutOfSync". The developer clicks "Sync" (or auto-sync is enabled), and ArgoCD rolls out the new version. If it fails health checks, the developer sees it immediately in the ArgoCD dashboard and can roll back with one click.
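On the customer side, an Application with auto-sync enabled looks roughly like this; the names, repo URL, and paths are illustrative:

```yaml
# Illustrative customer-owned Application with automated sync
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: team-a-api
  namespace: argocd
spec:
  project: team-a
  source:
    repoURL: https://git.example.com/customer-org/application-deployments.git
    targetRevision: main
    path: apps/api/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: team-a-prod
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```

With `automated` omitted, the same manifest gives the manual click-to-sync workflow described above.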

Scenario 3: Natron deploys Kyverno policies, customer deploys a new service. Both happen simultaneously. FluxCD reconciles the new Kyverno policies. ArgoCD syncs the new service. The new service is immediately subject to the new policies. If the service violates a policy, admission is denied and the developer sees the error in ArgoCD's sync status. The feedback loop is instant.

Why not just ArgoCD for everything?

We get this question a lot. ArgoCD is popular, has a great UI, and can technically manage platform components. We tried this in 2021. Here is what went wrong:

RBAC explosion. To give customers access to their applications without exposing platform internals, we had to create complex AppProject configurations with resource whitelists, namespace restrictions, and source repo filters. Every new platform component needed RBAC updates. It was brittle.

Accidental platform syncs. A developer with "sync all" permissions triggered a sync on a platform application that was intentionally out of sync (we were testing a canary upgrade). The observability stack rolled back to the previous version mid-investigation.

Upgrade coupling. ArgoCD itself is a platform component. When we needed to upgrade ArgoCD, it was managing itself. Self-referential reconciliation is possible but adds unnecessary risk. With FluxCD managing ArgoCD, the upgrade path is clean: FluxCD upgrades ArgoCD, ArgoCD continues managing applications.

UI noise. We ended up with 150+ ArgoCD applications, 80% of which were platform internals the customer should never see. Filtering and scoping helped, but the cognitive load was real.

Why not just FluxCD for everything?

Also a fair question. FluxCD can manage application deployments through Kustomizations and HelmReleases. We considered it.

No web UI. Developers who are used to clicking "deploy" or viewing a sync diff in a browser cannot do that with FluxCD. For platform engineers, CLI-only is fine. For application developers across multiple teams, it is a barrier.

No built-in RBAC dashboard. FluxCD's multi-tenancy is namespace-based, which works for isolation. But it does not give teams a self-service view of their deployments. You would need to build a custom UI or use Weave GitOps, which is essentially adding a UI layer back.

Application lifecycle is different. Developers want to see deployment history, compare revisions, trigger manual syncs for hotfixes, and view logs from failed syncs. ArgoCD has this out of the box. Building it on top of FluxCD is reinventing ArgoCD.

The comparison

| Aspect | FluxCD | ArgoCD |
| --- | --- | --- |
| Primary use case | Infrastructure & platform automation | Application delivery & promotion |
| UI | CLI-only (operators do not need a UI) | Full web UI (developers need visibility) |
| Reconciliation | Pull-based, event-driven, silent | Pull-based with sync status dashboard |
| Multi-tenancy | Namespace-scoped Kustomizations | AppProjects with RBAC per team |
| Helm support | HelmRelease CRD (declarative) | Native Helm rendering in UI |
| Drift detection | Automatic correction, no notification needed | Visual diff in UI, manual or auto sync |
| Access model | No UI to secure, cluster-internal only | SSO/OIDC, team-scoped dashboards |

The tools are not competitors in our architecture. They serve different audiences with different needs. FluxCD is the silent operator. ArgoCD is the developer-facing dashboard.

How we set this up

On every managed cluster, the bootstrap looks like this:

  1. FluxCD is installed first. It bootstraps from our internal platform Git repo. All platform services are defined as FluxCD Kustomizations and HelmReleases.
  2. ArgoCD is one of those platform services. FluxCD installs and manages ArgoCD. This means ArgoCD versions, configurations, and RBAC are version-controlled in our platform repo.
  3. ArgoCD connects to customer repos. We create AppProjects scoped to each team's namespaces and repos. The customer's CI/CD pipeline pushes manifests to their repo, and ArgoCD syncs them.

If ArgoCD needs an upgrade, we update the HelmRelease in the platform repo, and FluxCD handles it. If FluxCD needs an upgrade, we update the FluxCD manifests (FluxCD can self-manage its own components).

The two tools never manage the same resources. FluxCD owns the flux-system, monitoring, cert-manager, kyverno, and similar namespaces. ArgoCD owns customer application namespaces. Kyverno policies enforce this boundary.
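A sketch of what such a guardrail could look like as a Kyverno ClusterPolicy, assuming ArgoCD's application controller runs under the conventional `system:serviceaccount:argocd:argocd-application-controller` identity. The namespace list and policy name are illustrative, not our production policy:

```yaml
# Illustrative policy: deny ArgoCD's controller any writes in platform namespaces
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: protect-platform-namespaces
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    - name: block-argocd-in-platform-namespaces
      match:
        any:
          - resources:
              kinds: ["*"]
              namespaces:
                - flux-system
                - monitoring
                - cert-manager
                - kyverno
      preconditions:
        all:
          - key: "{{ request.userInfo.username }}"
            operator: Equals
            value: "system:serviceaccount:argocd:argocd-application-controller"
      validate:
        message: "Platform namespaces are managed by FluxCD, not ArgoCD."
        deny: {}
```

The inverse direction needs no policy: FluxCD only reconciles what is in the platform repo, so it never touches customer namespaces by construction.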

Explore further

This GitOps architecture is part of our broader managed Kubernetes platform. For multi-tenant setups, see how we use ArgoCD and Helm for tenant onboarding.

If you are running into the problems described above, or thinking about your GitOps strategy for a growing platform, schedule a call. We will walk through your current setup and see where the boundaries should be.


About the author

Jan Fuhrer

Platform Engineer and Architect at Natron Tech, designing GitOps workflows and platform automation for managed Kubernetes across Switzerland.

The best interface between two teams is a Git repository, not a shared dashboard.
