Migrating from the Router Helm Chart

Running Operator-compatible resources alongside your Supergraphs

The Router Helm Chart and the Apollo GraphOS Operator were designed with different goals in mind. The Helm chart provides a self-contained deployment of the Router, bundling optional Kubernetes integrations (Ingress, ServiceMonitor, HPA, and more) into a single chart. The Operator is designed to sit alongside your existing Kubernetes tooling and give you fine-grained control over how the Router integrates with the rest of your platform.

This means you manage some resources yourself that the Helm chart previously generated on your behalf. All of these resources are fully supported; this guide shows you how to create each one yourself so you can configure and maintain them on your own terms.

How to target Operator-managed resources

For each Supergraph you create, the Operator provisions:

  • A Service with the same name as your Supergraph resource, in the same namespace

  • A Deployment (or Rollout) whose pods all carry the label apollographql.com/supergraph: <supergraph-name>

The Service uses this label as its pod selector, so any resource you create that needs to target your Router pods or Service should use it as well.

For example, for a Supergraph named my-supergraph in the production namespace:

  • Service name: my-supergraph

  • Pod selector label: apollographql.com/supergraph: my-supergraph

  • In-cluster Service address: my-supergraph.production.svc.cluster.local
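You can verify these conventions from the command line. As a sketch (assuming kubectl access to the cluster and the example names above):

```shell
# List the Router pods for a Supergraph using the Operator-applied label
kubectl get pods -n production -l apollographql.com/supergraph=my-supergraph

# Inspect the Operator-managed Service and confirm it uses the same label as its selector
kubectl get service my-supergraph -n production -o jsonpath='{.spec.selector}'
```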

HorizontalPodAutoscaler

The Router Helm Chart could create an HPA that targeted the Router Deployment directly. The Operator exposes the scale subresource on Supergraph resources, which means any Kubernetes-compatible autoscaler — including HPA v1, HPA v2, and KEDA — can target your Supergraph directly without needing to reference the underlying Deployment.

See Autoscaling your Supergraphs for examples using HPA v1, HPA v2, and KEDA.
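As a quick illustration, a standard HPA v2 manifest can point its scaleTargetRef at the Supergraph itself. This is a sketch; the apiVersion shown matches the Supergraph example later in this guide and may differ in your installation:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-supergraph
  namespace: production
spec:
  scaleTargetRef:
    # Target the Supergraph's scale subresource rather than the underlying Deployment
    apiVersion: apollographql.com/v1alpha4
    kind: Supergraph
    name: my-supergraph
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```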

ServiceMonitor

The Router Helm Chart could create a Prometheus ServiceMonitor that scraped metrics from the Router Service.

To use a ServiceMonitor with the Operator, first configure a metrics port on your Supergraph:

```yaml
spec:
  networking:
    metricsPort: 9090
```

This causes the Operator to expose a metrics port on the Service. You can then create a ServiceMonitor that targets the Supergraph's Service by its label:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-supergraph
  namespace: production
spec:
  selector:
    matchLabels:
      apollographql.com/supergraph: my-supergraph
  endpoints:
    - port: metrics
      path: /metrics
```

Ingress

The Router Helm Chart could create an Ingress resource to expose the Router externally. The Operator creates a Service for your Supergraph that you can target with your own Ingress, applying whatever annotations, TLS configuration, and routing rules your environment requires.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-supergraph
  namespace: production
  annotations:
    # Add any cloud-provider or controller-specific annotations here
spec:
  rules:
    - host: graphql.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-supergraph # Must match the Supergraph name, which is also the Service name
                port:
                  number: 80
```

The Service port corresponds to spec.networking.servicePort on your Supergraph (default: 80).
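If your environment requires a different port, set it on the Supergraph and reference the same value in your Ingress backend. A minimal sketch:

```yaml
spec:
  networking:
    servicePort: 8080
```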

ServiceAccount

The Router Helm Chart created a ServiceAccount specifically for the Router pods and mounted it automatically. When using the Operator, no ServiceAccount is created for your Router pods by default.

note
When you install the Operator itself via its Helm chart, a ServiceAccount is created for the Operator's own controller pods. This is an installation detail — it is not shared with or inherited by your Router pods, and granting it additional permissions is not a substitute for creating a ServiceAccount for your Routers.

If your Routers need a ServiceAccount — for example, to support IRSA/workload identity, cloud provider credential injection, or RBAC rules — create one in the same namespace as your Supergraph and reference it via spec.deployment.podTemplate.serviceAccountName:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-router
  namespace: production
  annotations:
    # Example: AWS IRSA annotation
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-router-role
```

```yaml
apiVersion: apollographql.com/v1alpha4
kind: Supergraph
metadata:
  name: my-supergraph
  namespace: production
spec:
  replicas: 2
  deployment:
    podTemplate:
      routerVersion: 2.7.0
      serviceAccountName: my-router
  schema:
    studio:
      graphRef: my-graph@my-variant
```

Istio

VirtualService

The Router Helm Chart could create an Istio VirtualService that routed traffic to the Router. You can achieve the same result by creating a VirtualService that targets the Operator-managed Service:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-supergraph
  namespace: production
spec:
  hosts:
    - graphql.example.com
  gateways:
    - my-gateway
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: my-supergraph.production.svc.cluster.local
            port:
              number: 80
```

ServiceEntry and DestinationRule

The Router Helm Chart could create ServiceEntry and DestinationRule resources to describe external upstream services that the Router connects to. The Operator makes no assumptions about which external services your Routers communicate with, so these remain entirely under your control.

Create them as you would for any other workload in your mesh:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: my-upstream
  namespace: production
spec:
  hosts:
    - my-upstream.example.com
  location: MESH_EXTERNAL
  ports:
    - number: 443
      name: https
      protocol: HTTPS
  resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: my-upstream
  namespace: production
spec:
  host: my-upstream.example.com
  trafficPolicy:
    tls:
      mode: SIMPLE
```

PodDisruptionBudget

The Router Helm Chart could create a PodDisruptionBudget to protect Router availability during cluster maintenance. Because the Operator labels all Router pods with apollographql.com/supergraph: <supergraph-name>, you can create a PDB with a matching selector:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-supergraph
  namespace: production
spec:
  minAvailable: 1
  selector:
    matchLabels:
      apollographql.com/supergraph: my-supergraph
```