
Deploy in Kubernetes

Self-hosted deployment of Apollo Router in Kubernetes


Learn how to deploy a self-hosted Apollo Router in Kubernetes using Helm charts.

Diagram: clients send queries through a load balancer to Apollo Router instances, each optionally paired with a coprocessor, inside the Kubernetes cluster.

The following guides provide the steps to:

  • Get a Helm chart from the Apollo container repository.
  • Deploy a router with a basic Helm chart.
  • Configure chart values to export metrics, enable Rhai scripting, and deploy a coprocessor.
  • Choose chart values that best suit migration from a gateway to the router.

NOTE

The Apollo Router source code and all its distributions are made available under the Elastic License v2 (ELv2).

About the router Helm chart

Helm is a package manager for Kubernetes (k8s). Apollo provides an application Helm chart with each release of Apollo Router in GitHub. Since router version 0.14.0, Apollo has released the router Helm chart as an Open Container Initiative (OCI) image in its GitHub container registry.

NOTE

The path to the OCI router chart is oci://ghcr.io/apollographql/helm-charts/router.

You customize a deployed router with the same configuration options as a standalone router, but set under different Helm CLI options and YAML keys.
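
For example, an option set at the top level of a standalone router's config file nests under the router.configuration key in the chart's values. The sketch below illustrates this mapping with the supergraph listen address (the value shown is only an illustration):

```yaml
# Standalone router: options live at the top level of the router's
# YAML config file (passed with `router --config router.yaml`).
supergraph:
  listen: 0.0.0.0:4000
---
# Helm deployment: the same option nests under `router.configuration`
# in values.yaml, or as a CLI override:
#   --set router.configuration.supergraph.listen=0.0.0.0:4000
router:
  configuration:
    supergraph:
      listen: 0.0.0.0:4000
```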

Basic deployment

Follow this guide to deploy the Apollo Router using Helm to install the basic chart provided with each router release.

Each router chart has a values.yaml file with router and deployment settings. The released, unedited file sets only a few explicit values.

Set up Helm

  1. Install Helm version 3.x. The Apollo Router's Helm chart requires Helm v3.x.

    NOTE

    Your Kubernetes version must be compatible with Helm v3. For details, see the Helm version support policy.

  2. Log in with Helm to the Apollo container registry in GitHub.

    • Get a GitHub OCI token and save it in an environment variable, GITHUB_OCI_TOKEN. For reference, follow GitHub's guide for managing your personal access tokens.

    • Log in with the helm registry login command, using your saved GITHUB_OCI_TOKEN and GitHub username:

      echo ${GITHUB_OCI_TOKEN} | helm registry login -u <username> --password-stdin ghcr.io
  3. After logging in, verify your access to the registry by showing the latest router chart values with the helm show values command:

    helm show values oci://ghcr.io/apollographql/helm-charts/router

Set up cluster

Install the tools and provision the infrastructure for your Kubernetes cluster.

For reference, an example deployment walks through gathering accounts and credentials for your cloud platform (GCP or AWS), provisioning resources, and deploying your graph.

💡 TIP

To manage the system resources you need to deploy the router on Kubernetes, set appropriate CPU and memory requests and limits in your chart values.

Set up graph

Set up your self-hosted graph and get its graph ref and graph API key.

If you need a guide to set up your graph, you can follow the self-hosted router quickstart and complete step 1 (Set up Apollo tools), step 4 (Obtain your subgraph schemas), and step 5 (Publish your subgraph schemas).

Deploy router

To deploy the router, run the helm install command with an argument for the OCI image in the container repository, an argument for the values.yaml configuration file, and additional --set arguments to override specific configuration values.

helm install \
  --namespace <router-namespace> \
  --set managedFederation.apiKey="<graph-api-key>" \
  --set managedFederation.graphRef="<graph-ref>" \
  oci://ghcr.io/apollographql/helm-charts/router \
  --version <router-version> \
  --values router/values.yaml

The necessary arguments for specific configuration values:

  • --set managedFederation.apiKey="<graph-api-key>". The API key of your graph.
  • --set managedFederation.graphRef="<graph-ref>". The graph ref of your graph.

Some optional but recommended arguments:

  • --namespace <router-namespace>. The namespace scope for this deployment.
  • --version <router-version>. The version of the router to deploy. If not specified by helm install, the latest version is installed.

Verify deployment

Verify that your router is one of the deployed releases with the helm list command.

If you deployed with the --namespace <router-namespace> option, you can list only the releases within your namespace:

helm list --namespace <router-namespace>
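
To go a step further, you can send a test query to the deployed router. This sketch assumes a release whose deployment is named <release-name>-router and uses the router's default listen port of 4000:

```shell
# Forward the router's port to your machine (deployment name is hypothetical;
# check `kubectl get deployments --namespace <router-namespace>` for the real one).
kubectl port-forward --namespace <router-namespace> deployment/<release-name>-router 4000:4000 &

# Send a minimal GraphQL query to confirm the router responds.
curl -X POST http://127.0.0.1:4000/ \
  -H 'content-type: application/json' \
  -d '{"query":"{ __typename }"}'
```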

Deploy with metrics endpoints

The router supports metrics endpoints for Prometheus and OpenTelemetry protocol (OTLP) exporters. A basic deployment doesn't enable metrics endpoints, because the router chart disables both Prometheus (explicitly) and OTLP (by omission).

To enable metrics endpoints in your deployed router through a YAML configuration file:

  1. Create a YAML file, my_values.yaml, to contain additional values that override default values.

  2. Edit my_values.yaml to enable metrics endpoints:

    my_values.yaml
    router:
      configuration:
        telemetry:
          metrics:
            prometheus:
              enabled: true
              listen: 0.0.0.0:9090
              path: "/metrics"
            otlp:
              temporality: delta
              endpoint: <otlp-endpoint-addr>

    NOTE

    Although this example enables both Prometheus and OTLP, in practice it's common to enable only one endpoint.

    • router.configuration.telemetry.metrics.prometheus was already configured but disabled (enabled: false) by default. This configuration sets enabled: true.
    • router.configuration.telemetry.metrics.otlp is enabled by inclusion.
    • router.configuration.telemetry.metrics.otlp.temporality defaults to temporality: cumulative, which is a good choice for most metrics consumers. For DataDog, use temporality: delta.
  3. Deploy the router with the additional YAML configuration file. For example, starting with the helm install command from the basic deployment step, append --values my_values.yaml:

    helm install \
      --namespace <router-namespace> \
      --set managedFederation.apiKey="<graph-api-key>" \
      --set managedFederation.graphRef="<graph-ref>" \
      oci://ghcr.io/apollographql/helm-charts/router \
      --version <router-version> \
      --values router/values.yaml \
      --values my_values.yaml

Deploy with Rhai scripts

The router supports Rhai scripts to add custom functionality.

Enabling Rhai scripts in your deployed router requires mounting an extra volume for your Rhai scripts and getting your scripts onto the volume. You can do this by following the steps in Apollo's example of a custom chart for Rhai scripts. The example creates a new (in-house) chart that wraps (and depends on) the released router chart, and the new chart has templates that add the necessary configuration to allow Rhai scripts for a deployed router.
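
As a sketch of the values such a setup produces, the chart's extraVolumes and extraVolumeMounts values can mount a ConfigMap of scripts, which the router configuration then points at. The ConfigMap name and mount path here are assumptions:

```yaml
# Sketch only: assumes a ConfigMap named `rhai-scripts` created from your
# script, e.g. `kubectl create configmap rhai-scripts --from-file=main.rhai`.
extraVolumes:
  - name: rhai-volume
    configMap:
      name: rhai-scripts
extraVolumeMounts:
  - name: rhai-volume
    mountPath: /dist/rhai

router:
  configuration:
    rhai:
      scripts: /dist/rhai   # directory where the volume is mounted
      main: main.rhai       # entry-point script in that directory
```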

Deploy with a coprocessor

The router supports external coprocessing to run custom logic on requests throughout the router's request-handling lifecycle.

A deployed coprocessor has its own application image and container in the router pod.

To configure a coprocessor and its container for your deployed router through a YAML configuration file:

  1. Create a YAML file, my_values.yaml, to contain additional values that override default values.

  2. Edit my_values.yaml to configure a coprocessor for the router. For reference, follow the typical and minimal configuration examples, and apply them to router.configuration.coprocessor.

  3. Edit my_values.yaml to add a container for the coprocessor.

    my_values.yaml
    extraContainers:
      - name: <coprocessor-deployed-name> # name of deployed container
        image: <coprocessor-app-image> # name of application image
        ports:
          - containerPort: <coprocessor-container-port> # must match port of router.configuration.coprocessor.url
        env: [] # array of environment variables
  4. Deploy the router with the additional YAML configuration file. For example, starting with the helm install command from the basic deployment step, append --values my_values.yaml:

    helm install \
      --namespace <router-namespace> \
      --set managedFederation.apiKey="<graph-api-key>" \
      --set managedFederation.graphRef="<graph-ref>" \
      oci://ghcr.io/apollographql/helm-charts/router \
      --version <router-version> \
      --values router/values.yaml \
      --values my_values.yaml
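
Putting steps 2 and 3 together, a combined my_values.yaml might look like this sketch. The container name, image, and port are hypothetical; because the coprocessor runs in the same pod as the router, the router reaches it over localhost:

```yaml
extraContainers:
  - name: my-coprocessor                  # hypothetical container name
    image: example.io/my-coprocessor:1.0  # hypothetical application image
    ports:
      - containerPort: 8081

router:
  configuration:
    coprocessor:
      url: http://127.0.0.1:8081  # same pod, so localhost; port matches containerPort above
      router:
        request:
          headers: true           # send request headers to the coprocessor
```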

Separate configurations per environment

To support your different deployment configurations for different environments (development, staging, production, etc.), Apollo recommends separating your configuration values into separate files:

  • A common file, which contains values that apply across all environments.
  • A unique environment file per environment, which includes and overrides the values from the common file while adding new environment-specific values.

The helm install command applies each --values <values-file> option in the order you set them within the command. Therefore, a common file must be set before an environment file so that the environment file's values are applied last and override the common file's values.

For example, this command deploys with a common_values.yaml file applied first and then a prod_values.yaml file:

helm install \
  --namespace <router-namespace> \
  --set managedFederation.apiKey="<graph-api-key>" \
  --set managedFederation.graphRef="<graph-ref>" \
  oci://ghcr.io/apollographql/helm-charts/router \
  --version <router-version> \
  --values router/values.yaml \
  --values common_values.yaml \
  --values prod_values.yaml
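
As an illustration of the override order, using the include_subgraph_errors setting covered later in this guide, the environment file's value wins because it is applied last (file contents are illustrative):

```yaml
# common_values.yaml -- applied first
router:
  configuration:
    include_subgraph_errors:
      all: true    # expose subgraph error details by default

# prod_values.yaml -- applied last, so this value wins in production
router:
  configuration:
    include_subgraph_errors:
      all: false   # redact subgraph error details from clients
```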

Configure for migration from gateway

When migrating from the gateway to the router, consider the following tips to maximize the compatibility of your router deployment.

Increase maximum request bytes

By default, the router sets its maximum supported request size to 2MB, while the gateway sets its maximum to 20MB. If your gateway accepts requests larger than 2MB, which it does by default, use the following configuration to ensure that the router is compatible with your gateway deployment.

values.yaml
router:
  configuration:
    limits:
      http_max_request_bytes: 20000000 # 20MB

Increase request timeout

Increase the router's timeout to accommodate subgraph requests with high latency.

values.yaml
router:
  configuration:
    traffic_shaping:
      router:
        timeout: 6min
      all:
        timeout: 5min

Propagate subgraph errors

The gateway propagates subgraph errors to clients, but the router doesn't by default, so it needs to be configured to propagate them.

values.yaml
router:
  configuration:
    include_subgraph_errors:
      all: true

© 2024 Apollo Graph Inc.
