
In-Memory Caching

Configure router caching for query plans and automatic persisted queries

Both the GraphOS Router and the Apollo Router Core use an in-memory LRU cache to store the following data:

  • Generated query plans
  • Automatic persisted queries (APQ)
  • Introspection responses

You can configure certain caching behaviors for generated query plans and APQ (but not introspection responses).
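As a mental model for the router's in-memory cache, here is a minimal LRU cache sketch with a fixed entry limit (illustrative only; this is not the router's actual implementation):

```python
from collections import OrderedDict

class LruCache:
    """Fixed-size cache that evicts the least recently used entry."""

    def __init__(self, limit):
        self.limit = limit
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        # Accessing an entry marks it as most recently used.
        self.entries.move_to_end(key)
        return self.entries[key]

    def put(self, key, value):
        self.entries[key] = value
        self.entries.move_to_end(key)
        if len(self.entries) > self.limit:
            # Evict the least recently used entry.
            self.entries.popitem(last=False)

cache = LruCache(limit=2)
cache.put("query A", "plan A")
cache.put("query B", "plan B")
cache.get("query A")            # touch A so B becomes least recently used
cache.put("query C", "plan C")  # exceeds the limit, evicting B
print(cache.get("query B"))     # → None
print(cache.get("query A"))     # → plan A
```

The `limit` settings described below bound the number of entries held this way, trading memory for a higher hit rate.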


If you have a GraphOS Enterprise plan, you can also configure a Redis-backed distributed cache that enables multiple router instances to share cached values. For details, see Distributed caching in the GraphOS Router.

Performance improvements vs stability

The router is a highly scalable, low-latency runtime. Even with all caching disabled, the time to process operations and generate query plans is minimal (nanoseconds to milliseconds) compared to the overall request, except in the edge cases of extremely large operations and supergraphs. For those running a large graph, caching primarily offers stability: it keeps the overhead for a given operation consistent rather than dramatically improving performance. If you would like to validate the performance wins of operation caching, use the router's traces and metrics to take measurements before and after. In extremely large edge cases, we have seen the cache save 2-10x the time needed to create the query plan, which is still a small part of the overall request.

Caching query plans

Whenever your router receives an incoming GraphQL operation, it generates a query plan to determine which subgraphs it needs to query to resolve that operation.

By caching previously generated query plans, your router can skip generating them again if a client later sends the exact same operation. This improves your router's responsiveness.

The GraphOS Router enables query plan caching by default. In your router's YAML config file, you can configure the maximum number of query plan entries in the cache like so:

supergraph:
  query_planning:
    cache:
      in_memory:
        limit: 512 # This is the default value.

On schema reloads, the cache will be reset, and queries will need to go through query planning again. To avoid latencies right after the reload, you can configure the router to pregenerate query plans for the most used queries before switching to the new schema:

supergraph:
  query_planning:
    # Pre-plan the 100 most used operations when the supergraph changes. (Default is "0", disabled.)
    warmed_up_queries: 100
    cache:
      in_memory:
        limit: 512
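The warm-up step can be pictured as pre-planning the most used cached operations against the new schema before swapping it in. A hypothetical sketch (`plan_fn` and the usage-count map are illustrative assumptions, not router internals):

```python
# Hypothetical sketch: pre-plan the N most used cached operations against a
# new schema before switching traffic to it, so the first requests after a
# reload don't all pay the query-planning cost.

def warm_up(old_cache_usage, plan_fn, warmed_up_queries):
    """old_cache_usage: {operation: hit_count}; plan_fn plans one operation."""
    # Take the N most used operations from the current cache.
    most_used = sorted(old_cache_usage, key=old_cache_usage.get, reverse=True)
    new_cache = {}
    for operation in most_used[:warmed_up_queries]:
        new_cache[operation] = plan_fn(operation)  # plan against the NEW schema
    return new_cache  # swapped in together with the new schema

usage = {"GetUser": 90, "GetOrders": 40, "Ping": 5}
warmed = warm_up(usage, plan_fn=lambda op: f"plan({op})", warmed_up_queries=2)
print(sorted(warmed))  # → ['GetOrders', 'GetUser']
```

The `Ping` operation is dropped because only the two most used operations fit the configured warm-up budget.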

Cache warm-up

When loading a new schema, a query plan might change for some queries, so cached query plans cannot be reused.

To prevent increased latency upon query plan cache invalidation, the router precomputes query plans for:

  • The most used queries from the cache.
  • The entire list of persisted queries.

Precomputed plans will be cached before the router switches traffic over to the new schema.

By default, the router warms up the cache with 30% of the queries already in cache, but it can be configured as follows:

supergraph:
  query_planning:
    # Pre-plan the 100 most used operations when the supergraph changes
    warmed_up_queries: 100

To get more information on the planning and warm-up process, use the following metrics (where <storage> can be redis for the distributed cache or memory for the in-memory cache):

  • counters:

    • apollo_router_cache_size{kind="query planner", storage="<storage>"}: current size of the cache (only for in-memory cache)
    • apollo_router_cache_hit_count{kind="query planner", storage="<storage>}
    • apollo_router_cache_miss_count{kind="query planner", storage="<storage>}
  • histograms:

    • apollo.router.query_planning.plan.duration: time spent planning queries
    • apollo_router_schema_loading_time: time spent loading a schema
    • apollo_router_cache_hit_time{kind="query planner", storage="<storage>}: time to get a value from the cache
    • apollo_router_cache_miss_time{kind="query planner", storage="<storage>}

Typically, we would look at apollo_router_cache_size and the cache hit rate to determine the right size for the in-memory cache, then look at apollo_router_schema_loading_time and apollo.router.query_planning.plan.duration to decide how much time we want to spend warming up queries.
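Deriving the hit rate from the counters above is straightforward; a small sketch with made-up counter values:

```python
# Hypothetical numbers: compute the cache hit rate from the hit/miss counters
# the router exposes, to judge whether the in-memory cache is large enough.

def hit_rate(hits, misses):
    total = hits + misses
    return hits / total if total else 0.0

# e.g. values read from apollo_router_cache_hit_count / apollo_router_cache_miss_count
rate = hit_rate(hits=9_500, misses=500)
print(f"{rate:.0%}")  # → 95%
```

A low hit rate with the cache at its size limit suggests raising `limit`; a high hit rate well under the limit suggests the cache is already large enough.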

Cache warm-up with distributed caching

If the router is using distributed caching for query plans, the warm-up phase will also store the new query plans in Redis. Since all router instances might have the same distribution of queries in their in-memory caches, the list of queries is shuffled before warm-up, so each router instance can plan queries in a different order and share its results through the cache.
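The effect of the shuffle can be sketched as follows, with a plain dict standing in for the shared Redis cache (a hypothetical illustration, not router code):

```python
import random

# Hypothetical sketch: each instance shuffles its warm-up list so instances
# plan queries in different orders; a plan already published to the shared
# (Redis-like) cache by another instance is reused instead of recomputed.

def warm_up_shared(queries, shared_cache, plan_fn):
    planned_here = 0
    order = list(queries)
    random.shuffle(order)  # avoid every instance planning in the same order
    for query in order:
        if query not in shared_cache:  # another instance may have planned it
            shared_cache[query] = plan_fn(query)
            planned_here += 1
    return planned_here

shared = {}  # stands in for Redis
total = warm_up_shared(["Q1", "Q2", "Q3"], shared, lambda q: f"plan({q})")
total += warm_up_shared(["Q1", "Q2", "Q3"], shared, lambda q: f"plan({q})")
print(total)  # → 3: the second "instance" found every plan already cached
```

With many instances warming up concurrently, the shuffle spreads the planning work across them instead of having every instance duplicate it.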

Schema-aware query hashing

The query plan cache key uses a hashing algorithm specifically designed for GraphQL queries, using the schema. If a schema update does not affect a query (for example, a type was added), then the query hash stays the same. The query plan cache can use that key during warm-up to check whether a cached entry can be reused instead of planning the query again.

It can be activated through this option:

supergraph:
  query_planning:
    warmed_up_queries: 100
    experimental_reuse_query_plans: true

Caching automatic persisted queries (APQ)

Automatic Persisted Queries (APQ) enable clients to send a server the hash of their query string, instead of sending the query string itself. When query strings are very large, this can significantly reduce network usage.

The router supports using APQ in its communications with both clients and subgraphs:

  • In its communications with clients, the router acts as a GraphQL server, because it receives queries from clients.
  • In its communications with subgraphs, the router acts as a GraphQL client, because it sends queries to subgraphs.

Because the router's role differs between these two interactions, you configure these APQ settings separately.
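The client side of the APQ protocol can be sketched as follows: send the SHA-256 hash of the query in the `persistedQuery` extension, and fall back to sending the full query if the server doesn't recognize the hash. The in-memory registry dict is an illustrative stand-in for a server-side APQ cache:

```python
import hashlib

# Sketch of the APQ protocol (persistedQuery extension, version 1): the client
# first sends only the SHA-256 of the query; if the server doesn't know the
# hash yet, the client retries with the full query so the server can register it.

def apq_request(query, server_registry):
    sha = hashlib.sha256(query.encode()).hexdigest()
    extensions = {"persistedQuery": {"version": 1, "sha256Hash": sha}}
    if sha in server_registry:
        return {"extensions": extensions}  # hash only: small payload
    server_registry[sha] = query           # models the retry that registers it
    return {"query": query, "extensions": extensions}

registry = {}
first = apq_request("{ me { id } }", registry)
second = apq_request("{ me { id } }", registry)
print("query" in first, "query" in second)  # → True False
```

After the first round trip, repeat sends of the same operation carry only the hash, which is where the bandwidth savings come from.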

APQ with clients

The router enables APQ caching for client operations by default. In your router's YAML config file, you can configure the maximum number of APQ entries in the cache like so:

apq:
  router:
    cache:
      in_memory:
        limit: 512 # This is the default value.

You can also disable client APQ support entirely like so:

apq:
  enabled: false

APQ with subgraphs

By default, the router does not use APQ when sending queries to its subgraphs.

In your router's YAML config file, you can configure this APQ support with a combination of global and per-subgraph settings:

apq:
  subgraph:
    # Disables subgraph APQ globally except where overridden per-subgraph
    all:
      enabled: false
    # Override global APQ setting for individual subgraphs
    subgraphs:
      products:
        enabled: true

In the example above, subgraph APQ is disabled except for the products subgraph.
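The override resolution described above (a per-subgraph setting wins over the global `all` setting) can be sketched with a small lookup function over a dict mirroring the config shape; this is an illustration of the precedence rule, not router code:

```python
# Hypothetical sketch: resolve whether subgraph APQ is enabled for a given
# subgraph, letting a per-subgraph override take precedence over `all`.

def apq_enabled(subgraph, config):
    overrides = config.get("subgraphs", {})
    if subgraph in overrides:
        return overrides[subgraph]["enabled"]  # per-subgraph override wins
    return config["all"]["enabled"]            # otherwise, the global default

config = {
    "all": {"enabled": False},
    "subgraphs": {"products": {"enabled": True}},
}
print(apq_enabled("products", config), apq_enabled("reviews", config))  # → True False
```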
