
Normalized caches in Apollo Kotlin


Apollo Kotlin provides two built-in normalized caches for storing and reusing the results of GraphQL operations:

  • An in-memory cache (MemoryCache)
  • A SQLite-backed cache (SqlNormalizedCache)

You can use one (or both!) of these caches in your app to improve its responsiveness for most operations.

To get started with a coarser caching strategy that's faster to set up, take a look at the HTTP cache.

What is a normalized cache?

A normalized cache breaks each of your GraphQL responses into the individual objects it contains. Each object is then cached as a separate entry based on its cache ID. This means that if multiple responses include the same object, that object can be deduplicated into a single cache entry. This reduces the overall size of the cache and helps keep your cached data consistent and fresh.

You can also use a normalized cache as a single source of truth for your UI, enabling it to react to changes in the cache. To learn more about the process, see this blog post.

Normalizing responses

Look at this example query:

query GetFavoriteBook {
  favoriteBook { # Book object
    id
    title
    author { # Author object
      id
      name
    }
  }
}

This query returns a Book object, which in turn includes an Author object. An example response from the server looks like this:

{
  "favoriteBook": {
    "id": "bk123",
    "title": "Les guerriers du silence",
    "author": {
      "id": "au456",
      "name": "Pierre Bordage"
    }
  }
}

A normalized cache does not store this response directly. Instead, it breaks it up into the following entries by default:

Cache
"favoriteBook": {"id": "bk123", "title": "Les guerriers du silence", "author": "ApolloCacheReference{favoriteBook.author}"}
"favoriteBook.author": {"id": "au456", "name": "Pierre Bordage"}
"QUERY_ROOT": {"favoriteBook": "ApolloCacheReference{favoriteBook}"}
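To make the flattening step concrete, here is a toy normalizer in plain Kotlin. It is purely illustrative and is not Apollo Kotlin's actual implementation; only the `ApolloCacheReference{...}` string format is copied from the table above.

```kotlin
// Illustrative sketch only: flatten a nested response map into cache
// entries keyed by field path, as a normalized cache does by default.
fun normalize(
    obj: Map<String, Any?>,
    path: String,
    entries: MutableMap<String, Map<String, Any?>> = mutableMapOf()
): MutableMap<String, Map<String, Any?>> {
    val entry = mutableMapOf<String, Any?>()
    for ((field, value) in obj) {
        if (value is Map<*, *>) {
            val childPath = "$path.$field"
            @Suppress("UNCHECKED_CAST")
            normalize(value as Map<String, Any?>, childPath, entries)
            // Store a reference to the child entry, not the child itself
            entry[field] = "ApolloCacheReference{$childPath}"
        } else {
            entry[field] = value
        }
    }
    entries[path] = entry
    return entries
}

fun main() {
    val favoriteBook = mapOf(
        "id" to "bk123",
        "title" to "Les guerriers du silence",
        "author" to mapOf("id" to "au456", "name" to "Pierre Bordage")
    )
    // Normalize the top-level field; QUERY_ROOT holds a reference to it
    val entries = normalize(favoriteBook, "favoriteBook")
    entries["QUERY_ROOT"] = mapOf("favoriteBook" to "ApolloCacheReference{favoriteBook}")
    entries.forEach { (key, value) -> println("\"$key\": $value") }
}
```

Running this prints the same three entries shown in the table above: the book, the author, and `QUERY_ROOT`.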

⚠️ These default generated cache IDs (favoriteBook and favoriteBook.author) are undesirable for data deduplication. See Specifying cache IDs.

  • Notice that the author of the Book entry now contains the string ApolloCacheReference{favoriteBook.author}. This is a reference to the Author cache entry.
  • Notice also the QUERY_ROOT entry, which is always present if you've cached results from at least one query. This entry contains a reference for each top-level field you've included in a query (e.g., favoriteBook).

Provided caches

In-memory cache

Apollo Kotlin's MemoryCache is a normalized, in-memory cache for storing objects from your GraphQL operations. To use it, first add the apollo-normalized-cache artifact to your dependencies in your build.gradle[.kts] file:

build.gradle[.kts]
dependencies {
  implementation("com.apollographql.apollo3:apollo-normalized-cache:3.8.2")
}

Then include the cache in your ApolloClient initialization, like so:

// Creates a 10MB MemoryCacheFactory
val cacheFactory = MemoryCacheFactory(maxSizeBytes = 10 * 1024 * 1024)

// Build the ApolloClient
val apolloClient = ApolloClient.Builder()
    .serverUrl("https://...")
    // normalizedCache() is an extension function on ApolloClient.Builder
    .normalizedCache(cacheFactory)
    .build()

Because the normalized cache is optional, normalizedCache() is an extension function on ApolloClient.Builder() that's defined in the apollo-normalized-cache artifact. It takes a NormalizedCacheFactory as a parameter so that it can create the cache outside the main thread if needed.

A MemoryCache is a least recently used (LRU) cache. It keeps entries in memory according to the following conditions:

| Name | Description |
|---|---|
| `maxSizeBytes` | The cache's maximum size, in bytes. |
| `expireAfterMillis` | The timeout for expiring existing cache entries, in milliseconds. By default, there is no timeout. |

When your app is stopped, data in the MemoryCache is lost forever. If you need to persist data, you can use the SQLite cache.
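For example, a sketch of a factory that combines both parameters from the table above, using an illustrative one-hour expiration:

```kotlin
// A 10MB in-memory LRU cache whose entries expire after one hour
val cacheFactory = MemoryCacheFactory(
    maxSizeBytes = 10 * 1024 * 1024,
    expireAfterMillis = 60 * 60 * 1000L
)
```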

SQLite cache

Apollo Kotlin's SQLite cache uses SQLDelight to store data persistently. You can use it to persist data across app restarts, or if your cached data becomes too large to fit in memory.

To enable SQLite cache support, add the apollo-normalized-cache-sqlite dependency to your project's build.gradle file:

build.gradle.kts
dependencies {
  implementation("com.apollographql.apollo3:apollo-normalized-cache-sqlite:3.8.2")
}

Then include the SQLite cache in your ApolloClient initialization according to your platform target (different platforms use different drivers):

// Android
val sqlNormalizedCacheFactory = SqlNormalizedCacheFactory("apollo.db")

// JVM
val sqlNormalizedCacheFactory = SqlNormalizedCacheFactory("jdbc:sqlite:apollo.db")

// iOS
val sqlNormalizedCacheFactory = SqlNormalizedCacheFactory("apollo.db")

// Build the ApolloClient
val apolloClient = ApolloClient.Builder()
    .serverUrl("https://...")
    .normalizedCache(sqlNormalizedCacheFactory)
    .build()

You can then use the SQLite cache just like you'd use the MemoryCache.

Chaining caches

To get the most out of both normalized caches, you can chain a MemoryCacheFactory with a SqlNormalizedCacheFactory:

val memoryFirstThenSqlCacheFactory = MemoryCacheFactory(10 * 1024 * 1024)
    .chain(SqlNormalizedCacheFactory(context, "db_name"))

Whenever Apollo Kotlin attempts to read cached data, it checks each chained cache in order until it encounters a hit. It then immediately returns that cached data without reading any additional caches.

Whenever Apollo Kotlin writes data to the cache, those writes propagate down all caches in the chain.
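Putting it together, a client that reads from memory first and falls back to SQLite might be wired up like this (the URL and database name are placeholders):

```kotlin
val apolloClient = ApolloClient.Builder()
    .serverUrl("https://...")
    .normalizedCache(
        // Reads check the memory cache first, then fall through to SQLite;
        // writes propagate to both caches in the chain
        MemoryCacheFactory(maxSizeBytes = 10 * 1024 * 1024)
            .chain(SqlNormalizedCacheFactory("apollo.db"))
    )
    .build()
```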

Setting a fetch policy

After you add a normalized cache to your ApolloClient initialization, Apollo Kotlin automatically uses FetchPolicy.CacheFirst as the default (client-wide) fetch policy for all queries. To change the default, you can call fetchPolicy on the client builder:

val apolloClient = ApolloClient.Builder()
    .serverUrl("https://...")
    .fetchPolicy(FetchPolicy.NetworkOnly)
    .build()

You can also customize how the cache is used for a particular query by setting a fetch policy on that query.

The following snippets show how to set all available fetch policies and their behavior:

val response = apolloClient.query(query)
    // (Default) Check the cache, then only use the network if data isn't present
    .fetchPolicy(FetchPolicy.CacheFirst)
    // Check the cache and never use the network, even if data isn't present
    .fetchPolicy(FetchPolicy.CacheOnly)
    // Always use the network, then check the cache if network fails
    .fetchPolicy(FetchPolicy.NetworkFirst)
    // Always use the network and never check the cache, even if network fails
    .fetchPolicy(FetchPolicy.NetworkOnly)
    // Execute the query
    .execute()

The CacheAndNetwork policy can emit multiple values, so you call toFlow() instead of execute():

apolloClient.query(query)
    // Check the cache and also use the network (1 or 2 values can be emitted)
    .fetchPolicy(FetchPolicy.CacheAndNetwork)
    // Execute the query and collect the responses
    .toFlow().collect { response ->
        // ...
    }

As with normalizedCache(NormalizedCacheFactory), fetchPolicy(FetchPolicy) is an extension function on ApolloClient.Builder(), so you need apollo-normalized-cache in your classpath for this to work.

Because the normalized cache deduplicates data, it enables you to react to cache changes. You do this with watchers that listen for cache changes. Learn more about query watchers.
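As a sketch, a watcher looks like the following. GetFavoriteBookQuery is assumed to be the class generated from the GetFavoriteBook query shown earlier:

```kotlin
apolloClient.query(GetFavoriteBookQuery())
    // watch() returns a Flow that re-emits whenever another operation
    // changes the cached data this query depends on
    .watch()
    .collect { response ->
        // Update the UI with response.data
    }
```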

Specifying cache IDs

By default, Apollo Kotlin uses an object's path as its cache ID. For example, recall the following query and its resulting cache entries from earlier:

query GetFavoriteBook {
  favoriteBook { # Book object
    id
    title
    author { # Author object
      id
      name
    }
  }
}
Cache
"favoriteBook": {"id": "bk123", "title": "Les guerriers du silence", "author": "ApolloCacheReference{favoriteBook.author}"}
"favoriteBook.author": {"id": "au456", "name": "Pierre Bordage"}
"QUERY_ROOT": {"favoriteBook": "ApolloCacheReference{favoriteBook}"}

Now, what happens if we execute a different query to fetch the same Author object with id au456?

query AuthorById($id: String!) {
  author(id: $id) {
    id
    name
  }
}

After executing this query, our cache looks like this:

Cache
"favoriteBook": {"id": "bk123", "title": "Les guerriers du silence", "author": "ApolloCacheReference{favoriteBook.author}"}
"favoriteBook.author": {"id": "au456", "name": "Pierre Bordage"}
"author(\"id\": \"au456\")": {"id": "au456", "name": "Pierre Bordage"}
"QUERY_ROOT": {"favoriteBook": "ApolloCacheReference{favoriteBook}", "author(\"id\": \"au456\")": "ApolloCacheReference{author(\"id\": \"au456\")}"}

We're now caching two identical entries for the same Author object! This is undesirable for a few reasons:

  • It takes up more space.
  • Modifying one of these objects does not notify any watchers of the other object.

We want to deduplicate entries like these by making sure they're assigned the same cache ID when they're written, resulting in a cache that looks more like this:

Cache
"Book:bk123": {"id": "bk123", "title": "Les guerriers du silence", "author": "ApolloCacheReference{Author:au456}"}
"Author:au456": {"id": "au456", "name": "Pierre Bordage"}
"QUERY_ROOT": {"favoriteBook": "ApolloCacheReference{Book:bk123}", "author(\"id\": \"au456\")": "ApolloCacheReference{Author:au456}"}

Fortunately, all of our objects have an id that we can use for this purpose. If an id is unique across all objects in your graph, you can use its value directly as a cache ID. Otherwise, if it's unique per object type, you can prefix it with the type name (as shown above).

Methods

There are two methods for specifying an object's cache ID:

  • Declaratively (recommended). You can add schema extensions that tell the codegen where to find each object's ID, and that verify at compile time that all id fields are requested so that all objects can be identified. Declarative IDs also prefix each ID with the typename to ensure global uniqueness.
  • Programmatically. You can implement custom APIs that retrieve the ID for an object. Because you can execute arbitrary code, this solution is more flexible, but it's also more error prone and requires that you manually request id fields.
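As a sketch of the declarative approach (the file name and type names here are illustrative), you add type policies in a schema extension file alongside your schema:

```graphql
# extra.graphqls (alongside your schema)
# @typePolicy tells the codegen to build each object's cache ID from its
# id field, prefixed with the type name (e.g. "Book:bk123", "Author:au456")
extend type Book @typePolicy(keyFields: "id")
extend type Author @typePolicy(keyFields: "id")
```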