# Customizing the behavior of cached fields
You can customize how a particular field in your Apollo Client cache is read and written. To do so, you define a field policy for the field. A field policy can include:

- A `read` function that specifies what happens when the field's cached value is read
- A `merge` function that specifies what happens when the field's cached value is written
- An array of key arguments that help the cache avoid storing unnecessary duplicate data
You provide field policies to the constructor of `InMemoryCache`. Each field policy is defined inside whichever `TypePolicy` object corresponds to the field's parent type. The following example defines a field policy for the `name` field of a `Person` type:
```ts
const cache = new InMemoryCache({
  typePolicies: {
    Person: {
      fields: {
        name: {
          read(name) {
            // Return the cached name, transformed to upper case
            return name.toUpperCase();
          }
        }
      },
    },
  },
});
```
The field policy above defines a `read` function that specifies what the cache returns whenever `Person.name` is queried.
## The `read` function
If you define a `read` function for a field, the cache calls that function whenever your client queries for the field. In the query response, the field is populated with the `read` function's return value, instead of the field's cached value.
The first parameter of a `read` function provides the field's currently cached value, if one exists. You can use this to help determine the function's return value.

The second parameter is an object that provides access to several properties and helper functions, which are explained in the `FieldPolicy` API reference below.
The following `read` function assigns a default value of `UNKNOWN NAME` to the `name` field of a `Person` type if the actual value is not available in the cache. In all other cases, the cached value is returned.
```ts
const cache = new InMemoryCache({
  typePolicies: {
    Person: {
      fields: {
        name: {
          read(name = "UNKNOWN NAME") {
            return name;
          }
        },
      },
    },
  },
});
```
If a field accepts arguments, the `read` function's second parameter includes the values of those arguments.
The following `read` function checks whether the `maxLength` argument is provided when the `name` field is queried. If it is, the function returns only the first `maxLength` characters of the person's name. Otherwise, the person's full name is returned.
```ts
const cache = new InMemoryCache({
  typePolicies: {
    Person: {
      fields: {
        // If a field's TypePolicy would only include a read function,
        // you can optionally define the function like so, instead of
        // nesting it inside an object as shown in the example above.
        name(name: string, { args }) {
          if (args && typeof args.maxLength === "number") {
            return name.substring(0, args.maxLength);
          }
          return name;
        },
      },
    },
  },
});
```
If a field's value is an object with multiple subfields, you can provide a default object as the `read` function's parameter default, then destructure and return it. Each subfield is then available individually in query results.
The following `read` function assigns a default value of `UNKNOWN FIRST_NAME` to the `first_name` subfield of a `fullName` field, and `UNKNOWN LAST_NAME` to its `last_name` subfield.
```ts
const cache = new InMemoryCache({
  typePolicies: {
    Person: {
      fields: {
        fullName: {
          read(full_name = {
            first_name: "UNKNOWN FIRST_NAME",
            last_name: "UNKNOWN LAST_NAME",
          }) {
            return { ...full_name };
          },
        },
      },
    },
  },
});
```
The following query returns the `first_name` and `last_name` subfields from the `fullName` field:

```graphql
query personWithFullName {
  fullName {
    first_name
    last_name
  }
}
```
You can define a `read` function for a field that isn't even defined in your schema. For example, the following `read` function enables you to query a `userId` field that is always populated with locally stored data:

```ts
const cache = new InMemoryCache({
  typePolicies: {
    Person: {
      fields: {
        userId() {
          return localStorage.getItem("loggedInUserId");
        },
      },
    },
  },
});
```
Note that to query for a field that is only defined locally, your query should include the `@client` directive on that field so that Apollo Client doesn't include it in requests to your GraphQL server.
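For example, a query that requests the locally defined `userId` field might look like the following sketch. (The `person` root field and its other subfields are illustrative; they aren't part of the examples above.)

```graphql
query GetPersonWithUserId {
  person {
    id
    name
    # Resolved from the cache by the read function; never sent to the server.
    userId @client
  }
}
```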
Other use cases for a `read` function include:

- Transforming cached data to suit your client's needs, such as rounding floating-point values to the nearest integer
- Deriving local-only fields from one or more schema fields on the same object (such as deriving an `age` field from a `birthDate` field; see the sketch after this list)
- Deriving local-only fields from one or more schema fields across multiple objects
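Here's a minimal sketch of the second use case: deriving a local-only `age` field from a cached `birthDate` field. The `birthDate` field (assumed here to be an ISO-8601 date string on `Person`) is an assumption for illustration.

```ts
import { InMemoryCache } from "@apollo/client";

const cache = new InMemoryCache({
  typePolicies: {
    Person: {
      fields: {
        // Local-only field derived from the cached birthDate field
        // (assumed to be an ISO-8601 string) on the same Person object.
        age: {
          read(_, { readField }) {
            const birthDate = readField<string>("birthDate");
            if (!birthDate) return undefined;
            // Approximate age in whole years.
            const ms = Date.now() - new Date(birthDate).getTime();
            return Math.floor(ms / (365.25 * 24 * 60 * 60 * 1000));
          },
        },
      },
    },
  },
});
```

As with `userId` above, a query would request this field as `age @client`.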
For a full list of the options provided to the `read` function, see the `FieldPolicy` API reference below. You will almost never need to use all of these options, but each one has an important role when reading fields from the cache.
## The `merge` function
If you define a `merge` function for a field, the cache calls that function whenever the field is about to be written with an incoming value (such as from your GraphQL server). When the write occurs, the field's new value is set to the `merge` function's return value, instead of the original incoming value.
### Merging arrays
A common use case for a `merge` function is to define how to write to a field that holds an array. By default, the field's existing array is completely replaced by the incoming array. Often, it's preferable to concatenate the two arrays instead, like so:

```ts
const cache = new InMemoryCache({
  typePolicies: {
    Agenda: {
      fields: {
        tasks: {
          merge(existing = [], incoming: any[]) {
            return [...existing, ...incoming];
          },
        },
      },
    },
  },
});
```
Note that `existing` is undefined the very first time this function is called for a given instance of the field, because the cache does not yet contain any data for the field. Providing the `existing = []` default parameter is a convenient way to handle this case.
Your `merge` function cannot push the `incoming` array directly onto the `existing` array. It must instead return a new array, to prevent potential errors. In development mode, Apollo Client prevents unintended modification of the `existing` data with `Object.freeze`.
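To make this constraint concrete, here's a sketch of the same `Agenda.tasks` policy, annotated with what not to do:

```ts
const cache = new InMemoryCache({
  typePolicies: {
    Agenda: {
      fields: {
        tasks: {
          merge(existing = [], incoming: any[]) {
            // Don't do this: the cached existing array is frozen with
            // Object.freeze in development, so mutating it throws a TypeError.
            // existing.push(...incoming);

            // Do this instead: build and return a brand-new array.
            return existing.concat(incoming);
          },
        },
      },
    },
  },
});
```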
### Merging non-normalized objects
You can use a `merge` function to intelligently combine nested objects that are not normalized in your cache, assuming those objects are nested within the same normalized parent.
#### Example
Let's say our graph's schema includes the following types:
```graphql
type Book {
  id: ID!
  title: String!
  author: Author!
}

type Author { # Has no key fields
  name: String!
  dateOfBirth: String!
}

type Query {
  favoriteBook: Book!
}
```
With this schema, our cache can normalize `Book` objects because they have an `id` field. However, `Author` objects have no `id` field, nor any other fields that can uniquely identify a particular instance. Therefore, the cache can't normalize `Author` objects, and it can't tell when two different `Author` objects actually represent the same author.
Now, let's say our client executes the following two queries, in order:
```graphql
query BookWithAuthorName {
  favoriteBook {
    id
    author {
      name
    }
  }
}

query BookWithAuthorBirthdate {
  favoriteBook {
    id
    author {
      dateOfBirth
    }
  }
}
```
When the first query returns, Apollo Client writes a `Book` object like the following to the cache:

```json
{
  "__typename": "Book",
  "id": "abc123",
  "author": {
    "__typename": "Author",
    "name": "George Eliot"
  }
}
```
Remember that because `Author` objects can't be normalized, they're nested directly within their parent object.
Now, when the second query returns, the cached `Book` object is updated to the following:

```json
{
  "__typename": "Book",
  "id": "abc123",
  "author": {
    "__typename": "Author",
    "dateOfBirth": "1819-11-22"
  }
}
```
The `Author`'s `name` field has been removed! This is because Apollo Client can't be sure that the `Author` objects returned by the two queries actually refer to the same author. So instead of merging the fields of the two objects, Apollo Client completely overwrites the object (and logs a warning).
However, we are confident that these two objects represent the same author, because a book's author virtually never changes. Therefore, we can tell the cache to treat `Book.author` objects as the same object as long as they belong to the same `Book`. This enables the cache to merge the `name` and `dateOfBirth` fields returned by the different queries above.
To achieve this, we can define a custom `merge` function for the `author` field within the type policy for `Book`:

```ts
const cache = new InMemoryCache({
  typePolicies: {
    Book: {
      fields: {
        author: {
          merge(existing, incoming, { mergeObjects }) {
            return mergeObjects(existing, incoming);
          },
        },
      },
    },
  },
});
```
Here, we use the `mergeObjects` helper function to merge values from the `existing` and `incoming` `Author` objects. It's important to use `mergeObjects` here instead of merging the objects with object spread syntax, because `mergeObjects` makes sure to call any defined `merge` functions for subfields of `Book.author`.
Notice that this `merge` function has zero `Book`- or `Author`-specific logic in it! This means you can reuse it for any number of non-normalized object fields. And because this exact `merge` function definition is so common, you can also define it with the following shorthand:

```ts
const cache = new InMemoryCache({
  typePolicies: {
    Book: {
      fields: {
        author: {
          // Short for options.mergeObjects(existing, incoming).
          merge: true,
        },
      },
    },
  },
});
```
In summary, the `Book.author` policy above allows the cache to safely merge the `author` objects of any two `Book` objects that have the same identity.
### Configuring `merge` functions for types rather than fields
Beginning with Apollo Client 3.3, you can avoid having to configure `merge` functions for lots of different fields that might hold an `Author` object, and instead put the `merge` configuration in the `Author` type policy:

```ts
const cache = new InMemoryCache({
  typePolicies: {
    Book: {
      fields: {
        // No longer necessary!
        // author: {
        //   merge: true,
        // },
      },
    },
    Author: {
      merge: true,
    },
  },
});
```
These configurations have the same behavior, but putting `merge: true` in the `Author` type policy is shorter and easier to maintain, especially when `Author` objects could appear in lots of different fields besides `Book.author`.
Remember that mergeable objects are only merged with existing objects occupying the same field of the same parent object, and only when the `__typename` of the two objects is the same. If you really need to work around these rules, you can write a custom `merge` function to do whatever you want, but `merge: true` follows these rules.
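As a sketch of such a workaround (and usually not what you want), a custom `merge` function can blindly combine fields with object spread, ignoring the `__typename` rule entirely:

```ts
const cache = new InMemoryCache({
  typePolicies: {
    Book: {
      fields: {
        author: {
          // Unlike merge: true, this combines fields even when the existing
          // and incoming objects have different __typename values. It also
          // bypasses any merge functions defined for subfields (use
          // mergeObjects if you need those to run).
          merge(existing, incoming) {
            return { ...existing, ...incoming };
          },
        },
      },
    },
  },
});
```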
### Merging arrays of non-normalized objects
Once you're comfortable with the ideas and recommendations from the previous section, consider what happens when a `Book` can have multiple authors:

```graphql
query BookWithAuthorNames {
  favoriteBook {
    isbn
    title
    authors {
      name
    }
  }
}

query BookWithAuthorLanguages {
  favoriteBook {
    isbn
    title
    authors {
      language
    }
  }
}
```
In this case, the `favoriteBook.authors` field is no longer just a single object, but an array of authors, so it's even more important to define a custom `merge` function to prevent loss of data by replacement:

```ts
const cache = new InMemoryCache({
  typePolicies: {
    Book: {
      fields: {
        authors: {
          merge(existing: any[], incoming: any[], { readField, mergeObjects }) {
            const merged: any[] = existing ? existing.slice(0) : [];
            const authorNameToIndex: Record<string, number> = Object.create(null);
            if (existing) {
              existing.forEach((author, index) => {
                authorNameToIndex[readField<string>("name", author)] = index;
              });
            }
            incoming.forEach(author => {
              const name = readField<string>("name", author);
              const index = authorNameToIndex[name];
              if (typeof index === "number") {
                // Merge the new author data with the existing author data.
                merged[index] = mergeObjects(merged[index], author);
              } else {
                // First time we've seen this author in this array.
                authorNameToIndex[name] = merged.length;
                merged.push(author);
              }
            });
            return merged;
          },
        },
      },
    },
  },
});
```
Instead of blindly replacing the existing `authors` array with the incoming array, this code concatenates the arrays together, while also checking for duplicate author names and merging the fields of any repeated `author` objects.
The `readField` helper function is more robust than using `author.name`, because it also tolerates the possibility that `author` is a `Reference` object referring to data elsewhere in the cache, which could happen if you (or someone else on your team) eventually gets around to specifying `keyFields` for the `Author` type.
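For reference, specifying `keyFields` for `Author` might look like the following sketch, under the simplifying assumption that an author's `name` uniquely identifies them:

```ts
const cache = new InMemoryCache({
  typePolicies: {
    Author: {
      // Once Author objects are normalized by a key, fields like Book.authors
      // hold Reference objects instead of nested author objects, which the
      // readField-based merge function above already handles.
      keyFields: ["name"],
    },
  },
});
```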
As this example suggests, `merge` functions can become quite sophisticated. When this happens, you can often extract the generic logic into a reusable helper function:

```ts
const cache = new InMemoryCache({
  typePolicies: {
    Book: {
      fields: {
        authors: {
          merge: mergeArrayByField<AuthorType>("name"),
        },
      },
    },
  },
});
```
Now that you've hidden the details behind a reusable abstraction, it no longer matters how complicated the implementation gets. This is liberating, because it allows you to improve your client-side business logic over time, while keeping related logic consistent across your entire application.
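If you're curious what such a helper might look like, here's one possible sketch. Note that `mergeArrayByField` is not provided by Apollo Client (you would write it yourself), and `AuthorType` above stands in for whatever TypeScript type describes your author objects.

```ts
function mergeArrayByField<T>(keyFieldName: string) {
  // Returns a merge function that generalizes the authors example above
  // to deduplicate and merge array items by any key field.
  return (
    existing: T[] | undefined,
    incoming: T[],
    { readField, mergeObjects }: any,
  ): T[] => {
    const merged = existing ? existing.slice(0) : [];
    const keyToIndex: Record<string, number> = Object.create(null);
    merged.forEach((item, index) => {
      keyToIndex[readField(keyFieldName, item)] = index;
    });
    incoming.forEach(item => {
      const key = readField(keyFieldName, item);
      const index = keyToIndex[key];
      if (typeof index === "number") {
        // Merge repeated items that share the same key field value.
        merged[index] = mergeObjects(merged[index], item);
      } else {
        keyToIndex[key] = merged.length;
        merged.push(item);
      }
    });
    return merged;
  };
}
```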
## Handling pagination
When a field holds an array, it's often useful to paginate that array's results, because the total result set can be arbitrarily large.

Typically, a query includes pagination arguments that specify:

- Where to start in the array, using either a numeric offset or a starting ID
- The maximum number of elements to return in a single "page"
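For example, a paginated query against the `Agenda.tasks` field used below might look like this sketch (the `agenda` root field and the task subfields are illustrative):

```graphql
query GetAgendaTasks {
  agenda {
    # Fetch the second page of ten tasks.
    tasks(offset: 10, limit: 10) {
      id
      description
    }
  }
}
```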
If you implement pagination for a field, it's important to keep pagination arguments in mind if you then implement `read` and `merge` functions for the field:

```ts
const cache = new InMemoryCache({
  typePolicies: {
    Agenda: {
      fields: {
        tasks: {
          merge(existing: any[], incoming: any[], { args }) {
            const merged = existing ? existing.slice(0) : [];
            // Insert the incoming elements in the right places, according to args.
            const end = args.offset + Math.min(args.limit, incoming.length);
            for (let i = args.offset; i < end; ++i) {
              merged[i] = incoming[i - args.offset];
            }
            return merged;
          },

          read(existing: any[], { args }) {
            // If we read the field before any data has been written to the
            // cache, this function will return undefined, which correctly
            // indicates that the field is missing.
            const page = existing && existing.slice(
              args.offset,
              args.offset + args.limit,
            );
            // If we ask for a page outside the bounds of the existing array,
            // page.length will be 0, and we should return undefined instead of
            // the empty array.
            if (page && page.length > 0) {
              return page;
            }
          },
        },
      },
    },
  },
});
```
As this example shows, your `read` function often needs to cooperate with your `merge` function, by handling the same arguments in the inverse direction.
If you want a given "page" to start after a specific entity ID instead of starting from `args.offset`, you can implement your `merge` and `read` functions as follows, using the `readField` helper function to examine existing task IDs:

```ts
const cache = new InMemoryCache({
  typePolicies: {
    Agenda: {
      fields: {
        tasks: {
          merge(existing: any[], incoming: any[], { args, readField }) {
            const merged = existing ? existing.slice(0) : [];
            // Obtain a Set of all existing task IDs.
            const existingIdSet = new Set(
              merged.map(task => readField("id", task)));
            // Remove incoming tasks already present in the existing data.
            incoming = incoming.filter(
              task => !existingIdSet.has(readField("id", task)));
            // Find the index of the task just before the incoming page of tasks.
            const afterIndex = merged.findIndex(
              task => args.afterId === readField("id", task));
            if (afterIndex >= 0) {
              // If we found afterIndex, insert incoming after that index.
              merged.splice(afterIndex + 1, 0, ...incoming);
            } else {
              // Otherwise insert incoming at the end of the existing data.
              merged.push(...incoming);
            }
            return merged;
          },

          read(existing: any[], { args, readField }) {
            if (existing) {
              const afterIndex = existing.findIndex(
                task => args.afterId === readField("id", task));
              if (afterIndex >= 0) {
                const page = existing.slice(
                  afterIndex + 1,
                  afterIndex + 1 + args.limit,
                );
                if (page && page.length > 0) {
                  return page;
                }
              }
            }
          },
        },
      },
    },
  },
});
```
Note that if you call `readField(fieldName)`, it returns the value of the specified field from the current object. If you pass an object as a second argument to `readField` (e.g., `readField("id", task)`), `readField` instead reads the specified field from that object. In the above example, reading the `id` field from existing `Task` objects allows us to deduplicate the `incoming` task data.
The pagination code above is complicated, but after you implement your preferred pagination strategy, you can reuse it for every field that uses that strategy, regardless of the field's type. For example:
```ts
function afterIdLimitPaginatedFieldPolicy<T>() {
  return {
    merge(existing: T[], incoming: T[], { args, readField }): T[] {
      ...
    },
    read(existing: T[], { args, readField }): T[] {
      ...
    },
  };
}

const cache = new InMemoryCache({
  typePolicies: {
    Agenda: {
      fields: {
        tasks: afterIdLimitPaginatedFieldPolicy<Reference>(),
      },
    },
  },
});
```
## Specifying key arguments
If a field accepts arguments, you can specify an array of `keyArgs` in the field's `FieldPolicy`. This array indicates which arguments are key arguments, meaning they're used to calculate the field's value. Specifying this array can help reduce the amount of duplicate data in your cache.
### Example
Let's say your schema's `Query` type includes a `monthForNumber` field. This field returns the details of a particular month, given a provided `number` argument (January for `1`, and so on). The `number` argument is a key argument for this field, because it's used when calculating the field's result:
```ts
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        monthForNumber: {
          keyArgs: ["number"],
        },
      },
    },
  },
});
```
An example of a non-key argument is an access token, which is used to authorize a query but not to calculate its result. If `monthForNumber` also accepts an `accessToken` argument, the value of that argument does not affect which month's details are returned.
By default, the cache stores a separate value for every unique combination of argument values you provide when querying a particular field. When you specify a field's key arguments, the cache understands that any non-key arguments don't affect that field's value. Consequently, if you execute two different queries with the `monthForNumber` field, passing the same `number` argument but different `accessToken` arguments, the second query's response overwrites the first, because both invocations use the same value for every key argument.
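For example, both of the following queries read and write the same cache entry, because they agree on the field's only key argument (the `name` subfield is illustrative):

```graphql
query MarchWithTokenA {
  monthForNumber(number: 3, accessToken: "token-a") {
    name
  }
}

# Overwrites the cached result of the query above, rather than creating
# a separate entry, because accessToken is not a key argument.
query MarchWithTokenB {
  monthForNumber(number: 3, accessToken: "token-b") {
    name
  }
}
```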
If you need more control over the behavior of `keyArgs`, you can pass a function instead of an array. This `keyArgs` function receives the arguments object as its first parameter, and a `context` object providing other relevant details as its second parameter. See `KeyArgsFunction` in the types below for further information.
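As a minimal sketch, the function form below behaves the same as `keyArgs: ["number"]`:

```ts
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        monthForNumber: {
          // Returning a key specifier here is equivalent to keyArgs: ["number"].
          // The context parameter exposes typename, fieldName, the FieldNode,
          // and query variables, so you could instead compute and return a
          // custom string key from args and context.
          keyArgs(args, context) {
            return ["number"];
          },
        },
      },
    },
  },
});
```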
## `FieldPolicy` API reference
Here are the definitions for the `FieldPolicy` type and its related types:
```ts
// These generic type parameters will be inferred from the provided policy in
// most cases, though you can use this type to constrain them more precisely.
type FieldPolicy<
  TExisting,
  TIncoming = TExisting,
  TReadResult = TExisting,
> = {
  keyArgs?: KeySpecifier | KeyArgsFunction | false;
  read?: FieldReadFunction<TExisting, TReadResult>;
  merge?: FieldMergeFunction<TExisting, TIncoming> | boolean;
};

type KeySpecifier = (string | KeySpecifier)[];

type KeyArgsFunction = (
  args: Record<string, any> | null,
  context: {
    typename: string;
    fieldName: string;
    field: FieldNode | null;
    variables?: Record<string, any>;
  },
) => string | KeySpecifier | null | void;

type FieldReadFunction<TExisting, TReadResult = TExisting> = (
  existing: Readonly<TExisting> | undefined,
  options: FieldFunctionOptions,
) => TReadResult;

type FieldMergeFunction<TExisting, TIncoming = TExisting> = (
  existing: Readonly<TExisting> | undefined,
  incoming: Readonly<TIncoming>,
  options: FieldFunctionOptions,
) => TExisting;

// These options are common to both read and merge functions:
interface FieldFunctionOptions {
  cache: InMemoryCache;

  // The final argument values passed to the field, after applying variables.
  // If no arguments were provided, this property will be null.
  args: Record<string, any> | null;

  // The name of the field, equal to options.field.name.value when
  // options.field is available. Useful if you reuse the same function for
  // multiple fields, and you need to know which field you're currently
  // processing. Always a string, even when options.field is null.
  fieldName: string;

  // The FieldNode object used to read this field. Useful if you need to
  // know about other attributes of the field, such as its directives. This
  // option will be null when a string was passed to options.readField.
  field: FieldNode | null;

  // The variables that were provided when reading the query that contained
  // this field. Possibly undefined, if no variables were provided.
  variables?: Record<string, any>;

  // Easily detect { __ref: string } reference objects.
  isReference(obj: any): obj is Reference;

  // Returns a Reference object if obj can be identified, which requires,
  // at minimum, a __typename and any necessary key fields. If true is
  // passed for the optional mergeIntoStore argument, the object's fields
  // will also be persisted into the cache, which can be useful to ensure
  // the Reference actually refers to data stored in the cache. If you
  // pass an ID string, toReference will make a Reference out of it. If
  // you pass a Reference, toReference will return it as-is.
  toReference(
    objOrIdOrRef: StoreObject | string | Reference,
    mergeIntoStore?: boolean,
  ): Reference | undefined;

  // Helper function for reading other fields within the current object.
  // If a foreign object or reference is provided, the field will be read
  // from that object instead of the current object, so this function can
  // be used (together with isReference) to examine the cache outside the
  // current object. If a FieldNode is passed instead of a string, and
  // that FieldNode has arguments, the same options.variables will be used
  // to compute the argument values. Note that this function will invoke
  // custom read functions for other fields, if defined. Always returns
  // immutable data (enforced with Object.freeze in development).
  readField<T = StoreValue>(
    nameOrField: string | FieldNode,
    foreignObjOrRef?: StoreObject | Reference,
  ): T;

  // Returns true for non-normalized StoreObjects and non-dangling
  // References, indicating that readField(name, objOrRef) has a chance of
  // working. Useful for filtering out dangling references from lists.
  canRead(value: StoreValue): boolean;

  // A handy place to put field-specific data that you want to survive
  // across multiple read function calls. Useful for field-level caching,
  // if your read function does any expensive work.
  storage: Record<string, any>;

  // Instead of just merging objects with { ...existing, ...incoming }, this
  // helper function can be used to merge objects in a way that respects any
  // custom merge functions defined for their fields.
  mergeObjects<T extends StoreObject | Reference>(
    existing: T,
    incoming: T,
  ): T | undefined;
}
```