
Core pagination API

Fetching and caching paginated results


Regardless of which pagination strategy your GraphQL server uses for a particular list field, your app needs to do the following to query that field effectively:

  • Call the fetchMore function to fetch the next page of results when needed
  • Merge individual pages of results into a single list in the cache

This article describes these core requirements for paginated fields.

The fetchMore function

Pagination always involves sending followup queries to your GraphQL server to obtain additional pages of results. In Apollo Client, the recommended way to send these followup queries is with the fetchMore function. This function is a member of the ObservableQuery object returned by client.watchQuery, and it's also provided by the useQuery hook:

FeedWithData.jsx
const FEED_QUERY = gql`
  query Feed($offset: Int, $limit: Int) {
    feed(offset: $offset, limit: $limit) {
      id
      # ...
    }
  }
`;

const FeedWithData = () => {
  const { loading, data, fetchMore } = useQuery(FEED_QUERY, {
    variables: {
      offset: 0,
      limit: 10,
    },
  });
  // ...continues below...
};

You usually call fetchMore in response to a user action, such as clicking a button or scrolling to the current "bottom" of an infinite-scroll feed.
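
For the infinite-scroll case, one common trigger is a sentinel element observed with an IntersectionObserver. Here's a minimal sketch; the LoadMoreSentinel component and its onLoadMore prop are hypothetical, not part of Apollo Client:

import { useRef, useEffect } from 'react';

function LoadMoreSentinel({ onLoadMore }) {
  const ref = useRef(null);
  useEffect(() => {
    // Call onLoadMore whenever the sentinel scrolls into view.
    const observer = new IntersectionObserver(([entry]) => {
      if (entry.isIntersecting) onLoadMore();
    });
    if (ref.current) observer.observe(ref.current);
    return () => observer.disconnect();
  }, [onLoadMore]);
  // Render this element at the current "bottom" of the feed.
  return <div ref={ref} />;
}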

By default, fetchMore executes a query with the exact same shape and variables as your original query. You can pass new values for the query's variables (such as providing a new offset) like so:

FeedWithData.jsx
const FeedWithData = () => {
  // ...continuing from above...

  // If you want your component to rerender with loading:true whenever
  // fetchMore is called, add `notifyOnNetworkStatusChange: true` to the
  // options you pass to useQuery.
  if (loading) return 'Loading...';

  return (
    <Feed
      entries={data.feed || []}
      onLoadMore={() => fetchMore({
        variables: {
          offset: data.feed.length,
        },
      })}
    />
  );
};

Here, we set the offset to feed.length to fetch items after the last item in our cached list. The variables we provide here are merged with the variables provided for the original query, which means that variables omitted here (such as limit) retain their original value (10) in the followup query.

In addition to variables, you can optionally provide an entirely different shape of query to execute. This can be useful when fetchMore needs to fetch only a single paginated field, but the original query contained unrelated fields.
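
For example, a followup query might include only the paginated field, omitting everything else from the original query. A minimal sketch (the FeedOnly query document here is hypothetical):

fetchMore({
  // A trimmed-down query that fetches only the paginated field
  query: gql`
    query FeedOnly($offset: Int, $limit: Int) {
      feed(offset: $offset, limit: $limit) {
        id
        # ...
      }
    }
  `,
  variables: { offset: data.feed.length },
});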

Additional examples of using fetchMore are provided in the detailed documentation for offset-based pagination and cursor-based pagination.

Our fetchMore function is ready, but we're not finished! The cache doesn't know yet that it should merge our followup query's result with the original query's result. Instead, it will store the two results as two completely separate lists. To resolve this, let's move on to Merging paginated results.

Merging paginated results

The examples in this section use offset-based pagination, but this article applies to all pagination strategies.

As mentioned above, a fetchMore followup query doesn't automatically merge its result with the original query's cached result. To achieve this behavior, we need to define a field policy for our paginated field.

Why do I need a field policy?

Let's say we have a field in our GraphQL schema that takes an argument:

type Query {
  user(id: ID!): User
}

Now, let's say we execute the following query two times and provide different values for the $id variable each time:

query GetUser($id: ID!) {
  user(id: $id) {
    id
    name
  }
}

Our two queries return two entirely different User objects. Helpfully, the cache automatically stores these two objects separately, because it sees that different values were provided for at least one argument (id). Otherwise, the cache might overwrite the first User object with the second User object, and we want to cache both!
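
Conceptually, this works because the cache keys each stored field value by the field's name plus its arguments. A simplified sketch of the normalized cache contents after both queries complete (the name values are placeholders):

{
  ROOT_QUERY: {
    'user({"id":"1"})': { __ref: 'User:1' },
    'user({"id":"2"})': { __ref: 'User:2' },
  },
  'User:1': { __typename: 'User', id: '1', name: 'Alice' },
  'User:2': { __typename: 'User', id: '2', name: 'Bob' },
}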

Now, let's say we execute this query two times, with different values for the $offset variable:

query Feed($offset: Int, $limit: Int) {
  feed(offset: $offset, limit: $limit) {
    id
    # ...
  }
}

In this case, we're querying a paginated list field twice to obtain two different pages of results, and we want those two pages to be merged. But the cache doesn't know that! It sees no difference between this scenario and the User scenario above, so it stores the results as two completely separate lists.

With field policies, we can modify the cache's behavior for individual fields that require it. For example, we can tell the cache not to store separate results for the feed field based on the values of offset and limit. Let's look at how.

Defining a field policy

A field policy specifies how a particular field in your InMemoryCache is read and written. You can define a field policy to merge the results of paginated queries into a single list.

Example

Here's the server-side schema for our message feed application that uses offset-based pagination:

type Query {
  feed(offset: Int, limit: Int): [FeedItem!]
}

type FeedItem {
  id: String!
  message: String!
}

In our client, we want to define a field policy for Query.feed so that all returned pages of the list are merged into a single list in our cache.

We define our field policy within the typePolicies option we provide to the InMemoryCache constructor:

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          // Don't cache separate results based on
          // any of this field's arguments.
          keyArgs: false,

          // Concatenate the incoming list items with
          // the existing list items.
          merge(existing = [], incoming) {
            return [...existing, ...incoming];
          },
        },
      },
    },
  },
});

This field policy specifies the field's keyArgs, along with a merge function. Both of these configurations are necessary for handling pagination:

  • keyArgs specifies which of the field's arguments cause the cache to store a separate value for each unique combination of those arguments.
    • In our case, the cache shouldn't store a separate result based on any argument value (offset or limit). So, we disable this behavior entirely by passing false. An empty array (keyArgs: []) also works, but keyArgs: false is more expressive, and it results in a cleaner field key within the cache (feed in this case).
    • If a particular argument's value could cause items from an entirely different list to be returned in the field, that argument should be included in keyArgs (see the variant example at the end of this section).
    • For more information, see Specifying key arguments and The keyArgs API.
  • A merge function tells the cache how to combine incoming data with existing cached data for a particular field. Without this function, incoming field values overwrite existing field values by default.

With this field policy in place, the cache automatically merges the results of all queries that use the following structure, regardless of argument values:

// Client-side query definition
const FEED_QUERY = gql`
  query Feed($offset: Int, $limit: Int) {
    feed(offset: $offset, limit: $limit) {
      id
      message
    }
  }
`;
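
As a variant of the example above, suppose the feed field also accepted a hypothetical type argument (such as "PERSONAL" or "GLOBAL") whose value selects an entirely different list. In that case, type belongs in keyArgs, so the cache maintains one merged list per type value:

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          // Cache a separate list for each `type` value,
          // while still ignoring offset and limit.
          keyArgs: ["type"],
          merge(existing = [], incoming) {
            return [...existing, ...incoming];
          },
        },
      },
    },
  },
});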

Improving the merge function

In the example above, our merge function is a single line:

merge(existing = [], incoming) {
  return [...existing, ...incoming];
}

This function makes risky assumptions about the order in which the client requests pages, because it ignores the values of offset and limit. A more robust merge function can use options.args to decide where to put incoming data relative to existing data, like so:

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          keyArgs: [],
          merge(existing, incoming, { args: { offset = 0 }}) {
            // Slicing is necessary because the existing data is
            // immutable, and frozen in development.
            const merged = existing ? existing.slice(0) : [];
            for (let i = 0; i < incoming.length; ++i) {
              merged[offset + i] = incoming[i];
            }
            return merged;
          },
        },
      },
    },
  },
});

This logic handles sequential page writes the same way the single-line strategy does, but it can also tolerate repeated, overlapping, or out-of-order writes, without duplicating any list items.
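
To see why, here's a quick trace of the calls the cache makes to this merge function, using single-letter placeholder items:

// Sequential pages append, as before:
// merge(undefined, ["A", "B"], { args: { offset: 0 } })   => ["A", "B"]
// merge(["A", "B"], ["C", "D"], { args: { offset: 2 } })  => ["A", "B", "C", "D"]
//
// A repeated or overlapping page overwrites the same slots
// instead of duplicating items:
// merge(["A", "B", "C", "D"], ["B2", "C2"], { args: { offset: 1 } })
//                                                         => ["A", "B2", "C2", "D"]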

read functions for paginated fields

As shown above, a merge function helps you combine paginated results from your GraphQL server into a single list in your client cache. But what if you also want to configure how that locally cached list is read? For that, you can define a read function.

You define a read function for a field within its field policy, alongside the merge function and keyArgs. If you define a read function for a field, the cache calls that function whenever you query the field, passing the field's existing cached value (if any) as the first argument. In the query response, the field is populated with the read function's return value, instead of the existing cached value.

If a field policy includes both a merge function and a read function, the default value of keyArgs becomes false (i.e., no arguments are key arguments). If either function isn't defined, all of the field's arguments are considered key arguments by default. In either case, you can define keyArgs yourself to override the default behavior.

A read function for a paginated field typically uses one of the following approaches:

  • Re-pagination, in which the cached list is split back into pages based on field arguments
  • No pagination, in which the cached list is always returned in full

Although the "right" approach varies from field to field, a non-paginated read function often works best for infinitely scrolling feeds, because it gives your code full control over which elements to display at a given time, without requiring any additional cache reads.

Paginated read functions

The read function for a list field can perform client-side re-pagination for that list. It can even transform a page before returning it, such as by sorting or filtering its elements.

This capability goes beyond returning the same pages that you fetched from your server, because a read function for offset/limit pagination could read from any available offset, with any desired limit:

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          read(existing, { args: { offset, limit }}) {
            // A read function should always return undefined if existing is
            // undefined. Returning undefined signals that the field is
            // missing from the cache, which instructs Apollo Client to
            // fetch its value from your GraphQL server.
            return existing && existing.slice(offset, offset + limit);
          },

          // The keyArgs list and merge function are the same as above.
          keyArgs: [],
          merge(existing, incoming, { args: { offset = 0 }}) {
            const merged = existing ? existing.slice(0) : [];
            for (let i = 0; i < incoming.length; ++i) {
              merged[offset + i] = incoming[i];
            }
            return merged;
          },
        },
      },
    },
  },
});

Depending on the assumptions you feel comfortable making, you might want to make this code more defensive. For example, you can provide default values for offset and limit, in case someone fetches Query.feed without providing arguments:

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          read(existing, {
            args: {
              // Default to returning the entire cached list,
              // if offset and limit are not provided.
              offset = 0,
              limit = existing?.length,
            } = {},
          }) {
            return existing && existing.slice(offset, offset + limit);
          },
          // ... keyArgs, merge ...
        },
      },
    },
  },
});

This style of read function takes responsibility for re-paginating your data based on field arguments, essentially inverting the behavior of your merge function. This way, your application can query different pages using different arguments.
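
For example, once a few pages have been merged, you can read any page directly from the cache, and the read function assembles it locally without a network request. A sketch, assuming the FEED_QUERY document from earlier and that items 10 through 19 are already cached:

const page2 = cache.readQuery({
  query: FEED_QUERY,
  variables: { offset: 10, limit: 10 },
});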

Non-paginated read functions

The read function for a paginated field can choose to ignore arguments like offset and limit, and always return the entire list as it exists in the cache. In this case, your application code takes responsibility for slicing the list into pages depending on your needs.
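
Under this approach, a component might render the full cached list and track how many items to display itself. A rough sketch (the visibleCount state is an assumption, not an Apollo Client API):

function Feed() {
  const { data, fetchMore } = useQuery(FEED_QUERY, {
    variables: { offset: 0, limit: 10 },
  });

  // The cache returns the entire merged list; the component
  // decides how much of it to render.
  const [visibleCount, setVisibleCount] = useState(10);
  const visibleItems = (data?.feed ?? []).slice(0, visibleCount);

  // ...render visibleItems, calling fetchMore and setVisibleCount
  // as the user scrolls...
}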

If you adopt this approach, you might not need to define a read function at all, because the cached list can be returned without any processing. That's why the offsetLimitPagination helper function is implemented without a read function.
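
For example, offsetLimitPagination (from @apollo/client/utilities) generates a field policy with keyArgs: false and an offset-based merge function like the one shown earlier, and no read function:

import { offsetLimitPagination } from "@apollo/client/utilities";

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        // Equivalent to { keyArgs: false, merge: /* offset-based merge */ }
        feed: offsetLimitPagination(),
      },
    },
  },
});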

Regardless of how you configure keyArgs, your read (and merge) functions can always examine any arguments passed to the field using the options.args object. See The keyArgs API for a deeper discussion of how to reason about dividing argument-handling responsibility between keyArgs and your read or merge functions.
