Batching operations


Apollo Kotlin supports batching multiple operations in a single HTTP request to reduce the number of network round trips. Whenever you execute an operation with batching enabled, your client waits a short time interval to collect any other batchable operations that are executed during that interval. When the interval completes, your client sends all of the collected operations in a single HTTP request.

Note that due to the waiting interval, batching adds some latency to operations. Operations that require as little latency as possible can opt out of batching.

Enabling batching

You enable batching when you initialize your ApolloClient instance, like so:

val apolloClient = ApolloClient.Builder()
    .serverUrl("https://...")
    .httpBatching()
    .build()

By default, the batching engine waits 10ms to collect queries before sending them, and a single batch contains a maximum of 10 queries. You can configure these defaults by passing additional configuration options to the httpBatching method:

.httpBatching(
    batchIntervalMillis = 50, // Wait 50ms
    maxBatchSize = 20 // Max 20 queries
)

Opting out/in

By default, if you enable batching, it's enabled for all operations. An individual operation can opt out of batching by passing false to the canBeBatched method:

apolloClient.query(MyQuery()).canBeBatched(false).execute()

This is helpful if there's a particular operation that you always want to execute with as little latency as possible.

You can also require individual operations to opt in to batching. To do so, call canBeBatched(false) while initializing your ApolloClient instance:

val apolloClient = ApolloClient.Builder()
    .serverUrl("https://...")
    .httpBatching()
    .canBeBatched(false)
    .build()

Now, batching is disabled by default, and individual operations can opt in with canBeBatched(true), as shown below.
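
For example (assuming MyQuery is one of your generated query classes), an individual query opts in like this:

apolloClient.query(MyQuery()).canBeBatched(true).execute()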
