10. Observability with Apollo Studio


The Airlock team has successfully implemented its newest feature (Project Galactic Coordinates), which has evolved and expanded the capabilities of the Airlock graph. They've also maintained the graph by adding new fields and removing unused ones. Let's explore how we can continue to keep track of the graph's health and performance.

In this lesson, we will:

  • Learn how to understand operation metrics and reporting from Apollo Studio
  • Learn how to interpret metrics for field executions and referencing operations

Metrics in Apollo Studio

Apollo Studio provides us with observability tools to track the health and performance of our graph. These tools help surface patterns in how our graph gets used, which helps us identify ways to continue improving our graph.

We can observe the performance of our operations through the Operations page, which gives us an overview of request rates, service time, and error percentages. We can also dig into each field in our graph and its usage through the Fields page.

Let's explore the details of each page.

Operations metrics

We can navigate to the Operations page of our graph using the left-hand sidebar in Apollo Studio.

Screenshot of Apollo Studio Operations page open

We recommend that clients clearly name each GraphQL operation they send to the graph, because these are the operation names you'll see in your Studio metrics.
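For example, a named operation might look like the sketch below. (The field and argument names are illustrative; they may not match Airlock's actual schema.)

```graphql
# A named query: "GetListing" is the operation name that
# shows up in Studio's metrics.
query GetListing($listingId: ID!) {
  listing(id: $listingId) {
    title
    overallRating
  }
}
```

An anonymous query (one that omits the `GetListing` name) still works, but it's much harder to identify in your metrics.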

It looks like one of Airlock's most frequently requested operations is GetListing. When we click on this operation, we can see more specific details about its usage, as well as its signature (the shape of the query).

Screenshot of Apollo Studio Operations page, drilling into the GetListing operation and its signature

Field usage

Apollo Studio also gives us insight into the usage metrics of our graph's fields. We can navigate to this page by clicking the Fields icon in the left-hand sidebar.

We can use the dropdowns on the top-right side of the page to filter metrics based on a custom time range or on a subset of clients.

Field usage metrics fall under two categories: field executions and referencing operations.

  • Field executions tally how many times your servers have executed the resolver for a specific field.
  • Referencing operations tally how many operations sent by clients include the field.

Let's take a closer look at the Listing.overallRating field as an example. Click the icon in the rightmost column to access that field's landing page.

Screenshot of Apollo Studio Fields page open

In the Usage section, we can see exactly which operations include the Listing.overallRating field:

Screenshot of the `Listing.overallRating` field in Apollo Studio

Note: The values for field executions and referencing operations can differ significantly. You can find out more about why this might be the case for a specific field by reading the Apollo docs on field usage metrics or this blog post on usage data improvements in Apollo Server 3.6.

We can use these metrics to monitor the health and usage of our graph's types and fields. They help us answer questions like:

  • Are there fields that don't get any use at all? Are there long-deprecated fields that are no longer used but still live in the schema? Maybe it's time to remove them to keep our schema clean and useful.
  • Are we planning on making a significant change to a field? Which clients and operations would be affected? We'll need to make sure they're looped into any changes we make.
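For the first question, GraphQL's built-in @deprecated directive is a useful intermediate step: mark the field, watch its referencing operations drop to zero in Studio, then remove it from the schema. A hypothetical sketch in SDL (the legacy `rating` field here is invented for illustration):

```graphql
type Listing {
  title: String!
  overallRating: Float
  # Hypothetical legacy field on its way out of the schema.
  # Clients see this reason in tooling and introspection.
  rating: Float @deprecated(reason: "Use overallRating instead.")
}
```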

As our graph evolves, these are good questions to keep in mind, and Apollo Studio will always be there to help answer them!


If you needed to know how many times a field has been resolved, which metric would you use?
If you needed to know how many operations have included a field, which metric would you use?

Key takeaways

  • Field usage metrics have two categories: field executions and referencing operations.
  • Field executions indicate the number of times servers have executed the resolver for the field.
  • Referencing operations indicate how many operations sent by clients have included the field.

Congratulations 🎉

Well done, you've reached the end! In this course, we learned all about how to work with an existing graph in production. We saw how to incorporate schema checks and graph variants into a CI/CD workflow so we can ship new features with confidence. We also explored the different types of errors we might encounter from build checks and operation checks. Finally, we looked at how to use Apollo Studio to discover metrics on our client operations and field usage.

If you're looking to apply what you've learned here, we recommend tackling our upcoming Voyage III Lab, which is all about deploying the FlyBy app to production.

See you in the next series!