
Connecting OpenTelemetry traces to Prometheus

Convert operation traces into aggregated metrics for a broader view of your graph's performance


Self-hosting the Apollo Router is limited to GraphOS Enterprise plans. Other plan types use managed cloud routing with GraphOS. Check out the pricing page to learn more.

💡 TIP

If you're an enterprise customer looking for more material on this topic, try the Enterprise best practices: Supergraph observability course on Odyssey.

Not an enterprise customer? Learn about GraphOS for Enterprise.

OpenTelemetry traces provide insight into performance issues occurring at various execution points in your supergraph. However, individual traces don't provide a view of your graph's broader performance.

Helpfully, you can convert your traces into aggregated metrics without manual instrumentation. To accomplish this, we'll use the spanmetricsprocessor in an OpenTelemetry Collector instance to automatically generate metrics from our existing trace spans.

OpenTelemetry Collector configuration

OpenTelemetry provides two different repositories for their OpenTelemetry Collector:

  • opentelemetry-collector (the core library)
  • opentelemetry-collector-contrib (the contributor library)

These repositories are similar in scope, but the contributor library includes extended features that aren't suitable for the core library. To derive performance metrics from our existing spans, we'll use the contributor library to take advantage of the spanmetricsprocessor via the associated Docker image.

💡 TIP

We also recommend checking out the Collector Builder to build binaries that are tailored to your environment instead of relying on prebuilt images.

When your OpenTelemetry Collector is ready to run, you can start configuring it with this barebones example:

receivers:
  otlp:
    protocols:
      grpc:
      http:
        cors:
          allowed_origins:
            - http://*
            - https://*
  # Dummy receiver so the metrics pipeline validates; it never receives real data.
  otlp/spanmetrics:
    protocols:
      grpc:
        endpoint: 0.0.0.0:12346
exporters:
  prometheus:
    endpoint: '0.0.0.0:9464'
processors:
  batch:
  # Generates metrics from the spans flowing through the traces pipeline and
  # hands them to the exporter named in metrics_exporter.
  spanmetrics:
    metrics_exporter: prometheus
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [spanmetrics, batch]
      # Add at least one trace exporter here (for example, your tracing backend)
      # so the traces pipeline passes the Collector's configuration validation.
    metrics:
      receivers: [otlp/spanmetrics]
      exporters: [prometheus]
      processors: [batch]
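
If you run the Collector from the prebuilt contributor image, a minimal Docker Compose service for the configuration above might look like the following sketch. The service name and the otel-collector-config.yaml file name are assumptions; the image and mounted config path follow the otel/opentelemetry-collector-contrib image's conventions.

# docker-compose.yaml (sketch): runs the contrib Collector with the config shown above.
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config=/etc/otelcol-contrib/config.yaml"]
    volumes:
      # Mount the configuration file from this guide (file name is an assumption).
      - ./otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml
    ports:
      - "4317:4317" # OTLP gRPC receiver
      - "4318:4318" # OTLP HTTP receiver
      - "9464:9464" # Prometheus scrape endpoint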

Apollo Server setup

Add the OTLP trace exporter (the @opentelemetry/exporter-trace-otlp-http Node package) by following the instructions in the documentation for Apollo Server and OpenTelemetry.

Apollo Router setup

To send traces from the Apollo Router to OpenTelemetry Collector, see this article.
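
As a rough sketch, recent Apollo Router 1.x releases can export traces to the Collector's OTLP gRPC receiver with a telemetry configuration along these lines. The exact keys vary between router versions, so follow the linked article for your release; the otel-collector hostname is an assumption about your deployment:

# router.yaml (sketch): key names vary across router versions; see the linked article.
telemetry:
  exporters:
    tracing:
      otlp:
        enabled: true
        # OTLP gRPC endpoint of the Collector's otlp receiver (hostname is an assumption).
        endpoint: http://otel-collector:4317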

Prometheus setup

Lastly, we need to add the OpenTelemetry Collector as a scrape target in Prometheus. The Collector exposes metrics on port 9464, as set in the prometheus exporter configuration above.
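
As a minimal example, the corresponding scrape job in prometheus.yml might look like this (the otel-collector hostname is an assumption about how the Collector is reachable from Prometheus):

# prometheus.yml: scrape the Collector's prometheus exporter endpoint.
scrape_configs:
  - job_name: otel-collector
    scrape_interval: 15s
    static_configs:
      # Adjust the hostname to match how Prometheus reaches the Collector.
      - targets: ['otel-collector:9464']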

That's it! You should now have access to span metrics in Prometheus, generated from the same traces you were already collecting.

Example queries

Here are a few sample queries to help explore the data structure being reported:

  • P95 by service: histogram_quantile(.95, sum(rate(latency_bucket[5m])) by (le, service_name))
  • Average latency by service and operation (for example router / graphql.validate): sum by (operation, service_name)(rate(latency_sum{}[1m])) / sum by (operation, service_name)(rate(latency_count{}[1m]))
  • RPM by service: sum(rate(calls_total{operation="HTTP POST"}[1m])) by (service_name)

Full demo

To see this in action, check out the Supergraph Demo repository using the OpenTelemetry Collector-specific Docker Compose configuration.
