Connect to Datadog via OpenTelemetry Collector
Send router telemetry to Datadog through the OpenTelemetry Collector
This guide walks through configuring the Apollo Router to send telemetry data to Datadog via the OpenTelemetry Collector.
For general metrics configuration, refer to Router Metrics Configuration. For general tracing configuration, refer to Router Tracing Configuration.
For router instrumentation with Datadog-specific attributes, see the Router Instrumentation guide.
Connection methods
Datadog supports multiple methods for ingesting OpenTelemetry data:
Datadog OpenTelemetry collector (DDOT) - Datadog's distribution of the OpenTelemetry Collector
OpenTelemetry collector - Vendor-neutral telemetry pipeline with advanced processing (this guide)
Datadog agent - Direct connection with native Datadog features (guide available)
Agentless - Direct OTLP ingestion to Datadog
For a comprehensive overview of all OpenTelemetry setup options with Datadog, see Datadog's OpenTelemetry setup documentation.
Prerequisites
Apollo Router installed and running - see deployment options
OpenTelemetry collector - see installation options
Datadog account with an API key - see how to generate an API key
OpenTelemetry integration installed in Datadog - install from the integrations page
Alternative: Apollo Runtime container
For a simpler deployment, consider the Apollo Runtime container, which packages the router and the OpenTelemetry Collector together. See the Datadog example configuration for a ready-to-use setup.
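If you go that route, a minimal Docker Compose sketch might look like the following. The image reference and the Datadog wiring here are placeholders rather than the documented settings; use the linked example configuration for the exact values:

services:
  apollo-runtime:
    # Placeholder image reference; see the Datadog example configuration for the published image
    image: apollo-runtime:example
    environment:
      - APOLLO_KEY=${APOLLO_KEY}             # GraphOS API key used by the router
      - APOLLO_GRAPH_REF=${APOLLO_GRAPH_REF} # Graph ref, e.g. my-graph@current
      - DD_API_KEY=${DD_API_KEY}             # Assumed to be read by the bundled collector
    ports:
      - "4000:4000"                          # Router's default listen port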
Configure the router
Configure your router to send telemetry to the OpenTelemetry Collector using OTLP.
Basic configuration
Add telemetry exporters to your existing router.yaml. This configuration follows Datadog's unified service tagging approach:
telemetry:
  exporters:
    # Send metrics to the collector
    metrics:
      otlp:
        enabled: true
        endpoint: http://localhost:4317
        protocol: grpc
        temporality: delta # Required for Datadog
      common:
        resource:
          service.name: "apollo-router"
          deployment.environment: "dev"

    # Send traces to the collector
    tracing:
      otlp:
        enabled: true
        endpoint: http://localhost:4317
        protocol: grpc
      common:
        sampler: 1.0 # Send all traces to collector for intelligent sampling
        resource:
          service.name: "apollo-router"
          deployment.environment: "dev"
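If the collector runs on another host or in a separate container, you can avoid hardcoding the endpoint by using the router's environment variable expansion. This is a sketch; OTEL_COLLECTOR_ENDPOINT is an arbitrary variable name, set to something like http://otel-collector:4317:

telemetry:
  exporters:
    metrics:
      otlp:
        enabled: true
        # OTEL_COLLECTOR_ENDPOINT is an assumed variable name, not a router built-in
        endpoint: "${env.OTEL_COLLECTOR_ENDPOINT}"
    tracing:
      otlp:
        enabled: true
        endpoint: "${env.OTEL_COLLECTOR_ENDPOINT}"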
Configure the OpenTelemetry collector
The collector uses a pipeline architecture with four key components:
Receivers: Collect telemetry from sources (in this case, the router via OTLP)
Processors: Transform and modify data (batching, sampling, filtering)
Connectors: Generate new telemetry from existing data (the Datadog connector creates trace metrics)
Exporters: Send data to backends (Datadog in this configuration)
For detailed configuration options, see configuration basics and Datadog's collector exporter documentation.
Basic collector configuration
Create otel-collector.yaml with this minimal configuration, and set your Datadog API key as an environment variable (export DD_API_KEY=your_datadog_api_key_here):
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch:
    # Datadog APM intake limit is 3.2MB
    send_batch_max_size: 1000
    send_batch_size: 100
    timeout: 10s

connectors:
  # Required for APM trace metrics
  datadog/connector:

exporters:
  datadog/exporter:
    api:
      key: ${env:DD_API_KEY}
      # site: datadoghq.eu # Uncomment for EU site

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog/connector, datadog/exporter]

    metrics:
      receivers: [otlp, datadog/connector]
      processors: [batch]
      exporters: [datadog/exporter]
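One way to run the collector with this file is Docker Compose using the contrib distribution, which is the distribution that includes the Datadog exporter and connector. This is a sketch that assumes the contrib image's default config path:

services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    volumes:
      # Mount your config over the image's default config path (assumed: /etc/otelcol-contrib/config.yaml)
      - ./otel-collector.yaml:/etc/otelcol-contrib/config.yaml
    environment:
      - DD_API_KEY=${DD_API_KEY}
    ports:
      - "4317:4317" # OTLP gRPC receiver

If you run the collector binary directly instead, pass the same file with the --config flag.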
Verify the connection
With both the router and OpenTelemetry collector running, verify that telemetry data is flowing to Datadog:
Send test queries to your router to generate telemetry data
View your data in Datadog:
Service: Go to APM > Services and look for the apollo-router service
Metrics: Go to Metrics > Explorer and search for http.server.request.duration filtered by service:apollo-router
Enhance your configuration
After you verify the basic setup, you can enhance your configuration with additional features.
Router enhancements
Add router instrumentation for Datadog-optimized span attributes, error tracking, and better APM organization.
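As a taste of what that guide covers, here is a minimal sketch of attaching a custom attribute to router spans; the attribute name and header are examples only, and the guide describes the Datadog-specific attributes to use:

telemetry:
  instrumentation:
    spans:
      router:
        attributes:
          # Example attribute: copy a client-identifying request header onto every router span
          graphql.client.name:
            request_header: apollographql-client-name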
Collector enhancements
Enhance the collector pipeline with advanced processing capabilities. This configuration is designed for non-containerized environments; see Datadog's OpenTelemetry integrations for Kubernetes and container examples:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

  # Collect host metrics
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
        metrics:
          system.cpu.utilization:
            enabled: true
      memory:
      network:
      disk:
      filesystem:
        metrics:
          system.filesystem.utilization:
            enabled: true

  # Collector self-monitoring
  prometheus/internal:
    config:
      scrape_configs:
        - job_name: 'otelcol'
          scrape_interval: 10s
          static_configs:
            - targets: ['0.0.0.0:8888']

processors:
  batch:
    send_batch_max_size: 1000
    send_batch_size: 100
    timeout: 10s

  # Detect resource attributes
  resourcedetection:
    detectors: [env, system]
    system:
      hostname_sources: [os]

  tail_sampling:
    decision_wait: 30s
    num_traces: 50000
    policies:
      # Always sample errors
      - name: errors
        type: status_code
        status_code:
          status_codes: [ERROR]

      # Sample slow requests
      - name: slow_requests
        type: latency
        latency:
          threshold_ms: 1000

      # Random sampling for everything else
      - name: random_sampling
        type: probabilistic
        probabilistic:
          sampling_percentage: 10

connectors:
  datadog/connector:

exporters:
  datadog/exporter:
    api:
      key: ${env:DD_API_KEY}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [resourcedetection, batch]
      exporters: [datadog/connector]

    traces/sampling:
      receivers: [datadog/connector]
      processors: [tail_sampling, batch]
      exporters: [datadog/exporter]

    metrics:
      receivers: [hostmetrics, otlp, prometheus/internal, datadog/connector]
      processors: [resourcedetection, batch]
      exporters: [datadog/exporter]
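Note the two-stage trace flow: the first traces pipeline routes every trace through the Datadog connector, so APM trace metrics (request counts, error rates, latency distributions) are computed from 100% of traffic, while the traces/sampling pipeline applies tail sampling only to what is exported to Datadog. You ingest fewer spans without skewing those metrics.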
Host metrics collection
Add host metrics to monitor your infrastructure alongside router telemetry. See Datadog's host metrics guide for detailed configuration including CPU, memory, disk, network, and filesystem metrics.
For additional examples across different environments (Kubernetes, Docker, host metrics, etc.), see the Datadog exporter example configurations.
Container and orchestration metrics
Extend monitoring to containerized environments with Docker and Kubernetes metrics. See Datadog's Docker metrics guide for container resource usage, networking, and orchestration data alongside your router telemetry.
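As a sketch, the contrib distribution's docker_stats receiver can scrape container metrics from a Docker host; adjust the socket path to your environment and add the receiver to your metrics pipeline:

receivers:
  docker_stats:
    # Assumes the default Docker socket location on the host
    endpoint: unix:///var/run/docker.sock
    collection_interval: 10s

service:
  pipelines:
    metrics:
      receivers: [docker_stats, otlp]
      processors: [batch]
      exporters: [datadog/exporter]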
Intelligent sampling
Add tail sampling to keep all errors and slow requests while sampling normal traffic:
Tail sampling configuration
processors:
  batch:
    send_batch_max_size: 1000
    send_batch_size: 100
    timeout: 10s

  tail_sampling:
    decision_wait: 30s
    num_traces: 50000
    policies:
      # Always sample errors (HTTP and GraphQL)
      - name: errors
        type: status_code
        status_code:
          status_codes: [ERROR]

      # Sample slow requests
      - name: slow_requests
        type: latency
        latency:
          threshold_ms: 1000

      # Random sampling for normal traffic
      - name: random_sampling
        type: probabilistic
        probabilistic:
          sampling_percentage: 10

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog/connector]

    # Use a separate pipeline for sampling to maintain accurate trace metrics
    traces/sampling:
      receivers: [datadog/connector]
      processors: [tail_sampling, batch]
      exporters: [datadog/exporter]
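Keep in mind that the tail_sampling processor buffers spans in memory until each trace's decision_wait window elapses, so decision_wait and num_traces effectively bound the collector's memory footprint. Size them for your traffic volume and typical trace duration.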
Collector health metrics
Monitor the health and performance of the OpenTelemetry Collector itself. See Datadog's collector health metrics guide for configuration to track collector throughput, dropped data, and resource usage. For detailed information about collector internal telemetry, see the OpenTelemetry internal telemetry documentation.
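Alongside internal metrics, you can expose a simple liveness endpoint with the health_check extension; this minimal sketch uses its default port:

extensions:
  health_check:
    endpoint: 0.0.0.0:13133

service:
  extensions: [health_check]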
Troubleshooting
No data in Datadog
Verify your API key is set correctly:
echo $DD_API_KEY
Check collector logs for authentication errors
Ensure the correct Datadog site is configured (US vs EU)
Check router logs for OTLP export errors or connection failures to the collector
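If none of that surfaces the problem, temporarily add the collector's debug exporter to a pipeline and watch the collector's log output to confirm data is arriving at all; this is a sketch, and you should remove it once the issue is resolved:

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, datadog/connector, datadog/exporter]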
For comprehensive troubleshooting guidance, see the OpenTelemetry Collector troubleshooting documentation.
Learn more
OpenTelemetry collector resources
Additional collector documentation and resources
- Quick start guide - Get started with the collector
- Configuration basics - Deep dive into collector configuration
- Deployment patterns - Agent vs Gateway deployment models
- Transforming telemetry - Processing and enriching data
- Architecture overview - Understanding collector internals
- Component registry - Available receivers, processors, and exporters
- Collector health checks - HTTP health check endpoints
- zPages diagnostic pages - Internal debugging and metrics pages
- Collector internal telemetry - Self-monitoring configuration
Next steps
Configure router instrumentation for Datadog-optimized telemetry
Configure Datadog dashboards for router monitoring
Learn about advanced sampling strategies
Explore collector processors for data transformation