Getting Data Out
Telemetry only becomes useful once you can look at it, query it, and connect it back to real behavior in your system. This chapter covers the export paths you will use most often.
Export Destinations
The instrument library supports several export formats:
| Exporter | Format | Best For |
|---|---|---|
| Console | Text | Development, debugging |
| OTLP | OpenTelemetry Protocol | Jaeger, Tempo, any OTLP backend |
| Prometheus | Text/OpenMetrics | Prometheus scraping |
Console Export
The console exporter prints telemetry to stdout. It is a quick way to verify that spans, metrics, or logs are being produced before you configure a full backend.
Spans
%% Register console span exporter
instrument_exporter:register(instrument_exporter_console:new()).
%% Now spans print when they end
instrument_tracer:with_span(<<"test">>, fun() ->
    ok
end).
%% Output: Span: test (1.234ms) trace_id=abc... span_id=xyz...

Metrics
%% Format all metrics as Prometheus text
Text = instrument_prometheus:format(),
io:format("~s", [Text]).

Logs
%% Register console log exporter
instrument_log_exporter:register(instrument_log_exporter_console:new()).
instrument_logger:install(#{exporter => true}).

OTLP Export
OTLP, the OpenTelemetry Protocol, is the standard format for sending telemetry to backends such as Jaeger, Grafana Tempo, and Honeycomb.
Configuration
%% Via environment variables
os:putenv("OTEL_EXPORTER_OTLP_ENDPOINT", "http://localhost:4318"),
os:putenv("OTEL_SERVICE_NAME", "my-service"),
instrument_config:init().

Or programmatically:
%% Configure OTLP exporter
Exporter = instrument_exporter_otlp:new(#{
    endpoint => "http://localhost:4318/v1/traces",
    headers => [{<<"Authorization">>, <<"Bearer token">>}]
}).

Trace Export
%% Register OTLP span exporter (goes through the batched manager)
instrument_exporter:register(instrument_exporter_otlp:new(#{
    endpoint => "http://jaeger:4318/v1/traces"
})).

Metric Export
%% Export metrics via OTLP
MetricExporter = instrument_metrics_exporter_otlp:new(#{
    endpoint => "http://collector:4318/v1/metrics"
}),
instrument_metrics_exporter:register(MetricExporter).

Log Export
%% Export logs via OTLP
LogExporter = instrument_log_exporter_otlp:new(#{
    endpoint => "http://collector:4318/v1/logs"
}),
instrument_log_exporter:register(LogExporter),
instrument_logger:install(#{exporter => true}).

Prometheus Export
Prometheus pulls metrics by scraping an HTTP endpoint exposed by your application.
Setting Up the Endpoint
%% In your HTTP server (e.g., cowboy handler)
handle_metrics(_Req) ->
    Body = instrument_prometheus:format(),
    ContentType = instrument_prometheus:content_type(),
    {200, [{<<"content-type">>, ContentType}], Body}.

Prometheus Configuration
Add a scrape target in prometheus.yml:
scrape_configs:
  - job_name: 'my-erlang-app'
    static_configs:
      - targets: ['localhost:8080']
    metrics_path: '/metrics'
    scrape_interval: 15s

Metric Naming for Prometheus
Prometheus metric names should be stable and easy to query:
%% Good names
instrument_metric:new_counter(http_requests_total, <<"Total HTTP requests">>).
instrument_metric:new_gauge(http_active_connections, <<"Active connections">>).
instrument_metric:new_histogram(http_request_duration_seconds, <<"Request duration">>).
%% Counter: use _total suffix
%% Histogram: use _seconds or _bytes suffix
%% Gauge: describe current state

Jaeger Setup
Jaeger accepts OTLP traces. For local development, you can start it with Docker:
docker run -d --name jaeger \
  -p 16686:16686 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest
Configure your application:
os:putenv("OTEL_EXPORTER_OTLP_ENDPOINT", "http://localhost:4318"),
os:putenv("OTEL_SERVICE_NAME", "my-service"),
instrument_config:init().

View traces at http://localhost:16686.
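To have something to look at, you can emit a few test spans from the shell using the with_span API shown earlier in this chapter. This is a sketch: the span name, iteration count, and sleep duration are arbitrary.

```erlang
%% Emit 100 short-lived spans so Jaeger has data to display.
lists:foreach(
    fun(_N) ->
        instrument_tracer:with_span(<<"demo-request">>, fun() ->
            timer:sleep(10),  %% simulate a little work
            ok
        end)
    end,
    lists:seq(1, 100)).
```

Once an export cycle has run, the spans should appear under the configured service name in the Jaeger UI.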
Batch Processing
In production, use the batch processor to reduce request-path overhead:
%% Configure batch span processor
instrument_span_processor:register(instrument_span_processor_batch, #{
    exporter => instrument_exporter_otlp,
    exporter_config => #{
        endpoint => "http://collector:4318/v1/traces"
    },
    max_queue_size => 2048,
    schedule_delay_millis => 5000, %% 5 seconds
    max_export_batch_size => 512
}).

Batch processing:
- Buffers spans in memory
- Exports in batches periodically
- Reduces network overhead
- Handles temporary backend unavailability
Resource Configuration
Resources identify the service that produced the telemetry:
%% Via environment
os:putenv("OTEL_SERVICE_NAME", "order-service"),
os:putenv("OTEL_SERVICE_VERSION", "1.2.3"),
os:putenv("OTEL_RESOURCE_ATTRIBUTES", "deployment.environment=production").
%% Or programmatically
Resource = instrument_resource:create(#{
    <<"service.name">> => <<"order-service">>,
    <<"service.version">> => <<"1.2.3">>,
    <<"deployment.environment">> => <<"production">>
}).

Multiple Exporters
You can export to multiple destinations:
%% Console for development
instrument_exporter:register(instrument_exporter_console:new()),
%% OTLP for production
instrument_exporter:register(instrument_exporter_otlp:new(#{
    endpoint => "http://collector:4318/v1/traces"
})).

Complete Setup Example
-module(telemetry_setup).
-export([init/0]).
init() ->
    %% Configure from environment
    instrument_config:init(),
    %% Set up batch processor for traces
    ok = instrument_span_processor:register(instrument_span_processor_batch, #{
        exporter => instrument_exporter_otlp,
        exporter_config => #{
            endpoint => os:getenv("OTEL_EXPORTER_OTLP_ENDPOINT", "http://localhost:4318") ++ "/v1/traces"
        },
        max_queue_size => 2048,
        schedule_delay_millis => 5000,
        max_export_batch_size => 512
    }),
    %% Set up log exporter
    case os:getenv("OTEL_EXPORTER_OTLP_ENDPOINT") of
        false ->
            %% Development: console logging
            ok;
        Endpoint ->
            %% Production: OTLP logging
            LogExporter = instrument_log_exporter_otlp:new(#{
                endpoint => Endpoint ++ "/v1/logs"
            }),
            instrument_log_exporter:register(LogExporter),
            instrument_logger:install(#{exporter => true})
    end,
    %% Set up metrics exporter
    MetricExporter = instrument_metrics_exporter_otlp:new(#{
        endpoint => os:getenv("OTEL_EXPORTER_OTLP_ENDPOINT", "http://localhost:4318") ++ "/v1/metrics"
    }),
    instrument_metrics_exporter:register(MetricExporter),
    ok.

Graceful Shutdown
Flush pending telemetry before shutdown:
%% In your application stop callback
stop(_State) ->
    %% Flush pending spans
    instrument_span_processor:force_flush(),
    %% Allow time for export
    timer:sleep(1000),
    ok.

Exercise
Set up a complete observability stack:
- Start Jaeger with Docker
- Configure OTLP export for traces
- Set up Prometheus metrics endpoint
- Verify data appears in both backends
Generate traffic, then confirm that traces and metrics appear where you expect.
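For the metrics side of the verification step, you can fetch the scrape endpoint directly from the Erlang shell with OTP's built-in httpc client. This is a sketch: it assumes your application serves the endpoint on localhost:8080 (matching the Prometheus configuration above) and that the http_requests_total counter from the naming example has been registered.

```erlang
%% Fetch the scrape endpoint and confirm a known metric is present.
inets:start(),
{ok, {{_Version, 200, _Reason}, _Headers, Body}} =
    httpc:request("http://localhost:8080/metrics"),
true = (string:find(Body, "http_requests_total") =/= nomatch).
```

If the pattern match fails, check that the endpoint is wired into your HTTP router and that the metric was created before the request.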
Next Steps
Your telemetry is now flowing to backends. Next, we will control trace volume and cost with sampling.