## Retrieve
Pull from the Retriever archive (an S3 bucket keyed by log-pattern fingerprint). This surfaces events past the log analyzer's retention window, long-window baselines the log analyzer can't query, and new metrics built from history.
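To make the "fingerprinted by log pattern" idea concrete, here is a minimal sketch of one plausible key layout. The fingerprint function and the `tenant / fingerprint / day-partition` scheme are assumptions for illustration, not the Retriever's actual storage format; the point is that a long-window pull reduces to prefix scans over day partitions.

```python
import hashlib

def pattern_fingerprint(pattern: str) -> str:
    # Hypothetical fingerprint: a short, stable hash of the log-pattern template.
    return hashlib.sha256(pattern.encode("utf-8")).hexdigest()[:12]

def archive_prefix(tenant: str, pattern: str, day: str) -> str:
    # Hypothetical key layout: tenant / pattern fingerprint / day partition.
    # A 90d pull then becomes a prefix scan over 90 day partitions.
    return f"{tenant}/{pattern_fingerprint(pattern)}/dt={day}/"
```

Under this sketch, `archive_prefix("acme-corp", "payment_retry", "2024-01-01")` yields a prefix like `acme-corp/<12-hex-chars>/dt=2024-01-01/`, and the same pattern always lands under the same fingerprint.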
"pull
payment_retryforacme-corp, 90d back"412 events match the tenant filter, pulled directly from S3. Time-bucketed series available, or raw events.
"30d baseline of
Payment_Retry, hourly, by tenant"720 hourly buckets, broken down by tenant. ~8B events in the window — per-bucket counts are sampled estimates, but shape and tenant ranking are reliable.
"create a Datadog metric
db_query_timeout, 90d backfilled"New metric
db_query_timeoutdefined in Datadog; 2,160 historical points emitted with original timestamps.
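The baseline example above notes that at ~8B events, per-bucket counts are sampled estimates while shape and tenant ranking stay reliable. A minimal sketch of why that holds, assuming simple uniform sampling (the Retriever's actual sampling strategy is not specified here):

```python
import random
from collections import Counter

def sampled_baseline(events, rate, seed=0):
    # events: iterable of (hour_bucket, tenant) pairs.
    # Keep each event with probability `rate`, then scale the kept counts
    # back up. Per-bucket numbers become estimates with some noise, but
    # the overall shape and the tenant ranking survive.
    rng = random.Random(seed)
    kept = Counter()
    for bucket, tenant in events:
        if rng.random() < rate:
            kept[(bucket, tenant)] += 1
    return {key: round(n / rate) for key, n in kept.items()}
```

With a heavily skewed tenant mix (say 20,000 events for one tenant versus 2,000 for another in the same bucket), a 10% sample still recovers each count to within a few percent, so the top-N ranking is preserved even though the exact numbers are estimates.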
| You ask | Example answer |
|---|---|
| `pull payment_retry for acme-corp, 90d back` | Raw events, count rollups, time-bucketed series, or Prometheus-shape series — pulled directly from S3, optionally filtered by field values. |
| `status on retriever query b27c9-...` | Scan progress, worker completion, and a verdict (complete / in-flight / scan pending / complete, zero events) for a query whose first call returned `partialResults: true`. Does not re-fetch events. |
| `30d baseline of Payment_Retry, hourly, by tenant` | A time series across the full window, broken down by tenant. When the volume is too high for exact counts, per-bucket numbers are sampled estimates — shape and top-N ranking stay reliable. |
| `create a Datadog metric db_query_timeout, 90d backfilled` | Defines a new metric in Datadog or Prometheus and emits historical data points with original timestamps preserved. |
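Because a first call can return `partialResults: true` with a scan still running, callers typically poll the status tool until the verdict is terminal. A minimal polling sketch; `get_status` is a hypothetical callable standing in for the MCP status tool, and the verdict strings are taken from the table above:

```python
import time

TERMINAL_VERDICTS = {"complete", "complete, zero events"}

def wait_for_scan(get_status, query_id, poll_secs=5.0, timeout_secs=300.0):
    # get_status(query_id) is assumed to return a dict with a "verdict"
    # key, e.g. {"verdict": "in-flight"}. Poll until the verdict is
    # terminal or the deadline passes. Status calls do not re-fetch
    # events, so polling is cheap.
    deadline = time.monotonic() + timeout_secs
    while True:
        status = get_status(query_id)
        if status["verdict"] in TERMINAL_VERDICTS:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"query {query_id} not complete after {timeout_secs}s")
        time.sleep(poll_secs)
```

Once `wait_for_scan` returns, a follow-up pull for the same query sees the full (no longer partial) result set.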
## Prerequisites

These tools require the Retriever to be deployed. The status tool additionally needs `LOG10X_RETRIEVER_LOG_GROUP` set so the MCP server can read the per-query CloudWatch log streams.
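A minimal sketch of the kind of startup check this prerequisite implies — not part of the Retriever itself, just an illustration of failing fast when the variable is missing rather than surfacing a later CloudWatch read error:

```python
import os

def retriever_log_group() -> str:
    # Illustrative check: resolve the log group the status tool needs,
    # with an actionable error if the environment variable is unset.
    group = os.environ.get("LOG10X_RETRIEVER_LOG_GROUP")
    if not group:
        raise RuntimeError(
            "LOG10X_RETRIEVER_LOG_GROUP is not set; the status tool "
            "cannot locate the per-query CloudWatch streams without it"
        )
    return group
```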