Compatibility
This page covers Kibana and existing queries, Logstash/Beats, supported forwarders and platforms, how to test on your cluster, and Data Streams + ILM integration with Retriever.
Does 10x work with my existing Kibana dashboards and queries
Yes. The 10x Engine operates pre-ingestion -- events that pass through arrive in standard Elasticsearch format. Kibana dashboards, saved searches, KQL queries, visualizations, and alerts all work unchanged. Zero reconfiguration.
All field mappings, index patterns, and document structure are preserved. Reporter and Reducer output standard Elasticsearch documents. In Compact mode, Reducer compacts events losslessly; they are expanded transparently by the L1ES plugin on self-hosted Elasticsearch 8.17 and OpenSearch 2.19 (Kibana dashboards and KQL queries work unchanged), or via Retriever on managed services.
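Because document structure is untouched, an existing query returns the same results before and after. A quick sanity check, sketched below; the endpoint, credentials, and index pattern are placeholders for your own cluster:

```bash
# Run an existing Lucene-style query against optimized data; the hit
# count should match what your Kibana saved search reports.
curl -s -u elastic:$ES_PASSWORD \
  "https://your-cluster.es.io:9243/logs-*/_search?q=severity_level:ERROR&size=0" \
  | jq '.hits.total'
```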
Does this work with Logstash and Beats
Yes. The 10x Engine runs as a sidecar alongside your existing forwarder -- it doesn't replace Logstash, Beats, or any part of your pipeline.
Your Logstash pipelines, Beats modules, and Elasticsearch ingest nodes all continue unchanged. The 10x Engine intercepts events at the forwarder output, optimizes them, then forwards to the Elasticsearch Bulk API. Nothing to reconfigure in Logstash, Beats, or your ingest pipelines.
Supported: Filebeat, Metricbeat, Logstash 7.x/8.x, Fluent Bit, Fluentd, and OpenTelemetry Collector.
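For a quick end-to-end check, you can hand-feed the sidecar a single event instead of waiting for forwarder traffic. A minimal sketch, assuming a sidecar listening on TCP 5160 (the port the Logstash integration uses, per the table in the next section); adjust to your deployment:

```bash
# Send one JSON event straight to the 10x sidecar's TCP input, then
# confirm it shows up in Elasticsearch a few seconds later.
echo '{"message":"10x sidecar smoke test","@timestamp":"2024-02-27T14:00:00Z"}' \
  | nc localhost 5160
```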
Which forwarders and versions are supported
All major log forwarders are supported. The 10x Engine integrates via standard input/output protocols — no custom plugins or agents required.
| Forwarder | Minimum Version | Integration Type | Socket Type | Notes |
|---|---|---|---|---|
| Fluent Bit | None documented | Lua filter + subprocess | TCP or Unix | Launches 10x on first event |
| Fluentd | None documented | exec_filter plugin | Unix socket | Auto-restart child process |
| Filebeat | None documented | JavaScript processor | Unix socket | Cannot use console output |
| Logstash | 7.x+ | Pipe output plugin | Unix/TCP (5160) | Works with Logstash pipelines |
| OTel Collector | 0.143.0+ | Syslog RFC5424 exporter | Unix/TCP | Separate service, not sidecar |
| Datadog Agent | Any | File relay (via Fluent Bit) | File-based | Indirect integration |
| Splunk UF | Any | File relay (via Fluent Bit) | File-based | Indirect integration |
Key note: All integrations use standard protocols (syslog RFC5424, Fluentd forward protocol, JSON). The only explicit version requirement is OTel Collector v0.143.0+ for Unix socket support in the syslog exporter.
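Since the wire formats are standard, you can exercise an integration without the forwarder at all. A hedged sketch of hand-crafting the RFC5424 frame that, for example, the OTel Collector's syslog exporter would emit; the Unix socket path is a placeholder, not a documented default:

```bash
# RFC5424 layout: <PRI>VERSION TIMESTAMP HOSTNAME APP PROCID MSGID SD MSG.
# The JSON payload rides in the MSG field.
printf '<14>1 2024-02-27T14:00:00Z web-01 myapp - - - {"severity_level":"ERROR","message":"disk full"}\n' \
  | nc -U /var/run/10x/engine.sock
```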
Which Elasticsearch platforms are supported
All of them. The 10x Engine works pre-ingestion -- it optimizes events before they reach any Elasticsearch-compatible backend.
| Platform | Versions |
|---|---|
| Elastic Cloud | All regions, all tiers |
| Self-hosted Elasticsearch | 7.x, 8.x (including open-source) |
| OpenSearch | 1.x, 2.x (AWS and self-hosted) |
| Coralogix | Elasticsearch-compatible ingestion |
| Logz.io | Elasticsearch-compatible ingestion |
Same optimization pipeline, same Reducer sidecar, same results -- regardless of where your Elasticsearch runs.
How do I test this on my Elasticsearch cluster
- Dev — Run on your Elasticsearch log files locally. One-line install (sketched after this list), results in minutes. No account, no credit card.
- Reporter — Deploy as a DaemonSet for pre-SIEM cost visibility. Alternatively, use the MCP server's SIEM-sample tool for agentless SIEM-side analysis via the Elasticsearch REST API.
- Reducer — Deploy via Helm chart alongside your forwarder. Filter mode (sampling) or Compact mode (lossless shrink via the L1ES plugin). ~30 min setup.
- Retriever — Route events to S3, stream selected data to Elasticsearch on-demand.
Each step is independent — start with Dev to see your reduction ratio, then move to production when ready.
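For the Dev step, a hypothetical session might look like this -- the installer URL and the `10x` CLI invocation below are illustrative placeholders, not documented commands, so copy the real ones from your onboarding instructions:

```bash
# Hypothetical one-line install, then a local dry run against an
# Elasticsearch log file -- prints the reduction ratio, touches no cluster.
curl -sSL https://get.log10x.com/dev | sh
10x dev --input /var/log/elasticsearch/my-cluster.log
```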
Retriever: Data Streams & ILM Integration
Retriever streams queried events from S3 back into Elasticsearch. If you're using Data Streams with Index Lifecycle Management (ILM) for automated index rollover, you may have questions about integration:
Q: Will streamed events from S3 integrate cleanly with my existing data stream?
Yes. Retriever outputs standard Elasticsearch JSON documents with original timestamps and fields intact. These events integrate cleanly into your data stream's backing indices -- no special configuration needed.
How it works:
- Retriever queries S3 for matching events (via Bloom filter index)
- Matched events are transformed back to original JSON format (full field fidelity)
- Events are sent to the Elasticsearch Bulk API with timestamps from the original events (see the sketch after this list)
- The data stream receives the events and routes them to the appropriate backing index based on timestamp
- ILM policies continue unchanged -- rollover decisions based on index age/size, unaware of event origin
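The Bulk API step above is plain NDJSON; nothing marks the events as restored. An illustrative request of the kind Retriever sends (the endpoint is a placeholder; note that data streams only accept the `create` action):

```bash
# One restored event, carrying its original @timestamp. The $'...'
# quoting produces the literal newlines the Bulk API requires.
curl -s -X POST "https://your-cluster.es.io:9243/logs-prod-default/_bulk" \
  -H 'Content-Type: application/x-ndjson' \
  --data-binary $'{"create":{}}\n{"@timestamp":"2024-02-26T14:00:00Z","severity_level":"ERROR","message":"disk full"}\n'
```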
Q: Will streamed events disrupt my ILM rollover schedule?
No. ILM policies operate on index metadata (creation time, size), not event timestamps. Streamed events with historical timestamps integrate into the appropriate backing index without triggering unintended rollovers. Example:
```
Today's live logs (timestamp: 2024-02-27T14:00:00Z)
        ↓
Data stream: logs-prod-default
        ↓
Backing index: .ds-logs-prod-default-2024.02.27-000001 (age-based ILM policy)
        ↓
S3 archive (queried: "last 24h of ERROR")
└─ Events with timestamps from yesterday (2024-02-26T14:00:00Z)
        ↓
Streamed to data stream
        ↓
Routed to backing index: .ds-logs-prod-default-2024.02.26-000001 (or appropriate index by timestamp)
```
ILM continues: Index rollover happens on schedule based on index age, not event timestamps.
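You can confirm this from the cluster itself: the ILM explain API reports the index age that rollover conditions evaluate, regardless of the event timestamps inside. (The endpoint and backing-index name below are placeholders.)

```bash
# "age" here is the index's own age -- the value ILM compares against
# max_age -- not anything derived from event @timestamp fields.
curl -s "https://your-cluster.es.io:9243/.ds-logs-prod-default-2024.02.27-000001/_ilm/explain?human" \
  | jq '.indices[] | {phase, action, age}'
```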
Q: Can I stream regulated events into the same data stream as live logs?
Yes. The typical use case is:
- Live logs: Stream continuously to Elasticsearch via Reducer (Compact mode)
- Archived logs: Query via Retriever on-demand (e.g., "pull last 24h of ERROR level")
- Target: Same data stream (e.g., logs-prod-default)
Events from both sources merge seamlessly in Kibana. Searches, dashboards, and alerts work on the combined dataset without any changes to your KQL queries or visualization logic.
Example workflow:
```bash
# Query S3 for errors in last 24h, stream to same data stream as live logs
curl -X POST https://retriever.log10x.com/query \
  -d '{
    "from": "now(\"-24h\")",
    "to": "now()",
    "search": "severity_level == \"ERROR\"",
    "target": "logs-prod",
    "elasticsearch_endpoint": "https://your-elastic-cloud.es.us-central1.gcp.cloud.es.io:9243"
  }'
```
Results: Historical ERROR events from S3 appear in Kibana alongside today's live logs in the same data stream.
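To verify the merge, a single search over the data stream should return both populations. A sketch (the endpoint is a placeholder, and it assumes `severity_level` is a keyword field, as in the Retriever query above):

```bash
# Restored events from yesterday and live events from today match the
# same filter because they now live in the same data stream.
curl -s "https://your-cluster.es.io:9243/logs-prod-default/_search" \
  -H 'Content-Type: application/json' \
  -d '{"query":{"bool":{"filter":[
        {"term":{"severity_level":"ERROR"}},
        {"range":{"@timestamp":{"gte":"now-48h"}}}
      ]}},"size":5}'
```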
Q: What if I want to stream to a different destination index?
You can configure Retriever to route streamed events to a separate index (e.g., logs-prod-archive-restored) if needed. Set the target index in Retriever config; events will be ingested there instead. ILM policies on both the live data stream and archive index work independently.
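Reusing the query format from the workflow above, only the `target` changes (the destination index name mirrors the example in this answer; the endpoint is a placeholder):

```bash
# Same query as before, routed to a dedicated restore index instead of
# the live data stream; ILM on each side proceeds independently.
curl -X POST https://retriever.log10x.com/query \
  -d '{
    "from": "now(\"-24h\")",
    "to": "now()",
    "search": "severity_level == \"ERROR\"",
    "target": "logs-prod-archive-restored",
    "elasticsearch_endpoint": "https://your-elastic-cloud.es.us-central1.gcp.cloud.es.io:9243"
  }'
```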
Performance notes:
- Data stream ingestion continues at its normal rate (unaffected by Retriever queries)
- Streamed events use the Elasticsearch Bulk API and are processed like any other bulk ingest traffic
- ILM index rollover adds no overhead (operates on index metadata, not per-event)
- Typical query: ~2-30 seconds depending on result set size (see Retriever FAQ)