FAQ
The Edge Optimizer reduces log analytics and storage costs by optimizing log/trace events at the edge before shipping them to your output destination.
Overview
How does Edge Optimizer reduce volume without losing data
The engine identifies repeating structure in your logs -- JSON keys, timestamp formats, constant strings -- and stores each unique pattern once as a cached template. Only the variable values (IPs, pod names, trace IDs) are shipped per event. Similar to how Protocol Buffers define a schema once and send only field values over the wire.
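The template/value split above can be sketched in a few lines. This is a hypothetical illustration of the idea, not the engine's actual algorithm, template format, or wire format; the regex and `<*>` placeholder are assumptions made for the example.

```python
import re

# Hypothetical sketch of template/value separation. Variable fields
# (IPv4 addresses, long hex IDs, numbers) are masked out to form a
# template; each unique template is cached once, and only the masked
# values ship per event.
VAR = re.compile(r"\d+\.\d+\.\d+\.\d+|[0-9a-f]{16,}|\d+")

templates = {}  # template string -> template id

def compact(line):
    values = VAR.findall(line)        # per-event variable values
    template = VAR.sub("<*>", line)   # the repeating structure
    tid = templates.setdefault(template, len(templates))
    return tid, values                # only this ships per event

tid1, vals1 = compact("GET /api/user 10.0.0.1 took 42 ms")
tid2, vals2 = compact("GET /api/user 10.0.0.9 took 7 ms")
print(tid1 == tid2)   # True: both events share one cached template
print(vals2)          # ['10.0.0.9', '7']
```

Both events resolve to the same cached template, so the structure is transmitted once while each event carries only its variable values.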
The AOT Compiler builds a symbol vocabulary from your repos in CI/CD; the JIT engine uses those symbols to create and assign templates at runtime. The built-in library covers 150+ frameworks.
Result: 50-80% volume reduction with 100% data fidelity. Every field, value, and timestamp remains intact. Real-world benchmark: 64% reduction on Kubernetes OTel logs (1,835 → 662 bytes per event).
Expansion at query time: The 10x for Splunk app and L1ES plugin for Elasticsearch/OpenSearch expand compact events transparently at search time. Storage Streamer expands and streams from S3 to Datadog, CloudWatch, or managed Elasticsearch on-demand. Dashboards, alerts, and queries work unchanged.
Compact vs. compression:
- Compressed data must be decompressed before it can be searched or aggregated. Compact events remain searchable in place -- they can be streamed to log analyzers and aggregated to metrics without expansion.
- Many SIEMs bill on uncompressed ingest volume -- gzip and zstd reduce storage but not the ingestion bill. Edge Optimizer reduces volume before ingestion, cutting both costs.
- The two approaches are complementary: optimize first, then let your SIEM compress on top.
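The contrast between the two approaches can be shown in a short sketch. The `(template_id, values)` event layout here is a hypothetical illustration, not the product's actual format; the point is that gzip output is opaque until decompressed, while compact events can be matched as-is.

```python
import gzip

# Compact events as illustrative (template_id, values) pairs.
events = [(0, ["10.0.0.1", "42"]), (0, ["10.0.0.9", "7"]), (1, ["pod-a"])]

# Compressed path: bytes are opaque until expanded.
blob = gzip.compress(b"GET /api/user 10.0.0.1 took 42 ms\n")
expanded = gzip.decompress(blob)   # required before any search

# Compact path: filter on the variable values directly, no expansion step.
hits = [e for e in events if "10.0.0.9" in e[1]]
print(len(hits))  # 1
```

The compressed blob had to be fully expanded before the search could run; the compact events were filtered in place.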
Why not sampling or filtering? Those are lossy -- they permanently discard data, eliminating evidence for troubleshooting and security investigations. Edge Optimizer uses only lossless techniques.
What do compact events actually look like
See real-world before/after examples showing how compact events preserve all data while reducing size. Examples include Kubernetes OTel logs (1,835 → 662 bytes), application traces, and container runtime output.
How can I verify that optimization is truly lossless
Run a round-trip test with the Dev CLI on your own logs:
# 1. Place your log file in the input directory
cp /path/to/original.log $TENX_CONFIG/data/sample/input/
# 2. Run the engine — produces encoded.log (compact) and templates
tenx @apps/dev
# 3. Feed the compact output back as input
rm -f $TENX_CONFIG/data/sample/input/*.log
cp $TENX_CONFIG/data/sample/output/encoded.log $TENX_CONFIG/data/sample/input/
# 4. Run again — produces decoded.log (expanded)
tenx @apps/dev
# 5. Compare expanded output against the original
diff $TENX_CONFIG/data/sample/output/decoded.log /path/to/original.log
The diff command produces no output when the files are identical. This confirms every field, value, delimiter, and whitespace character survived the round-trip.
The edge-optimizer.html page includes an interactive demo with an "Expand back" button and a "Verify across 1,000 real events" dialog that performs this round-trip on 1,000 Kubernetes events from the OpenTelemetry Demo, with side-by-side comparison and downloadable files.
What types of logs see the best optimization ratios
Highly repetitive log sources achieve the best results. Typical reduction by log type:
- Application logs with consistent formats -- 60-80% reduction
- Container orchestration (Kubernetes), web server access logs, cloud infrastructure logs -- high reduction due to structured, repetitive nature
- Security event logs with many similar entries (firewall allow/deny) -- compact efficiently
- Unstructured logs -- 30-50% reduction from whitespace normalization and structural optimization
Use the Dev tool to analyze your specific logs.
What ROI can I expect from Edge Optimizer
ROI formula:
(daily volume in GB) x (reduction ratio) x (per-GB cost) x 365 = annual savings
Typical reduction ratio: 0.50-0.70. Subtract your Log10x license cost for net ROI.
Example: 10 TB/day (10,000 GB) x 64% reduction = 6,400 GB/day saved x $0.50/GB x 365 = $1.17M annual savings, minus your Log10x license cost.
- Use the Dev tool to measure your reduction ratio
- Use the ROI Calculator on the pricing page
Reference point: 64% lossless reduction measured on real K8s OTel logs.
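The formula above can be checked directly. A minimal sketch using the 10 TB/day example figures; the license cost is a placeholder you would substitute with your own:

```python
# Annual savings = daily volume (GB) x reduction ratio x per-GB cost x 365,
# matching the ROI formula above.
def annual_savings(daily_gb, reduction_ratio, per_gb_cost):
    return daily_gb * reduction_ratio * per_gb_cost * 365

# 10 TB/day, 64% reduction, $0.50/GB analyzer cost.
gross = annual_savings(daily_gb=10_000, reduction_ratio=0.64, per_gb_cost=0.50)
print(f"${gross:,.0f}")  # $1,168,000 -- about $1.17M/year before license cost
```

Subtracting your Log10x license cost from `gross` gives the net annual ROI.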
How does the Optimizer relate to the Regulator
The Optimizer is a superset of the Edge Regulator — it includes the same cost-aware filtering (budget caps, severity boost, per-event-type throttling) plus lossless compaction. Deploying the Optimizer replaces the Regulator; you do not run both.
- Regulator output: Plain text events — forwarder ships to analyzer as-is
- Optimizer output: Compact events — forwarder ships at 50-65% smaller volume
Use the Regulator when your analyzer does not support compact events (e.g., Datadog, CloudWatch). Use the Optimizer when it does (Splunk, Elasticsearch) or when forwarding to S3 via Storage Streamer.
Integration & Deployment
Which log forwarders does Edge Optimizer support
Edge Optimizer integrates with all major log forwarders.
Deployment: Runs as a sidecar process alongside your forwarder. Kubernetes deployment via Helm chart (DaemonSet). Setup time: ~30 minutes.
Resource requirements: 512 MB heap + 2 threads handles 100+ GB/day per node. See Performance FAQ for sizing details and Kubernetes resource specs, and the deployment guide for per-forwarder configuration.
How do I search optimized events in my SIEM
How expansion works depends on your SIEM:
- Splunk: The open-source 10x for Splunk app transparently expands compact events at search time. Queries, dashboards, and alerts work unchanged
- Elasticsearch / OpenSearch: The open-source L1ES plugin transparently rewrites standard queries and decodes _source at search time. Kibana dashboards, KQL queries, and alerts work unchanged. Available for self-hosted Elasticsearch 8.17+ and OpenSearch 2.19+
- Datadog: Optimizer output goes to S3 only -- Datadog cannot parse compact events directly. Storage Streamer expands and streams to Datadog on-demand. For Datadog-bound logs, use Edge Regulator
- CloudWatch: Events arrive in standard CloudWatch format. Logs Insights queries, dashboards, metric filters, and alarms work unchanged
- Managed Elasticsearch (Elastic Cloud, AWS OpenSearch Service): Storage Streamer expands compact events from S3 before ingestion (custom plugins cannot be installed on managed services)
How does this work with managed platforms (Elastic Cloud, Splunk Cloud, AWS OpenSearch Service)
The integration path depends on your platform:
| Platform | Expansion method | Query overhead |
|---|---|---|
| Self-hosted ES 8.17+ / OpenSearch 2.19+ | L1ES plugin expands at search time | ~1.25x |
| Elastic Cloud, AWS OpenSearch Service, Coralogix, Logz.io | Storage Streamer expands from S3 before ingestion | Zero |
| Splunk Cloud | 10x for Splunk app installs via Splunkbase -- same as Enterprise | ~1.25x |
| Splunk Enterprise | 10x for Splunk app installs directly | ~1.25x |
| Datadog, CloudWatch | Storage Streamer expands from S3 before ingestion | Zero |
Edge apps (Reporter, Regulator, Optimizer) deploy identically regardless of platform -- the forwarder sidecar doesn't change. The difference is how compact events reach your analyzer.
Performance
What's the query-time overhead of searching compact events
~1.25x query time for both Splunk and Elasticsearch. Datadog and CloudWatch have zero overhead (events are expanded before ingestion).
- Splunk: The 10x for Splunk app expands using native SPL and KV Store template lookups -- no Python in the per-event hot path. Per-event expansion is O(1). A 10-second search takes ~12.5 seconds. Compatible with interactive search, scheduled search, alerts, dashboards, REST API, and summary indexing
- Elasticsearch / OpenSearch: The L1ES plugin expands at the Lucene segment level -- each shard handles expansion locally with no central bottleneck. A 100ms query takes ~125ms. Scales horizontally with cluster size
- Datadog, CloudWatch & managed Elasticsearch: Storage Streamer expands events before ingestion -- zero query overhead
What latency does optimization add to my pipeline
Single-digit millisecond processing per batch in most configurations. The optimization engine processes logs as they arrive without buffering delays.
The reduced data volume can improve end-to-end pipeline latency by decreasing network transfer time and SIEM indexing overhead.
How does Edge Optimizer handle failures
Edge Optimizer uses automatic recovery — if 10x crashes, the forwarder buffers events to disk and respawns 10x. Once restarted, buffered events drain through 10x normally. No data loss, no unoptimized leakage.
| Scenario | Behavior | Details |
|---|---|---|
| 10x crash or OOM | Forwarder buffers to disk, respawns 10x, buffer drains after restart | Sidecar failure recovery |
| Volume exceeds 10x capacity | Forwarder buffers to disk, 10x catches up | Backpressure handling |
| Forwarder crashes | 10x stops with it, both restart together | Forwarder crash recovery |
| Network interruption | No effect — 10x communicates via local IPC | Network independence |
| Downstream SIEM slow or unreachable | Forwarder handles retries — 10x unaffected | SIEM availability |
| Rollback | Single helm uninstall — forwarder continues unchanged, no data migration | Rollback procedure |