FAQ

The Edge Optimizer reduces log analytics and storage costs by optimizing log/trace events at the edge before shipping them to their outputs.

Overview

How does Edge Optimizer reduce volume without losing data

The engine identifies repeating structure in your logs -- JSON keys, timestamp formats, constant strings -- and stores each unique pattern once as a cached template. Only the variable values (IPs, pod names, trace IDs) are shipped per event. Similar to how Protocol Buffers define a schema once and send only field values over the wire.
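To make the idea concrete, here is a minimal sketch of template extraction. The format is hypothetical -- this is not the actual Edge Optimizer wire encoding, and `VAR_PATTERN` is an illustrative stand-in for the real symbol vocabulary -- but it shows the mechanism: variable tokens become placeholders, the template is cached once, and only the values ship per event.

```python
import re

# Hypothetical variable-token matcher (IPs, hex IDs, numbers).
# The real engine uses a compiled symbol vocabulary, not one regex.
VAR_PATTERN = re.compile(r"\d+\.\d+\.\d+\.\d+|[0-9a-f]{8,}|\d+")

template_cache = {}  # template string -> template id, stored once

def compact(event: str):
    values = VAR_PATTERN.findall(event)          # variable values to ship
    template = VAR_PATTERN.sub("<*>", event)     # repeating structure
    tid = template_cache.setdefault(template, len(template_cache))
    return tid, values  # after the first event, only this goes over the wire

tid1, vals1 = compact("GET /api/users from 10.0.0.1 took 42 ms")
tid2, vals2 = compact("GET /api/users from 10.0.0.2 took 7 ms")
assert tid1 == tid2                  # same structure -> same cached template
assert vals2 == ["10.0.0.2", "7"]    # only the variable values differ
```

Both events share one cached template ("GET /api/users from <*> took <*> ms"), so the constant text is paid for once rather than per event -- the same idea as Protocol Buffers sending field values against a known schema.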

The AOT Compiler builds a symbol vocabulary from your repos in CI/CD; the JIT engine uses those symbols to create and assign templates at runtime. A built-in library covers 150+ frameworks.

Result: 50-80% volume reduction with 100% data fidelity. Every field, value, and timestamp remains intact. Real-world benchmark: 64% reduction on Kubernetes OTel logs (1,835 → 662 bytes per event).

Expansion at query time: The 10x for Splunk app and L1ES plugin for Elasticsearch/OpenSearch expand compact events transparently at search time. Storage Streamer expands and streams from S3 to Datadog, CloudWatch, or managed Elasticsearch on-demand. Dashboards, alerts, and queries work unchanged.

Compact vs. compression:

  • Compressed data must be decompressed before it can be searched or aggregated. Compact events remain searchable in place -- they can be streamed to log analyzers and aggregated to metrics without expansion.
  • Many SIEMs bill on uncompressed ingest volume -- gzip and zstd reduce storage but not the ingestion bill. Edge Optimizer reduces volume before ingestion, cutting both costs.
  • The two approaches are complementary: optimize first, then let your SIEM compress on top.
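A small sketch of the "searchable in place" distinction, using a hypothetical compact record layout (a template id plus the variable values) rather than the real encoding:

```python
import gzip

# Hypothetical compact records: (template_id, variable values).
# Illustrative only -- not the actual Edge Optimizer format.
compact_events = [
    (0, ["10.0.0.1", "200"]),
    (0, ["10.0.0.2", "500"]),
    (0, ["10.0.0.1", "503"]),
]

# Compact events can be filtered/aggregated directly -- no expansion step.
status_5xx = [vals for _, vals in compact_events if vals[1].startswith("5")]
assert len(status_5xx) == 2

# Gzip output, by contrast, is an opaque byte stream until decompressed.
blob = gzip.compress(b"10.0.0.1 200\n10.0.0.2 500\n10.0.0.1 503\n")
lines = gzip.decompress(blob).decode().splitlines()
assert sum(1 for l in lines if l.split()[1].startswith("5")) == 2
```

The variable values travel as-is, so a downstream consumer can match or count them without reconstructing full events; the gzip path needs a decompression pass first.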

Why not sampling or filtering? Those are lossy -- they permanently discard data, eliminating evidence for troubleshooting and security investigations. Edge Optimizer uses only lossless techniques.

What do compact events actually look like

See real-world before/after examples showing how compact events preserve all data while reducing size. Examples include Kubernetes OTel logs (1,835 → 662 bytes), application traces, and container runtime output.

What types of logs see the best optimization ratios

Highly repetitive log sources achieve the best results. Typical reduction by log type:

  • Application logs with consistent formats -- 60-80% reduction
  • Container orchestration (Kubernetes), web server access logs, cloud infrastructure logs -- high reduction due to structured, repetitive nature
  • Security event logs with many similar entries (firewall allow/deny) -- compact efficiently
  • Unstructured logs -- 30-50% reduction from whitespace normalization and structural optimization

Use the Dev tool to analyze your specific logs.

What ROI can I expect from Edge Optimizer

ROI formula:

(daily volume in GB) x (reduction ratio) x (per-GB cost) x 365 = annual savings

Typical reduction ratio: 0.50-0.70. Subtract your Log10x license cost for net ROI.

Example: 10 TB/day (10,000 GB) x 64% reduction = 6,400 GB/day saved x $0.50/GB x 365 = $1.17M annual savings, minus your Log10x license cost.
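The worked example above can be checked in a few lines. The figures come from this FAQ; the function name is illustrative:

```python
def annual_savings(daily_gb: float, reduction: float, per_gb_cost: float) -> float:
    """ROI formula from this FAQ: daily GB x reduction ratio x $/GB x 365."""
    return daily_gb * reduction * per_gb_cost * 365

# 10 TB/day at the measured 64% reduction, $0.50/GB ingest cost:
savings = annual_savings(10_000, 0.64, 0.50)
assert round(savings) == 1_168_000  # ~$1.17M/year, before subtracting license cost
```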

Reference point: 64% lossless reduction measured on real K8s OTel logs.

Integration & Deployment

Which log forwarders does Edge Optimizer support

Edge Optimizer integrates with all major log forwarders.

Deployment: Runs as a sidecar process alongside your forwarder. Kubernetes deployment via Helm chart (DaemonSet). Setup time: ~30 minutes.

Resource requirements: 512 MB heap + 2 threads handles 100+ GB/day per node. See Performance FAQ for sizing details and Kubernetes resource specs, and the deployment guide for per-forwarder configuration.

How do I search optimized events in my SIEM

How expansion works depends on your SIEM:

  • Splunk: The open-source 10x for Splunk app transparently expands compact events at search time. Queries, dashboards, and alerts work unchanged
  • Elasticsearch / OpenSearch: The open-source L1ES plugin transparently rewrites standard queries and decodes _source at search time. Kibana dashboards, KQL queries, and alerts work unchanged. Available for self-hosted Elasticsearch 8.17+ and OpenSearch 2.19+
  • Datadog: Optimizer output goes to S3 only -- Datadog cannot parse compact events directly. Storage Streamer expands and streams to Datadog on-demand. For Datadog-bound logs, use Edge Regulator
  • CloudWatch: Events arrive in standard CloudWatch format. Logs Insights queries, dashboards, metric filters, and alarms work unchanged
  • Managed Elasticsearch (Elastic Cloud, AWS OpenSearch Service): Storage Streamer expands compact events from S3 before ingestion (custom plugins cannot be installed on managed services)

What's the query-time overhead of searching compacted events

~1.25x query time for both Splunk and Elasticsearch. Datadog and CloudWatch have zero overhead (events are expanded before ingestion).

  • Splunk: The 10x for Splunk app expands using native SPL and KV Store template lookups -- no Python in the per-event hot path. Per-event expansion is O(1). A 10-second search takes ~12.5 seconds. Compatible with interactive search, scheduled search, alerts, dashboards, REST API, and summary indexing
  • Elasticsearch / OpenSearch: The L1ES plugin expands at the Lucene segment level -- each shard handles expansion locally with no central bottleneck. A 100ms query takes ~125ms. Scales horizontally with cluster size
  • Datadog, CloudWatch & managed Elasticsearch: Storage Streamer expands events before ingestion -- zero query overhead
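Conceptually, per-event expansion is the inverse of the compaction sketch: one template lookup plus placeholder substitution, which is why it is O(1) per event regardless of cache size. This is a hypothetical illustration, not the real SPL or Lucene-level implementation:

```python
# Hypothetical template cache, mirroring the compaction sketch.
# Expansion cost per event: one dict lookup + one substitution pass,
# independent of how many templates the cache holds.
templates = {0: "GET /api/users from <*> took <*> ms"}

def expand(template_id: int, values: list) -> str:
    out = templates[template_id]
    for v in values:
        out = out.replace("<*>", v, 1)  # fill placeholders left to right
    return out

assert expand(0, ["10.0.0.1", "42"]) == "GET /api/users from 10.0.0.1 took 42 ms"
```

In Splunk this lookup happens against KV Store templates in native SPL; in Elasticsearch/OpenSearch each shard does it locally at the Lucene segment level, so there is no central expansion bottleneck.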

How does this work with managed platforms (Elastic Cloud, Splunk Cloud, AWS OpenSearch Service)

The integration path depends on your platform:

| Platform | Expansion method | Query overhead |
| --- | --- | --- |
| Self-hosted ES 8.17+ / OpenSearch 2.19+ | L1ES plugin expands at search time | ~1.25x |
| Elastic Cloud, AWS OpenSearch Service, Coralogix, Logz.io | Storage Streamer expands from S3 before ingestion | Zero |
| Splunk Cloud | 10x for Splunk app installs via Splunkbase -- same as Enterprise | ~1.25x |
| Splunk Enterprise | 10x for Splunk app installs directly | ~1.25x |
| Datadog, CloudWatch | Storage Streamer expands from S3 before ingestion | Zero |

Edge apps (Reporter, Regulator, Optimizer) deploy identically regardless of platform -- the forwarder sidecar doesn't change. The difference is how compact events reach your analyzer.

What latency does optimization add to my pipeline

Single-digit millisecond processing per batch in most configurations. The optimization engine processes logs as they arrive without buffering delays.

The reduced data volume can improve end-to-end pipeline latency by decreasing network transfer time and SIEM indexing overhead.