# FAQ
This page covers general questions about Log10x — what it does, how it's priced, and how it compares to other tools. Each 10x app has its own dedicated FAQ page. For platform-specific questions, see Solutions FAQ.
- What 10x does, how it compares, and how to get started.
- Node-based pricing, tiers, billing, and volume limits.
- Data protection, compliance, encryption, and deployment options.
- Resource limits, failure modes, scaling, and pre-production testing.
- Dev, Cloud Reporter, Edge Reporter, Regulator, Optimizer, Storage Streamer.
- Splunk, Datadog, Elasticsearch, and CloudWatch integrations.
## General

### What is 10x?
A stream processor that analyzes code and binaries to turn log events into typed, class-based objects — not raw text — eliminating 50–80% of log storage and analytics costs.
A lightweight, portable data engine that runs where your data is — locally, at the edge, or in the cloud. Works with your existing infrastructure — log forwarders (Fluentd, OTel, Fluent Bit), log analyzers (Splunk, Datadog, Elastic), and storage (S3, Azure Blobs). See Overview.
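The difference between raw text and typed objects can be sketched as follows. The class and field names here are invented for illustration and are not the actual 10x object model:

```javascript
// Illustration only: a hypothetical typed event class. Downstream
// consumers read fields directly instead of re-parsing text.
class HttpRequestEvent {
  constructor(method, path, status, durationMs) {
    this.method = method;
    this.path = path;
    this.status = status;
    this.durationMs = durationMs;
  }
}

// Raw text: every consumer must regex-parse this line again.
const raw = "2025-05-01T12:00:00Z INFO GET /api/users 200 12ms";

// Typed object: direct field access, no parsing downstream.
const event = new HttpRequestEvent("GET", "/api/users", 200, 12);
console.log(event.status, event.durationMs); // 200 12
```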
### What does 10x do?
:material-tools-outline: 10x learns the structure of every log event your environment produces — from source code, container images, and Helm charts. At runtime, it uses that knowledge to classify and optimize events without regex or manual rules.
Four apps, each tackling a different part of the cost problem:

- Reporter — identifies which event types incur the highest costs
- Regulator — caps noisy event types before they ship
- Optimizer — losslessly compacts events 50%+ before ingestion
- Storage Streamer — stores everything in S3 and streams selected data on-demand
### Who is 10x for?
SRE, DevOps, and FinOps teams facing:
- High costs from Splunk, Datadog, or Elastic for both hardware and licensing
- Billing spikes during traffic surges or incidents
- Storage bloat from verbose logs in k8s and microservices
### What makes 10x different?
**No parsing rules**: Pipeline tools — Cribl, Logstash, OTel Collector, Vector — require regex, grok, VRL, or OTTL rules for every log format. Those rules break when code changes and need dedicated pipeline engineering to maintain. The 10x compiler builds a symbol vocabulary from repos and containers. The JIT stream processor uses those symbols to recognize log structure and assign cached hidden classes at runtime — no regex, no grok, no per-format rules.

**Optimize everywhere**: The hybrid AOT/JIT engine powers automatic log data optimization at every stage of the pipeline. Test locally, report on costs and regulate billing spikes at the edge, losslessly compact before shipping, ingest from S3 on-demand.

**BYO stack**: Works with your existing log forwarders, analyzers, time-series databases, object storage, and compute (K8s, Lambda, EC2). No migration required.

**Zero egress**: The engine runs as a lightweight runtime inside your infrastructure. Log data never leaves your environment. No vendor access to your logs is required.

**Powering agents**: Every event exits the 10x Engine as a typed object with direct field access — not raw text to parse. Aggregation condenses millions of events into compact summaries, so AI agents operate on structured data instead of burning tokens on raw log lines, without exposing customer data to external models.

**Predictable pricing**: Commercial tools price per GB ingested, so costs spike with traffic. 10x is priced per infrastructure node running log collection. Volume spikes, new applications, and traffic surges have no impact on cost.
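The hidden-class mechanism described above can be sketched roughly like this. The template table and prefix matching are deliberate simplifications: the real engine matches against a compiled symbol vocabulary, not string prefixes:

```javascript
// Simplified sketch: each known log template gets one cached "hidden
// class"; events matching a template reuse it instead of being
// re-analyzed. The template definitions here are hypothetical.
const templates = [
  { id: "user.login", prefix: "User logged in:" },
  { id: "cache.miss", prefix: "Cache miss for key" },
];

const classCache = new Map(); // template id -> cached class object

function classify(line) {
  const tpl = templates.find((t) => line.startsWith(t.prefix));
  if (!tpl) return { classId: "unknown", raw: line };
  if (!classCache.has(tpl.id)) {
    classCache.set(tpl.id, { classId: tpl.id }); // built once, reused
  }
  return { ...classCache.get(tpl.id), payload: line.slice(tpl.prefix.length).trim() };
}

console.log(classify("User logged in: alice"));
// { classId: 'user.login', payload: 'alice' }
```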
### What tools does 10x work with?
The 10x Engine fits your existing infrastructure. A modular extension framework supports integration across your stack:
- Log forwarders — Fluentd, Fluent Bit, OTel, Filebeat, Logstash, Splunk UF
- Log analyzers — Splunk, Datadog, Elastic, CloudWatch
- Time-series — Prometheus, Datadog, Grafana, SignalFx
- Object storage — S3, Azure Blobs, GCS
- Compute — K8s, Lambda, EC2, Docker
### How does the AOT compiler work?

The compiler builds a symbol vocabulary from your deployment artifacts in your CI/CD pipeline:
- Helm charts — resolves all image references, pulls and scans each one
- Docker images — pulls and scans any container image directly
- GitHub repos — extracts symbols from source code and compiled binaries
- Artifactory — pulls from artifact repositories
The runtime ships with a default symbol library covering 130+ open-source frameworks (Spring Boot, Django, Express, Kafka, Kubernetes, and more). Most environments work out of the box — custom compilation is only needed for proprietary logging frameworks or application-specific log formats.
The compiler commits custom symbol libraries to GitHub as part of its pipeline. Edge and cloud apps pull them automatically via the @github launch macro at startup and poll for changes at a configurable interval. See compiler workflow for the full CI/CD → Pull → Scan → Link → Push → Distribute pipeline.
The compiler runs in your infrastructure — source code and binaries never leave your network. The compiler itself is free to run with no credits or metered usage.
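Conceptually, symbol extraction from source looks something like the sketch below. The real compiler also scans binaries and container images and does far more; the regex here is a deliberate oversimplification:

```javascript
// Naive sketch: harvest logger format strings ("symbols") from source
// text. These strings are what lets the runtime recognize events
// without per-format regex rules.
function extractSymbols(source) {
  const symbols = [];
  const call = /log\.\w+\("([^"]+)"/g; // e.g. log.info("Order {} shipped")
  let m;
  while ((m = call.exec(source)) !== null) symbols.push(m[1]);
  return symbols;
}

const src = `
  log.info("Order {} shipped to {}");
  log.warn("Retrying payment {}");
`;
console.log(extractSymbols(src));
// [ 'Order {} shipped to {}', 'Retrying payment {}' ]
```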
## Pricing

### How is 10x priced?
Node-based. Pay per infrastructure node running log collection, not per byte ingested. Log volume spikes, traffic surges, and new applications have no impact on cost.
See Pricing for tiers, node counting, and what's included. See Pricing FAQ for detailed questions about autoscaling, Lambda, mixed environments, and more.
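A toy comparison of the two pricing models makes the difference concrete. All rates below are made up for illustration and are not actual 10x or vendor prices:

```javascript
// Hypothetical rates, for illustration only.
const perGbRate = 2.5;  // $/GB ingested (volume-based vendor)
const perNodeRate = 50; // $/node/month (node-based)
const nodes = 20;

function monthlyCost(gbIngested) {
  return {
    volumeBased: gbIngested * perGbRate, // grows with traffic
    nodeBased: nodes * perNodeRate,      // flat regardless of volume
  };
}

console.log(monthlyCost(1000)); // { volumeBased: 2500, nodeBased: 1000 }
console.log(monthlyCost(3000)); // spike: volumeBased triples, nodeBased unchanged
```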
### Where does 10x run?

In your own infrastructure: locally, at the edge, or in the cloud, on the compute you already operate (K8s, EC2, Docker). Log data never leaves your environment.
### Does 10x work with Lambda and serverless?

Yes, with a different deployment model. Edge apps run as persistent sidecars, so they don't apply directly to ephemeral Lambda functions. For serverless workloads:
- Cloud Reporter — analyzes Lambda log costs via your log analyzer's API (Datadog, CloudWatch, Splunk). Read-only, no Lambda modification needed
- Storage Streamer — archives Lambda logs to S3 and streams selected events to your analyzer on-demand
Pricing for serverless is based on Cloud Reporter and Storage Streamer pods in your cluster, not on the number of Lambda functions.
## Comparisons

### How does 10x differ from log forwarders (Fluentd, OTel, Fluent Bit)?
Complementary. 10x runs as a sidecar alongside your existing log forwarder, adding cost analysis, regulation, and structured optimization before logs ship downstream.
Log forwarders collect, route, and do basic parsing. 10x adds cost awareness (which event types cost the most), regulation (cap noisy types during spikes), and structured optimization (50%+ lossless volume reduction). No forwarder replacement needed.
### How does 10x differ from Cribl?

**No parsing rules** — Pipeline tools such as Cribl, OTel Collector, Vector, and Tero require regex, grok, VRL, or OTTL rules for every log format. Those rules break when code changes and need dedicated pipeline engineering to maintain. The 10x compiler builds a symbol vocabulary from repos and containers. The JIT stream processor uses those symbols to recognize log structure and assign cached hidden classes at runtime — no regex, no grok, no per-format rules.

**Predictable pricing** — Pay per infrastructure node running log collection, not per byte ingested. Log volume spikes, traffic surges, and new applications have no impact on cost.
### How does 10x differ from log analytics tools (Splunk, Datadog, Elastic)?
10x reduces the cost of log analytics without replacing them. Your SIEM configuration, dashboards, queries, and alerts all continue working unchanged.
- Reporter — identifies which event types incur the highest costs
- Regulator — caps noisy event types before they ship
- Optimizer — losslessly compacts events 50%+ before ingestion
- Streamer — stores everything in S3, streams selected data to your SIEM on-demand
The open-source 10x for Splunk app auto-expands compact logs at search time. For other SIEMs, Storage Streamer expands and streams events on-demand.
### How does 10x differ from APMs and OpenTelemetry?
Different goals. APMs (Dynatrace, AppDynamics) and OTel add instrumentation to your applications — more logging, more profiling, more tracing data. 10x processes the data your environment already produces to reduce its cost.
10x is agentless — no SDKs, no bytecode injection, no runtime overhead. Your application code runs exactly as written. 10x processes the output downstream as a sidecar alongside your log forwarder.
The two are complementary: use APMs and OTel for application insights, then use 10x to reduce the cost of storing and analyzing that telemetry.
## Getting Started

### Where should I start?
- Dev (free, no account) — run on your own log files locally. See your reduction ratio in minutes
- Cloud Reporter — connects to your SIEM via read-only API. No agents, no forwarder changes. Deploy in 15 minutes
Both publish metrics to ROI Analytics — per-app Grafana dashboards showing cost per application, volume by severity, and top expensive patterns. Act on the findings with Edge Optimizer, Edge Regulator, or Storage Streamer.
### How do I search compact events in my log platform?
Depends on your log platform and which app you use:
- Splunk: The open-source 10x for Splunk app transparently expands compact events at search time. Queries, dashboards, and alerts work unchanged
- Datadog: Edge Regulator sends events to Datadog in standard log format. Edge Optimizer output goes to S3 only — Datadog cannot parse compact events directly. Storage Streamer expands and streams to Datadog on-demand
- Elasticsearch: Reporter and Regulator output standard Elasticsearch documents. Optimizer output is expanded by Storage Streamer, or via the L1ES Lucene plugin for self-hosted deployments
- CloudWatch: Events arrive in standard CloudWatch format. Logs Insights queries, dashboards, metric filters, and alarms work unchanged
## Extensibility

### How do I extend 10x with custom integrations?
:material-puzzle-multiple-outline: Build custom input/output integrations using the 10x API framework:
| Integration Type | API | Description |
|---|---|---|
| Log forwarders | InputStream or Log4j2 | Read from or write to custom log forwarding tools |
| Log analyzers | Apache Camel | Read from 400+ analytics sources via YAML routes |
| Object storage | Object Storage | Index and query from GCP Storage, MinIO, etc. |
| Regulator modules | JavaScript | Define custom regulation rules and filters |
| Launcher types | Launcher | Deploy in k8s, Quarkus, or CLI |
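As a sketch of what a JavaScript regulator module might look like: the `onEvent` hook name, the event shape, and the `"forward"`/`"drop"` verdicts below are assumptions, not the documented 10x module interface:

```javascript
// Hypothetical regulator module: cap a noisy event class per window.
// The onEvent hook, event fields, and verdict strings are invented
// for illustration.
const limits = { "cache.miss": 100 }; // max events per window
const counts = new Map();

function onEvent(event) {
  const limit = limits[event.classId];
  if (limit === undefined) return "forward"; // unregulated class
  const n = (counts.get(event.classId) || 0) + 1;
  counts.set(event.classId, n);
  return n > limit ? "drop" : "forward";
}

console.log(onEvent({ classId: "cache.miss" })); // 'forward'
```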