Pricing
Predictable, node-based pricing that doesn't punish growth. Pay per infrastructure node, not data volume -- all products included, unlimited log volume.
Overview
How does node-based pricing work
A node = any host running a Log10x collector, regardless of instance size:
- Kubernetes: 1 worker node = 1 billing unit (DaemonSet deployment)
- VMs / bare metal: 1 host running a collector = 1 billing unit
- ECS: 1 container instance = 1 billing unit
Instance size (vCPU, memory) does not affect count. One question determines your tier: "How many nodes run log collection?"
Everything else is unlimited -- log volume, traffic spikes, applications, environments, queries, destinations.
How does this compare to volume-based pricing
Volume-based tools charge per GB. We charge per node.
When your traffic grows, their bill grows proportionally. Yours doesn't.
Splunk charges ~$6/GB ingested. Datadog charges ~$2.50/GB (ingestion + 30-day indexing). Cribl charges per GB processed plus per worker.
All three create unpredictable costs that spike during success or incidents. Log10x pricing is based on infrastructure capacity -- it stays predictable and doesn't punish growth.
What savings can I expect
Savings depend on which apps you deploy, your daily log volume, per-GB ingestion cost, and how much you divert to S3:
- Edge Optimizer -- lossless volume reduction (50-80% depending on log structure)
- Edge Regulator -- cost-based sampling of noisy event types
- Storage Streamer -- store in S3 at $0.023/GB, stream to your SIEM on demand
Use the ROI Calculator with your actual numbers for a personalized estimate.
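As a rough back-of-envelope illustration (the daily volume and reduction ratio below are assumptions, not a quote -- the $2.50/GB figure is the approximate Datadog rate cited above):

```shell
# Illustrative arithmetic only: 1,000 GB/day shipped to a ~$2.50/GB platform,
# with a 70% lossless reduction (within Edge Optimizer's 50-80% range).
saved=$(awk 'BEGIN {
  daily_gb = 1000      # GB ingested per day (assumed)
  per_gb   = 2.50      # $/GB at the downstream platform
  cut      = 0.70      # assumed reduction ratio
  printf "%.0f", daily_gb * per_gb * cut
}')
echo "estimated savings: \$${saved}/day"
# -> estimated savings: $1750/day
```

Plug your real volume, rate, and measured reduction ratio into the ROI Calculator instead of these placeholders.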
Which products are included
All of them. Every tier includes:
- All Edge apps (Reporter, Regulator, Optimizer)
- Storage Streamer (indexing + querying)
- Cloud Reporter
- 10x Console (dashboard, metrics, recommendations)
There are no feature gates or product tiers. You get everything.
Volume & Scaling
Is volume unlimited
Yes. Whether you process 1TB/day or 100TB/day, your price is determined only by node count.
Your bill stays the same regardless of log volume. We never charge per-GB for ingestion, processing, storage, or queries.
What if my log volume doubles
Your price doesn't change.
Log10x pricing is based on infrastructure capacity (nodes), not log volume. If you double your traffic but keep the same node count, your bill stays identical.
What about autoscaling
Autoscaling is free.
Your tier is based on P90 baseline node count over 30 days. Transient spikes from autoscaling or traffic surges don't affect your tier. Spot instances, preemptible nodes, and ephemeral nodes that exist only during scale-up events are not counted.
Black Friday, product launches, or incidents generate zero extra costs.
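The exact percentile math isn't documented here, but the idea can be sketched as follows (the sample numbers are assumptions):

```shell
# Sketch: 27 baseline days at 100 nodes, 3 autoscaling-spike days at 160.
# The P90 of the 30 daily counts ignores the spike days, so the tier stays at 100.
{ for d in $(seq 27); do echo 100; done; for d in $(seq 3); do echo 160; done; } > node_counts.txt
p90=$(sort -n node_counts.txt | awk '{v[NR] = $1} END { i = int(NR * 0.9); if (i < 1) i = 1; print v[i] }')
echo "P90 baseline: $p90 nodes"
# -> P90 baseline: 100 nodes
```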
Tiers & Billing
What if I exceed my tier
We notify you at 75%, 90%, and 100% of tier capacity. You get a 14-day grace period to upgrade. No surprise charges.
The Console shows your current usage in real time. Upgrading is one click and always manual -- we never auto-charge.
Your tier is based on P90 baseline, so temporary spikes don't trigger notifications.
How do I know how many nodes I have
For Kubernetes, run `kubectl get nodes` to see your worker node count.
For other architectures, count the hosts/VMs running your log collector (Fluentd, Fluent Bit, Filebeat, OTel Collector, Logstash).
The billing unit is the collector instance; for DaemonSet deployments, that equals your node count.
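A quick sketch of both counts (the kubectl line needs a live cluster; the inventory file is a made-up example):

```shell
# Kubernetes: billing units = worker nodes running the collector DaemonSet.
# (Shown for reference -- requires cluster access.)
#   kubectl get nodes --no-headers | wc -l

# VMs / bare metal: count hosts running a collector. Hypothetical inventory,
# one hostname per line:
printf 'app-vm-01\napp-vm-02\ndb-vm-01\n' > collector_hosts.txt
nodes=$(grep -c '' collector_hosts.txt)   # line count = host count
echo "billing units: $nodes"
# -> billing units: 3
```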
Do pods count separately from nodes
No. Kubernetes worker nodes count. Pods running on those nodes don't add to your count.
Example: If you have 100 worker nodes running 1,000 application pods, your node count is 100 -- not 1,100.
Is there a limit on applications or environments
No limits.
You can run unlimited applications across unlimited environments (dev, staging, prod, etc.). You pay only for collector nodes, regardless of how many apps or environments you monitor.
Can I upgrade or downgrade
Yes. One-click upgrade in the Console with pro-rated billing.
Downgrades are available at the end of your billing cycle.
Is there a free trial
Yes — two ways to start:
- Dev (free, no account) — download and run on your own log files locally. See your reduction ratio in minutes. No credit card, no signup.
- 14-day trial (full platform) — all products, all features, no limits. Credit card required. Cancel anytime.
Resource Footprint
What are the CPU and memory requirements per node
You control both. The 10x Engine runs on the JVM (HotSpot or GraalVM), so resource allocation is explicit:
- Memory: Set via `-Xmx` (e.g., `-Xmx512m`). The JVM heap won't exceed this ceiling. 512 MB is a reasonable default for most workloads.
- CPU: Set via `threadPoolSize` -- a fixed thread count (e.g., `2`) or a fraction of available cores (e.g., `0.25` = 25%). One or two dedicated threads handle most production volumes.
A single node with 512 MB heap and 2 threads handles 100+ GB/day of log throughput. Backpressure throttles input gracefully if the pipeline approaches its resource limit — no crashes, no dropped logs.
Both values map directly to standard Kubernetes resource specs in your DaemonSet manifest. For throughput benchmarks, scaling tables, and architecture details, see Performance FAQ.
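For example, the two knobs might land in a DaemonSet container spec like this (a sketch only -- the flag placement and resource figures are assumptions, not the verbatim Log10x schema):

```yaml
# Illustrative fragment: heap ceiling plus matching Kubernetes resource specs.
containers:
  - name: log10x-engine
    env:
      - name: JAVA_OPTS
        value: "-Xmx512m"   # JVM heap ceiling
    resources:
      requests:
        memory: "512Mi"
        cpu: "250m"
      limits:
        memory: "768Mi"     # heap plus JVM overhead
        cpu: "2"            # headroom for two engine threads
```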
Node Counting
What if I only use Storage Streamer (no Edge)
Count the collectors shipping logs to S3. Those are your billing units. See Storage Streamer for details.
What if I only use Cloud Reporter (no Edge)
Count concurrent 10x Engine instances sampling your log analytics platform. Each running instance (Lambda function, k8s CronJob) counts as one node. See Cloud Reporter for details.
Do analytics servers (Splunk, Elasticsearch, Datadog) count as nodes
No. Only hosts running a Log10x collector count. Your analytics infrastructure — Splunk indexers, Elasticsearch data nodes, Datadog Agent hosts — are not billing units.
Example: 200 EKS worker nodes running Filebeat DaemonSets + 30 Elasticsearch data nodes + 3 Splunk indexers = 200 nodes. Only the worker nodes running log collection count.
Do Lambda functions count as nodes
No. Lambda invocations are not billing units. Edge apps run as persistent sidecars and don't apply to ephemeral functions.
For serverless workloads, use Cloud Reporter and Storage Streamer. Pricing is based on the pods running these apps in your cluster — not the number of Lambda functions.
What if I deploy only to a subset of nodes
Count only the nodes running the Log10x DaemonSet. If you use `nodeSelector` or `nodeAffinity` to deploy on 50 of your 200 worker nodes, your count is 50.
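A minimal sketch of that restriction (the label name is hypothetical):

```yaml
# DaemonSet spec fragment: schedule the collector only on labeled nodes,
# so only those nodes become billing units.
spec:
  template:
    spec:
      nodeSelector:
        log10x/collect: "true"   # hypothetical label applied to the 50 nodes
```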
What if I use both Edge and Storage Streamer
Count your Edge collectors only. Cloud products don't add to node count.
If you run 10x Edge on 200 nodes and also use Storage Streamer, you pay for 200 nodes. Storage Streamer and Cloud Reporter are included at no extra charge.