Node Counting
CPU/memory per node, and how to count nodes across Retriever-only, MCP-only, analytics hosts, Lambda, and partial-rollout setups.
What are the CPU and memory requirements per node
You control both. The 10x Engine runs on the JVM (HotSpot or GraalVM), so resource allocation is explicit:
- Memory: Set via `-Xmx` (e.g., `-Xmx512m`). The JVM heap won't exceed this ceiling. 512 MB is a reasonable default for most workloads.
- CPU: Set via `threadPoolSize`, either a fixed thread count (e.g., `2`) or a fraction of available cores (e.g., `0.25` = 25%). One or two dedicated threads handle most production volumes.
A single node with 512 MB heap and 2 threads handles 100+ GB/day of log throughput. Backpressure throttles input gracefully if the pipeline approaches its resource limit — no crashes, no dropped logs.
Both values map directly to standard Kubernetes resource specs in your DaemonSet manifest. For throughput benchmarks, scaling tables, and architecture details, see the Performance FAQ.
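As a sketch of that mapping, assuming the engine reads its heap setting from a `JAVA_OPTS` environment variable (the image name, env var, and label names here are illustrative, not the documented Log10x manifest), the two knobs might sit in a DaemonSet like this:

```yaml
# Illustrative only: image, env var, and label names are assumptions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log10x-reporter
spec:
  selector:
    matchLabels:
      app: log10x-reporter
  template:
    metadata:
      labels:
        app: log10x-reporter
    spec:
      containers:
        - name: engine
          image: log10x/engine:latest    # assumed image name
          env:
            - name: JAVA_OPTS
              value: "-Xmx512m"          # JVM heap ceiling
          resources:
            requests:
              memory: "512Mi"            # match the -Xmx ceiling
              cpu: "500m"                # sized for ~2 worker threads
            limits:
              memory: "768Mi"            # headroom for off-heap JVM overhead
              cpu: "2"
```

Note the memory limit is set above `-Xmx`: the JVM needs off-heap room (metaspace, thread stacks) beyond the heap ceiling, so a limit equal to the heap invites OOM kills.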
What if I only use Retriever (no Edge)
Count the collectors shipping logs to S3. Those are your billing units. See Retriever for details.
What if I only use the MCP SIEM-polling tool (no DaemonSet)
The MCP server's SIEM-sample tool offers on-demand SIEM polling without a DaemonSet. Each concurrent 10x Engine instance launched by the tool counts as one node. This is the evolution of the old "Cloud Reporter" capability.
Do analytics servers (Splunk, Elasticsearch, Datadog) count as nodes
No. Only hosts running a Log10x collector count. None of your analytics infrastructure — Splunk indexers, Elasticsearch data nodes, Datadog Agent hosts — counts as a billing unit.
Example: 200 EKS worker nodes running Filebeat DaemonSets + 30 Elasticsearch data nodes + 3 Splunk indexers = 200 nodes. Only the worker nodes running log collection count.
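The arithmetic above reduces to a simple filter. A minimal sketch (a hypothetical helper, not a Log10x API):

```python
# Hypothetical sketch: only hosts running a log collector are billing units.
def count_billable_nodes(hosts):
    """Count hosts running a collector; analytics hosts are excluded."""
    return sum(1 for h in hosts if h.get("runs_collector", False))

fleet = (
    [{"role": "eks-worker", "runs_collector": True}] * 200       # collector DaemonSet
    + [{"role": "es-data-node", "runs_collector": False}] * 30   # Elasticsearch
    + [{"role": "splunk-indexer", "runs_collector": False}] * 3  # Splunk
)

print(count_billable_nodes(fleet))  # → 200
```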
Do Lambda functions count as nodes
No. Lambda invocations are not billing units. The DaemonSet Reporter and the Reducer sidecar both assume a long-lived host with a forwarder and don't apply to ephemeral functions.
For serverless workloads, use the MCP server's SIEM-sample tool (agentless SIEM polling) and Retriever (S3 archive + on-demand stream). Pricing is based on concurrent 10x Engine instances and Retriever pods in your cluster — not the number of Lambda functions.
What if I deploy only to a subset of nodes
Count only the nodes running the Log10x DaemonSet. If you use `nodeSelector` or `nodeAffinity` to deploy on 50 of your 200 worker nodes, your count is 50.
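A minimal sketch of such a partial rollout, using a standard Kubernetes `nodeSelector` (the `log10x=enabled` label is an illustrative choice, not a required name):

```yaml
# Label the 50 target workers, e.g.:
#   kubectl label node <node-name> log10x=enabled
# then restrict the DaemonSet's pod template to those nodes:
spec:
  template:
    spec:
      nodeSelector:
        log10x: enabled   # pods schedule only on nodes carrying this label
```

The DaemonSet then places exactly one pod per labeled node, so the labeled-node count and the billing count stay in lockstep.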
What if I use both the Reporter/Reducer and Retriever
Count your Reporter/Reducer collectors only. Retriever pods don't add to node count.
If you run the Reporter (DaemonSet) or Reducer (sidecar) on 200 nodes and also use Retriever, you pay for 200 nodes. Retriever and the MCP server are included at no extra charge.