Optimization
How the 10x for Splunk app expands compact events at search time, search-time overhead, potential license-tier reduction, and what happens to events the Reducer filters out.
How does the 10x for Splunk app expand optimized events
Transparent search-time expansion. The open-source 10x for Splunk app automatically expands compact events before displaying results.
How it works:
- Search Hook intercepts all `/search/jobs` requests
- REST handler transforms SPL to include the `tenx-inflate` macro
- Macro joins compact events with templates from KV Store
- Full-fidelity events returned with original field names and values
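The SPL rewrite step can be sketched as a string transformation. This is an illustrative sketch only: the real REST handler's rewrite rules are internal to the app, and everything here except the `tenx-inflate` macro name is a hypothetical stand-in.

```python
def rewrite_spl(spl: str, macro: str = "tenx-inflate") -> str:
    """Sketch of the search-time rewrite: insert the expansion macro
    right after the base search clause, so downstream commands
    (stats, table, etc.) operate on expanded, full-fidelity events.
    The actual app's rewrite logic is internal; this is illustrative."""
    base, _, rest = spl.partition("|")
    rewritten = f"{base.strip()} | `{macro}`"
    if rest:
        rewritten += f" | {rest.strip()}"
    return rewritten

print(rewrite_spl("index=tenx_encoded error | stats count by host"))
# index=tenx_encoded error | `tenx-inflate` | stats count by host
```

Because the macro runs before any generating or transforming commands, existing dashboards and alerts see the same fields they would on unoptimized data.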
Storage architecture:
- Templates stored in the `tenx_kvdml` KV Store collection
- Compact events stored in the `tenx_encoded` index
- Hash references link events to their templates
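The hash-reference join can be illustrated with a minimal sketch. The template text, field names, and record shape below are hypothetical; the actual KV Store schema is defined by the app.

```python
# Stand-in for the tenx_kvdml KV Store collection: hash -> template.
templates = {
    "a1b2c3": "{ts} {host} GET /api/v1/users status=200 bytes={bytes}",
}

def expand(compact_event: dict) -> str:
    """Rebuild a full-fidelity event: a primary-key lookup fetches the
    template, then the event's retained variable fields fill the slots.
    Record shape and field names here are illustrative."""
    template = templates[compact_event["tmpl_hash"]]  # KV Store PK lookup
    return template.format(**compact_event["fields"])

event = {"tmpl_hash": "a1b2c3",
         "fields": {"ts": "2024-05-01T12:00:00Z", "host": "web-1", "bytes": "512"}}
print(expand(event))
# 2024-05-01T12:00:00Z web-1 GET /api/v1/users status=200 bytes=512
```

Only the variable fields are stored per event; the constant text lives once in the template, which is where the storage savings come from.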
Built-in Analytics Dashboard shows:
- Total compact events and active templates
- Reduction ratio and storage savings
- Event volume trends over time
- Top templates by usage
- Expansion success rate
User experience: Completely transparent. Users search, build dashboards, and configure alerts exactly as before, on the original full-fidelity data.
Open source: Available on GitHub.
What is the search-time overhead in Splunk
A one-time template resolution (~0.5–2s per search) matches search terms against the template index. Per-event expansion uses a KV Store primary-key lookup and native SPL functions — negligible overhead per event. Queries, dashboards, and alerts work unchanged.
The 10x Engine processes events at sub-millisecond per event — 100+ GB/day on a single node (512 MB heap, 2 threads). For resource requirements, scaling tables, and architecture details, see Performance FAQ.
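Rough arithmetic shows why sub-millisecond per-event processing is consistent with 100+ GB/day on one node. The average event size is an assumption (log events commonly run a few hundred bytes); only the 100 GB/day figure comes from the text above.

```python
gb_per_day = 100               # throughput figure from the docs
avg_event_bytes = 500          # assumption: typical log event size
events_per_day = gb_per_day * 1024**3 / avg_event_bytes
events_per_sec = events_per_day / 86_400
us_per_event = 1e6 / events_per_sec  # time budget per event, microseconds
print(f"{events_per_sec:,.0f} events/s, ~{us_per_event:.0f} µs/event budget")
```

At roughly 2,500 events/s the per-event budget is about 400 µs, comfortably within "sub-millisecond per event".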
Can Log10x reduce our Splunk license tier
Yes. A 30-60% volume reduction can move you to a lower license tier. See pricing for details.
Example:
- Before: 550 GB/day, paying for 500 GB tier ($150K/year) + overage penalties
- After Log10x: 320 GB/day, drops to lower tier
- Result: $110K+ annual savings
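The volume figures in the example check out against the stated 30-60% range. A quick calculation (the dollar savings depend on Splunk's tier pricing and any overage penalties, so they are not recomputed here):

```python
before_gb, after_gb = 550, 320           # GB/day from the example above
reduction = (before_gb - after_gb) / before_gb
print(f"{reduction:.0%} volume reduction")  # ~42%, inside the 30-60% range
```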
License renewal strategy: Deploy Log10x 2-3 months before renewal to demonstrate sustained reduction. Negotiate your new tier based on 6-month average post-optimization.
Typical deployment timeline:
- Day 1 (15 min): Agentless cost analysis via the MCP server's SIEM-sample tool (read-only Splunk REST API query). Or deploy the Reporter DaemonSet for pre-SIEM cost visibility.
- Week 1 (30 min): Deploy the Reducer in Compact mode alongside your forwarders via Helm
- Week 2-3: Measure sustained reduction, validate with Splunk license usage reports
- Renewal: Negotiate new tier based on demonstrated lower ingestion
Splunk Cloud: Works with Ingest-based pricing. Directly reduces GB ingested, lowering monthly costs proportionally.
What happens to logs filtered by the Reducer
The Reducer in Filter mode identifies low-priority logs (excessive debug, health checks, noise) based on your configured budget and severity thresholds. You control what happens to the filtered logs:
- Archive to S3/object storage: Route to low-cost storage for compliance. Query via Athena or rehydrate to Splunk on-demand.
- Route to different Splunk index: Send to a cheaper "cold" index with longer retention but lower priority.
- Drop completely: Eliminate entirely after a validation period.
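The routing decision above can be sketched as a simple policy function. This is purely illustrative: the field names, severity ranks, and destination labels are hypothetical, and the actual Reducer is configured through its own budget and severity settings, not Python.

```python
SEVERITY_RANK = {"debug": 0, "info": 1, "warn": 2, "error": 3}

def route(event: dict, min_severity: str = "info",
          noisy_sources: frozenset = frozenset({"healthcheck"})) -> str:
    """Illustrative filter-mode policy: return 'drop' for known noise,
    'archive' for events below the severity threshold, 'splunk' otherwise.
    All names and thresholds here are hypothetical."""
    if event.get("source") in noisy_sources:
        return "drop"      # known noise, eliminated after validation
    if SEVERITY_RANK[event["severity"]] < SEVERITY_RANK[min_severity]:
        return "archive"   # low priority, kept cheaply for compliance
    return "splunk"        # high value, keep in the hot index

print(route({"source": "app", "severity": "debug"}))  # archive
```

In practice the threshold would be driven by the configured budget, tightening or loosening as ingestion approaches the daily limit.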
The Reducer exports cost metrics per event type -- volume filtered, spend rate, and sampling ratios -- queryable via the Prometheus Metrics API and ROI Analytics dashboards.