Comparisons

How Log10x compares with Splunk Ingest Actions, Cisco Data Fabric / Federated Search, and Splunk Edge Processor, plus a supported-version matrix and the recommended integration order.

Log10x vs Splunk Ingest Actions

Complementary, not competitive. Ingest Actions run on Heavy Forwarders/Indexers (within Splunk's license boundary). Log10x optimizes before data reaches any Splunk component.

Key differences:

  • Processing location: Ingest Actions runs on the Heavy Forwarder; Log10x runs pre-Splunk, at the log source
  • License impact: data reaching Ingest Actions has already counted toward the license; the 10x Engine reduces data before it counts
  • Deduplication: not supported by Ingest Actions; supported by the 10x Engine (template-based)
  • Vendor lock-in: Ingest Actions is Splunk-only; the 10x Engine works with any destination

Use together: Ingest Actions for Splunk-specific parsing/routing after ingestion. 10x Engine for cost optimization before ingestion.

Note: Splunk Ingest Actions requires Enterprise 9.0+ and Heavy Forwarders. 10x Engine works with Splunk Cloud and Enterprise.

Cisco Data Fabric / Federated Search vs Retriever

Federated Search brute-force scans your raw data in S3 at roughly 100 seconds per TB, with no indexes and unpredictable costs. Retriever indexes at upload time and streams only what you need.

| | Federated Search | Retriever |
| --- | --- | --- |
| Query method | Brute-force scan (~100s/TB) | Bloom filter index lookup -- instant |
| Cost model | DSU pricing (undisclosed, per-scan) | $0.023/GB stored in your S3 |
| Output control | All results returned | Cost-aware regulated streaming (severity-boosted, budget-capped) |
| Limits | 10 TB per search, 100K events default | No archive size limits |
| Platform | Splunk Cloud on AWS only | Splunk Cloud + Enterprise, AWS + Azure, on-prem + air-gapped |

Log10x vs Splunk Edge Processor

Edge Processor needs hand-written SPL2 rules and ships kept events at full size. Log10x regulates automatically by cost and severity per event type, enforces per-app K8s budgets, and losslessly compacts what it ships by 50%+ (a budget-config sketch follows the table).

| | Edge Processor | Log10x |
| --- | --- | --- |
| Filtering | Hand-written SPL2 rules per pattern | Automatic cost-aware sampling per event type, severity-aware (ERRORs kept, DEBUG throttled first) |
| Budget control | None | Per-app K8s budgets -- scaling pods doesn't bypass limits |
| Compact mode | None -- kept events ship at full size | Lossless 50%+ reduction without dropping events |
| Sources | Splunk forwarders only | Any forwarder (UF, Fluent Bit, OTel, Vector, Logstash) |
| Destinations | Splunk and S3 | Splunk, Datadog, Elastic, S3, Prometheus |
| Fleet config | Per-forwarder SPL2 pipelines | Environment-wide GitOps driven by cost metrics |
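
The per-app budget model is easiest to see as configuration. Below is a minimal sketch of what a Reducer Helm values file could look like; the chart name (log10x/reducer) and every key shown are illustrative assumptions, not the actual chart schema.

```sh
# Hypothetical per-app budgets for the Reducer Helm chart.
# All key names (budgets, dailyGB, severityFloor) are illustrative.
cat > budget-values.yaml <<'EOF'
budgets:
  checkout-service:
    dailyGB: 20          # hard daily cap, shared across every pod of the app
    severityFloor: ERROR # events at or above this severity are never sampled
  inventory-service:
    dailyGB: 5
EOF

# Applied through the normal GitOps flow; shown inline here for clarity
helm upgrade --install log10x-reducer log10x/reducer -f budget-values.yaml
```

Because the cap is per app rather than per forwarder, an autoscaling event that doubles the pod count leaves the 20 GB budget unchanged.
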
Splunk version compatibility & integration order

Version compatibility:

| Component | Minimum | Tested | Supported |
| --- | --- | --- | --- |
| Splunk Cloud | 9.0 | 9.0, 9.1 | 9.0+ |
| Splunk Enterprise | 8.2 | 8.2, 9.0, 9.1 | 8.2+ |
| Universal Forwarder | 8.0 | 8.1, 9.0, 9.1 | 8.0+ |
| Fluentd (if used) | 1.14 | 1.14, 1.16 | 1.14+ |
| Fluent Bit (if used) | 2.0 | 2.0, 2.1 | 2.0+ |
| Helm (K8s deployment) | 3.0 | 3.0, 3.12 | 3.0+ |
| Kubernetes | 1.20 | 1.24, 1.27 | 1.20+ |

Integration order (safe sequence):

  1. Verify prerequisites (~10 minutes)
     - [ ] Admin access to Splunk instance
     - [ ] HEC token configured (or ability to create one)
     - [ ] K8s cluster with 4GB+ available memory (DaemonSet test)
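
     These checks can be scripted. A minimal sketch, assuming a reachable HEC endpoint; the host, port, and token below are placeholders:

     ```sh
     # HEC liveness (the /services/collector/health endpoint needs no auth)
     curl -sk https://splunk.example.com:8088/services/collector/health

     # Confirm the token accepts events (substitute your HEC token)
     curl -sk https://splunk.example.com:8088/services/collector/event \
       -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
       -d '{"event": "log10x preflight test"}'

     # Allocatable memory per node (need 4GB+ headroom for the DaemonSet test)
     kubectl get nodes -o custom-columns=NODE:.metadata.name,MEM:.status.allocatable.memory
     ```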

  2. Deploy Reducer (~15 minutes)
     - [ ] Deploy via Helm with --dry-run first to verify manifests
     - [ ] Apply Helm chart alongside forwarder (sidecar)
     - [ ] Verify pods are running: kubectl get pods -l app=log10x-reducer
     - [ ] Check logs for startup errors: kubectl logs -l app=log10x-reducer
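
     A sketch of the deployment commands; the chart repo and name (log10x/reducer) are assumptions, so substitute whatever your distribution provides:

     ```sh
     # Render manifests without applying anything
     helm install log10x-reducer log10x/reducer --dry-run --debug

     # Install for real once the rendered manifests look right
     helm install log10x-reducer log10x/reducer

     # Verify pods and watch for startup errors
     kubectl get pods -l app=log10x-reducer
     kubectl logs -l app=log10x-reducer --tail=50
     ```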

  3. Install Splunk App (~10 minutes)
     - [ ] Download 10x for Splunk from GitHub
     - [ ] Install via Splunk UI: Apps → Install app from file
     - [ ] Verify installation: Settings → Installed apps (should show "10x for Splunk")
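
     If you prefer the command line to the UI, Splunk's CLI can install the package directly. A sketch; the package filename is an assumption:

     ```sh
     # Install the app package (requires admin credentials)
     $SPLUNK_HOME/bin/splunk install app /tmp/log10x-for-splunk.tgz -auth admin:changeme

     # Restart so the app's knowledge objects load
     $SPLUNK_HOME/bin/splunk restart
     ```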

  4. Configure KV Store (~15 minutes)
     - [ ] Create KV Store collection kvdml with required schema
     - [ ] Verify collection: Settings → Collections → Data models
     - [ ] Test: Run a simple search to populate collection
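
     The collection can also be created over Splunk's REST API rather than the UI. A sketch; the app namespace (log10x_for_splunk) and the field names are placeholders -- the required schema comes from the app's documentation:

     ```sh
     # Create the kvdml collection in the app's namespace
     curl -sk -u admin:changeme \
       https://splunk.example.com:8089/servicesNS/nobody/log10x_for_splunk/storage/collections/config \
       -d name=kvdml

     # Define the schema; field names here are placeholders
     curl -sk -u admin:changeme \
       https://splunk.example.com:8089/servicesNS/nobody/log10x_for_splunk/storage/collections/config/kvdml \
       -d field.template_id=string \
       -d field.template=string
     ```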

  5. Validate end-to-end (~10 minutes)
     - [ ] Query compact events in Splunk
     - [ ] Verify events expand automatically at search time
     - [ ] Check that dashboards display expanded events correctly
     - [ ] Confirm reduction ratio in logs
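
     A sketch of the validation from the CLI; the index name and the log string grepped for are assumptions:

     ```sh
     # Pull a few recent events and confirm they arrive expanded
     $SPLUNK_HOME/bin/splunk search 'index=main earliest=-15m | head 5' -auth admin:changeme

     # Look for the reduction ratio reported in the Reducer logs
     kubectl logs -l app=log10x-reducer --tail=200 | grep -i reduction
     ```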

Rollback procedure (if needed):

  • Remove Reducer: helm uninstall log10x-reducer (logs resume at full volume)
  • Remove app: Splunk UI → Apps → Uninstall 10x for Splunk
  • Both actions are safe: no configuration changes needed, no data loss
  • Timeline: ~2 minutes total
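
A sketch of the Reducer half of the rollback, with a verification step added:

```sh
# Remove the Reducer; forwarders keep shipping, now at full volume
helm uninstall log10x-reducer

# Confirm nothing is left running
kubectl get pods -l app=log10x-reducer
```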

Compatibility checklist before full deployment:

  - [ ] Test on non-production environment first
  - [ ] Verify with actual log volume from your apps
  - [ ] Confirm reduction ratio meets expectations (typically 50-70%)
  - [ ] Run for 24+ hours in shadow mode to detect issues
  - [ ] Validate alerting rules still work
  - [ ] Test saved search performance (should be similar)
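
To confirm the reduction ratio with numbers rather than by eye, compare licensed ingest before and after. A sketch using Splunk's standard license usage log (the index, source, and field names are Splunk's documented ones; the time window is up to you):

```sh
# Daily indexed GB per index, from Splunk's license usage log
$SPLUNK_HOME/bin/splunk search \
  'index=_internal source=*license_usage.log type=Usage earliest=-7d
   | eval GB=b/1024/1024/1024
   | timechart span=1d sum(GB) by idx' \
  -auth admin:changeme
```

Run it for a window before the Reducer went live and again after; the drop should line up with the 50-70% expectation above.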