Splunk

Cut Splunk costs 50-80%. Go beyond Federated Search and Edge Processor -- index your S3, stream regulated data, and optimize events losslessly across Splunk Cloud and Enterprise.

Compatibility

Does 10x work with my existing Splunk dashboards and queries

Yes. Log10x preserves all Splunk field mappings and metadata.

Fields preserved:

  • _time, host, source, sourcetype
  • _raw, index
  • All field extractions (props.conf, transforms.conf)
  • Custom metadata and tags

Functionality preserved:

  • Saved searches work identically
  • Dashboards and alerts require zero changes
  • SPL queries return same results
  • Report scheduling unchanged

How optimization works: The 10x Engine uses template-based deduplication, not field removal. Repeated log events are compacted via references to templates while maintaining full searchability in Splunk.
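
Conceptually, template-based deduplication can be sketched in a few lines. This is an illustrative model, not the 10x Engine's actual implementation: the function names (`split_event`, `encode`, `decode`) and the rule for what counts as a variable are assumptions; only the `~hash,var1,var2` wire shape mirrors the encoded-event examples later in this document.

```python
# Illustrative sketch of template-based deduplication (NOT the 10x Engine's
# real code): repeated events share one stored template, and each event
# becomes a hash reference plus its variable parts.
import hashlib
import re

def split_event(raw: str):
    """Split a log line into a template (constants) and variables.
    Here IPs and numbers are treated as variables; real systems use
    learned templates."""
    var_pattern = re.compile(r"\d+\.\d+\.\d+\.\d+|\d+")
    variables = var_pattern.findall(raw)
    template = var_pattern.sub("%s", raw)
    return template, variables

def encode(raw: str, store: dict) -> str:
    template, variables = split_event(raw)
    h = hashlib.sha256(template.encode()).hexdigest()[:12]
    store.setdefault(h, template)          # template stored exactly once
    return "~" + h + "," + ",".join(variables)

def decode(encoded: str, store: dict) -> str:
    h, _, rest = encoded[1:].partition(",")
    template = store[h]
    for v in (rest.split(",") if rest else []):
        template = template.replace("%s", v, 1)   # re-insert variables in order
    return template

store = {}
e1 = encode("User 101 logged in from 192.168.1.1", store)
e2 = encode("User 202 logged in from 10.0.0.7", store)
assert len(store) == 1                     # both events share one template
assert decode(e1, store) == "User 101 logged in from 192.168.1.1"
```

The key property shown: decoding is lossless, so search-time expansion can restore the original event byte-for-byte.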

Does this work with Universal Forwarders

Yes. Log10x Edge apps deploy as sidecars to Universal Forwarders.

Kubernetes deployment:

  • DaemonSet alongside Universal Forwarder
  • Logs forwarded to 10x via file output or stdout
  • 10x optimizes and forwards to Splunk HEC

VM deployment:

  • Local process alongside Universal Forwarder
  • Reads from forwarder output directory
  • Optimizes and forwards to HEC

Compatibility: Universal Forwarder 8.x, 9.x. No changes to Universal Forwarder configuration required. Works with existing HEC endpoints and token authentication.

How does this integrate with Splunk HEC

The 10x Engine sits between your log forwarder and Splunk HEC:

App → Fluentd/UF → 10x Edge Optimizer → Splunk HEC → Indexer

Edge deployment:

  • Deploys as sidecar to Universal Forwarders or alongside Fluentd/OTel
  • Optimizes logs in-memory before forwarding to HEC
  • Uses standard HEC token authentication
  • Can also forward via splunktcp to indexers (standard UF protocol)

Cloud deployment:

Compatibility: Works with Splunk Cloud Platform (all regions), Splunk Enterprise 8.x and 9.x. No changes to HEC configuration required.

Works with Splunk Enterprise on-premises

Yes. Works with both Splunk Cloud and Splunk Enterprise on-premises.

Splunk Enterprise compatibility:

  • Versions: 8.x, 9.x
  • Deployment: Single instance, distributed, clustered
  • Authentication: HEC token or Heavy Forwarder S2S
  • Integration: Edge apps forward to HEC or Heavy Forwarders

Cloud Reporter for on-premises:

  • Queries Splunk Enterprise REST API
  • Requires admin or power user credentials
  • Deploys as pod in your infrastructure
  • No data egress to external systems

Storage Streamer for on-premises: Works with AWS S3, Azure Blobs, and any S3-compatible object storage. Streams archived logs to Splunk Enterprise HEC endpoints on-demand.

Splunk Cloud: KV Store Setup & Pilot Checklist

The 10x for Splunk app expands compact events at search time using a KV Store collection. Here's the complete setup and pilot validation checklist for Splunk Cloud.

Before You Start

  • Admin or Power User access to your Splunk Cloud instance
  • A Splunk Cloud instance that supports KV Store (all modern instances do)
  • Ability to create HTTP Event Collector (HEC) tokens
  • Two HEC tokens, configured during setup below: one for templates (tenx_dml_raw_json sourcetype), one for encoded events

Day 1: App Installation & KV Store Setup

  1. Install 10x for Splunk app
     - [ ] Download from GitHub
     - [ ] Upload via Settings > Apps > Install app from file
     - [ ] Restart (if prompted by Splunk Cloud)
     - [ ] Verify: Settings > Apps > Confirm "10x for Splunk" appears in app list

  2. Create KV Store collection
     - [ ] Go to Settings > Advanced Search > Collections
     - [ ] Create new collection named kvdml
     - [ ] Schema fields (automatically generated, verify all present):

    _key (primary key)
    pattern_hash (string)
    pattern (string)
    pattern_parts (array)
    part_0 (string)
    pattern_terminator (string)
    timestamp_format (string)
    
    - [ ] Verify: | inputlookup tenx-dml-lookup | stats count returns 0 (empty)

  3. Create indexes for template data
     - [ ] Create tenx_dml index (required for storing templates):

    Settings → Indexes → New Index
    Name: tenx_dml
    Data type: Events
    Max size: 10GB (adjust based on your expected template volume)
    Retention: 30+ days (templates are reference data, not logs)
    
    Verify: | rest /services/data/indexes | search title="tenx_dml"

     - [ ] Optional: Create separate index for encoded events:
    Settings → Indexes → New Index
    Name: encoded_events (or your preferred name)
    Data type: Events
    Max size: Depends on log volume
    
    Or use main index if preferred (encoded events are searchable until inflation)

  4. Verify props.conf and transforms.conf
     - [ ] Check Settings > Field Extractions > Verify the tenx_encoded sourcetype has the REPORT-tenx extraction
     - [ ] Verify transforms.conf has tenx-hash-vars-extraction and tenx-dml-lookup defined
     - [ ] If missing, manually add via Settings > Add data > Source type settings

Phase 4: Enable HTTP Event Collector (HEC) globally

4a. Enable HEC globally (required before creating tokens):
   - [ ] Settings → Data Inputs → HTTP Event Collector → Global Settings
   - [ ] Toggle "All Tokens" to ENABLED
   - [ ] Set Default Input Port: 8088 (or your custom port)
   - [ ] Enable SSL: YES (recommended for production)
   - [ ] Click Save
   - [ ] Verify: | rest /services/data/inputs/http | search disabled=0

4b. Create HEC Token 1 (for templates):
   - [ ] Settings → Data Inputs → HTTP Event Collector → New Token
   - [ ] Name: tenx-templates
   - [ ] Source Type: tenx_dml_raw_json
   - [ ] Index: tenx_dml (created in step 3)
   - [ ] Indexes allowed: tenx_dml (restrict to this index only)
   - [ ] Disabled: NO
   - [ ] Click Save Token
   - [ ] Copy the token value (save it for later)

4c. Create HEC Token 2 (for encoded events):
   - [ ] Settings → Data Inputs → HTTP Event Collector → New Token
   - [ ] Name: tenx-encoded
   - [ ] Source Type: tenx_encoded
   - [ ] Index: Your target index (where searchable events go)
   - [ ] Indexes allowed: Your target index
   - [ ] Disabled: NO
   - [ ] Click Save Token
   - [ ] Copy the token value (save it for later)

Week 1: Data Ingestion & KV Store Population

  1. Send template data via HEC (or via your log forwarder)
     - [ ] Via curl (for testing):

    SPLUNK_HOST="your-splunk-cloud.splunkcloud.com"
    SPLUNK_PORT="8088"
    HEC_TOKEN="<your-tenx-templates-token>"
    
    curl -k https://$SPLUNK_HOST:$SPLUNK_PORT/services/collector/event \
      -H "Authorization: Splunk $HEC_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{
        "event": {
          "templateHash": "abc123def456",
          "template": "User %s logged in from %s",
          "templateParts": ["User", "logged in from"]
        },
        "sourcetype": "tenx_dml_raw_json",
        "index": "tenx_dml"
      }'
    
     - [ ] Via Fluentd/Fluent Bit: Configure your forwarder output to send to this HEC endpoint with the same token and sourcetype
     - [ ] Verify templates arriving: index=tenx_dml sourcetype=tenx_dml_raw_json | head 10

  2. Send encoded events via HEC (or via your log forwarder)
     - [ ] Via curl (for testing):

    SPLUNK_HOST="your-splunk-cloud.splunkcloud.com"
    SPLUNK_PORT="8088"
    HEC_TOKEN="<your-tenx-encoded-token>"
    
    curl -k https://$SPLUNK_HOST:$SPLUNK_PORT/services/collector/event \
      -H "Authorization: Splunk $HEC_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{
        "event": "~abc123def456,admin,192.168.1.1",
        "sourcetype": "tenx_encoded",
        "index": "main"
      }'
    
     - [ ] Via Fluentd/Fluent Bit: Configure your forwarder output to send to this HEC endpoint with the same token and sourcetype
     - [ ] Verify encoded events arriving: index=main sourcetype=tenx_encoded | head 10

  3. Wait for KV Store population
     - [ ] The "Consume KV" saved search runs every 2 minutes (automatic)
     - [ ] Check: index=_internal savedsearch_name="Consume KV" | table _time, status, result_count (verify no errors)
     - [ ] Check KV Store: | inputlookup tenx-dml-lookup | stats count (should be > 0 after 2-3 minutes)

  4. Monitor template consumption
     - [ ] Run: index=tenx_dml sourcetype=tenx_dml_pure | stats count (confirms templates are searchable)
     - [ ] Run: | inputlookup tenx-dml-lookup | head 5 | table _key, pattern, timestamp_format (verify structure)

Week 2: Inflation Validation & Performance Testing

  1. Test basic inflation
     - [ ] Run: index=your_target_index sourcetype=tenx_encoded | head 10 | `tenx-inflate`
     - [ ] Verify: All fields (_raw, _time, host, source, etc.) are restored to original values
     - [ ] Check: No tenx_hash or tenx_var* fields remain in final output (cleanup working)

  2. Test debug mode
     - [ ] Run: index=your_target_index sourcetype=tenx_encoded | head 1 | `tenx-inflate-debug` | table *
     - [ ] Verify: _raw field matches original (unencoded) log format
     - [ ] Check: tenx_ts_sec correctly detects timestamp precision (milliseconds vs nanoseconds)

  3. Test field extractions and searches post-inflation
     - [ ] Run an existing saved search/dashboard on inflated data
     - [ ] Verify: All field extractions work (extractions applied post-inflation)
     - [ ] Check: Alerts trigger correctly on inflated events
     - [ ] Compare: Results match pre-optimization historical logs (sample query on same time range)

  4. Measure search performance
     - [ ] Run: index=your_target_index sourcetype=tenx_encoded earliest=-1h | `tenx-inflate` | stats count
     - [ ] Note: Search time (should be ~1-3 seconds of inflation overhead)
     - [ ] Compare: Same query without inflation vs with inflation
     - [ ] Acceptable: < 5 second overhead for 10M+ event searches

Post-Pilot: Production Deployment

  1. Enable analytics dashboard
     - [ ] Open: App launcher > 10x for Splunk > Analytics Dashboard
     - [ ] Verify: Shows total compact events, reduction ratio, storage savings
     - [ ] Check: Updates every minute (confirms scheduled searches are running)

  2. Set up monitoring and alerts
     - [ ] Monitor KV Store size: | inputlookup tenx-dml-lookup | stats count (alert if > 1M entries or error)
     - [ ] Monitor inflation failures: Check tenx app logs for errors
     - [ ] Optional: Set up a dashboard for Splunk license impact (GB before/after)

  3. Risk Mitigation & Rollback
     - [ ] Rollback procedure: Simply disable the 10x for Splunk app:

    1. Settings > Apps > 10x for Splunk > Disable
    2. Re-run searches without the `tenx-inflate` macro (searches run against the encoded raw data while the app is disabled)
    3. KV Store collection remains; you can re-enable the app without data loss

     - [ ] Zero data loss: Encoded events remain in the index; templates are preserved in the KV Store
     - [ ] Retention: Configure KV Store collection retention if needed (Settings > Collections)

Splunk Cloud Limitations & Workarounds

  • No custom Python alert actions — Covered. App uses standard KV Store and Search hooks (no custom Python required)
  • Limited app customization — App config available in local/default folders; can override via local/ without modifying default/
  • Network egress — All data stays within Splunk Cloud. No external calls needed after app installation
  • KV Store max size — Typical: 10M-50M entries. Monitor via | inputlookup tenx-dml-lookup | stats count. If approaching limit, consider archiving old templates

Forwarder Configuration Examples

Fluentd:

<match encoded_events>
  @type http_buffered
  endpoint_url https://<splunk-host>:8088/services/collector/event
  serializer json
  auth_type basic
  auth_key "Splunk <your-hec-token>"
  <buffer>
    flush_interval 10s
  </buffer>
</match>

Fluent Bit:

[OUTPUT]
    Name          http
    Match         *
    Host          <splunk-host>
    Port          8088
    URI           /services/collector/event
    Header        Authorization Splunk <your-hec-token>
    Header        Content-Type application/json
    json_date_key timestamp
    Format        json

Universal Forwarder: Configure in $SPLUNK_HOME/etc/apps/TA-log10x/local/outputs.conf:

[tcpout]
defaultGroup = log10x_hec

[tcpout:log10x_hec]
server = <splunk-host>:8088
clientCert = $SPLUNK_HOME/etc/auth/mycerts/client.pem
sslVerifyServerCert = true

Support & Troubleshooting

  • Templates not in KV Store: Check saved search logs: index=_internal savedsearch_name="Consume KV"
  • Inflation returns empty: Verify template format in tenx_dml_raw_json index, ensure KV Store has matching hash
  • Performance degradation: Limit time ranges in searches; filter by tenx_hash before inflation for large datasets
  • HEC token rejected: Verify token is enabled and not deleted: | rest /services/data/inputs/http
  • For detailed troubleshooting: See 10x for Splunk Troubleshooting Guide on GitHub

KV Store Validation & Diagnostics

How do I validate that KV Store is working correctly

Quick Health Check (run all three):

  1. Verify KV collection exists:

    | rest /servicesNS/nobody/tenx-for-splunk/storage/collections/config
    | search title="tenx_dml"
    
    Expected: Returns 1 result. If 0 results, collection wasn't created.

  2. Check KV store population:

    | inputlookup tenx-dml-lookup | stats count
    
    Expected: Shows N (number of templates). If 0, no templates loaded yet.

  3. Verify "Consume KV" scheduled search is running:

    index=_internal savedsearch_name="Consume KV"
    | stats latest(status) as status, latest(_time) as last_run by savedsearch_name
    
    Expected: status=success, last_run within last 2 minutes.

If any check fails, see troubleshooting below.
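
The pass/fail logic of the three checks can be wrapped in a small helper. This is a sketch: the function name is illustrative, and fetching the three input values (e.g. by running the searches above through Splunk's REST search API) is deliberately left out.

```python
# Sketch: evaluate the three KV Store health checks above.
# The inputs would come from the three searches shown (collection REST
# lookup, template count, last "Consume KV" run); fetching is omitted.

def kv_health(collection_exists: bool, template_count: int,
              last_run_age_sec: float) -> list:
    """Return the list of failed checks (empty list = healthy)."""
    failures = []
    if not collection_exists:
        failures.append("KV collection tenx_dml not found")
    if template_count == 0:
        failures.append("KV Store empty - no templates loaded yet")
    if last_run_age_sec > 120:  # "Consume KV" runs every 2 minutes
        failures.append("'Consume KV' has not run in the last 2 minutes")
    return failures

assert kv_health(True, 1500, 45) == []      # healthy instance
assert len(kv_health(False, 0, 300)) == 3   # everything failing
```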

\"Consume KV\" scheduled search is failing silently

The "Consume KV" search populates templates from tenx_dml index into the KV Store. If it fails, templates won't be available for inflation.

Diagnostic procedure:

Step 1: Check scheduler logs

index=_internal sourcetype=scheduler savedsearch_name="Consume KV"
| table _time, status, result_count, alert_action
| stats latest(*) as * by status

Common failure modes:

| Status | Cause | Fix |
| --- | --- | --- |
| error | Search syntax error in saved search | Edit saved search "Consume KV" and verify query syntax |
| success / count=0 | No templates in tenx_dml index | Run: index=tenx_dml \| stats count — if 0, send templates via HEC |
| failure | Alert action (tenx_dml_to_kv.py) failed | Check: index=_internal sourcetype=action_handler savedsearch_name="Consume KV" |
| No results | Search never ran | Verify: Scheduler is enabled (Settings > Scheduled Searches) |

Recovery steps:

1. Verify templates exist:
   index=tenx_dml sourcetype=tenx_dml_raw_json | stats count

2. Force immediate execution:
   Click saved search "Consume KV" > Run
   (Or use: | savedsearch "Consume KV")

3. Wait 2 minutes and verify population:
   | inputlookup tenx-dml-lookup | stats count
   (Should show > 0)

4. If still 0, check KV collection exists:
   | rest /servicesNS/nobody/tenx-for-splunk/storage/collections/config

How do I monitor KV Store size and capacity

KV Store size affects search performance. Monitor it proactively:

Monthly capacity check:

| inputlookup tenx-dml-lookup
| stats count as num_templates

Recommended capacity limits:

| Template Count | Action | Performance |
| --- | --- | --- |
| < 100K | No action needed | Excellent (< 5 ms lookup) |
| 100K-500K | Monitor monthly | Good (5-20 ms lookup) |
| 500K-1M | Plan optimization | Fair (20-50 ms lookup) |
| > 1M | Contact engineering | Needs partitioning |
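
The capacity thresholds above map directly to a small advisory helper (a sketch; the function name is illustrative, the thresholds are the ones in the table):

```python
# Sketch: turn the capacity table above into an advisory check.
def kv_capacity_action(num_templates: int) -> str:
    if num_templates < 100_000:
        return "No action needed"
    if num_templates < 500_000:
        return "Monitor monthly"
    if num_templates <= 1_000_000:
        return "Plan optimization"
    return "Contact engineering"

assert kv_capacity_action(50_000) == "No action needed"
assert kv_capacity_action(750_000) == "Plan optimization"
assert kv_capacity_action(2_000_000) == "Contact engineering"
```

Feed it the `num_templates` value from the monthly capacity check and alert on anything past "Monitor monthly".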

If approaching 1M templates:

Option 1: Archive old templates (move to secondary collection)

| inputlookup tenx-dml-lookup
| search timestamp_format < "2024-01-01"
| ... (export to archive)

Option 2: Partition templates across multiple collections

Create: tenx_dml_2024, tenx_dml_2025, etc.
Route by year in your inflation macro

Monitor inflation latency:

Run the same search with and without the macro and compare runtimes in the Job Inspector:

index=<your-index> sourcetype=tenx_encoded earliest=-15m
| `tenx-inflate`
| stats count

If the runtime difference exceeds ~1 second, the KV Store may be oversized.

What if I accidentally send encoded events before templates are loaded

If encoded events arrive before templates, inflation will fail silently until templates load.

Prevention:

Always verify template population BEFORE sending encoded events:

# Wait for this to return > 0:
| inputlookup tenx-dml-lookup | stats count
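
That "wait until > 0" step can be automated with a polling loop. A sketch: `get_count` is a placeholder for however you run the stats-count search above (for example via Splunk's REST search API), and the names and defaults are illustrative.

```python
# Sketch: block until the KV Store reports at least one template.
# `get_count` stands in for running the "| inputlookup ... | stats count"
# search above; how you execute it (REST API, SDK) is up to you.
import time

def wait_for_templates(get_count, timeout_sec=300, poll_sec=10) -> bool:
    """Poll until get_count() > 0, or give up after timeout_sec."""
    deadline = time.monotonic() + timeout_sec
    while time.monotonic() < deadline:
        if get_count() > 0:
            return True      # safe to start sending encoded events
        time.sleep(poll_sec)
    return False             # templates never appeared - investigate

# Example with a fake counter that "loads" on the second poll:
counts = iter([0, 42])
assert wait_for_templates(lambda: next(counts), timeout_sec=5, poll_sec=0) is True
```

Gate your encoded-event pipeline on this returning True to avoid the failure mode described below.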

Recovery (if already happened):

  1. Load the missing templates
     - Re-send template data via HEC (same format as before)
     - Wait 2-3 minutes for "Consume KV" to process

  2. Re-index the encoded events (optional)

    # If using Kubernetes:
    kubectl delete pod <forwarder-pod-name>  # Triggers reprocessing
    
    # If using file-based forwarder:
    # Delete offset tracking file, restart forwarder
    

  3. Verify recovery:

    | index=<your-index> sourcetype=tenx_encoded
    | head 10 | `tenx-inflate`
    # Should now return expanded events
    

Distributed KV Store setup for multi-node Splunk clusters

For production Splunk clusters, the KV Store can be:

  • Replicated (HA across nodes)
  • Partitioned (scaled across multiple collections)

For 3-node Splunk cluster:

KV Store automatically replicates to all nodes (no special config). To verify:

# On each node:
| rest /servicesNS/nobody/tenx-for-splunk/storage/collections/config
| search title="tenx_dml"
| table label, acl{}.perms

All three nodes should return the same collection.

Performance optimization for distributed setup:

# In the app's local/collections.conf (or via REST):

[tenx_dml]
field.pattern_hash = string
field.pattern = string
# Accelerate pattern_hash for faster lookups:
accelerated_fields.hash_accel = {"pattern_hash": 1}

This creates an index on pattern_hash, speeding up the inflation macro's KV Store joins.

For very large clusters (10+ nodes):

Consider reserving dedicated KV Store capacity on specific search heads. The relevant server.conf settings vary by Splunk version; consult Splunk's KV Store documentation for your deployment.

Monitoring cluster KV Store health:

Run the collection count on each node and compare; on a healthy, fully replicated cluster the counts match:

| rest /servicesNS/nobody/tenx-for-splunk/storage/collections/data/tenx_dml
| stats count as templates

Optimization

How does the 10x for Splunk app expand optimized events

Transparent search-time expansion. The open-source 10x for Splunk app automatically expands compact events before displaying results.

How it works:

  1. Search Hook intercepts all /search/jobs requests
  2. REST handler transforms SPL to include the tenx-inflate macro
  3. Macro joins compact events with templates from KV Store
  4. Full-fidelity events returned with original field names and values

Storage architecture:

  • Templates stored in tenx_kvdml KV Store collection
  • Compact events stored in tenx_encoded index
  • Hash references link events to their templates

Built-in Analytics Dashboard shows:

  • Total compact events and active templates
  • Reduction ratio and storage savings
  • Event volume trends over time
  • Top templates by usage
  • Expansion success rate

User experience: Completely transparent. Users search, build dashboards, and configure alerts exactly as before--on the original full-fidelity data.

Open source: Available on GitHub.

What is the search-time overhead in Splunk

A one-time template resolution (~0.5–2s per search) matches search terms against the template index. Per-event expansion uses a KV Store primary-key lookup and native SPL functions — negligible overhead per event. Queries, dashboards, and alerts work unchanged.

The 10x Engine processes events at sub-millisecond per event — 100+ GB/day on a single node (512 MB heap, 2 threads). For resource requirements, scaling tables, and architecture details, see Performance FAQ.

Can Log10x reduce our Splunk license tier

Yes, 30-60% volume reduction can move you to lower license tiers. See pricing for details:

Example:

  • Before: 550 GB/day, paying for 500 GB tier ($150K/year) + overage penalties
  • After Log10x: 320 GB/day, drops to lower tier
  • Result: $110K+ annual savings
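
The volume arithmetic behind that example works out as follows (a sketch; the GB/day figures are the illustrative ones above, and the dollar savings depend on your actual tier pricing):

```python
# Sketch: the volume-reduction arithmetic behind the example above.
before_gb, after_gb = 550, 320            # GB/day, from the example
reduction = (before_gb - after_gb) / before_gb
print(f"{reduction:.0%} reduction")       # -> 42% reduction
```

A ~42% reduction sits inside the 30-60% range quoted above and is what drops the example deployment below the 500 GB tier boundary.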

License renewal strategy: Deploy Log10x 2-3 months before renewal to demonstrate sustained reduction. Negotiate your new tier based on 6-month average post-optimization.

Typical deployment timeline:

  • Day 1 (15 min): Deploy Cloud Reporter -- agentless, read-only cost analysis via Splunk REST API
  • Week 1 (30 min): Deploy Edge Optimizer alongside your forwarders via Helm
  • Week 2-3: Measure sustained reduction, validate with Splunk license usage reports
  • Renewal: Negotiate new tier based on demonstrated lower ingestion

Splunk Cloud: Works with Ingest-based pricing. Directly reduces GB ingested, lowering monthly costs proportionally.

What happens to logs regulated by Edge Regulator

Edge Regulator identifies low-priority logs (excessive debug, health checks, noise) based on your configured budget and severity thresholds. You control what happens to the regulated logs:

  • Archive to S3/object storage: Route to low-cost storage for compliance. Query via Athena or rehydrate to Splunk on-demand.
  • Route to different Splunk index: Send to a cheaper "cold" index with longer retention but lower priority.
  • Drop completely: Eliminate entirely after a validation period.

Regulator exports cost metrics per event type -- volume regulated, spend rate, and sampling ratios -- queryable via the Prometheus Metrics API and ROI Analytics dashboards.

Getting Started

How do I test this on my Splunk environment
  1. Dev — Run on your Splunk log files locally. One-line install, results in minutes. No account, no credit card.
  2. Cloud Reporter — Connect to your Splunk instance via REST API. See which event types cost the most — no agent changes.
  3. Edge apps — Deploy optimizer and regulator via Helm chart alongside your forwarder. ~30 min setup.
  4. Storage Streamer — Route events to S3, stream selected data to Splunk on-demand.

Each step is independent — start with Dev to see your reduction ratio, then move to production when ready.

Comparisons

Log10x vs Splunk Ingest Actions

Complementary, not competitive. Ingest Actions run on Heavy Forwarders/Indexers (within Splunk's license boundary). Log10x optimizes before data reaches any Splunk component.

Key differences:

  • Processing location: Ingest Actions run on Heavy Forwarders/Indexers; Log10x runs pre-Splunk at the log source
  • License impact: data reaching Ingest Actions is already counted toward your license; the 10x Engine reduces data before it counts
  • Deduplication: not supported by Ingest Actions; supported by the 10x Engine (template-based)
  • Vendor lock-in: Ingest Actions are Splunk-only; the 10x Engine works with any destination

Use together: Ingest Actions for Splunk-specific parsing/routing after ingestion. 10x Engine for cost optimization before ingestion.

Note: Splunk Ingest Actions requires Enterprise 9.0+ and Heavy Forwarders. 10x Engine works with Splunk Cloud and Enterprise.

Cisco Data Fabric / Federated Search vs Storage Streamer

Federated Search scans your S3 raw -- ~100 seconds per TB, no indexes, unpredictable costs. Storage Streamer indexes at upload and streams only what you need.

| | Federated Search | Storage Streamer |
| --- | --- | --- |
| Query method | Brute-force scan (~100s/TB) | Bloom filter index lookup -- instant |
| Cost model | DSU pricing (undisclosed, per-scan) | $0.023/GB stored in your S3 |
| Output control | All results returned | Cost-aware regulated streaming (severity-boosted, budget-capped) |
| Limits | 10 TB per search, 100K events default | No archive size limits |
| Platform | Splunk Cloud on AWS only | Splunk Cloud + Enterprise, AWS + Azure, on-prem + air-gapped |

Log10x vs Splunk Edge Processor

Edge Processor needs hand-written SPL2 rules and ships kept events at full size. Log10x regulates automatically by cost and severity per event type, enforces per-app K8s budgets, and compacts what ships 50%+ losslessly.

| | Edge Processor | Log10x Edge |
| --- | --- | --- |
| Filtering | Hand-written SPL2 rules per pattern | Automatic cost-aware sampling per event type, severity-aware (ERRORs kept, DEBUG throttled first) |
| Budget control | None | Per-app K8s budgets -- scaling pods doesn't bypass limits |
| Compaction | None -- kept events ship at full size | Lossless 50%+ reduction without dropping events |
| Sources | Splunk forwarders only | Any forwarder (UF, Fluent Bit, OTel, Vector, Logstash) |
| Destinations | Splunk and S3 | Splunk, Datadog, Elastic, S3, Prometheus |
| Fleet config | Per-forwarder SPL2 pipelines | Environment-wide GitOps driven by cost metrics |

Integration Compatibility & Order

Splunk version compatibility & integration order

Version compatibility:

| Component | Minimum | Tested | Supported |
| --- | --- | --- | --- |
| Splunk Cloud | 9.0 | 9.0, 9.1 | 9.0+ |
| Splunk Enterprise | 8.2 | 8.2, 9.0, 9.1 | 8.2+ |
| Universal Forwarder | 8.0 | 8.1, 9.0, 9.1 | 8.0+ |
| Fluentd (if used) | 1.14 | 1.14, 1.16 | 1.14+ |
| Fluent Bit (if used) | 2.0 | 2.0, 2.1 | 2.0+ |
| Helm (K8s deployment) | 3.0 | 3.0, 3.12 | 3.0+ |
| Kubernetes | 1.20 | 1.24, 1.27 | 1.20+ |

Integration order (safe sequence):

  1. Verify prerequisites (~10 minutes)
     - [ ] Admin access to Splunk instance
     - [ ] HEC token configured (or ability to create one)
     - [ ] K8s cluster with 4GB+ available memory (DaemonSet test)

  2. Deploy Edge sidecar (~15 minutes)
     - [ ] Deploy via Helm with --dry-run first to verify manifests
     - [ ] Apply the Helm chart alongside the forwarder (DaemonSet)
     - [ ] Verify sidecar pods are running: kubectl get pods -l app=log10x-optimizer
     - [ ] Check logs for startup errors: kubectl logs -l app=log10x-optimizer

  3. Install Splunk App (~10 minutes)
     - [ ] Download 10x for Splunk from GitHub
     - [ ] Install via Splunk UI: Apps → Install app from file
     - [ ] Verify installation: Settings → Installed apps (should show "10x for Splunk")

  4. Configure KV Store (~15 minutes)
     - [ ] Create KV Store collection kvdml with the required schema
     - [ ] Verify collection: Settings → Collections → Data models
     - [ ] Test: Run a simple search to populate the collection

  5. Validate end-to-end (~10 minutes)
     - [ ] Query compact events in Splunk
     - [ ] Verify events expand automatically at search time
     - [ ] Check that dashboards display expanded events correctly
     - [ ] Confirm compression ratio in logs

Rollback procedure (if needed):

  • Remove sidecar: helm uninstall log10x-optimizer (logs resume at full volume)
  • Remove app: Splunk UI → Apps → Uninstall 10x for Splunk
  • Both actions are safe: no configuration changes needed, no data loss
  • Timeline: ~2 minutes total

Compatibility checklist before full deployment:

  - [ ] Test on a non-production environment first
  - [ ] Verify with actual log volume from your apps
  - [ ] Confirm compression ratio meets expectations (typically 50-70%)
  - [ ] Run for 24+ hours in shadow mode to detect issues
  - [ ] Validate alerting rules still work
  - [ ] Test saved search performance (should be similar)