Data Protection

Where log processing happens, what data leaves your network, what symbol libraries contain, how the optional AI features handle data, and how to validate that security logs are not filtered.

Where does log processing happen

All processing happens in your infrastructure:

  • Reporter -- a DaemonSet alongside your forwarder. Not in the critical log path.
  • Reducer -- a sidecar to your forwarder (Filter or Compact mode).
  • Retriever -- deploys in your AWS/cloud account, not ours.
  • MCP Server -- runs wherever you choose to host it, inside your own environment.

You control where processed events go via output configuration (files, forwarders, metric destinations). Log10x never receives log content.
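These components deploy with standard Kubernetes primitives. As an illustration, a Reporter-style read-only DaemonSet might look like the sketch below -- the namespace, image, and mount path are hypothetical, and real manifests come from the install documentation; the point is only that the workload runs in your cluster, not ours:

```yaml
# Hypothetical sketch -- not an official Log10x manifest.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log10x-reporter          # hypothetical name
  namespace: logging             # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: log10x-reporter
  template:
    metadata:
      labels:
        app: log10x-reporter
    spec:
      containers:
        - name: reporter
          image: registry.example.com/log10x/reporter:latest  # hypothetical image
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true     # read-only: Reporter is not in the critical log path
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```

The read-only host mount reflects the Reporter's role: it observes the log stream without being able to modify or redirect it.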

What data does Log10x actually see

Zero log content. When configured to send metrics to our SaaS (optional), only aggregated metrics leave your network -- event counts and byte volumes grouped by enrichment fields (message pattern identity derived from symbol tokens in your source code, severity level, K8s container/namespace, HTTP status code). No log messages, no PII, no sensitive data. You can also send metrics to your own TSDB instead -- we never see anything.

What log data leaves my environment

None. Log data never leaves your infrastructure. The architecture keeps all log content in your environment.

The only data that optionally reaches our SaaS is aggregated metrics (event counts, byte volumes). No log content is included.

Optional AI recommendations: The Console can run AI-powered analysis on ROI Analytics dashboards, sending only aggregated metrics from Prometheus (event type names, volume, cost) -- never raw log content -- and it can be disabled entirely. Modes and API key handling are covered under "Is AI optional? What data does it send" below.

What specific metrics leave my network in managed mode

In managed console mode, 10x apps send aggregated metrics to prometheus.log10x.com over TLS 1.3. The exact fields:

Label                Example Value            Contains PII?
tenx_env             production               No
tenx_app             order-service            No
tenx_host_name       edge-node-1              No
tenx_pipeline_uuid   a1b2c3d4-...             No
severity_level       ERROR                    No
message_pattern      Failed to connect to {}  No
k8s_namespace        payments                 No
k8s_container        api-gateway              No
http_code            503                      No
index_app            main                     No

message_pattern is a template name derived from log statement structure (placeholders replace all variable data) -- it contains no log content, no request data, no PII.
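To make that concrete, here is a minimal Python sketch of placeholder substitution. It is illustrative only -- the real tokenization is driven by symbols compiled from your source code, not by runtime regexes:

```python
import re

def to_pattern(message: str) -> str:
    """Replace variable data in a log line with {} placeholders (sketch only)."""
    msg = re.sub(r'"[^"]*"', "{}", message)    # quoted values
    msg = re.sub(r"\b\d[\w.:\-]*", "{}", msg)  # numbers, IPs, ports, IDs
    return msg

print(to_pattern('Failed to connect to 10.0.0.5:6379'))
# -> Failed to connect to {}
```

Only the resulting template identity -- never the original line -- appears as a metric label.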

Metric names (the numeric values attached to the labels above):

Metric                                Type     What It Measures
tenx_pipeline_up                      Gauge    Pipeline running (1 = up)
tenx_pipeline_bootstrap_time_seconds  Gauge    Startup time
tenx_pipeline_runtime_seconds         Gauge    Total runtime
totalEvents_total                     Counter  Events processed
totalBytes_total                      Counter  Bytes processed
emitted_events_summaryBytes_total     Counter  Output bytes (after processing)
emitted_events_optimized_size_total   Counter  Compact output bytes
all_events_summaryBytes_total         Counter  Input bytes (before regulation)
indexed_events_summaryBytes_total     Counter  Bytes indexed (Retriever)
streamed_events_summaryBytes_total    Counter  Bytes streamed to S3
tokenized_total                       Counter  Events matched to templates
nonTokenized_total                    Counter  Events without template match

All values are numeric counters and gauges -- event counts, byte volumes, durations. No log content appears in any metric value.
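Put together, a single exported sample combines one metric name from the table above with the labels listed earlier, plus a numeric value. The line below is illustrative (Prometheus text exposition format; exact label set and ordering may differ):

```
totalEvents_total{tenx_env="production",tenx_app="order-service",tenx_host_name="edge-node-1",severity_level="ERROR",message_pattern="Failed to connect to {}",k8s_namespace="payments",k8s_container="api-gateway",http_code="503",index_app="main"} 1427
```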

Billing telemetry: Engines also send lightweight heartbeats (tenx_pipeline_up, tenx_pipeline_info) containing node ID and pipeline name for license tracking. No log content, no PII. Air-gapped deployments use a local License Receiver instead.

Self-managed mode: Nothing is sent to Log10x. All metrics go to your own TSDB.

Sensitivity note: Metric labels include infrastructure metadata such as application names, Kubernetes namespace names, and log pattern templates. Organizations that classify infrastructure topology as sensitive should deploy self-managed -- no data reaches Log10x systems.

What are symbol libraries and do they contain my code

Symbol libraries contain 64-bit hashes of string constants extracted from your log statements, plus the class and method names that locate each statement in your code. They contain no source code, no log data, and no telemetry. Compilation happens in your CI/CD pipeline -- we never see your repositories, code, or symbol libraries. See the Compiler FAQ for full details.
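For intuition, the kind of one-way mapping involved can be sketched with a 64-bit FNV-1a hash in Python. The actual hash function Log10x uses is not specified here -- this only illustrates that a fixed-width digest, not the string constant itself, is what gets stored:

```python
def fnv1a_64(s: str) -> int:
    """64-bit FNV-1a hash (illustrative stand-in for the real function)."""
    h = 0xcbf29ce484222325                   # FNV-1a 64-bit offset basis
    for byte in s.encode("utf-8"):
        h ^= byte
        h = (h * 0x100000001b3) % (1 << 64)  # FNV prime, wrap to 64 bits
    return h

print(hex(fnv1a_64("Failed to connect to {}")))
```

The digest is deterministic, so matching events to templates at runtime needs only the hash, never the original string.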

Is AI optional? What data does it send

Fully optional. The Console provides AI-powered analysis on ROI Analytics dashboards in three configurable modes:

  • Managed -- hosted by Log10x using xAI Grok (default in SaaS mode, included in subscription)
  • Bring Your Own Key -- OpenAI, Anthropic, xAI, Azure OpenAI, or any OpenAI-compatible endpoint including self-hosted Ollama
  • Disabled -- raw metrics only, no data sent to any AI provider

SaaS mode: AI analysis is enabled by default using the Log10x-managed provider. Only aggregated metrics from Prometheus (event type names, volume, cost) are sent -- never raw log content. You can switch to Disabled at any time in Console settings.

Self-managed mode: AI is not preconfigured -- you control whether and how to enable it. No API key is provided by Log10x.

All API keys are encrypted at rest. Disabling AI has no impact on core optimization functionality. See AI Analysis for full configuration.

How do I validate that critical security logs aren't being filtered

Multi-layer validation ensures critical security logs always reach your SIEM or analytics tool:

  1. Shadow mode testing: Deploy the Reporter as a read-only DaemonSet -- it tails the live event stream pre-SIEM without modifying, filtering, or redirecting any data. Compare what would be optimized against the actual security event stream before enabling production changes with the Reducer.
  2. Allowlist approach: Explicitly preserve all logs from security indexes. Allowlist sourcetypes like firewall, ids, authentication.
  3. Metrics tracking: Dropped event counts are recorded in aggregated metrics -- compare total vs emitted volumes to verify nothing unexpected was filtered.
  4. Compliance reporting: A daily summary confirms that zero security logs were filtered. Start with no filtering on security sources, then expand gradually after a 30-day validation period.
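Steps 2 and 3 can be automated. A minimal Python sketch, assuming you have already pulled per-source byte volumes from your metrics backend (e.g. derived from all_events_summaryBytes_total and emitted_events_summaryBytes_total); the function name and dict shapes are hypothetical, not a Log10x API:

```python
def security_filter_violations(total_bytes, emitted_bytes, security_sources):
    """Return security sources whose emitted volume trails the input volume.

    total_bytes / emitted_bytes: dicts mapping a source (e.g. sourcetype)
    to byte volume over the same time window.
    """
    return [
        src for src in security_sources
        if emitted_bytes.get(src, 0) < total_bytes.get(src, 0)
    ]

# Example: firewall volume is intact, ids volume shrank -> flagged.
total   = {"firewall": 10_000, "ids": 8_000, "app": 50_000}
emitted = {"firewall": 10_000, "ids": 7_500, "app": 20_000}
print(security_filter_violations(total, emitted, ["firewall", "ids"]))
# -> ['ids']
```

Any non-empty result on an allowlisted security source means filtering touched data it should not have, and warrants investigation before expanding Reducer coverage.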