Tools
The MCP Server exposes a set of tools to your AI assistant. You never call them directly — just ask your assistant a question in plain English, and it picks the right tool automatically. This page lists what's possible and what each tool does.
Cost analysis
"Why did our log costs spike?"
Tool: log10x_cost_drivers
Dollar-ranked patterns driving the increase, with before→after deltas.
How it works. The server queries the current window (bytes per pattern), queries a baseline (the average of the prior three windows of the same size), and computes the delta per pattern. A pattern qualifies as a cost driver when it passes both thresholds: its delta exceeds $500/period AND its delta is at least 5% of the total service increase. Drivers are sorted by delta, descending.
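The two-threshold rule above can be sketched in a few lines. This is an illustration of the logic as described, not the server's actual implementation; the function name and data shapes are assumptions.

```python
# Illustrative sketch of the cost-driver rule: a pattern is a driver when
# its delta over baseline exceeds min_delta AND it accounts for at least
# min_share of the total positive increase.

def find_cost_drivers(current, baseline, min_delta=500.0, min_share=0.05):
    """current/baseline: dicts mapping pattern -> $/period."""
    deltas = {p: current.get(p, 0.0) - baseline.get(p, 0.0) for p in current}
    total_increase = sum(d for d in deltas.values() if d > 0)
    drivers = [
        (p, d) for p, d in deltas.items()
        if d > min_delta and total_increase > 0 and d / total_increase >= min_share
    ]
    # Sort by delta, descending — matching the tool's output ordering.
    return sorted(drivers, key=lambda pd: pd[1], reverse=True)
```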
Example output:

```text
cart — $103 → $13K/wk (3 cost drivers)
#1 cart cartstore ValkeyCartStore $51 → $6.4K/wk INFO 6.6B events
#2 GetCartAsync called with userId $34 → $4.2K/wk 4.2B events
#3 AddItemAsync called with userId $18 → $2.2K/wk 2.0B events
3 drivers = 98% of increase · 11 other patterns
```
"What is this Payment Gateway pattern?"
Tool: log10x_event_lookup
Cost breakdown by service, AI classification, recommended action.
"When did this pattern start spiking?"
Tool: log10x_pattern_trend
Time series with spike detection and sparkline.
"How much are we saving?"
Tool: log10x_savings
Per-app savings (Regulator filter, Regulator compact, Streamer) with annual projection.
"What services are we monitoring?"
Tool: log10x_services
All services ranked by volume and cost.
Environment discovery and setup
"Analyze my cluster for 10x apps"
Tool: k8s discovery
With your kubeconfig mounted, the server runs kubectl get equivalents (read-only) to identify:
- Forwarder DaemonSets (Fluent Bit, Fluentd, Datadog Agent, OTel Collector)
- Node count and pod topology
- Namespaces and existing log10x-* deployments
- Logging destinations on forwarders (Splunk HEC endpoints, Elasticsearch clusters, CloudWatch log groups)
Output: what's there, and which 10x apps make sense for your stack.
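As a rough sketch of the forwarder-identification step, assuming container image names are the signal (the substring table and function below are illustrative, not the server's actual discovery logic):

```python
# Map a DaemonSet's container images to a known log-forwarder type.
# The signature table is an assumption for illustration.

FORWARDER_SIGNATURES = {
    "fluent-bit": "fluentbit",
    "fluentd": "fluentd",
    "datadog/agent": "datadog-agent",
    "opentelemetry-collector": "otel-collector",
}

def classify_forwarder(images):
    """images: list of container image strings from a DaemonSet spec."""
    for image in images:
        for needle, kind in FORWARDER_SIGNATURES.items():
            if needle in image:
                return kind
    return None  # not a recognized forwarder
```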
"Set me up with the Reporter" / "Generate Helm values for the Regulator"
Tool: Helm values generator
Given what discovery finds, the server writes a values file tailored to your stack. Example for Reporter:
```yaml
# Generated for your cluster
reporter:
  forwarder:
    type: fluentbit  # detected from your DaemonSet
    socketPath: /var/run/fluentbit.sock
  auth:
    apiKeyRef:
      secretName: log10x-api-key  # references the secret you already created
  prometheus:
    remoteWrite:
      url: https://prometheus.log10x.com
```
The file lands in your working directory. You review the diff and run `helm install` yourself; the server doesn't apply anything.
Regulator configuration
"Cap the top cost driver at 5%"
Tool: log10x_exclusion_filter
After identifying a cost driver, ask for a filter config. The server generates vendor-specific snippets for 14 targets:
- SIEMs: Datadog (UI + API), Splunk (transforms.conf + API), Elasticsearch (ingest pipeline + API), CloudWatch (subscription filter)
- Forwarders: Datadog Agent, Fluent Bit, Fluentd, OTel Collector, Vector, Logstash, Filebeat, rsyslog, syslog-ng, Promtail
For the Log10x Regulator specifically, the output is a mute-file entry scoped to the pattern, carrying a sample rate, an expiry epoch, and an audit label. Commit it to git; the Regulator pulls it on its next reload.
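For orientation, such an entry might look like the sketch below. The field names are assumptions based on the description (sample rate, expiry epoch, audit label), not the Regulator's documented schema:

```yaml
# Hypothetical mute-file entry; field names are illustrative only.
- pattern: "GetCartAsync called with userId"
  sampleRate: 0.05        # keep 5% of matching events
  expiresAt: 1767225600   # epoch seconds; the filter lapses after this
  audit: "cost-driver-cap, approved-by-sre"
```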
"Anything depending on this before I drop it?"
Tool: log10x_dependency_check
Before a pattern drops, ask the server to check your SIEM. It generates a bash command that downloads and runs a Python script locally against your SIEM (read-only). Checks dashboards, alerts, saved searches, watchers.
Supported SIEMs: Datadog, Splunk, Elasticsearch, CloudWatch. No data sent to Log10x.
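The core of such a check is string-matching the pattern against saved SIEM objects. A minimal sketch of that matching step, assuming dashboards have already been fetched as JSON (the function and data shape are illustrative; the real script also covers alerts, saved searches, and watchers):

```python
import json

# Scan already-fetched SIEM objects (e.g. dashboards serialized as JSON)
# for references to a log pattern. Read-only by construction: this only
# inspects local data.

def find_references(objects, pattern):
    """objects: list of dicts (dashboards, monitors, ...).
    Returns titles of objects whose JSON mentions the pattern."""
    hits = []
    for obj in objects:
        if pattern in json.dumps(obj):
            hits.append(obj.get("title", "<untitled>"))
    return hits
```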
Validation
"Validate my candidate Regulator config"
Tool: log10x_validate
Spawns the @apps/mcp runtime as a subprocess and dry-runs the config against sample events. Output: the structured TenXObjects + TenXTemplates the candidate config produced. See Validate for how the runtime works and how to reproduce a run locally for debugging.
Agentless SIEM polling
"Do a POC from my Splunk/Datadog/Elasticsearch"
Tool: log10x_poc_from_siem_submit
Read-only cost analysis via your SIEM's API — no DaemonSet deployment required. Useful for first-touch analysis before committing to deploying the Reporter.
Analyzer cost
The server reads your analyzer cost ($/GB) from your Console profile settings at startup and refreshes it hourly. Override per question: "show costs at $6/GB for Splunk."
Timeframes
Append a timeframe to any question. Default: 7 days.
| Timeframe | Label | Baseline |
|---|---|---|
| 1d | Last 24h | Avg of prior 3 days |
| 7d | This week | Avg of prior 3 weeks |
| 30d | Last 30d | Avg of prior 3 months |
Safety invariants
Every tool is read-only against your infrastructure. The server never applies changes:
- k8s discovery: kubectl get equivalents only, no writes
- Helm values / mute-file entries: written as files, not applied
- Filter suggestions: diff-reviewable (PR-ready)
- Dependency checks: local-only with your SIEM credentials