MCP Server

The Log10x MCP Server gives AI assistants (Claude Code, Claude Desktop, Cursor, etc.) real-time access to per-pattern log cost attribution data. Ask your AI assistant about log costs in plain English and get instant, dollar-ranked answers — without leaving your IDE or terminal.

Install

Prerequisites

  • Node.js 18+
  • A Log10x API key and environment ID (from Console > Profile > API Settings)

The server is published to npm as log10x-mcp. No clone, no build — npx downloads and runs it on demand.

Claude Desktop

Add to your claude_desktop_config.json (~/Library/Application Support/Claude/ on macOS, %APPDATA%\Claude\ on Windows):

{
  "mcpServers": {
    "log10x": {
      "command": "npx",
      "args": ["-y", "log10x-mcp"],
      "env": {
        "LOG10X_API_KEY": "your-api-key",
        "LOG10X_ENV_ID": "your-env-id"
      }
    }
  }
}

Restart Claude Desktop.

Claude Code

claude mcp add --transport stdio \
  --env LOG10X_API_KEY=your-api-key \
  --env LOG10X_ENV_ID=your-env-id \
  log10x -- npx -y log10x-mcp

Verify with /mcp — the server should show as connected with 7 tools.

Cursor, Windsurf, other MCP clients

Same pattern — add an mcpServers entry with "command": "npx", "args": ["-y", "log10x-mcp"], and the two env vars. The exact config file location varies by client.
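In Cursor, for example, the entry typically goes in ~/.cursor/mcp.json (check your client's docs for the exact path); the shape is the same as the Claude Desktop config:

```json
{
  "mcpServers": {
    "log10x": {
      "command": "npx",
      "args": ["-y", "log10x-mcp"],
      "env": {
        "LOG10X_API_KEY": "your-api-key",
        "LOG10X_ENV_ID": "your-env-id"
      }
    }
  }
}
```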

Multi-Environment Setup

To query multiple Log10x environments (prod, staging, etc.), register one MCP server per environment with a distinct name:

{
  "mcpServers": {
    "log10x-prod": {
      "command": "npx",
      "args": ["-y", "log10x-mcp"],
      "env": {
        "LOG10X_API_KEY": "prod-api-key",
        "LOG10X_ENV_ID": "prod-env-id"
      }
    },
    "log10x-staging": {
      "command": "npx",
      "args": ["-y", "log10x-mcp"],
      "env": {
        "LOG10X_API_KEY": "staging-api-key",
        "LOG10X_ENV_ID": "staging-env-id"
      }
    }
  }
}

Ask "check prod costs" or "what's spiking in staging?" and your AI assistant routes to the matching server automatically. Each environment gets its own toolset namespaced by server name — no parameter juggling, no risk of querying the wrong environment.

Advanced: Single-Process Multi-Env

For users with 10+ environments who want to avoid spawning N subprocesses, set LOG10X_ENVS to a JSON array of {nickname, apiKey, envId} objects. Queries accept an environment parameter to route by nickname. Use the simpler multi-server pattern above unless you specifically need this.

LOG10X_ENVS='[{"nickname":"prod","apiKey":"...","envId":"..."},{"nickname":"staging","apiKey":"...","envId":"..."}]'

Tools

The server exposes 7 tools. You never call them directly — just ask your AI assistant a question and it picks the right tool automatically.

  • "Why did our log costs spike?" → log10x_cost_drivers — dollar-ranked patterns driving the increase, with before→after deltas
  • "What is this Payment Gateway pattern?" → log10x_event_lookup — cost breakdown by service, AI classification, recommended action
  • "How much are we saving?" → log10x_savings — per-app savings (regulator, optimizer, streamer) with annual projection
  • "When did this pattern start spiking?" → log10x_pattern_trend — time series with spike detection and sparkline
  • "What services are we monitoring?" → log10x_services — all services ranked by volume and cost
  • "How do I drop this pattern in Datadog?" → log10x_exclusion_filter — config snippet for 14 vendors (SIEMs + forwarders)
  • "Anything depending on this before I drop it?" → log10x_dependency_check — command to scan your SIEM for dependent dashboards and alerts

Analyzer Cost

The server reads your analyzer cost ($/GB) from your Console profile settings at startup and refreshes it hourly. To change it, update the cost in your profile — the server picks up the new value within an hour.

You can also override it per question: "show costs at $6/GB for Splunk."
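The arithmetic behind those dollar figures is straightforward. A minimal sketch, assuming decimal gigabytes (1 GB = 1e9 bytes); the function name is illustrative and the server's exact rounding may differ:

```typescript
// Convert a pattern's weekly byte volume into a dollar cost at a given
// analyzer rate. Assumes decimal GB (1e9 bytes), not GiB.
function weeklyCost(bytesPerWeek: number, dollarsPerGB: number): number {
  return (bytesPerWeek / 1e9) * dollarsPerGB;
}

// A pattern emitting 1.5 TB/week at the $6/GB Splunk override:
// weeklyCost(1.5e12, 6) → 9000 ($9K/wk)
```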

Timeframes

Append a timeframe to any question. Default is 7 days.

Timeframe   Label       Baseline
1d          Last 24h    Avg of prior 3 days
7d          This week   Avg of prior 3 weeks
30d         Last 30d    Avg of prior 3 months
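The baseline convention above can be sketched as follows. This is an illustration of the windowing, not the server's code; it assumes the three prior windows are the same length as the current one (for 30d the doc says "prior 3 months", which is approximately three 30-day windows):

```typescript
const DAY_MS = 86_400_000; // milliseconds per day

// For a window of `days` days ending at `nowMs`, return the 3 immediately
// preceding windows of the same size as [startMs, endMs] pairs.
function baselineWindows(nowMs: number, days: number): Array<[number, number]> {
  const span = days * DAY_MS;
  return [1, 2, 3].map(k => [nowMs - (k + 1) * span, nowMs - k * span]);
}
```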

Cost Driver Analysis

When you ask about cost spikes, the server runs the same algorithm as the Slack Bot:

  1. Query current window — bytes per pattern for the selected timeframe
  2. Query baseline — average of the 3 prior windows of the same size
  3. Compute delta — cost_this_period − cost_baseline per pattern
  4. Apply gates — a pattern is a cost driver when it passes both:
    • Dollar floor: delta exceeds $500/period
    • Contribution floor: delta is at least 5% of the total service increase
  5. Sort by delta descending
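The steps above can be sketched in a few lines. This is an illustration, not the server's actual implementation: the type and function names are hypothetical, and whether negative deltas count toward the total service increase is an assumption here (this sketch counts only positive ones):

```typescript
interface PatternCost {
  pattern: string;
  costNow: number;      // $ this period
  costBaseline: number; // $ avg of the 3 prior windows
}

const DOLLAR_FLOOR = 500;        // gate 1: delta must exceed $500/period
const CONTRIBUTION_FLOOR = 0.05; // gate 2: delta must be >= 5% of total increase

function costDrivers(patterns: PatternCost[]) {
  const withDelta = patterns.map(p => ({ ...p, delta: p.costNow - p.costBaseline }));
  // Total service increase: sum of positive per-pattern deltas (assumption).
  const totalIncrease = withDelta.reduce((sum, p) => sum + Math.max(p.delta, 0), 0);
  return withDelta
    .filter(p => p.delta > DOLLAR_FLOOR && p.delta >= CONTRIBUTION_FLOOR * totalIncrease)
    .sort((a, b) => b.delta - a.delta); // largest dollar delta first
}
```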

Example output:

cart — $103 → $13K/wk (3 cost drivers)

#1  cart cartstore ValkeyCartStore      $51 → $6.4K/wk   INFO  6.6B events
#2  GetCartAsync called with userId     $34 → $4.2K/wk         4.2B events
#3  AddItemAsync called with userId     $18 → $2.2K/wk         2.0B events

3 drivers = 98% of increase · 11 other patterns

Exclusion Filters

After identifying a cost driver, ask the AI to generate a filter config for your SIEM or forwarder. The server generates vendor-specific snippets for 14 targets:

SIEMs: Datadog (UI + API), Splunk (transforms.conf + API), Elasticsearch (ingest pipeline + API), CloudWatch (subscription filter)

Forwarders: Datadog Agent, Fluent Bit, Fluentd, OTel Collector, Vector, Logstash, Filebeat, rsyslog, syslog-ng, Promtail

Each filter is scoped to the pattern's service, severity, and keywords. Config and API modes are available for Datadog, Splunk, and Elasticsearch.
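As a rough illustration of the forwarder output, a Fluent Bit grep filter dropping a hypothetical GetCartAsync pattern might look like the following; the tag pattern and record key here are illustrative, and the server's generated snippet is scoped more precisely:

```ini
[FILTER]
    Name    grep
    Match   cart.*
    Exclude log GetCartAsync
```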

Dependency Check

Before dropping a pattern, ask the AI to check for dependencies. The server generates a bash command that downloads and runs a Python script locally against your SIEM (read-only). The script checks whether any dashboards, alerts, or saved searches reference the pattern.

Supported SIEMs: Datadog, Splunk, Elasticsearch, CloudWatch.

No data is sent to Log10x — the scan runs entirely on your machine using your SIEM credentials.

Architecture

AI Assistant (Claude Code, Claude Desktop, Cursor, etc.)
    ↓ MCP Protocol (stdio)
Log10x MCP Server (TypeScript, local process)
    ↓ HTTPS + PromQL
Log10x Prometheus API (prometheus.log10x.com)
Pre-aggregated per-pattern cost metrics

The MCP server is a thin query layer that runs locally on your machine. The hard work (deterministic template extraction, per-pattern byte metrics) is already done by the Log10x pipeline. The server just queries Prometheus and formats results.

No cloud hosting, no Lambda, no API Gateway. The server starts as a local subprocess when your AI assistant connects, and queries the same Prometheus API used by the Console and Slack Bot.
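The queries themselves are ordinary PromQL over pre-aggregated per-pattern metrics. A hypothetical example (the metric and label names are illustrative, not the real schema):

```promql
sum by (pattern) (increase(log10x_pattern_bytes_total{service="cart"}[7d]))
```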

Security

  • All API calls use your personal API key (never exposed in tool output)
  • The server runs locally — no data leaves your machine except Prometheus queries
  • Dependency check scripts run locally with your own SIEM credentials (read-only)
  • No caching of log content — all data comes from pre-aggregated metrics