# AI Analysis
The Log10x Console provides AI-powered analysis for optimization recommendations, cost insights, and pattern detection. AI is fully optional: you can disable it entirely, use Log10x-managed AI, or bring your own model.
## How It Works
```mermaid
graph LR
    A["<div style='font-size: 16px;'>Select Filters</div><div style='font-size: 14px;'>Dashboard Panel</div>"] --> B["<div style='font-size: 16px;'>Query Prometheus</div><div style='font-size: 14px;'>Metrics Data</div>"]
    B --> C["<div style='font-size: 16px;'>AI Analysis</div><div style='font-size: 14px;'>LLM Processing</div>"]
    C --> D["<div style='font-size: 16px;'>Render Response</div><div style='font-size: 14px;'>Markdown Output</div>"]

    classDef filters fill:#6366f1,stroke:#4f46e5,color:#ffffff,stroke-width:2px,rx:8,ry:8
    classDef query fill:#06b6d4,stroke:#0891b2,color:#ffffff,stroke-width:2px,rx:8,ry:8
    classDef ai fill:#8b5cf6,stroke:#7c3aed,color:#ffffff,stroke-width:2px,rx:8,ry:8
    classDef render fill:#22c55e,stroke:#16a34a,color:#ffffff,stroke-width:2px,rx:8,ry:8

    class A filters
    class B query
    class C ai
    class D render
```
| Step | Description |
|---|---|
| Select Filters | Choose time range, environment, application, and log level in the Console dashboard |
| Query Prometheus | The Console queries Prometheus metrics for the selected filters and time window |
| AI Analysis | Metrics are sent to your configured AI provider (if enabled), or displayed as raw data when AI is disabled |
| Render Response | AI-generated insights are rendered as formatted markdown in the dashboard panel |
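The steps above can be sketched as two pure helpers. This is an illustration, not the Console's internal code: the metric name `log10x_events_total` and both function names are invented for the example, and the request body follows the OpenAI-style chat-completions shape mentioned later in this page.

```python
import json

def build_prom_query(app: str, level: str) -> str:
    """Steps 1-2: turn dashboard filters into a PromQL query.

    `log10x_events_total` is an invented metric name for illustration.
    """
    return f'sum(rate(log10x_events_total{{app="{app}",level="{level}"}}[5m]))'

def build_ai_payload(metrics: dict, model: str = "gpt-4o", temperature: float = 0.7) -> dict:
    """Step 3: wrap the Prometheus results in a chat-completions request body."""
    return {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system",
             "content": "Analyze these log metrics and respond in markdown."},
            {"role": "user", "content": json.dumps(metrics)},
        ],
    }

query = build_prom_query("app-x", "DEBUG")
payload = build_ai_payload({"events_per_sec": 1234.5})
```

Step 4 is then just rendering the model's markdown reply in the dashboard panel.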
## Deployment Modes
### Log10x Managed

The default SaaS deployment uses Log10x-managed AI; no API key is required.
| Feature | Details |
|---|---|
| Provider | xAI Grok 4 Fast Reasoning |
| Latency | ~10-30 seconds |
| Rate Limit | Shared pool |
| API Key | Not required |
#### Advantages
- Zero Configuration: Works out of the box, no setup needed
- No API Costs: AI usage included in your Log10x subscription
- Automatic Updates: Always uses the latest model version
### Bring Your Own Key (BYOK)

Use your own AI provider for full control over model, latency, and cost.
| Feature | Details |
|---|---|
| Providers | OpenAI, Anthropic, xAI, Custom |
| Latency | Direct to provider |
| Rate Limit | Your account limits |
| API Key | Required |
#### Advantages
- Model Selection: Choose from GPT-4o, Claude Sonnet 4.5, Grok 4, or any OpenAI-compatible model
- Dedicated Rate Limits: Use your own API quota instead of shared pool
- Temperature Control: Adjust creativity/determinism (0.0-1.0)
- Secure Storage: API key encrypted at rest in your user profile
### Disabled

For users who prefer not to use AI analysis, AI can be disabled entirely.
| Feature | Details |
|---|---|
| AI Calls | None |
| Data Sent | None to AI providers |
| Dashboard Panels | Show raw metrics only |
#### Use Cases
- Data Privacy: No data sent to external AI providers
- Simplicity: Focus on raw metrics without AI interpretation
- Cost Control: Zero AI API usage or costs
## Supported Providers
### OpenAI

| Setting | Value |
|---|---|
| Endpoint | https://api.openai.com/v1 |
| Models | GPT-4o (default), GPT-4o-mini |
| Latency | ~5-15 seconds |
#### Features
- Fast inference with best-in-class reasoning
- Function calling support
- Widely adopted API standard
### Anthropic

| Setting | Value |
|---|---|
| Endpoint | https://api.anthropic.com |
| Models | Claude Sonnet 4.5 (default), Claude Opus 4 |
| Latency | ~10-20 seconds |
#### Features
- Strong analytical capability
- Extended context windows (200K tokens)
- Safety-focused responses
### xAI

| Setting | Value |
|---|---|
| Endpoint | https://api.x.ai/v1 |
| Models | Grok 4 Fast Reasoning |
| Latency | ~5-15 seconds |
#### Features
- Real-time knowledge integration
- Fast reasoning capabilities
- OpenAI-compatible API
### Custom

| Setting | Value |
|---|---|
| Endpoint | Your OpenAI-compatible URL |
| Models | Any model at your endpoint |
| Latency | Depends on deployment |
#### Supported Platforms
- Azure OpenAI Service
- Self-hosted Ollama (with OpenAI compatibility)
- vLLM, LocalAI, or any `/chat/completions` endpoint
Note: Custom endpoints must implement the OpenAI chat completions API format.
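As a sketch of that format, the snippet below builds (but does not send) a compliant request. The localhost Ollama URL and the model name are placeholder assumptions; substitute your own deployment.

```python
import json
import urllib.request

def chat_request(base_url: str, api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible POST to {base_url}/chat/completions."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Example: a self-hosted Ollama server exposing its OpenAI-compatible API
req = chat_request("http://localhost:11434/v1", "unused", "llama3", "Explain this event spike")
```

Sending it with `urllib.request.urlopen(req)` should return the standard response shape, with the answer under `choices[0].message.content`.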
## Configuration
### Step 1: Open AI Settings
Navigate to the Console and click the AI tab in the navigation.
1. Go to console.log10x.com
2. Sign in with your account
3. Click AI in the navigation tabs
4. Click Configure to open settings
### Step 2: Select Provider
Choose your AI provider based on your requirements:
| Provider | Best For | Typical Latency |
|---|---|---|
| Disabled | No AI, raw metrics only | N/A |
| Log10x Managed | Zero setup, shared usage | 10-30s |
| OpenAI | Fast responses, reliability | 5-15s |
| Anthropic | Complex analysis, long context | 10-20s |
| xAI | Real-time data, direct answers | 5-15s |
| Custom | Self-hosted, compliance needs | Varies |
### Step 3: Enter Credentials (BYOK only)
For BYOK providers, enter your API credentials:
```yaml
# Example configuration stored in user metadata
ai_settings:
  provider: openai
  api_endpoint: https://api.openai.com/v1/chat/completions
  api_key: sk-...  # Stored securely, never logged
  model: gpt-4o
  temperature: 0.7
```
Security Notes:

- API keys are encrypted at rest in Auth0 user metadata
- Keys are never logged in application logs
- Keys are only used server-side to call your AI provider
## Use Cases
| Feature | What It Does | Example |
|---|---|---|
| Cost Analysis | Identifies high-cost log patterns | "Your DEBUG logs cost $2.4K/month" |
| Optimization Tips | Suggests specific actions | "Add sampling to app-x DEBUG logs" |
| Anomaly Detection | Flags unusual patterns | "Event volume spike at 14:00 UTC" |
| Pattern Explanation | Explains log patterns | "This pattern is a Kubernetes health check" |
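The cost-analysis example in the table reduces to simple arithmetic over ingest volume. A toy version, where the daily volume and per-GB price are invented numbers rather than real Log10x pricing:

```python
def monthly_log_cost(gb_per_day: float, usd_per_gb: float, days: int = 30) -> float:
    """Estimated ingest cost for one log level at a flat per-GB price."""
    return gb_per_day * usd_per_gb * days

# 160 GB/day of DEBUG logs at $0.50/GB ingested
cost = monthly_log_cost(160, 0.50)  # 2400.0, i.e. "Your DEBUG logs cost $2.4K/month"
```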
## Timeout Considerations
AWS Managed Grafana enforces a roughly 60-second timeout for all panel types. If AI responses are slow:

- Use a faster model (GPT-4o-mini, Claude Haiku)
- Results are cached for identical queries, so repeated requests return quickly
- Retry if a timeout occurs
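The "retry if a timeout occurs" advice can be wrapped in a small helper. This is a minimal sketch; the function name and the two-attempt default are assumptions, chosen so both attempts fit under the ~60-second panel limit.

```python
from typing import Callable, TypeVar

T = TypeVar("T")

def retry_on_timeout(call: Callable[[], T], attempts: int = 2) -> T:
    """Run `call`, retrying only on TimeoutError, up to `attempts` tries total."""
    for i in range(attempts):
        try:
            return call()
        except TimeoutError:
            if i == attempts - 1:
                raise  # out of attempts: surface the timeout to the caller
    raise AssertionError("unreachable")
```

Pass a closure that issues the AI request with a hard per-attempt timeout (for example, `urllib.request.urlopen(req, timeout=25)`), so that two attempts still complete inside the 60-second window.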
Related: Metrics Infrastructure