Elasticsearch
Search and visualize compact events in Elasticsearch and OpenSearch with zero data loss. This open-source plugin transparently expands compact events at query time, maintaining full Kibana dashboard, search, and alerting capabilities while reducing ingestion and storage costs by over 50%.
How It Works
L1ES is an Elasticsearch plugin that intercepts search requests and expands compact events before returning results. Users interact with Kibana and Elasticsearch exactly as before — searching, building dashboards, and configuring alerts on the original full-fidelity data.
Two mechanisms make this transparent:
- Query rewriting — standard `match`, `match_phrase`, and `multi_match` queries are automatically converted to L1ES equivalents that search across decoded content
- `_source` decoding — encoded fields in `_source` are expanded in search responses, so Kibana Discover, document views, and dashboards display the original log text
Ingestion Flow
Events are compact at the edge and ingested into Elasticsearch with reduced payload size:
graph LR
A["<div style='font-size: 14px;'>🗜️ Optimizer</div><div style='font-size: 10px;'>Compact Events</div>"] --> B["<div style='font-size: 14px;'>📡 Ingest</div><div style='font-size: 10px;'>Bulk API</div>"]
B --> C["<div style='font-size: 14px;'>📋 Templates</div><div style='font-size: 10px;'>l1es_dml Index</div>"]
B --> D["<div style='font-size: 14px;'>💾 Index</div><div style='font-size: 10px;'>Encoded Events</div>"]
classDef edge fill:#7c3aed88,stroke:#6d28d9,color:#ffffff,stroke-width:2px,rx:8,ry:8
classDef ingest fill:#9333ea88,stroke:#7c3aed,color:#ffffff,stroke-width:2px,rx:8,ry:8
classDef store fill:#2563eb88,stroke:#1d4ed8,color:#ffffff,stroke-width:2px,rx:8,ry:8
class A edge
class B ingest
class C,D store
🗜️ Optimizer: Edge Optimizer compacts events, extracting repetitive patterns into templates
📡 Ingest: Encoded events forwarded to Elasticsearch via the Bulk API with reduced payload size
📋 Templates: Templates stored in the l1es_dml internal index for lookup at query time
💾 Index: Compact events stored with template hash references
Search Flow
Standard Elasticsearch queries are transparently rewritten to expand compact events:
graph LR
E["<div style='font-size: 14px;'>👤 User</div><div style='font-size: 10px;'>KQL / Query DSL</div>"] --> F["<div style='font-size: 14px;'>🔄 Rewrite</div><div style='font-size: 10px;'>Intercept Query</div>"]
F --> G["<div style='font-size: 14px;'>🔍 Match</div><div style='font-size: 10px;'>Templates + Values</div>"]
G --> H["<div style='font-size: 14px;'>📖 Expand</div><div style='font-size: 10px;'>Decode _source</div>"]
H --> I["<div style='font-size: 14px;'>📊 Results</div><div style='font-size: 10px;'>Full Data</div>"]
classDef user fill:#059669,stroke:#047857,color:#ffffff,stroke-width:2px,rx:8,ry:8
classDef hook fill:#f59e0b,stroke:#d97706,color:#ffffff,stroke-width:2px,rx:8,ry:8
classDef result fill:#ea580c88,stroke:#c2410c,color:#ffffff,stroke-width:2px,rx:8,ry:8
class E user
class F,G hook
class H,I result
👤 User: Submits search query through Kibana, API, or any Elasticsearch client
🔄 Rewrite: ActionFilter intercepts the search request and converts standard queries to L1ES equivalents
🔍 Match: L1ES queries match search terms against template patterns and encoded values
📖 Expand: Fetch sub-phase decodes encoded fields in _source and fields
📊 Results: Full-fidelity events returned with original field names and values
Compact Documents in Elasticsearch
A compact event replaces the log message with a template reference and variable values. Here is the same event before and after optimization:
Original event _source:
{
"message": "2026-02-25T14:03:22Z INFO [http-handler] POST /api/v2/orders completed in 42ms status=200 bytes=1583 user=acct_7291",
"@timestamp": "2026-02-25T14:03:22.000Z",
"kubernetes.pod_name": "order-svc-6f8b4d-xk2lp"
}
Compact event _source (as stored in Elasticsearch):
{
"message": "~a3f29c01,2026-02-25T14:03:22Z,/api/v2/orders,42,200,1583,acct_7291",
"@timestamp": "2026-02-25T14:03:22.000Z",
"kubernetes.pod_name": "order-svc-6f8b4d-xk2lp"
}
Expanded event (returned by L1ES at query time):
Identical to the original. L1ES looks up template a3f29c01 in the l1es_dml index, reconstructs the full message from the template pattern and variable values, and returns it in _source. Kibana, dashboards, and alerts see the original text.
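As a toy illustration of that lookup-and-substitute step — the template pattern below is an assumption (the real pattern lives in `l1es_dml` under hash `a3f29c01`), with `{}` marking variable slots:

```shell
# Encoded message as stored: ~hash followed by comma-separated variable values
encoded='~a3f29c01,2026-02-25T14:03:22Z,/api/v2/orders,42,200,1583,acct_7291'
# Hypothetical template pattern for hash a3f29c01 ({} = variable slot)
template='{} INFO [http-handler] POST {} completed in {}ms status={} bytes={} user={}'

values=${encoded#~*,}   # strip the leading ~hash,
msg=$template
IFS=','
for v in $values; do    # fill each {} slot left to right
  msg="${msg%%\{\}*}$v${msg#*\{\}}"
done
unset IFS
echo "$msg"             # reconstructed original log line
```

The real plugin does this per hit in the fetch sub-phase; this sketch only shows the substitution idea.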
What changes and what stays the same:
| Field | Compact? | Notes |
|---|---|---|
| `message` (or configured source field) | Yes | Replaced with `~hash,val1,val2,...` |
| `@timestamp` | No | Passed through unchanged |
| All other fields | No | Metadata, labels, Kubernetes fields unchanged |
| Index mappings | No | Same field types, same index patterns |
Only the field registered via _l1es/add-dml-index is compacted. Everything else is stored and indexed exactly as before.
Query Behavior
L1ES intercepts standard Elasticsearch queries and rewrites them to search across compact content. The following query types are transparently rewritten:
Supported Query Types
| Query Type | Behavior |
|---|---|
| `match` | Rewritten to `l1es_match` — searches template patterns and variable values |
| `match_phrase` | Rewritten to `l1es_match_phrase` — phrase matching across decoded content |
| `multi_match` | Rewritten to `l1es_multi_match` — multi-field search across decoded content |
| KQL (Kibana) | KQL compiles to `match`/`match_phrase` — works transparently |
These cover the queries generated by Kibana Discover, Kibana dashboards, and most saved searches. No query changes needed.
Not Rewritten
| Query Type | Behavior |
|---|---|
| `term` / `terms` | Searches the raw indexed value — matches compact form, not decoded text |
| `wildcard` / `regexp` / `fuzzy` | Operates on raw indexed tokens |
| `range` | Works on non-compacted fields (e.g., `@timestamp`) — not applicable to compact text fields |
| Aggregations (`terms`, `significant_terms`) | Aggregate on raw indexed values — compact field values appear as `~hash,...` in buckets |
| `highlight` | Highlights raw indexed tokens, not decoded text |
Practical impact: Most Kibana usage (Discover search bar, dashboard panels, alerting rules) relies on match and match_phrase queries, which are fully supported. Direct term queries and aggregations on the compacted field will see the raw compact form.
Workaround for aggregations: Use aggregations on non-compacted fields (e.g., kubernetes.pod_name, level, @timestamp) which are stored unchanged. For aggregations that must operate on decoded message content, use Storage Streamer to expand events into a separate index.
_source and fields
Both `_source` and `fields` responses decode the compacted field automatically when `source_decoding_enabled` and `decoding_enabled` (respectively) are true — both default to true. API consumers, Kibana document views, and CSV exports receive the original text.
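For example, requesting the compacted field via the `fields` parameter should return decoded text (index and field names follow the Quickstart registration example — adjust to your setup):

```shell
# Ask for the compacted field explicitly; with decoding_enabled: true
# the response contains the original log text, not the ~hash form
curl -X POST 'http://localhost:9200/my-logs/_search' \
  -H 'Content-Type: application/json' \
  -d '{"query":{"match_all":{}},"fields":["message"],"size":1}'
```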
Quickstart
To get searchable compact events in Elasticsearch in under 15 minutes:
Step 1: Install the Plugin
Install the L1ES plugin on each Elasticsearch data node:
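A sketch of the install step — the plugin zip path is an assumption; use the build matching your cluster version (see Platform Support):

```shell
# Elasticsearch (run on each data node; use bin/opensearch-plugin on OpenSearch)
bin/elasticsearch-plugin install --batch file:///path/to/l1es-plugin-0.3.0.es.8.17.0.zip
```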
Restart Elasticsearch (or OpenSearch) after installing.
Prerequisites:
| Requirement | Description |
|---|---|
| Elasticsearch 8.17.0 or OpenSearch 2.19.0 | Self-hosted deployment with plugin install access |
| Java 17+ | Required by Elasticsearch 8.x and OpenSearch 2.x |
| Admin access | Needed to install plugins and restart nodes |
Step 2: Initialize and Register
Initialize the plugin's internal indices:
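Assuming the cluster is reachable on `localhost:9200`, the setup call is a single POST:

```shell
# Creates the internal indices (l1es_dml, l1es_dml_indices)
curl -X POST 'http://localhost:9200/_l1es/setup'
```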
Register which index and field contain encoded data:
curl -X POST 'http://localhost:9200/_l1es/add-dml-index' \
-H 'Content-Type: application/json' \
-d '{
"index_name": "my-logs",
"source": "message",
"dest": "decoded_message"
}'
| Parameter | Description |
|---|---|
| `index_name` | Your data index containing encoded events |
| `source` | The field containing encoded events (e.g., `message`) |
| `dest` | Field name for decoded output (defaults to `source` if omitted) |
Repeat add-dml-index for each index that contains encoded data.
Step 3: Configure Forwarder
Configure your log forwarder to send encoded events and templates to Elasticsearch.
Include 10x optimizer configuration:
@INCLUDE ${TENX_MODULES}/pipelines/run/modules/input/forwarder/fluentbit/conf/tenx-optimize.conf
@INCLUDE ${TENX_MODULES}/pipelines/run/modules/input/forwarder/fluentbit/conf/tenx-unix.conf
Configure Elasticsearch outputs:
# ========================= TEMPLATES OUTPUT =========================
# Routes templates to l1es_dml index for plugin template lookup
[OUTPUT]
    Name                es
    Match               tenx-template
    Host                your-elasticsearch-host.com
    Port                9200
    Index               l1es_dml
    Type                _doc
    Suppress_Type_Name  On
    TLS                 On
    TLS.Verify          Off

# ========================= ENCODED EVENTS OUTPUT ====================
# Routes encoded log events to your target index
[OUTPUT]
    Name                es
    Match_Regex         ^(?!tenx-template).*
    Host                your-elasticsearch-host.com
    Port                9200
    Index               my-logs
    Type                _doc
    Suppress_Type_Name  On
    TLS                 On
    TLS.Verify          Off
Include 10x optimizer configuration:
@include "#{ENV['TENX_MODULES']}/pipelines/run/modules/input/forwarder/fluentd/conf/tenx-optimize-unix.conf"
Configure Elasticsearch outputs:
# ========================= TEMPLATES OUTPUT =========================
<match tenx-template>
  @type elasticsearch
  host your-elasticsearch-host.com
  port 9200
  index_name l1es_dml
  type_name _doc
  suppress_type_name true
</match>

# ========================= ENCODED EVENTS OUTPUT ====================
<match **>
  @type elasticsearch
  host your-elasticsearch-host.com
  port 9200
  index_name my-logs
  type_name _doc
  suppress_type_name true
</match>
Configure exporters in your OTel config:
exporters:
  elasticsearch/templates:
    endpoints: ["https://your-elasticsearch-host.com:9200"]
    logs_index: "l1es_dml"
    tls:
      insecure_skip_verify: true
  elasticsearch/encoded:
    endpoints: ["https://your-elasticsearch-host.com:9200"]
    logs_index: "my-logs"
    tls:
      insecure_skip_verify: true
service:
  pipelines:
    logs/templates:
      receivers: [tenx_templates]
      exporters: [elasticsearch/templates]
    logs/encoded:
      receivers: [tenx_encoded]
      exporters: [elasticsearch/encoded]
Step 4: Verify End-to-End
Run these queries to confirm everything is working:
1. Check templates are loaded:
Expected: `count` > 0
2. Check encoded events are indexed:
Expected: `count` > 0
3. Search with a standard query (transparent rewriting):
curl -X POST 'http://localhost:9200/my-logs/_search' \
-H 'Content-Type: application/json' \
-d '{"query":{"match":{"message":"error"}},"size":3}'
Expected: decoded `_source` with the original log text, not `~hash,val1,val2...`
4. Open Kibana Discover:
Navigate to Kibana, select your index pattern, and search using KQL (e.g., message: "error"). Results should display the full original log events.
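Checks 1 and 2 above can be run as count queries (host and index names taken from this guide's examples):

```shell
# 1. Templates loaded into the internal template index?
curl 'http://localhost:9200/l1es_dml/_count'
# 2. Encoded events present in the data index?
curl 'http://localhost:9200/my-logs/_count'
```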
Verification Checklist
Use this checklist to diagnose issues at each stage of the pipeline.
Plugin Loaded?
Test:
| Result | Meaning | Action |
|---|---|---|
| JSON with version | Plugin loaded | Proceed to setup check |
| 400/404 error | Plugin not installed | Reinstall plugin, restart Elasticsearch |
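A minimal version of this test, assuming the default port:

```shell
# Returns plugin info JSON (version, description) when the plugin is loaded
curl 'http://localhost:9200/_l1es'
```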
Internal Indices Created?
Test:
| Result | Meaning | Action |
|---|---|---|
| `l1es_dml` and `l1es_dml_indices` listed | Setup complete | Proceed to template check |
| No indices | Setup not run | Run `POST _l1es/setup` |
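The test command, assuming the default port:

```shell
# Both l1es_dml and l1es_dml_indices should appear after setup
curl 'http://localhost:9200/_cat/indices/l1es_*?v'
```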
Templates Loaded?
Test:
| Result | Meaning | Action |
|---|---|---|
| Count > 0 | Templates present | Proceed to search check |
| Count = 0 | No templates loaded | Check forwarder config, verify templates are being sent to l1es_dml index |
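The test command, assuming the default port:

```shell
# A non-zero count means templates are flowing into the internal index
curl 'http://localhost:9200/l1es_dml/_count'
```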
Queries Returning Decoded Results?
Test:
curl -X POST 'http://localhost:9200/my-logs/_search' \
-H 'Content-Type: application/json' \
-d '{"query":{"match":{"message":"your-search-term"}},"size":1}'
| Result | Meaning | Action |
|---|---|---|
| Decoded `_source` with original text | Working correctly | Done |
| `~hash,val1,val2...` in `_source` | Source decoding not active | Check `source_decoding_enabled: true` in `l1es.yml`, verify field is registered via `add-dml-index` |
| 0 hits | Query rewriting not matching | Check `query_rewrite_enabled: true` in `l1es.yml`, verify template hash exists in `l1es_dml` |
Troubleshooting
Standard Queries Return 0 Hits on Encoded Data
Symptom: A match or match_phrase query on an encoded field returns no results, even though the data is indexed.
Common Causes:
| Cause | Solution |
|---|---|
| `query_rewrite_enabled` is false | Set to `true` in `config/l1es.yml` and restart |
| Field not registered | Run `POST _l1es/add-dml-index` for the index and field |
| Templates not loaded | Check `l1es_dml` index has matching template hashes |
| Wrong index name in registration | Verify `index_name` matches your data index exactly |
Kibana Shows Encoded Text Instead of Decoded
Symptom: Kibana Discover displays ~hash,val1,val2... instead of the original log line.
Common Causes:
| Cause | Solution |
|---|---|
| `source_decoding_enabled` is false | Set to `true` in `config/l1es.yml` and restart |
| Field not registered | Run `POST _l1es/add-dml-index` with the correct source field |
| Template hash not found | Verify template exists: `GET l1es_dml/_doc/<hash-from-event>` |
Plugin Not Loading After Install
Symptom: GET _l1es returns 400 or the endpoint is not found.
Diagnostic Steps:
1. Check Elasticsearch logs for plugin loading errors.
2. Verify the plugin is listed.
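Assuming a tarball install with default log paths (adjust for package or container deployments), the two checks look like:

```shell
# 1. Scan the Elasticsearch log for plugin loading errors (path is an assumption)
grep -i 'l1es' /var/log/elasticsearch/elasticsearch.log | tail -20
# 2. List installed plugins; l1es should be present
bin/elasticsearch-plugin list
```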
| Error | Cause | Solution |
|---|---|---|
| `java.lang.UnsupportedClassVersionError` | Wrong Java version | L1ES requires Java 17+ |
| Plugin version mismatch | ES/OS version mismatch | Use the plugin build matching your ES/OS version |
| Plugin not in list | Install failed | Reinstall with `--batch` flag |
Configuration
The plugin reads config/l1es.yml from its plugin directory. Key settings:
flags:
  enabled: true                      # Master switch
  query_rewrite_enabled: true        # Transparent rewriting of standard queries
  source_decoding_enabled: true      # Decode encoded fields in _source responses
  decoding_enabled: true             # Decode encoded fields in 'fields' responses
  match_query_enabled: true          # Enable l1es_match query type
  match_phrase_query_enabled: true   # Enable l1es_match_phrase query type
  multi_match_query_enabled: true    # Enable l1es_multi_match query type
| Flag | Default | Description |
|---|---|---|
| `query_rewrite_enabled` | `true` | Converts standard `match`/`match_phrase`/`multi_match` to L1ES equivalents |
| `source_decoding_enabled` | `true` | Decodes encoded fields in `_source` for registered indices |
| `decoding_enabled` | `true` | Decodes encoded fields when requested via the `fields` parameter |
Components
| Component | Description |
|---|---|
| Query Rewriter | Recursive query tree walker that converts standard queries to L1ES equivalents |
| Action Filter | ES ActionFilter intercepting search requests for transparent rewriting |
| Fetch Sub-Phase | Decodes _source and fields for encoded events in search responses |
| Template Index | Internal l1es_dml index storing template patterns for lookup at query time |
| REST Handlers | _l1es/setup, _l1es/add-dml-index, and other management endpoints |
Platform Support
L1ES ships as two separate plugin builds — one for Elasticsearch, one for OpenSearch. Both are functionally identical.
| Platform | Version | Plugin Build |
|---|---|---|
| Elasticsearch | 8.17.0 | l1es-plugin-0.3.0.es.8.17.0.zip |
| OpenSearch | 2.19.0 | l1es-plugin-0.3.0.os.2.19.0.zip |
The plugin must be installed on every data node in your cluster. Coordinating-only nodes and Kibana instances do not need the plugin.
For managed services (Elastic Cloud, AWS OpenSearch Service) where custom plugins cannot be installed, use Storage Streamer to expand compact events from S3 before ingestion.
Version compatibility: Each plugin build is compiled against a specific Elasticsearch/OpenSearch version. The plugin version must match your cluster version exactly — an ES 8.17.0 plugin will not load on ES 8.16.x or 8.18.x. Check GitHub releases for available builds.
Production Operations
Rolling Upgrades
L1ES supports rolling upgrades without downtime. Upgrade one data node at a time:
1. Disable shard allocation — prevent rebalancing during the restart.
2. Stop Elasticsearch on the target node.
3. Install the new plugin version (remove old, install new).
4. Start Elasticsearch on the node.
5. Re-enable shard allocation.
6. Wait for green status before proceeding to the next node.
Repeat for each data node. During the upgrade, nodes running the old plugin version continue to expand queries — mixed-version operation is safe as long as the l1es_dml index schema has not changed between versions (check the release notes).
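The per-node cycle can be sketched as follows — the plugin id, zip path, and systemd service name are assumptions; the allocation and health calls are standard Elasticsearch APIs:

```shell
# 1. Disable shard allocation cluster-wide (primaries only keeps writes flowing)
curl -X PUT 'http://localhost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' \
  -d '{"persistent":{"cluster.routing.allocation.enable":"primaries"}}'

# 2-4. On the target node: stop, swap the plugin, start
sudo systemctl stop elasticsearch
bin/elasticsearch-plugin remove l1es
bin/elasticsearch-plugin install --batch file:///path/to/l1es-plugin-<new>.zip
sudo systemctl start elasticsearch

# 5. Re-enable shard allocation (null restores the default)
curl -X PUT 'http://localhost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' \
  -d '{"persistent":{"cluster.routing.allocation.enable":null}}'

# 6. Wait for green before moving to the next node
curl 'http://localhost:9200/_cluster/health?wait_for_status=green&timeout=120s'
```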
Operator Checklist
Pre-production readiness checklist for L1ES deployments:
| Category | Check | How to Verify |
|---|---|---|
| Install | Plugin installed on every data node | GET _l1es returns version on each node |
| Install | Plugin version matches ES/OS version exactly | Compare plugin build version to GET / output |
| Setup | Internal indices created | GET _cat/indices/l1es_*?v lists l1es_dml and l1es_dml_indices |
| Setup | Target indices registered | GET _l1es/dml-indices lists your data indices |
| Forwarder | Templates routed to `l1es_dml` | `GET l1es_dml/_count` returns > 0 |
| Forwarder | Compact events routed to data index | `GET my-logs/_count` returns > 0 |
| Search | Query rewriting active | `match` query returns decoded results |
| Search | `_source` decoding active | Document view in Kibana shows original text |
| Config | `l1es.yml` flags reviewed | All three flags default `true` — adjust if needed |
| Ops | Cluster health green after install | GET _cluster/health |
| Ops | Plugin appears in node info | GET _nodes/plugins lists l1es on each data node |
Performance
The L1ES plugin adds ~1.25x overhead to search queries on compacted fields. This overhead comes from two operations:
- Query rewriting — translating `match`/`match_phrase` to L1ES equivalents (per-query, microseconds)
- `_source` decoding — template lookup and string reconstruction for each hit (per-document, sub-millisecond)
The overhead scales with result set size, not data volume. A query returning 500 hits decodes 500 documents regardless of whether the index holds 1M or 1B events.
Net effect on cluster performance: The 50%+ storage reduction from compact events means fewer data nodes, less SSD, and less memory required for the same retention period. The reduced shard sizes also improve baseline query performance (smaller segments to scan). For most clusters, the net result is faster searches on cheaper infrastructure.
REST API
| Endpoint | Method | Description |
|---|---|---|
| `_l1es` | GET | Plugin info (version, description) |
| `_l1es/setup` | POST | Create internal indices (`l1es_dml`, `l1es_dml_indices`) |
| `_l1es/cleanup` | POST | Remove internal indices |
| `_l1es/add-dml-index` | POST | Register an encoded field mapping for an index |
| `_l1es/remove-dml-index` | POST | Unregister an encoded field mapping |
This plugin is open source. View on GitHub.