Deploy
Deploy the Reducer app to Kubernetes via Helm.
The chart deploys your forwarder with the 10x Engine as a sidecar process. Most forwarders run as a DaemonSet, while Logstash runs as a StatefulSet.
Step 1: Prerequisites
| Requirement | Description |
|---|---|
| Log10x License | Your license key (get one) |
| Helm | Helm CLI installed |
| kubectl | Configured to access your cluster |
| GitHub Token | Personal access token for config repo (create one) |
| Output Destination | Elasticsearch, Splunk, or other log backend configured |
Step 2: Add Helm Repository
Vector uses the upstream chart (no Log10x fork) — the integration is a values overlay added to your existing Vector deployment.
For Kubernetes, use the Fluent Bit tab — Splunk Connect for Kubernetes is Fluent Bit-based. For VM infrastructure, see the Splunk UF reducer guide.
For Kubernetes, use the Fluent Bit or OTel Collector tab. For VM infrastructure, see the Datadog Agent reducer guide.
View all chart values for the chart you added with `helm show values <repo>/<chart>`.
Step 3: Configure Deployment Settings
Create a new file called my-reducer.yaml in your working directory. This Helm values file will be used in all subsequent steps.
All 10x values are nested under the tenx block. The Log10x charts also retain every original value from the official Fluentd, Fluent Bit, and Filebeat charts.
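As a starting point, a minimal my-reducer.yaml core block might look like the following (these keys mirror the quickstart samples at the end of this page; substitute your own license key and runtime name):

```yaml
# Minimal my-reducer.yaml starting point — extended in the steps below.
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"   # your Log10x license key
  kind: "regulate"
  runtimeName: "my-reducer"
```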
Vector has no tenx: values block — there is no Log10x fork of the chart. Instead, the integration is a values overlay (extraContainers + extraVolumes + customConfig + image.pullSecrets) added to your existing Vector values file. See the Vector forwarder guide for the full overlay, including the read-only / regulate / optimize mode selection via the reducerReadOnly / reducerOptimize env vars.
Step 4: Load Configuration
Load the 10x Engine config folder into the cluster using one of the methods below.
If you skip this step, the default configuration bundled with the Log10x image is used.
An init container clones your configuration repository before each pod starts. Works with GitHub, GitLab, Bitbucket, or any HTTPS-accessible Git provider.
- Fork the Config Repository
- Create a branch for your configuration changes
- Edit the app configuration to match your metric output and enrichment options
Add to your Helm values:
tenx:
  config:
    git:
      enabled: true
      url: "https://github.com/YOUR-ACCOUNT/config.git"
      branch: "my-reducer-config" # Optional
    # symbols:                    # Uncomment if using symbol library
    #   git:
    #     enabled: true
    #     url: "https://github.com/YOUR-ACCOUNT/symbols.git"
  gitToken: "YOUR-GIT-TOKEN"
For production, store the token in a Kubernetes Secret rather than in the values file.
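A standard Kubernetes Secret works for this; the secret name below is hypothetical, and how the chart consumes it depends on your chart's values:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tenx-git-token      # hypothetical name — align with your chart's values
type: Opaque
stringData:
  token: "YOUR-GIT-TOKEN"   # stored base64-encoded by Kubernetes
```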
Mount an existing PersistentVolumeClaim that contains your configuration directory. This approach works in air-gapped environments and requires no external network access.
- Create a PVC containing your configuration files (cloned from the Config Repository)
- Reference it in your Helm values:
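The exact keys depend on the chart; as a sketch, assuming hypothetical pvc key names under the same tenx.config nesting as the Git method above:

```yaml
tenx:
  config:
    pvc:                            # hypothetical keys — check your chart's values
      enabled: true
      claimName: "tenx-config-pvc"  # your pre-populated PVC
```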
Step 5: Configure Secrets
Store sensitive credentials in Kubernetes Secrets. Only add secrets for metric outputs you've configured.
Create the secret:
kubectl create secret generic reducer-credentials \
--from-literal=elasticsearch-username=elastic \
--from-literal=elasticsearch-password=YOUR_ES_PASSWORD \
--from-literal=datadog-api-key=YOUR_DATADOG_API_KEY
Note: Only include credentials for outputs you've configured.
Add secret references to your my-reducer.yaml:
env:
  # For Datadog metrics
  - name: DD_API_KEY
    valueFrom:
      secretKeyRef:
        name: reducer-credentials
        key: datadog-api-key
  # For Elasticsearch metrics
  # - name: ELASTIC_API_KEY
  #   valueFrom:
  #     secretKeyRef:
  #       name: reducer-credentials
  #       key: elastic-api-key
  # For AWS CloudWatch metrics
  # - name: AWS_ACCESS_KEY_ID
  #   valueFrom:
  #     secretKeyRef:
  #       name: reducer-credentials
  #       key: aws-access-key-id
  # For SignalFx metrics
  # - name: SIGNALFX_ACCESS_TOKEN
  #   valueFrom:
  #     secretKeyRef:
  #       name: reducer-credentials
  #       key: signalfx-access-token
daemonset:
  extraEnvs:
    # For Elasticsearch output
    - name: ELASTICSEARCH_USERNAME
      valueFrom:
        secretKeyRef:
          name: reducer-credentials
          key: elasticsearch-username
    - name: ELASTICSEARCH_PASSWORD
      valueFrom:
        secretKeyRef:
          name: reducer-credentials
          key: elasticsearch-password
    # For Datadog metrics (optional)
    # - name: DD_API_KEY
    #   valueFrom:
    #     secretKeyRef:
    #       name: reducer-credentials
    #       key: datadog-api-key
Vector additionally needs a docker-registry secret for the private GHCR image used by the 10x sidecar:
kubectl create secret docker-registry ghcr-log10x \
--namespace=logging \
--docker-server=ghcr.io \
--docker-username=YOUR-GHCR-USER \
--docker-password=YOUR-GHCR-TOKEN
Then reference it from your overlay:
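The overlay references the secret through the chart's image.pullSecrets value (this matches the full Vector sample at the end of this page):

```yaml
image:
  pullSecrets:
    - name: ghcr-log10x
```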
Metric-output secrets (Datadog, Elastic, etc.) ride on the 10x sidecar's env block — see the Vector forwarder guide.
Step 6: Configure Forwarder
Configure your forwarder for log collection and output destinations. The Log10x reducer filters events before they reach your final destination.
Configure your output destination. The chart automatically routes events through the reducer.
Note: The Log10x chart automatically configures event routing through the reducer.
daemonset:
  filebeatConfig:
    filebeat.yml: |
      filebeat.inputs:
        - type: filestream
          id: tenx_internal
          paths:
            - /var/log/tenx/*.log
          fields:
            log_type: tenx_internal
        - type: container
          paths:
            - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
              - logs_path:
                  logs_path: "/var/log/containers/"
      output.elasticsearch:
        hosts: '["https://elasticsearch-master:9200"]'
        username: '${ELASTICSEARCH_USERNAME}'
        password: '${ELASTICSEARCH_PASSWORD}'
        indices:
          - index: "tenx_internal"
            when.equals:
              fields.log_type: "tenx_internal"
          - index: "logs-filtered-%{+yyyy.MM.dd}"
mode: "daemonset"
config:
  receivers:
    filelog:
      include: [/var/log/pods/*/*/*.log]
      operators:
        - type: container
          id: container-parser
  exporters:
    elasticsearch:
      endpoints: ["https://elasticsearch-master:9200"]
      logs_index: logs-filtered
  service:
    pipelines:
      logs:
        receivers: [filelog]
        processors: [memory_limiter, batch]
        exporters: [elasticsearch]
Note: The Log10x chart automatically configures sidecar communication for filtering.
Vector's forwarder config is the customConfig block in the same overlay file. The overlay already adds the Unix-socket tenx_in sink and tenx_out source — you keep your existing sources and switch your destination sinks to consume tenx_out (the regulated stream) instead of the originals. Full template in the Vector forwarder guide.
logstashPipeline:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      elasticsearch {
        hosts => ["elasticsearch-master:9200"]
        index => "logs-filtered-%{+YYYY.MM.dd}"
      }
    }
Note: The Log10x chart automatically configures sidecar communication for filtering.
Step 7: Deploy
Create your namespace if needed (`kubectl create namespace logging`), then deploy with `helm upgrade --install my-reducer <repo>/<chart> -n logging -f my-reducer.yaml`.
Layer the Log10x overlay (tenx-overlay.yaml from the Vector forwarder guide) on top of your existing Vector values:
Step 8: Verify
Verify the install in three phases: pods Ready → 10x processor alive → regulated events flowing. A probe passes when its commands exit 0.
Phase A — pods Ready
The selector depends on the forwarder chart family: log10x-fluent/* and log10x-otel/* charts use the Kubernetes-recommended label set (e.g. `app.kubernetes.io/name`, `app.kubernetes.io/instance`), while log10x-elastic/* charts use legacy Helm labels (e.g. `app`, `release`).
Phase B — 10x reducer plugin alive
Look for reducer initialization lines in the forwarder container (10x runs inside the forwarder image — no separate sidecar for any log10x-repackaged chart).
Phase C — regulated events flowing
Confirm the forwarder is actually writing regulated events to its destination. For a real destination, check the destination's UI (Elasticsearch index, Splunk sourcetype, Datadog logs view). For mock/stdout output, grep the forwarder logs for the TENX-MOCK marker (e.g. `kubectl logs <pod-name> -n logging | grep TENX-MOCK`).
Once running, view your mute/sample activity in the Reducer Dashboard.
Step 9: Teardown
Uninstall the Helm release, e.g. `helm uninstall my-reducer -n logging`.
Clean up derived resources (use the chart family's label convention):
Verify nothing remains:
Delete the namespace (optional): `kubectl delete namespace logging`.
Quickstart Full Sample
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "regulate"
  runtimeName: "my-reducer-filebeat"
  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"
daemonset:
  filebeatConfig:
    filebeat.yml: |
      filebeat.inputs:
        - type: filestream
          id: tenx_internal
          paths:
            - /var/log/tenx/*.log
          fields:
            log_type: tenx_internal
        - type: container
          paths:
            - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
              - logs_path:
                  logs_path: "/var/log/containers/"
      output.elasticsearch:
        hosts: '["https://elasticsearch-master:9200"]'
        indices:
          - index: "tenx_internal"
            when.equals:
              fields.log_type: "tenx_internal"
          - index: "logs-filtered-%{+yyyy.MM.dd}"
mode: "daemonset"
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "regulate"
  runtimeName: "my-otel-reducer"
  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"
config:
  exporters:
    elasticsearch:
      endpoints: ["https://elasticsearch-master:9200"]
      logs_index: logs-filtered
  service:
    pipelines:
      logs:
        receivers: [filelog]
        processors: [memory_limiter, batch]
        exporters: [elasticsearch]
Vector's quickstart is a values overlay applied to your existing Vector chart install — there is no tenx: block. Save as tenx-overlay.yaml and apply with `helm upgrade --install my-reducer vector/vector -f your-existing-vector-values.yaml -f tenx-overlay.yaml`.
image:
  pullSecrets:
    - name: ghcr-log10x
extraVolumes:
  - name: tenx-sockets
    emptyDir: {}
extraVolumeMounts:
  - name: tenx-sockets
    mountPath: /tmp/tenx-sockets
extraContainers:
  - name: tenx
    image: ghcr.io/log-10x/pipeline-10x-dev:vector
    args: ["run", "@run/input/forwarder/vector/regulate", "@apps/reducer"]
    env:
      - name: TENX_API_KEY
        value: "YOUR-LICENSE-KEY-HERE"
      - name: vectorInputPath
        value: "/tmp/tenx-sockets/tenx-vector-in.sock"
      - name: vectorOutputForwardAddress
        value: "/tmp/tenx-sockets/tenx-vector-out.sock"
      # - name: reducerReadOnly   # uncomment for read-only / non-intervening mode
      #   value: "true"
    volumeMounts:
      - name: tenx-sockets
        mountPath: /tmp/tenx-sockets
podSecurityContext:
  runAsUser: 0
  runAsGroup: 0
  fsGroup: 0
customConfig:
  sources:
    tenx_out:
      type: fluent
      mode: unix
      path: /tmp/tenx-sockets/tenx-vector-out.sock
  sinks:
    tenx_in:
      type: socket
      inputs: [YOUR-EXISTING-SOURCE-NAMES]
      mode: unix
      path: /tmp/tenx-sockets/tenx-vector-in.sock
      encoding:
        codec: text
  # Switch your destination sink(s) to consume `tenx_out`
  # final_destination:
  #   type: elasticsearch
  #   inputs: [tenx_out]
  #   ...
Full walkthrough in the Vector forwarder guide.
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "regulate"
  runtimeName: "my-logstash-reducer"
  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"
# Logstash pipeline for final destination
logstashPipeline:
  output.conf: |
    output {
      elasticsearch {
        hosts => ["elasticsearch-master:9200"]
        index => "logs-filtered"
      }
    }
Datadog Output Examples
To send filtered events to Datadog, use the file relay pattern: Fluent Bit writes regulated events to a folder that the Datadog Agent monitors. This keeps the Datadog Agent as the forwarder (handling buffering, retries, metadata enrichment) while 10x regulates events inline.
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "regulate"
  runtimeName: "my-fluentbit-reducer"
config:
  outputs: |
    [OUTPUT]
        Name   file
        Match  *
        Path   /var/log/regulated
        Format plain
Then configure the Datadog Agent to monitor the regulated output folder:
logs:
  - type: file
    path: /var/log/regulated/*.log
    service: myapp
    source: myapp
On EKS, share /var/log/regulated between the Fluent Bit + 10x pod and the Datadog Agent DaemonSet with a hostPath volume mounted in both; an emptyDir is scoped to a single pod and cannot be shared across pods.
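As a sketch of the Datadog Agent side, assuming the official Datadog Helm chart's agents.volumes / agents.volumeMounts values and a node-local hostPath at /var/log/regulated:

```yaml
agents:
  volumes:
    - name: regulated-logs
      hostPath:
        path: /var/log/regulated
  volumeMounts:
    - name: regulated-logs
      mountPath: /var/log/regulated
      readOnly: true   # the Agent only tails these files
```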
mode: "daemonset"
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "regulate"
  runtimeName: "my-otel-reducer"
config:
  exporters:
    datadog:
      api:
        key: "${env:DD_API_KEY}"
        site: datadoghq.com
  service:
    pipelines:
      logs:
        receivers: [filelog]
        processors: [memory_limiter, batch]
        exporters: [datadog]
Splunk HEC Output Examples
To send filtered events to Splunk instead of Elasticsearch, use Splunk HEC output.
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "regulate"
  runtimeName: "my-fluentd-reducer"
outputConfigs:
  06_final_output.conf: |-
    <label @FINAL-OUTPUT>
      <match **>
        @type splunk_hec
        hec_host "splunk-hec.example.com"
        hec_port 8088
        hec_token "YOUR-HEC-TOKEN"
        index main
        source kubernetes
      </match>
    </label>
mode: "daemonset"
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "regulate"
  runtimeName: "my-otel-reducer"
config:
  exporters:
    splunk_hec:
      endpoint: "https://splunk-hec.example.com:8088/services/collector"
      token: "YOUR-HEC-TOKEN"
      index: main
      tls:
        insecure_skip_verify: true
  service:
    pipelines:
      logs:
        receivers: [filelog]
        processors: [memory_limiter, batch]
        exporters: [splunk_hec]
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "regulate"
  runtimeName: "my-logstash-reducer"
logstashPipeline:
  output.conf: |
    output {
      http {
        url => "https://splunk-hec.example.com:8088/services/collector"
        http_method => "post"
        headers => ["Authorization", "Splunk YOUR-HEC-TOKEN"]
        format => "json"
      }
    }