Deploy

Deploy the Edge Optimizer app to Kubernetes via Helm.

The chart deploys your forwarder with the 10x engine as a sidecar process. Most forwarders run as a DaemonSet, while Logstash runs as a StatefulSet.

Step 1: Prerequisites

  - Log10x License: your license key (get one)
  - Helm: the Helm CLI installed
  - kubectl: configured to access your cluster
  - GitHub Token: a personal access token for the config repo (create one)
  - Output Destination: Elasticsearch or Splunk for optimized events and templates
Step 2: Add Helm Repository
Fluentd:

helm repo add log10x-fluent https://log-10x.github.io/fluent-helm-charts
helm repo update
helm search repo fluentd

Fluent Bit:

helm repo add log10x-fluent https://log-10x.github.io/fluent-helm-charts
helm repo update
helm search repo fluent-bit

Filebeat:

helm repo add log10x-elastic https://log-10x.github.io/elastic-helm-charts
helm repo update
helm search repo filebeat-10x

OTel Collector:

helm repo add log10x-otel https://log-10x.github.io/opentelemetry-helm-charts
helm repo update
helm search repo opentelemetry-collector

Logstash:

helm repo add log10x-elastic https://log-10x.github.io/elastic-helm-charts
helm repo update
helm search repo logstash-10x

Splunk users: on Kubernetes, use the Fluent Bit chart (Splunk Connect for Kubernetes is Fluent Bit-based). For VM infrastructure, see the Splunk UF optimizer guide.

Datadog users: on Kubernetes, use the Fluent Bit or OTel Collector chart. For VM infrastructure, see the Datadog Agent optimizer guide.

View all chart values:

helm show values log10x-fluent/fluentd
helm show values log10x-fluent/fluent-bit
helm show values log10x-elastic/filebeat-10x
helm show values log10x-otel/otel-collector-10x
helm show values log10x-elastic/logstash-10x

Step 3: Configure Application

Create a new file called my-edge-optimizer.yaml in your working directory. This Helm values file will be used in all subsequent steps.

All 10x values are nested under the tenx block. Each chart retains all of the values from its upstream official chart (Fluentd, Fluent Bit, Filebeat, OTel Collector, Logstash).

my-edge-optimizer.yaml

tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "optimize"
  runtimeName: my-edge-optimizer

The same tenx block applies to every chart. For the OTel Collector chart, also set the deployment mode at the top level:

my-edge-optimizer.yaml (OTel Collector)

mode: "daemonset"

tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "optimize"
  runtimeName: my-edge-optimizer

Step 4: GitOps (optional)

Log10x uses GitOps to manage configuration centrally.

Setup steps:

  1. Fork the Config Repository
  2. Create a branch for your configuration
  3. Edit the app configuration to match your metric output and enrichment options

Add GitHub credentials to your my-edge-optimizer.yaml:

my-edge-optimizer.yaml
tenx:
  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"
      branch: "my-edge-optimizer-config"    # Optional

    symbols:
      enabled: false                         # Enable if using symbol library
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/SYMBOLS-REPO"
Step 5: Configure Secrets

Store sensitive credentials in Kubernetes Secrets. Only add secrets for metric outputs you've configured.

Create the secret:

kubectl create secret generic edge-optimizer-credentials \
  --from-literal=elasticsearch-username=elastic \
  --from-literal=elasticsearch-password=YOUR_ES_PASSWORD \
  --from-literal=datadog-api-key=YOUR_DATADOG_API_KEY
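Kubernetes stores Secret values base64-encoded, which matters when you inspect them with kubectl get secret -o yaml. A quick sketch of the encoding round-trip (the value here is the same placeholder as above):

```python
import base64

# What you pass to --from-literal above
plaintext = "YOUR_ES_PASSWORD"

# What Kubernetes stores in the Secret's .data field
encoded = base64.b64encode(plaintext.encode()).decode()

# Decoding recovers the original value, e.g. when debugging a Secret
# dumped with: kubectl get secret edge-optimizer-credentials -o yaml
decoded = base64.b64decode(encoded).decode()
assert decoded == plaintext
```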

Add secret references to your my-edge-optimizer.yaml:

my-edge-optimizer.yaml (Fluentd)

env:
  # For Datadog metrics
  - name: DD_API_KEY
    valueFrom:
      secretKeyRef:
        name: edge-optimizer-credentials
        key: datadog-api-key

  # For Elasticsearch metrics
  # - name: ELASTIC_API_KEY
  #   valueFrom:
  #     secretKeyRef:
  #       name: edge-optimizer-credentials
  #       key: elastic-api-key

  # For AWS CloudWatch metrics
  # - name: AWS_ACCESS_KEY_ID
  #   valueFrom:
  #     secretKeyRef:
  #       name: edge-optimizer-credentials
  #       key: aws-access-key-id

  # For SignalFx metrics
  # - name: SIGNALFX_ACCESS_TOKEN
  #   valueFrom:
  #     secretKeyRef:
  #       name: edge-optimizer-credentials
  #       key: signalfx-access-token

my-edge-optimizer.yaml (Fluent Bit)

env:
  # For Datadog metrics
  - name: DD_API_KEY
    valueFrom:
      secretKeyRef:
        name: edge-optimizer-credentials
        key: datadog-api-key

my-edge-optimizer.yaml (Filebeat)

daemonset:
  extraEnvs:
    # For Elasticsearch output
    - name: ELASTICSEARCH_USERNAME
      valueFrom:
        secretKeyRef:
          name: edge-optimizer-credentials
          key: elasticsearch-username
    - name: ELASTICSEARCH_PASSWORD
      valueFrom:
        secretKeyRef:
          name: edge-optimizer-credentials
          key: elasticsearch-password

    # For Datadog metrics (optional)
    # - name: DD_API_KEY
    #   valueFrom:
    #     secretKeyRef:
    #       name: edge-optimizer-credentials
    #       key: datadog-api-key

my-edge-optimizer.yaml (OTel Collector)

extraEnvs:
  - name: DD_API_KEY
    valueFrom:
      secretKeyRef:
        name: edge-optimizer-credentials
        key: datadog-api-key

my-edge-optimizer.yaml (Logstash)

extraEnvs:
  - name: DD_API_KEY
    valueFrom:
      secretKeyRef:
        name: edge-optimizer-credentials
        key: datadog-api-key

Step 6: Forwarder & Template Output

The Optimizer requires configuring both event routing and template output. Templates must be stored in a separate index (l1es_dml for Elasticsearch) to enable expansion of optimized events.
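To make the split concrete, here is a hypothetical sketch of how an optimized event could be re-expanded against a template fetched from l1es_dml by its templateHash. The field names (params) and the template placeholder syntax are illustrative assumptions, not the actual Log10x wire format:

```python
# Hypothetical illustration only: the real on-disk format is Log10x-internal.
templates = {
    # keyed by templateHash, as stored in the l1es_dml index
    "a1b2c3": "User {} logged in from {}",
}

optimized_event = {"templateHash": "a1b2c3", "params": ["alice", "10.0.0.7"]}

def expand(event, template_index):
    # Look up the template by hash and re-insert the variable parts
    template = template_index[event["templateHash"]]
    return template.format(*event["params"])

print(expand(optimized_event, templates))  # User alice logged in from 10.0.0.7
```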

Fluentd

Configure your output destination and template storage. The chart automatically routes events through the optimizer.

tenx:
  outputConfigs:
    # Final destination for optimized events
    06_final_output.conf: |-
      <label @FINAL-OUTPUT>
        <match **>
          @type elasticsearch
          host "elasticsearch-master"
          port 9200
          user elastic
          password changeme
        </match>
      </label>

    # Template output (required for optimizer)
    07_tenx_templates.conf: |-
      <label @TENX-TEMPLATE>
        <match **>
          @type elasticsearch
          host "elasticsearch-master"
          port 9200
          user elastic
          password changeme
          index_name l1es_dml
          id_key templateHash
        </match>
      </label>

Fluent Bit

Configure your output destination and template storage. The chart automatically routes events through the optimizer.

tenx:
  configFiles:
    # Template output (required for optimizer) - customize host if needed
    tenx-templates-output.conf: |
      [OUTPUT]
          Name es
          Match tenx-template
          Host elasticsearch-master
          Index l1es_dml
          Id_Key templateHash

config:
  outputs: |
    [OUTPUT]
        Name es
        Match kube.*
        Host elasticsearch-master
        Logstash_Format On
Filebeat

daemonset:
  filebeatConfig:
    filebeat.yml: |
      filebeat.inputs:
      - type: filestream
        id: tenx_internal
        paths:
          - /var/log/tenx/*.log
        fields:
          log_type: tenx_internal
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

      output.elasticsearch:
        hosts: '["https://elasticsearch-master:9200"]'
        username: '${ELASTICSEARCH_USERNAME}'
        password: '${ELASTICSEARCH_PASSWORD}'
        indices:
        - index: "tenx_internal"
          when.equals:
            fields.log_type: "tenx_internal"
        - index: "l1es_dml"
          when.has_fields: ["template", "templateHash"]
        - index: "logs-optimized-%{+yyyy.MM.dd}"

OTel Collector

Configure your standard log collection and output. The elasticsearch/templates exporter is a second, named instance of the elasticsearch exporter; it stores templates in the l1es_dml index used to expand optimized events.

mode: "daemonset"

config:
  receivers:
    filelog:
      include: [/var/log/pods/*/*/*.log]
      operators:
        - type: container
          id: container-parser

  exporters:
    elasticsearch:
      endpoints: ["https://elasticsearch-master:9200"]
      logs_index: logs-optimized
    elasticsearch/templates:
      endpoints: ["https://elasticsearch-master:9200"]
      logs_index: l1es_dml

  service:
    pipelines:
      logs:
        receivers: [filelog]
        processors: [memory_limiter, batch]
        exporters: [elasticsearch]

Note: The Log10x chart automatically routes templates to the elasticsearch/templates exporter and configures sidecar communication.

Logstash

The optimizer requires template output. Configure your standard input and output with a separate template index.

logstashPipeline:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      if [template] {
        elasticsearch {
          hosts => ["elasticsearch-master:9200"]
          index => "l1es_dml"
          document_id => "%{templateHash}"
        }
      } else {
        elasticsearch {
          hosts => ["elasticsearch-master:9200"]
          index => "logs-optimized-%{+YYYY.MM.dd}"
        }
      }
    }

Note: The Log10x chart automatically configures sidecar communication for optimization.

Step 7: Deploy

Create your namespace (if needed) and deploy:

kubectl create namespace logging
Fluentd:

helm install my-edge-optimizer log10x-fluent/fluentd \
  -f my-edge-optimizer.yaml \
  --namespace logging

Fluent Bit:

helm install my-edge-optimizer log10x-fluent/fluent-bit \
  -f my-edge-optimizer.yaml \
  --namespace logging

Filebeat:

helm install my-edge-optimizer log10x-elastic/filebeat-10x \
  -f my-edge-optimizer.yaml \
  --namespace logging

OTel Collector:

helm install my-edge-optimizer log10x-otel/otel-collector-10x \
  -f my-edge-optimizer.yaml \
  --namespace logging

Logstash:

helm install my-edge-optimizer log10x-elastic/logstash-10x \
  -f my-edge-optimizer.yaml \
  --namespace logging

Step 8: Verify

Check pods are running:

kubectl get pods -l app.kubernetes.io/instance=my-edge-optimizer -n logging

Check pod logs for errors:

kubectl logs -l app.kubernetes.io/instance=my-edge-optimizer -n logging --tail=100

Confirm that no errors appear in the logs.
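If the log volume is large, a small filter helps. This sketch (plain Python, no cluster access; the marker words are a reasonable default, not an official error format) scans captured log text, such as the output of the kubectl logs command above, for error-level lines:

```python
def find_errors(log_text):
    """Return lines containing common error markers (case-insensitive)."""
    markers = ("error", "fatal", "exception")
    return [line for line in log_text.splitlines()
            if any(m in line.lower() for m in markers)]

# Example: feed in text captured from `kubectl logs ... --tail=100`
sample = "INFO starting optimizer\nERROR license key rejected\nINFO ready"
print(find_errors(sample))  # ['ERROR license key rejected']
```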

View results in the dashboard:

Once running, view your cost analytics in the Edge Optimizer Dashboard.

Quickstart Full Sample
my-edge-optimizer.yaml (Fluentd)
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "optimize"
  runtimeName: "my-fluentd-optimizer"

  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"

  outputConfigs:
    06_final_output.conf: |-
      <label @FINAL-OUTPUT>
        <match **>
          @type elasticsearch
          host "elasticsearch-master"
          port 9200
        </match>
      </label>

    07_tenx_templates.conf: |-
      <label @TENX-TEMPLATE>
        <match **>
          @type elasticsearch
          host "elasticsearch-master"
          port 9200
          index_name l1es_dml
          id_key templateHash
        </match>
      </label>
my-edge-optimizer.yaml (Fluent Bit)
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "optimize"
  runtimeName: "my-fluent-bit-optimizer"

  configFiles:
    tenx-templates-output.conf: |
      [OUTPUT]
          Name es
          Match tenx-template
          Host elasticsearch-master
          Index l1es_dml
          Id_Key templateHash

config:
  outputs: |
    [OUTPUT]
        Name es
        Match kube.*
        Host elasticsearch-master
        Logstash_Format On
my-edge-optimizer.yaml (Filebeat)
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "optimize"
  runtimeName: "my-filebeat-optimizer"

  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"

daemonset:
  filebeatConfig:
    filebeat.yml: |
      filebeat.inputs:
      - type: filestream
        id: tenx_internal
        paths:
          - /var/log/tenx/*.log
        fields:
          log_type: tenx_internal
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

      output.elasticsearch:
        hosts: '["https://elasticsearch-master:9200"]'
        indices:
        - index: "tenx_internal"
          when.equals:
            fields.log_type: "tenx_internal"
        - index: "l1es_dml"
          when.has_fields: ["template", "templateHash"]
        - index: "logs-optimized-%{+yyyy.MM.dd}"
my-edge-optimizer.yaml (OTel Collector)
mode: "daemonset"

tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "optimize"
  runtimeName: "my-otel-optimizer"

  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"

config:
  exporters:
    elasticsearch:
      endpoints: ["https://elasticsearch-master:9200"]
      logs_index: logs-optimized
    elasticsearch/templates:
      endpoints: ["https://elasticsearch-master:9200"]
      logs_index: l1es_dml

  service:
    pipelines:
      logs:
        receivers: [filelog]
        processors: [memory_limiter, batch]
        exporters: [elasticsearch]
my-edge-optimizer.yaml (Logstash)
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "optimize"
  runtimeName: "my-logstash-optimizer"

  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"

# Logstash pipeline for final destination
logstashPipeline:
  output.conf: |
    output {
      elasticsearch {
        hosts => ["elasticsearch-master:9200"]
        index => "logs-optimized"
      }
    }

Datadog Output Examples

To send optimized events to Datadog, use the file relay pattern: Fluent Bit writes optimized events to a folder that the Datadog Agent monitors. This keeps the Datadog Agent as the forwarder (handling buffering, retries, metadata enrichment) while 10x optimizes events inline.

my-edge-optimizer.yaml (Fluent Bit)
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "optimize"
  runtimeName: "my-fluentbit-optimizer"

config:
  outputs: |
    [OUTPUT]
        Name         file
        Match        *
        Path         /var/log/optimized
        Format       plain

Then configure the Datadog Agent to monitor the optimized output folder:

datadog-agent conf.d/optimized.d/conf.yaml
logs:
  - type: file
    path: /var/log/optimized/*.log
    service: myapp
    source: myapp

On EKS, mount a shared hostPath volume (an emptyDir volume is scoped to a single pod, so it cannot be shared across pods) between the Fluent Bit + 10x pod and the Datadog Agent DaemonSet at /var/log/optimized.
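A minimal sketch of that wiring on the Fluent Bit side; the volume and container names are placeholders, and the Datadog Agent DaemonSet needs a matching mount of the same path:

```yaml
# Placeholder names; adapt to your actual Fluent Bit pod spec.
# A hostPath is used because the Datadog Agent runs in a separate pod,
# so both sides must mount the same node directory.
volumes:
  - name: optimized-logs
    hostPath:
      path: /var/log/optimized
      type: DirectoryOrCreate
containers:
  - name: fluent-bit
    volumeMounts:
      - name: optimized-logs
        mountPath: /var/log/optimized
```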

my-edge-optimizer.yaml (OTel Collector)
mode: "daemonset"

tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "optimize"
  runtimeName: "my-otel-optimizer"

config:
  exporters:
    datadog:
      api:
        key: "${env:DD_API_KEY}"
        site: datadoghq.com

  service:
    pipelines:
      logs:
        receivers: [filelog]
        processors: [memory_limiter, batch]
        exporters: [datadog]
Splunk HEC Output Examples

To send optimized events to Splunk instead of Elasticsearch, use Splunk HEC (HTTP Event Collector) output. Templates can be stored in a separate Splunk index.
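The HEC payload shape is the same for events and templates; only the metadata differs. A minimal sketch of the JSON envelope, using the standard HEC fields event, index, source, and sourcetype (the index names mirror the examples below; the event bodies are placeholders):

```python
import json

def hec_payload(event, index, source="log10x", sourcetype="_json"):
    # Standard Splunk HEC envelope: metadata fields plus the event body
    return json.dumps({
        "event": event,
        "index": index,
        "source": source,
        "sourcetype": sourcetype,
    })

# An optimized event goes to the main index...
print(hec_payload({"msg": "optimized line"}, index="main"))
# ...while a template goes to its own index so events can be expanded later
print(hec_payload({"template": "User {} logged in", "templateHash": "a1b2c3"},
                  index="tenx_templates"))
```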

my-edge-optimizer.yaml (Fluentd)
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "optimize"
  runtimeName: "my-fluentd-optimizer"

  outputConfigs:
    06_final_output.conf: |-
      <label @FINAL-OUTPUT>
        <match **>
          @type splunk_hec
          hec_host "splunk-hec.example.com"
          hec_port 8088
          hec_token "YOUR-HEC-TOKEN"
          index main
          source kubernetes
          sourcetype _json
        </match>
      </label>

    07_tenx_templates.conf: |-
      <label @TENX-TEMPLATE>
        <match **>
          @type splunk_hec
          hec_host "splunk-hec.example.com"
          hec_port 8088
          hec_token "YOUR-HEC-TOKEN"
          index tenx_templates
          source log10x
          sourcetype _json
        </match>
      </label>
my-edge-optimizer.yaml (Fluent Bit)
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "optimize"
  runtimeName: "my-fluentbit-optimizer"

  configFiles:
    tenx-templates-output.conf: |
      [OUTPUT]
          Name splunk
          Match tenx-template
          Host splunk-hec.example.com
          Port 8088
          Splunk_Token YOUR-HEC-TOKEN
          Splunk_Send_Raw On
          TLS On
          TLS.Verify Off

config:
  outputs: |
    [OUTPUT]
        Name splunk
        Match kube.*
        Host splunk-hec.example.com
        Port 8088
        Splunk_Token YOUR-HEC-TOKEN
        Splunk_Send_Raw On
        TLS On
        TLS.Verify Off
my-edge-optimizer.yaml (Filebeat)
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "optimize"
  runtimeName: "my-filebeat-optimizer"

daemonset:
  filebeatConfig:
    filebeat.yml: |
      filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

      output.http:
        hosts: ["https://splunk-hec.example.com:8088/services/collector"]
        headers:
          Authorization: "Splunk YOUR-HEC-TOKEN"
          Content-Type: "application/json"
my-edge-optimizer.yaml (OTel Collector)
mode: "daemonset"

tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "optimize"
  runtimeName: "my-otel-optimizer"

config:
  receivers:
    filelog:
      include: [/var/log/pods/*/*/*.log]
      operators:
        - type: container
          id: container-parser

  exporters:
    splunk_hec:
      endpoint: "https://splunk-hec.example.com:8088/services/collector"
      token: "YOUR-HEC-TOKEN"
      index: main
      tls:
        insecure_skip_verify: true
    splunk_hec/templates:
      endpoint: "https://splunk-hec.example.com:8088/services/collector"
      token: "YOUR-HEC-TOKEN"
      index: tenx_templates
      tls:
        insecure_skip_verify: true

  service:
    pipelines:
      logs:
        receivers: [filelog]
        processors: [memory_limiter, batch]
        exporters: [splunk_hec]
my-edge-optimizer.yaml (Logstash)
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "optimize"
  runtimeName: "my-logstash-optimizer"

logstashPipeline:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      if [template] {
        http {
          url => "https://splunk-hec.example.com:8088/services/collector"
          http_method => "post"
          headers => ["Authorization", "Splunk YOUR-HEC-TOKEN"]
          format => "json"
          mapping => {"index" => "tenx_templates"}
        }
      } else {
        http {
          url => "https://splunk-hec.example.com:8088/services/collector"
          http_method => "post"
          headers => ["Authorization", "Splunk YOUR-HEC-TOKEN"]
          format => "json"
        }
      }
    }