Deploy

Deploy the Edge Regulator app to Kubernetes via Helm.

The chart deploys your forwarder with the 10x engine as a sidecar container. Most forwarders run as a DaemonSet; Logstash runs as a StatefulSet.
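As a rough sketch of what the chart renders (the names and images below are illustrative placeholders, not the chart's actual values), each pod pairs the forwarder container with the 10x engine:

```yaml
# Illustrative only: the real container names and images come from the chart.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-edge-regulator
spec:
  selector:
    matchLabels:
      app: my-edge-regulator
  template:
    metadata:
      labels:
        app: my-edge-regulator
    spec:
      containers:
        - name: forwarder      # e.g. Fluent Bit, tailing node logs
          image: example/forwarder:latest
        - name: tenx-engine    # 10x sidecar that regulates events inline
          image: example/tenx-engine:latest
```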

Step 1: Prerequisites
Requirement         Description
Log10x License      Your license key (get one)
Helm                Helm CLI installed
kubectl             Configured to access your cluster
GitHub Token        Personal access token for the config repo (create one)
Output Destination  Elasticsearch, Splunk, or another log backend, already configured

Step 2: Add Helm Repository
# Fluentd
helm repo add log10x-fluent https://log-10x.github.io/fluent-helm-charts
helm repo update
helm search repo fluentd

# Fluent Bit
helm repo add log10x-fluent https://log-10x.github.io/fluent-helm-charts
helm repo update
helm search repo fluent-bit

# Filebeat
helm repo add log10x-elastic https://log-10x.github.io/elastic-helm-charts
helm repo update
helm search repo filebeat-10x

# OTel Collector
helm repo add log10x-otel https://log-10x.github.io/opentelemetry-helm-charts
helm repo update
helm search repo opentelemetry-collector

# Logstash
helm repo add log10x-elastic https://log-10x.github.io/elastic-helm-charts
helm repo update
helm search repo logstash-10x

Splunk users: for Kubernetes, use the Fluent Bit commands, since Splunk Connect for Kubernetes is built on Fluent Bit. For VM infrastructure, see the Splunk UF regulator guide.

Datadog users: for Kubernetes, use the Fluent Bit or OTel Collector commands. For VM infrastructure, see the Datadog Agent regulator guide.

View the full set of chart values for your forwarder:

helm show values log10x-fluent/fluentd            # Fluentd
helm show values log10x-fluent/fluent-bit         # Fluent Bit
helm show values log10x-elastic/filebeat-10x      # Filebeat
helm show values log10x-otel/otel-collector-10x   # OTel Collector
helm show values log10x-elastic/logstash-10x      # Logstash

Step 3: Configure Application

Create a new file called my-edge-regulator.yaml in your working directory. This Helm values file will be used in all subsequent steps.

All 10x values are nested under the tenx block. Each chart retains all of the values from its official upstream chart (Fluentd, Fluent Bit, Filebeat, OTel Collector, or Logstash).

my-edge-regulator.yaml
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "regulate"
  runtimeName: my-edge-regulator

For the OTel Collector chart, also set the deployment mode at the top level:

my-edge-regulator.yaml (OTel Collector)
mode: "daemonset"

tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "regulate"
  runtimeName: my-edge-regulator

Step 4: GitOps (optional)

Log10x uses GitOps to manage configuration centrally.

Setup steps:

  1. Fork the Config Repository
  2. Create a branch for your configuration
  3. Edit the app configuration to match your metric output and enrichment options

Add GitHub credentials to your my-edge-regulator.yaml:

my-edge-regulator.yaml
tenx:
  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"
      branch: "my-edge-regulator-config"    # Optional

    symbols:
      enabled: false                         # Enable if using symbol library
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/SYMBOLS-REPO"

Step 5: Configure Secrets

Store sensitive credentials in Kubernetes Secrets. Only add secrets for metric outputs you've configured.

Create the secret:

kubectl create secret generic edge-regulator-credentials \
  --from-literal=elasticsearch-username=elastic \
  --from-literal=elasticsearch-password=YOUR_ES_PASSWORD \
  --from-literal=datadog-api-key=YOUR_DATADOG_API_KEY

Add secret references to your my-edge-regulator.yaml:

my-edge-regulator.yaml (Fluentd)
env:
  # For Datadog metrics
  - name: DD_API_KEY
    valueFrom:
      secretKeyRef:
        name: edge-regulator-credentials
        key: datadog-api-key

  # For Elasticsearch metrics
  # - name: ELASTIC_API_KEY
  #   valueFrom:
  #     secretKeyRef:
  #       name: edge-regulator-credentials
  #       key: elastic-api-key

  # For AWS CloudWatch metrics
  # - name: AWS_ACCESS_KEY_ID
  #   valueFrom:
  #     secretKeyRef:
  #       name: edge-regulator-credentials
  #       key: aws-access-key-id

  # For SignalFx metrics
  # - name: SIGNALFX_ACCESS_TOKEN
  #   valueFrom:
  #     secretKeyRef:
  #       name: edge-regulator-credentials
  #       key: signalfx-access-token

my-edge-regulator.yaml (Fluent Bit)
env:
  # For Datadog metrics
  - name: DD_API_KEY
    valueFrom:
      secretKeyRef:
        name: edge-regulator-credentials
        key: datadog-api-key

my-edge-regulator.yaml (Filebeat)
daemonset:
  extraEnvs:
    # For Elasticsearch output
    - name: ELASTICSEARCH_USERNAME
      valueFrom:
        secretKeyRef:
          name: edge-regulator-credentials
          key: elasticsearch-username
    - name: ELASTICSEARCH_PASSWORD
      valueFrom:
        secretKeyRef:
          name: edge-regulator-credentials
          key: elasticsearch-password

    # For Datadog metrics (optional)
    # - name: DD_API_KEY
    #   valueFrom:
    #     secretKeyRef:
    #       name: edge-regulator-credentials
    #       key: datadog-api-key

my-edge-regulator.yaml (OTel Collector)
extraEnvs:
  # For Datadog metrics
  - name: DD_API_KEY
    valueFrom:
      secretKeyRef:
        name: edge-regulator-credentials
        key: datadog-api-key

my-edge-regulator.yaml (Logstash)
extraEnvs:
  - name: DD_API_KEY
    valueFrom:
      secretKeyRef:
        name: edge-regulator-credentials
        key: datadog-api-key

Step 6: Configure Forwarder Output

Configure your forwarder's log collection and output destination. The chart automatically routes events through the Log10x regulator, which filters them before they reach the final destination.

Fluentd:

tenx:
  outputConfigs:
    # Final destination for filtered events
    06_final_output.conf: |-
      <label @FINAL-OUTPUT>
        <match **>
          @type elasticsearch
          host "elasticsearch-master"
          port 9200
        </match>
      </label>

Fluent Bit:

config:
  outputs: |
    [OUTPUT]
        Name es
        Match kube.*
        Host elasticsearch-master
        Logstash_Format On

Note: The Log10x chart automatically configures event routing through the regulator.

Filebeat:

daemonset:
  filebeatConfig:
    filebeat.yml: |
      filebeat.inputs:
      - type: filestream
        id: tenx_internal
        paths:
          - /var/log/tenx/*.log
        fields:
          log_type: tenx_internal
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

      output.elasticsearch:
        hosts: '["https://elasticsearch-master:9200"]'
        username: '${ELASTICSEARCH_USERNAME}'
        password: '${ELASTICSEARCH_PASSWORD}'
        indices:
        - index: "tenx_internal"
          when.equals:
            fields.log_type: "tenx_internal"
        - index: "logs-filtered-%{+yyyy.MM.dd}"

OTel Collector:

mode: "daemonset"

config:
  receivers:
    filelog:
      include: [/var/log/pods/*/*/*.log]
      operators:
        - type: container
          id: container-parser

  exporters:
    elasticsearch:
      endpoints: ["https://elasticsearch-master:9200"]
      logs_index: logs-filtered

  service:
    pipelines:
      logs:
        receivers: [filelog]
        processors: [memory_limiter, batch]
        exporters: [elasticsearch]

Note: The Log10x chart automatically configures sidecar communication for filtering.

Logstash:

logstashPipeline:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      elasticsearch {
        hosts => ["elasticsearch-master:9200"]
        index => "logs-filtered-%{+YYYY.MM.dd}"
      }
    }

Note: The Log10x chart automatically configures sidecar communication for filtering.

Step 7: Deploy

Create your namespace (if needed), then run the install command for your chart:

kubectl create namespace logging

# Fluentd
helm install my-edge-regulator log10x-fluent/fluentd \
  -f my-edge-regulator.yaml \
  --namespace logging

# Fluent Bit
helm install my-edge-regulator log10x-fluent/fluent-bit \
  -f my-edge-regulator.yaml \
  --namespace logging

# Filebeat
helm install my-edge-regulator log10x-elastic/filebeat-10x \
  -f my-edge-regulator.yaml \
  --namespace logging

# OTel Collector
helm install my-edge-regulator log10x-otel/otel-collector-10x \
  -f my-edge-regulator.yaml \
  --namespace logging

# Logstash
helm install my-edge-regulator log10x-elastic/logstash-10x \
  -f my-edge-regulator.yaml \
  --namespace logging
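To apply later changes to my-edge-regulator.yaml, upgrade the release with the same chart you installed (Fluent Bit shown; substitute your chart):

```shell
# Roll out updated values to the running release
helm upgrade my-edge-regulator log10x-fluent/fluent-bit \
  -f my-edge-regulator.yaml \
  --namespace logging
```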

Step 8: Verify

Check pods are running:

kubectl get pods -l app.kubernetes.io/instance=my-edge-regulator -n logging

Check pod logs for errors:

kubectl logs -l app.kubernetes.io/instance=my-edge-regulator -n logging --tail=100

Confirm that no errors appear in the output.

View results in the dashboard:

Once running, view your cost analytics in the Edge Regulator Dashboard.

Quickstart Full Sample
my-edge-regulator.yaml (Fluentd)
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "regulate"
  runtimeName: "my-edge-regulator-fluentd"

  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"

my-edge-regulator.yaml (Fluent Bit)
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "regulate"
  runtimeName: "my-edge-regulator-fluent-bit"

  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"

my-edge-regulator.yaml (Filebeat)
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "regulate"
  runtimeName: "my-edge-regulator-filebeat"

  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"

daemonset:
  filebeatConfig:
    filebeat.yml: |
      filebeat.inputs:
      - type: filestream
        id: tenx_internal
        paths:
          - /var/log/tenx/*.log
        fields:
          log_type: tenx_internal
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

      output.elasticsearch:
        hosts: '["https://elasticsearch-master:9200"]'
        indices:
        - index: "tenx_internal"
          when.equals:
            fields.log_type: "tenx_internal"
        - index: "logs-filtered-%{+yyyy.MM.dd}"

my-edge-regulator.yaml (OTel Collector)
mode: "daemonset"

tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "regulate"
  runtimeName: "my-otel-regulator"

  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"

config:
  exporters:
    elasticsearch:
      endpoints: ["https://elasticsearch-master:9200"]
      logs_index: logs-filtered

  service:
    pipelines:
      logs:
        receivers: [filelog]
        processors: [memory_limiter, batch]
        exporters: [elasticsearch]

my-edge-regulator.yaml (Logstash)
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "regulate"
  runtimeName: "my-logstash-regulator"

  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"

# Logstash pipeline for final destination
logstashPipeline:
  output.conf: |
    output {
      elasticsearch {
        hosts => ["elasticsearch-master:9200"]
        index => "logs-filtered"
      }
    }

Datadog Output Examples

To send filtered events to Datadog, use the file relay pattern: Fluent Bit writes regulated events to a folder that the Datadog Agent monitors. This keeps the Datadog Agent as the forwarder (handling buffering, retries, metadata enrichment) while 10x regulates events inline.

my-edge-regulator.yaml (Fluent Bit)
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "regulate"
  runtimeName: "my-fluentbit-regulator"

config:
  outputs: |
    [OUTPUT]
        Name         file
        Match        *
        Path         /var/log/regulated
        Format       plain

Then configure the Datadog Agent to monitor the regulated output folder:

datadog-agent conf.d/regulated.d/conf.yaml
logs:
  - type: file
    path: /var/log/regulated/*.log
    service: myapp
    source: myapp

On EKS, share the regulated output directory between the Fluent Bit + 10x pods and the Datadog Agent DaemonSet, for example via a hostPath volume mounted at /var/log/regulated in both (an emptyDir volume cannot be shared across separate pods).
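One way to wire up the shared directory (a sketch only; because the forwarder pod and the Datadog Agent run as separate DaemonSets, a hostPath volume is used here, and the volume name and path are illustrative):

```yaml
# Added to BOTH the Fluent Bit + 10x pod spec and the Datadog Agent pod spec.
volumes:
  - name: regulated-logs
    hostPath:
      path: /var/log/regulated
      type: DirectoryOrCreate

# And in each container that writes or tails the regulated files:
volumeMounts:
  - name: regulated-logs
    mountPath: /var/log/regulated
```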

my-edge-regulator.yaml (OTel Collector)
mode: "daemonset"

tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "regulate"
  runtimeName: "my-otel-regulator"

config:
  exporters:
    datadog:
      api:
        key: "${env:DD_API_KEY}"
        site: datadoghq.com

  service:
    pipelines:
      logs:
        receivers: [filelog]
        processors: [memory_limiter, batch]
        exporters: [datadog]

Splunk HEC Output Examples

To send filtered events to Splunk instead of Elasticsearch, use Splunk HEC output.

my-edge-regulator.yaml (Fluentd)
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "regulate"
  runtimeName: "my-fluentd-regulator"

  outputConfigs:
    06_final_output.conf: |-
      <label @FINAL-OUTPUT>
        <match **>
          @type splunk_hec
          hec_host "splunk-hec.example.com"
          hec_port 8088
          hec_token "YOUR-HEC-TOKEN"
          index main
          source kubernetes
        </match>
      </label>

my-edge-regulator.yaml (Fluent Bit)
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "regulate"
  runtimeName: "my-fluentbit-regulator"

config:
  outputs: |
    [OUTPUT]
        Name splunk
        Match kube.*
        Host splunk-hec.example.com
        Port 8088
        Splunk_Token YOUR-HEC-TOKEN
        Splunk_Send_Raw On
        TLS On
        TLS.Verify Off

my-edge-regulator.yaml (OTel Collector)
mode: "daemonset"

tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "regulate"
  runtimeName: "my-otel-regulator"

config:
  exporters:
    splunk_hec:
      endpoint: "https://splunk-hec.example.com:8088/services/collector"
      token: "YOUR-HEC-TOKEN"
      index: main
      tls:
        insecure_skip_verify: true

  service:
    pipelines:
      logs:
        receivers: [filelog]
        processors: [memory_limiter, batch]
        exporters: [splunk_hec]

my-edge-regulator.yaml (Logstash)
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "regulate"
  runtimeName: "my-logstash-regulator"

logstashPipeline:
  output.conf: |
    output {
      http {
        url => "https://splunk-hec.example.com:8088/services/collector"
        http_method => "post"
        headers => ["Authorization", "Splunk YOUR-HEC-TOKEN"]
        format => "json"
      }
    }