Deploy

Deploy the Edge Reporter app to Kubernetes via Helm.

The chart deploys your forwarder with the 10x engine as a sidecar process. Most forwarders run as a DaemonSet, while Logstash runs as a StatefulSet.

Step 1: Prerequisites
Log10x License: Your license key (get one)
Helm: Helm CLI installed
kubectl: Configured to access your cluster
GitHub Token: Personal access token for the config repo (create one)
Output Destination: Elasticsearch, Splunk, or other log backend configured
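A quick sanity check that the CLI prerequisites are in place (a sketch; exact version output varies by release):

```shell
# Confirm Helm and kubectl are installed
helm version --short
kubectl version --client

# Confirm kubectl can reach your cluster
kubectl cluster-info
```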
Step 2: Add Helm Repository
Fluentd:

helm repo add log10x-fluent https://log-10x.github.io/fluent-helm-charts
helm repo update
helm search repo fluentd

Fluent Bit:

helm repo add log10x-fluent https://log-10x.github.io/fluent-helm-charts
helm repo update
helm search repo fluent-bit

Filebeat:

helm repo add log10x-elastic https://log-10x.github.io/elastic-helm-charts
helm repo update
helm search repo filebeat-10x

OTel Collector:

helm repo add log10x-otel https://log-10x.github.io/opentelemetry-helm-charts
helm repo update
helm search repo opentelemetry-collector

Logstash:

helm repo add log10x-elastic https://log-10x.github.io/elastic-helm-charts
helm repo update
helm search repo logstash-10x

Splunk: For Kubernetes, use the Fluent Bit tab, since Splunk Connect for Kubernetes is Fluent Bit-based. For VM infrastructure, see the Splunk UF reporter guide.

Datadog: For Kubernetes, use the Fluent Bit or OTel Collector tab. For VM infrastructure, see the Datadog Agent reporter guide.

View all chart values:

helm show values log10x-fluent/fluentd
helm show values log10x-fluent/fluent-bit
helm show values log10x-elastic/filebeat-10x
helm show values log10x-otel/otel-collector-10x
helm show values log10x-elastic/logstash-10x


Step 3: Configure Application

Create a new file called my-edge-reporter.yaml in your working directory. This Helm values file will be used in all subsequent steps.

All 10x values are nested under the tenx block. Charts retain all original values from official Fluentd, Fluent Bit, and Filebeat charts.

my-edge-reporter.yaml
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "report"
  runtimeName: my-edge-reporter

This block is the same for every forwarder. For the OTel Collector chart, also set the deployment mode at the top level of the same file:

mode: "daemonset"


Step 4: GitOps (optional)

Log10x uses GitOps to manage configuration centrally.

Setup steps:

  1. Fork the Config Repository
  2. Create a branch for your configuration
  3. Edit the app configuration to match your metric output

Add GitHub credentials to your my-edge-reporter.yaml:

my-edge-reporter.yaml
tenx:
  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"
      branch: "my-edge-reporter-config"    # Optional

    symbols:
      enabled: false                        # Enable if using symbol library
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/SYMBOLS-REPO"
Step 5: Configure Secrets

Store sensitive credentials in Kubernetes Secrets. Only add secrets for metric outputs you've configured.

Create the secret in the namespace you will deploy the chart to (the logging namespace is created in Step 7; create it now if it doesn't exist yet):

kubectl create secret generic edge-reporter-credentials \
  --namespace logging \
  --from-literal=elasticsearch-username=elastic \
  --from-literal=elasticsearch-password=YOUR_ES_PASSWORD \
  --from-literal=datadog-api-key=YOUR_DATADOG_API_KEY
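To confirm the secret exists and carries the expected keys before wiring it into the chart (a sketch; adjust the namespace if yours differs):

```shell
# Lists the secret's keys and value sizes without printing the values
kubectl describe secret edge-reporter-credentials -n logging
```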


Add secret references to your my-edge-reporter.yaml:

Fluentd (my-edge-reporter.yaml):
env:
  # For Datadog metrics
  - name: DD_API_KEY
    valueFrom:
      secretKeyRef:
        name: edge-reporter-credentials
        key: datadog-api-key

  # For Elasticsearch metrics
  # - name: ELASTIC_API_KEY
  #   valueFrom:
  #     secretKeyRef:
  #       name: edge-reporter-credentials
  #       key: elastic-api-key

Fluent Bit (my-edge-reporter.yaml):
env:
  # For Datadog metrics
  - name: DD_API_KEY
    valueFrom:
      secretKeyRef:
        name: edge-reporter-credentials
        key: datadog-api-key

Filebeat (my-edge-reporter.yaml):
daemonset:
  extraEnvs:
    # For Elasticsearch output
    - name: ELASTICSEARCH_USERNAME
      valueFrom:
        secretKeyRef:
          name: edge-reporter-credentials
          key: elasticsearch-username
    - name: ELASTICSEARCH_PASSWORD
      valueFrom:
        secretKeyRef:
          name: edge-reporter-credentials
          key: elasticsearch-password

    # For Datadog metrics (optional)
    # - name: DD_API_KEY
    #   valueFrom:
    #     secretKeyRef:
    #       name: edge-reporter-credentials
    #       key: datadog-api-key

OTel Collector (my-edge-reporter.yaml):
extraEnvs:
  - name: DD_API_KEY
    valueFrom:
      secretKeyRef:
        name: edge-reporter-credentials
        key: datadog-api-key

Logstash (my-edge-reporter.yaml):
extraEnvs:
  - name: DD_API_KEY
    valueFrom:
      secretKeyRef:
        name: edge-reporter-credentials
        key: datadog-api-key


Step 6: Forwarder

Configure which events are sent to the 10x reporter and define output destinations.

Add forwarder configuration to your my-edge-reporter.yaml:

Configure your output destination. The chart automatically routes events through the reporter.

Fluentd (my-edge-reporter.yaml):
tenx:
  outputConfigs:
    # Final destination for events
    06_final_output.conf: |-
      <label @FINAL-OUTPUT>
        <match **>
          @type elasticsearch
          host "elasticsearch-master"
          port 9200
          user elastic
          password changeme
        </match>
      </label>
Fluent Bit (my-edge-reporter.yaml):
config:
  outputs: |
    [OUTPUT]
        Name es
        Match kube.*
        Host elasticsearch-master
        Logstash_Format On

Note: The Log10x chart automatically configures event routing through the reporter.

Filebeat (my-edge-reporter.yaml):
daemonset:
  filebeatConfig:
    filebeat.yml: |
      filebeat.inputs:
      - type: filestream
        id: tenx_internal
        paths:
          - /var/log/tenx/*.log
        fields:
          log_type: tenx_internal
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

      output.elasticsearch:
        hosts: '["https://elasticsearch-master:9200"]'
        username: '${ELASTICSEARCH_USERNAME}'
        password: '${ELASTICSEARCH_PASSWORD}'
        indices:
        - index: "tenx_internal"
          when.equals:
            fields.log_type: "tenx_internal"
        - index: "logs-%{+yyyy.MM.dd}"

The Log10x sidecar receives logs via Unix socket. Configure OTel Collector to send logs to the sidecar and receive processed logs back.

OTel Collector (my-edge-reporter.yaml):
mode: "daemonset"

config:
  receivers:
    filelog:
      include: [/var/log/pods/*/*/*.log]
      operators:
        - type: container
          id: container-parser

  exporters:
    elasticsearch:
      endpoints: ["https://elasticsearch-master:9200"]
      logs_index: logs

  service:
    pipelines:
      logs:
        receivers: [filelog]
        processors: [memory_limiter, batch]
        exporters: [elasticsearch]

Note: The Log10x chart automatically configures the sidecar communication. The above shows your standard OTel Collector config for log collection and output.

Configure Logstash pipeline for log collection and output.

Logstash (my-edge-reporter.yaml):
logstashPipeline:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      elasticsearch {
        hosts => ["elasticsearch-master:9200"]
        index => "logs-%{+YYYY.MM.dd}"
      }
    }

Note: The Log10x chart automatically configures sidecar communication. Configure your standard Logstash input/output above.


Step 7: Deploy

Create your namespace (if needed), then install the chart that matches your forwarder:

kubectl create namespace logging
Fluentd:
helm install my-edge-reporter log10x-fluent/fluentd \
  -f my-edge-reporter.yaml \
  --namespace logging

Fluent Bit:
helm install my-edge-reporter log10x-fluent/fluent-bit \
  -f my-edge-reporter.yaml \
  --namespace logging

Filebeat:
helm install my-edge-reporter log10x-elastic/filebeat-10x \
  -f my-edge-reporter.yaml \
  --namespace logging

OTel Collector:
helm install my-edge-reporter log10x-otel/otel-collector-10x \
  -f my-edge-reporter.yaml \
  --namespace logging

Logstash:
helm install my-edge-reporter log10x-elastic/logstash-10x \
  -f my-edge-reporter.yaml \
  --namespace logging
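After editing my-edge-reporter.yaml later, the same values file can be re-applied with helm upgrade (shown here for the Fluent Bit chart; substitute the chart you installed):

```shell
# Re-apply updated values to the existing release
helm upgrade my-edge-reporter log10x-fluent/fluent-bit \
  -f my-edge-reporter.yaml \
  --namespace logging

# Confirm the release deployed successfully
helm status my-edge-reporter --namespace logging
```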


Step 8: Verify

Check pods are running:

kubectl get pods -l app.kubernetes.io/instance=my-edge-reporter -n logging

Check pod logs for errors:

kubectl logs -l app.kubernetes.io/instance=my-edge-reporter -n logging --tail=100

Verify that no errors appear in the logs.

View results in the dashboard:

Once running, view your cost analytics in the Edge Reporter Dashboard.
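The log check above can be narrowed to error-level messages with a grep (a sketch; the "error" string match is a heuristic, not an exhaustive health check):

```shell
# Scan recent pod logs for error messages, case-insensitively
kubectl logs -l app.kubernetes.io/instance=my-edge-reporter \
  -n logging --tail=500 | grep -i "error" || echo "No errors found"
```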

Quickstart Full Samples
Fluentd (my-edge-reporter.yaml):
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "report"
  runtimeName: "my-fluentd-reporter"

  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"

  outputConfigs:
    06_final_output.conf: |-
      <label @FINAL-OUTPUT>
        <match **>
          @type elasticsearch
          host "elasticsearch-master"
          port 9200
        </match>
      </label>
Fluent Bit (my-edge-reporter.yaml):
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "report"
  runtimeName: "my-fluent-bit-reporter"

config:
  outputs: |
    [OUTPUT]
        Name es
        Match kube.*
        Host elasticsearch-master
        Logstash_Format On
Filebeat (my-edge-reporter.yaml):
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "report"
  runtimeName: "my-filebeat-reporter"

  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"

daemonset:
  filebeatConfig:
    filebeat.yml: |
      filebeat.inputs:
      - type: filestream
        id: tenx_internal
        paths:
          - /var/log/tenx/*.log
        fields:
          log_type: tenx_internal
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

      output.elasticsearch:
        hosts: '["https://elasticsearch-master:9200"]'
        indices:
        - index: "tenx_internal"
          when.equals:
            fields.log_type: "tenx_internal"
        - index: "logs-%{+yyyy.MM.dd}"
OTel Collector (my-edge-reporter.yaml):
mode: "daemonset"

tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "report"
  runtimeName: "my-otel-reporter"

  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"

config:
  exporters:
    elasticsearch:
      endpoints: ["https://elasticsearch-master:9200"]
      logs_index: logs

  service:
    pipelines:
      logs:
        receivers: [filelog]
        processors: [memory_limiter, batch]
        exporters: [elasticsearch]
Logstash (my-edge-reporter.yaml):
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "report"
  runtimeName: "my-logstash-reporter"

  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"

# Logstash pipeline for final destination
logstashPipeline:
  output.conf: |
    output {
      elasticsearch {
        hosts => ["elasticsearch-master:9200"]
        index => "logs"
      }
    }


Splunk HEC Output Examples

To send events to Splunk instead of Elasticsearch, use Splunk HEC (HTTP Event Collector) output.

Fluent Bit (my-edge-reporter.yaml):
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "report"
  runtimeName: "my-fluentbit-reporter"

  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"

config:
  outputs: |
    [OUTPUT]
        Name        splunk
        Match       *
        Host        splunk-hec.example.com
        Port        8088
        TLS         On
        Splunk_Token YOUR-SPLUNK-HEC-TOKEN
Fluentd (my-edge-reporter.yaml):
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "report"
  runtimeName: "my-fluentd-reporter"

  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"

fileConfigs:
  output.conf: |
    <match **>
      @type splunk_hec
      hec_host splunk-hec.example.com
      hec_port 8088
      hec_token YOUR-SPLUNK-HEC-TOKEN
      use_ssl true
    </match>
Filebeat (my-edge-reporter.yaml):
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "report"
  runtimeName: "my-filebeat-reporter"

  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"

daemonset:
  filebeatConfig:
    filebeat.yml: |
      filebeat.autodiscover:
        providers:
          - type: kubernetes
            hints.enabled: true
      output.elasticsearch:
        enabled: false
      output.logstash:
        enabled: false
      # Filebeat doesn't have native Splunk HEC output
      # Use Logstash as intermediary or output to file/kafka

Note

Filebeat doesn't have native Splunk HEC support. Consider using Logstash as an intermediary, or use the Kafka output with Splunk Connect for Kafka.

OTel Collector (my-edge-reporter.yaml):
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "report"
  runtimeName: "my-otel-reporter"

  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"

config:
  exporters:
    splunk_hec:
      token: "YOUR-SPLUNK-HEC-TOKEN"
      endpoint: "https://splunk-hec.example.com:8088/services/collector"
      source: "otel"
      sourcetype: "otel"

  service:
    pipelines:
      logs:
        receivers: [filelog]
        processors: [memory_limiter, batch]
        exporters: [splunk_hec]
Logstash (my-edge-reporter.yaml):
tenx:
  enabled: true
  apiKey: "YOUR-LICENSE-KEY-HERE"
  kind: "report"
  runtimeName: "my-logstash-reporter"

  github:
    config:
      enabled: true
      token: "YOUR-GITHUB-TOKEN"
      repo: "YOUR-ACCOUNT/REPO-NAME"

logstashPipeline:
  output.conf: |
    output {
      http {
        url => "https://splunk-hec.example.com:8088/services/collector"
        http_method => "post"
        format => "json"
        headers => {
          "Authorization" => "Splunk YOUR-SPLUNK-HEC-TOKEN"
        }
      }
    }
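Before pointing a forwarder at Splunk, the HEC endpoint can be smoke-tested directly with curl (the hostname and token are the same placeholders used above; -k skips TLS verification for self-signed certificates):

```shell
# Send a single test event to Splunk HEC; a healthy endpoint
# responds with {"text":"Success","code":0}
curl -k https://splunk-hec.example.com:8088/services/collector/event \
  -H "Authorization: Splunk YOUR-SPLUNK-HEC-TOKEN" \
  -d '{"event": "edge reporter HEC smoke test"}'
```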