
Run

Generate filter tables for Edge Regulator instances by analyzing aggregated event statistics from Prometheus.

Data comes from Edge Reporter instances that publish event frequency metrics, enabling centralized rate-based filtering across your infrastructure.

Setup Guide

Follow the steps below. Steps that require customization link to the relevant Configuration section where you can edit on github.dev or locally.

Step 1: Install

Install the Edge binary flavor:

Step 2: Set Environment Variables

Set these environment variables before running. See path configuration for details.

Variable       Description
TENX_CONFIG    Path to your configuration directory
TENX_API_KEY   Your Log10x API key (get one)
GH_TOKEN       GitHub personal access token (create one) (if using GitHub output)

export TENX_CONFIG=/path/to/your/config
export TENX_API_KEY=your-api-key
export GH_TOKEN=your-github-token

See best practices for managing secrets in production.

Step 3: Enable Prometheus Input
  1. In the app config, uncomment the Prometheus include entry
  2. In the policy input section below, configure connection details and query parameters
Step 4: Choose Filter Table Output

GitHub Output: push filter tables to a GitHub repository for centralized distribution. Edge Regulators pull these tables automatically.

Benefits: Version control, automatic updates, easy rollback.

File Output: write filter tables to local files for manual distribution.

Benefits: No external dependencies, works in air-gapped environments.

Distribution: Shared storage (NFS, S3), config management (Ansible, Puppet), or manual copying.
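
As one illustration of the shared-storage option, a generated table can simply be copied onto a mounted share. The paths below are hypothetical stand-ins, not defaults of the app (only data/sample/policy/policy.csv matches the default file-output path shown in the configuration later on):

```shell
# Hypothetical distribution sketch: publish a generated filter table to a shared mount.
# Both paths are placeholders; substitute your configured output path and share.
src=data/sample/policy/policy.csv
dst=/tmp/shared-mount/policy.csv                     # stand-in for an NFS mount point
mkdir -p "$(dirname "$src")" "$(dirname "$dst")"
printf 'symbolMessage,normalized_rate\n' > "$src"    # placeholder table for the sketch
cp "$src" "$dst"
# For S3, the equivalent step might be: aws s3 cp "$src" s3://my-bucket/policy.csv
```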

  1. In the app config, uncomment your chosen output include entry
  2. In the event outputs section below, configure your output settings
Step 5: Run

Run the binary directly. Best for: quick local testing and development.

tenx @apps/edge/policy

Run in Docker with a mounted local config. Best for: isolated testing with local configuration.

docker run --rm \
  -v $TENX_CONFIG:/etc/tenx/config/ \
  -e TENX_CONFIG=/etc/tenx/config/ \
  -e TENX_API_KEY=${TENX_API_KEY} \
  -e GH_TOKEN=${GH_TOKEN} \
  ghcr.io/log-10x/pipeline-10x:latest \
  @apps/edge/policy

Run in Docker with inline GitHub config. Best for: CI/CD pipelines with version-controlled configuration.

docker run --rm \
  -e TENX_API_KEY=${TENX_API_KEY} \
  -e GH_TOKEN=${GH_TOKEN} \
  ghcr.io/log-10x/pipeline-10x:latest \
  '@github={"token": "${GH_TOKEN}", "repo": "my-user/my-repo"}' \
  @apps/edge/policy
Step 6: Output

With File Output enabled, the app generates filter tables containing event patterns and frequency rates:

Sample Filter Table (first 25 lines)
symbolMessage,normalized_rate
products_jpg_HTTP_frontend_proxy_Mozilla_X11,3381.58
products_jpg_HTTP_frontend_proxy_cart_Mozilla,3356.73
service_name_recommendation_trace_sampled,1774.70
opentelemetry_demo_logo_png_HTTP_frontend,1056.09
frontend_proxy_Mozilla_X11_Linux_x86_AppleWebKit,445.86
time_level_msg_Reloading_Product_Catalog,1203.03
time_level_msg_Loaded_products,1148.40
Accounting_Consumer,983.08
Sending_Quote,889.58
orders_u003e_u003e_UTC_u003e_severity_timestamp,926.15
sending_postProcessor_severity_timestamp,691.03
Received_quote,1012.73
POST_send_order_confirmation_HTTP,965.85
message_offset_duration_severity_timestamp,730.05
nanos_creditCard_msg_Charge_request_received,693.80
through_transaction_id_severity_timestamp,692.13
service_name_payment_transactionId_visa_amount,730.43
Tracking_ID_Created,752.38
user_id_user_currency_USD_severity_timestamp,544.75
United_States_zipCode_items_item_productId,451.35
migrator_t_level_msg_Migration_successfully,428.33
cache_size_pod_cache_size_pod_cache_api_updates,273.85
Order_confirmation_email_sent_example_com,293.38
email_sent_example_com_severity_timestamp,113.58
email_sent_reed_example_com_severity_timestamp,117.88

Use this table to review discovered patterns, validate frequency calculations, and understand which events will be filtered by downstream Edge Regulator instances.
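
The table can be inspected with standard tools. For example, the following sketch lists the highest-rate patterns; 'policy.csv' here is a stand-in file with the CSV layout shown above, not the app's actual output:

```shell
# Sketch: list the top patterns by normalized_rate from a filter table.
# 'policy.csv' is a stand-in with the layout shown above.
cat > policy.csv <<'EOF'
symbolMessage,normalized_rate
products_jpg_HTTP_frontend_proxy_Mozilla_X11,3381.58
service_name_recommendation_trace_sampled,1774.70
Accounting_Consumer,983.08
EOF

# Skip the header, numeric-sort descending on the rate column, keep the top 2.
tail -n +2 policy.csv | sort -t, -k2 -nr | head -n 2
```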

Step 7: Verify

Verify no errors appear in the log file.

Check output files:

Verify policy files were generated in your configured output paths.
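
A minimal verification pass might look like the following sketch. The path is the default from the Policy Input configuration; the stand-in file exists only to make the sketch self-contained:

```shell
# Sketch: sanity-check a generated filter table.
f=data/sample/policy/policy.csv                               # default file-output path
mkdir -p "$(dirname "$f")"
printf 'symbolMessage,normalized_rate\nSending_Quote,889.58\n' > "$f"  # stand-in

test -s "$f"                                                  # file is non-empty
head -n 1 "$f" | grep -q '^symbolMessage,normalized_rate$'    # header is present
awk -F, 'NF != 2 { bad = 1 } END { exit bad }' "$f"           # two columns per row
```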

Configuration

To configure the Regulator Policy app, edit these settings:

Main Config


Below is the default configuration from: policy/config.yaml.


#
# 🔟❎ dev app main config

# The dev app locally tests and validates structured event processing.

# To learn more see https://doc.log10x.com/apps/dev

# ============================ Bootstrap Runtime ==============================

tenx: run

runtimeName: $=TenXEnv.get("TENX_RUNTIME_NAME", "myRegulatorPolicy")

# ============================ Load App Modules ===============================

# Uncomment and edit selected config.yaml files (e.g., run/input/file/config.yaml)

include:

# ------------------------------ App settings ---------------------------------

  # Load general app settings:
  - edge/policy

# ------------------------------ Open Inputs ----------------------------------

  # read log/trace events from inputs to transform into well-defined TenXObjects:

  - run/regulate/policy       # https://doc.log10x.com/run/regulate/policy

Policy Input

Activate the policy regulator lookup input to generate a regulator event-rate policy lookup file.

Below is the default configuration from: policy/config.yaml.


# 🔟❎ 'run' policy generator configuration

# Policy generators periodically query a Prometheus instance to generate a filter lookup file on GitHub
# used by Edge regulators to filter 'noisy' telemetry from shipping to outputs (e.g., Splunk, Elastic).

# To learn more see https://doc.log10x.com/run/regulate/policy

# Set the 10x pipeline to 'run'
tenx: run

# =============================== Dependencies ================================

include:
  - run/modules/regulate/policy

# =========================== Prometheus Policy Input =========================

policy:

  - prometheus:

      # 'endpoint' defines the address of the Prometheus HTTP API.
      endpoint: https://prometheus.log10x.com

      # 'series' defines the name of the series to query for event volume in bytes
      series: all_events_summaryBytes_total

      # 'labels' specifies the list of metric labels to group by in the Prometheus 'avg_over_time' query
      labels:
        - message_pattern

      # 'start' sets the query's beginning timestamp for Prometheus data retrieval
      start: $=now() / 1000

      # 'rangeInterval' defines the time range interval for the Prometheus 'avg_over_time' query
      rangeInterval: 24h

      # 'stepDuration' defines the resolution step duration for the Prometheus 'avg_over_time' query
      stepDuration: 5m

      # 'topEvents' specifies the number of highest-rate event patterns to return using Prometheus topk operator
      topEvents: 50

    ingestionCostPerGB: 1.5

    output:

      path: $=path("data/sample/policy") + "/policy.csv"

      github:
        # 'repo' specifies the GitHub repo to push output lookup file to
        repo: ""

        # 'branch' specifies the GitHub repo branch to push the file to, defaults to main
        branch: ""
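
Taken together, these settings describe a topk-over-average range query. The exact PromQL the policy generator issues is internal to the app; the query below is only an assumption pieced together from the 'series', 'labels', 'rangeInterval', 'stepDuration', and 'topEvents' settings above:

```shell
# Assumed shape of the query implied by the settings above (illustrative only).
series=all_events_summaryBytes_total
label=message_pattern
query="topk(50, avg by ($label) (avg_over_time(${series}[24h:5m])))"
echo "$query"
# It might then be issued against the configured endpoint, e.g.:
# curl -G "$endpoint/api/v1/query" --data-urlencode "query=$query"
```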

Advanced Settings

To configure advanced options (optional) for the Regulator Policy app, edit these settings:

Bootstrap


Configure the Pipeline Bootstrapper to authenticate the log10x account and launch a target pipeline.

Below is the default configuration from: bootstrap/config.yaml.


# 🔟❎ run bootstrap configuration

# This config file specifies bootstrap options for the run pipeline.
# To learn more see https://doc.log10x.com/run/bootstrap

# To learn more see https://doc.log10x.com/config/

tenx: run

# =============================== Launch Settings ==============================

# 'apiKey' specifies an API key used for authenticating against the 10x service
apiKey: $=TenXEnv.get("TENX_API_KEY", "NO-API-KEY")

# 'includePaths' specifies folders on disk for resolving relative config file/folder references in addition to the working folder
includePaths: []

# 'quiet' disables printing version information to the console.
# quiet: true

# 'jarFiles' specifies .jar files to dynamically load for use by compile, input and output API extensions.
jarFiles: []

# 'metricEndpoint' specifies the Prometheus endpoint to report usage/health metrics (enterprise version only).
# metricEndpoint: https://prometheus.log10x.com/api/v1/write

# 'disabledArgs' specifies a list of launch arguments that are disallowed from either command line or user config files.
disabledArgs: []

# 'debugEnvVars' list environment variables to debug
debugEnvVars: []
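
The '$=TenXEnv.get(...)' expressions in these configs read an environment variable with a fallback; in shell terms, the 'apiKey' default above behaves like:

```shell
# Shell analogue of TenXEnv.get("TENX_API_KEY", "NO-API-KEY"):
unset TENX_API_KEY
echo "${TENX_API_KEY:-NO-API-KEY}"    # falls back when the variable is unset
```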

Symbols

Below is the default configuration from: symbol/config.yaml.

Edit Online

Edit config.yaml Locally

# 🔟❎ 'run' symbol file configuration

# Loads symbol library files to transform events into well-defined TenXObjects.
# To learn more see https://doc.log10x.com/run/symbol

# Set the 10x pipeline to 'run'
tenx: run

# ============================ Symbol Options =================================

symbol:

  # 'paths' specifies the file/folder locations to scan for symbol library files.
  #  To learn more see https://doc.log10x.com/run/symbol/#symbolpaths
  paths:
    - $=path("data/shared/symbols", false)
    - $=path("<TENX_SYMBOLS_PATH>",  false)

  literals: []

Template

Below is the default configuration from: template/config.yaml.


# 🔟❎ 'run' template file configuration

# Load TenXTemplates .json files that define the structure/schema of TenXObjects.
# To learn more see https://doc.log10x.com/run/template/

# Set the 10x pipeline to 'run'
tenx: run

# =============================== Template ===================================

template:
  # 'files' specifies GLOB pattern for finding JSON-encoded TenXTemplates.
  files:
    - $=path("data/templates/*.json")
    - $=path("data/sample/output/*.json")

  # 'cacheSize' controls the maximum total byte size of templates held
  #  in the in-memory cache vs. on disk. Set to 0 to disable pruning.
  cacheSize: $=parseBytes("10MB")

# =============================== Variable ===================================

var:
  # 'placeholder' specifies a character to use when encoding a TenXTemplate
  #  to signify the location of a runtime variable value.
  placeholder: "$"

  # 'maxRecurIndexes' controls the maximum number of variable values to reuse.
  maxRecurIndexes: 10

# =============================== Timestamp ==================================

timestamp:
  # 'prefix' specifies a prefix for a TenXTemplate's timestamp tokens.
  prefix: (

  # 'postfix' specifies a postfix for a TenXTemplate's timestamp tokens.
  postfix: )

Transform

Configure the Transform to transform log and trace events into well-defined TenXObjects.

timestamp

Configure the Timestamp parser to extract alphanumeric and epoch timestamp values from input events.

Below is the default configuration from: timestamp/config.yaml.


# 🔟❎ 'run' timestamp parser configuration

# Identify unix/alphanumeric timestamp structures within TenXTemplates.
# https://doc.log10x.com/run/transform/timestamp/

# Set the 10x pipeline to 'run'
tenx: run

# ============================= Timestamp Options =============================

timestamp:

  # 'maxPerObject' controls the max number of timestamps to add into a TenXObject's
  # 'timestamp' array. Set to 0 for unlimited.
  maxPerObject: 0

  # 'searchDirection' controls the direction(s) from which timestamps are searched for
  #  within the object's 'text' field. Possible values:

  #  - fromStart: search for 'maxPerObject' from the start of 'text',  
  #    limiting search to 'searchLengthLimitFromStart' characters  

  #  - fromEnd: search backward for 'maxPerObject' from the end of 'text',
  #    limiting search to 'searchLengthLimitFromEnd' characters  

  #  - fromStartAndEnd: search for 'maxPerObject' from both the start and end of 'text'
  #    limiting search to 'searchLengthLimitFromStart' and 'searchLengthLimitFromEnd' respectively  

  #  - none: do not parse timestamps

  searchDirection: fromStartAndEnd

  # 'searchLengthLimitFromStart' limits the number of characters to search for
  #  timestamps from the beginning of the object's 'text' field. 0 for unlimited
  searchLengthLimitFromStart: 0

  # 'searchLengthLimitFromEnd' limits the number of characters to search for
  #  timestamps from the end of the object's 'text' field. 0 for unlimited
  searchLengthLimitFromEnd: 0

  # 'zone' controls the timezone for formatting timestamp epoch
  #  values to string. The 'java.time.ZoneId.of(String zoneId)' 
  #  is used to obtain the timezone from the 'zone' value.
  #  If set to null, the host OS timezone is used. 
  zone: null

  # 'literals' contains an array of strings to treat as a part
  #  of any timestamp candidate found when structuring a TenXObject.
  literals:
    - T
    - Z
    - I # Go INFO
    - E # Go ERROR
    - W # Go WARN

  # 'patterns' specifies an array of date-time formats to attempt when parsing timestamps from input events.
  #  Timestamp formats that appear frequently within an input stream can be 'bumped' higher within the list below.

  patterns:
  # Most common formats
  - "'I'MMdd HH:mm:ss.S"                       # Used in Kubernetes kube-apiserver logs with INFO prefix, 1 fractional-second digit.
  - "'I'MMdd HH:mm:ss.SS"                      # Used in Kubernetes kube-apiserver logs with INFO prefix, 2 fractional-second digits.
  - "'I'MMdd HH:mm:ss.SSS"                     # Used in Kubernetes kube-apiserver logs with INFO prefix, 3 fractional-second digits.
  - "'I'MMdd HH:mm:ss.SSSS"                    # Used in Kubernetes kube-apiserver logs with INFO prefix, 4 fractional-second digits.
  - "'I'MMdd HH:mm:ss.SSSSS"                   # Used in Kubernetes kube-apiserver logs with INFO prefix, 5 fractional-second digits.
  - "'I'MMdd HH:mm:ss.SSSSSS"                  # Used in Kubernetes kube-apiserver logs with INFO prefix, 6 fractional-second digits (microseconds).
  - "'W'MMdd HH:mm:ss.S"                       # Used in Kubernetes kube-apiserver logs with WARNING prefix, 1 fractional-second digit.
  - "'W'MMdd HH:mm:ss.SS"                      # Used in Kubernetes kube-apiserver logs with WARNING prefix, 2 fractional-second digits.
  - "'W'MMdd HH:mm:ss.SSS"                     # Used in Kubernetes kube-apiserver logs with WARNING prefix, 3 fractional-second digits.
  - "'W'MMdd HH:mm:ss.SSSS"                    # Used in Kubernetes kube-apiserver logs with WARNING prefix, 4 fractional-second digits.
  - "'W'MMdd HH:mm:ss.SSSSS"                   # Used in Kubernetes kube-apiserver logs with WARNING prefix, 5 fractional-second digits.
  - "'W'MMdd HH:mm:ss.SSSSSS"                  # Used in Kubernetes kube-apiserver logs with WARNING prefix, 6 fractional-second digits (microseconds).
  - "'E'MMdd HH:mm:ss.S"                       # Used in Kubernetes kube-apiserver logs with ERROR prefix, 1 fractional-second digit.
  - "'E'MMdd HH:mm:ss.SS"                      # Used in Kubernetes kube-apiserver logs with ERROR prefix, 2 fractional-second digits.
  - "'E'MMdd HH:mm:ss.SSS"                     # Used in Kubernetes kube-apiserver logs with ERROR prefix, 3 fractional-second digits.
  - "'E'MMdd HH:mm:ss.SSSS"                    # Used in Kubernetes kube-apiserver logs with ERROR prefix, 4 fractional-second digits.
  - "'E'MMdd HH:mm:ss.SSSSS"                   # Used in Kubernetes kube-apiserver logs with ERROR prefix, 5 fractional-second digits.
  - "'E'MMdd HH:mm:ss.SSSSSS"                  # Used in Kubernetes kube-apiserver logs with ERROR prefix, 6 fractional-second digits (microseconds).
  - "'F'MMdd HH:mm:ss.S"                       # Used in Kubernetes kube-apiserver logs with FATAL prefix, 1 fractional-second digit.
  - "'F'MMdd HH:mm:ss.SS"                      # Used in Kubernetes kube-apiserver logs with FATAL prefix, 2 fractional-second digits.
  - "'F'MMdd HH:mm:ss.SSS"                     # Used in Kubernetes kube-apiserver logs with FATAL prefix, 3 fractional-second digits.
  - "'F'MMdd HH:mm:ss.SSSS"                    # Used in Kubernetes kube-apiserver logs with FATAL prefix, 4 fractional-second digits.
  - "'F'MMdd HH:mm:ss.SSSSS"                   # Used in Kubernetes kube-apiserver logs with FATAL prefix, 5 fractional-second digits.
  - "'F'MMdd HH:mm:ss.SSSSSS"                  # Used in Kubernetes kube-apiserver logs with FATAL prefix, 6 fractional-second digits (microseconds).
  - "yyyy-MM-dd HH:mm:ss"                      # Widely used in Java applications, databases (e.g., MySQL, PostgreSQL), and application servers (e.g., Tomcat, JBoss).
  - "yyyy-MM-dd'T'HH:mm:ss.SSSZ"               # Common in web services, APIs, Java (with DateTimeFormatter.ISO_OFFSET_DATE_TIME), Python (with datetime.isoformat()), and systems requiring precise timestamps with timezone information.
  - "MMM dd HH:mm:ss"                          # Frequently seen in syslog, Unix-based systems, network devices (e.g., Cisco routers), and web servers (e.g., Apache, Nginx).
  - "dd/MMM/yyyy:HH:mm:ss Z"                   # Common Log Format, used in Apache and Nginx access logs and HTTP proxies.
  - "yyyy-MM-dd HH:mm:ss,SSS"                  # Common in Java logging frameworks like Log4j and Logback.
  - "yyyy-MM-dd'T'HH:mm:ss,SSS"                # Common in Java logging with ISO 8601 date-time, comma-separated milliseconds.
  - "MM/dd/yyyy HH:mm:ss"                      # Used in Windows event logs, .NET applications, and U.S.-based systems.
  - "EEE MMM dd HH:mm:ss yyyy"                 # Human-readable format used in various logs, including some web servers and application logs.
  - "yyyy-MM-dd'T'HH:mm:ss"                    # Simplified ISO 8601 format, used in many modern applications and frameworks.
  - "HH:mm:ss"                                 # Time-only format, used when the date is implied or provided separately, common in embedded systems and some programming languages.
  - "yyyyMMdd HH:mm:ss"                        # Compact format used in some legacy systems and batch processing logs.
  - "MMM dd, yyyy h:mm:ss a"                   # Human-readable format with 12-hour clock, used in application logs and some U.S.-based systems.
  - "MMM dd, yyyy hh:mm:ss a"                  # Human-readable format with 12-hour clock (padded hour), used in application logs and some U.S.-based systems.
  # Common formats with slight variations
  - "yyyy-MM-dd HH:mm:ss.SSS"                  # Extended precision format used in Java applications and databases requiring millisecond accuracy.
  - "dd-MMM-yyyy HH:mm:ss.SSS"                 # Common in Java logging frameworks (e.g., Log4j, SLF4J) and application logs requiring human-readable dates with millisecond precision.
  - "yyyy-MM-dd'T'HH:mm:ss.S'Z'"               # ISO 8601 UTC format, 1 fractional-second digit.
  - "yyyy-MM-dd'T'HH:mm:ss.SS'Z'"              # ISO 8601 UTC format, 2 fractional-second digits.
  - "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"             # ISO 8601 UTC format, 3 fractional-second digits (milliseconds).
  - "yyyy-MM-dd'T'HH:mm:ss.SSSS'Z'"            # ISO 8601 UTC format, 4 fractional-second digits.
  - "yyyy-MM-dd'T'HH:mm:ss.SSSSS'Z'"           # ISO 8601 UTC format, 5 fractional-second digits.
  - "yyyy-MM-dd'T'HH:mm:ss.SSSSSS'Z'"          # ISO 8601 UTC format, 6 fractional-second digits (microseconds).
  - "yyyy-MM-dd'T'HH:mm:ss.SSSSSSS'Z'"         # ISO 8601 UTC format, 7 fractional-second digits.
  - "yyyy-MM-dd'T'HH:mm:ss.SSSSSSSS'Z'"        # ISO 8601 UTC format, 8 fractional-second digits.
  - "yyyy-MM-dd'T'HH:mm:ss.SSSSSSSSS'Z'"       # ISO 8601 UTC format, 9 fractional-second digits (nanoseconds).
  - "yyyy-MM-dd'T'HH:mm:ssZZZZZ"               # ISO 8601 with timezone offset, used in systems where timezone information is critical.
  - "yyyy-MM-dd HH:mm:ss.SSSZZZZZ"             # Similar to above but without the 'T' separator, used in database logs and application servers.
  - "yyyy-MM-dd HH:mm:ss.S"                    # Space-separated format, 1 fractional-second digit.
  - "yyyy-MM-dd HH:mm:ss.SS"                   # Space-separated format, 2 fractional-second digits.
  - "yyyy-MM-dd HH:mm:ss.SSS"                  # Space-separated format, 3 fractional-second digits (milliseconds).
  - "yyyy-MM-dd HH:mm:ss.SSSS"                 # Space-separated format, 4 fractional-second digits.
  - "yyyy-MM-dd HH:mm:ss.SSSSS"                # Space-separated format, 5 fractional-second digits.
  - "yyyy-MM-dd HH:mm:ss.SSSSSS"               # Space-separated format, 6 fractional-second digits (microseconds).
  - "MM/dd/yyyy*HH:mm:ss*SSS"                  # Used in some U.S.-based systems, particularly in legacy applications or specific logging frameworks.
  - "M/d/yyyy h:mm:ss a:SSS"                   # Common in systems using 12-hour time format, such as some Windows applications or older logging systems.
  - "M/d/yyyy hh:mm:ss a:SSS"                  # Common in systems using 12-hour time format with milliseconds and padded hour, such as some Windows applications.
  - "M/dd/yyyy hh:mm:ss a"                     # Similar to above, used in systems where millisecond precision is not needed.
  - "yyyy-MM-dd'T'HH:mm:ss.SSSX"               # ISO 8601 with basic timezone offset format, used in systems that require standardized timestamps.
  - "yyyy-MM-dd'T'HH:mm:ss.SSSz"               # ISO 8601 with timezone name, used in applications that need to display timezone information.
  - "yyyy-MM-dd'T'HH:mm:ss'Z'"                 # ISO 8601 format assuming UTC, used in systems where all times are in UTC.
  - "yy/MM/dd HH:mm:ss"                        # Two-digit year format, used in compact logs or older systems.
  - "MMM dd HH:mm:ss ZZZZ"                     # Used in some Unix-based systems and web servers, includes timezone offset.
  - "HH:mm:ss,SSS"                             # Time-only format with milliseconds, used in performance logs or systems where date is provided separately.
  - "yyyy-MM-dd*HH:mm:ss"                      # Used in some application logs where the separator is a space or asterisk.
  - "yyyy MMM dd HH:mm:ss.SSS"                 # Human-readable format with milliseconds, used in some application logs.
  - "dd/MMM/yyyy HH:mm:ss"                     # Used in European systems and some web applications.
  - "yyyy-MM-dd'T'HH:mm:ss.SSS''Z''"           # ISO 8601 with milliseconds and literal 'Z', used in systems where 'Z' is explicitly included.
  - "MMM dd yyyy HH:mm:ss"                     # Human-readable format, used in various logs.
  - "yyyy-MM-dd HH:mm:ss ZZZZ"                 # Format with timezone offset, used in systems requiring timezone information.
  - "yyyy-MM-dd HH:mm:ssZZZZZ"                 # Similar to above, used in application logs.
  - "dd MMM yyyy HH:mm:ss"                     # European format, used in some web applications and databases.
  - "MMdd_HH:mm:ss"                            # Compact format without separators, used in file names or space-constrained logs.
  - "yyyy-MM-dd HH:mm:ss,SSSZZZZZ"             # Java logging format with timezone offset.
  - "yyyyMMdd HH:mm:ss.SSS"                    # Compact format with milliseconds, used in some legacy systems.
  - "yyyy/MM/dd HH:mm:ss"                      # Format used in some Asian systems, particularly in Japan.
  - "dd/MM/yyyy HH:mm:ss"                      # Common in European systems, including some web applications and databases.
  - "MM-dd-yyyy HH:mm:ss"                      # U.S. format variant, used in some older systems.
  - "yyyyMMddHHmmss"                           # Compact format for file names or database timestamps, used in systems where space is a concern.
  - "EEE, dd MMM yyyy HH:mm:ss zzz"            # RFC 1123 format, used in HTTP headers, web servers, and email systems.
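
When deciding which pattern to bump to the top of the list, it can help to check a sample line's shape by hand. The regular expression below is only a rough shell equivalent of the "yyyy-MM-dd HH:mm:ss" entry above, not the parser itself:

```shell
# Rough grep equivalent of the "yyyy-MM-dd HH:mm:ss" pattern above.
echo '2024-06-01 12:34:56 starting product catalog' \
  | grep -Eo '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}'
```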

fields

Configure the Field parser to scan TenXTemplates for JSON and KV fields.

Below is the default configuration from: fields/config.yaml.


# 🔟❎ 'run' TenXTemplate field extract configuration

# Configure how to extract JSON objects and KV structures from TenXTemplates.
# To learn more see https://doc.log10x.com/run/transform/fields/

# Set the 10x pipeline to 'run'
tenx: run

# =============================== Extract Options =============================

field:

  # 'extract' controls whether to scan TenXTemplates for JSON
  #  objects or key-value lists (e.g., 'X=Y').
  extract: true

  # 'nameBreaks' controls which characters, when found to the left of a token
  #  that is a candidate for being a 'key' in a KV field formation, should serve
  #  as a terminator for the search. For example, for an object whose 'text' field
  #  contains the entry ',tx_result=OK', the desired key name is 'tx_result',
  #  and so the name terminator character should be ',' rather than '_' (in which
  #  case the key would have been named 'result').
  nameBreaks: ', /\{}.()[]'

  # 'valueBreaks' controls which characters, when found to the right of a token
  #  that is a candidate for being a 'value' in a KV field formation, should serve
  #  as a terminator for the search. For example, for an object whose 'text' field
  #  contains the entry 'status=RESULT_SUCCESS', the desired KV value
  #  is 'RESULT_SUCCESS', and so the value terminator character
  #  should be ',' rather than '_' (in which case the value would be 'RESULT').
  valueBreaks: ', /\{}()[]'
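
The effect of the break characters can be illustrated with plain shell string-splitting. This is a toy analogue of the idea described above, not the app's parser:

```shell
# Toy illustration of the name-break idea: with ',' as a terminator,
# ',tx_result=OK' yields the key 'tx_result' rather than 'result'.
s=',tx_result=OK'
key=${s%%=*}     # drop the value part: ',tx_result'
key=${key##*,}   # trim back to the last ',': 'tx_result'
echo "$key"
```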

symbol

Configure the Origin selector to select the source code/binary executable origin of TenXTemplate symbol values.

Below is the default configuration from: symbol/config.yaml.


# 🔟❎ 'run' symbol origin configuration

# Identify the origin of symbol values within TenXTemplates.
# To learn more see https://doc.log10x.com/run/transform/symbol/

# Set the 10x pipeline to 'run'
tenx: run

# =============================== Origin Options ==============================

symbol:

  # 'maxOrigins' controls the number of source/binary origins to list
  #  per symbol token sequence, as a series of tokens within
  #  a target TenXTemplate (e.g., 'ERROR', 'could not connect to "{}"') may appear
  #  in multiple source/binary files within the pipeline's loaded library.
  #  The sorting algorithm configured below is used to select the 'maxOrigins'
  #  topmost entries to list.
  maxOrigins: 64

  # 'sequenceReserved' defines a list of terms to ignore when searching
  #  for the symbol tokens that constitute a TenXTemplate's 'message' portion.
  #  For example, for an event whose text contains: 'connect success = true',
  #  the value 'true' will not be considered a part of the event's message,
  #  as the 'true' value is most likely the result of a variable boolean state.
  #  For more information, see: https://doc.log10x.com/api/js/#TenXObject+symbolSequence
  sequenceReserved:
    - "null"
    - "nil"
    - "true"
    - "false"
    - "to"
    - "the"
    - "a"
    - "at"
    - "for"
    - "log"
    - "info"
    - "http"  
    - TRACE
    - DEBUG
    - INFO
    - NOTICE
    - WARN
    - ERROR
    - CRITICAL
    - ALERT
    - EMERGENCY

  # ----------------------------- Debug Options -------------------------------

  debug:

    # 'symbol' debugging allows for verbose printing of the selection process
    #  for symbol tokens from the pipeline's 10x symbol files used to produce the
    #  results of 10x reflection functions.
    #  For more information, see: https://doc.log10x.com/api/js/#TenXObject+symbol


    # 'origins' outputs information about the origin selection process for a TenXTemplate symbol.
    #  For example, adding 'foo.js' to 'origins' will output information about if/how it
    #  was selected as the origin of TenXTemplate symbols.
    #  In other words, if a symbol (e.g., 'MyClass') has 'foo.java' as the source file
    #  from which it originated, adding 'foo.java' to 'origins' will emit information
    #  about the selection process.
    #  Specifying '*' will emit information for all source/binary files that have
    #  been selected as the origin of any TenXTemplate objects within the pipeline.
    origins: [
    #  '*'
    ]

    # 'symbols' logs the origin selection process for symbol tokens within a TenXTemplate.
    #  For example, set 'symbols' to 'Could not connect to' to log how the 10x JavaScript 'symbolSequence'
    #  determines the origin source code/binary file within the pipeline's symbol library.

    #  Specifying '*' will emit information for all source code/binary files that have
    #  been selected as the origin of any TenXTemplate objects within the pipeline.
    symbols: [
     # '*'
    ]

group

Configure the Group sequencer to group and sequence TenXObjects.

Below is the default configuration from: group/config.yaml.


# 🔟❎ 'run' event grouping configuration

# Group sequences of TenXObjects to filter, aggregate, and output as a single logical unit.
# To learn more see https://doc.log10x.com/run/transform/group/

# Set the 10x pipeline to 'run'
tenx: run

# =============================== Group Options ===============================

group:

  # 'filters' specifies JavaScript expressions a TenXObject instance/group must
  #  evaluate as truthy in order to be written to output
  filters: []

  # 'maxSize' defines the maximum number of TenXObjects to group
  #  before the group is sealed and forwarded into the pipeline. 
  #  Subsequent TenXObjects can form a new group.
  maxSize: 20000

  # 'flushTimeout' defines the max interval (e.g., 10s) to wait for 
  #  new events to be read from an input stream before it flushes any
  #  pending TenXObjects group into the pipeline.
  #  This mechanism is designed to avoid latencies in dispatching pending event
  #  groups to output destinations.
  flushTimeout: $=parseDuration("5s")

  # 'async' specifies whether to sequence and group TenXObjects in a dedicated thread
  async: true 

parallelize

Configure the Parallel processor to distribute event parsing and transformation workloads across multiple cores.

Below is the default configuration from: parallelize/config.yaml.


# 🔟❎ 'run' event parallel processing configuration

# Transform log/trace events read from inputs into well-defined TenXObjects using multiple cores.
# To learn more see https://doc.log10x.com/run/transform/parallelize/

# Set the 10x pipeline to 'run'
tenx: run

# =============================== Parallel Options ============================

parallelEvent:

  # 'threadPoolSize' specifies the number of threads allocated to transform events into TenXObjects concurrently.
  #  If the value is:
  #  - = 0: transform events into TenXObjects synchronously on their input stream's calling thread.
  #  - between 0 and 1: interpreted as a percentage of the available cores (e.g., 0.5 = use up to 50% of available cores)
  #  - = 1: allocate a single dedicated thread to transform events.
  #  - > 1: interpreted as a fixed number of threads (e.g., 10 = 10 threads)
  threadPoolSize: "0.5"

  # 'batchSize' specifies the maximum number of events to queue for concurrent processing before flushing.
  #  If 'threadPoolSize' is 0, this value is unused, and events are transformed into TenXObjects synchronously
  #  on their calling input's thread. If 'batchSize' is 0, flush pending events after 'flushInterval' expires or the
  #  source input reaches end-of-file.
  batchSize: 1000

  # 'flushInterval' specifies the maximum wait duration before flushing queued events.
  #  If 'threadPoolSize' is 0, this value is unused, and events are transformed into TenXObjects synchronously
  #  on their input thread. If 0, no wait flush interval is applied.
  flushInterval: 2s

  # 'processingTimeout' specifies the maximum wait duration before dropping unprocessed queued events.
  #  This value provides a backstop against overflowing the heap if the pipeline cannot dequeue
  #  pending events to transform into TenXObjects. If 0, no timeout is applied.
  processingTimeout: 30s
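
A worked example of the fractional 'threadPoolSize' rule above: with the default of 0.5 on an 8-core host (the core count here is chosen purely for illustration), the pool resolves to 4 threads.

```shell
# 0 < value < 1 is a share of the available cores; here 8 * 0.5 = 4 threads.
cores=8
awk -v c="$cores" -v v=0.5 'BEGIN { print int(c * v) }'
```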


This app is defined in policy/app.yaml.