Scale and aggregate metrics with NXLog Agent

In many environments, metrics arrive at a high frequency or in units that are not ideal for long-term analysis. Converting units and aggregating values before forwarding makes metrics easier to interpret and reduces data volume.

Scaling metrics allows you to normalize values into meaningful units. For example, network traffic is often measured in bytes, but monitoring platforms typically display them in megabytes or gigabytes. Converting values during collection ensures consistent units before data leaves the host.
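As an illustration outside the agent, scaling is a single division by the appropriate power of 1024. The following plain-Python sketch (the unit table and function name are illustrative only, not part of NXLog) shows the idea:

```python
# Illustrative sketch (plain Python, not the NXLog language): scale raw
# byte counts into larger units by dividing by a power of 1024.
UNIT_DIVISORS = {"KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}

def scale_bytes(value, unit):
    """Convert a raw byte count into the requested unit."""
    return value / UNIT_DIVISORS[unit]

print(scale_bytes(1_572_864, "MB"))  # 1.5
```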

Aggregation summarizes high-frequency measurements into periodic statistics. Instead of forwarding every measurement, the agent can calculate values such as averages or minimum and maximum values over a set time window. This approach reduces noise while preserving key metric characteristics.
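For instance, a minute of frequent samples can collapse into one record of summary statistics. A minimal Python sketch of the idea (the function and field names are hypothetical, chosen only to mirror the statistics mentioned above):

```python
from statistics import mean

# Illustrative sketch: summarize one window of high-frequency samples
# into a single record of statistics instead of forwarding every value.
def summarize(samples):
    return {"avg": mean(samples), "min": min(samples), "max": max(samples)}

window = [200, 198, 205, 201]  # e.g. one minute of 15-second samples
print(summarize(window))
```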

Scale metrics from bytes to MB

Metrics related to data transfer often use bytes as the unit, which can make large values hard to interpret. By scaling these metrics with NXLog Agent, you can convert them to more practical units before forwarding them. The example below demonstrates how to convert a data transfer metric from bytes to megabytes (MB).

Example 1. Converting data transfer size from bytes to MB

This configuration collects NGINX access logs with the File input module and uses a regular expression to parse records into structured data. It then removes unnecessary fields while keeping those relevant for metric conversion. Finally, it outputs the records in JSON format.

nxlog.conf
<Extension json>
    Module        xm_json
</Extension>

<Extension rewrite>
    Module        xm_rewrite (1)
    Keep          timestamp, hostname, http_url, http_status_code, file_size
</Extension>

<Input nginx_access>
    Module    im_file
    File      '/var/log/nginx/access.log'
    <Exec>
        if ($raw_event =~ /(?x)^(\S+)\ \S+\ (\S+)\ \[([^\]]+)\]\ \"(\S+)\ (.+)
                          \ HTTP\/\d\.\d\"\ (\S+)\ (\S+)\ \"([^\"]+)\"
                          \ \"([^\"]+)\"/) { (2)
            $timestamp = parsedate($3);
            $http_url = $5;
            $http_status_code = $6;

            if ($7 != '-') {
                $file_size = integer($7) / 1048576; (3)
            }

            rewrite->process();
            to_json(); (4)
        }
    </Exec>
</Input>
1 The Rewrite extension provides the functionality to normalize data records, including discarding unwanted fields.
2 Parses NGINX access log records and creates fields from the captured groups.
3 Converts the file size portion of the log record to an integer and divides it by 1048576 (1024 × 1024) to obtain the value in MB. Because both operands are integers, the division truncates any fractional part.
4 The to_json() procedure converts the record to JSON format and writes it to the $raw_event core field.

The following is an NGINX access log sample.

Input sample
198.51.100.14 - - [11/Mar/2026:09:16:06 +0100] "GET /media/training-video.mp4 HTTP/1.1" 200 1456723456 "https://example.com/videos" "Mozilla/5.0 (Macintosh; Intel Mac OS X 14_3)"

The following JSON object shows the same record after NXLog Agent has processed it.

Output sample
{
  "Hostname": "WEB-SRV",
  "timestamp": "2026-03-11T09:16:06.000000+01:00",
  "http_url": "/media/training-video.mp4",
  "http_status_code": "200",
  "file_size": 1389
}
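As a sanity check outside the agent, the parsing and conversion can be reproduced in plain Python. The regular expression below is a simplified stand-in for the one in the configuration, and the dictionary keys mirror the example's field names:

```python
import re

# Simplified stand-in for the NGINX access-log regex in the configuration
# above; capture groups follow the same order as the NXLog example.
LOG_RE = re.compile(
    r'^(\S+) \S+ (\S+) \[([^\]]+)\] "(\S+) (.+) HTTP/\d\.\d"'
    r' (\S+) (\S+) "([^"]+)" "([^"]+)"'
)

line = ('198.51.100.14 - - [11/Mar/2026:09:16:06 +0100] '
        '"GET /media/training-video.mp4 HTTP/1.1" 200 1456723456 '
        '"https://example.com/videos" "Mozilla/5.0 (Macintosh; Intel Mac OS X 14_3)"')

m = LOG_RE.match(line)
record = {
    "http_url": m.group(5),
    "http_status_code": m.group(6),
}
if m.group(7) != "-":
    # Integer division by 1048576 (1024 * 1024), as in the NXLog example
    record["file_size"] = int(m.group(7)) // 1048576

print(record)  # file_size is 1389 for the sample line
```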

Aggregate and spool metrics

High-frequency metrics collected at short intervals can quickly generate large volumes of data. Aggregating these metrics within NXLog Agent reduces the number of records while preserving useful statistical information. The example below demonstrates how to collect NXLog Agent host metrics, calculate the average and maximum over a one-minute window, and write the summarized results to the agent's log file for downstream monitoring.

Example 2. Streaming aggregated NXLog Agent file descriptor metrics

This configuration uses the Internal Metrics input module to collect metrics from the host system. It then uses statistical counters to record the average and maximum file descriptor count and writes the values to the NXLog Agent log file every minute.

nxlog.conf
<Input nxlog>
    Module           im_internalmetrics
    CounterServer    TRUE
    PollInterval     2
</Input>

<Output metrics>
    Module           om_null
    <Schedule>
        When         @startup
        <Exec>
            create_stat("fd_avg", "AVG", 60); (1)
            create_stat("fd_max", "COUNTMAX", 60);
        </Exec>
    </Schedule>
    <Schedule>
        Every        1 min
        <Exec>
            log_info("fd_avg=" + get_stat("fd_avg") + ",fd_max=" + get_stat("fd_max")); (2)
        </Exec>
    </Schedule>
    <Exec>
        add_stat("fd_avg", $server_fd_count);
        add_stat("fd_max", $server_fd_count);

        drop(); (3)
    </Exec>
</Output>

<Route r1>
    Path             nxlog => metrics
</Route>
1 Initializes the counters with a 60-second sliding window.
2 Writes the aggregated values to the NXLog Agent log file.
3 Discards the original record.
Output sample
2026-03-11T10:34:42+01:00 INFO [om_null|metrics] fd_avg=200,fd_max=201
2026-03-11T10:35:42+01:00 INFO [om_null|metrics] fd_avg=205,fd_max=411
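The create_stat() / add_stat() / get_stat() flow above can be modeled in plain Python to make the aggregation logic concrete. This is an illustrative model only, not NXLog internals; the "AVG" and "MAX" kinds stand in for the configured counter types, and the window reset between intervals is omitted:

```python
# Illustrative model (not NXLog internals) of the statistical counters
# used above: samples accumulate, and an aggregate is read per interval.
class StatCounter:
    def __init__(self, kind):
        self.kind = kind          # "AVG" or "MAX" in this sketch
        self.samples = []

    def add(self, value):
        self.samples.append(value)

    def get(self):
        if not self.samples:
            return 0
        if self.kind == "AVG":
            return sum(self.samples) // len(self.samples)
        return max(self.samples)

fd_avg, fd_max = StatCounter("AVG"), StatCounter("MAX")
for fd_count in (200, 198, 205, 201):   # e.g. samples within one minute
    fd_avg.add(fd_count)
    fd_max.add(fd_count)

print(f"fd_avg={fd_avg.get()},fd_max={fd_max.get()}")  # fd_avg=201,fd_max=205
```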