Splunk
Splunk is a software platform for data collection, indexing, searching, and visualization. NXLog can be configured as an agent for Splunk, collecting and forwarding logs to the Splunk instance. Splunk can accept logs forwarded via UDP, TCP, TLS, or HTTP.
For more information, see the Splunk Enterprise documentation. See also the Sending ETW Logs to Splunk with NXLog post.
An alternative to the Splunk Universal Forwarder
The Splunk universal forwarder is a Splunk agent commonly used in a similar role as NXLog. However, NXLog offers significant advantages over the Splunk forwarder, such as full-featured parsing and filtering prior to forwarding, which results in faster indexing by Splunk. In controlled tests, Splunk processed and indexed events forwarded by NXLog over 10 times faster than the same set of Windows events forwarded by the Splunk universal forwarder, despite the overhead of renaming Windows field names and reformatting the events to emulate Splunk’s proprietary forwarding format.
When planning a migration to NXLog, the various types of log sources being collected by Splunk universal forwarders should be evaluated. Depending on the type of log source, it could be as simple as creating a new TCP data input port and following some of the examples contained in this chapter, such as forwarding BSD Syslog events. As long as the log source provides data in a standard format that Splunk can easily index, and Splunk is retaining the original field names, no special configurations need to be written.
In the case of Windows Event Log providers, special NXLog configurations are required to emulate the event fields and format sent by the Splunk universal forwarder since Splunk renames at least four Windows fields and adds some new fields to the event schema. See the comparison table below.
Windows | NXLog | Splunk
---|---|---
Channel | Channel | Logname
Computer | Hostname * | ComputerName
EventID | EventID | EventCode
Execution_ProcessID | ExecutionProcessID | —
Execution_ThreadID | ExecutionThreadID | —
ProviderGuid | ProviderGuid | —
UserID | UserID | Sid
— | — | Type
— | — | idType
* NXLog normalizes this field name across all modules and log sources.
It should be emphasized that NXLog is capable of forwarding Windows events or any other kind of structured logs to Splunk for indexing without any need to emulate the format or event schema used by the Splunk universal forwarder. There is no technical requirement or advantage in using Splunk’s proprietary format for forwarding logs to Splunk, especially for new Splunk deployments which have no existing corpus of Windows events.
The only purpose of emulating the Splunk forwarder format is to maintain continuity with previously indexed Windows events that were forwarded with the Splunk universal forwarder. Forwarding Windows Event Log data in JSON format over TCP to Splunk is the preferred method.
Forwarding Windows events using JSON
This section assumes that any pre-existing Windows Event Log data currently indexed in Splunk will be managed separately (due to some of its field names being altered from the original Windows field names) until it ages out of the system. However, if there is a need to maintain Splunk-specific field names of Windows events, see the next section, which provides a solution for using NXLog to forward Windows events as if they were sent by the Splunk universal forwarder.
After defining a network data input port (see Adding a TCP or UDP Data Input in the next section for details), the only NXLog configuration needed for forwarding events to Splunk is a simple, generic TCP (or UDP) output module instance that converts the logs to JSON as they are being sent.
This example uses Windows ETW to collect Windows DNS Server events.
The output instance defines the IP address and port of the host where Splunk Enterprise is receiving data on TCP port 1527, which was defined in Splunk with a Source Type of _json.
<Extension json>
Module xm_json
</Extension>
<Input dns_server>
Module im_etw
Provider Microsoft-Windows-DNSServer
</Input>
<Output splunk>
Module om_tcp
ListenAddr 192.168.1.21:1527
Exec to_json();
</Output>
{
"SourceName": "Microsoft-Windows-DNSServer",
"ProviderGuid": "{EB79061A-A566-4698-9119-3ED2807060E7}",
"EventID": 515,
"Version": 0,
"ChannelID": 17,
"OpcodeValue": 0,
"TaskValue": 5,
"Keywords": "4611686018428436480",
"EventTime": "2020-05-19T10:42:06.313322-05:00",
"ExecutionProcessID": 1536,
"ExecutionThreadID": 3896,
"EventType": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"Domain": "WIN-R4QHULN6KLH",
"AccountName": "Administrator",
"UserID": "S-1-5-21-915329490-2962477901-227355065-500",
"AccountType": "User",
"Flags": "EXTENDED_INFO|IS_64_BIT_HEADER|PROCESSOR_INDEX (577)",
"Type": "5",
"NAME": "www.example.com",
"TTL": "3600",
"BufferSize": "17",
"RDATA": "0x106E73312E6578616D706C652E636F6D2E",
"Zone": "example.com",
"ZoneScope": "Default",
"VirtualizationID": ".",
"EventReceivedTime": "2020-05-19T10:42:07.313482-05:00",
"SourceModuleName": "dns_server",
"SourceModuleType": "im_etw",
"MessageSourceAddress": "192.168.1.61"
}
Since Splunk readily accepts formats like JSON and XML that support highly structured data, querying JSON-formatted logs is easily accomplished with the Splunk spath command.
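For instance, assuming the JSON events above were indexed with the _json source type, a search along the following lines extracts and tabulates the nested fields (the search terms are examples taken from the sample event, not required settings):

```
sourcetype="_json"
| spath
| search SourceName="Microsoft-Windows-DNSServer" EventID=515
| table EventTime, Zone, NAME, RDATA
```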

Forwarding Windows events using the Splunk Universal Forwarder format
If it is important to retain the Splunk universal forwarder format after migrating to NXLog, the following procedures must be followed for Splunk to correctly ingest the logs forwarded using this emulation technique.
When creating configurations with NXLog for maintaining backwards compatibility with events previously collected by the universal forwarder, only a few general principles need to be observed:
- When creating a new TCP data input in Splunk, choose the right Source Type.
- In the NXLog configuration, rename event fields to the field names Splunk associates with that Source Type.
- In the NXLog configuration, make sure the data matches the format shown in Splunk as closely as possible, unless Splunk is failing to parse specific fields.
- In the NXLog configuration, manually parse embedded structured data as new, full-fledged fields. A common cause of failed parsing with this technique is fields containing long strings of embedded subfields.
The following steps should be followed for each type of log source being forwarded:
- Examine the events in Splunk and note which value is assigned to sourcetype= listed below each event. The universal forwarder may list different values for sourcetype even when they are coming from the same source. Try to determine which one is the best fit.
- In Splunk, create a new TCP Data Input port for each log source type to be forwarded and set the Source Type to the same one assigned to events that have been sent by the universal forwarder after they have been ingested by Splunk.
- Note which fields are being parsed and indexed after they have been received and processed by Splunk.
- Create an NXLog configuration that will capture the log source data, rename the field names to those associated with the Source Type, and format them to match the format that the Splunk universal forwarder uses.
The actual format used by the Splunk universal forwarder is "cooked" data which has a binary header component and a footer.
A single line containing the date and time of the event marks the beginning of the event data on the next line, which is generally formatted as key-value pairs, unquoted, separated by an equals sign (=), with only one key-value pair per line.
The header and footer parts are not needed for forwarding events to a TCP Data Input port.
Only the first line containing the event’s date/time and the subsequent lines containing the key-value pairs are needed.
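As an illustration, an emulated event as transmitted over TCP might look like the following (the values are illustrative, based on the DNS Server sample event shown earlier; only a subset of the key-value lines is shown):

```
05/19/2020 10:42:06 AM
LogName=Microsoft-Windows-DNSServer/Audit
SourceName=Microsoft-Windows-DNSServer
EventCode=515
EventType=4
Type=Information
ComputerName=WIN-R4QHULN6KLH
Sid=S-1-5-21-915329490-2962477901-227355065-500
Message=A resource record was created in zone example.com.
```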
Windows Event Log data can be forwarded to Splunk using NXLog in such a way that Splunk parses and indexes them as if they were sent by the Splunk universal forwarder. Only three criteria need to be met:
- The Splunk Add-on for Microsoft Windows has been installed where the forwarded events will be received. See About installing Splunk add-ons on Splunk Docs for more details.
- The NXLog configuration rewrites events to match the field names expected by the corresponding log source in the Splunk Add-on for Microsoft Windows and formats the event to match the format of the Splunk universal forwarder.
- A unique TCP Data Input port is created for each type of Windows Event Provider by following the procedure in Adding a TCP or UDP data input. When specifying the Source type, it is imperative to choose the correct name from the dropdown list that follows this naming convention: WinEventLog:Provider[/Channel].
When adding a new TCP Data Input, the desired Source type for Windows might not be present in the Select Source Type dropdown menu.
If so, select or manually enter WinEventLog and create the TCP Data Input.
Once created, go back to the list of TCP Data Inputs and edit it by clicking the TCP port number.
Make sure Set source type is set to Manual, then enter the correct name in the Source type field.
The following examples have been tested with Splunk version 9.0.1 and the Splunk Add-on for Microsoft Windows version 8.5.0.
This example illustrates the method for emulating the Splunk Universal Forwarder for sending Windows DNS Server Audit events to Splunk.
First, a new TCP Input on port 1515 with a Source type of WinEventLog:Microsoft-Windows-DNSServer/Audit
is created for receiving the forwarded events.

This configuration uses the im_msvistalog module to collect and parse the log data.
Since there will be no need for filtering in this example, a simple File
directive defines the location of the log source to be read, otherwise a QueryXML
block would have been used to define the filters and the Provider/Channel as the log source.
The Exec
block contains the necessary logic for converting the parsed data to the format used by the Splunk universal forwarder.
Since each event will be formatted and output as a multi-line record stored as a single string in the $raw_event
field, the xm_rewrite module is used to delete the original fields.
Once converted, events are then forwarded over TCP port 1515 to Splunk.
<Extension Drop_Fields>
Module xm_rewrite
Keep # Remove all
</Extension>
<Input DNS_Server_Audit>
Module im_msvistalog
File %SystemRoot%\System32\Winevt\Logs\Microsoft-Windows-DNSServer%4Audit.evtx
<Exec>
# Create a header variable for storing the Splunk datetime string
create_var('timestamp_header');
create_var('event'); # The Splunk equivalent of a $raw_event
create_var('message'); # For preserving the $Message field
create_var('vip_fields'); # Message subfields converted to fields
# Get the Splunk datetime string needed for the Header Line
$dts = strftime($EventTime,'YYYY-MM-DD hh:mm:ss.sTZ');
$hr = ""; # Hours, 2-digit
$ap = ""; # For either "AM" or "PM";
if ($dts =~ /(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})/ ) {
if (hour($EventTime) < 12) {
$ap = "AM";
$hr = $4;
if (hour($EventTime) == 0) $hr = "12";
}
if (hour($EventTime) > 11) {
$ap = "PM";
if (hour($EventTime) == 12) $hr = $4;
if (hour($EventTime) > 12) {
$hr = hour($EventTime) - 12;
if (hour($EventTime) < 22) $hr = "0" + $hr;
}
}
$dts = $2 +"/"+ $3 +"/"+ $1 +" "+ \
$hr +":"+ $5 +":"+ $6 +" "+ $ap + "\n";
}
set_var('timestamp_header', $dts);
# Convert $EventType to what Splunk expects
$EventType = ($EventType == "INFO" ? 4 : $EventType);
# Some really important DNS fields that Splunk doesn't parse
$vipFields = "";
if (defined($NAME))
$vipFields = $vipFields + 'NAME=' + $NAME + "\n";
if (defined($Severity))
$vipFields = $vipFields + 'Severity=' + $Severity + "\n";
if (defined($TTL))
$vipFields = $vipFields + 'TTL=' + $TTL + "\n";
if (defined($BufferSize))
$vipFields = $vipFields + 'BufferSize=' + $BufferSize + "\n";
if (defined($RDATA))
$vipFields = $vipFields + 'RDATA=' + $RDATA + "\n";
if (defined($Zone))
$vipFields = $vipFields + 'Zone=' + $Zone + "\n";
if (defined($ZoneScope))
$vipFields = $vipFields + 'ZoneScope=' + $ZoneScope + "\n";
set_var('vip_fields', $vipFields);
# Store and display the original $Message field at the end of the list
# of fields, just in case Splunk parses it correctly
set_var('message', $Message);
# Set the new Splunk Event for DNS Server Audit
set_var('event', \
'LogName=' + $Channel +"\n"+ \
'SourceName=' + $SourceName +"\n"+ \
'EventCode=' + $EventID +"\n"+ \
'EventType=' + $EventType +"\n"+ \
'Type=' + 'Information' +"\n"+ \
'ComputerName=' + $Hostname +"\n"+ \
'User=' + 'NOT_TRANSLATED' +"\n"+ \
'Sid=' + $UserID +"\n"+ \
'SidType=' + '0' +"\n"+ \
'TaskCategory=' + $Category +"\n"+ \
'OpCode=' + $OpCode +"\n"+ \
'RecordNumber=' + $RecordNumber +"\n"+ \
'Keywords=' + $Keywords +"\n" \
);
# Remove all NXLog fields.
# This is necessary for emulating the Splunk proprietary format.
Drop_Fields->process();
# Add the Splunk datetime string as a "header" line for this
# multi-line event
$raw_event = get_var('timestamp_header') + get_var('event') + \
get_var('vip_fields') + 'Message=' + get_var('message') +"\n";
</Exec>
</Input>
<Output Splunk_TCP_DNS_Audit>
Module om_tcp
ListenAddr 192.168.1.52:1515
</Output>
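The hour arithmetic in the Exec block above builds the MM/DD/YYYY hh:mm:ss AM/PM header line that the universal forwarder emits. As a cross-check, the same target format can be expressed with a single strftime call; this is a Python sketch of the expected result, not part of the NXLog configuration:

```python
from datetime import datetime

def splunk_header(dt: datetime) -> str:
    # MM/DD/YYYY hh:mm:ss AM/PM, the header line preceding each
    # emulated multi-line event
    return dt.strftime("%m/%d/%Y %I:%M:%S %p")

# The EventTime from the sample DNS Server event earlier in this chapter
print(splunk_header(datetime(2020, 5, 19, 10, 42, 6)))  # 05/19/2020 10:42:06 AM
```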
A sample DNS Server Audit Event after being forwarded to Splunk.

Events should be automatically parsed by Splunk.
This example illustrates the method for emulating the Splunk Universal Forwarder for sending Windows Sysmon DNS Query Events events to Splunk.
First, a new TCP Input on port 1515 with a Source type of WinEventLog:Microsoft-Windows-Sysmon/Operational
is created for receiving the forwarded events.

The configuration uses the im_msvistalog module to collect and parse the log data.
The QueryXML
block is used to specify not only the Provider/Channel, but also provides additional filtering for collecting only DNS Query events.
The Exec
block contains the necessary logic for converting the data to the format used by the Splunk universal forwarder.
Since each event will be formatted and output as a multi-line record stored as a single string in the $raw_event
field, the xm_rewrite module is used to delete the original fields.
Once converted, events are then forwarded over TCP port 1517 to Splunk.
<Extension Drop_Fields>
Module xm_rewrite
Keep # Remove all
</Extension>
<Input DNS_Sysmon>
Module im_msvistalog
<QueryXML>
<QueryList>
<Query Id="0">
<Select Path="Microsoft-Windows-Sysmon/Operational">
*[System[(EventID=22)]]
</Select>
</Query>
</QueryList>
</QueryXML>
<Exec>
# Create a header variable for storing the Splunk datetime string
create_var('timestamp_header');
create_var('event'); # The Splunk equivalent of a $raw_event
create_var('message'); # For preserving the $Message field
create_var('message_fields'); # Message subfields converted to fields
# Get the Splunk datetime string needed for the Header Line
$dts = strftime($EventTime,'YYYY-MM-DD hh:mm:ss.sTZ');
$hr = ""; # Hours, 2-digit
$ap = ""; # For either "AM" or "PM";
if ($dts =~ /(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})/ ) {
if (hour($EventTime) < 12) {
$ap = "AM";
$hr = $4;
if (hour($EventTime) == 0) $hr = "12";
}
if (hour($EventTime) > 11) {
$ap = "PM";
if (hour($EventTime) == 12) $hr = $4;
if (hour($EventTime) > 12) {
$hr = hour($EventTime) - 12;
if (hour($EventTime) < 22) $hr = "0" + $hr;
}
}
$dts = $2 +"/"+ $3 +"/"+ $1 +" "+ \
$hr +":"+ $5 +":"+ $6 +" "+ $ap + "\n";
}
set_var('timestamp_header', $dts);
# Convert $EventType to what Splunk expects
$EventType = ($EventType == "INFO" ? 4 : $EventType);
# Since Splunk often fails to parse the "sub-fields" of the Sysmon
# $Message field, create them as full-fledged fields.
$Subfields = "";
if (defined($UtcTime))
$Subfields = $Subfields + 'UtcTime=' + $UtcTime + "\n";
if (defined($ProcessGuid))
$Subfields = $Subfields + 'ProcessGuid=' + $ProcessGuid + "\n";
if (defined($ProcessId))
$Subfields = $Subfields + 'ProcessId=' + $ProcessId + "\n";
if (defined($QueryName))
$Subfields = $Subfields + 'QueryName=' + $QueryName + "\n";
if (defined($QueryStatus))
$Subfields = $Subfields + 'QueryStatus=' + $QueryStatus + "\n";
if (defined($QueryResults))
$Subfields = $Subfields + 'QueryResults=' + $QueryResults + "\n";
if (defined($Image))
$Subfields = $Subfields + 'Image=' + $Image + "\n";
set_var('message_fields', $Subfields);
# Store and display the original $Message field at the end of the list
# of fields, just in case Splunk parses it correctly
set_var('message', $Message);
# Set the new Splunk Event for sysmon
set_var('event', \
'LogName=' + $Channel +"\n"+ \
'SourceName=' + $SourceName +"\n"+ \
'EventCode=' + $EventID +"\n"+ \
'EventType=' + $EventType +"\n"+ \
'Type=' + 'Information' +"\n"+ \
'ComputerName=' + $Hostname +"\n"+ \
'User=' + 'NOT_TRANSLATED' +"\n"+ \
'Sid=' + $UserID +"\n"+ \
'SidType=' + '0' +"\n"+ \
'TaskCategory=' + $Category +"\n"+ \
'OpCode=' + $OpCode +"\n"+ \
'RecordNumber=' + $RecordNumber +"\n"+ \
'Keywords=' + $Keywords +"\n" \
);
# Remove all NXLog fields.
# This is necessary for emulating the Splunk proprietary format.
Drop_Fields->process();
# Add the Splunk datetime string as a "header" line for this
# multi-line event
$raw_event = get_var('timestamp_header') + get_var('event') + \
get_var('message_fields') + 'Message=' + get_var('message') +"\n";
</Exec>
</Input>
<Output Splunk_TCP_Sysmon>
Module om_tcp
ListenAddr 192.168.1.52:1517
</Output>
A sample Sysmon DNS Query Event after being forwarded to Splunk.
Events should be automatically parsed by Splunk.
File and directory-based forwarding
The only means available to the Splunk Universal Forwarder for selecting log sources to monitor is by manually defining paths to files or directories on the local host. This same technique is available with NXLog. Since NXLog is also designed to forward to other NXLog agents, this feature can be leveraged to reduce the number of open network connections to a Splunk Enterprise server when events are forwarded from a single NXLog central logging server.
In the following example, a central NXLog server receives events for all log sources within the enterprise and forwards each log source type via a TCP data input connection that has been preconfigured on the Splunk Enterprise server for that Source type.
<Extension json>
Module xm_json
</Extension>
# Receive Events from ALL Enterprise Servers
<Input syslog_in>
Module im_tcp
ListenAddr 0.0.0.0:1514
</Input>
<Input dns_audit_in>
Module im_tcp
ListenAddr 0.0.0.0:1515
</Input>
# Cache the Events to Disk in case of Splunk unavailability
<Output syslog_cache>
Module om_file
File '/opt/nxlog/var/log/cached/syslog.bin'
OutputType Binary
</Output>
<Output dns_audit_cache>
Module om_file
File '/opt/nxlog/var/log/cached/dns-audit.bin'
OutputType Binary
</Output>
# Read the Cached Events from Disk
<Input syslog_bin>
Module im_file
File '/opt/nxlog/var/log/cached/syslog.bin'
</Input>
<Input dns_audit_bin>
Module im_file
File '/opt/nxlog/var/log/cached/dns-audit.bin'
</Input>
#Forward Cached Events to Splunk
<Output splunk_syslog>
Module om_tcp
ListenAddr 192.168.1.71:1524
Exec to_json();
</Output>
<Output splunk_dns_audit>
Module om_tcp
ListenAddr 192.168.1.71:1525
Exec to_json();
</Output>
# Routes: TCP Received to Local Files
<Route syslog_tcp_to_cache>
Path syslog_in => syslog_cache
</Route>
<Route dns_audit_tcp_to_cache>
Path dns_audit_in => dns_audit_cache
</Route>
# Routes: Local Files forwarded as JSON to Splunk
<Route syslog_bin_to_splunk>
Path syslog_bin => splunk_syslog
</Route>
<Route dns_audit_bin_to_splunk>
Path dns_audit_bin => splunk_dns_audit
</Route>
Configuring Splunk
The following sections describe steps that may be required to prepare Splunk for receiving events from NXLog.
Adding a TCP or UDP data input
TCP or UDP log collection can be added from the Splunk web interface; however, TLS encryption must be configured by editing configuration files.
- Add a new data input.
  - On the Splunk web interface, go to Settings > Data inputs.
  - In the Local inputs section, for the TCP (or UDP) input type, click Add new.
  - Enter the Port on which to listen for log data (for example, port 514).
  - Fill in the remaining values, if required, and click Next.
- Configure the input settings.
  - Select the Source type appropriate for the logs to be sent. For more information, see the Sending generic structured logs and Sending specific log types for Splunk to parse sections below.
  - Choose an App context; for example, Search & Reporting (search).
  - Adjust the remaining default values, if required, and click Review.
- Review the pending changes and click Submit.
Configuring TLS collection
Follow these steps to set up TLS collection.
- In order to generate certificates, issue the following commands from the server’s console. The script will ask for a password to protect the key.

      $ mkdir /opt/splunk/etc/certs
      $ export OPENSSL_CONF=/opt/splunk/openssl/openssl.cnf
      $ /opt/splunk/bin/genRootCA.sh -d /opt/splunk/etc/certs
      $ /opt/splunk/bin/genSignedServerCert.sh -d /opt/splunk/etc/certs -n splunk -c splunk -p

- Go to the app’s folder and edit the inputs file. For the Search & Reporting app, the path is $SPLUNK_HOME/etc/apps/search/local/inputs.conf. Add [tcp-ssl] and [SSL] sections.

      inputs.conf
      [tcp-ssl://10514]
      disabled = false
      sourcetype = <optional>

      [SSL]
      serverCert = /opt/splunk/etc/certs/splunk.pem
      sslPassword = <The password provided in step 1>
      requireClientCert = false

- Edit the $SPLUNK_HOME/etc/system/local/server.conf file, adding an sslRootCAPath value to the [sslConfig] section.

      server.conf
      [sslConfig]
      sslPassword = <Automatically generated>
      sslRootCAPath = /opt/splunk/etc/certs/cacert.pem

- Finally, restart Splunk in order to apply the new configuration.

      $ $SPLUNK_HOME/bin/splunk restart splunkd

- Setup can be tested with netstat or a similar command. If everything went correctly, the following output is produced.

      $ netstat -an | grep :10514
      tcp        0      0 0.0.0.0:10514      0.0.0.0:*      LISTEN

- Copy the cacert.pem file from $SPLUNK_HOME/etc/certs to the NXLog certificate directory.
This configuration illustrates how to send a log file via a TLS-encrypted connection. The AllowUntrusted setting is required in order to accept a self-signed certificate.
<Output out>
Module om_ssl
ListenAddr 127.0.0.1:10514
CertFile %CERTDIR%/cacert.pem
AllowUntrusted TRUE
</Output>
Configuring HTTP event collection (HEC)
HTTP Event Collection can gather events, as JSON-formatted or as raw data, via HTTP/HTTPS. HEC is a stateless, high performance, token-based solution that is easy to scale with a load balancer. Furthermore, it offers token-based authentication. For more information about configuring and using Splunk HEC, see the following on Splunk Docs: Setup and use HTTP Event Collector in Splunk Web, Formatevents for HTTP Event Collector, and Input endpoint descriptions.
By default, Splunk HEC is disabled. To enable, follow these steps:
- Open Settings > Data inputs and click on the HTTP Event Collector type.
- Click the Global Settings button (in the upper-right corner).
- For All Tokens, click the Enabled button.
- Optionally, set the Default Source Type, Default Index, and Default Output Group settings.
- Check Enable SSL to require events to be sent encrypted (recommended). See Configuring TLS collection.
- Change the HTTP Port Number if required, or leave it set to the default port 8088.
- Click Save.
Once HEC is enabled, add a new token as follows:
- If not already on the HTTP Event Collector page, open Settings > Data inputs and click on the HTTP Event Collector type.
- Click New Token.
- Enter a name for the token and modify any other settings if required; then click Next.
- For the Source type, choose Automatic. The source type will be specified using an HTTP header as shown in the examples in the following sections.
- Choose an App context; for example, Search & Reporting (search).
- Adjust the remaining default values, if required, and click Review.
- Verify the information on the summary page and click Submit. The HEC token is created and its value is presented.
- The configuration can be tested with the following command (substitute the correct token):

      $ curl -k https://<host>:8088/services/collector \
          -H 'Authorization: Splunk <token>' -d '{"event":"test"}'

  If configured correctly, Splunk will respond that the test event was delivered.

      {"text":"Success","code":0}
Sending generic structured logs
NXLog can be configured to send generic structured logs to Splunk in JSON format.
Sending structured logs via HEC
Events can be sent to the HEC standard /services/collector
endpoint using a specific nested JSON format.
In this way, multiple input instances can be used to gather log data, and everything forwarded using a single output instance.
The HEC uses a JSON event format, with event data in the event
key and additional metadata sent in time
, host
, source
, sourcetype
, index
, and fields
keys.
For details about the format, see Format events for HTTP Event Collector on Splunk Docs and in particular, the Event metadata section there.
Because the source type is specified in the event metadata, it is not necessary to set the source type on Splunk or to use separate tokens for different source types.
This example shows an output instance that uses the xm_json and om_http modules to send the data to the HEC. Events are formatted specifically for the HEC standard /services/collector
endpoint.
<Extension json>
Module xm_json
</Extension>
<Extension clean_splunk_fields>
Module xm_rewrite
Keep time, host, source, sourcetype, index, fields, event
</Extension>
<Output out>
Module om_http
URL https://127.0.0.1:8088/services/collector
AddHeader Authorization: Splunk c6580856-29e8-4abf-8bcb-ee07f06c80b3
HTTPSCAFile %CERTDIR%/cacert.pem
<Exec>
# Rename event fields to what Splunk uses
if $Severity rename_field($Severity, $vendor_severity);
if $SeverityValue rename_field($SeverityValue, $severity_id);
# Convert all fields to JSON and write to $event field
$event = to_json();
# Convert $EventTime to decimal seconds since epoch UTC
$time = string(integer($EventTime));
$time =~ /^(?<sec>\d+)(?<ms>\d{6})$/;
$time = $sec + "." + $ms;
# Specify the log source type
$sourcetype = "_json";
# Add other HEC metadata fields if available in the event data
if $Hostname $host = $Hostname;
if $SourceName $source = $SourceName;
# Remove all non-metadata fields (already stored in $event)
clean_splunk_fields->process();
# Write to JSON
to_json();
</Exec>
</Output>
{
"event": {
"EventReceivedTime": "2019-10-18 19:58:19",
"SourceModuleName": "in",
"SourceModuleType": "im_file",
"SyslogFacility": "USER",
"vendor_severity": "INFO",
"severity_id": 2,
"EventTime": "2019-10-18 19:58:02",
"Hostname": "myserver2",
"ProcessID": 14533,
"SourceName": "sshd",
"Message": "Failed password for invalid user"
},
"time": "1571428682.218749",
"sourcetype": "_json",
"host": "myserver2",
"source": "sshd"
}
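The $time computation in the Exec block splits NXLog's microsecond-resolution epoch integer into seconds and a six-digit fractional part, the decimal format HEC expects in the time metadata key. The same transformation, sketched in Python for clarity (the function name is illustrative):

```python
def hec_time(epoch_microseconds: int) -> str:
    # Split an epoch value in microseconds into "seconds.microseconds",
    # mirroring the regex capture in the NXLog Exec block
    s = str(epoch_microseconds)
    return s[:-6] + "." + s[-6:]

# The value from the sample HEC event above
print(hec_time(1571428682218749))  # 1571428682.218749
```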

Sending structured logs via TCP/TLS
It is also possible to send JSON-formatted events to Splunk via TCP or TLS. To extract fields and index the event timestamps as sent by the configuration below, add a new source type with the corresponding settings:
- Open Settings > Source types.
- Find the _json source type and click Clone.
- Provide a name for the new source type, such as nxlog_json.
- Under the Advanced tab, add the following configuration values:

  Name | Value
  ---|---
  TIME_PREFIX | "time":"
  TIME_FORMAT | %s.%6N

Then select this new source type for the TCP data input, as described in Adding a TCP or UDP data input.
This configuration sets the $time
field for Splunk, converts the event data to JSON with the xm_json to_json() procedure, and forwards via TCP with the om_tcp module.
<Extension json>
Module xm_json
</Extension>
<Output out>
Module om_tcp
ListenAddr 127.0.0.1:514
<Exec>
# Convert $EventTime to decimal seconds since epoch UTC
$time = string(integer($EventTime));
$time =~ /^(?<sec>\d+)(?<ms>\d{6})$/;
$time = $sec + "." + $ms;
delete($sec);
delete($ms);
# Write to JSON
to_json();
</Exec>
</Output>
{
"EventReceivedTime": "2019-09-30T20:00:01.448973+00:00",
"SourceModuleName": "in",
"SourceModuleType": "im_file",
"SyslogFacility": "USER",
"vendor_severity": "INFO",
"severity_id": 2,
"EventTime": "2019-10-03T05:36:58.190689+00:00",
"Hostname": "myserver2",
"ProcessID": 14533,
"SourceName": "sshd",
"Message": "Failed password for invalid user",
"time": "1570081018.190689"
}
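With TIME_PREFIX and TIME_FORMAT set as above, Splunk locates the characters following "time":" in the raw event and parses them as epoch seconds with six fractional digits. A hedged Python sketch of the equivalent extraction (the regex and names are illustrative, not Splunk internals):

```python
import re

def extract_time(raw: str) -> str:
    # Find the value after the TIME_PREFIX "time":" and capture it in
    # %s.%6N form (epoch seconds, dot, six fractional digits)
    m = re.search(r'"time":\s*"(\d+\.\d{6})"', raw)
    if m is None:
        raise ValueError("no time field found")
    return m.group(1)

event = '{"Message": "Failed password for invalid user", "time": "1570081018.190689"}'
print(extract_time(event))  # 1570081018.190689
```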
Sending specific log types for Splunk to parse
Splunk implements parsing for a variety of log formats, and apps available on Splunkbase provide support for additional log formats. So in some cases it is most effective to send the raw logs and allow Splunk to do the parsing.
Forwarding Windows Event Log as XML
Windows Event Log data can be forwarded to Splunk in XML format. The Splunk Add-on for Microsoft Windows provides log source types for parsing this format.
These instructions have been tested with Splunk version 9.0.1 and the Splunk Add-on for Microsoft Windows version 8.5.0.
- Install the Splunk Add-on for Microsoft Windows. See About installing Splunk add-ons on Splunk Docs for more details.
- Configure the log source type as XmlWinEventLog.
- Optionally, add a configuration value to use the event SystemTime value as Splunk’s event _time during indexing (otherwise Splunk will fall back to using the received time). This can be added to the specific event source or to the XmlWinEventLog source type. To modify the XmlWinEventLog source type from the Splunk web interface, follow these steps:
  - Open Settings > Source types.
  - Find the XmlWinEventLog source type (uncheck Show only popular) and click Edit.
  - Open the Advanced tab and add the following configuration value:

    Name | Value
    ---|---
    EVAL-_time | strptime(SystemTime, "'%Y-%m-%dT%H:%M:%S.%9N%Z'")
- Use the im_msvistalog CaptureEventXML directive to capture the XML-formatted event data from the Event Log. Forward this value to Splunk.
This example reads events from the Security channel.
With the CaptureEventXML directive set to TRUE
, the XML event data is stored in the $EventXML field.
The contents of this field are then assigned to the $raw_event
field, which is sent to Splunk by the splunk_hec
output instance.
<Input eventxml>
Module im_msvistalog
Channel Security
CaptureEventXML TRUE
Exec $raw_event = $EventXML;
</Input>
<Output splunk_hec>
Module om_http
URL https://127.0.0.1:8088/services/collector/raw
AddHeader Authorization: Splunk c6580856-29e8-4abf-8bcb-ee07f06c80b3
</Output>
Events should be automatically parsed by Splunk.
Forwarding BSD syslog data to Splunk
Splunk can parse BSD syslog events, so in this case it is not necessary to do any additional parsing with NXLog. The source type should be set to syslog.
In this example, events in syslog format are read from file and sent to Splunk via TCP with no additional processing. Because the source type is set to syslog, Splunk automatically parses the syslog header metadata.
<Input syslog>
Module im_file
File '/var/log/messages'
</Input>
<Output splunk>
Module om_tcp
ListenAddr 10.10.1.12:514
</Output>

Sending and receiving logs to and from the Splunk Universal Forwarder
The Splunk Universal Forwarder sends data in a so-called cooked format using the Splunk-to-Splunk (S2S) protocol. However, the cooked format can be disabled for communication with third-party solutions.
In this example, NXLog reads syslog messages from /var/log/messages
and forwards them to the Splunk Universal Forwarder in JSON format.
Splunk Enterprise should be configured to accept forwarder events on port 9997.
See How do I configure a Splunk Forwarder on Linux? on the Splunk Community website for more information.
The forwarder should also be configured to send data to the index server by executing the following command:
$ sudo splunk add forward-server 10.0.0.10:9997
<Extension syslog>
Module xm_syslog
</Extension>
<Extension json>
Module xm_json
</Extension>
<Input var_log_messages>
Module im_file
File '/var/log/messages'
Exec parse_syslog();
</Input>
<Output splunk_universal_forwarder>
Module om_tcp
ListenAddr 10.0.0.22:1538
<Exec>
$Forwarder = TRUE;
to_json();
</Exec>
</Output>
This example demonstrates how to receive logs from the Splunk Universal Forwarder.
To configure Splunk to forward logs to NXLog, update the Splunk configuration file located at $SPLUNK_HOME/etc/system/local/outputs.conf
as follows:
[indexAndForward]
index = false
[tcpout]
defaultGroup = default-autolb-group
[tcpout-server://10.0.0.10:9997]
[tcpout:default-autolb-group]
disabled = false
server = 10.0.0.10:9997,10.0.0.10:9996
[tcpout-server://10.0.0.10:9996]
sendCookedData = false
And the NXLog configuration file:
<Input splunk_listen>
Module im_tcp
ListenAddr 10.0.0.10:9996
InputType Dgram (1)
</Input>
(1) The InputType common module directive needs to be set to Dgram to treat each packet as an event.