Elasticsearch and Kibana
Elasticsearch is a search engine and document database that is commonly used to store logging data. Kibana is a popular user interface and querying front-end for Elasticsearch. Kibana is often used with the Logstash data collection engine—together forming the ELK stack (Elasticsearch, Logstash, and Kibana).
However, Logstash is not actually required to load data into Elasticsearch. NXLog can do this as well, and offers several advantages over Logstash—this is the KEN stack (Kibana, Elasticsearch, and NXLog).
- Because Logstash is written in Ruby and requires Java, it has substantial system resource requirements. NXLog has a small resource footprint and is recommended by many ELK users as the log collector of choice for Windows and Linux.
- Due to the Java dependency, Logstash requires system administrators to deploy the Java Runtime Environment on their production servers and keep up with Java security updates. NXLog does not require Java.
- Elastic's Logstash wmi input plugin creates events based on the results of a WMI query, which incurs a significant performance penalty. NXLog uses the native Windows Event Log API to capture Windows events more efficiently.
The following sections explain how to configure NXLog to:
- send logs directly to Elasticsearch, replacing Logstash; or
- forward collected logs to Logstash, acting as a log collector for Logstash.
Sending logs to Elasticsearch
Consult the Elasticsearch Reference and the Kibana User Guide for more information about installing and configuring Elasticsearch and Kibana. For NXLog Enterprise Edition 3.x, see Using Elasticsearch with NXLog Enterprise Edition 3.x in the Reference Manual.
- Configure NXLog.
Example 1. Using om_elasticsearch

The om_elasticsearch module is only available in NXLog Enterprise Edition. Because it sends data in batches, it reduces the effect of the latency inherent in HTTP responses, allowing the Elasticsearch server to process data much more quickly (10,000 EPS or more on low-end hardware).
nxlog.conf

<Extension _json>
    Module        xm_json
    DateFormat    YYYY-MM-DDThh:mm:ss.sUTC
</Extension>

<Output out>
    Module    om_elasticsearch
    URL       http://localhost:9200/_bulk

    # Create an index daily
    Index     strftime($EventTime, "nxlog-%Y%m%d")

    # Use the following if you do not have $EventTime set
    #Index    strftime($EventReceivedTime, "nxlog-%Y%m%d")

    Exec      to_json();
</Output>
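To make the batching concrete, the following sketch builds the kind of newline-delimited payload that the Elasticsearch Bulk API endpoint (`/_bulk`) expects: an action line naming the daily index, followed by the document source, one pair per event. The event fields here are hypothetical, and no server is contacted.

```python
import json
from datetime import datetime, timezone

def build_bulk_payload(events, prefix="nxlog"):
    """Build an Elasticsearch Bulk API body: for each document, an
    action line followed by the document source, newline-delimited."""
    lines = []
    for event in events:
        # Daily index name, mirroring strftime($EventTime, "nxlog-%Y%m%d")
        index = event["EventTime"].strftime(f"{prefix}-%Y%m%d")
        lines.append(json.dumps({"index": {"_index": index}}))
        doc = dict(event, EventTime=event["EventTime"].isoformat())
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # a Bulk body must end with a newline

# Hypothetical event record for illustration
events = [{"EventTime": datetime(2024, 1, 15, tzinfo=timezone.utc),
           "Message": "test event"}]
payload = build_bulk_payload(events)
```

Sending many events per request this way is what lets the server amortize HTTP latency across a whole batch.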
Example 2. Using om_http

For NXLog Community Edition, the om_http module can be used instead to send logs to Elasticsearch. Because it sends a request to the Elasticsearch HTTP REST API for each event, the maximum logging throughput is limited by HTTP request and response latency. Therefore, this method is suitable only for low-volume logging scenarios.
nxlog.conf

<Extension _json>
    Module        xm_json
    DateFormat    YYYY-MM-DDThh:mm:ss.sUTC
</Extension>

<Output out>
    Module         om_http
    URL            http://localhost:9200
    ContentType    application/json
    <Exec>
        set_http_request_path(strftime($EventTime, "/nxlog-%Y%m%d/" + $SourceModuleName));
        rename_field("timestamp", "@timestamp");
        to_json();
    </Exec>
</Output>
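As a rough illustration of what this configuration produces per event, the sketch below builds the request path and JSON body for a single-document index request, mirroring the set_http_request_path(), rename_field(), and to_json() steps. The event record and its field values are made-up assumptions; no HTTP request is sent.

```python
import json
from datetime import datetime

def build_index_request(event):
    """Build the path and body of one per-event index request,
    mirroring the Exec block in the om_http configuration above."""
    # /nxlog-YYYYmmdd/<SourceModuleName>, as strftime() produces in the config
    path = event["EventTime"].strftime("/nxlog-%Y%m%d/") + event["SourceModuleName"]
    doc = dict(event)
    doc["EventTime"] = doc["EventTime"].isoformat()
    if "timestamp" in doc:               # rename_field("timestamp", "@timestamp")
        doc["@timestamp"] = doc.pop("timestamp")
    return path, json.dumps(doc)

# Hypothetical event record; field values are illustrative only
event = {"EventTime": datetime(2024, 1, 15, 12, 0, 0),
         "SourceModuleName": "eventlog",
         "timestamp": "2024-01-15T12:00:00",
         "Message": "test event"}
path, body = build_index_request(event)
```

Since each event becomes its own request/response round trip, throughput is bounded by HTTP latency, which is why om_elasticsearch's batching is preferred at higher volumes.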
Elasticsearch does its own internal parsing of timestamp fields. To avoid data loss, it is advised to specify a date format that can be parsed by Elasticsearch. If using functions or procedures provided by the xm_json module, the DateFormat module directive should be specified; otherwise, the DateFormat global directive can be used. For more information, refer to the Elasticsearch documentation on the Date field type.
- Restart NXLog and make sure the event sources are sending data. This can be checked with curl -X GET 'localhost:9200/_cat/indices?v&pretty'. There should be an index matching nxlog* and its docs.count counter should be increasing.
- Configure the appropriate index pattern for Kibana.
  - Open Management on the left panel and click on Index Patterns.
  - Set the Index pattern to nxlog*. A matching index should be listed. Click the > Next step button.
  - Set the Time Filter field name selector to EventTime (or EventReceivedTime if the $EventTime field is not set by the input module). Click the Create index pattern button.
- Test that the NXLog and Elasticsearch/Kibana configuration is working by opening Discover on the left panel.
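The curl check above returns a whitespace-aligned table. The sketch below shows one way to pull the docs.count values for nxlog indices out of that output; the sample text stands in for real command output, and its values are made up.

```python
def nxlog_doc_counts(cat_indices_output):
    """Parse the header-aligned table returned by GET /_cat/indices?v
    and return {index_name: docs.count} for nxlog* indices."""
    lines = cat_indices_output.strip().splitlines()
    header = lines[0].split()
    counts = {}
    for line in lines[1:]:
        # Column values contain no spaces, so positional split lines up
        row = dict(zip(header, line.split()))
        if row["index"].startswith("nxlog"):
            counts[row["index"]] = int(row["docs.count"])
    return counts

# Sample output in the shape _cat/indices?v produces (values invented)
sample = """\
health status index          uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open   nxlog-20240115 abcd 1   1   12345      0            4.1mb      4.1mb
green  open   .kibana        efgh 1   0   42         0            10kb       10kb
"""
counts = nxlog_doc_counts(sample)
```

Running the check twice and comparing the counts is a quick way to confirm the docs.count counter is actually increasing.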
Forwarding logs to Logstash
NXLog can be configured to act as a log collector, forwarding logs to Logstash. Logs can be sent in various formats such as JSON and syslog, and over different transport protocols including TCP, UDP, and HTTP(S).
The example below provides a simple configuration for sending logs to Logstash in JSON format over TCP. Further information and examples of how NXLog can send logs to Logstash can be found in the documentation on integrating with Logstash.
- Set up a configuration on the Logstash server to process incoming event data from NXLog.
logstash.conf

input {
  tcp {
    codec => json_lines { charset => CP1252 }
    port => "3515"
    tags => [ "tcpjson" ]
  }
}
filter {
  date {
    locale => "en"
    timezone => "Etc/GMT"
    match => [ "EventTime", "YYYY-MM-dd HH:mm:ss" ]
  }
}
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
The json codec in Logstash sometimes fails to parse JSON properly: it concatenates more than one JSON record into a single event. Use the json_lines codec instead. Although the im_msvistalog module converts data to UTF-8, Logstash seems to have trouble parsing that data; setting charset => CP1252 seems to help.
- Configure NXLog.
nxlog.conf

<Extension _json>
    Module    xm_json
</Extension>

<Output out>
    Module    om_tcp
    Host      10.1.1.1
    Port      3515
    Exec      to_json();
</Output>
- Restart NXLog.
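The flow above can be emulated end to end in a few lines of Python: a stub standing in for Logstash's tcp input with json_lines framing, and a sender emitting what om_tcp with to_json() produces, namely one JSON object per line. The event fields are hypothetical, and an ephemeral local port is used instead of 3515.

```python
import json
import socket
import threading

received = []

def logstash_stub(server_sock):
    """Minimal stand-in for the Logstash tcp input: read until the peer
    closes, then parse each newline-delimited JSON record (json_lines)."""
    conn, _ = server_sock.accept()
    with conn:
        buf = b""
        while True:
            chunk = conn.recv(4096)
            if not chunk:
                break
            buf += chunk
    for line in buf.decode().splitlines():
        received.append(json.loads(line))

# Listen on an ephemeral local port (a real setup would use Logstash on 3515)
server = socket.create_server(("127.0.0.1", 0))
port = server.getsockname()[1]
t = threading.Thread(target=logstash_stub, args=(server,))
t.start()

# What NXLog's om_tcp + to_json() emit: one JSON object per line
event = {"EventTime": "2024-01-15 12:00:00", "Message": "test"}
with socket.create_connection(("127.0.0.1", port)) as sock:
    sock.sendall((json.dumps(event) + "\n").encode())

t.join()
server.close()
```

The newline framing is the important part: it is what allows the json_lines codec to split the stream back into individual events reliably.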