Azure (im_azure)
This module can be used to collect logs from Microsoft Azure applications. It can operate in three different modes to connect to the following data sources:
- Azure Table Storage
- Azure Blob Storage
- Azure Log Analytics workspace
To check the supported platforms, see the list of installer packages in the Available Modules chapter.
Storage setup
For Blob or Table mode, Azure web application logging and storage can be configured from the Azure Management Portal by following these steps. A command-line alternative for creating the storage account is sketched after the list.
- After logging in to the Portal, open the left portal menu, select Storage accounts, and then select Create.
- Create the new storage account, providing a storage account name, region, and redundancy type. Take note of the storage account name; you will need to specify this value in the NXLog configuration. More information on the storage settings can be found in the Microsoft documentation on how to Create a storage account.
- On the Review + create tab, click Create and wait for the storage setup to complete.
- Navigate to your app and select App Service logs.
- Select On for Application Logging (Blob).
- Configure Storage Settings corresponding to the storage account created above. More information on the configuration settings can be found in the Microsoft documentation on how to Enable diagnostic logging for apps.
- Confirm the changes by clicking Save, then restart the service. Note that it may take a while for Azure to create the table and/or blob in the storage.
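Alternatively, the storage account can be created and its access key retrieved with the Azure CLI. The following is a minimal sketch only; the resource group name, storage account name, and region are placeholders, and one of the keys returned by the last command is the value to use for the SharedKey (or deprecated AuthKey) directive.
$ az group create --name nxlog-rg --location westeurope
$ az storage account create --name nxlogstorage --resource-group nxlog-rg --location westeurope --sku Standard_LRS
$ az storage account keys list --account-name nxlogstorage --resource-group nxlog-rg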
Log Analytics workspace setup
For Analytics mode, an application needs to be registered and granted the necessary permissions to read from the Log Analytics workspace. Follow these steps to create and configure the app from the Azure Management Portal.
Register an Azure Active Directory application for NXLog:
- After logging in to the Portal, open the left portal menu, and select Azure Active Directory.
- Select App registrations and then + New registration.
- Provide an app name and select who can use the application, then click Register.
- Open the settings for your new app and take note of the Application (client) ID. You will need to specify this value for the ClientID directive.
Grant the app permission to use the Log Analytics API and create a client secret:
- From the left menu, select API permissions and then + Add a permission. Select APIs my organization uses and choose Log Analytics API.
- Select Application permissions for the type of permission required, and Data.Read for the permission.
- Click Add permissions to create the permission.
- From the left menu, select Certificates & secrets and then + New client secret. Enter a description and expiration period, then click Add.
- Take note of the Value of the new secret; you will need to specify this for the SharedKey directive.
Add the app to your Log Analytics workspace:
- Go to your Log Analytics workspace and take note of the Workspace ID. You will need to specify this value for the WorkspaceID directive.
- From the left menu, select Access control (IAM). Click + Add and select Add role assignments.
- Choose Log Analytics Reader for the role, select the application registered above, and click Save.
For more information on the configuration settings, see How to: Use the portal to create an Azure AD application in the Microsoft documentation, and the Microsoft Tech Community blog post on how to Access Azure Sentinel Log Analytics via API.
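As an alternative to parts of the portal procedure, the app registration, client secret creation, and role assignment can be performed with a single Azure CLI command. This is a sketch only; the service principal name and workspace resource ID are placeholders, the API permission grant may still need to be added as described above, and the appId, password, and tenant values in the output correspond to the ClientID, SharedKey, and TenantID directives.
$ az ad sp create-for-rbac --name nxlog-agent --role "Log Analytics Reader" --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>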
Configuration
The im_azure module accepts the following directives in addition to the common module directives.
- Mode
This mandatory directive specifies the type of service the module should connect to. It accepts one of the following values: Table, Blob, or Analytics. Each mode requires a corresponding set of mandatory directives.
Table mode
The following are mandatory directives when using Table mode. A minimal configuration sketch using them follows the list.
- AuthKey
This directive specifies the authentication key to use for connecting to the Azure Storage account.
The AuthKey directive is deprecated and will be removed from NXLog Enterprise Edition 6.0. After that, the authentication key or client secret can only be defined in the SharedKey directive.
- StorageName
This directive specifies the name of the storage account to connect to.
- TableName
This directive specifies the storage table from which to collect logs.
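The sketch below shows a Table mode input with only the mode-specific directives set, using SharedKey in place of the deprecated AuthKey. The placeholder values are assumptions; the endpoint URL and TLS settings fall back to their defaults, and a fuller working configuration is shown in the Examples section.
<Input azure_table_minimal>
Module im_azure
Mode Table
StorageName storage_name
SharedKey storage_access_key
TableName storage_table_name
</Input>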
Blob mode
The following are mandatory directives when using Blob mode. A minimal configuration sketch using them follows the list.
- AuthKey
This directive specifies the authentication key to use for connecting to the Azure Storage account.
The AuthKey directive is deprecated and will be removed from NXLog Enterprise Edition 6.0. After that, the authentication key or client secret can only be defined in the SharedKey directive.
- BlobName
This directive specifies either the name of the blob container or the path of a single blob (formatted as container/blob) from which to collect logs.
- StorageName
This directive specifies the name of the storage account to connect to.
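The sketch below shows a Blob mode input with only the mode-specific directives set, again using SharedKey in place of the deprecated AuthKey. The placeholder values are assumptions; see the Examples section for a fuller configuration.
<Input azure_blob_minimal>
Module im_azure
Mode Blob
StorageName storage_name
SharedKey storage_access_key
BlobName blob_container_name
</Input>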
Analytics mode
The following are mandatory directives when using Analytics mode. A minimal configuration sketch using them follows the list.
- AuthKey
This directive specifies the client secret to authenticate with Azure Active Directory.
The AuthKey directive is deprecated and will be removed from NXLog Enterprise Edition 6.0. After that, the authentication key or client secret can only be defined in the SharedKey directive.
- ClientID
This directive specifies the ID of the Microsoft Azure Active Directory application that will be used to authenticate with Azure Active Directory.
- TableName
This directive specifies the Log Analytics table from which to collect logs.
- TenantID
This directive specifies the ID of the Azure Active Directory tenant to connect to.
- WorkspaceID
This directive specifies the workspace ID of the Log Analytics account from which to collect logs.
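The sketch below shows an Analytics mode input with only the mode-specific directives set, using SharedKey for the client secret in place of the deprecated AuthKey. The placeholder values correspond to the IDs collected during the Log Analytics workspace setup; a fuller configuration is shown in the Examples section.
<Input azure_analytics_minimal>
Module im_azure
Mode Analytics
TenantID azure_ad_tenant_id
ClientID azure_ad_app_id
SharedKey azure_ad_app_secret
WorkspaceID workspace_id
TableName AuditLogs
</Input>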
- Address
This directive specifies the URL for connecting to the storage account. If this directive is not specified, it defaults to http://<storagename>.<table|blob>.core.windows.net or to https://api.loganalytics.io depending on the mode. If defined, the value must start with http:// or https://. HTTP is only supported for table or blob storage, and the storage account must have the Require secure transfer for REST API operations security setting disabled. Log Analytics workspaces require HTTPS. The Address directive is deprecated and will be removed from NXLog Enterprise Edition 6.0. After that, the URL can only be defined in the URL directive.
- HTTPSAllowExpired
This boolean directive specifies whether the connection should be allowed with an expired certificate. If set to TRUE, the connection will be allowed even if the remote server presents an expired certificate. The default is FALSE: the remote server must present a certificate that is not expired. This directive is only valid if HTTPSRequireCert is set to TRUE.
- HTTPSAllowUntrusted
This boolean directive specifies that the connection should be allowed regardless of the certificate verification results. If set to TRUE, the connection will be allowed with any unexpired certificate provided by a server. The default value is FALSE: the remote server must present a trusted certificate.
- HTTPBasicAuthUser
HTTP basic authorization username.
- HTTPBasicAuthPassword
HTTP basic authorization password.
HTTP basic authorization works only when both directives are set.
- HTTPSCADir
This directive specifies a path to a directory containing certificate authority (CA) certificates. These certificates will be used to verify the certificate presented by the remote server. The certificate files must be named using the OpenSSL hashed format, i.e. the hash of the certificate followed by .0, .1, etc. To find the hash of a certificate using OpenSSL:
$ openssl x509 -hash -noout -in ca.crt
For example, if the certificate hash is e2f14e4a, then the certificate filename should be e2f14e4a.0. If there is another certificate with the same hash then it should be named e2f14e4a.1, and so on.
A remote server’s self-signed certificate (which is not signed by a CA) can also be trusted by including a copy of the certificate in this directory.
The default operating system root certificate store will be used if this directive is not specified. Unix-like operating systems commonly store root certificates in /etc/ssl/certs. Windows operating systems use the Windows Certificate Store, while macOS uses the Keychain Access Application as the default certificate store. See NXLog TLS/SSL configuration in the User Guide for more information on using this directive. In addition, Microsoft’s PKI repository contains root certificates for Microsoft services.
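For example, a downloaded CA certificate can be copied into such a directory under its hashed name with a short shell sketch like the one below; the directory path and certificate filename are placeholders.
$ mkdir -p /opt/nxlog/cert/ca
$ cp azure-ca.crt /opt/nxlog/cert/ca/$(openssl x509 -hash -noout -in azure-ca.crt).0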
- HTTPSCAFile
This specifies the path of the certificate authority (CA) certificate that will be used to verify the certificate presented by the remote server. A remote server’s self-signed certificate (which is not signed by a CA) can be trusted by specifying the remote server certificate itself. In the case of certificates signed by an intermediate CA, the certificate specified must contain the complete certificate chain (certificate bundle).
- HTTPSCAThumbprint
This optional directive specifies the thumbprint of the certificate authority (CA) certificate that will be used to verify the certificate presented by the remote server. The hexadecimal fingerprint string can be copied from Windows Certificate Manager (certmgr.msc). Whitespace is automatically removed. The certificate must be added to a Windows certificate store that is accessible by NXLog. This directive is only supported on Windows and is mutually exclusive with the HTTPSCADir and HTTPSCAFile directives.
- HTTPSSearchAllCertStores
This optional boolean directive, when set to TRUE, enables the loading of all available Windows certificates into NXLog for use during remote certificate verification. Any required certificates must be added to a Windows certificate store that NXLog can access. This directive is mutually exclusive with the HTTPSCAThumbprint, HTTPSCADir, and HTTPSCAFile directives.
- HTTPSCertFile
This specifies the path of the certificate file that will be presented to the remote server during the HTTPS handshake.
- HTTPSCertKeyFile
This specifies the path of the private key file that was used to generate the certificate specified by the HTTPSCertFile directive. This is used for the HTTPS handshake.
- HTTPSCertThumbprint
This optional directive specifies the thumbprint of the certificate that will be presented to the remote server during the HTTPS handshake. The hexadecimal fingerprint string can be copied from Windows Certificate Manager (certmgr.msc). Whitespace is automatically removed. The certificate must be imported to the Local Computer\Personal certificate store in PFX format for NXLog to find it. To create a PFX file from the certificate and private key using OpenSSL:
$ openssl pkcs12 -export -out server.pfx -inkey server.key -in server.pem
This directive is only supported on Windows and is mutually exclusive with the HTTPSCertFile and HTTPSCertKeyFile directives.
The private key associated with the certificate must be exportable.
- If you generate the certificate request using Windows Certificate Manager, enable the Make private key exportable option from the certificate properties.
- If you import the certificate with the Windows Certificate Import Wizard, make sure that the Mark this key as exportable option is enabled.
- If you migrate the certificate and associated private key from one Windows machine to another, select Yes, export the private key when exporting from the source machine.
- HTTPSCRLDir
This directive specifies a path to a directory containing certificate revocation list (CRL) files. These CRL files will be used to check for certificates that were revoked and should no longer be accepted. The files must be named using the OpenSSL hashed format, i.e. the hash of the issuer followed by .r0, .r1, etc. To find the hash of the issuer of a CRL file using OpenSSL:
$ openssl crl -hash -noout -in crl.pem
For example, if the hash is e2f14e4a, then the filename should be e2f14e4a.r0. If there is another file with the same hash then it should be named e2f14e4a.r1, and so on.
- HTTPSCRLFile
This specifies the path of the certificate revocation list (CRL) which will be used to check for certificates that have been revoked and should no longer be accepted. Example to generate a CRL file using OpenSSL:
$ openssl ca -gencrl -out crl.pem
- HTTPSDHFile
This optional directive specifies a file with Diffie-Hellman parameters for the DH key exchange. These parameters can be generated with dhparam(1ssl). If this directive is not specified, default parameters will be used. See the OpenSSL Wiki for further details.
- HTTPSKeyPass
This directive specifies the passphrase of the private key specified by the HTTPSCertKeyFile directive. A passphrase is required when the private key is encrypted. Example to generate a private key with Triple DES encryption using OpenSSL:
$ openssl genrsa -des3 -out server.key 2048
This directive is not needed for passwordless private keys.
- HTTPSRequireCert
This boolean directive specifies that the remote HTTPS client must present a certificate. If set to TRUE and a certificate is not presented during the connection handshake, the connection will be refused. The default value is TRUE: each connection must use a certificate.
- HTTPSSSLCompression
This boolean directive allows you to enable data compression when sending data over the network. The compression mechanism is based on the zlib compression library. If the directive is not specified, it defaults to FALSE (compression is disabled).
Some Linux packages (for example, Debian) use the OpenSSL library provided by the OS and may not support the zlib compression mechanism. The module will emit a warning on startup if compression support is missing. The generic deb/rpm packages are bundled with a zlib-enabled libssl library.
- HTTPSSSLProtocol
This directive can be used to set the allowed SSL/TLS protocol(s). It takes a comma-separated list of values which can be any of the following: SSLv2, SSLv3, TLSv1, TLSv1.1, TLSv1.2, and TLSv1.3. By default, the TLSv1.2 and TLSv1.3 protocols are allowed. Note that the OpenSSL library shipped by Linux distributions may not support SSLv2 and SSLv3, and these will not work even if enabled with this directive.
- LocalPort
This optional directive specifies the local port number of the connection. If not specified, a random high port number will be used, which may be unsuitable for firewalled network environments.
- PollInterval
This directive specifies how frequently the module will check for new events, in seconds. If this directive is not specified, it defaults to 1 second. Fractional seconds may be specified (PollInterval 0.5 will check twice every second).
- ReadFromLast
This optional boolean directive instructs the module to only read logs that arrive after NXLog is started. This directive comes into effect if a saved position is not found, for example, on the first start, or when the SavePos directive is FALSE. When the SavePos directive is TRUE and a previously saved position is found, the module will always resume reading from the saved position. If ReadFromLast is FALSE, the module will read all logs from the beginning. This can result in a lot of messages and is usually not the expected behavior. If this directive is not specified, it defaults to TRUE.
The following matrix shows the outcome of this directive in conjunction with the SavePos directive; a configuration sketch follows the matrix.
ReadFromLast | SavePos | Saved Position | Outcome
---|---|---|---
TRUE | TRUE | No | Reads events that are logged after NXLog is started.
TRUE | TRUE | Yes | Reads events from saved position.
TRUE | FALSE | No | Reads events that are logged after NXLog is started.
TRUE | FALSE | Yes | Reads events that are logged after NXLog is started.
FALSE | TRUE | No | Reads all events.
FALSE | TRUE | Yes | Reads events from saved position.
FALSE | FALSE | No | Reads all events.
FALSE | FALSE | Yes | Reads all events.
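For example, to collect all existing events on the first run and then continue from the cached position on subsequent runs, ReadFromLast can be set to FALSE while leaving SavePos enabled. This is a sketch only; the Table mode directive values are placeholders.
<Input azure_table_history>
Module im_azure
Mode Table
StorageName storage_name
SharedKey storage_access_key
TableName storage_table_name
# Read all existing events when no saved position exists
ReadFromLast FALSE
# Resume from the cached position on subsequent runs
SavePos TRUE
</Input>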
- SavePos
If this boolean directive is set to TRUE, the position will be saved when NXLog exits. The position will be read from the cache file upon startup. The default is TRUE: the position will be saved if this directive is not specified. This directive affects the outcome of the ReadFromLast directive. The SavePos directive can be overridden by the global NoCache directive.
- URL
This directive specifies the URL for connecting to the storage account. If this directive is not specified, it defaults to http://<storagename>.<table|blob>.core.windows.net or to https://api.loganalytics.io depending on the mode. If defined, the value must start with http:// or https://. HTTP is only supported for table or blob storage, and the storage account must have the Require secure transfer for REST API operations security setting disabled. Log Analytics workspaces require HTTPS.
Examples
When in Table mode, im_azure expects a table containing a Message field. The value of this field is made available in the $raw_event field.
This configuration collects logs from a table and converts the data to JSON format using the xm_json extension module. Since the configuration uses a secure URL, the HTTPSCADir directive is specified and points to a folder containing the trusted root CA certificates on the machine. Alternatively, the HTTPSCAFile directive can be used to specify a file containing the complete certificate chain for the Azure server, or the HTTPSAllowUntrusted directive can be used to accept all certificates.
<Extension json>
Module xm_json
</Extension>
<Input azure_table>
Module im_azure
Mode Table
URL https://<storage_name>.table.core.windows.net/
HTTPSCADir /path/to/trusted/ca/cert/store
SharedKey storage_access_key
StorageName storage_name
TableName storage_table_name
Exec $Message = $raw_event; to_json();
</Input>
The following is a log record stored in an Azure table. It contains a Message field and thus conforms to im_azure's data format requirements.
PartitionKey | RowKey | Timestamp | Message
---|---|---|---
PK1 | RK1 | 2021-08-10T14:14:59.0689266Z | This is test message 1
The following is the same log message in JSON format after it was processed by NXLog.
{
"ProcessID": 0,
"ThreadID": 0,
"EventTime": "2021-08-10T16:14:59.068926+02:00",
"EventReceivedTime": "2021-08-18T16:01:59.531635+02:00",
"SourceModuleName": "azure_table",
"SourceModuleType": "im_azure",
"Message": "This is test message 1"
}
When in Blob mode, im_azure expects data to be in CSV format and contain the following header:
date,level,applicationName,instanceId,eventTickCount,eventId,pid,tid,message,activityId
This configuration collects logs from a blob container and converts the data to JSON format using the xm_json extension module. Since the configuration uses a secure URL, the HTTPSCADir directive is specified and points to a folder containing the trusted root CA certificates on the machine. Alternatively, the HTTPSCAFile directive can be used to specify a file containing the complete certificate chain for the Azure server, or the HTTPSAllowUntrusted directive can be used to accept all certificates.
<Extension json>
Module xm_json
</Extension>
<Input azure_blob>
Module im_azure
Mode Blob
URL https://<storage_name>.blob.core.windows.net/
HTTPSCADir /path/to/trusted/ca/cert/store
BlobName blob_container_name
SharedKey storage_access_key
StorageName storage_name
Exec $Message = $raw_event; to_json();
</Input>
The following is a log record conforming to the CSV data format expected by im_azure.
2021-08-17T18:44:23.369931,2,MyWebApp,003,2,405,1001,1009,This is test message 1,3131
The following is the same log record in JSON format after it was processed by NXLog.
{
"SourceName": "MyWebApp",
"SeverityValue": 4,
"Severity": "ERROR",
"ProcessID": 1001,
"ThreadID": 1009,
"EventTime": "2021-08-17T20:44:23.369931+02:00",
"EventReceivedTime": "2021-08-18T15:30:42.290532+02:00",
"SourceModuleName": "azure_blob",
"SourceModuleType": "im_azure",
"Message": "This is test message 1"
}
This configuration collects logs from an AuditLogs table in an Azure Log Analytics workspace. im_azure receives records from the Log Analytics API in JSON format, parses them into structured data, and writes each record as a list of key-value pairs to the $raw_event field. A sketch for parsing these key-value pairs back into fields follows the sample output below.
<Input azure_workspace>
Module im_azure
Mode Analytics
# Since the API uses HTTPS, SSL must be configured
HTTPSAllowUntrusted TRUE
ClientID azure_ad_app_id
SharedKey azure_ad_app_secret
TenantID azure_ad_tenant_id
WorkspaceID workspace_id
TableName AuditLogs
</Input>
The following is a record from the AuditLogs table after it was processed by NXLog.
2021-08-19 12:23:15 nxlog-server INFO TenantId="c1580b88-581a-4cd0-a5c8-a3dd46241740" SourceSystem="Azure AD" TimeGenerated="2021-08-11 18:30:30" ResourceId="/tenants/50fc51f4-477d-4a3c-8ea2-d9306b08461f/providers/Microsoft.aadiam" OperationName="Update application" OperationVersion="1.0" Category="ApplicationManagement" ResultType="" ResultSignature="None" ResultDescription="" DurationMs="0" CorrelationId="947f8943-691f-4393-833f-e64ca47dfd49" Resource="Microsoft.aadiam" ResourceGroup="Microsoft.aadiam" ResourceProvider="" Identity="" Level="4" Location="" AdditionalDetails="[{\"key\":\"User-Agent\",\"value\":\"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36 Edg/92.0.902.67\"}]" Id="Directory_947f8943-691f-4393-833f-e64ca47dfd49_H1F9O_140032957" InitiatedBy="{\"user\":{\"id\":\"f6385f8d-9c20-4235-a2df-17b76774f75f\",\"displayName\":null,\"userPrincipalName\":\"admin@example.com\",\"ipAddress\":null,\"roles\":[]}}" LoggedByService="Core Directory" Result="success" ResultReason="" TargetResources="[{\"id\":\"c118f7ea-f476-478a-9b7a-e6ad2d7b6d77\",\"displayName\":\"nxlog-agent\",\"type\":\"Application\",\"modifiedProperties\":[],\"administrativeUnits\":[]}]" AADTenantId="50fc51f4-477d-4a3c-8ea2-d9306b08461f" ActivityDisplayName="Update application" ActivityDateTime="2021-08-11 18:30:30" AADOperationType="Update" Type="AuditLogs"
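If individual fields are needed downstream, the key-value pairs written to $raw_event can be parsed back into NXLog fields, for example with the xm_kvp extension module. The following is a minimal, untested sketch; the delimiter and quote settings are assumptions based on the sample record above and may need adjusting.
<Extension kvp>
Module xm_kvp
KVDelimiter =
KVPDelimiter ' '
QuoteChar '"'
</Extension>
The parser could then be invoked from the azure_workspace input above by adding an Exec directive such as Exec kvp->parse_kvp();.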