The following procedures assume the basic installation has been completed successfully.

F5 Logging Profile

  1. Create a new ASM logging profile: Security -> Event Logs -> Logging Profiles
  2. Apply the following settings:
security log profile Prod_ES_log_profile {
    application {
        Log_ElasticStack {
            filter {
                request-type {
                    values { illegal-including-staged-signatures }
                }
                search-all { }
            }
            format {
                field-delimiter "#"
                fields {
                    ip_client
                    geo_location
                    ip_address_intelligence
                    src_port
                    dest_ip
                    dest_port
                    protocol
                    method
                    uri
                    x_forwarded_for_header_value
                    request_status
                    support_id
                    session_id
                    username
                    violations
                    violation_rating
                    attack_type
                    query_string
                    policy_name
                    sig_ids
                    sig_names
                    sig_set_names
                    severity
                    request
                    violation_details
                }
            }
            local-storage disabled
            logic-operation and
            maximum-entry-length 64k
            protocol udp
            remote-storage remote
            servers {
                192.168.86.55:514 { }
            }
        }
    }
}
  • The above options can also be configured in the GUI: Security -> Event Logs -> Logging Profiles
  • Other data can be added to the logging profile; the Logstash filter below will need to be updated accordingly.
  • The servers definition is the location of Syslog-NG, which for this PoC is the same server as the Elastic Stack.
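If you prefer the CLI, a snippet like the one above can be written to a file and merged into the running configuration with tmsh. A minimal sketch, assuming the snippet is saved to /var/tmp/asm_log_profile.conf (the path is arbitrary):

$ tmsh load /sys config merge file /var/tmp/asm_log_profile.conf
$ tmsh list security log profile Prod_ES_log_profile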
  3. Apply the logging profile to the VIP with the traffic you want logged: Local Traffic -> Virtual Servers: Virtual Server List -> Your Virtual Server -> Security -> Policy Settings -> Log Profile

Syslog-NG

  1. Create a configuration file in the folder: /etc/syslog-ng/conf.d/
  2. Apply the following configuration items as required:
options {
    check_hostname(yes);
    use_dns(no);          # do not resolve source IPs via DNS
    dns_cache(no);
    create_dirs(yes);     # create destination directories automatically
    owner(root);
    group(root);
    perm(0777);           # permissive for this PoC; tighten for production
    dir_owner(root);
    dir_group(root);
    dir_perm(0777);
};

####################################################################
# Create per log listener

source syslog_source {
    syslog(ip(x.x.x.x) transport("udp"));
};

#####################################################################
# Set filter for the logs to be received only from the required device(s).

filter f5 {
    netmask("x.x.x.x/32") or
    netmask("x.x.x.x/32") or
    netmask("x.x.x.x/32") or
    netmask("x.x.x.x/32");
};

################################################################
# Create output files based on the log type

destination asm_logs_file {
    file("/var/log/bigip/asm/$HOST--$YEAR-$MONTH-$DAY-$HOUR.log");
};

###################################################################
#Output the logs

log {
    source(syslog_source);
    filter(f5);
    destination(asm_logs_file);
};
  • The filter f5 entries are the IP addresses of the F5 devices sending logs. They are used to differentiate log files from other devices in the output section, allowing a separate location on disk per device, which makes it easier to separate devices in Logstash if using different indexes.
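Before restarting, the configuration can be checked for syntax errors; --syntax-only parses the config without starting the daemon:

$ syslog-ng --syntax-only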
  3. Restart the syslog-ng service:
$ systemctl restart syslog-ng
  4. Confirm syslog-ng is writing log files to the destination configured in the above config file.
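For example, list the output directory and tail the current file; file names follow the $HOST--$YEAR-$MONTH-$DAY-$HOUR.log pattern defined in the destination (the host name and timestamp below are illustrative):

$ ls -l /var/log/bigip/asm/
$ tail -f /var/log/bigip/asm/bigip01--2018-09-13-10.log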

Syslog-NG has plenty of configuration options available to capture data. For example, you can define listeners on different ports for different devices to separate data. This configuration has worked well for me. For more configuration and troubleshooting assistance, check out the official documentation.
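As a sketch, an additional listener for a different device class might look like the following; the port number 5515 is arbitrary:

# Hypothetical second listener on a non-standard port for other devices
source syslog_source_other {
    syslog(ip(x.x.x.x) port(5515) transport("udp"));
};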

Logstash

Once Syslog-NG is successfully writing log data, we can configure Logstash to read, filter, and output the data. As per the basic installation, the Logstash configuration file is split into three parts:

  • input: Define the data to be processed
  • filter: Manipulate the data to make it structured
  • output: Output the data, in our case, to Elasticsearch

The following configuration file should be saved to /etc/logstash/conf.d/. Note that multiple configuration files can exist in this folder; they are loaded in the order they appear in a listing of the folder. Logstash concatenates the files, so defining a type for conditional statements is critical if more than one configuration exists.
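The pipeline below uses type "asmlogs" for this purpose. As a sketch, a hypothetical second file (say, 20-other.conf; the name and path are assumptions) would guard its own plugins with a different type so the two pipelines do not interfere:

input {
    file {
        path => "/var/log/bigip/other/*.log"
        type => "otherlogs"
    }
}

filter {
    if [type] == "otherlogs" {
        # plugins here only touch events from the second input
        mutate { add_tag => ["other"] }
    }
}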

input {
    file {
        path => "/var/log/bigip/asm/*.log"
        start_position => "beginning"
        type => "asmlogs"
        codec => plain {
            charset => "ISO-8859-1"
        }
    }
}

filter {
    if [type] == "asmlogs" {
        mutate {
            gsub => ["message","\"",""]
        }
        csv {
            separator => "#"
            columns => [
                "header",
                "geo_location",
                "ip_address_intelligence",
                "src_port",
                "dest_ip",
                "dest_port",
                "protocol",
                "method",
                "uri",
                "x_forwarded_for_header_value",
                "request_status",
                "support_id",
                "session_id",
                "username",
                "violations",
                "violation_rating",
                "attack_type",
                "query_string",
                "policy_name",
                "sig_ids",
                "sig_names",
                "sig_set_names",
                "severity",
                "request",
                "violation_details"
            ]
        }
        grok {
            match => { "header" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} ASM:%{IP:source_ip}" }
        }
        geoip {
            source => "source_ip"
            database => "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb"
        }
        mutate {
            remove_field => [ "message", "header" ]
        }
        mutate {
            gsub => [
                "sig_set_names", "},{", "}#{",
                "policy_name", "\/Common\/", ""
            ]
        }
        date {
            match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
        }
    }
}

output {
    if [type] == "asmlogs" {
        elasticsearch {
            hosts => ["http://192.168.86.55:9200"]
            manage_template => false
            index => "asm-index-%{+YYYY.MM.dd}"
        }
    }
}
  • Path: This is the path configured in Syslog-NG for ASM logs.
  • Charset: UTF-8 is the default encoding, but characters outside UTF-8 occasionally appear in ASM logs; this setting resolves the issue.
  • Geoip: For this to work properly, the mapping template will need to be updated for ASM logs.
  • Output => Hosts: This assumes Logstash and Elasticsearch are on the same server.
  • Elasticsearch will automatically create the index asm-index-*
  • Elasticsearch will also automatically create a mapping for the data. The mapping may need updating via an index template if using GeoIP data in Kibana (see the sketch below).
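A minimal sketch of such a template, assuming Elasticsearch 6.x (as in the index listing further below) and Logstash's default document type of doc; the template name asm_geoip is arbitrary, and it only applies to indexes created after it is added:

$ curl -XPUT "http://192.168.86.55:9200/_template/asm_geoip" -H 'Content-Type: application/json' -d '
{
  "index_patterns": ["asm-index-*"],
  "mappings": {
    "doc": {
      "properties": {
        "geoip": {
          "properties": {
            "location": { "type": "geo_point" }
          }
        }
      }
    }
  }
}'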

Official Plug-in Documentation:

Plugin   URL
Input    https://www.elastic.co/guide/en/logstash/current/input-plugins.html
Filter   https://www.elastic.co/guide/en/logstash/current/filter-plugins.html
Output   https://www.elastic.co/guide/en/logstash/current/output-plugins.html

As covered in basic usage, when testing your filter, I recommend the following:

  1. Stop the Logstash service
  2. Configure sincedb_path => "/dev/null" to make replaying logs easier during testing
  3. Configure the output as stdout
  4. Run Logstash from the command line, referencing this config file only (see the sketch below)
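A minimal sketch of such a test file (the file name and paths are assumptions); the filter section is the same as above, only the input gains sincedb_path and the output is swapped for stdout with the rubydebug codec:

input {
    file {
        path => "/var/log/bigip/asm/*.log"
        start_position => "beginning"
        sincedb_path => "/dev/null"    # re-read the files on every run
        type => "asmlogs"
    }
}

# ... same filter section as above ...

output {
    stdout { codec => rubydebug }
}

Run it directly, pointing at the test file only:

$ /usr/share/logstash/bin/logstash -f /etc/logstash/test-asm.conf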

More details on the process are covered in basic usage.

That will help work out any issues with the filter before trying to push data to Elasticsearch. Once the Logstash filter is correct, change the output to Elasticsearch. To confirm Elasticsearch has received the data and dynamically created the index, run the following:

$ curl -XGET http://192.168.86.55:9200/_cat/indices?v
health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   asm-index-2018.08.27            zUoD1vwOSLuTSuCJYbYFaQ   5   1        936            0      1.9mb          1.9mb
yellow open   asm-index-2018.08.28            S7Gd9WvdRTeJ3BWbgMJKvQ   5   1        594            0        1mb            1mb
green  open   .monitoring-kibana-6-2018.09.12 1Ur0qDhrQiOXMXYnO5oLkg   1   0       8639            0      2.2mb          2.2mb
yellow open   asm-index-2018.09.12            NY_veMElT_6_aWYaZY-_XQ   5   1        402            0      1.2mb          1.2mb
yellow open   asm-index-2018.09.04            iesSjuGXTAug7ik-IWhcbg   5   1        378            0      807kb          807kb
yellow open   asm-index-2018.09.13            AIRS8PrxRouCbFULJXF_nQ   5   1         60            0      1.1mb          1.1mb
yellow open   asm-index-2018.09.06            rOYc_CRvTEatwCTiYh2nxg   5   1        386            0    931.7kb        931.7kb
yellow open   asm-index-2018.08.31            F47SJ60jQpiR7VMOqp-GlA   5   1        428            0      1.7mb          1.7mb
yellow open   asm-index-2018.09.08            zltKJXfUR8uaXq1tnfJRUw   5   1        515            0        1mb            1mb
green  open   .monitoring-kibana-6-2018.09.11 dohK3Q9kT1CsrA98pOKqpQ   1   0       6669            0      1.8mb          1.8mb
yellow open   asm-index-2018.08.29            ahziP221R6WZjuFP65Ibfg   5   1        348            0      1.1mb          1.1mb
yellow open   asm-index-2018.09.10            LKMQJv1zRMaiXsqoNN7Qjg   5   1        445            0        1mb            1mb
green  open   .monitoring-kibana-6-2018.09.13 SQRDvdbZS_OpV4OksCyOiA   1   0       1342            0    460.9kb        460.9kb
yellow open   asm-index-2018.08.30            42SXHlIuR7i_TL9RFxlJVg   5   1        211            0        1mb            1mb
green  open   .kibana                         -wdXmMqKQrSR8210_aLVYg   1   0         32            6    107.3kb        107.3kb
yellow open   asm-index-2018.09.05            HIO3l2XFSxqzXx9JmqtxYQ   5   1        443            0        1mb            1mb
yellow open   asm-index-2018.09.03            9YfIMRcVTsaBqmXI5pPf6g   5   1        151            0      484kb          484kb
yellow open   asm-index-2018.09.02            w0ITb3qeS1KDHPAS61ThoA   5   1      12185            0       19mb           19mb
yellow open   asm-index-2018.09.01            Srrp1qh3Rl-rcfmicSy0Ug   5   1       7245            0     10.6mb         10.6mb
yellow open   asm-index-2018.09.09            9YbyI8hJTd6SNGzt9heTwQ   5   1        415            0    962.4kb        962.4kb
yellow open   asm-index-2018.09.11            aOLttxpdSUWUZLsSkT1INg   5   1        355            0    992.7kb        992.7kb
yellow open   asm-index-2018.09.07            jmwMTu9nSPKFtYVikesjvQ   5   1        257            0    665.5kb        665.5kb

Note: The yellow health status is because there is only a single node with one instance of the index.

From here, logs should be available in Elasticsearch, and therefore Kibana, to search and visualise.
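To spot-check a document directly before heading to Kibana:

$ curl -XGET "http://192.168.86.55:9200/asm-index-*/_search?size=1&pretty"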