The following procedures assume the basic installation has been successfully completed.

Fortigate Logging Configuration

I have the benefit of a FortiAnalyzer, which makes it trivial to forward UTM logs to Syslog-NG.

config system log-forward
    edit 1
        set mode forwarding
        set fwd-max-delay realtime
        set server-name "ElasticStack Syslog"
        set server-ip "192.168.86.55"
        set fwd-server-type syslog
            config device-filter
                edit 1
                    set device "xxx"
                next
            end
        set log-filter-status enable
        set log-filter-logic and
            config log-filter
                edit 1
                    set value "utm"
                next
            end
        set signature 4062638579936757096
    next
end
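
Assuming the FortiAnalyzer CLI follows the usual FortiOS convention of echoing saved configuration back with show, you can double-check the entry afterwards:

show system log-forward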

Syslog-NG

  1. Create a configuration file in the folder: /etc/syslog-ng/conf.d/
  2. Apply the following configuration items as required:
options {
    check_hostname(yes);    # validate the hostname field of incoming messages
    use_dns(no);            # don't resolve source IPs via DNS
    dns_cache(no);
    create_dirs(yes);       # create destination directories as needed
    owner(root);
    group(root);
    perm(0777);             # note: world-writable; tighten if required
    dir_owner(root);
    dir_group(root);
    dir_perm(0777);
};

####################################################################
# Create per log listener

source syslog_source {
    syslog(ip(x.x.x.x) transport("udp"));    # IETF-syslog listener; UDP defaults to port 514
};
#####################################################################
# Set filter for the logs to be received only from the required device.

filter fortianalyzer {
    netmask("x.x.x.x/32");
};

################################################################
# Create output files based on the log type

destination fortigate_logs {
    file("/var/log/fortigate/$HOST--$YEAR-$MONTH-$DAY-$HOUR.log");
};


###################################################################
# Output the logs

log {
    source(syslog_source);
    filter(fortianalyzer);
    destination(fortigate_logs);
};
  3. Restart the syslog-ng service: systemctl restart syslog-ng
  4. Confirm syslog-ng is writing log files to the destination configured in the above config file.
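
To catch typos before restarting, syslog-ng supports a syntax-only check; tailing the destination file then covers the confirmation step (the log path matches the destination configured above):

syslog-ng --syntax-only && systemctl restart syslog-ng
tail -f /var/log/fortigate/*.log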

Syslog-NG has plenty of configuration options available to capture data. For example, you can define listeners on different ports for different devices to separate the data; a rough sketch of that approach follows. This config has worked well for me though. For more configuration and troubleshooting assistance, check out the official documentation.
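
As a sketch of the per-port approach (the IPs and port numbers here are illustrative, not from my setup), each device gets its own source:

source syslog_device_a {
    syslog(ip(x.x.x.x) port(5141) transport("udp"));
};
source syslog_device_b {
    syslog(ip(x.x.x.x) port(5142) transport("udp"));
};

Each source can then be paired with its own destination in a separate log {} statement.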


Logstash

Once Syslog-NG is successfully writing log data, we can configure Logstash to read, filter, and output the data. As per the basic installation, the Logstash configuration file is split into three parts:

  • input: Define the data to be processed
  • filter: Manipulate the data to make it structured
  • output: Output the data, in our case, to ElasticSearch

The following configuration file should be saved to /etc/logstash/conf.d/. Note that multiple configuration files can exist in this folder, and they are loaded in the order they appear when performing a listing on the folder. Logstash concatenates the files, so defining a type for conditional statements is critical if more than one file exists.

input {
    file {
        path => [ "/var/log/fortigate/*.log" ]
        start_position => "beginning"
        type => "fortigate"
    }
}

filter {
    if [type] == "fortigate" {
        # Split the syslog envelope from the Fortigate message body
        grok {
            match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{GREEDYDATA:syslog_message}" }
            add_field => [ "received_at", "%{@timestamp}" ]
            add_field => [ "received_from", "%{host}" ]
        }
        # Use the syslog timestamp as the event timestamp
        date {
            match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
        }
        # Parse the Fortigate key=value pairs into individual fields
        kv {
            source => "syslog_message"
        }
        # Enrich remote IPs with location data
        geoip {
            source => "remip"
            database => "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb"
        }
        # The Fortigate "type" field clobbers the event type set in the input;
        # move it aside and restore "type" for the output conditional below
        mutate {
            rename => { "type" => "fgt_type" }
            rename => { "subtype" => "fgt_subtype" }
            add_field => { "type" => "fortigate" }
            remove_field => ["message","syslog_message"]
        }
    }
}

output {
    if [type] == "fortigate" {
        elasticsearch {
            hosts => ["http://127.0.0.1:9200"]
            manage_template => false    # the index template is managed manually (see the geoip note below)
            index => "fortigate-logs-index-%{+YYYY.MM.dd}"
        }
    }
}

Points to note:

  • Path: Note you can define multiple paths in the input section. As I have three devices providing log data, I need to store them in separate folders to make sure the sincedb is accurate. When ingesting data though, it can all pass through the same filter and index.
  • Mutate: As mentioned above, type is critical when dealing with multiple configuration files. Unfortunately, the Fortigate logs also have a type field. This means that without the mutate section effectively moving type -> fgt_type, the conditional statement in the output never fires, because the Fortigate type has overwritten the type defined in the input section. If that doesn't make sense, just trust me that you need the mutate section. I spent days trying to figure this one out.
  • kv: This filter is for Key/Value pairs, and is extremely useful for syslog messages; see the short example after this list.
  • Geoip: For this to work properly, the template mapping will need to be updated for Fortigate logs. This is covered later in this document.
  • Output => Hosts: This assumes Logstash and ElasticSearch are on the same server.
  • Official Plugin Documentation:
      • Input: https://www.elastic.co/guide/en/logstash/current/input-plugins.html
      • Filter: https://www.elastic.co/guide/en/logstash/current/filter-plugins.html
      • Output: https://www.elastic.co/guide/en/logstash/current/output-plugins.html
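
To make the kv filter concrete, here is a simplified, made-up fragment of a Fortigate message body (real messages carry many more pairs):

srcip=192.168.86.20 dstip=8.8.8.8 action=accept type=utm subtype=webfilter

The kv filter turns each pair into its own event field, so the event gains srcip, dstip, action, type and subtype fields. The last two are exactly the fields the mutate section then renames to fgt_type and fgt_subtype.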

As covered in the basic usage section, when testing your filter, I recommend the following:

  1. Stop the Logstash service
  2. Configure sincedb_path => "/dev/null" to make replaying logs easier in testing.
  3. Configure the output as stdout
  4. Run Logstash from the command line, referencing this config file only (a sample test config follows)
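
A minimal test version of the pipeline might look like this (the file name and binary path are illustrative and may differ on your install; keep the test file outside conf.d so the service never loads it):

input {
    file {
        path => [ "/var/log/fortigate/*.log" ]
        start_position => "beginning"
        sincedb_path => "/dev/null"    # don't persist read position, so logs replay on every run
        type => "fortigate"
    }
}
# ... the filter block from the main config goes here, unchanged ...
output {
    stdout { codec => rubydebug }    # print each parsed event to the console
}

Then run Logstash against the test file only:

/usr/share/logstash/bin/logstash -f ~/fortigate-test.conf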

That will help work out any issues with the filter before trying to push data to ElasticSearch. Once the Logstash filter is correct, change the output back to ElasticSearch. To confirm ElasticSearch has received the data and dynamically created the index, run the following:

$ curl -XGET http://127.0.0.1:9200/_cat/indices?v

You should see output similar to the following:

health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   fortigate-logs-index-2018.09.11 VIM-Rc86QOK59pKxtsbNaQ   5   1      16294            0      7.1mb          7.1mb
green  open   .monitoring-kibana-6-2018.09.12 1Ur0qDhrQiOXMXYnO5oLkg   1   0       8639            0      2.2mb          2.2mb
yellow open   fortigate-logs-index-2018.09.09 OHuqRdEcQNuj4HxpddM-WA   5   1       8128            0      5.7mb          5.7mb
yellow open   fortigate-logs-index-2018.09.08 JeUg01AwRa-krGJsXthYRg   5   1       5650            0        5mb            5mb
yellow open   fortigate-logs-index-2018.09.12 -q_RrXroTiCKv_TQKMrCjw   5   1      19288            0      9.4mb          9.4mb
yellow open   fortigate-logs-index-2018.09.10 8ALyLWXWQimMqfhDih0dwQ   5   1      17737            0      9.2mb          9.2mb
green  open   .monitoring-kibana-6-2018.09.11 dohK3Q9kT1CsrA98pOKqpQ   1   0       6669            0      1.8mb          1.8mb
green  open   .monitoring-kibana-6-2018.09.13 SQRDvdbZS_OpV4OksCyOiA   1   0       1754            0    559.5kb        559.5kb
green  open   .kibana                         -wdXmMqKQrSR8210_aLVYg   1   0         33            1     70.3kb         70.3kb
yellow open   fortigate-logs-index-2018.09.07 pPJJSA67QgCBZiHnE6995g   5   1      11063            0      5.7mb          5.7mb
yellow open   fortigate-logs-index-2018.09.13 9ZKExNwNQbmzSdZJagsHmw   5   1       9607            0      5.7mb          5.7mb

Note: Don't worry about the yellow health status. This is because you have a single node with only one instance of the index, so the replica shards have nowhere to be assigned.
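
If you want to see the shard-level detail behind that status, the standard cluster health API reports it:

$ curl -XGET http://127.0.0.1:9200/_cluster/health?pretty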

From here, logs should be available in ElasticSearch, and thus in Kibana, to search and visualise.