Building an ElasticStack implementation as a proof of concept for security log aggregation, covering F5 ASM, Fortigate firewalls, Event Logs, VPN logs, etc.


CentOS Server

  • Using a minimal installation of the current CentOS distribution. Version 7 at time of writing.
  • Required software will be installed through the process.
  • Configuration to suit environment.
$ sudo yum update

I also install vim, curl and wget, as I use them all throughout the process.

$ sudo yum install vim curl wget


The minimal installation includes firewalld. Open the firewall to allow SSH, Kibana and any other required ports.

  1. Stop and disable NetworkManager
$ sudo systemctl stop NetworkManager
$ sudo systemctl disable NetworkManager
  2. Confirm the firewall is running
$ sudo firewall-cmd --state
  3. Check to see if the interface is already assigned to a zone
$ sudo firewall-cmd --list-all-zones

The output will display the configuration for all the zones, including interfaces. By default my interfaces are not assigned to a zone.

  4. Add the interface to the Work zone (change the interface name to suit). This should also enable SSH.
$ sudo firewall-cmd --zone=work --change-interface=ens192 --permanent
  5. Enable the Kibana service on the Work zone
$ sudo firewall-cmd --zone=work --add-service=kibana --permanent
  6. Restart firewalld
$ sudo systemctl restart firewalld

That is my condensed procedure, derived from this excellent DigitalOcean tutorial on firewalld.


Install the current Java Runtime Environment (1.8.0_191 at time of writing). The yum search command below will help find the current version.

$ yum search java
$ sudo yum install java-1.8.0-openjdk.x86_64
$ java -version
openjdk version "1.8.0_191"
OpenJDK Runtime Environment (build 1.8.0_191-b12)
OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)


You could use rsyslog, or any syslog package that will save to a file, which I expect is all of them. I'm not terribly well versed in syslog, and the reason I chose syslog-ng is that, when I was piecing together the initial jigsaw of a configuration, I found some examples using it. Solid.

Installation instructions for the latest version of syslog-ng on CentOS, as per the syslog-ng community blog:

The Extra Packages for Enterprise Linux (EPEL) repository contains many useful packages which are not included in RHEL. A few dependencies of syslog-ng are available in this repo. You can enable it by downloading and installing an RPM package:

$ wget
$ sudo rpm -Uvh epel-release-latest-7.noarch.rpm
$ sudo yum install syslog-ng
$ sudo systemctl enable syslog-ng
$ sudo systemctl start syslog-ng
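
As a rough sketch of how syslog-ng fits in here: the snippet below listens for syslog messages from network devices over UDP and writes them to a flat file that Logstash can later read. The listening port, file name and path are assumptions for illustration, not the configuration used in the device posts.

```
# /etc/syslog-ng/conf.d/network.conf -- hypothetical example
# Listen for syslog messages from network devices on UDP 514
source s_network {
    udp(ip(0.0.0.0) port(514));
};

# Write everything received to a flat file for Logstash to pick up
destination d_network {
    file("/var/log/network/network.log");
};

log {
    source(s_network);
    destination(d_network);
};
```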

Official Documentation Here

Specific configuration details covered in individual device configuration posts.
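
As a preview of those posts, a Logstash pipeline that reads a file written by syslog-ng generally has the shape below. The path, host and index name are hypothetical placeholders.

```
# Hypothetical pipeline: read the file syslog-ng writes and ship to ElasticSearch
input {
  file {
    path => "/var/log/network/network.log"
    start_position => "beginning"
  }
}

filter {
  # Device-specific parsing (grok, kv, etc.) goes here
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]        # adjust to your ElasticSearch address
    index => "network-%{+YYYY.MM.dd}"
  }
}
```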

ElasticSearch Repo

Import the GPG Key

$ sudo rpm --import

Create a file called elasticsearch.repo in the /etc/yum.repos.d/ directory...

$ sudo vi /etc/yum.repos.d/elasticsearch.repo

... containing ...

name=Elasticsearch repository for 6.x packages
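
For reference, Elastic's installation documentation for the 6.x series defines the full repo file along these lines:

```
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
```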


At this point, it's worth noting that the ElasticStack consists of multiple components:

  • ElasticSearch: Database that stores all the data.
  • Kibana: Web based visualisation.
  • Logstash: Ingests log data and ships it to ElasticSearch.
  • Beats: Various applications for capturing data.

That is a very high level description that doesn't do each nearly enough justice, but is sufficient for the purposes of this PoC.

Install the Elastic Stack in the following order:

$ sudo yum install elasticsearch
$ sudo systemctl daemon-reload
$ sudo systemctl enable elasticsearch.service
$ sudo systemctl start elasticsearch.service
$ sudo yum install kibana
$ sudo systemctl daemon-reload
$ sudo systemctl enable kibana.service
$ sudo systemctl start kibana.service
$ sudo yum install logstash
$ sudo systemctl daemon-reload
$ sudo systemctl enable logstash.service
$ sudo systemctl start logstash.service

That is all there is to the installation.



The ElasticSearch configuration file is /etc/elasticsearch/elasticsearch.yml. As with any new software install, back up the original configuration file before making changes.

# -------------------- Cluster ---------------------
# Use a descriptive name for your cluster:
cluster.name: SecurityMetrics
# -------------------- Node ---------------------
# Use a descriptive name for the node:
node.name: elasticstack
# Add custom attributes to the node:
#node.attr.rack: r1

# -------------------- Network ---------------------
# Set the bind address to a specific IP (IPv4 or IPv6):
network.host: ""
# Set a custom port for HTTP:
http.port: 9200
  • I set the node.name to match the server host name.
  • I set the network.host to the IP address of the server.
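
Putting that together, the non-default lines in elasticsearch.yml end up looking like this (the IP address is a placeholder for your own server's address):

```yaml
cluster.name: SecurityMetrics
node.name: elasticstack
network.host: "192.0.2.10"   # placeholder -- use the server's IP
http.port: 9200
```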

After updating the configuration file, restart ElasticSearch.

$ sudo systemctl restart elasticsearch.service

Test ElasticSearch

$ curl  

Output should look like this:

{
  "name" : "D4BE5fS",
  "cluster_name" : "SecurityMetrics",
  "cluster_uuid" : "KjPBCFIZTYyuECNWtiZ34w",
  "version" : {
    "number" : "6.5.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "816e6f6",
    "build_date" : "2018-11-09T18:58:36.352602Z",
    "build_snapshot" : false,
    "lucene_version" : "7.5.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}


The Kibana configuration file is /etc/kibana/kibana.yml. Again, back up the original configuration file before making changes.

Update the following configuration items as required:

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind.
# IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines
# will not be able to connect. To allow connections from remote users,
# set this parameter to a non-loopback address.
server.host: ""

# The Kibana server's name. This is used for display purposes.
server.name: "SecurityMetrics"

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: ""

# Time in milliseconds to wait for responses from the back end or 
# Elasticsearch. This value # must be a positive integer.
elasticsearch.requestTimeout: 90000
  • You can change the server port to anything you like; just remember to modify the firewall too, if it is still enabled.
  • The server.host address will be used to connect to Kibana via a web browser.
  • The elasticsearch.url needs to match the network.host and http.port defined in the elasticsearch config above.
  • I increased elasticsearch.requestTimeout from 30000 to 90000 in test, as the VM I'm running has slow I/O.
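
For completeness, the changed kibana.yml lines end up along these lines (addresses are placeholders):

```yaml
server.port: 5601
server.host: "192.0.2.10"                    # placeholder -- the address you browse to
server.name: "SecurityMetrics"
elasticsearch.url: "http://192.0.2.10:9200"  # must match the elasticsearch config
elasticsearch.requestTimeout: 90000
```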

After updating the configuration file, restart Kibana.

$ sudo systemctl restart kibana.service

Test Kibana

To test Kibana, browse to the server's address, using the port that matches the server.port defined in the kibana config above.

You should see the Welcome to Kibana page.


I've not bothered with the sample data in the past, but it's a good way to familiarise yourself with Kibana.


The Logstash configuration file is /etc/logstash/logstash.yml. Again, back up the original configuration file before making changes.

# Settings file in YAML
# ------------  Node identity ------------
# Use a descriptive name for the node:
node.name: elasticsearch
# If omitted the node name will default to the machine's host name
# ------------ Pipeline Configuration Settings --------------
# Periodically check if the configuration has changed and reload the pipeline
# This can also be triggered manually through the SIGHUP signal
config.reload.automatic: true
# How often to check if the pipeline configuration has changed (in seconds)
config.reload.interval: 3s
# ------------ Debugging Settings --------------
# Options for log.level:
#   * fatal
#   * error
#   * warn
#   * info (default)
#   * debug
#   * trace
# log.level: info
path.logs: /var/log/logstash

There are a lot of tuning options available in this config file. For the PoC purposes, I only change those above.

Test Logstash

  1. Stop the Logstash service
$ systemctl stop logstash.service
  2. Create a file called simple.conf
$ vim ~/simple.conf
  3. Add the following:
input { stdin { } }
output {
    elasticsearch { hosts => [""] }
    stdout { codec => rubydebug }
}
  4. Run the following command
$ /usr/share/logstash/bin/logstash -f ~/simple.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/ Using default config which logs errors to the console
[INFO ] 2018-11-20 20:37:23.700 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ] 2018-11-20 20:37:23.708 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[WARN ] 2018-11-20 20:37:24.044 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2018-11-20 20:37:24.055 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.5.0"}
[INFO ] 2018-11-20 20:37:24.082 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"388281f0-b918-4946-9d6a-778e7edcaa8c", :path=>"/usr/share/logstash/data/uuid"}
[INFO ] 2018-11-20 20:37:27.260 [Converge PipelineAction::Create<main>] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2018-11-20 20:37:27.646 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[]}}
[INFO ] 2018-11-20 20:37:27.656 [[main]-pipeline-manager] elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>, :path=>"/"}
[WARN ] 2018-11-20 20:37:27.896 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>""}
[INFO ] 2018-11-20 20:37:28.069 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>6}
[WARN ] 2018-11-20 20:37:28.073 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[INFO ] 2018-11-20 20:37:28.099 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//"]}
[INFO ] 2018-11-20 20:37:28.118 [Ruby-0-Thread-5: :1] elasticsearch - Using mapping template from {:path=>nil}
[INFO ] 2018-11-20 20:37:28.141 [Ruby-0-Thread-5: :1] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[INFO ] 2018-11-20 20:37:28.188 [Converge PipelineAction::Create<main>] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x31c1db5f run>"}
The stdin plugin is now waiting for input:
[INFO ] 2018-11-20 20:37:28.213 [Ruby-0-Thread-5: :1] elasticsearch - Installing elasticsearch template to _template/logstash
[INFO ] 2018-11-20 20:37:28.243 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2018-11-20 20:37:28.473 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
  5. Once you see the output The stdin plugin is now waiting for input:, type Hello World (or something far more interesting), and you should then see the following output, which is a response from ElasticSearch.
{
       "message" => "Hello World",
      "@version" => "1",
    "@timestamp" => 2018-11-20T09:38:38.520Z,
          "host" => "elasticstack"
}

You can also query ElasticSearch to confirm Logstash shipped Hello World properly.

$ curl -XGET '*/_search?pretty'
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [
        "_index" : "logstash-2018.11.20",
        "_type" : "doc",
        "_id" : "OWF8MGcBQay7eXkoaARV",
        "_score" : 1.0,
        "_source" : {
          "message" : "Hello World",
          "@version" : "1",
          "@timestamp" : "2018-11-20T09:38:38.520Z",
          "host" : "elasticstack"

Assuming everything has worked to here, you can also view the event in Kibana.

  1. Go to the Kibana web interface
  2. Go to Management
  3. Click Index Patterns
  4. Click Create index pattern
  5. Type logstash* in Index pattern
  6. Click Next step
  7. Select @timestamp in Time Filter field name
  8. Click Create index pattern
  9. Go to Discover

You should see your Hello World event in the discover page. If not, adjust the time frame in the top right corner appropriately.


For the purposes of this PoC, installation is complete.