Security Onion

Session data is the summary of the communication between two network devices. Also known as a conversation or a flow, this summary data is one of the most flexible and useful forms of NSM data. If you were to consider full packet capture equivalent to having a recording of every phone conversation someone makes from their mobile phone, then you might consider session data to be equivalent to having a copy of the call log on the bill associated with that mobile phone. Session data doesn’t give you the “What”, but it does give you the “Who, Where, and When”.

When session or flow records are generated, at minimum, the record will usually include the standard 5-tuple: the source IP address and port, the destination IP address and port, and the protocol being used. In addition to this, session data will also usually provide a timestamp of when the communication began and ended, and the amount of data transferred between the two devices. The various forms of session data such as NetFlow v5/v9, IPFIX, and jFlow can include other information, but these fields are generally common across all implementations of session data.

There are a few different applications that have the ability to collect flow data and provide tools for the efficient analysis of that data. My personal favorite is the System for Internet-Level Knowledge (SiLK), from the folks at CERT NetSA (http://www.cert.org/netsa/). In Applied NSM we use SiLK pretty extensively.
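To give a sense of what these records look like, SiLK’s rwcut tool (installed later in this guide) can display exactly the fields described above. This is just a sketch of such a query once collection is running:

rwfilter --proto=0-255 --type=all --pass=stdout | rwcut --fields=sip,sport,dip,dport,protocol,stime,etime,bytes,packets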

One of the best ways to learn about different NSM technologies is the Security Onion distribution, which is an Ubuntu-based distribution designed for quick deployment of all sorts of NSM collection, detection, and analysis technologies. This includes popular tools like Snort, Suricata, Sguil, Squert, Snorby, Bro, NetworkMiner, Xplico, and more. Unfortunately, SiLK doesn’t currently come pre-packaged with Security Onion. The purpose of this guide is to describe how you can get SiLK up and running on a standalone Security Onion installation.

 

Preparation

To follow along with this guide, you should have already installed and configured Security Onion, and ensured that NSM services are already running. This guide will assume you’ve deployed a standalone installation. If you need help installing Security Onion, this installation guide should help: https://code.google.com/p/security-onion/wiki/Installation.

For the purposes of this article, we will assume this installation has access to two network interfaces. The interface at eth0 is used for management, and the eth1 interface is used for data collection and monitoring.

Now is a good time to go ahead and download the tools that will be needed. Do this by visiting http://tools.netsa.cert.org/index.html# and downloading the following:

  • SiLK (3.7.2 as of this writing)
  • YAF (2.4.0 as of this writing)
  • Fixbuf (1.3.0 as of this writing)

Alternatively, you can download the packages directly from the command line with these commands:

wget http://tools.netsa.cert.org/releases/silk-3.7.2.tar.gz
wget http://tools.netsa.cert.org/releases/yaf-2.4.0.tar.gz
wget http://tools.netsa.cert.org/releases/libfixbuf-1.3.0.tar.gz

This guide reflects the current stable release of each tool at the time of writing. Update the version numbers in the URLs above as needed when using wget so that you download the most recent versions.

Installation

The analysis of flow data requires a flow generator and a collector. So, before we can begin collecting and analyzing session data with SiLK we need to ensure that we have data to collect. In this case, we will be installing the YAF flow generation utility. YAF generates IPFIX flow data, which is quite flexible. Collection will be handled by the rwflowpack component of SiLK, and analysis will be provided through the SiLK rwtool suite.


Figure 1: The SiLK Workflow

 

To install these tools, you will need a couple of prerequisites. You can install these in one fell swoop by running this command:

sudo apt-get install glib2.0 libglib2.0-dev libpcap-dev g++ python-dev

With this done, you can install fixbuf using these steps:

1. Extract the archive and go to the newly extracted folder

tar -xvzf libfixbuf-1.3.0.tar.gz

cd libfixbuf-1.3.0/

2. Configure, make, and install the package

./configure
make
sudo make install

 

Now you can install YAF with these steps:

1. Extract the archive and go to the newly extracted folder

tar -xvzf yaf-2.4.0.tar.gz
cd yaf-2.4.0/

2. Export the PKG configuration path

export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig

3. Configure with applabel enabled

./configure --enable-applabel

4. Make and install the package

make
sudo make install

If you try to run YAF right now, you’ll notice an error. We need to continue the installation process before it will run properly. This process continues by installing SiLK with these steps:

1. Extract the archive and go to the newly extracted folder

tar -xvzf silk-3.7.2.tar.gz
cd silk-3.7.2/

2. Configure with a specified fixbuf path and python enabled

./configure --with-libfixbuf=/usr/local/lib/pkgconfig/ --with-python

3. Make and install the package

make
sudo make install

With everything installed, you need to make sure that all of the libraries we need are linked properly so that the LD_LIBRARY_PATH variable doesn’t have to be exported each time you use SiLK. This can be done by creating a file named silk.conf in the /etc/ld.so.conf.d/ directory with the following contents:

/usr/local/lib
/usr/local/lib/silk
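One way to create this file from the command line is shown below; this is just a sketch, and you can use any editor instead:

printf '/usr/local/lib\n/usr/local/lib/silk\n' | sudo tee /etc/ld.so.conf.d/silk.conf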

To apply this change, run:

sudo ldconfig

Configuring SiLK

With everything installed, now we have to configure SiLK to use rwflowpack to collect the flow data we generate. We need three files to make this happen: silk.conf, sensors.conf, and rwflowpack.conf.

Silk.conf

We will start by creating the silk.conf site configuration file. This file controls how SiLK parses data, and contains a list of sensors. It can be found in the previously unzipped SiLK installation tarball at silk-3.7.2/site/twoway/silk.conf. We will copy it to a directory that Security Onion uses to store several other configuration files:

sudo cp silk-3.7.2/site/twoway/silk.conf /etc/nsm/<$SENSOR-$INTERFACE>/
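Throughout this guide, <$SENSOR-$INTERFACE> refers to the sensor directory that Security Onion created during setup, named after your hostname and monitored interface. For example, on a hypothetical sensor named seconion monitoring eth1, the command above might look like this:

sudo cp silk-3.7.2/site/twoway/silk.conf /etc/nsm/seconion-eth1/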

The site configuration file should work just fine for the purposes of this guide, so we won’t need to modify it.

Sensors.conf

The sensor configuration file sensors.conf is used to define the sensors that will be generating session data, and their characteristics. This file should be created at /etc/nsm/<$SENSOR-$INTERFACE>/sensors.conf. For this example, our sensors.conf will look like this:

probe S0 ipfix
  listen-on-port 18001
  protocol tcp
  listen-as-host 127.0.0.1
end probe
group my-network
  ipblocks 192.168.1.0/24
  ipblocks 172.16.0.0/16
  ipblocks 10.0.0.0/8 
end group
sensor S0
  ipfix-probes S0
  internal-ipblocks @my-network
  external-ipblocks remainder
end sensor

This sensors.conf has three different sections: probe, group, and sensor.

The probe section tells SiLK where to expect to receive data from for the identified sensor. Here, we’ve identified sensor S0, and told SiLK to expect to receive IPFIX data from this sensor via the TCP protocol over port 18001. We’ve also defined the IP address of the sensor as the local loopback address, 127.0.0.1. In a remote sensor deployment, you would use the IP address of the sensor that is actually transmitting the data.

The group section allows us to create a variable containing IP blocks. Because of the way SiLK bins flow data, it is important to define internal and external network ranges on a per-sensor basis so that queries based upon flow direction (inbound, outbound, inbound web traffic, outbound web traffic, etc.) are accurate. Here we’ve defined a group called my-network that has three ipblocks: 192.168.1.0/24, 172.16.0.0/16, and 10.0.0.0/8. You will want to customize these values to reflect your actual internal IP ranges.

The last section is the sensor section, which we use to define the characteristics of the S0 sensor. Here we have specified that the sensor will be generating IPFIX data, and that the my-network group defines the internal IP ranges for the sensor, with the remainder being considered external.

Be careful if you try to rename your sensors here, because the sensor names in this file must match those in the site configuration file silk.conf. If a mismatch occurs, then rwflowpack will fail to start. In addition, if you want to define custom sensor names, I recommend starting by renaming S1. While it might make sense to start by renaming S0, I’ve seen instances where this can cause odd problems.
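If you do rename sensors, one quick way to see which sensor names are defined in the site configuration file is to grep for its sensor lines (a simple sketch):

grep "^sensor" /etc/nsm/<$SENSOR-$INTERFACE>/silk.conf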

Rwflowpack.conf

The last configuration step is to modify rwflowpack.conf, which is the configuration file for the rwflowpack process that listens for and collects flow records. This file can be found at /usr/local/share/silk/etc/rwflowpack.conf. First, we need to copy this file to /etc/nsm/<$SENSOR-$INTERFACE>/:

sudo cp /usr/local/share/silk/etc/rwflowpack.conf /etc/nsm/<$SENSOR-$INTERFACE>/

Now we need to change eight values in the newly copied file:

ENABLED=yes

This will enable rwflowpack

statedirectory=/nsm/sensor_data/<$SENSOR-$INTERFACE>/silk

A convenience variable used for setting the location of other various SiLK files and folders

CREATE_DIRECTORIES=yes

This will allow for the creation of specified data subdirectories

SENSOR_CONFIG=/etc/nsm/<$SENSOR-$INTERFACE>/sensors.conf

The path to the sensor configuration file

DATA_ROOTDIR=/nsm/sensor_data/<$SENSOR-$INTERFACE>/silk/

The base directory for SiLK data storage

SITE_CONFIG=/etc/nsm/<$SENSOR-$INTERFACE>/silk.conf

The path to the site configuration file

LOG_TYPE=legacy

Sets the logging format to legacy

LOG_DIR=/var/log/

The path for log storage
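After editing, a quick sanity check along these lines can confirm the values took effect; this is just a sketch using the variable names listed above:

grep -E "^(ENABLED|statedirectory|CREATE_DIRECTORIES|SENSOR_CONFIG|DATA_ROOTDIR|SITE_CONFIG|LOG_TYPE|LOG_DIR)=" /etc/nsm/<$SENSOR-$INTERFACE>/rwflowpack.conf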

Finally, we need to copy the rwflowpack startup script into /etc/init.d so that we can start it like a normal service. This command will do that:

sudo cp /usr/local/share/silk/etc/init.d/rwflowpack /etc/init.d

Once you’ve copied this file, you need to change one path in it. Open the file and change the SCRIPT_CONFIG_LOCATION variable from “/usr/local/etc/” to “/etc/nsm/<$SENSOR-$INTERFACE>/”.
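If you prefer to make this change from the command line, a sed one-liner like the following should work, assuming the default path only appears in that variable assignment:

sudo sed -i 's|/usr/local/etc/|/etc/nsm/<$SENSOR-$INTERFACE>/|' /etc/init.d/rwflowpack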

Starting Everything Up

Now that everything is configured, we should be able to start rwflowpack and YAF and begin collecting data.

First, we can start rwflowpack by simply typing the following:

sudo service rwflowpack start

If everything went well, you should see a success message, as shown in Figure 2:


Figure 2: Successfully Starting rwflowpack

If you want to ensure that rwflowpack runs at startup, you can do so with the following command:

sudo update-rc.d rwflowpack start 20 3 4 5 .

Now that our collector is waiting for data, we can start YAF to begin generating flow data. If you’re using “eth1” as the sensor’s monitoring interface, as we are in this guide, that command will look like this:

sudo nohup /usr/local/bin/yaf --silk --ipfix=tcp --live=pcap  --out=127.0.0.1 --ipfix-port=18001 --in=eth1 --applabel --max-payload=384 &

You’ll notice that several of the arguments we are calling in this YAF execution string match values we’ve configured in our SiLK configuration files.

You can verify that everything started up correctly by running ps to make sure that the process is running, as is shown in Figure 3. If YAF doesn’t appear to be running, you can check the nohup.out file for any error messages that might have been generated.


Figure 3: Using ps to Verify that YAF is Running
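For example, either of these commands should return a process ID if YAF started successfully (the bracketed grep pattern is just a convenience to avoid matching the grep command itself):

pidof yaf
ps aux | grep [y]af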

That’s it! If your sensor interface is seeing traffic, then YAF should begin generating IPFIX flow data and sending it to rwflowpack for collection. You can verify this by running a basic rwfilter query, but first we have to tell the SiLK rwtools where the site configuration file and data directory are. This can be done by exporting the SILK_CONFIG_FILE and SILK_DATA_ROOTDIR variables:

export SILK_CONFIG_FILE=/etc/nsm/<$SENSOR-$INTERFACE>/silk.conf
export SILK_DATA_ROOTDIR=/nsm/sensor_data/<$SENSOR-$INTERFACE>/silk/

If you don’t want to have to do this every time you log into this system, you can place these lines in your ~/.bashrc file.
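For example, you could append them like this:

echo 'export SILK_CONFIG_FILE=/etc/nsm/<$SENSOR-$INTERFACE>/silk.conf' >> ~/.bashrc
echo 'export SILK_DATA_ROOTDIR=/nsm/sensor_data/<$SENSOR-$INTERFACE>/silk/' >> ~/.bashrc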

You should be able to use rwfilter now. If everything is set up correctly and you are capturing data, you should see some output from this command:

rwfilter --sensor=S0 --proto=0-255 --type=all  --pass=stdout | rwcut

If you aren’t monitoring a busy link, you might need to ping something from a monitored system (or from the sensor itself) to generate some traffic.

Figure 4 shows an example of SiLK flow records being output to the terminal.


Figure 4: Flow Records Mean Everything is Working
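Once records are flowing, you can also run more targeted queries. As a sketch, this pulls outbound web traffic for a given day; adjust the date, ports, and fields to suit your environment:

rwfilter --sensor=S0 --type=out,outweb --start-date=2013/10/21 --proto=6 --dport=80,443 --pass=stdout | rwcut --fields=sip,dip,dport,stime,bytes | head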

Keep in mind that it may take several minutes for flow records to actually become populated in the SiLK database. If you run into any issues, you can start to diagnose them by accessing the rwflowpack logs in /var/log/.
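The log file names include the date, so something along these lines should show recent rwflowpack activity (the exact names are an assumption; list the directory if the pattern differs on your system):

ls /var/log/ | grep rwflowpack
tail /var/log/rwflowpack-*.log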

Monitoring SiLK Services

If you are deploying SiLK in production, then you will want to make sure that the services are constantly running. One way to do this might be to leverage the Security Onion “watchdog” scripts that are used to manage other NSM services, but if you modify those scripts then you run the risk of wiping out your changes any time you update your SO installation. Because of this, the best idea might be to run separate watchdog scripts to monitor these services.

This script can be used to monitor YAF to ensure that it is always running:

#!/bin/bash
# Watchdog script for YAF: restart it if the process is not running.

function SiLKSTART {
  # Send IPFIX records to the local rwflowpack listener defined in sensors.conf
  sudo nohup /usr/local/bin/yaf --silk --ipfix=tcp --live=pcap --out=127.0.0.1 --ipfix-port=18001 --in=eth1 --applabel --max-payload=384 --verbose --log=/var/log/yaf.log &
}

function watchdog {
  pidyaf=$(pidof yaf)
  if [ -z "$pidyaf" ]; then
    echo "YAF is not running."
    SiLKSTART
  fi
}

watchdog

This script can be used to monitor rwflowpack to ensure that it is always running:

#!/bin/bash
# Watchdog script for rwflowpack: restart the service if the process is not running.

pidrwflowpack=$(pidof rwflowpack)
if [ -z "$pidrwflowpack" ]; then
  echo "rwflowpack is not running."
  # Kill any stale rwflowpack processes, then restart the service
  sudo pidof rwflowpack | tr ' ' '\n' | xargs -i sudo kill -9 {}
  sudo service rwflowpack restart
fi

These scripts can be set to run automatically at startup or on a schedule to help ensure that both services stay running.
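One way to do this is with cron. For example, assuming you saved the scripts as /usr/local/bin/yaf_watchdog.sh and /usr/local/bin/rwflowpack_watchdog.sh (hypothetical paths), entries like these in root's crontab (sudo crontab -e) would run them every minute:

* * * * * /usr/local/bin/yaf_watchdog.sh
* * * * * /usr/local/bin/rwflowpack_watchdog.sh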

Conclusion

I always tell people that session data is the best “bang for your buck” data type you will find. If you just want to play around with SiLK, then installing it on Security Onion is a good way to get your feet wet. Even better, if you are using Security Onion in production on your network, it is a great platform for getting up and running with session data in addition to the many other data types. If you want to learn more about using SiLK for NSM detection and analysis, I recommend checking out Applied NSM when it comes out on December 15th. If you want to sink your teeth into session data sooner, check out the excellent SiLK documentation (which includes use cases) at http://tools.netsa.cert.org/silk/docs.html.

“How do I find bad stuff on the network?”

The path to knowledge for the practice of NSM typically begins with that question. It’s because of that question that we refer to NSM as a practice, and to someone who is a paid professional in this field as a practitioner of NSM.

Scientists are often referred to as practitioners because of the evolving state of the science. As recently as the mid-1900s, medical science believed that milk was a valid treatment for ulcers. As time progressed, it was found that ulcers were caused by a bacterium called Helicobacter pylori, and that dairy products could actually further aggravate an ulcer. Perceived facts change because, although we would like to believe most sciences are exact, they simply aren’t. All scientific knowledge is based upon educated guesses utilizing the best available data at the time. As more data becomes available over time, answers to old questions change, and this redefines things that were once considered facts. This is true for doctors as practitioners of medical science, and it is true for us as practitioners of NSM.

Unfortunately, when I started practicing NSM there weren’t a lot of reference materials available on the topic. Quite honestly, there still aren’t. Aside from the occasional blog postings of industry pioneers and a few select books, most individuals seeking to learn more about this field are left to their own devices. To clear up one important misconception before it causes confusion: there is a menagerie of books available on the topics of TCP/IP, packet analysis, and various intrusion detection systems. Although the concepts presented in those texts are important facets of NSM, they don’t constitute the practice of NSM as a whole. That would be like saying a book about wrenches teaches you how to diagnose a car that won’t start.

With that in mind, my co-authors and I are incredibly excited to announce our newest project, a book entitled "Applied Network Security Monitoring". This book is dedicated to the practice of NSM. This means that rather than simply providing an overview of the tools or individual components of NSM, we will speak to the process of NSM and how those tools and components support the practice.

Audience

This book is intended to be a training manual on how to become an NSM analyst. If you’ve never performed NSM analysis, then this book is designed to provide the baseline skills necessary to begin performing these duties. If you are already a practicing analyst, then my hope is that this book will provide a foundation that will allow you to grow your analytic technique in such a way as to make you much more effective at the job you are already doing. We’ve worked with several good analysts who were able to become great analysts because they were able to enhance their effectiveness with some of the techniques presented here.

The effective practice of NSM requires a certain level of adeptness with a variety of tools. As such, the book will discuss several of these tools as well, including the Snort, Bro, and Suricata IDS tools, the SiLK and Argus netflow analysis tool sets, as well as other tools like Snorby, Security Onion, and more.

This book focuses almost entirely on free and open source tools. This is in an effort to appeal to a larger group of individuals who may not have the budget to purchase commercial analytic tools such as NetWitness or ArcSight, and also to demonstrate that effective NSM can be achieved without a large budget. Ultimately, talented individuals are what make an NSM program successful. In addition, these open source tools often provide more transparency in how they interact with data, which is also incredibly beneficial to the analyst when working with data at an intimate level.

Table of Contents

Chapter 1: The Practice of Network Security Monitoring

The first chapter is devoted to defining network security monitoring and its relevance in the modern security landscape. It discusses a lot of the core terminology and assumptions that will be used and referenced throughout the book.

Part 1: Collection

Chapter 2: Driving Data Collection

The first chapter in the Collection section of ANSM provides an introduction to data collection and an overview of its importance. This chapter provides a framework for making decisions regarding what data should be collected using a risk-based approach.

Chapter 3: The Sensor Platform
This chapter introduces the most critical piece of hardware in an NSM deployment: the sensor. This includes a brief overview of the various NSM data types, and then discusses important considerations for purchasing and deploying sensors. It then covers the placement of NSM sensors on the network, including a primer on creating network visibility maps for analyst use. This chapter also introduces Security Onion, which will be referenced throughout the book as our lab environment.

Chapter 4: Full Packet Capture Data
This chapter begins with an overview of the importance of full packet capture data. It will examine use cases that demonstrate its usefulness, and then demonstrate several methods of capturing and storing PCAP data with tools such as Netsniff-NG, Daemonlogger, and OpenFPC.

Chapter 5: Session Data
This chapter discusses the importance of session data, along with a detailed overview of Argus and the SiLK toolset for the collection and analysis of netflow data.

Chapter 6: Protocol Metadata
This chapter looks at methods for generating metadata from other data sets, and the usefulness of integrating it into the NSM analytic process. This includes coverage of the packet string (PSTR) data format, as well as other tools used to create protocol metadata.

Chapter 7: Statistical Data
The final data type that will be examined is statistical data. This chapter will discuss use cases for the creation of this data type, and provide some effective methods for its creation and storage. Tools such as rwstats, treemap, and gnuplot will be used.

Part 2: Detection

Chapter 8: Indicators of Compromise
This chapter examines the importance of Indicators of Compromise (IOC), how they can be logically organized, and how they can be effectively managed for incorporation into an NSM program. This also includes a brief overview of the intelligence cycle, and threat intelligence.

Chapter 9: Target Based Detection
The first detection type that will be discussed is target based detection. This will include basic methods for detecting communication with certain hosts within the context of the previously discussed data types.

Chapter 10: Signature Based Detection with Snort
The most traditional form of intrusion detection is signature based. This chapter will provide a primer on this type of detection and discuss the usage of the Snort IDS, including a detailed discussion of creating Snort signatures. Several practical examples and case scenarios will be presented in this chapter.

Chapter 11: Signature Based Detection with Suricata
This chapter will provide a primer on signature based detection with Suricata. This will include several practical examples and use cases.

Chapter 12: Anomaly Based Detection with Bro
Anomaly based identification is an area that has gotten quite a bit more attention in recent years. This chapter will cover Bro, one of the more popular anomaly based detection solutions, including a detailed review of the Bro architecture, the Bro language, and several use cases.

Chapter 13: Early Warning AS&W with Canary Honeypots
Previously only used for research purposes, operational honeypots can be used as an effective means for attack sense and warning. This chapter will provide examples of how this can be done, complete with code samples and deployment case scenarios.

Part 3: Analysis

Chapter 14: Packet Analysis
The most critical skill in NSM is packet analysis. This chapter covers the analysis of packet data with Tcpdump and Wireshark. It also covers basic to advanced packet filtering.

Chapter 15: Friendly Intelligence
This chapter focuses on performing research related to friendly devices. This includes a framework for creating an asset model, and a friendly host characteristics database.

Chapter 16: Hostile Intelligence
This chapter focuses on performing research related to hostile devices. This includes strategies for performing open source intelligence (OSINT) research.

Chapter 17: Differential Diagnosis of NSM Events
This is the first chapter of the book that focuses on a diagnostic method of analysis. Using the same differential technique used by physicians, NSM analysts can be much more effective in the analysis process.

Chapter 18: Incident Morbidity and Mortality
Once again borrowing from the medical community, the concept of incident morbidity and mortality can be used to continually refine the analysis process. This chapter explains techniques for accomplishing this.

Chapter 19: Malware Analysis for NSM
This isn’t a malware analysis book by any stretch of the imagination, but this chapter focuses on methods an NSM analyst can use to determine whether or not a file is malicious.

Authors

Chris Sanders, Lead Author

Chris Sanders is an information security consultant, author, and researcher originally from Mayfield, Kentucky. That’s thirty miles southwest of a little town called Possum Trot, forty miles southeast of a hole in the wall named Monkey's Eyebrow, and just north of a bend in the road that really is named Podunk.

Chris is a Senior Security Analyst with InGuardians. He has extensive experience supporting multiple government and military agencies, as well as several Fortune 500 companies. In multiple roles with the US Department of Defense, Chris significantly helped to further the role of the Computer Network Defense Service Provider (CNDSP) model, and helped to create several NSM and intelligence tools currently being used to defend the interests of the nation.

Chris has authored several books and articles, including the international best seller "Practical Packet Analysis" from No Starch Press, currently in its second edition. Chris currently holds several industry certifications, including the CISSP, GCIA, GPEN, GCIH, and GREM.

In 2008, Chris founded the Rural Technology Fund. The RTF is a 501(c)(3) non-profit organization designed to provide scholarship opportunities to students from rural areas pursuing careers in computer technology. The organization also promotes technology advocacy in rural areas through various support programs.

When Chris isn't buried knee-deep in packets, he enjoys watching University of Kentucky Wildcat basketball, amateur drone building, BBQing, and spending time at the beach. Chris currently resides in Charleston, South Carolina.

Liam Randall, Co-Author

Liam Randall is a principal security consultant with Cincinnati, OH based GigaCo. Originally from Louisville, KY, he worked his way through school as a sysadmin while getting his Bachelor's in Computer Science at Xavier University. He got his start in high-security environments, writing device drivers and XFS based software for Automated Teller Machines.

Presently he consults on high-volume security solutions for the Fortune 500, Research and Education Networks, various branches of the armed services, and other security-focused groups. As a contributor to the open source SecurityOnion distribution and the Berkeley-based Bro-IDS network security package, he can frequently be found speaking about cutting-edge blue team tactics on the conference circuit.

A father and a husband, Liam spends his weekends fermenting wine, working in his garden, restoring gadgets, or making cheese. With a love of the outdoors, he and his wife enjoy competing in triathlons, long-distance swimming, and spending time in their community.

Jason Smith, Co-Author

Jason Smith is an intrusion detection analyst by day and junkyard engineer by night. Originally from Bowling Green, Kentucky, Jason started his career mining large data sets and performing finite element analysis as a budding physicist. By dumb luck, his love for data mining led him to information security and network security monitoring where he took up a fascination with data manipulation and automation.

Jason has a long history of assisting state and federal agencies with hardening their defensive perimeters and currently works as an Information Security Analyst with the Commonwealth of Kentucky. As part of his development work, he has created several open source projects, several of which have become "best-practice" tools for the DISA CNDSP program.

Jason regularly spends weekends in the garage building anything from arcade cabinets to open-wheel racecars. Other hobbies include home automation, firearms, Monopoly, playing guitar, and eating. Jason has a profound love of rural America, a passion for driving, and an unrelenting desire to learn. Jason is currently living in Frankfort, Kentucky.

Release Date

The tentative release date for Applied NSM is during the third quarter of 2013.