Visualising Sguil session data with NetFlow

I think this is the first time I’ve explicitly mentioned Sguil, and I’m not going to talk too much about the package itself as many others have already done it for me. Basically, Sguil is a nicely integrated suite of (free) tools that will help you put NSM principles into practice. It has a wonderful investigative interface, and allows easy access to all the alert, session, and full-content data you’ve been capturing via a TAP or SPAN port.

I want to talk about Sguil’s session data capture mechanism. SANCP is used for this purpose, and it’s nicely tied into the alert and full-content data captured by Sguil. It does a jolly good job, too.

SANCP isn’t the only tool for handling session data, though – Cisco’s NetFlow is another. What I’m going to propose might sound a little odd, but I find it useful – I hope you will, too. Here goes:

My Sguil sensors maintain two stores of session data. One is the standard set as captured by SANCP, the other is a database of NetFlow records that express the same traffic.

“Why?”, I hear you ask. Isn’t this a duplication of data? Don’t you need some fancy Cisco router to export flows? The answers to these are “sort of” and “no”, respectively. Before we get onto how we make this happen, let’s talk about why we want to bother in the first place.

  • There is no easy and cheap visualisation of SANCP data. If you’ve got a TAP on a link and you’re feeding it into a Sguil sensor, it’d be great to see a nice graph of how much traffic is passing in each direction. If you can break it down by IP address and port, so much the better. Yes, you can export the SANCP data into something like Excel to graph it, but that’s hardly “easy” or “cheap”. If we can somehow extract NetFlow data from our TAPped link, we can use any number of NetFlow analysis tools to look at what’s going on.
  • SANCP isn’t (to my knowledge) timely. If you’ve got a long-running session (e.g., a huge download) it won’t get logged by SANCP until it has finished (possibly many hours later). NetFlow devices, by contrast, can be instructed to prematurely export flows for sessions that haven’t finished yet, giving you a more real-time view of the data.
  • SANCP logs one row of data for every session it observes, recording source IP/port, destination IP/port, src->dst bytes/packets, dst->src bytes/packets, etc. By storing these “bidirectional” records, you sometimes get sets of entries shaped like this:

        client:port    ->  webserver:80
        client:port    ->  webserver:80
        client:port    ->  webserver:80
        webserver:80   ->  client:port

    All of the rows above show sessions to or from our webserver. Look at the last row, though – somehow the webserver is listed as the source rather than the destination (you’d have to ask someone familiar with the SANCP codebase why this happens from time to time). What this means is that if I want to write some SQL to show me all of the sessions hitting my webserver, I can’t just say “WHERE dst_ip = INET_ATON(‘<webserver IP>’) AND dst_port = 80”, because I would miss the last row. I need to alter my SQL to also include rows where “src_ip = INET_ATON(‘<webserver IP>’) AND src_port = 80” (I guess this is why there are UNION options in Sguil’s query builder!). IMHO, SANCP’s “source” and “destination” columns might be better referred to as “peer A” and “peer B”, since there’s no consistency in the way sessions are listed. NetFlow, by contrast, stores unidirectional flows, with separate database rows for traffic from a->b and from b->a, thereby avoiding this confusion.
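The UNION workaround can be sketched in a few lines – here’s Python standing in for the SQL, with invented table rows and documentation-range addresses (192.0.2.x) rather than anything real:

```python
# Illustrates why a "WHERE dst_ip = server" query misses sessions that
# SANCP happened to record with the server as the *source* peer.
# Rows and addresses are invented; the column names mirror those
# mentioned above (src_ip/src_port/dst_ip/dst_port).

SERVER = ("192.0.2.80", 80)  # hypothetical webserver

rows = [
    {"src_ip": "192.0.2.10", "src_port": 3201, "dst_ip": "192.0.2.80", "dst_port": 80},
    {"src_ip": "192.0.2.11", "src_port": 4477, "dst_ip": "192.0.2.80", "dst_port": 80},
    # The "inverted" row: the webserver recorded as the source peer.
    {"src_ip": "192.0.2.80", "src_port": 80, "dst_ip": "192.0.2.12", "dst_port": 1138},
]

def naive_query(rows):
    """Equivalent of: WHERE dst_ip = server AND dst_port = 80."""
    return [r for r in rows if (r["dst_ip"], r["dst_port"]) == SERVER]

def union_query(rows):
    """Equivalent of the UNION: match the server as either peer."""
    return [r for r in rows
            if (r["dst_ip"], r["dst_port"]) == SERVER
            or (r["src_ip"], r["src_port"]) == SERVER]

print(len(naive_query(rows)))  # 2 -- the inverted row is missed
print(len(union_query(rows)))  # 3 -- all sessions found
```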

Irrespective of whether you agree with the latter points, I think the visualisation aspect of this is worth its weight in gold. Here’s how to do it, and how to actually make it work in a useful fashion.

Firstly, we need some way of getting NetFlow exports from our TAPped traffic stream. softflowd is your friend here – get it installed on your Sguil box, and run it like this:

/usr/sbin/softflowd -i eth1 -t maxlife=60 -n 127.0.0.1:9996

Let’s look at the parameters in turn:

“-i eth1” tells softflowd to listen on eth1, which is also Sguil’s monitoring interface in this case.

“-t maxlife=60” tells softflowd to export a flow record for non-expired flows after sixty seconds. This gives us the “timeliness” that SANCP lacks.

“-n 127.0.0.1:9996” tells softflowd where to send its NetFlow exports – in this case, to a NetFlow collector running locally on the Sguil sensor on port 9996.
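The effect of maxlife on a long-lived session can be sketched like this – a simplification of softflowd’s actual expiry logic, with invented timestamps:

```python
# Sketch of "-t maxlife=60": a long-lived session is exported as a
# series of partial flow records rather than one record at the very
# end. This is a simplification, not softflowd's real implementation.

def export_times(start, end, maxlife=60):
    """Times (seconds) at which flow records for one session get exported."""
    partials = list(range(start + maxlife, end, maxlife))
    return partials + [end]  # final record when the session actually ends

# A 200-second download is visible at the collector within 60 seconds:
print(export_times(0, 200))  # [60, 120, 180, 200]
```

SANCP, by contrast, would give you a single record at the 200-second mark.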

So now softflowd is looking at Sguil’s traffic stream, and is converting what it sees to NetFlow exports which it is sending to a collector listening on port 9996. All we need now is a NetFlow collector to receive the exports.
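As a sketch of the first thing any collector does with these exports – assuming softflowd is sending NetFlow v5, its default in the versions I’ve used – here’s a toy header parser; the datagram is hand-crafted rather than real traffic:

```python
# Toy parser for the standard NetFlow v5 export header (24 bytes).
# A real collector would bind a UDP socket first, e.g.:
#   import socket
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.bind(("127.0.0.1", 9996))
#   datagram, peer = sock.recvfrom(65535)
import struct

V5_HEADER = struct.Struct("!HHIIIIBBH")  # network byte order, 24 bytes

def parse_v5_header(datagram):
    (version, count, sys_uptime, unix_secs, unix_nsecs,
     flow_sequence, engine_type, engine_id, sampling) = V5_HEADER.unpack_from(datagram)
    if version != 5:
        raise ValueError("not a NetFlow v5 export")
    # 'count' is the number of 48-byte flow records that follow the header
    return {"version": version, "count": count, "unix_secs": unix_secs}

# Hand-craft a fake export header (claiming 2 flow records) to exercise it:
fake = V5_HEADER.pack(5, 2, 1000, 1234567890, 0, 42, 0, 0, 0)
print(parse_v5_header(fake))
```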

There are many collectors available, but the one I’m going to show here is ManageEngine’s Netflow Analyzer (NFA). I use NFA because the free version will work perfectly in this scenario, and because it has one crucial feature that makes all of this useful (we’ll get to that later on).

So, download and install the free version of NFA on your Sguil sensor, and tell it to collect flows on port 9996. In very short order, NFA should start receiving flows and drawing a nice graph. A graph which will have one obvious and fatal flaw about it:


Whoops. We’re only showing “in” traffic, and it’s actually the sum of the “in” and “out” traffic on our monitored link. When you think about it, what else was softflowd supposed to do? After all, the only traffic it sees is coming “in” to eth1, so that’s all it can report.

However, NFA has a nifty feature we can use to make sense of the data – IP Groups. Set up a new IP Group (call it whatever you like), and add the IP addresses that Sguil/Snort considers to be HOME_NET. NFA will then use these IP addresses to determine which flows should be interpreted as “in” and which as “out”. If we navigate to our IP group in NFA we now get a nice graph that looks like this:
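The in/out decision that the IP Group drives can be sketched like this (HOME_NET here is an invented example range, and this is my reading of the logic rather than NFA’s actual code):

```python
# Classify a flow relative to HOME_NET, the way an IP Group lets NFA
# decide "in" vs "out". HOME_NET below is an invented example range.
import ipaddress

HOME_NET = ipaddress.ip_network("192.0.2.0/24")  # hypothetical HOME_NET

def direction(src_ip, dst_ip):
    src_home = ipaddress.ip_address(src_ip) in HOME_NET
    dst_home = ipaddress.ip_address(dst_ip) in HOME_NET
    if dst_home and not src_home:
        return "in"
    if src_home and not dst_home:
        return "out"
    if src_home and dst_home:
        return "internal"  # both peers inside: ambiguous for an in/out graph
    return "transit"       # neither peer inside

print(direction("198.51.100.7", "192.0.2.80"))  # in
print(direction("192.0.2.80", "198.51.100.7"))  # out
```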


Woohoo! Now we’re talking! Now we can use NFA’s other cool features like drag-to-zoom and breakdowns by IP address and port.

Yes, we’ve bloated up our Sguil sensor in the process. Yes, we’re doubling up on session information. But if you’ve just arrived at a customer site and have plugged your sensor into what is (for you) an unfamiliar network, what better and quicker way to get an idea of the kind of traffic you’re monitoring?

Comments welcome. I’m sure there’s a gigantic flaw in the plan somewhere!


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters(at)

6 Responses to “Visualising Sguil session data with NetFlow”

  1. It seems like sancp is redundant. Can sancp be replaced by netflow or will sguil break?

  2. Hi Joe,

    SANCP is far from redundant, and I didn’t mean to suggest that it was. Aside from being a source of information in its own right, it is the “glue” that links up the alerts from Snort with the full content capture.

    I think that Netflow is a complementary technology to SANCP in this instance, and a nice easy way to visualise the traffic your Sguil box is capturing. I’ve had good results with softflowd and NFA. If there are (rightful!) concerns about bloating up a sensor with NFA, there’s nothing to stop you from putting only softflowd on the sensor – NFA can live on another box.


  3. Great article and tips! I did have one comment about NFA and segmenting in and out traffic by the HOME_NET. What about internal to internal traffic? This will all show up as “IN” traffic won’t it?

  4. Hi,

    Thanks for the comment 🙂

    Whether your sensor will see internal to internal traffic will depend on where you’ve put it. In the example above, my sensor was at the border of the internal and external networks, so no internal to internal traffic would be observed at all.

    If your sensor does see internal to internal traffic, then I’m not quite sure how NFA will interpret it since the observed traffic is both entering and leaving the IP group at the same time. You may see the traffic “doubled up” on the IN and OUT graphs, or NFA may omit it altogether.

    If you try it out, please post back with your results!


  5. Hello,

    I just arrived here from the “Build Securely Snort with Sguil Sensor
    Step-by-Step Powered by Slackware Linux” doc on SANS by Guy Bruneau.

    I also realize this is quite an old post, but wanted to mention that I perform a similar feat with the use of a project out of the University of Munich, by Lothar Braun, titled flow-inspector.

    I’ve been attempting to drum up some interest in it, as I truly believe it is one of the first of its kind to visually render flow data: open source, free, and using a more recent, totally extensible and generally damn cool visualization framework (d3.js). I believe the project truly has the potential to make a large impact on analyst workflow.

    I currently run VERMONT solely to source data for flow-inspector, utilizing argus to source session data records (and a limited amount of payload data) for long term storage.

    This combination seems to work quite well; the web UI in flow-inspector is lacking in its current state, but with some minor tweaks it has the potential to provide very useful, near real-time info.

    The paper that linked to this post is:

    flow-inspector is accessible:
    the original paper is accessible:
    VERMONT is accessible:
    argus is accessible:

    As of this writing, VERMONT’s mainline doesn’t feature the module to export to the redis queue necessary for import to flow-inspector’s DB. I have written a guide for this:

    Hope you find flow-inspector helpful.


