Archive for August, 2009

A summary of today’s threats… set to music!

Posted in Silly on 25 August, 2009 by Alec Waters

Given the threat landscape in which we live, I personally think it’s far too dangerous to even consider switching on a computer, let alone using it to surf the web. Whilst I’m finishing up reading my copy of “Luddite Hermitry for Dummies”, I present a little ditty sung to the tune of “Money for Nothing” by Dire Straits:

Now look at them blackhats that’s the way you do it
Finding holes in your security
That ain’t workin’ that’s the way you do it
Mutating malware very frequently

Now that ain’t workin’ that’s the way you do it
Lemme tell ya them guys ain’t dumb
Maybe get a virus on your little palmtop
Maybe get a virus on your phone

We gotta install fake anti virus
Custom malware deliveries
We gotta move these stolen credentials
Gotta move these identities

See the little blackhat with the Gumblar and the KoobFace
Yeah buddy, they’re his own bots
That little blackhat got his own jet airplane
That little blackhat he’s a millionaire

We gotta install hidden keyloggers
Banking trojan deliveries
We gotta move these injected iframes
‘Sploit some code that’s written in C

I shoulda learned blackhat SEO
I shoulda learned to smash the stack
Look at that pointer, she got it pointin’ anywhere you like!
Man we can have some fun
And he’s up there, what’s that? PCI audit?
Does it get you much of that “security”?
That ain’t workin’ that’s the way you do it
Get your bad press for nothin’ or redundancy

We gotta install fake porno codecs
XOR’ed shellcode deliveries
We gotta DDOS a Georgian blogger
We gotta DDOS this ISP

Now that ain’t workin’ that’s the way you do it
Infecting websites via FTP
That ain’t workin’ that’s the way you do it
Pagerank for nothin’ and your clicks for free

Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters(at)

Attack of the Clones

Posted in Case Studies, NSM on 20 August, 2009 by Alec Waters

It’s not always possible or feasible to collect the four types of information useful for conducting NSM, for the usual reasons (“cost of software/hardware/people/time” being near the top of the list). However, this doesn’t mean that the game is lost before it’s even begun – Sguil, for example, doesn’t have any facility for statistical alerts, but that doesn’t mean that it’s not a powerful tool.

The following tale took place where only session and alert data were available. Despite this apparent lack of information, we were able to solve the mystery without the intervention of Scooby and the gang, and we were able to dodge the temptation to take an IPS alert at face value (a clear case of defensive avoidance!).

The network in question was purely a client site; there were no public servers to worry about. Network security was pretty formulaic:


There’s a PIX doing the standard firewall/NAT job, and an inline IPS scrutinising everything that goes in or out. The logging level on the PIX is turned all the way up to “debugging”, so we get an export of session data in the form of messages like PIX-6-302013/PIX-6-302014 etc. Both the IPS and the PIX are reporting to a central log collector, a Cisco CS-MARS in this case.

The trigger for this investigation was an alert from the IPS. Lots of them, in fact. The signature that fired was one we’d never seen before, which means either that we have another class of false positive to tune out or that something interesting is actually happening.

Even more interesting was the fact that the signature wasn’t just your typical brute-force pattern matching job – it was one of Cisco’s “anomaly detection” signatures that fires on behaviour observed over time. The signature denotes a TCP scanner hard at work scanning external IP addresses. The signature writeup is frustratingly lacking in detail; what it means when it says “scanning” would be a useful thing to know, for starters.

Never mind. NSM Ninjas don’t need vendor writeups. We can reverse engineer a signature’s firing conditions ourselves.

Looking at the alerts we’d got, we can see:

  • There were zillions of alerts over a five-ish minute period.
  • The alerts cite five distinct internal IP addresses as being those doing the “scanning”.
  • At the end of the five-ish minutes, the alerts stop as abruptly as they started.
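“Reverse engineering” an alert burst like this starts with simple grouping and counting. A minimal sketch of that first pass (the `(timestamp, source_ip)` tuple format is an assumption; in reality the alerts would be pulled from the sensor or the CS-MARS event store):

```python
from datetime import datetime

def summarise_burst(alerts):
    """alerts: iterable of (timestamp, source_ip) tuples from the IPS.
    Returns the burst window and the distinct sources accused of 'scanning'."""
    times = sorted(t for t, _ in alerts)
    sources = {src for _, src in alerts}
    return times[0], times[-1], sources

# Illustrative data only - three alerts, two "scanners", a five-minute window.
alerts = [
    (datetime(2009, 8, 20, 14, 0, 0), "192.168.1.10"),
    (datetime(2009, 8, 20, 14, 2, 30), "192.168.1.10"),
    (datetime(2009, 8, 20, 14, 5, 0), "192.168.1.11"),
]
start, end, scanners = summarise_burst(alerts)
```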

Hmm. Let me see if I’ve got this straight. Five of my hosts all start “scanning” at the same time, they carry on scanning for five minutes, and then they all stop at the same time?


Maybe we really do have a worm outbreak here. But why only five hosts? Why did they stop at the same time? Is there a command and control element at work here? Are my hosts pwned? Do I trust the IPS alerts and start rebuilding the “compromised” hosts? Questions pour down like rain, and we’re in for some serious flooding unless we wheel out the umbrella-and-wellies combo that is NSM and Vigilance to Detail.

First, let’s see exactly what these hosts were doing during this five minute window. We’ve got no full-content capture here, remember, so we’re going to have to hit the session data from the PIX pretty hard. Using this, we can see that each of the five hosts tried to contact between two and three hundred non-local IP addresses in our five minute material time frame (MTF). This is definite worm behaviour. There’s a small degree of crossover between the pools of target IP addresses, but there’s no one address that they all have in common (i.e., there’s no single command and control channel).
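Counting distinct destinations per host is a mechanical job once you have the PIX-6-302013 messages. A rough sketch of how one might tally them (the exact message layout varies by PIX/ASA software version, so the regex below is an assumption, not gospel):

```python
import re
from collections import defaultdict

# Assumed layout of a %PIX-6-302013 "Built outbound TCP connection" message;
# check against your own PIX version before trusting the field positions.
LINE_RE = re.compile(
    r"%PIX-6-302013: Built outbound TCP connection \d+ "
    r"for outside:(?P<dst>\d+\.\d+\.\d+\.\d+)/\d+ \(\S+\) "
    r"to inside:(?P<src>\d+\.\d+\.\d+\.\d+)/\d+ \(\S+\)"
)

def distinct_destinations(log_lines):
    """Map each internal source IP to the set of external IPs it contacted."""
    targets = defaultdict(set)
    for line in log_lines:
        m = LINE_RE.search(line)
        if m:
            targets[m.group("src")].add(m.group("dst"))
    return targets
```

Run over the five-minute MTF, the size of each host’s set is the “between two and three hundred” figure above, and intersecting the sets is how you check for a common command and control address.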

Next, we can check the destination ports – if we’re dealing with a worm, they’ll be a good clue as to which one it is. All the ports were TCP, but the port numbers were random. All over the place. This doesn’t seem like worm behaviour to me – random IP addresses I can understand, but random ports make little sense.

Now we can look at data volumes – how much data did our “scanners” actually send? We get another interesting answer – not a single byte of payload was carried. This could possibly be explained by the random nature of the destination ports – given the utter shotgun nature of the “scanning”, I guess it’s not too likely that we’re going to hit an open port.

So we have a frenzy of totally ineffective scanning, with the attackers apparently synchronised somehow. There’s not too much more we can learn from the session data at this point, so we have to look for other clues. The plan is to see what kinds of events the PIX was splurting out in the thirty seconds before and after the first IPS alarm – we’re after the catalyst for the scanning, if there is one.

All the while, I can’t help but think I’ve seen these five source IP addresses together before, but I can’t quite put my finger on it…

Anyway, back to the catalyst seeking. The ad-hoc query interface on the CS-MARS is pretty reasonable, and it’s really easy to ask it for a list of event types seen from a particular device for a particular MTF. Taking the start of the scanning as the start point and working from T-30 seconds to T+330, we notice a few things:

  • There seems to be a big gap in the events output by the PIX – it’s been totally silent during the initial period of scanning.
  • During the latter phases of scanning, there were loads of these messages logged: “%PIX-3-305006: outbound portmap translation creation failed”. These are raised when the PIX can’t create a NAT translation, due to lack of resources, or a TCP protocol violation, etc.
  • We also see a single instance of this: “%PIX-6-199002: Startup completed. Beginning operation”. This means that the PIX rebooted for some reason.

We can express this as a timeline:


Finally, I remember where I’ve seen the five IP addresses before, and all the pieces fall into place.

The five IP addresses are those of people who use Skype. Whilst it obviously has great merit as a piece of communications software, its use of apparently random destination IP addresses and ports plays merry hell with NSM reports based upon session data. For this reason, I run a daily report of Skype users so that I can exclude them from these reports if I need to (it’s easy to spot a Skype client starting up because it checks to see if it’s running the latest version – I look for which IP addresses are making the check).

After piecing together all the evidence, we come up with this:

  • Five Skype clients start up. They connect to many many destination IP addresses on random ports.
  • For whatever reason, the PIX crashes and reloads.
  • The Skype clients don’t know this, and try to maintain their existing TCP connections (they must do some kind of keepalive).
  • After a minute or two, the PIX has finished reloading.
  • Whilst this is going on, the Skype clients are still trying their keepalives. Once the PIX is working again, the keepalives still fail because the PIX is a stateful firewall. Each keepalive only has the ACK flag set because it’s part of an existing session as far as Skype is concerned. However, the PIX hasn’t seen the start of the TCP session and therefore has no “state container” for it. This is the reason for all the “outbound portmap translation creation failed” messages, and also the reason why we didn’t see any actual payload transferred – the PIX dropped all of the keepalives.
  • Meanwhile, the IPS (sitting in between the Skype clients and the PIX) is seeing all of this and is merrily firing its “External Scanner” signature.
  • Eventually, the session timeout on all the Skype clients fires, and they all declare their existing sessions dead and re-establish them from scratch with SYN.
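The state-container behaviour at the heart of that sequence can be boiled down to a toy model – nothing PIX-specific, just the general stateful-firewall rule that a mid-session ACK with no matching state entry gets dropped:

```python
class StatefulFirewall:
    """Toy stateful firewall: a flow is only forwarded if the firewall
    saw (and recorded) the SYN that started it. A reboot wipes the table."""

    def __init__(self):
        self.state = set()  # known (src, dst, sport, dport) flows

    def pass_packet(self, flow, syn=False):
        if syn:
            self.state.add(flow)  # session start: create a state container
            return True
        return flow in self.state  # ACK-only with no state container: dropped

flow = ("192.168.1.10", "203.0.113.5", 1025, 443)  # illustrative flow

fw = StatefulFirewall()
before_syn = fw.pass_packet(flow, syn=True)   # session established, forwarded
before_ack = fw.pass_packet(flow)             # keepalive ACK, forwarded

fw = StatefulFirewall()                       # the "reboot": state table gone
after_ack = fw.pass_packet(flow)              # keepalive ACK now dropped
after_syn = fw.pass_packet(flow, syn=True)    # fresh SYN re-establishes it
```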

So, there we have it. The IPS alerts were false positives in this instance, caused by a tenacious piece of software and a flaky piece of hardware. Our lack of full-content capture wasn’t a problem – we solved the mystery without it, and even if we’d had it there wouldn’t have been anything to see in this case. Another victory for the umbrella-and-wellies combo!


Detecting encrypted traffic with frequency analysis

Posted in Crazy Plans, net-entropy, NSM, Sguil on 12 August, 2009 by Alec Waters

Let’s start with a little disclaimer:

I am not a cryptanalyst. I am not a mathematician. It is quite possible that I am a complete idiot. You decide.

With that out of the way, let’s begin.

NSM advocates the capture of, amongst other things, full-content data. It is often said that there’s no point in performing full-content capture of encrypted data that you can’t decrypt – why take up disk space with stuff you’ll never be able to read? It’s quite a valid point – one of the networks I look after carries quite a bit of IPSec traffic (tens of gigabytes per day), and I exclude it from my full content capture. I consider it enough, in this instance, to have accurate session information from SANCP or Netflow which is far more economical on disk space.

That said, you can still learn quite a bit from inspecting full-content captures of encrypted data – there is often useful information in the session setup phase that you can read in clear text (e.g., a list of ciphers supported, or SSH version strings, or site certificates, etc.). It still won’t be feasible to decrypt the traffic, but at least you’ll have some clues about its nature.

A while ago, Richard wrote a post called “Is it NSM if…” where he says:

While we’re talking about full content, I suppose I should briefly address the issue of encryption. Yes, encryption is a problem. Shoot, even binary protocols, obscure protocols, and the like make understanding full content difficult and maybe impossible. Yes, intruders use encryption, and those that don’t are fools. The point is that even if you find an encrypted channel when inspecting full content, the fact that it is encrypted has value.

That sounds reasonable to me. If you see some encrypted stuff and you can’t account for it as legitimate (run of the mill HTTPS, expected SSH sessions, etc.) then what you’re looking at is a definite Indicator, worthy of investigation.

So, let’s just ask our capture-wotsits for all the encrypted traffic they’ve got, then, shall we? Hmm. I’m not sure of a good way to do that (if you know of one, you can stop reading now – and please let me know what it is!).


…I’ve got an idea.

Frequency analysis is a useful way to detect the presence of a substitution cipher. You take your ciphertext and draw a nice histogram showing the frequency of all the characters you encounter. Then you can make some assumptions (like the most frequent character was actually an ‘e’ in the plaintext) and proceed from there.

However, the encryption protocols you’re likely to encounter on a network aren’t going to be susceptible to this kind of codebreaking. The ciphertext produced by a decent algorithm will be jolly random in nature, and a frequency analysis will show you a “flat” histogram.
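You can see that contrast for yourself with a few lines of Python – this is just a sketch, with `os.urandom` standing in for the output of a decent cipher:

```python
from collections import Counter
import os

def histogram(data):
    """Byte-value frequency histogram, like the ones the Security
    Ripcord tool draws for files."""
    return Counter(data)

# English text vs. random bytes (a proxy for good ciphertext).
text = b"the quick brown fox jumps over the lazy dog " * 100
rand = os.urandom(len(text))

text_hist = histogram(text)  # spiky: only the byte values English uses
rand_hist = histogram(rand)  # flat-ish: nearly all 256 values, evenly spread
```

The text histogram touches only a couple of dozen of the 256 possible byte values; the random one touches practically all of them at roughly equal heights.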

So why am I talking about frequency analysis? Because this post is about detecting encrypted traffic, not decrypting it.

Over at Security Ripcord, there’s a really nifty tool for drawing file histograms. Take a look at the example images – the profile of the histograms is pretty “rough” in nature until you get down to the Truecrypt example – it’s dead flat, because a decent encryption algorithm has produced lots and lots of nice randomness (great terminology, huh? Like I said, I’m not a cryptanalyst or a mathematician!).

So, here’s the Crazy Plan for detecting encrypted traffic:

  1. Sample X contiguous bytes of a given session (maybe twice, once for src->dst and once for dst->src). A few kilobytes ought to be enough to get an idea of the level of randomness we’re looking at.
  2. Make your X-byte block start a little way into the session, so that we don’t include any plaintext in the session startup.
  3. Strip off the frame/packet headers (ethernet, IP, TCP, UDP, ESP, whatever) so that you’re only looking at the packet payload.
  4. Perform your frequency analysis of your chunk of payload, and “measure the resultant flatness”.
  5. Your “measure of flatness” equates to the “potential likelihood that this is encrypted”.

Perhaps one could assess the measure of flatness by calculating the standard deviation of the character frequencies? Taking the Truecrypt example, this is going to be pretty close to zero; the TIFF example is going to yield a much higher standard deviation.
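Putting the five steps and the standard-deviation idea together gives something like the sketch below. The `SKIP` and `SAMPLE` sizes are illustrative guesses, and the function assumes it is handed reassembled payload with the frame/packet headers already stripped (step 3):

```python
import os
import statistics
from collections import Counter

SKIP = 512     # step 2: start a little way in, past any cleartext setup
SAMPLE = 4096  # step 1: X contiguous payload bytes to examine

def flatness(payload):
    """Steps 4-5: frequency analysis of a chunk of payload, with
    'flatness' measured as the standard deviation of the 256 byte
    frequencies. Lower = flatter = more likely encrypted."""
    chunk = payload[SKIP:SKIP + SAMPLE]
    if len(chunk) < SAMPLE:
        return None  # not enough data to judge
    counts = Counter(chunk)
    freqs = [counts.get(b, 0) / len(chunk) for b in range(256)]
    return statistics.pstdev(freqs)

# Illustrative comparison: random bytes (ciphertext stand-in) vs. ASCII text.
f_rand = flatness(os.urandom(SKIP + SAMPLE))
f_text = flatness(b"GET /index.html HTTP/1.1\r\n" * 400)
```

With numbers like these, the `src_randomness < 1` style of query below becomes a matter of picking a threshold for the stored flatness value.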

Assuming what I’ve babbled on about here is valid, wouldn’t it be great to get this into Sguil? If SANCP or a Snort pre-processor could perform this kind of sampling, you’d be able to execute some SQL like this:

select [columns] from sancp where src_randomness < 1 or dst_randomness < 1

…and you’d have a list of possibly encrypted sessions.

How’s that sound?

This post has been updated here.



Defensive Avoidance vs Vigilance to Detail

Posted in General Security on 5 August, 2009 by Alec Waters

NSM is a methodology that facilitates the investigation of security incidents. Whichever tools you use to accomplish this, at the end of the day it is you, the investigator, who has to make sense of the gathered information. There’s only one tool for doing this, and it’s called GreyMatter v1.0; it’s installed at the factory between the ears of every one of us. Well, most of us, at least.

I’m interested in investigation as a skill in its own right, and I find it useful to learn how investigators in areas outside of IT go about their business. Whilst their skills may not be directly relevant to the infosec world, there are definite parallels in that all investigators have to acquire and collate information and extract high-quality evidence from it.

As an example, Advanced Surveillance by Peter Jenkins is a great manual on physical surveillance. I’m not likely to ever lead a team keeping Mister Big under surveillance, but it’s very interesting to see how Peter goes about acquiring information and handling it in a manner acceptable to law enforcement and the courts. It also makes it easy to spot TV cops conducting surveillance badly. Dexter really needs to read a copy.

I’m currently reading Investigative Interviewing by Dr Eric Shepherd, a Consultant Forensic Psychologist (a cool job title if ever I heard one!). The book is aimed at UK law enforcement, and is intended to complement their interview training.

Early on in the book, Dr Shepherd introduces two investigative mindsets, “Defensive Avoidance” and “Vigilance to Detail”. Introducing Defensive Avoidance, Dr Shepherd says:

There are many pressures in the workplace: volume of work, shortage of staff, limited time and resources, and restricted budgets. These pressures are liable to lead investigators to adopt a mindset of defensive avoidance that is not consistent with quality performance. More cases can be worked more quickly and with less “grief” by not “doing” detail: by not enquiring in detail, by not observing closely, and by not examining systematically. Defensive avoidance is a decision to minimize the mental demands and to evade the complexity and implications of detail. It is characterised by taking the “short cut” as much as possible. [...] The common theme is confirmation bias, ie the search for information that confirms prior belief and ignoring that which does not.

An investigator with this mindset may approach their interviews with the goals of confirming what they know, or what they think they know, or what they want to be true. This is not necessarily borne out of laziness or incompetence; above, Dr Shepherd lists other pressures that lead to defensive avoidance.

Elsewhere, I’ve seen defensive avoidance categorised by:

  • Lack of vigilant search
  • Distortion of the meaning of warning messages
  • Selective inattention and forgetting
  • Rationalizing

Dr Shepherd then introduces a second mindset, Vigilance to Detail:

The alternative mindset to defensive avoidance [...] is vigilance: the decision to be attentive, observant, and circumspect in respect of detail. Common sense argues that the life-blood of an effective investigation is a comprehensive grasp of the fine-grain detail. An investigator who is not committed to all the detail – warts and all – is a contradiction in terms. Vigilance to detail is mentally and physically demanding. The pressure can be markedly eased by operating with a model of investigation that assists thinking and action in the gathering and processing of detail.

An interview conducted with this mindset will seek to establish an account of what happened. This then becomes a body of evidence, and the investigator will draw their conclusions from it well after the interview has concluded, not before or during it. This mindset will also drive you to pay attention to not only what was said, but what was not said.

Stepping back into the infosec world, these two mindsets seem familiar – I’ve seen defensive avoidance before:

  • In this story, anti-virus said that everything was OK, so the issue was not (initially) looked at in any more detail.
  • In this story, the helpdesk removed the obvious symptoms of infection without looking in more detail at what the infection actually was and how it got in (there was another driver towards defensive avoidance here – apathy. The helpdesk staff didn’t care about the detail – all they wanted was for the user to get off the phone as quickly as possible).
  • More than once I’ve reported an incident to a customer, only to have them say “it’s OK, our anti-virus caught it”. If your AV took care of it, why am I still seeing hostile traffic?
  • The lower tiers described here could be said to be exhibiting a degree of defensive avoidance, too.

Vigilance to Detail, on the other hand, reminds me much more of what you can do if you are practicing NSM principles:

  • NSM captures all the detail crossing the network and provides you with a terrific amount of information from which you can extract high-quality evidence.
  • It allows you to see not just what was said (AV/IDS alerts etc.) but also what was not explicitly said (eg, session data from apparently benign transactions).
  • NSM allows you to investigate indicators that others may take at face value (AV/IDS alerts etc.), or
  • To determine why you have a “non-barking dog” on your hands (eg a blatantly infected machine that’s spewing spam whilst its AV does nothing).

At the end of the day, Vigilance to Detail is just a mindset, and NSM is just a methodology. It’s up to us, the investigators, to make the most of both.


Packet Challenge

Posted in Packet Challenge on 3 August, 2009 by Alec Waters

I enjoy the little “packet challenges” that people post, and I’ve had a reasonable amount of success of late.

So, now it’s my turn. Chris Christianson over at has kindly posted a challenge I came up with – The Crypto Kitchen. There are two versions of the challenge, easy and hard, but the answer is the same in both cases. It’s your explanation that will count!

The question to be answered is – “what is the secret ingredient?”

Good luck, and thanks Chris for posting the challenge!


