Archive for the Case Studies Category

Private Investigations

Posted in Case Studies, General Security, Malware, NSM, Why watch the wire? on 25 May, 2010 by Alec Waters

The following is a sanitised excerpt from an after action report on a malware infection. Like the song this post is named after, the report goes all the way from “It’s a mystery to me” to “What have you got to take away?”. The report is in two parts:

  • Firstly, a timeline of events that was constructed from the various forms of log and network flow data that are routinely collected. Although not explicitly cited in this report, evidence exists to back up all of the items in the timeline
  • The second part is an analysis of the cause of all the mischief

To set the scene, the mystery to be unraveled concerned two machines on an enterprise network which reported that they had detected and removed malware. Detected and removed? Case closed, surely? Never ones to let a malware detection go uninvestigated, we dig deeper…

Part One – Timeline of events

Unknown time, likely shortly before 08:12:37 BST
Ian Oswald attaches a USB stick to his computer. $AV_VENDOR does nothing, either because it detects no threat or because it isn’t working properly. The last message from $AV_VENDOR on Ian’s machine to $AV_MANAGEMENT_SERVER was on 30th January 2009, suggesting the latter is the case.

Based upon subsequent activity, malware is active on Ian’s machine at this point, and is running under his Windows account as a Domain Administrator. The potential for mischief is therefore largely unconstrained.

08:12:37 BST
Ian’s machine ($ATTACKER) requests the URL:

This returns a page containing the outside global IP address of Ian’s machine (i.e., the IP address it will appear to have when communicating with hosts on the Internet).

Something on Ian’s machine now knows “where it is” on the Internet.

It is likely that the inside local IP address of Ian’s machine is also determined at this point, so something on Ian’s machine also knows “where it is” within the enterprise.

08:12:39 BST
Ian’s machine requests the URL:


This is a geolocation site, and returns a string containing a country code.

Something on Ian’s machine now knows “where it is” geographically.

08:12:56 BST
Ian’s machine attempts to download a hostile executable. This download is blocked by $URLFILTER, which is fortunate because at the time the executable was not detected as a threat by $AV_VENDOR.

NOTE – Subsequent analysis of the executable confirms its hostile nature; detailed discussion is out of the scope of this report, but a command and control channel was observed, and steganographic techniques were used to conceal the download of further malware by hiding an obfuscated executable in the Comments extension header of a gif file.

NOTE – It was not known until later in the investigation if Ian’s machine already knew where to download the executable from, or if a command and control channel was involved. Ian’s machine was running Skype at the time, which produces sufficient network noise to occlude such a channel when only network session information is available to the investigator.

After this attempted download, Ian’s machine starts trying to contact TCP ports 139 and 445 on IP addresses that are based on the determined outside global address of Ian’s machine (xxx.yyy.49.253).

TCP139 and TCP445 are used by the SMB protocol (Windows file sharing).

The scan starts with IP address xxx.yyy.49.1 and increments sequentially. As the day progresses, the scan of this range finishes, and further adjacent ranges are scanned in turn.

This is the behaviour of a worm. Something on Ian’s machine is trying to propagate to other machines nearby.

08:13:14 BST
Ian notices a process called ip.exe running in a command prompt window on his computer, and physically disconnects the machine from the network. This action was taken a very creditable 41 seconds after the first suspicious network activity.

Ian attempts to stop ip.exe running and remove it from his machine, and also deletes a file called gxcinr.exe from his USB stick.

08:24:51 BST
Ian reattaches his machine to the network.

08:25:36 BST
Ian uses Google to research ip.exe, and reads a blog posting which talks about its removal. Ian considers his machine clean at this point since the most obvious indicator (ip.exe) is no longer present.

08:57:32 BST
The external sequential SMB scanning observed before the attempted cleanup restarts at xxx.yyy.49.1.

Additionally, an internal scan commences at this point, covering the subnet in use on the enterprise’s internal network.

As the day progresses, the scan moves on to cover several further subnets before Ian’s machine is switched off for the day. These latter subnets are not in use, so no connections were made to target systems.

The scan of the internal subnet that is in use is bound to bear fruit. Within this range, any detected SMB shares on enterprise computers will be accessed with the rights of a Domain Administrator.

08:58:43 BST
$AV_VENDOR detects and quarantines a threat named “W32/Autorun.worm.zf.gen” on $VICTIM_WORKSTATION_1 (Annie Timms’ machine). The threat was in a file called gxcinr.exe, which was in C:\Users\ on Annie’s machine. $AV_VENDOR cites $ATTACKER (Ian’s machine) as the threat source. This alert was propagated to the $SIEM via $AV_MANAGEMENT_SERVER, and the $SIEM sent an alert email to the security team.

09:00:08 BST
$AV_VENDOR detects and quarantines the same threat in the same file in the same location on $VICTIM_WORKSTATION_2. Linda Charles was determined to be the logged on user at the time, and again Ian’s machine was cited as the threat source. This alert was propagated to the $SIEM via $AV_MANAGEMENT_SERVER, and the $SIEM sent an alert email to the security team.

09:34:45 BST
$AV_VENDOR on $VICTIM_SERVER_1 detects the same threat in the same file, but in a different directory (C:\Program Files\Some software v1.1\). The threat was quarantined. No threat source was noted, although a successful type 3 (network) login from Ian.Oswald was noted immediately prior to the detection, making Ian’s machine the likely attacker. Unfortunately, the detection was _not_ propagated to $AV_MANAGEMENT_SERVER, and therefore did not find its way to the $SIEM to be sent as an email.

09:37:51 BST
The same threat was detected and quarantined on $VICTIM_SERVER_2, this time in E:\Testbeds\TestOne\. Again, a type 3 login from Ian.Oswald precedes the detection, which again was not propagated to $AV_MANAGEMENT_SERVER, $SIEM or email.

09:40:00 BST
The same threat appears on $VICTIM_SERVER_3, in C:\Program Files\SomeOtherSoftware. $AV_VENDOR does not detect the threat, because it isn’t fully installed on this machine.

NOTE – Detection of gxcinr.exe on this machine was by manual means, after the malware’s propagation mechanism was deduced (see next entry). $AV_VENDOR was subsequently installed on $VICTIM_SERVER_3 and a full scan performed. For what it’s worth, this did not detect any threats.

09:46:05 BST -> 09:54:44 BST
The border $IPS sensor detected Ian’s machine connecting to and enumerating SMB shares on three machines on $ISP’s network (i.e., other $ISP customers external to the enterprise).

This clue helps us see how the malware is spreading, and why the threats were detected in the cited directories.

The malware conducts a sequential scan of IP addresses, looking for open SMB ports. If it finds one, it enumerates the shares present, picks the first one only, and copies the threat (gxcinr.exe) to that location (i.e., \\VICTIMMACHINE\FirstShare\gxcinr.exe):

  • C:\Program Files\Some software v1.1\ equates to the first share on $VICTIM_SERVER_1 – \\$VICTIM_SERVER_1\Software
  • E:\Testbeds\TestOne\ equates to the first share on $VICTIM_SERVER_2 – \\$VICTIM_SERVER_2\TestOne
  • C:\Users equates to the first share on Annie’s and Linda’s machine – \\$VICTIM_WORKSTATION_1\Users and \\$VICTIM_WORKSTATION_2\Users
  • C:\Program Files\SomeOtherSoftware equates to the first share on $VICTIM_SERVER_3 – \\$VICTIM_SERVER_3\SomeOtherSoftware
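The propagation logic deduced above (enumerate the victim’s shares, pick the first one only, drop the payload there) can be expressed as a small function. This is a reconstruction of the observed behaviour rather than the malware’s actual code, and the host and share names are illustrative:

```python
def propagation_targets(victim, shares, payload="gxcinr.exe", marker="khw"):
    """Return the UNC paths the worm would write to: the payload plus the
    zero-byte marker file, dropped into the FIRST enumerated share only."""
    if not shares:
        return []
    first = "\\\\%s\\%s" % (victim, shares[0])
    return [first + "\\" + payload, first + "\\" + marker]

print(propagation_targets("VICTIM1", ["Users", "Printers"]))
# -> ['\\\\VICTIM1\\Users\\gxcinr.exe', '\\\\VICTIM1\\Users\\khw']
```

Replaying exactly these steps by hand is what allowed the rest of the enterprise to be checked for undetected infections.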

This knowledge allows us to manually check other machines on the enterprise network by performing the same steps as the malware. Other machines and devices were found to have open file shares, but either the shares were not writeable from a Domain Administrator’s account, or there was no trace of the threat (gxcinr.exe).

Circa 14:00 BST
Ian turns his machine off and leaves the office until the following Monday.

The following Monday
Ian returns to the office and wipes his machine, installing Windows 7 in place of the previous Windows Vista. “Patient Zero” is therefore gone forever, and any understanding we could have extracted from it is lost.


Part Two – Analysis of gxcinr.exe

It is necessary to understand what this file is, what it does, and how it persists in order to know if we have eradicated the threat. We also need to understand if gxcinr.exe was responsible for the propagation from Ian’s machine, or if it was just the payload.

Samples of gxcinr.exe were available in five places: on the unprotected $VICTIM_SERVER_3 server, and in the quarantine folders of the four machines where $AV_VENDOR detected the threat. We reverse-engineered the quarantine file format used by $AV_VENDOR and extracted the quarantined threats for comparison.
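As an illustration of the extraction step: several AV products obfuscate the captured file inside their quarantine container with a single-byte XOR (0x6A is the key commonly documented for one major vendor’s .bup format). Which scheme $AV_VENDOR actually uses is sanitised here, so the sketch below is under that assumption:

```python
def xor_decode(data, key=0x6A):
    """Undo the single-byte XOR obfuscation some AV quarantine
    containers apply to the captured threat. XOR is its own inverse,
    so the same function both encodes and decodes."""
    return bytes(b ^ key for b in data)

# Round trip on a dummy "sample":
sample = b"MZ\x90\x00 fake executable"
assert xor_decode(xor_decode(sample)) == sample
```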

On the $VICTIM_SERVER_3 machine, the MAC times for gxcinr.exe were as follows:

Modified: aa/bb/2009 09:13
Accessed: xx/yy/2010 09:40
Created: xx/yy/2010 09:40
No file attributes were set.

Additionally, a zero-byte file called khw was found alongside gxcinr.exe. Its MAC times correlate with those of gxcinr.exe, indicating that it was either propagated along with gxcinr.exe or created by it:

Modified: xx/yy/2010 09:40
Accessed: xx/yy/2010 09:40
Created: xx/yy/2010 09:40
Attributes: RHSA

khw was also found on Linda Charles’s machine, and removed manually. No other machines had khw on them.

All five samples of gxcinr.exe were found to be identical:

File size: 808164 bytes

MD5 : 2511bcae3bf729d2417635cb384e3c08
SHA1 : 45fe02e4489110723c2787f3975ae7122b905400
SHA256: b656c57f037397a20c9f3947bf4aa00d762179ebf6eb192c7bc71e85ea1c17f3

VirusTotal report is here:

The AV detection rate is pretty good, although we were the first people to submit this specific malware sample to VirusTotal for analysis (i.e., it’s a reasonably fresh variant of a malware strain).

Whilst it isn’t safe to judge the nature of malware by AV vendors’ descriptions alone, most of the descriptions have AutoIt in their names. AutoIt is a scripting tool that can be used to produce executables to carry out Windows tasks. Analysis of ASCII and Unicode strings contained in the sample lends weight to this theory.

AutoIt has an executable-to-script feature, but this was unable to extract the compiled script. Research suggests that this feature has been removed from recent versions of the software as a security precaution.

The sample contains the string below, amongst many other intelligible artefacts:

“This is a compiled AutoIt script. AV researchers please email for support.”

We emailed the address above asking for help, but no response was received.

The next step was to carry out dynamic analysis of the sample (i.e., the executable was run in an instrumented and controlled environment and the results observed).

When run, gxcinr.exe did very little. There was no geolocation, no IP address determination, no instance of ip.exe, no scanning, and no second-stage download.

However, three temporary files were discovered which gxcinr.exe created and later attempted to remove:

  1. aut1F.tmp (random filename, judging by repeated runs) is binary; its first four bytes are the ASCII string EA06. There is no obvious decode or deobfuscation.
  2. jbmphni (random filename, judging by repeated runs) is ASCII, and starts off “3939i33t33i33t3135i33t…..”. There are many repeating patterns in the file, some of which are several tens of characters long. Again, there is no obvious decode or deobfuscation.
  3. s.cmd is a cleanup script, run by gxcinr.exe after it itself has deleted the files above:

    :loop
    del “C:\gxcinr.exe”
    if exist “C:\gxcinr.exe” goto loop
    del c:\s.cmd

Running the sample in this manner yielded no obvious activity, infection, propagation or persistence.

However, if the file khw is present in the same directory as gxcinr.exe, different behaviour is observed. The three files above are extracted and the cleanup is performed as before, but in addition:

  • A slightly modified version of the sample is copied to c:\windows\system32 as csrcs.exe. The name of the file is a deliberate attempt to hide in plain sight – there is a legitimate windows file called csrss.exe. Additionally, the file’s create and modified times are artificially set to match the date that Windows was installed. VirusTotal says this of csrcs.exe:
  • No attempt is made to hide csrcs.exe from detection, nor does it delete its prefetch file. No matching prefetch files were found on the machines belonging to Annie and Linda, so it is unlikely that the malware executed there. Prefetch is disabled by default on Windows Server 2003, so this kind of analysis cannot be performed on $VICTIM_SERVER_1, $VICTIM_SERVER_2, and $VICTIM_SERVER_3.
  • csrcs.exe is set to auto-run with each reboot by means of various registry keys.
  • csrcs.exe contacts a command and control server at IP address qqq.www.eee.rrr on varying ports in the 81-89 range. The request was HTTP, and looked like this:

    GET /xny.htm HTTP/1.1
    Cache-Control: no-cache

    The response is encoded somehow:

    HTTP/1.1 200 Ok
    Content-Length: 2811
    Last-modified: xxx, xx xxx 2010 11:13:30 GMT
    Content-Type: text/html
    Connection: Keep-Alive
    Server: SHS

    <zZ45sAsM8Y77V69S888S6 … snip … 80ew0kty0j4tyj004>

    There is no obvious decode of the response, but we are likely receiving instructions of some kind. Looking retrospectively at the evidence secured at the time, we can see Ian’s machine contacting this IP address:

    08:12:39.048 BST: %IPNAT-6-CREATED: tcp xxx.yyy.49.253:50345 qqq.www.eee.rrr:85 qqq.www.eee.rrr:85

    08:12:41 BST Cisco Netflow : bytes: 289 , packets: 5 , /50345 -> qqq.www.eee.rrr /85 - TCP

    08:13:39.393 BST: %IPNAT-6-DELETED: tcp xxx.yyy.49.253:50345 qqq.www.eee.rrr:85 qqq.www.eee.rrr:85

    This C&C channel was not readily obvious due to the presence of Skype on Ian’s machine – there were too many other connections to random IP addresses on random ports for this to stand out.

    Despite the fact this suspected C&C channel uses unencrypted HTTP, only nominated ports are inspected via $URLFILTER (port 80 is inspected as the default, plus other ports where we have seen HTTP running in the past). At the time, 85 was not one of the nominated ports so no inspection of this traffic was carried out. Had port 85 been in the list, $URLFILTER would have blocked the request, as the destination is categorised as Malicious. It is unknown if this step would have prevented the worm from spreading, but it would have at least been another definite indicator of malice.

  • csrcs.exe then gets its external IP address and geolocation in the manner observed from Ian’s machine
  • csrcs.exe then starts scanning in the manner observed from Ian’s machine
  • csrcs.exe infects other machines in the manner observed from Ian’s machine
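With session data to hand, beaconing of this kind can be surfaced after the fact: look for a single source repeatedly touching one destination across several different ports in the 81-89 band. The flow-record format and addresses below are illustrative assumptions (documentation ranges standing in for the sanitised ones), not the $SIEM’s own:

```python
from collections import defaultdict

def find_lowport_beacons(flows, ports=range(81, 90), min_ports=3):
    """Flag (src, dst) pairs where one source has hit the same
    destination on several different ports in the 81-89 band, the
    pattern csrcs.exe showed in the sandbox."""
    seen = defaultdict(set)  # (src, dst) -> set of ports used
    for src, dst, dport in flows:
        if dport in ports:
            seen[(src, dst)].add(dport)
    return [pair for pair, p in seen.items() if len(p) >= min_ports]

flows = [("", "", p) for p in (81, 83, 85, 88)]
flows += [("", "", 443)]  # Skype-ish noise
print(find_lowport_beacons(flows))  # -> [('', '')]
```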

In our tests, csrcs.exe created a file on each remote victim machine called owadzw.exe, and put the file khw alongside it (suggesting that gxcinr.exe is a randomly generated filename). We did not observe any attempt to execute owadzw.exe, nor were any registry keys modified. The malware appears to spread, but seems to rely on manual execution when the remote file share is on a fixed disk.

However, if the file share that is accessed is removable media (USB stick, camera, MP3 player or whatever), an autorun.inf file is created that will execute the malware when the stick is inserted in another computer. It is likely therefore that Ian’s USB stick was infected in this manner, and the malware was unleashed on the enterprise by virtue of him plugging it in.
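For reference, a worm-dropped autorun.inf generally looks something like the fragment below. This shows the general mechanism only; the exact contents of this variant’s file were not preserved, and owadzw.exe is the filename seen in our tests:

```ini
[autorun]
; illustrative fragment, not the actual file from this incident
open=owadzw.exe
shell\open\command=owadzw.exe
action=Open folder to view files
```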

The VirusTotal result for owadzw.exe is similar to the results for gxcinr.exe and csrcs.exe, so they are all likely to be slight variations of one another:

We did not observe csrcs.exe trying to download any other executables, as was the case with Ian’s machine, nor did we observe ip.exe running on an infected machine.

Aside from spreading, the purpose of the malware is unknown. However, it is persistent (i.e., it will run every time you start your machine) and it does appear to have a command and control facility. It is entirely possible that at some later date it will ask for instructions and be told to carry out some other kind of activity (spamming, DoS, etc.), or it may download additional components (keyloggers, for example).

Where do we stand?

We understand the malware’s behaviour, and know how to look for indicators of it running both in terms of network activity and residual traces on the infected host. At present there are none, so we appear to be clean.

What went right?

  • An incident was triggered by virtue of an explicit indicator of malice (the $AV_VENDOR alerts from Annie’s and Linda’s machines).
  • Where functioning properly, $AV_VENDOR prevented the spread of the malware.
  • $URLFILTER blocked a malicious download.
  • We were able to preserve and analyse sufficient evidence in total absence of Patient Zero (Ian’s machine) for us to understand how the malware spreads. This let us carry out a comprehensive search for any other, undetected, infections (like the one on $VICTIM_SERVER_3).
  • We were able to recover a sample of the malware and analyse it to the extent that we can say with a good degree of confidence that it was present on Ian’s USB stick, and was responsible for the whole incident (as opposed to merely being the payload for some other unknown malware that had been running on Ian’s machine for an unknown period of time).
  • We were able to sharpen our detection of the malware, now that we know how it behaves.

What went wrong?

  • The infection was not stopped at its point of entry (Ian’s machine), most likely because $AV_VENDOR wasn’t working properly.
  • The malware executed as a Domain Administrator, effectively unlimiting the damage it could have caused.
  • The malware spread outside of the enterprise and infected other machines.
  • The malware infected an enterprise machine unprotected by $AV_VENDOR.
  • $VICTIM_SERVER_1 and $VICTIM_SERVER_2 did not report their infection to $AV_MANAGEMENT_SERVER. These detections were only discovered as part of the evidence preservation process.
  • $URLFILTER did not block the C&C channel due to the way it was configured.
  • The $IPS didn’t fire any “scanner” signatures.
  • No statistical alarms were raised by the $SIEM.

What can be changed or done better?

  • A review of the state of the $AV_VENDOR deployment should be carried out. We need to know what the coverage is like, how well the updates are working, and why certain machines don’t report to $AV_MANAGEMENT_SERVER.
  • Some form of USB device control should be implemented.
  • People with Administrative rights on the enterprise network should have two accounts, one “Admin” account and one “Normal” account. The Normal account should be used day-to-day, with the Admin account used only where necessary. This would put a cap on the capability of any malware that is able to run.
  • Unnecessary fileshares should be removed. It was determined experimentally that if you share anything within any user profile on a Vista or Win7 machine, the entire c:\users\ directory gets shared. This was the case on Annie’s and Linda’s machines.
  • The presence of Skype doesn’t help when dealing with an incident like this.
  • Had a tighter outbound port filtering policy been applied, the command and control channel would have been blocked, as would the worm’s attempts to propagate outside of the enterprise.


The production of this report would not have been possible without the routine collection of evidence from everything-you-can-lay-your-hands-on – servers, routers, switches, AV servers, URL filters and IPS devices all contributed to the report (notable things that did not contribute to the report are Ian’s machine and his USB stick, since they were wiped before they could play a part).

Without these event sources, all we’d have had were two reports of removed malware. Hardly cause for alarm, surely….

Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters(at)

Attack of the Clones

Posted in Case Studies, NSM on 20 August, 2009 by Alec Waters

It’s not always possible or feasible to collect the four types of information useful for conducting NSM, for the usual reasons (“cost of software/hardware/people/time” being near the top of the list). However, this doesn’t mean that the game is lost before it’s even begun – Sguil, for example, doesn’t have any facility for statistical alerts, but that doesn’t mean that it’s not a powerful tool.

The following tale took place where only session and alert data were available. Despite this apparent lack of information, we were able to solve the mystery without the intervention of Scooby and the gang, and we were able to dodge the temptation to take an IPS alert at face value (a clear case of defensive avoidance!)

The network in question was purely a client site; there were no public servers to worry about. Network security was pretty formulaic:


There’s a PIX doing the standard firewall/NAT job, and an inline IPS scrutinising everything that goes in or out. The logging level on the PIX is turned all the way up to “debugging”, so we get an export of session data in the form of messages like PIX-6-302013/PIX-6-302014 etc. Both the IPS and the PIX are reporting to a central log collector, a Cisco CS-MARS in this case.

The trigger for this investigation was an alert from the IPS. Lots of them, in fact. The signature that fired was one we’d never seen before, which either means another class of false positive to tune out or that something interesting is actually happening.

Even more interesting was the fact that the signature wasn’t just your typical brute-force pattern matching job – it was one of Cisco’s “anomaly detection” signatures that fires on behaviour observed over time. The signature denotes a TCP scanner hard at work scanning external IP addresses. The signature writeup is frustratingly lacking in detail; what it means when it says “scanning” would be a useful thing to know, for starters.

Never mind. NSM Ninjas don’t need vendor writeups. We can reverse engineer a signature’s firing conditions ourselves.

Looking at the alerts we’d got, we can see:

  • There were zillions of alerts over a five-ish minute period.
  • The alerts cite five distinct internal IP addresses as being those doing the “scanning”.
  • At the end of the five-ish minutes, the alerts stop as abruptly as they started.

Hmm. Let me see if I’ve got this straight. Five of my hosts all start “scanning” at the same time, they carry on scanning for five minutes, and then they all stop at the same time?


Maybe we really do have a worm outbreak here. But why only five hosts? Why did they stop at the same time? Is there a command and control element at work here? Are my hosts pwned? Do I trust the IPS alerts and start rebuilding the “compromised” hosts? Questions pour down like rain, and we’re in for some serious flooding unless we wheel out the umbrella-and-wellies combo that is NSM and Vigilance to Detail.

First, let’s see exactly what these hosts were doing during this five minute window. We’ve got no full-content capture here, remember, so we’re going to have to hit the session data from the PIX pretty hard. Using this, we can see that each of the five hosts tried to contact between two and three hundred non-local IP addresses in our five minute material time frame (MTF). This is definite worm behaviour. There’s a small degree of crossover between the pools of target IP addresses, but there’s no one address that they all have in common (i.e., there’s no single command and control channel).

Next, we can check the destination port – if we’re dealing with a worm, this will be a good clue to which one it is. All the ports were TCP, but the port numbers were random. All over the place. This doesn’t seem like worm behaviour to me – random IP addresses I can understand, but random ports makes little sense.

Now we can look at data volumes – how much data did our “scanners” actually send. We get another interesting answer – not a single byte of payload was carried. This could possibly be explained by the random nature of the destination ports – given the utter shotgun nature of the “scanning”, I guess it’s not too likely that we’re going to hit an open port.
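Both measurements above (destination fan-out per host, and total payload moved) are simple aggregations over the PIX session records. A sketch under an assumed record format, with documentation addresses:

```python
from collections import defaultdict

def fanout_and_bytes(sessions):
    """Summarise session records per source host: how many distinct
    destinations it touched, and how many payload bytes it moved.
    High fan-out plus zero bytes is the scanner-that-never-connects
    pattern seen here."""
    stats = defaultdict(lambda: [set(), 0])
    for src, dst, nbytes in sessions:
        stats[src][0].add(dst)
        stats[src][1] += nbytes
    return {src: (len(d), total) for src, (d, total) in stats.items()}

sessions = [("", "198.51.100.%d" % i, 0) for i in range(250)]
print(fanout_and_bytes(sessions))  # -> {'': (250, 0)}
```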

So we have a frenzy of totally ineffective scanning, with the attackers apparently synchronised somehow. There’s not too much more we can learn from the session data at this point, so we have to look for other clues. The plan is to see what kinds of events the PIX was splurting out in the thirty seconds before and after the first IPS alarm – we’re after the catalyst for the scanning, if there is one.

All the while, I can’t help but think I’ve seen these five source IP addresses together before, but I can’t quite put my finger on it…

Anyway, back to the catalyst seeking. The ad-hoc query interface on the CS-MARS is pretty reasonable, and it’s really easy to ask it for a list of event types seen from a particular device for a particular MTF. Taking the start of the scanning as the start point and working from T-30 seconds to T+330, we notice a few things:

  • There seems to be a big gap in the events output by the PIX – it’s been totally silent during the initial period of scanning.
  • During the latter phases of scanning, there were loads of these messages logged: “%PIX-3-305006: outbound portmap translation creation failed”. These are raised when the PIX can’t create a NAT translation, due to lack of resources, or a TCP protocol violation, etc.
  • We also see a single instance of this: “%PIX-6-199002: Startup completed. Beginning operation”. This means that the PIX rebooted for some reason.

We can express this as a timeline:


Finally, I remember where I’ve seen the five IP addresses before, and all the pieces fall into place.

The five IP addresses are those of people who use Skype. Whilst it obviously has great merit as a piece of communications software, its use of apparently random destination IP addresses and ports plays merry hell with NSM reports based upon session data. For this reason, I run a daily report of Skype users so that I can exclude them from these reports if I need to (it’s easy to spot a Skype client starting up because it checks to see if it’s running the latest version – I look for which IP addresses are making the check).

After piecing together all the evidence, we come up with this:

  • Five Skype clients start up. They connect to many many destination IP addresses on random ports.
  • For whatever reason, the PIX crashes and reloads.
  • The Skype clients don’t know this, and try to maintain their existing TCP connections (they must do some kind of keepalive).
  • After a minute or two, the PIX has finished reloading.
  • Whilst this is going on, the Skype clients are still trying their keepalives. Once the PIX is working again, the keepalives still fail because the PIX is a stateful firewall. Each keepalive only has the ACK flag set because it’s part of an existing session as far as Skype is concerned. However, the PIX hasn’t seen the start of the TCP session and therefore has no “state container” for it. This is the reason for all the “outbound portmap translation creation failed” messages, and also the reason why we didn’t see any actual payload transferred – the PIX dropped all of the keepalives.
  • Meanwhile, the IPS (sitting in between the Skype clients and the PIX) is seeing all of this and is merrily firing its “External Scanner” signature.
  • Eventually, the session timeout on all the Skype clients fires, and they all declare their existing sessions dead and re-establish them from scratch with SYN.

So, there we have it. The IPS alerts were false positives in this instance, caused by a tenacious piece of software and a flaky piece of hardware. Our lack of full-content capture wasn’t a problem – we solved the mystery without it, and even if we’d had it there wouldn’t have been anything to see in this case. Another victory for the umbrella-and-wellies combo!

Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters(at)

The case of the Phantom Hacker

Posted in Case Studies, NSM on 3 July, 2009 by Alec Waters

It’s another quiet day in the Secret Underground NSM Bunker. The lights on the enormous world map at the far end of the cavern are blinking green, my army of orange-boiler-suit-clad underlings are hard at work on my next diabolical scheme, and the persian cat on my lap is purring contentedly. Martini-drinking spies are nowhere to be seen.

Out of the blue, the cat’s ears prick up and she starts to hiss. Seconds later, a security alert is raised (why have an IDS when you can have a cat?). Because we’re collecting logs as well as traffic data, we can tell pretty quickly when an account is locked out due to too many failed logins. It’s usually a fat-fingered user who will shortly be calling the helpdesk, but today it’s not. Today it’s one of the domain admins. Now admins can have a fit of fat-fingeredness too, but equally someone could have tried to brute force a login into a privileged account. We need to find out which it is.

A quick call to the admin’s desk reveals that he’s not there – he’s been out of the office for the last hour, and his computer appears to be off. Furthermore, one of the admin’s colleagues has had sight of the admin’s machine for the entire time, and nobody has been near it.

So, how has a powered-down computer managed to attempt a login without anyone actually physically being near it? I can’t ask the cat, because she’s run off and is hiding behind my latest prototype doomsday device. It’s time to mine all of that information we’ve been collecting, and as logs were the trigger we’ll start with these:

  • The “account locked out” message came from a domain controller, and was preceded by the proper number of “login failure” messages.
  • All of these messages cite the admin’s machine as the computer that was used for these login attempts. We hope these messages are trustworthy, otherwise we’ve got much bigger problems (i.e., someone is messing with machine accounts and server event logs).
  • The login failed messages all say that an interactive login was attempted locally – it wasn’t a remote desktop session or other network login (SMB, etc.).
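The logon-type detail in the last bullet is what makes the triage possible. Filtering the event log for locally interactive failures is straightforward once the records are parsed; the simplified record shape below is an assumption for illustration (4625 is the Vista-and-later failed-logon event ID, 529 on older systems; logon type 2 is local interactive, 3 is network, 10 is remote desktop):

```python
def interactive_failure_sources(events):
    """From (event_id, logon_type, workstation) records, return the
    workstations cited in FAILED logons that were locally interactive
    (type 2), ruling out network (3) and remote desktop (10) logons."""
    FAILED_LOGON = 4625
    return sorted({ws for eid, ltype, ws in events
                   if eid == FAILED_LOGON and ltype == 2})

events = [(4625, 2, "ADMIN-PC"), (4625, 3, "FILESRV"), (4624, 2, "ADMIN-PC")]
print(interactive_failure_sources(events))  # -> ['ADMIN-PC']
```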

There are two ways in which a local interactive login can be attempted. You can either sit at the machine and try it, or you can use some remote-access software like VNC that only does screen-scraping rather than interacting with the foul guts of Windows. We can rule out the former, assuming we can trust what the admin’s colleague has told us. We’ve got an NSM sensor at the network’s border, so we can see if the admin’s machine is being contacted over the Internet.

A few quick queries later, we can see that there has been absolutely no Internet-bound traffic either to or from the admin’s machine since they logged out and left for lunch. This isn’t the most favourable outcome, since the alternatives are:

  • There’s a physical backdoor to our network somewhere that someone is using. A modem, a 3G phone, a rogue AP, or something similar (wireless is not permitted at all at this location, so if wireless is involved it’s not an authorised device)
  • It’s an inside job – the attacker is either inside the building right now, or has just made their getaway. This would be irritating, not least since our NSM capability only covers the network’s borders – if it’s an inside job, NSM won’t have caught it at all.

The cat is now trying to force open the covers of the doomsday device in order to set it off… Bad kitty!

The orange-clad goons are busy mobilising to secure the admin’s machine for imaging and forensic examination when we get wind of an interesting new helpdesk ticket. Apparently, someone is having trouble with their new keyboard. Their new wireless keyboard.

A penny rolls towards a precipice and gets ready to drop, and the cat heads off to her dish of cream – always a good omen.

It turns out that a few people had taken delivery of wireless keyboards that morning. The admin was one of the recipients, and had set his up, logged out of his machine, and gone to lunch. A little later, someone else had unpacked theirs. They turned on their machine, and waited for it to boot. Not being a touch typist, they dropped their head to their new toy and hit control-alt-delete, and then hit tab to skip from the pre-populated username field to the password field (bad policy! naughty policymaker!). Still with head down, they typed their password and hit return.

Looking up at the screen they saw that it was still prompting for control-alt-delete, so they repeated the process. Several times. They finally hit the keyboard’s “sleep” key to no effect before getting the old keyboard back and raising a support ticket.

Now I’m sure you’ve all worked out what was going on here (the cat certainly had). The user’s new wireless keyboard was on the same channel as the admin’s, and all his keystrokes were driving the admin’s machine, eventually switching it off. Cue much debate about the merits of wireless keyboards…

Case closed, the cat returns to my lap, the orange-clad goons get back to their diabolical tasks, and all is well in the Secret Underground NSM Bunker….

Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters(at)

Listening for the Grasshopper

Posted in Case Studies, NSM on 25 June, 2009 by Alec Waters

Here’s a case study I originally wrote for SecurityMonkey’s blog, tidied up a bit, and with a somewhat less monkey-related theme:

Listening for the Grasshopper

Hope you find it interesting. Comments welcome!

Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters(at)

