Archive for the Cisco Category

A Tale of Two Routers

Posted in Cisco, NSM on 14 September, 2011 by Alec Waters

Take a look at the diagram below, showing two (Cisco) routers. HugeCorpCoreRouter is a mighty behemoth with a six-figure price tag. It has redundant route processors, handles many gigabits per second of business-critical traffic, has all sorts of esoteric connections and requires a squad of elite ninja black-ops CCIEs to keep it all running.

TinySOHORouter, by comparison, is a trivial speck on the corporate network diagram. It has a single ADSL connection and performs the usual SOHO tasks of NAT, firewall, DSL dialup, etc. Both routers export Netflow data to a central collector.

As you ponder my da Vinci-like Visio skills, consider the following question. Which router will pose the greater Netflow analysis challenge to the security team?


You’ve probably guessed it by now – the troublesome router is TinySOHORouter. HugeCorpCoreRouter, whilst powerful and complex, has a relatively easy job when it comes to Netflow. TinySOHORouter however has three sticking points that could prove to be troublesome for a Netflow analyst. None of the following features are typically running on your average big beefy HugeCorpCoreRouter:

  1. The firewall process (or any kind of filtering ACL). HugeCorpCoreRouter is concerned with forwarding datagrams as fast as possible through the core – firewall operations do not live here
  2. The NAT process
  3. The dialer interface associated with the ADSL connection

Let’s look at each of these in turn.

The firewall process

Netflow is, by default, an ingress-based technology, which means that the router’s flow cache is updated when datagrams are received by an interface. However, a datagram doesn’t have to enter and leave the router to leave an impression in the flow cache. This manifests itself in an interesting way when a firewall is sticking its oar in.

The Netflow v5 flow record format has fields that describe the SNMP interface indexes of the input and output interfaces for any given flow. This is useful, because it means that your Netflow analysis tools can tell you that when 10.11.12.13 spoke to the webserver on 192.168.0.1, the traffic from 10.11.12.13 entered the router on FastEthernet4/23 and left it on GigabitEthernet0/2. This also makes it possible to draw pretty per-interface graphs of Netflow traffic. (BTW, you’ll want to use the “snmp-server ifindex persist” command, otherwise the SNMP interface indexes could change when the router reloads, which can really confuse analysis!)
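
For reference, persistence is a one-line global command (a minimal sketch – there’s also a per-interface form if you only care about certain interfaces):

! Keep SNMP ifIndex values stable across reloads
router(config)# snmp-server ifindex persist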

But what if there were an ACL in place that drops all traffic to port 80 on 192.168.0.1? Dropped datagrams are one of the byproducts of any kind of firewall or ACL – how does Netflow handle those?

Let’s say a datagram from 10.11.12.13 is received, destined for 192.168.0.1:80. As this destination is denied by an ACL, the router duly drops it. Netflow, being an ingress technology, will still put an entry into the flow cache to describe the flow, despite the fact that the datagram was dropped by an ACL (even if the ACL is applied in the inbound direction on the receiving interface). There is no output interface for the flow in this case, so what does the router put into the flow record to denote this?

Flows that are either a) dropped by the router or b) destined for the router itself (SSH sessions, for example) will have zero in the output interface field, to show that the flow entered the router but did not leave.

So why is this a problem for the analyst?

Let’s say I run a report that shows all destination ports for destination IP address 192.168.0.1 (in a naive attempt to find out “what services have people been using on my server?”). Much to my surprise, port 80 features prominently. Why’s it in the report? Isn’t it blocked by an ACL? Have we been hacked? Has the APT Bogeyman paid us a visit?

Fortunately, we’re safe. Port 80 features because 10.11.12.13 tried to talk to it, causing a flow to be logged despite the fact that the ACL dropped the traffic. If you were to re-run the report asking for the number of bytes transferred between 10.11.12.13 and 192.168.0.1:80, we’d see 40 bytes in the client->server direction (the size of an IP datagram with a TCP SYN in it) and zero bytes in the server->client direction, which describes the ACL drop nicely.

Keep this in mind when designing reports based on Netflow data. Certain products like Netflow Analyser are able to take this behaviour into account to a certain degree (“Suppress Access Control List related drops”). Alternatively, you could use the Netflow v9 flow record format if your router and analysis tools support it. There is a useful field called “FORWARDING STATUS” which tells you if a flow was forwarded, dropped or consumed, allowing the analyst to differentiate between traffic dropped by the router and traffic destined for the router. Very handy.
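
Switching the export format over is a one-liner, although whether FORWARDING STATUS actually appears in your exports depends on platform and IOS version – treat this as a hedged sketch:

! Export flow records in Netflow v9 format instead of v5
router(config)# ip flow-export version 9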

The NAT process

Our second bugbear can also cause problems, especially if we want to ask questions like “show me all the traffic destined for the single PC behind TinySOHORouter” – the report in this case will be totally blank, even if the PC has been hitting Facebook all day long. But why?

Take the simple case of an HTTP flow between our single PC at 10.11.12.13 (a private IP address on a router’s FastEthernet0 interface) and 123.123.123.123 (a public webserver on the Internet via FastEthernet1). On its way out of the router, the private 10.11.12.13 gets NATted into 111.111.111.111, the IP address of FastEthernet1.

From Netflow’s point of view, it goes like this:

  • A TCP segment from 10.11.12.13 destined for 123.123.123.123 is received on Fa0. An entry in the Netflow cache accounts for this.
  • The router decides that the traffic should be sent out via Fa1, and does a source IP address NAT translation from 10.11.12.13 to 111.111.111.111 before it sends it on its way.
  • The TCP response is eventually received on Fa1 from 123.123.123.123 destined for 111.111.111.111, which is 10.11.12.13’s “outside” address. An entry in the Netflow cache accounts for this.
  • The NAT translation from 111.111.111.111 to 10.11.12.13 takes place, and the TCP response is sent out of Fa0.

Therefore, all of the returning traffic will be shown as destined for 111.111.111.111 and never 10.11.12.13 – this is because input accounting (including Netflow) occurs on the router before the NAT outside-to-inside translation takes place:

http://www.cisco.com/en/US/tech/tk648/tk361/technologies_tech_note09186a0080133ddd.shtml

There are three ways to get around, or at least mitigate, this problem:

  1. If your router and Netflow collector support it, disable ingress Netflow accounting on Fa1 and enable both ingress and egress Netflow accounting on Fa0 (the inside interface) – a sketch of this follows the list below. This means that all flows will be accounted for on the “inside” of the NAT process. Take care, though – by doing this we are causing Netflow to “ignore” all traffic that does not cross Fa0. This may or may not be a problem, depending on your topology and requirements. Also, think very carefully about this approach if your router has many layer 3 interfaces. If ingress and egress Netflow were to be enabled on both Fa0 and Fa1, there’s a chance your Netflow collector could see duplicated flows.
  2. If your router and Netflow collector support it, you can use the “ip nat log translations flow-export” command. This will log all NAT translations in a flow template that looks like this:
    templateId=259: id=259, fields=11
        field id=8 (ipv4 source address), offset=0, len=4
        field id=225 (natInsideGlobalAddress), offset=4, len=4
        field id=12 (ipv4 destination address), offset=8, len=4
        field id=226 (natOutsideGlobalAddress), offset=12, len=4
        field id=7 (transport source-port), offset=16, len=2
        field id=227 (postNAPTSourceTransportPort), offset=18, len=2
        field id=11 (transport destination-port), offset=20, len=2
        field id=228 (postNAPTDestinationTransportPort), offset=22, len=2
        field id=234 (ingressVRFID), offset=24, len=4
        field id=4 (ip protocol), offset=28, len=1
        field id=230 (natEvent), offset=29, len=1

    This will give you a log of all NAT translations that you can use to find out the actual destination for the traffic from 123.123.123.123 to 111.111.111.111. Your Netflow collector may even be smart enough to correlate this information onto other “standard” flow exports, which would be a very neat trick indeed.

  3. If your router supports it, you can use the “ip nat log translations syslog” command. This will dump all NAT translations to syslog like this:
    Sep 14 12:31:39.740 BST: %IPNAT-6-CREATED:
    tcp 192.168.0.88:4021 212.74.31.235:4021
    192.150.8.200:443 192.150.8.200:443
    Sep 14 12:32:53.733 BST: %IPNAT-6-DELETED:
    tcp 192.168.0.88:4021 212.74.31.235:4021
    192.150.8.200:443 192.150.8.200:443

    Take care, though – this approach can add significant load to your router, your syslog server, and your syslog analysis mechanisms, and it becomes a manual task to correlate the NAT translations from syslog with the Netflow exports from your router.
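
Here’s the sketch of option 1 promised above, using the interface names from the example (“ip flow egress” needs a reasonably recent IOS, so this is an illustration rather than a recipe):

! Stop accounting on the outside interface...
router(config)# interface FastEthernet1
router(config-if)# no ip flow ingress
router(config-if)# exit
! ...and account for both directions on the inside interface, so that
! flows are always recorded with inside (pre-NAT) addresses
router(config)# interface FastEthernet0
router(config-if)# ip flow ingress
router(config-if)# ip flow egress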

The ADSL link’s dialer interface

It varies with platform and configuration, but when using a DSL line with PPPoE/PPPoA the router creates a plethora of virtual interfaces. Of these, only the following are really of interest:

interface ATM 0/0/0
The physical ADSL interface

interface dialer 0
The dialer interface created by the user in order to connect to the DSL provider

interface virtual-access XX
A virtual interface created by the router, cloned from and bound to interface dialer0

Of these, only the dialer and virtual-access interfaces are layer 3 interfaces that can participate in Netflow, and of these the user only has direct control over the configuration of the dialer interface. So we just enable Netflow on TinySOHORouter’s dialer0 and inside ethernet interfaces and we’re done, right?

Not quite.

If you were to use your Netflow analysis tools to look at an interface graph for dialer0, all you would see is outbound traffic. You’d also notice that the virtual-access interface has popped up as well, showing only inbound traffic. No one interface has the complete picture.

This is, interestingly enough, the expected behaviour. Traffic from the ethernet network leaves the router via dialer0 because that’s what the default route says to do (“ip route 0.0.0.0 0.0.0.0 dialer0”). Therefore, when the ethernet interface receives a datagram destined for the Internet, Netflow will put the SNMP interface index of dialer0 into the flow cache. However, the router doesn’t actually use dialer0 to send or receive traffic, it uses the virtual-access interface cloned from it. This means that when datagrams are received from the Internet, they enter the router on virtual-accessXX instead of dialer0 or any of the other associated interfaces. This is why the dialer shows only outbound traffic and the virtual-access shows only inbound. All very logical and intuitive, I’m sure you’ll agree…

How to get around this? Either just “keep it in mind” when performing analysis, or hope that your Netflow analysis tools have some way to cater for it by plotting the outbound traffic on dialer0 and the inbound traffic on virtual-accessXX on the same graph.

Those are all the Netflow analysis “gotchas” that spring to mind – can anyone think of any others?


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters@dataline.co.uk

The Case of the Great Router Robbery

Posted in Cisco, Information Leaks, Networking on 23 May, 2011 by Alec Waters

Here’s another post I wrote for the InfoSec Institute. What are the consequences for an enterprise if one of their branch routers is stolen? Read the article here – comments welcome!


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters@dataline.co.uk

Cap’n Quagga’s Pirate Treasure Map

Posted in Cisco, Networking, NSM on 23 November, 2010 by Alec Waters

Avast, me hearties! When a swashbucklin’ pirate sights land whilst sailin’ uncharted waters, the first thing he be doin’ is makin’ a map. Ye can’t be burying ye treasure if ye don’t have a map, yarrr!

PUBLIC SERVICE ANNOUNCEMENT

For everyone’s sanity, the pirate speak ends now. Save it for TLAP day!

When searching for booty on a network, it’s often useful to have a map. If you’ve got a foothold during a pentest, for example, how far does your conquered domain stretch? Is it a single-subnet site behind a SOHO router, or a tiny outpost of a corporate empire spanning several countries?

To get the answer, the best thing to do is ask one of the locals. In this case, we’re going to try to convince a helpful router to give up the goods and tell us what the network looks like. The control plane within the enterprise’s routers contains the routing table, which is essentially a list of destination prefixes (i.e., IP networks) and the next-hop to be used to get there (i.e., which neighbouring router to pass traffic on to in order to reach a destination).

The routing table is populated by a routing protocol (such as BGP, OSPF, EIGRP, RIP, etc), which may in turn have many internal tables and data structures of its own. Interior routing protocols (like OSPF) are concerned with finding the “best” route from A to B within the enterprise using a “technical” perspective; they’re concerned with automatically finding the “shortest” and “fastest” route, as opposed to exterior routing protocols like BGP which are more interested in implementing human-written traffic forwarding policies between different organisations.

The key word above is “automatic”. Interior routing protocols like to discover new neighbouring routers without intervention – they can therefore cater for failed routers that come back online, and allow the network to grow and have the “best” paths recomputed automatically.

So, how are we going to get our treasure map so that we know how far we can explore? We’re going to call in Cap’n Quagga!

Technically, it's James the Pirate Zebra, but seriously man, you try finding a picture of a pirate quagga!! They're extinct, for starters!

Pirate Cap'n Quagga aboard his ship, "Ye Stripy Scallywag"

Quagga is a software implementation of a handful of routing protocols. We’re going to use it to convince the local router that we’re a new member of the pirate fleet, upon which the router will form a neighbour relationship with us. After this has happened we’ll end up with our pirate treasure map, namely the enterprise’s routing table. Finally, we’ll look at ways in which the corporate privateers can detect Cap’n Quagga, and ways to prevent his buckle from swashing in the first place.

For the purposes of this article we’re going to use OSPF, but the principles hold for other protocols too. OSPF is quite a beast, and full discussion of the protocol is well beyond the scope of this article – interested parties should pick up a book.

Step One – Installing and configuring Quagga

I’m using Debian, so ‘apt-get install quagga’ will do the job quite nicely. Once installed, we need to tweak a few files:

/etc/quagga/daemons

This file controls which routing protocols will run. We’re interested only in OSPF for this example, so we can edit it as follows:

zebra=yes
bgpd=no
ospfd=yes
ospf6d=no
ripd=no
ripngd=no

As shown above, we need to turn on the zebra daemon too – ospfd can’t stand alone.

Next, we need to set up some basic config files for zebra and ospfd:

/etc/quagga/zebra.conf

hostname pentest-zebra
password quagga
enable password quagga

/etc/quagga/ospfd.conf

hostname pentest
password quagga
enable password quagga
log stdout

Now we can force a restart of Quagga with ‘/etc/init.d/quagga restart’.

For more information, the Quagga documentation is here, the wiki is here, and there’s a great tutorial here.

Step Two – Climb the rigging to the crow’s nest and get out ye spyglass

We need to work out if there’s a router on the local subnet that’s running OSPF. This step is straightforward, as OSPF sends out multicast “Hello” packets by default every ten seconds – all we have to do is listen for it. As far as capturing this traffic goes, it has a few distinguishing features:

  • The destination IP address is 224.0.0.5, the reserved AllSPFRouters multicast address
  • The IP datagrams have a TTL of one, ensuring that the multicast scope is link local only
  • OSPF does not ride inside TCP or UDP – it has its own IP Protocol number, 89.

The easiest capture filter for tshark/tethereal or their GUI equivalents is simply “ip proto 89”; this will capture OSPF hellos in short order:

Ahoy there, matey!

Apart from confirming the presence of a local OSPF router, this information is critical in establishing the next step on our journey to plunderville – we need Quagga’s simulated router to form a special kind of neighbour relationship with the real router called an “adjacency”. Only once an adjacency has formed will routing information be exchanged. Fortunately, everything we need to know is in the hello packet:

Ye're flying my colours, matey!

For a text only environment, “tshark -i eth0 -f ‘ip proto 89’ -V” provides similar output.

Step Three – configure Quagga’s OSPF daemon

For an adjacency to form (which will allow the exchange of LSAs, which will allow us to populate the OSPF database, which will allow us to run the SPF algorithm, which will allow us to populate the local IP routing table…), we need to configure Quagga so that all of the highlighted parameters above match. The command syntax is very Cisco-esque, and supports context sensitive help, abbreviated commands and tab completion. I’m showing the full commands here, but you can abbreviate as necessary:

# telnet localhost ospfd
Trying 127.0.0.1…
Connected to localhost.
Escape character is ‘^]’.

Hello, this is Quagga (version 0.99.17).
Copyright 1996-2005 Kunihiro Ishiguro, et al.

User Access Verification

Password:
pentest> enable
Password:
pentest# configure terminal
pentest(config)# interface eth0
! Make the hello and dead intervals match what we’ve captured
pentest(config-if)# ospf hello-interval 10
pentest(config-if)# ospf dead-interval 40
pentest(config-if)# exit
pentest(config)# router ospf
! eth0 on this machine was given 192.168.88.49 by DHCP
! The command below will put any interfaces in
! 192.168.88.0/24 into area 0.0.0.4, effectively
! therefore “turning on” OSPF on eth0
! The area id can be specified as an integer (4) or
! as a dotted quad (0.0.0.4)
pentest(config-router)# network 192.168.88.0/24 area 0.0.0.4
pentest(config-router)# exit
pentest(config)# exit

We can check our work by looking at the running-config:

pentest# show running-config

Current configuration:
!
hostname pentest
password quagga
enable password quagga
log stdout
!
!
!
interface eth0
!
interface lo
!
router ospf
network 192.168.88.0/24 area 0.0.0.4
!
line vty
!
end

The Hello and Dead intervals of 10 and 40 are the defaults, which is why they don’t show in the running-config under ‘interface eth0’.

Step Four – Start diggin’, matey!

With a bit of luck, we’ll have formed an OSPF adjacency with the local router:

pentest# show ip ospf neighbor

Neighbor ID Pri  State    Dead Time Address        Interface
172.16.7.6   1  Full/DR  32.051s   192.168.88.1 eth0:192.168.88.49

If we exit from Quagga’s OSPF daemon and connect to zebra instead, we can look at our shiny new routing table. Routes learned via OSPF are prefixed with O:

# telnet localhost zebra
Trying 127.0.0.1…
Connected to localhost.
Escape character is ‘^]’.

Hello, this is Quagga (version 0.99.17).
Copyright 1996-2005 Kunihiro Ishiguro, et al.

User Access Verification

Password:
pentest-zebra> show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP, O - OSPF,
       I - ISIS, B - BGP, > - selected route, * - FIB route

O   0.0.0.0/0 [110/1] via 192.168.88.1, eth0, 00:04:45
K>* 0.0.0.0/0 via 192.168.88.1, eth0
O>* 10.4.0.0/26 [110/1012] via 192.168.88.1, eth0, 00:04:46
O>* 10.4.0.64/26 [110/1012] via 192.168.88.1, eth0, 00:04:46
O>* 10.4.0.128/26 [110/1012] via 192.168.88.1, eth0, 00:04:46
O>* 10.4.0.192/26 [110/1012] via 192.168.88.1, eth0, 00:04:46
O>* 10.4.2.0/26 [110/1012] via 192.168.88.1, eth0, 00:04:46
O>* 10.4.3.0/26 [110/1012] via 192.168.88.1, eth0, 00:04:46
O>* 172.16.6.0/30 [110/15] via 192.168.88.1, eth0, 00:04:46
O>* 172.16.6.4/30 [110/16] via 192.168.88.1, eth0, 00:04:46
O>* 172.16.6.8/30 [110/11] via 192.168.88.1, eth0, 00:04:46
O>* 172.16.6.12/30 [110/110] via 192.168.88.1, eth0, 00:04:46
O>* 172.16.7.1/32 [110/12] via 192.168.88.1, eth0, 00:04:46
O>* 172.16.7.2/32 [110/13] via 192.168.88.1, eth0, 00:04:46
O>* 172.16.7.3/32 [110/16] via 192.168.88.1, eth0, 00:04:46
O>* 172.16.7.4/32 [110/1012] via 192.168.88.1, eth0, 00:04:46
O>* 172.16.7.5/32 [110/1012] via 192.168.88.1, eth0, 00:04:46

We clearly are not just sitting on a single-subnet LAN! Here are some of the things we can learn from the routing table:

  • Firstly, we’ve got a few more subnets than merely the local one to enumerate with nmap etc!
  • We can make some kind of estimation of how far away the subnets are by looking at the route metrics. An example above is the ‘1012’ part of ‘[110/1012]’. 1012 is the metric for the route, with the precise meaning of “metric” varying from routing protocol to routing protocol. In the case of OSPF, by default this is the sum of the interface costs between here and the destination, where the interface cost is derived from the interface’s speed. The 110 part denotes the OSPF protocol’s “administrative distance”, which is a measure of the trustworthiness of a route offered for inclusion in the routing table by a given routing protocol. If two protocols offer the routing table exactly the same prefix (10.4.3.0/26, for example), the protocol with the lowest AD will “win”.
  • A good number of these routes have a prefix length of /26 (i.e., a subnet mask of 255.255.255.192), meaning that they represent 64 IP addresses. These are likely to be host subnets with new victims on them.
  • The /30 routes (4 IP addresses) are likely to be point-to-point links between routers or even WAN or VPN links between sites.
  • The /32 routes (just one IP address) are going to be loopback addresses on individual routers. If you want to target infrastructure directly, these are the ones to go for.

If you want to start digging really deeply, you can look at the OSPF database (show ip ospf database), but that’s waaay out of scope for now.

Step Five – Prepare a broadside!

If we’ve got to this point, we are in a position not only to conduct reconnaissance, but we could also start injecting routes into their routing table or manipulate the prefixes already present in an effort to redirect traffic to us (or to a blackhole). Originating a default route is always fun, since it will take precedence over legitimate static default routes that have been redistributed into OSPF (redistributed routes are “External” in OSPF terminology, and are less preferable to “internal” routes such as our fraudulent default). If we had a working default route of our own, this approach could potentially redirect Internet traffic for the entire enterprise through our Quagga node where we can capture it. Either that or you’ll bring the network to a screaming halt.
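
For illustration only, here’s roughly what default-route origination looks like in Quagga’s OSPF daemon (the metric values are assumptions; “always” originates the default even if we don’t have one ourselves, and metric-type 1 externals are preferred over the default type 2):

pentest# configure terminal
pentest(config)# router ospf
! Originate a fraudulent default route into the OSPF domain
pentest(config-router)# default-information originate always metric 1 metric-type 1
pentest(config-router)# exit
pentest(config)# exit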

Anyway, it’s all moot, since we’re nice pirates and would never consider doing anything like that!

Privateers off the starboard bow, Cap’n!

How can we detect such naughtiness, and even better, prevent it?

The first step is to use the OSPF command ‘log-adjacency-changes’ on all the enterprise’s OSPF routers. This will leave log messages like this:

Nov 23 15:11:24.666 UTC: %OSPF-5-ADJCHG: Process 2, Nbr 192.168.88.49 on GigabitEthernet0/0.2 from LOADING to FULL, Loading Done

Keeping track of adjacency changes is an excellent idea – it’s a metric of the stability of the network, and also offers clues when rogue devices form adjacencies.

Stopping rogue adjacencies altogether can be accomplished in two ways. The first is to make OSPF interfaces on host-only subnets “passive”, which permits them to participate in OSPF without allowing adjacencies to form.

The second method is to use OSPF authentication, whereby a hash of a preshared key is required before an adjacency can be established. Either method is strongly recommended!
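
On a Cisco router, both measures might look something like the following hedged sketch (the process ID, area number, interface names and key are assumptions based on the examples above):

router(config)# router ospf 2
! No adjacencies on host-only subnets...
router(config-router)# passive-interface default
! ...except on genuine transit links
router(config-router)# no passive-interface GigabitEthernet0/1
! Require MD5 authentication for adjacencies in area 4
router(config-router)# area 4 authentication message-digest
router(config-router)# exit
router(config)# interface GigabitEthernet0/0.2
router(config-if)# ip ospf message-digest-key 1 md5 s3cr3tK3y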

As always, keep yer eyes to the horizon, mateys! :)


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters@dataline.co.uk

The Cisco Kid and the Great Packet Roundup, part two – session data

Posted in Cisco, General Security, NSM on 26 October, 2010 by Alec Waters

In part one, I covered how to use Cisco routers and firewalls to perform full packet capture. This exciting installment will cover how to get network session data out of these devices.

Network session data can be likened to a real-world itemised telephone bill. It tells you who “called” who, at what times, for how long, and how much was said (but not what was said). It’s an excellent lightweight way to see what’s going on armed only with a command prompt.

There are several ways to extract such information from Cisco kit; we’ll look at each in turn, following Part One’s support/troubleshooting/IR scenario of accessing remote devices where you’re not able to make topological changes or install any extra software or hardware.

Netflow

The richest source of session information on Cisco devices is Netflow (I’ll leave it to Cisco to explain how to turn it on). If you’re able to set up a Netflow collector/analyser (like this one (free for two-interface routers), or many others) you can drill down into your session info as far as you like. If you haven’t got an analyser or you can’t install one in time of need, it’s still worth switching on Netflow because you can interrogate the flow cache from the command line.
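
If you just want a starting point, a minimal sketch looks something like this (the interface name and the collector’s address and port are assumptions):

! Build the flow cache from traffic arriving on this interface
router(config)# interface FastEthernet4
router(config-if)# ip flow ingress
router(config-if)# exit
! Optional: export the flows to a collector
router(config)# ip flow-export destination 10.1.8.6 2055
router(config)# ip flow-export version 5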

The command is “show ip cache flow”, and the output is split into two parts. The first shows some statistical information about the flows that the router has observed:

router#sh ip cache flow
IP packet size distribution (3279685 total packets):
 1-32   64   96  128  160  192  224  256  288  320  352  384  416  448  480
 .000 .184 .182 .052 .072 .107 .004 .005 .000 .000 .000 .000 .000 .000 .000

 512  544  576 1024 1536 2048 2560 3072 3584 4096 4608
 .000 .000 .001 .020 .365 .000 .000 .000 .000 .000 .000

IP Flow Switching Cache, 278544 bytes
 57 active, 4039 inactive, 418030 added
 10157020 ager polls, 0 flow alloc failures
 Active flows timeout in 1 minutes
 Inactive flows timeout in 15 seconds
IP Sub Flow Cache, 34056 bytes
 57 active, 967 inactive, 418030 added, 418030 added to flow
 0 alloc failures, 0 force free
 1 chunk, 1 chunk added
 last clearing of statistics never
Protocol     Total    Flows   Packets Bytes  Packets Active(Sec) Idle(Sec)
--------     Flows     /Sec     /Flow  /Pkt     /Sec     /Flow     /Flow
TCP-WWW       6563      0.0       186  1319      1.2       4.7       1.4
TCP-other    16163      0.0         1    47      0.0       0.0      15.4
UDP-DNS         12      0.0         1    67      0.0       0.0      15.6
UDP-NTP       1010      0.0         1    76      0.0       0.0      15.0
UDP-Frag         2      0.0         6   710      0.0       0.2      15.3
UDP-other   316602      0.3         2   156      0.8       0.6      15.4
ICMP         31165      0.0         6    63      0.2      53.4       2.2
IP-other     46438      0.0        21   125      1.0      58.0       2.1
Total:      417955      0.4         7   574      3.3      11.0      12.7

In the absence of a graphical Netflow analyser, the Packets/Sec counter is a good barometer of what’s “using up all the bandwidth”. To clear the stats so that you can establish a baseline, you can use the command “clear ip flow stats”.

After the stats comes a listing of all the flows currently being tracked by the router:

SrcIf     SrcIPaddress    DstIf     DstIPaddress    Pr SrcP DstP  Pkts
Fa4       xxx.xxx.xxx.xxx Local     yyy.yyy.yyy.yyy 32 3FAF 037C    16
Tu100     10.7.1.250      BV3       10.4.1.3        06 0051 C07A   663
Tu100     10.7.1.250      BV3       10.4.1.3        06 0050 C0AC   120
BV3       10.4.1.3        Tu100     10.7.1.250      06 C0AC 0050   116
Tu100     192.168.88.20   Local     172.16.7.10     01 0000 0800     5
BV3       10.4.1.3        Fa4       zzz.zzz.zzz.zzz 06 C0A2 0050   429
BV3       10.4.1.3        Tu100     10.7.1.250      06 C07A 0051   366
Fa4       bbb.bbb.bbb.bbb BV3       yyy.yyy.yyy.yyy 06 0050 C0A0     1
BV3       10.4.1.3        Fa4       ddd.ddd.ddd.ddd 06 C07E 0050     1
Tu100     192.168.88.56   Local     172.16.7.10     06 8081 0016     7
Fa4       zzz.zzz.zzz.zzz BV3       yyy.yyy.yyy.yyy 06 0050 C0A2   763
Tu100     192.168.88.28   Local     172.16.7.10     11 04AC 00A1     1
Tu100     192.168.88.28   Local     172.16.7.10     11 04A6 00A1     1
Fa4       aaa.aaa.aaa.aaa Local     yyy.yyy.yyy.yyy 32 275F BD8A     5
Fa4       ccc.ccc.ccc.ccc Local     yyy.yyy.yyy.yyy 32 97F1 E9BE     5
Tu100     10.7.1.242      Local     172.16.7.10     01 0000 0000     3
Fa4       ddd.ddd.ddd.ddd BV3       yyy.yyy.yyy.yyy 06 0050 C07E     1

The tempting simplicity of the table above hides a plethora of gotchas for the unwary:

  • The Pr (IP protocol number), SrcP (source port) and DstP (destination port) columns are in hex, but we can all do the conversion in our heads, right? ;) (For example, Pr 06 is TCP, 11 is UDP (17 decimal), and a SrcP/DstP of 0050 is port 80.)
  • Netflow is a unidirectional technology. That means that if hosts A and B are talking to one another via a single TCP connection, two flows will be logged – one for A->B and one for B->A. For example, these two rows in the table above are talking about the same TCP session (the four-tuple of addresses and ports is the same for both rows):
Tu100     10.7.1.250      BV3       10.4.1.3        06 0051 C07A   663
BV3       10.4.1.3        Tu100     10.7.1.250      06 C07A 0051   366
  • Unless you configure it otherwise, Netflow is an ingress technology. This means that flows are accounted for as they enter the router, not as they leave. You can determine what happens on the egress side of things because when a flow is accounted for the output interface is determined by a FIB lookup and placed in the DstIf column; in this way, you can track a flow’s path through the router. I mention this explicitly because…
  • Netflow does not sit well with NAT. Take a look at these two rows, which represent an HTTP download (port 0x0050 is 80 in decimal) requested of non-local server zzz.zzz.zzz.zzz by client 10.4.1.3:
BV3       10.4.1.3        Fa4       zzz.zzz.zzz.zzz 06 C0A2 0050   429
Fa4       zzz.zzz.zzz.zzz BV3       yyy.yyy.yyy.yyy 06 0050 C0A2   763

So what’s yyy.yyy.yyy.yyy, then? It’s the NAT inside global address representing 10.4.1.3. As Netflow is unidirectional and flows are recorded as they enter an interface, the returning traffic from zzz.zzz.zzz.zzz will have the post-NAT yyy.yyy.yyy.yyy as its destination address, and will be recorded as such.

Provided that you keep that lot in mind, the flow cache is a powerful tool to explore the traffic your router is handling.

NAT translations

A typical border router may well perform NAT/PAT tasks. If so, you can use the NAT database as a source of session information. On a router, the command is “show ip nat translations [verbose]”; on a PIX/ASA, it’s “show xlate [debug]”:

router#show ip nat translations
Pro Inside global         Inside local   Outside local    Outside global
tcp yyy.yyy.yyy.yyy:49314 10.4.1.3:49314 94.42.37.14:80   94.42.37.14:80
tcp yyy.yyy.yyy.yyy:49316 10.4.1.3:49316 92.123.68.49:80  92.123.68.49:80

If you’ve got a worm on your network that’s desperately trying to spread, chances are you’ll see a ton of NAT translations (which could overwhelm a small router). Rather than paging through thousands of lines of output, you can just ask the device for some NAT statistics. On a router, it’s “show ip nat statistics”; on a PIX/ASA, it’s “show xlate count”.

Keeping tabs on the number of active NAT translations is a worthwhile thing to do. I wrote a story for Security Monkey’s blog a while back which tells the tale of a worm exhausting a router’s memory with NAT translations; you can even graph the number of translations to look for anomalies over time.

Firewall sessions

Another way of extracting session information is to ask the router or PIX about the sessions it is currently tracking for firewall purposes. On a router it’s “show ip inspect sessions [detail]”; on the PIX/ASA, it’s “show conn [detail]”.

router#show ip inspect sessions detail
Established Sessions
 Session 842064A4 (10.4.1.3:49446)=>(92.123.68.81:80) http SIS_OPEN
  Created 00:00:59, Last heard 00:00:58
  Bytes sent (initiator:responder) [440:4269]
  In  SID 92.123.68.81[80:80]=>y.y.y.y[49446:49446] on ACL outside-fw (6 matches)
 Session 84206FC4 (10.4.1.3:49443)=>(92.123.68.81:80) http SIS_OPEN
  Created 00:00:59, Last heard 00:00:59
  Bytes sent (initiator:responder) [440:2121]
  In  SID 92.123.68.81[80:80]=>y.y.y.y[49443:49443] on ACL outside-fw (4 matches)
 Session 8420728C (10.4.1.3:49436)=>(92.123.68.81:80) http SIS_OPEN
  Created 00:01:01, Last heard 00:00:50
  Bytes sent (initiator:responder) [1343:48649]
  In  SID 92.123.68.81[80:80]=>y.y.y.y[49436:49436] on ACL outside-fw (44 matches)

This has the advantage of not being complicated by NAT, but still showing useful bytecounts and session durations.

Last resorts

If none of the above can help you out, there are a couple of last resort options open to you. The first of these is the “ip accounting” interface configuration command on IOS routers. To quote Cisco:

The ip accounting command records the number of bytes (IP header and data) and packets switched through the system on a source and destination IP address basis. Only transit IP traffic is measured and only on an outbound basis; traffic generated by the router access server or terminating in this device is not included in the accounting statistics. Traffic coming from a remote site and transiting through a router is also recorded.
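
It’s enabled per-interface; a minimal hedged sketch (the interface name is an assumption):

router(config)# interface GigabitEthernet0/0
! Record packet/byte counts for transit traffic leaving this interface
router(config-if)# ip accounting output-packets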

Also note that this command will likely have a performance impact on the router. You may end up causing more problems than you solve by using this! The output of “show ip accounting” will look something like this:

router# show ip accounting
 Source          Destination            Packets      Bytes
 172.16.19.40    192.168.67.20          7            306
 172.16.13.55    192.168.67.20          67           2749
 172.16.2.50     192.168.33.51          17           1111
 172.16.2.50     172.31.2.1             5            319
 172.16.2.50     172.31.1.2             463          30991
 172.16.19.40    172.16.2.1             4            262

If “ip accounting” was a last resort, “debug ip packet” is what you’d use as an even lasterer resort, so much so that I leave it as an exercise for the reader to find out all about it. Don’t blame me when your router chokes to the extent that you can’t even enter “undebug all”…!


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters@dataline.co.uk

The Cisco Kid and the Great Packet Roundup, part one

Posted in Cisco, General Security, NSM on 11 August, 2010 by Alec Waters

Knowing what your network is doing is central to the NSM doctrine, and the usual method of collecting NSM data is to attach a sensor of some kind to a tap or a span port on a switch.

But what if you can’t do this? What if you need to see what’s going on on a network that’s geographically remote and/or unprepared for conventional layer-2 capture? What can you do with nothing more than the kit that’s already there? Quite a bit, as it turns out.

In the first of a two-part post, the Cisco Kid (i.e., me) is going to walk you through a number of ways to use an IOS router or ASA/PIX firewall to perform full packet capture. The two product sets have different capabilities and limitations, so we’ll look at each in turn.

PIX/ASA

Full packet capture has been supported on these devices for many years, and it’s quite simple to operate. Step one is to create an ACL that defines the traffic we’re interested in capturing – because all of the captures are stored in memory, we need to be as specific as we can otherwise we’ll be using scarce RAM to capture stuff we don’t care about.

Let’s assume we’re interested in POP3 traffic. Start by defining an ACL like this:

pix(config)# access-list temp-pop3-acl permit tcp any eq 110 any
pix(config)# access-list temp-pop3-acl permit tcp any any eq 110

Note that we’ve specified port 110 as the source or the destination – we wouldn’t want to risk only capturing one side of the conversation.

Now we can fire up the capture, part of which involves specifying the size of the capture buffer. Remembering that this will live in main memory, we’d better have a quick check to see how much is going spare:

pix# show memory
Free memory:        31958528 bytes (34%)
Used memory:        60876368 bytes (66%)
Total memory:       92834896 bytes (100%)

Plenty, in this case. Let’s start the capture:

pix# capture temp-pop3-cap access-list temp-pop3-acl buffer 1024000 packet-length 1514 interface outside-if circular-buffer

This command gives us a capture called temp-pop3-cap, filtered using our ACL, stored in a one-meg (circular) memory buffer, that will capture frames of up to 1514 bytes in size from the interface called outside-if. If you don’t specify a packet-length, you won’t end up capturing entire frames.

Now we can check that we’re actually capturing stuff:

pix# show capture temp-pop3-cap
5 packets captured
1: 12:22:02.410440 xxx.xxx.xxx.xxx.39032 > yyy.yyy.yyy.yyy.110: S 3534424301:3534424301(0) win 65535 <mss 1260,nop,nop,sackOK>
2: 12:22:02.411401 yyy.yyy.yyy.yyy.110 > xxx.xxx.xxx.xxx.39032: S 621655548:621655548(0) ack 3534424302 win 16384 <mss 1380,nop,nop,sackOK>
3: 12:22:02.424691 xxx.xxx.xxx.xxx.39032 > yyy.yyy.yyy.yyy.110: . ack 621655549 win 65535
4: 12:22:02.425515 yyy.yyy.yyy.yyy.110 > xxx.xxx.xxx.xxx.39032: P 621655549:621655604(55) ack 3534424302 win 65535
5: 12:22:02.437462 xxx.xxx.xxx.xxx.39032 > yyy.yyy.yyy.yyy.110: P 3534424302:3534424308(6) ack 621655604 win 65480

To get the capture off the box and into Wireshark, point your web browser at the PIX/ASA like this, specifying the capture’s name in the URL:

https://yourpix/admin/capture/temp-pop3-cap/pcap

Don’t forget the /pcap on the end, or you’ll end up downloading only the output of the ‘show capture temp-pop3-cap’ command.
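
If you’d rather script the download than click around in a browser, something like this ought to work, assuming the device’s HTTPS server is up and your user has sufficient privilege (a hedged sketch, not gospel):

curl -k -u admin:password -o temp-pop3-cap.pcap https://yourpix/admin/capture/temp-pop3-cap/pcap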

To clean up, you can use the ‘clear capture’ command to empty the capture buffer (but still keep on capturing) and the ‘no capture’ command to destroy the buffer and stop capturing altogether.

Provided one is careful with the size of the capture buffer, it’s nice and easy, it works, and it’s quick to implement in an emergency. If you’re using the ASDM GUI, Cisco have a how-to here that will walk you through the process.

IOS routers

As we’ll see, things aren’t quite as nice in IOS land, but there’s still useful stuff we can do. As of 12.4(20)T, IOS supports the Embedded Packet Capture feature (EPC) which at first glance seems to be equivalent to the PIX/ASA’s capture feature. Again, we’ll start by creating an ACL for capturing POP3 traffic:

router(config)#ip access-list extended temp-pop3-acl
router(config-ext-nacl)#permit tcp any eq 110 any
router(config-ext-nacl)#permit tcp any any eq 110

Now we can set up the capture. This involves two steps, setting up a capture buffer (where to store the capture) and a capture point (where to capture from). The capture buffer is set up like this:

router#monitor capture buffer temp-pop3-buffer size 512 max-size 1024 circular

Here is where Cisco seem to have missed a trick. The ‘size’ parameter refers to the buffer size in kilobytes, and 512 is the maximum. That’s “Why???” #1 – 512KB seems like a very low limit to place on a capture buffer. “Why???” #2 is the ‘max-size’ parameter, which refers to the number of bytes of each frame that will be captured; 1024 is the maximum, well below ethernet’s 1500 byte MTU. So we seem to be limited to capturing a small number of incomplete frames, which isn’t really in the spirit of “full” packet capture…

Sighing deeply, we move on to setting up the buffer’s filter using our ACL:

router#monitor capture buffer temp-pop3-buffer filter access-list temp-pop3-acl

Next, we create a capture point. This specifies where the frames will be captured, both from an interface and an IOS architecture point of view:

router#monitor capture point ip cef temp-pop3-point GigabitEthernet0/0.2 both

‘ip cef’ means we’re interested in capturing CEF-switched frames as opposed to process-switched ones, so if traffic you’re expecting to see in the buffer isn’t there, it could be that the router process-switched it, thus avoiding the capture point. The capture interface is specified, as is ‘both’, which means we’re interested in ingress and egress traffic.

Next (we’re almost there) we have to associate a buffer with a capture point:

router#monitor capture point associate temp-pop3-point temp-pop3-buffer

Now we can check our work before we start the capture:

router#show monitor capture buffer temp-pop3-buffer parameters
Capture buffer temp-pop3-buffer (circular buffer)
Buffer Size : 524288 bytes, Max Element Size : 1024 bytes, Packets : 0
Allow-nth-pak : 0, Duration : 0 (seconds), Max packets : 0, pps : 0
Associated Capture Points:
Name : temp-pop3-point, Status : Inactive
Configuration:
monitor capture buffer temp-pop3-buffer size 512 max-size 1024 circular
monitor capture point associate temp-pop3-point temp-pop3-buffer
monitor capture buffer temp-pop3-buffer filter access-list temp-pop3-acl

router#sh monitor capture point temp-pop3-point
Status Information for Capture Point temp-pop3-point
IPv4 CEF
Switch Path: IPv4 CEF            , Capture Buffer: temp-pop3-buffer
Status : Inactive
Configuration:
monitor capture point ip cef temp-pop3-point GigabitEthernet0/0.2 both

Start the capture:

router#monitor capture point start temp-pop3-point

And make sure we’re capturing stuff:

router#show monitor capture buffer temp-pop3-buffer dump
<frame by frame raw dump snipped>

When we’re done, we can stop the capture:

router#monitor capture point stop temp-pop3-point

And finally, we can export it off the box for analysis:

router#monitor capture buffer temp-pop3-buffer export tftp://10.1.8.6/temp-pop3.pcap

…and for all that work, we’ve ended up with a tiny pcap containing truncated frames. Better than nothing though!

However, there is a second option for IOS devices, provided that you have a capture workstation that’s on a directly attached ethernet subnet. It’s called Router IP Traffic Export (RITE), and will copy nominated packets and send them off-box to a workstation running Wireshark or similar (or an IDS, etc.). Captures therefore do not end up in a memory buffer, and it is the responsibility of the workstation to capture the exported packets and to work out which packets were actually exported from the router and which are those sent or received by the workstation itself.

After carefully reading the restrictions and caveats in the documentation, we can start by setting up a RITE profile. This defines what we’re going to monitor, and where we’re going to export the copied packets:

router(config)#ip traffic-export profile temp-pop3-profile
! Set the capture filter
router(conf-rite)#incoming access-list temp-pop3-acl
router(conf-rite)#outgoing access-list temp-pop3-acl
! Specify that we want to capture ingress and egress traffic
router(conf-rite)#bidirectional
! The capture workstation lives on the subnet attached to Gi0/0.2
router(conf-rite)#interface GigabitEthernet 0/0.2
! And the workstation’s MAC address is:
router(conf-rite)#mac-address hhhh.hhhh.hhhh

Finally, we apply the profile to the interface from which we actually want to capture packets:

router(config)#interface GigabitEthernet 0/0.2
router(config-subif)#ip traffic-export apply temp-pop3-profile

If all’s gone well, the capture workstation on hhhh.hhhh.hhhh should start seeing a flow of POP3 traffic. We can ask the router how it’s getting on, too:

router#show ip traffic-export
Router IP Traffic Export Parameters
Monitored Interface         GigabitEthernet0/0
Export Interface                GigabitEthernet0/0.2
Destination MAC address hhhh.hhhh.hhhh
bi-directional traffic export is on
Output IP Traffic Export Information    Packets/Bytes Exported    19/1134

Packets Dropped           17877
Sampling Rate                one-in-every 1 packets
Access List                      temp-pop3-acl [named extended IP]

Input IP Traffic Export Information     Packets/Bytes Exported    27/1169

Packets Dropped           12153
Sampling Rate                one-in-every 1 packets
Access List                      temp-pop3-acl [named extended IP]

Profile temp-pop3-profile is Active

You get full packets captured (note packets, not frames – the encapsulating Ethernet frame isn’t the same as the original, in that it has the router’s MAC address as the source and the capture workstation’s MAC address as the destination), and provided you’re local to the router and can afford the potential performance hit on the box, it’s quite a neat way to perform an inline capture. Furthermore, this may be your only capturing option sometimes – granted, the capture workstation has to be on a local ethernet segment, but the traffic profile itself can be applied to other kinds of circuit for which you may not have a tap (ATM, synchronous serial, etc.). It’s a very useful tool.

In the next exciting installment, the Cisco Kid will look at ways of extracting network session information from IOS routers, PIXes and ASAs.


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at alec.waters@dataline.co.uk
