Wednesday 4 January 2012

Brief overview of 4 NFATs


I was recently tasked with evaluating the functionality of the freeware version of Netwitness Investigator, along with alternative network forensics analysis tools (NFATs) that emulate the functionality of the full version of Investigator, hopefully at a lower cost.

After compiling a list of close to 40 network capture and monitoring tools, I chose four to evaluate a little more closely: 

Netwitness Investigator (freeware version, Windows)
Xplico (open source, Linux)
Solera DeepSee (trial virtual appliance, proprietary OS w/ remote web interface), and
NetworkMiner (free edition, Windows)

Today I came across this amazingly helpful list of freely available forensic tools maintained by Forensic Control, but noticed that it has no network forensics category. I'll need to consolidate/condense my list a bit, but I'll see if I can post a similarly formatted table for network tools tomorrow.

Unfortunately, I didn't have a lot of time to delve deeply (nor do I claim to be anything close to an expert in network forensics), but I wanted to share my thoughts anyway...maybe I'll get a chance to revisit the tools later this week.

Netwitness Investigator (freeware version, Windows)

Netwitness Investigator is a powerful tool for network forensics, but the full version can be very expensive. They provide a great deal of functionality with the free version, but imported pcap files are limited to 1 GB.

Issues loading pcap data: I was not able to analyze a small (870 KB) pcap file – Netwitness gave the error "not enough data to generate a timeline," but it may have been some other problem with that particular pcap. DeepSee was also unable to read it (although it gave a different error).

Session reconstruction: Capture data is sorted into categories for easy drill-down, and then subcategorized, giving a summarized view of the data on one screen for quick analysis. The timeline feature shows a distribution of traffic over time and allows you to select a period of time and “zoom in” for closer analysis. You can also create custom filters – Netwitness features “intellisense” filter action suggestions to assist with query generation.

Overall impression: Netwitness is still my favorite tool in terms of overall functionality and ease of use, despite the 1 GB pcap limit. Data organization and drill-down take some time to get used to due to the volume of data, but it is fairly easy to navigate after some use. The split-screen view of a list of payloads (after you’ve drilled down into a category) allows you to run through a set of data easily; when you have a payload in the preview pane, there is a toolbar at the top that allows you to switch easily between formats of the preview (hex, txt, mail, web, etc):



So for example, if you wanted to view an email, you can either view it with the “mail” reconstruction, or just view the raw text formatting if you want to see all the headers and the network handshake. If the mail has an attachment, there is a button that will give you the options of opening the file or saving it to disk. Furthermore, if there is audio content in the session, there is a button in the toolbar that will reconstruct it and play it back. This toolbar really makes digging through content much faster, and gives you the ability to see one piece of evidence from many different perspectives.

Perhaps my favorite feature, however, is the “file extract” functionality - it makes it very easy to extract and reconstruct files from a pcap. Netwitness comes with a set of predefined and categorized common file extensions to extract, and allows you to easily add your own category based on other extensions. Each category is filtered into its own folder to the destination you specify.

SplitCap and splitpcap.py

Regarding the 1 GB pcap file limit, I took a quick look at some methods for splitting pcap files. I found two tools, splitpcap.py and SplitCap. I’m giving up on splitpcap.py for now…it doesn’t appear to work with current architectures (I don't think it's been updated in a while). There may be a simple python fix, but I'm not a python whiz, so it will take me some time to look into it.
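In the meantime, the core idea behind a size-based split (copy the 24-byte pcap global header into each chunk, and break only on packet-record boundaries) is simple enough to sketch in pure Python. This is my own toy stand-in, not splitpcap.py's actual code, and it handles only classic libpcap files, not pcapng:

```python
import struct

def split_pcap(path, max_bytes=1_000_000_000):
    """Split a classic libpcap file into chunks no larger than max_bytes,
    breaking only on packet-record boundaries. Each chunk gets its own
    copy of the 24-byte global header. Output: <path>.part1, .part2, ..."""
    out_paths = []
    with open(path, 'rb') as f:
        global_hdr = f.read(24)              # pcap global header
        magic = global_hdr[:4]
        if magic == b'\xd4\xc3\xb2\xa1':
            endian = '<'                     # little-endian capture
        elif magic == b'\xa1\xb2\xc3\xd4':
            endian = '>'
        else:
            raise ValueError('not a classic pcap file')
        part, written, out = 0, None, None

        def new_part():
            nonlocal part, written, out
            if out:
                out.close()
            part += 1
            out = open(f'{path}.part{part}', 'wb')
            out.write(global_hdr)            # repeat the header per chunk
            written = 24
            out_paths.append(out.name)

        new_part()
        while True:
            rec_hdr = f.read(16)             # per-packet record header
            if len(rec_hdr) < 16:
                break
            incl_len = struct.unpack(endian + 'I', rec_hdr[8:12])[0]
            data = f.read(incl_len)
            if written + 16 + incl_len > max_bytes and written > 24:
                new_part()                   # start a new chunk at this record
            out.write(rec_hdr + data)
            written += 16 + incl_len
        out.close()
    return out_paths
```

Each chunk is itself a valid pcap, so the pieces can be loaded into Netwitness individually.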

I played a little with SplitCap though – it is a very powerful tool. It doesn’t really have an option to just split the file into 1 GB segments like splitpcap does, but if you knew what host or what kind of content you were looking for, it would be perfect for filtering out that pcap data for loading into Netwitness. Raw splitting by size of a capture is problematic anyway because the packets are not necessarily in order – they have to be reassembled by analysis tools such as Wireshark, NetworkMiner, Netwitness, etc.

By default, SplitCap will take a pcap file and split it into multiple pcaps by protocol and then by host and port. So for example, for my 2.5 GB capture file, I ended up with 23,462 pcap files. You can shift-select multiple pcaps for import into Netwitness, but that is still kind of a pain. However, if you had some idea what you were looking for, you could easily weed out just that data for import. I won't go through all the available options here, but there is a good short video here that demonstrates some of the main features, like filtering by hostpair and restricting to application layer data.

Xplico (open source, Linux)

Part of my delay in making this post was wanting to revisit Xplico and get it installed on a real machine. I considered leaving it out completely, as I had so many problems with it, but it seems to be a very good tool. There is supposedly an Ubuntu 11.10 package, but the package manager says it has an unsatisfiable dependency on the python3.1-minimal package. I have not yet tried to build it from source – I just ran it off of the SecurityOnion live cd for my evaluation.

Unfortunately, the program stopped working properly several times when I was working with it, forcing me to reboot to get it working again (I tried restarting the program and the service to no avail). I was suddenly unable to add a new case or session, although I could continue to analyze the pcap data I had already loaded.

But, I did have a chance to play with it a little bit, and I think it has a lot of potential. There is no limitation on pcap file import size, and I didn't have any issues loading pcap data (aside from the program malfunction I mentioned above).

Session reconstruction: Xplico seems to do a good job of reconstructing sessions from the capture data, although it was unable to categorize some data that was handled properly by other programs. For example, one pcap had several emails that were correctly identified by other tools, but were filed into the “undecoded” section by Xplico.

Session data is prefiltered into categories (Web, Mail, VoIP, etc.). Filters are based on simple keyword search. However, the Xplico components are modular and open source, so you can add custom functionality or change features as needed.

The initial summary of data gives a nice view of the distribution of network traffic types (http, email, etc.). However, the file extraction is one file at a time – there is no way to get all files of a certain type. I think the modularity sets it apart from the other tools, and makes it worth spending some more time investigating...hopefully I'll have some time to reevaluate it soon.


Solera DeepSee (trial virtual appliance, proprietary OS w/ remote web interface)

DeepSee is a commercial product from Solera Networks built to be an enterprise-level network forensics solution. I downloaded a 30-day trial of their virtual appliance to see how it stacked up against the competition. I am certain I barely scratched the surface of the functionality provided by DeepSee, but I can share my first impressions in terms of usability.

There is no limitation on pcap file size, although I did have problems importing a wireless capture I took using NetworkMiner (this was the same pcap that Netwitness Investigator refused to load, so it could be that something with that particular file was corrupt). DeepSee gave me an error that the pcap “contained no ethernet data.” Thinking maybe it didn't handle wireless captures, I took another wireless capture using Wireshark instead, and it was able to read that with no errors.

Session reconstruction: DeepSee allows you to create filters on capture data as well as in the packet analyzer (using Wireshark syntax). It also provides a number of pre-defined common filters (such as “Email,” “IM Conversations,” etc.) and the ability to create custom summary widgets (although the queries to display cannot be customized; you must use one of their pre-defined queries). Still, I can see the value in creating your own custom summary screen of predefined “widgets” that pertain to the types of data in which you are commonly interested, giving you a good initial breakdown of the content.



The interface for accessing payload data is somewhat inefficient…you can drill down fairly easily using their pre-defined filters, but those are limited. There is also no “batch” way to extract all files of a certain extension (or matching a filter/pattern); you have to individually select each file that you want to extract.

However, the timeline feature (showing the distribution of traffic over time and allowing you to jump to a particular time segment) is convenient, and the Artifacts view allows you to very easily see a preview of an individual artifact and quickly take action on it (download, analyze the packets, or view the “reputation” information).



DeepSee has some nice graphing and visualization features that I think some people would find very useful, although they don't top my list of important features. Their timeline feature is also nice, but I like the Netwitness timeline a little better, because you can select any size chunk of time you want to zoom in on and restrict your analysis to that. With DeepSee, you can click on a discrete time segment and see that data, but the interval is fixed.

One thing I do like about DeepSee is their artifact listing…if you click on one of the artifacts, it expands a small pane in the listing that gives you options specific to that piece of data. With one artifact expanded, you have the options to preview or download the file, analyze the packets, explore the root cause, or check its “reputation.” This last option is very interesting…if you see an artifact such as an executable in your listing, you can click on “reputation” and view the “Google safe browse,” ClamAV, and VirusTotal information for the file, and the ISC/SANS information for both hosts. I think the “root cause” information would be interesting to explore in terms of incident response applications, but I don’t have an appropriate data set to exercise it right now.

NetworkMiner (free edition, Windows)

NetworkMiner is another commercial NFAT for Windows, although it comes in both a free edition and a paid “Professional” version. Unlike the other tools, the main data display is “host centric,” grouping the data by network hosts rather than on a per-packet basis.

There are no limitations on the size of the pcap files you can import, and I didn't have any issues loading pcap data into the tool.

Session reconstruction: NetworkMiner provides comprehensive session reconstruction, sorting the data into individual tabs based on content (Hosts, Files, Images, Credentials, etc.), which makes it easy to drill down into the content you need. Keyword search (either string or byte patterns) functionality is built in, but you have to reload the case files in order to execute a query you have constructed, which can be time consuming for large captures. It also allows you to perform a cleartext search using a loaded dictionary.

All files found in the capture are automatically reconstructed and saved to a folder on the hard drive. The files are sorted into folders by their originating host first, then the protocol. This makes it somewhat difficult to drill down to a particular file, and it is also not as easy to see all files of a certain extension. The program also does not separate assembled files by case, so all the data is saved to the same folder for all captures that are analyzed.

However, I found the Hosts and Credentials features very useful; the “Hosts” tab provides detailed information on every host found in the capture – IP address, operating system, hostname, etc. The “Credentials” tab will show any usernames and their associated passwords (if sent in cleartext), as well as the server logged onto.



Another convenient feature is the portability of NetworkMiner – it can be run directly off of a flash drive. I did not get a chance to give the professional version a test drive, but at only 500 EUR, it is far more affordable than the other commercial tools. As a side note, the NetworkMiner wiki also has a very useful collection of links to publicly available pcap files for testing and analysis.

Although my favorite tool to work with overall was Netwitness Investigator, NetworkMiner has some unique functionality that I am sure I will use again. I am also looking forward to getting Xplico up and running on my machine so I can play with it a little more and explore the expansion possibilities of the individual modules.

Sunday 1 January 2012

on contribution paralysis

A small diversion...

I figured I should start by addressing something that Harlan pointed out in this continuing recent twitter discussion regarding “contributing to the community” (stemming from the two posts I mentioned in my last post, along with this new one). He hit the nail on the head with his characterization of the attitude that some of us have towards contributing (namely, being afraid to speak up), describing it as “paralysis.”

However painful to admit, I have a problem with paralysis, beyond just the fear of looking stupid on the internet before the whole world. I realized that it is something I fight every day in everything I do, whether it is learning something new, meeting new people, snowboarding on a new scary run, or trying out a new recipe when I'm cooking for others. It may be paralyzing fear at first, but it is fighting and overcoming that fear that has brought me satisfying growth in both career and personal endeavors. Like my friend told me (jokingly) when I first started snowboarding - “If you never fall, you're not pushing yourself hard enough.” But I think there's some truth to that - if you are constantly afraid of failing, you're probably not pushing your limits and learning (which reminds me of this recent article).

Part of being able to conquer this fear is having confidence – and the rewarding successes we enjoy after having defeated one of these little obstacles helps to build that confidence. I have been slowly gaining confidence in my own abilities and overcoming the fear of speaking up...I still have a long way to go, but I certainly would never have made it this far this quickly if it weren't for the continuing support and encouragement (however blunt sometimes ;) ) of the active members of the DFIR community.

When I first learned to snowboard, all my friends were already expert skiers and snowboarders. They goaded me into joining them in the back bowls and expert runs, even though I couldn't keep up with them and was scared to death most of the time. But having a constant push of someone more experienced than you can be a great motivation, especially when they can help you learn from your mistakes. That is why I am thankful that we have so many intelligent, caring individuals in this community, who are truly dedicated to growing the field. Thanks guys, for all your help and support.

Well, one quick opening paragraph turned into a post of its own...I think I'll save my technical content for a separate post (hopefully tomorrow if I don't finish tonight). I have a brief overview of a few network forensics analysis tools (NFAT) that I recently evaluated that I wanted to share.

.e

Wednesday 14 December 2011

new year's resolution?

I don't believe in new year's resolutions in general - I think you should make an effort to improve yourself and do something new every day of the year. However, I have been feeling guilty about not keeping up with my goal of trying to contribute and give back to the community, so I'm planning to change that.

I started volunteering for Habitat for Humanity, so that's a start on that front, but I want to try to contribute to the forensics community as well. There are so many great and talented people in this community who take time to share their knowledge and research...it is easy to feel intimidated as a novice. But the last couple of weeks have seen inspiring posts from Rob Lee and Harlan Carvey, encouraging us newbies to share what we can.

This will be short, as I need to get back to work, but I'm going to make a better go at it this time. I benefit every day from the knowledge shared in this community, and it's time to return the favor. I can't make any promises, but I'm going to try :)

Sunday 27 February 2011

so much for the weekend...

Well, I had grand plans of doing some side work on the memory research this weekend, but some medication complications have left me sleepless and therefore rendered me essentially useless. Today I wanted to at least do something fun, and decided to make a good dinner...and what's a good dinner without a good drink to go with it?

They had some good looking wild-caught alaskan sockeye salmon at the market yesterday, so I picked some up and decided to make this really tasty recipe for Orange-Roasted Salmon with Yogurt-Caper Sauce, although I added some grapefruit zest and chopped fresh thyme to the seasonings - a modification I added the last time I made it, and it turned out excellent. As a side, a friend of mine turned me on to this awesome Poblano Potato Salad from the "Mexican Made Easy" show on Food Network. It's simple and really delicious (and I am *not* a potato salad fan).

The grapefruit zest reminded me of these excellent ruby grapefruits I picked up at Whole Foods, and so I decided I wanted to incorporate the juice somehow. I like the taste of Bombay Sapphire with grapefruit juice, so I started looking around for recipe inspirations. I found a few recipes with combinations of bombay, grapefruit juice, orange liqueur, and lemon juice, and had an idea. Orange Liqueur, even Cointreau, gets too sweet and syrupy for me pretty quickly...I can't take much in a drink. But I knew what I had in my fridge, and decided to make a substitute - a rosemary-citrus simple syrup. Here is the recipe for the syrup and the drink I made:

rosemary-citrus simple syrup

1/4 cup water
1/4 cup sugar
1 T fresh rosemary (coarsely chopped)
1 stalk lemongrass (coarsely chopped)
juice of 1/2 lemon
1 T orange zest
mix ingredients in a small saucepan, bring to a boil, and simmer until reduced. strain and cool.

grapefruit 'tini

1 T (1/2 oz) rosemary-citrus simple syrup
1 shot (1.5 oz) fresh squeezed strained grapefruit juice (ruby)
3 shots (4.5 oz) bombay sapphire gin

muddle luxardo cherry in bottom of martini glass. add ingredients above to a cocktail shaker with ice, shake and strain into martini glass. garnish with fresh rosemary sprig.

Yeah, I feel a little guilty for not getting any RE or memory work done this weekend, but everyone needs a little time off sometime, right? ;)

Wednesday 23 February 2011

memory research interrupted...notes thus far

**Edit - 9/28/2016 - fixed RegRipper links

I was hoping to have time to write up some formal notes on my research, but unfortunately, I have been tasked with something more urgent. As such, I have to put this quest aside for the time being, but I wanted to make sure I captured a snapshot of where I'm leaving off (to facilitate picking it back up in the near future).

This is a loose, informal capture of the information I've found thus far, and the resources that have been most useful along the way. I realize that I have not correctly documented my references here - this is not meant to be a formal document by any means, just an organization of info for my own purposes (my bookmark manager was quickly becoming unmanageably large). That being said, I thought it may be of some use to others researching these topics.

I have recently been investigating methods for extracting key system information (namely OS version, physical RAM size, and CPU type) from volatile memory. While there are numerous existing tools available (such as volatility) that provide this functionality, they currently do so by parsing an existing image of live memory in search of certain hex signatures to find kernel structures and registry hives that contain these data.

The first such structure is the Kernel Processor Control Structure (KPCR). To quote Brendan Dolan-Gavitt, "The KPCR is a data structure used by the Windows kernel to store information about each processor, and it is located at virtual address 0xffdff000 in XP, 2003, and Vista." (2a)
This led me to wonder – is this structure located at the same offset in Windows 7 and 2008? What about 32-bit vs. 64-bit? I haven't had a chance to answer this question yet, but it looks like it *should* remain static in the newer Windows versions (need to verify).
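One way to answer that empirically, rather than trusting a documented address, would be to scan an image for the KPCR's self-referencing fields – similar in spirit to volatility's kpcrscan plugin. A toy sketch below; the field offsets (SelfPcr at 0x1C, Prcb at 0x20, with Prcb pointing 0x120 bytes past the KPCR) are the XP-era 32-bit layout, and whether they hold on newer versions is exactly the open question:

```python
def kpcr_scan(mem: bytes, page_size=0x1000):
    """Scan a raw physical memory image for candidate x86 KPCR pages
    using the structure's self-referencing fields: SelfPcr (offset 0x1C)
    holds the KPCR's own virtual address, and Prcb (offset 0x20) points
    0x120 bytes past it. Offsets assume the XP-era 32-bit layout.
    Returns a list of (physical offset, KPCR virtual address) pairs."""
    hits = []
    for base in range(0, len(mem) - 0x24, page_size):
        self_pcr = int.from_bytes(mem[base + 0x1C:base + 0x20], 'little')
        prcb     = int.from_bytes(mem[base + 0x20:base + 0x24], 'little')
        if self_pcr != 0 and prcb == self_pcr + 0x120:
            hits.append((base, self_pcr))
    return hits
```

On an XP image, the recovered virtual address should come back as 0xffdff000; on other versions, whatever it comes back as answers the question above.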

Furthermore, one of the fields in the KPCR structure points to the _DBGKD_GET_VERSION64 structure, which contains a linked list of _KDDEBUGGER_DATA64 structures (2a). Given the KernBase field of the _KDDEBUGGER_DATA64 structure (at offset 0x18, according to the MS include files), the kernel image can be located. The executable will depend on the number of CPUs in the machine and whether physical address extension (PAE) is enabled (2d):
NTOSKRNL.EXE : 1 CPU
NTKRNLMP.EXE : N CPU
NTKRNLPA.EXE : 1 CPU, PAE
NTKRPAMP.EXE : N CPU, PAE
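Locating the _KDDEBUGGER_DATA64 block in a raw image can be sketched by scanning for its "KDBG" OwnerTag (at offset 0x10 of the header); KernBase then sits 8 bytes past the tag, i.e. at structure offset 0x18 as noted above. A toy sketch, assuming the structure appears unobfuscated in the dump:

```python
import struct

def find_kernbase(mem: bytes):
    """Scan a raw memory image for the 'KDBG' OwnerTag of the
    _KDDEBUGGER_DATA64 header (tag at structure offset 0x10) and read
    the Size tag and the KernBase field that follows 8 bytes past the
    tag (structure offset 0x18). Returns (structure offset, Size,
    KernBase) for the first plausible hit, or None."""
    pos = mem.find(b'KDBG')
    while pos != -1:
        if pos + 16 <= len(mem):
            size = struct.unpack_from('<I', mem, pos + 4)[0]
            kernbase = struct.unpack_from('<Q', mem, pos + 8)[0]
            # Weak sanity checks: the kernel base should be page-aligned
            # and sit in kernel address space (upper half).
            if kernbase and kernbase % 0x1000 == 0 and kernbase >= 0x80000000:
                return pos - 0x10, size, kernbase
        pos = mem.find(b'KDBG', pos + 1)
    return None
```

The Size tag read here is the same value Jamie Levy enumerated per OS version, so a hit gives you both the kernel base and a hint at the Windows version in one pass.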

Jamie Levy wrote up an excellent followup to Brendan's post (2b) which enumerated the different Size tags (from the _DBGKD_DEBUG_DATA_HEADER64 struct defined in wdbgext.h) for Windows 2000 through Windows 7/2008, and both x86 and x64 versions. As she also mentions, this OS determination was implemented by Mike Auty and integrated into volatility 1.4 (in the imageinfo.py plugin).

Need to investigate: How does volatility utilize the KPCR offset in imageinfo.py (kpcroffset = volmagic.KPCR.v())? Can this virtual address be mapped to a physical address by finding the kernel page directory table base and mapping according to the method described in section 3.2 of (2c)? If we can find the physical memory address where this structure is mapped, where does it reside? Can we typically find it in the first 100 MB of RAM? What about in the first 1000 MB? In other words, can we discover this information about the operating system quickly, without having to search the entirety of RAM?
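For reference, the basic non-PAE x86 page table walk that such a mapping would use (10-bit page directory index, 10-bit page table index, 12-bit page offset) can be sketched in a few lines. This is a toy walk over a raw image given a candidate directory table base, not volatility's implementation:

```python
import struct

PAGE_PRESENT = 0x1
PAGE_SIZE_4MB = 0x80   # PS bit in a PDE

def v2p(mem: bytes, dtb: int, vaddr: int):
    """Translate a 32-bit non-PAE x86 virtual address to a physical
    offset, given a raw physical memory image and the page directory
    base (DTB/CR3). Returns None if the mapping is not present."""
    pde_idx = (vaddr >> 22) & 0x3FF    # bits 31-22: page directory index
    pte_idx = (vaddr >> 12) & 0x3FF    # bits 21-12: page table index
    offset = vaddr & 0xFFF             # bits 11-0:  byte offset in page
    pde = struct.unpack_from('<I', mem, dtb + pde_idx * 4)[0]
    if not pde & PAGE_PRESENT:
        return None
    if pde & PAGE_SIZE_4MB:
        # 4 MB large page: the PDE holds bits 31-22 of the frame directly
        return (pde & 0xFFC00000) | (vaddr & 0x3FFFFF)
    pt_base = pde & 0xFFFFF000
    pte = struct.unpack_from('<I', mem, pt_base + pte_idx * 4)[0]
    if not pte & PAGE_PRESENT:
        return None
    return (pte & 0xFFFFF000) | offset
```

Given the kernel DTB, translating 0xffdff000 this way would reveal where in physical RAM the KPCR actually lands – which speaks directly to the "first 100 MB" question above.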

I would also like to be able to find out the amount of system RAM quickly, to be able to estimate the expected size of the image, as well as get a rough idea of how long the imaging process will take. There is a registry key that holds physical memory information (HKLM\HARDWARE\RESOURCEMAP\System Resources\Physical Memory\.Translated) of type REG_RESOURCE_LIST, stored as a binary value in hexadecimal format. According to (1e), REG_RESOURCE_LIST is a "series of nested arrays. It stores a resource list used by a device driver or a hardware device controlled by that driver. The system writes detected data to the \ResourceMap tree." I have not determined if this resource list actually provides information regarding the total amount of physical RAM in the system. However, according to Brendan Dolan-Gavitt's paper on the forensic analysis of the Windows registry in memory (1f), there are certain registry hives that reside only in memory, and are not stored anywhere on disk; one of these hives happens to be the HARDWARE hive, generated at boot time.

I still have a great deal of work to do in understanding how to enumerate the registry hives by mapping the cell indices to virtual addresses. I also need to look into volatility's integration of (the very excellent tool) RegRipper. A prototype implementation was developed by Brendan Dolan-Gavitt (1d) back in 2009, but I haven't had a chance to look at the current volatility source to see if it has been formally integrated.
UPDATE: I just came across the "Volatility and RegRipper User Manual" by Mark Morgan, detailing how to use volatility and RegRipper together to carve out registry information from volatile memory. I'll definitely have to pick back up here when I get back to all this. 

So, a lengthy to-do list ahead of me, but I'm hoping to get back to this soon.

References:

1. RAM size info (from registry):

1a. Enumerating Registry Hives, http://moyix.blogspot.com/2008/02/enumerating-registry-hives.html
1b. Cell Index Translation, http://moyix.blogspot.com/2008/02/cell-index-translation.html
1c. Challenges in Carving Registry Hives from Memory, http://moyix.blogspot.com/2007/09/challenges-in-carving-registry-hives.html
1d. RegRipper and Volatility Prototype, http://moyix.blogspot.com/2009/03/regripper-and-volatility-prototype.html
1e. Types of registry data, http://kb.chemtable.com/en/types-of-registry-data.htm
1f. Forensic analysis of the Windows registry in memory, http://dfrws.org/2008/proceedings/p26-dolan-gavitt.pdf

2. OS version info (from kernel)

2a. Finding Kernel Global Variables in Windows, http://moyix.blogspot.com/2008/04/finding-kernel-global-variables-in.html
2b. Identifying Memory Images, http://gleeda.blogspot.com/2010/12/identifying-memory-images.html
2c. Windows operating systems agnostic memory analysis, http://www.dfrws.org/2010/proceedings/2010-306.pdf
2d. Basis – DBGKD_GET_VERSION64 and KDDEBUGGER_DATA64 http://antirootkit.wordpress.com/2009/03/31/basis-dbgkd_get_version64-and-kddebugger_data64/

3. General:

3a. NIST sample memory images, http://www.cfreds.nist.gov/mem/Basic_Memory_Images.html
3b. Volatility framework, https://www.volatilesystems.com/default/volatility
3c. Russinovich ME, Solomon DA. Microsoft Windows internals, Fourth edition: Microsoft Windows Server(TM) 2003, Windows XP, and Windows 2000 (pro-developer), http://www.microsoft.com/learning/en/us/book.aspx?ID=6710
3d. RegRipper, (Harlan Carvey) https://github.com/keydet89/RegRipper2.8
3e. Volatility and RegRipper User Manual, http://volatility.googlecode.com/files/How%20to%20use%20Volatility_v2.pdf 

Thursday 17 February 2011

my entry into the blogosphere...

Ok, so I decided to start documenting some of my stumbling through research online, mostly for my own benefit, but hopefully I can manage to pull together some info that might help other people out there. I don't have any immediate plans for creating any sort of focused body of research here, I just plan to post information on whatever I'm currently working on (or interested in). This will probably entail information pertaining to the world of digital forensics, for what it's worth.