Tool Output Precision Testing for Fixed-Size Artifacts

I’ve been working a lot with timelines recently, and I’ve been wanting to add records from non-resident INDX ($I30) attributes since log2timeline doesn’t seem to parse them. Someone please correct me if I’ve missed the parser or plugin for this.

Will Ballenthin has written a tool for this, which is quite lovely. It relies on ewfmount, which only mounts .E01 files. I need a solution if I get something else like a .vmdk from a client. TZWorks's wisp works for .vmdk and dd images, but I didn't get nearly as many results as with Will's tool. Of course, more is not necessarily better, but it warrants further investigation. Bulk_extractor-rec supports .raw, .img, .dd, .001, .000, .vmdk, .E01, and .aff. That could be great.

I used David Cowen's File Server image from Defcon 2018 for this post. I've really been liking it and the other Defcon images for my testing recently. Wisp, with valid and slack entries, produced 1,589 rows in a CSV. I need to go back and try the hexdump option for further testing, but I abandoned wisp at this point because Will's tool produced 542,443 rows. Bulk_extractor-rec's ntfsindx plugin produced 504,521. Closer, but what accounts for the difference? Are there duplicate values from Will's tool, or did bulk_extractor-rec not find as many results?

Bulk_extractor-rec creates three files for different INDX formats: INDX, INDX_Misc, and INDX_ObjID-O. So in reality I got 504,521, 3,378, and 385 rows, respectively. Still a big gap from 542k. How can I cross-check these? Instead of comparing CSV files, I decided to go back to the raw output. We know INDX records are 4096 bytes. What if I split the records and hash them? So in Linux I execute split -b 4096 followed by rhash --md5 -p '%h,%p\n' -r ./ > ../hashes.csv. I do this for both Will's tool's output and bulk_extractor-rec's output. I upload the CSVs to Google Sheets. I'm sure there's a way to do the next step with Linux kung fu or Beyond Compare instead, but Sheets it is for now.
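The split-and-hash step above can be sketched end to end. This is a minimal demonstration with hypothetical paths under /tmp and random bytes standing in for a tool's carved output; it uses coreutils' md5sum in place of rhash (same MD5 digests, no extra install), and emits the same "hash,filename" rows.

```shell
set -e
rm -rf /tmp/indx_demo && mkdir -p /tmp/indx_demo/chunks
cd /tmp/indx_demo

# Stand-in for a tool's raw carved output: three concatenated
# 4096-byte INDX records (random bytes for demo purposes).
head -c 12288 /dev/urandom > carved.bin

# One file per fixed-size INDX record.
split -b 4096 carved.bin chunks/rec_

# Hash every chunk as "md5,filename" rows for the spreadsheet comparison.
for f in chunks/rec_*; do
  printf '%s,%s\n' "$(md5sum "$f" | cut -d' ' -f1)" "$f"
done > hashes.csv
```

Run once per tool's output, writing each hashes.csv to its own location, and you have two sheets ready to compare.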

I wanted to check for duplicates within the sheets first. The first column contains hashes, and the second column contains file names from the split command. I give the third column this formula to check for duplicates: =COUNTIF($A$1:$A$10000, A1). There are many duplicates in each sheet. Now, I want to check whether each tool got the same coverage as the other (i.e. did either tool produce a hash that the other did not?). In the fourth column I use this formula to check for duplicates across sheets: =COUNTIF(bulkextractor!$A$1:$A$10000, A1). There were no zeros in this column for either sheet, meaning each tool's output included every unique INDX record the other tool produced. Conclusion: with more testing, I can feel good about the output of bulk_extractor-rec. The limitation of this method is it doesn't really include validation. I don't know if I got all INDX records from an image. Remember precision vs. accuracy in physics class? I would say this measures precision.
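For anyone who wants the "Linux kung fu" version of the cross-sheet COUNTIF, comm over sorted, de-duplicated hash columns does the same job. A minimal sketch with hypothetical stand-in CSVs (two rows per tool instead of half a million):

```shell
set -e
cd /tmp
# Hypothetical "md5,chunk" rows standing in for each tool's hashes.csv.
printf 'aa,rec_1\nbb,rec_2\nbb,rec_3\n' > tool_a.csv
printf 'bb,rec_4\ncc,rec_5\n' > tool_b.csv

# Column 1 is the hash; sort -u collapses within-tool duplicates
# (comm also requires sorted input).
cut -d, -f1 tool_a.csv | sort -u > a.hashes
cut -d, -f1 tool_b.csv | sort -u > b.hashes

# Hashes only one tool found; empty output both ways means the
# tools have identical coverage.
comm -23 a.hashes b.hashes > a_only.txt   # unique to tool A
comm -13 a.hashes b.hashes > b_only.txt   # unique to tool B
```

With the demo data, a_only.txt holds aa and b_only.txt holds cc; on the real output both files would be empty, matching the no-zeros COUNTIF result.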

What if I got hashes one tool had but not the other? I would put those files in a directory and run log2timeline or another parsing tool against them. Then, I would seek to answer the question, “do these events add value to my timeline?”
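That gather-the-differences step could look something like this. Everything here is hypothetical demo data: the point is just to copy each chunk whose hash only one tool produced into a directory that can then be handed to log2timeline as-is.

```shell
set -e
rm -rf /tmp/indx_diff && mkdir -p /tmp/indx_diff/chunks /tmp/indx_diff/a_only
cd /tmp/indx_diff

# Hypothetical demo data: one chunk appears only in tool A's hash list.
printf 'demo' > chunks/rec_aa
printf 'demo' > chunks/rec_bb
printf 'aa,chunks/rec_aa\nbb,chunks/rec_bb\n' > tool_a.csv
printf 'bb,chunks/rec_bb\n' > tool_b.csv

cut -d, -f1 tool_a.csv | sort -u > a.hashes
cut -d, -f1 tool_b.csv | sort -u > b.hashes

# For each hash only tool A produced, copy its chunk into a_only/,
# ready to be parsed separately and compared against the timeline.
comm -23 a.hashes b.hashes | while read -r h; do
  grep "^$h," tool_a.csv | cut -d, -f2 | xargs -I{} cp {} a_only/
done
```

Only rec_aa lands in a_only/; running log2timeline against that directory then answers whether the extra records add events worth keeping.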

I'm not sure this was the easiest way to do this, but it works. This could also be applied to MFT records, since we know those are almost always 1024 bytes. It would answer the question "Does bulk_extractor-rec's ntfsmft plugin add value to my timeline?" This would also answer a similar question for non-duplicate artifacts found in volume shadow copies. Or maybe I need to learn more about log2timeline internals, and it already has this functionality. If you've got a resource on the internals, I'd be interested in reading/watching.

Relevant reading:
Elrick. (2014). Forensic Examination of Windows-Supported File Systems
Carrier. (2005). File System Forensic Analysis

My opinions are my own and may not represent those of my employer.
