In searching for DFIR tools over the years, I have found lots of “Top 10” lists. Many of these lists could use a few improvements. Here are some pointers on what to look for when searching for the “best DFIR tools”.
Any list titled “Top Ten” without an accompanying specific purpose is not as useful as a list that is specific. “Top 10 Registry Forensics Tools” is much better than “Top Ten Forensic Tools”. A generic list is practically useless if you are looking for something to accomplish a specific task. I’ve also seen lists with information so incorrect that the writer could not possibly have tried the tools chosen as a “top 10”, along with tools that were clearly inappropriate for the list.
Even more specific, and just as important, is the licensing of the software. Most lists that I have seen are a combination of commercial, shareware, freeware, and open source software. A much better list than “Top 10 Digital Forensic Tools” would be “Top 10 Open Source Digital Forensic Tools” or “Top 10 Commercial Digital Forensic Tools”. Some lists that I have seen include tools that are “free” but not for commercial use. This is a problem! If you take a tool that is free for personal/home use and use it in a commercial/legal matter, you could end up violating the End User License Agreement. Being a stickler for licensing details is not a bad thing.
So, if I am looking for open source registry forensics software, I would much rather find a list titled “Top 10 Open Source Registry Forensics Tools” than “Top 10 Digital Forensics Tools”. Even a list titled “Top 10 Registry Forensics Tools” would be better than a generic list.
Apples and Oranges Comparisons
A few lists that I have seen which appeared to be specific, such as comparisons of forensic suites, have chosen incomparable tools. For example, pitting a non-forensic tool against a forensic suite isn’t a comparison of like tools. You can’t justify racing a Ford Mustang against a semi-truck in the quarter-mile, but I have seen it on a few occasions. A comparison of two different things with two different purposes is not a comparison.
Suite vs Suite
Anyone doing DFIR for more than a few years knows that each forensic suite does something better than the other forensic suites. That just means that some things in Suite A are better than in Suite B, some things in Suite B are better than in Suite C, and some things in Suite C are better than in Suite A (A>B, B>C, C>A).
Lists that compare suites and rank one as generally better than another do a disservice unless the top tool does everything better than the tools lower on the list. I don’t find that to be accurate or even possible. I have favorite suites, but none of my favorite suites do everything better than every other suite.
Getting drawn into a "Top 10" list of anything is easy enough, especially when the list name is so generic that it will always appear to cover what you are looking for. If you need a VSS forensic tool, and you see "Top 10 Digital Forensic Tools", well...that should cover what you need, right?
Poorly Titled Lists
Top 10 Digital Forensics Tools
Top 10 Open Source Forensic Tools
Top 10 Digital Forensic Suites
Better Titled Lists
Top 10 Open Source Registry Forensic Tools
Top 10 Commercial Forensic Suites for Email Analysis
Top 10 Open Source Suites for Internet Analysis
Is one better than the other?
Perhaps the most glaring problem with any Top Ten list is personal preference and bias. You can’t get away from this. If two tools can analyze email with the same results, why is one typically rated higher than the other? Personal preference or bias (a paid review, or the reviewer simply liking one tool over another) might be the reason. If the list writer doesn’t disclose bias in the list, then you have no idea what puts one tool over another.
Coming up next!
I am creating a regular series of “Top DFIR Software” lists, based on me actually turning on the software and honestly comparing tools against each other. Some lists may be “Top 5” or “Top 3” if there aren’t that many tools to review for a specific topic. Which brings me to the specificity of the lists…
There will be “Suite A is better than Suite B for email analysis”, and “Suite B is better than Suite A for Internet analysis”, and so forth. By better, I mean as defined by a set of criteria such as (1) results oriented, (2) personal preference, (3) speed, (4) cost, (5) ease of use, and other factors that are clearly defined in the list so you know exactly why I rank tools in any order. If you know the bias and rationale behind how a list is made, you can better judge the list and even reorganize it based on your needs.
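As a loose illustration of that last point, here is a minimal sketch in Python of how you could reorganize a list once the per-criterion scores are disclosed. The suite names, scores, criteria, and weights below are all made up for the example; the idea is simply that disclosed criteria let a reader re-rank with weights that reflect their own priorities:

```python
# Hypothetical example: re-ranking a disclosed "Top N" list with your own weights.
# All names, scores, and weights below are invented for illustration only.

# Each suite is scored 1-10 on each disclosed criterion.
tools = {
    "Suite A": {"results": 9, "speed": 6, "cost": 4, "ease_of_use": 8},
    "Suite B": {"results": 7, "speed": 9, "cost": 8, "ease_of_use": 5},
    "Suite C": {"results": 8, "speed": 7, "cost": 9, "ease_of_use": 7},
}

# Your weights reflect what matters to *you* (here, results and cost dominate).
weights = {"results": 0.4, "speed": 0.1, "cost": 0.3, "ease_of_use": 0.2}

def weighted_score(scores, weights):
    """Combine per-criterion scores into one weighted total."""
    return sum(scores[criterion] * w for criterion, w in weights.items())

# Sort suites from best to worst under your own weighting.
ranked = sorted(tools, key=lambda t: weighted_score(tools[t], weights), reverse=True)
print(ranked)  # → ['Suite C', 'Suite B', 'Suite A']
```

With different weights (say, speed over cost), the same disclosed scores would produce a different order, which is exactly why knowing the criteria matters more than knowing the final ranking.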
Your input on tool selection is welcome, along with opinions on tools to test and on tools that I have rated or will rate (via the contact form and polls that I create).