A forensic investigator's workflow is largely driven by the specific questions they are trying to answer. Common use cases for forensics include:
- A law enforcement officer has arrested an individual suspected of domestic terrorism and wants to identify all records of communication, internet activity and data related to prior or planned criminal activity.
- A breach investigation has identified evidence that an external attacker gained access to a corporate server housing sensitive intellectual property. Analysts wish to determine the initial means of access, whether any data was accessed or stolen, and whether the system was subject to any hostile activity (such as the introduction of malware).
How is it used and how does it work?
Kazanciyan said traditional computer forensics entails using specialized software to make an image of a subject system's hard drives and physical memory, and to automatically parse that image into human-recognizable formats. This allows an investigator to examine and search for specific types of files or application data (such as e-mails or web browser history), point-in-time data (such as the running processes or open network connections at the time of evidence acquisition), and remnants of historical activity (such as deleted files or recent activity).
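To make this concrete, one common artifact of this kind is a browser's history database. The sketch below, a simplified illustration rather than any specific tool's method, queries a Chromium-style `History` SQLite database for URLs matching a keyword of interest; a tiny in-memory stand-in with illustrative rows substitutes for a real file carved from a disk image, and the schema is reduced to a few representative columns.

```python
import sqlite3

# Stand-in for a browser "History" database recovered from a disk image.
# Real Chromium-style databases have more columns; this keeps only a few.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE urls (id INTEGER PRIMARY KEY, url TEXT, title TEXT, "
    "visit_count INTEGER, last_visit_time INTEGER)"
)
conn.executemany(
    "INSERT INTO urls (url, title, visit_count, last_visit_time) "
    "VALUES (?, ?, ?, ?)",
    [
        ("https://example.com/login", "Login", 12, 13370000000000000),
        ("https://example.org/docs", "Docs", 3, 13370000500000000),
    ],
)

def search_history(conn, keyword):
    """Return (url, title, visit_count) rows whose URL contains the keyword."""
    return conn.execute(
        "SELECT url, title, visit_count FROM urls WHERE url LIKE ?",
        (f"%{keyword}%",),
    ).fetchall()

hits = search_history(conn, "login")
```

In practice, forensic suites automate exactly this kind of extraction and search across many artifact types at once, rather than requiring hand-written queries per source.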
The extent to which deleted data and historical activity can be recovered depends on several factors, but recoverability generally degrades over time and with the volume of activity on a system, he said.
This approach to computer forensics remains suitable for focused, small-scale investigations, but is too time- and resource-intensive for enterprise-scale tasks, such as hunting across thousands of systems in a corporate environment, Kazanciyan said.
“As a result, technologies that facilitate rapid search and analysis of evidence across ‘live’ systems began to flourish in the past decade, and formed the foundation of what's referred to as the endpoint detection and response (EDR) market,” he said. EDR products typically provide some combination of the following capabilities:
- Continuous recording of key endpoint telemetry - such as executed processes or network connections - to provide a readily-available timeline of activity on a system. This is analogous to a black-box recorder on an airplane, he said. Access to such telemetry alleviates the need to reconstruct historical events via a system's native sources of evidence. It may be less useful in cases where investigation technology is deployed to an environment after a breach has already occurred.
- Analysis and search of a system's native forensic sources of evidence -- i.e., what's preserved by the operating system on its own during normal system operations. This includes the ability to run quick, targeted searches for files, processes, log entries, artifacts in memory and other evidence across systems at scale. It complements the use of a continuous event recorder and can be used to broaden the scope of an investigation and find additional leads that might not otherwise have been preserved.
- Alerting and detection. Products can proactively collect and analyze the sources of data cited above, and compare them to structured threat intelligence (such as Indicators of Compromise), rules or other heuristics intended to detect malicious activity.
- Evidence collection from individual hosts of interest. As investigators identify systems that warrant further inspection, they may conduct "deep-dive" evidence collection and analysis across the entirety of a subject system's historical telemetry (if present and recorded), files on disk and memory. Most organizations prefer to perform remote, triage-level analysis of live systems in lieu of comprehensive forensic imaging wherever possible, he said.
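The detection capability described above can be sketched in miniature. The example below is a hypothetical illustration, not any product's actual logic: it records process-execution telemetry as simple events and flags those matching a small set of made-up indicators (a known-bad file hash and a suspicious process name).

```python
from dataclasses import dataclass

@dataclass
class ProcessEvent:
    """One recorded process execution from endpoint telemetry."""
    host: str
    process_name: str
    file_hash: str

# Hypothetical indicators of compromise (illustrative values only).
IOC_HASHES = {"deadbeef1234"}          # known-bad file hash (truncated)
IOC_NAMES = {"mimikatz.exe"}           # suspicious process name

def match_iocs(events):
    """Return events whose file hash or process name matches an indicator."""
    return [
        e for e in events
        if e.file_hash in IOC_HASHES or e.process_name.lower() in IOC_NAMES
    ]

telemetry = [
    ProcessEvent("host-01", "explorer.exe", "aabbcc"),
    ProcessEvent("host-02", "Mimikatz.exe", "ddeeff"),
    ProcessEvent("host-03", "svchost.exe", "deadbeef1234"),
]

alerts = match_iocs(telemetry)
```

Real EDR products apply far richer rule languages and behavioral heuristics, but the core pattern - continuously recorded events checked against threat intelligence - is the same.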