Enfuse Session – Forensics Matters in Security: 360 Degree Visibility
“You guys are like the James Bond of IT… flying under the radar and doing cool things,” said Guidance Software’s Paul Shomo to open his Enfuse 2016 presentation, “Forensics Matters in Security: 360 Degree Visibility.” It was then and there I knew we were in for an interesting session. What initially piqued my interest was Shomo’s non-traditional background compared to other speakers at the conference: it largely revolved around research and development with a focus on forensics. He released his first incident response product in 2006 and later moved on to managing strategic partnerships, where he was responsible for platform integrations and for acquiring new technologies to fold into existing products.
After Shomo’s brief introduction, he outlined the key themes of his talk. We would be given the stages of a security breach, relate this sequence to the “cyber kill chain,” and map out the pertinent forensic evidence left in the wake of an attack. Additionally, Shomo touched on an article he recently wrote for InformationWeek, “Why Security Investors Should Care About Forensic Research,” further hammering home the importance of our field.
The cyber kill chain he referenced is a conceptual model developed by Lockheed Martin detailing the phases of a security breach. The phases are as follows:
Reconnaissance: Gathering intelligence about a target, often using publicly available resources
Weaponization: Building the right tools to attack the target with
Delivery: Getting the code through the secure perimeter of the target
Exploitation: Taking advantage of a vulnerability in order to execute
Installation: Installation of the malicious code onto the target machine
Command and Control: Looking for data, spreading to additional assets, elevating privileges
Actions on Intent (or Exfiltration): Leaving the system quietly, or destroying it altogether
Shomo noted that the cyber kill chain model misses some key points; most notably, the Command and Control stage is truly the lengthiest and most important part, where the most damage is inflicted. The original model fails to highlight this, displaying the stage as equal in weight to the others. To make up for these shortcomings, Shomo pitched a new way to look at the timeline of a breach: rather than thinking linearly, he suggested viewing an attack as a circular, cyclic graph with multiple layers, reinforcing depth of visibility when responding to a security breach.
At face value, we see what the operating system wants us to see through APIs, log files, and repositories. While useful and easily found, these artifacts do not provide the depth of visibility that Shomo is advocating. To dig deeper, we must look to Locard’s exchange principle: it’s not about what’s right in front of us, it’s about the residual data left behind. These could include items like .lnk files, which record which file was last opened, when, and from what drive, among other details. We need what Shomo’s slides called bare metal access: digging into physical memory using memory forensic techniques and analyzing disk clusters directly.
With that baseline established, we can look at the new temporal model for the stages of a security breach. I noticed he had eliminated the Recon step from this model altogether; I assume it’s because this phase speaks for itself. We therefore begin with Delivery. The Delivery phase usually takes no more than a couple of days. Most of the time, attacks originate from planted USB drives and phishing emails sent to employees, so we immediately know to look for web history, download history, and USB history.
Next was the Exploitation stage, which in reality lasts only a few seconds. Since this is the phase where the malicious content actively takes advantage of the system, we know to look at process history, file history (such as Prefetch files, MFT data, and Shim cache data), and to pay attention to timestamps to see if files were moved, copied, or otherwise altered.
Once the system is exploited, the malicious code can install itself. Interestingly enough, this installed code is probably a new weapon, not of the same nature as the one used to load it onto the system. This rogue code will be designed for long-term effectiveness. The attacker may even package multiple malicious programs so that the operation can continue in the event that one is discovered. To do its work, the malicious code will need to connect to a Command and Control point on the internet to download binaries; because of this activity, it will probably also have the capability to cover its own tracks. This sends us to the MFT during analysis to look for evidence of such activity.
Then we move on to the most important part of the model: the Command and Control phase. Now that the malware has established itself, it will probably try to spread to other systems on the network, climbing the ladder of increasingly valuable assets and escalating privileges along the way. This phase can take months, or in some circumstances even years if it continues to run without detection; it usually takes up about 90% of a breach’s timetable. Valuable assets could include high-ranking employees or high-value network devices such as the domain controller. Simultaneously with this movement comes the active search for sensitive data.
While this is the most damaging phase, there is a bright side: it often leaves investigators with a plethora of forensic evidence. Pertinent evidence can include network log files, processes, obscured files, remote session activity, volatile memory, application data, deep file deletion, anti-forensic techniques (e.g. missing traces of files, timestamp manipulation), file usage statistics, user-triggered processes, file movement and access, and all sorts of log files.
With all this evidence, it is easy to get caught up in strictly looking for malware. But a big point Shomo made was the need for “not just malware centric thinking, but to have data centric thinking” during breaches: think about where your organization’s sensitive data is (is it in local repositories? In the cloud?) and what exactly that data is. This is a good opportunity for risk management and governance groups to work with the IT and Information Security departments.
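Data-centric thinking can start with something as simple as knowing what the organization’s sensitive data looks like on disk. The sketch below (my illustration, not from the talk) counts hits for a couple of hypothetical patterns; real data-loss-prevention rule sets are far richer:

```python
import re

# Hypothetical patterns for illustration only: US SSNs and 16-digit card numbers.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
}

def find_sensitive_data(text: str) -> dict:
    """Return a count of sensitive-data pattern hits per category in a text blob."""
    return {name: len(pat.findall(text)) for name, pat in SENSITIVE_PATTERNS.items()}
```

Running a scan like this over local repositories and file shares before an incident tells you exactly which systems matter most when the Command and Control phase is underway.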
Finally, we have the Exfiltration stage. As mentioned, this is where the malicious code abandons the system once its job is complete. This can occur quietly, or leave a path of destruction. Here we can look for user file activity, evidence of password scraping, and encrypted or packaged files that may have been staged for transfer through the network.
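One common way to spot staged encrypted or compressed archives is a simple entropy test: encrypted data looks nearly random, so its Shannon entropy approaches the 8 bits-per-byte maximum. A minimal sketch, where the 7.5 threshold is my illustrative assumption rather than a standard:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Heuristic: near-maximal entropy suggests encrypted or compressed content.

    The 7.5 bits/byte cutoff is an assumption for illustration; tune it
    against known-good samples, since compressed media also scores high.
    """
    return shannon_entropy(data) >= threshold
```

The main caveat is false positives: legitimate ZIPs, JPEGs, and videos are also high-entropy, so this flags candidates for review rather than proving exfiltration.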
Navigating this multi-layered, in-depth model can be overwhelming, but Shomo highlighted a few main points. The first is the sheer importance of the Command and Control phase: as the longest-lasting phase, and the one where the attacker does the most damage, it leaves a wealth of forensic artifacts for investigators to find during their analysis. But it is depth of visibility during the analysis that is extremely important: if the investigation does not include the lower-level artifacts that require additional skill sets and focus to analyze, it will miss a multitude of useful evidence. And lastly, there is the “data centric thinking” Shomo mentioned: knowing exactly where the organization’s sensitive data is, and keeping that in mind during an investigation, is extremely important.
Shomo’s presentation was engaging and could apply to almost any investigator. He recognized a problem in a simple flowchart familiar to digital forensic investigators, took a step back, and used it to create a new, more dynamic approach to security breaches, one centered on the importance of forensic research and deep forensic analysis.