Introduction to Cloud Forensics Update
Following initial research into the various cloud storage services involved in this project, the Cloud Forensics team has been assessing every avenue it can take to analyze as many artifacts as possible. We have acquired an external hard drive to store images related to the project so that we can analyze the large wealth of data we intend to gather without reserving multiple workstations. The virtual machines we are using for data generation have been transferred to an ESXi server so that each group can access the virtual machine for its respective service from any VMware vSphere client. Now that our procedure is set, Cloud Forensics will be moving on to data generation (datagen), which should be completed fairly soon.
During the last week, the OneDrive group has focused primarily on finishing our datagen script. We plan to look at four different file extensions: .docx, .jpg, .mp3, and .pdf. We will also be observing the artifacts left behind in the browser and those left behind by the local client.
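A datagen step like the one described above can be sketched in a few lines of Python. This is a minimal illustration, not the team's actual script: the folder path, file names, and placeholder content are all assumptions.

```python
# Hypothetical datagen sketch: create one file per target extension inside
# the synced OneDrive folder so each file type leaves artifacts behind.
from pathlib import Path

EXTENSIONS = [".docx", ".jpg", ".mp3", ".pdf"]

def generate_test_files(sync_dir: str) -> list[Path]:
    """Create one placeholder file per extension and return their paths."""
    root = Path(sync_dir)
    root.mkdir(parents=True, exist_ok=True)
    created = []
    for ext in EXTENSIONS:
        path = root / f"datagen_sample{ext}"
        # Placeholder bytes only; a real datagen run would write valid
        # content for each format so the client uploads usable files.
        path.write_bytes(b"datagen placeholder\n")
        created.append(path)
    return created
```

Running this against the local sync folder would give the client four known files to upload, one per extension under study.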
Because iCloud is so integrated with Apple products, our team has decided to advance its investigation by analyzing the contents of a Mac instead of the Windows workstations used by everyone else. More specifically, we will be studying how files change in response to actions like opening, editing, and deleting. To organize those operations, we have created a datagen sheet that describes the steps taken in detail. We have already identified likely locations to look for the changes in the data when it comes time for analysis.
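One way to track how a file changes across operations like those in the datagen sheet is to snapshot its metadata before and after each action and compare the two. This is a minimal sketch assuming a POSIX-like filesystem; the field names are our own, not Apple's.

```python
# Capture the file metadata that open/edit/delete actions are expected
# to modify, so before/after snapshots can be diffed during analysis.
import os
from datetime import datetime, timezone

def snapshot(path: str) -> dict:
    """Record size and timestamps for one file at one point in time."""
    st = os.stat(path)
    return {
        "size": st.st_size,
        "modified": datetime.fromtimestamp(st.st_mtime, tz=timezone.utc),
        "accessed": datetime.fromtimestamp(st.st_atime, tz=timezone.utc),
    }
```

Taking a snapshot before and after an edit, for example, shows whether the change was reflected in size and modification time, which helps confirm that each datagen step actually produced an observable change.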
The Google Drive team has realized that its portion of the project may deviate from the rest of the cloud services. Google Drive's desktop client is extremely barebones, essentially revolving around one folder created during installation. There is no local client available to edit or view the files; instead, Google Drive stores the files using its own unique extensions, such as .gdoc or .gslides. These files are tiny, containing only a URL, doc_id, email, and resource_id. When one is opened, the default web browser takes the user to a web page for the document, or to a sign-in page if no active session for the specified email is found. Now that we have a definitive script and a better understanding of Drive's functions, we will use this week to finish all data generation and begin our analysis of the results.
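A .gdoc placeholder of the kind described above is a small text file, and pulling out its pointer fields is straightforward. The sketch below assumes the file's content is JSON carrying the four fields named in the paragraph; the sample values in the test are illustrative, not taken from a real account.

```python
# Extract the pointer fields from a .gdoc placeholder file. The file does
# not contain the document itself -- only enough to locate it online.
import json

def parse_gdoc(raw: str) -> dict:
    """Return the URL and identifier fields found in a .gdoc placeholder."""
    data = json.loads(raw)
    # Missing fields come back as None rather than raising.
    return {key: data.get(key) for key in ("url", "doc_id", "email", "resource_id")}
```

For an examiner, these fields are useful on their own: the email ties the file to an account, and the doc_id/resource_id identify the cloud-side document even though no content is stored locally.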
We are placing bets that Dropbox will leave a considerable amount of information behind, especially given how much maintenance comes with updating application databases. Because Dropbox supports many platforms, each with its own architecture, the data specific to each platform is kept in its respective place. This is instrumental in the forensic analysis of Dropbox. We expect to recover thumbnails, log files of account access, and some deleted files, among other things. We are itching to continue; this is a very exciting part of the process.
Our new ESXi framework will make data generation more efficient, but first we have to configure user authorizations properly. Team members need enough access to retrieve and image the VMs and analyze their findings, but that doesn't mean everybody gets free administrator permissions! Everyone is raring to begin testing, and most teams have already started their datagen processes, but once we iron out the ESXi housekeeping we should be able to continue uninhibited. After data generation, we'll draw on all of our preliminary research to seek out any artifacts left behind by the cloud services.
We’re all very eager to share our findings with you! Remember to send us any questions or comments you may have about the project by contacting the LCDI through their Twitter, Facebook, or via email at email@example.com.