Incident response timelines are an essential tool that every security analyst needs in their arsenal.
With a timeline, an analyst can study the events leading up to, during, and after a security incident and gradually put together a picture of what happened.
Sounds great, right? Unfortunately, timelining is a pain in the ass.
It often takes analysts hours of manual work to extract, combine, and wrestle with log files before they can piece together a usable timeline. Multiply this by the number of incidents the analyst investigates each week, and you have a serious time suck on your hands.
What if there was a better way?
When an incident occurs, one of the first things a security analyst does is build an incident response timeline. This typically involves extracting logs from various sources with a collection tool and combining them into a single, date- and time-ordered list. From there, the analyst can look for clues that may help them better understand the course of events before, during, and after the incident.
Using the timeline, an analyst can reconstruct the sequence of events and determine what happened, when, and in what order.
Typically, security analysts use Excel or a hodgepodge of open source scripts to build their timeline and analyze security data.
While conceptually simple, the process of building a timeline is often frustratingly long and complicated.
First, the analyst must collect the logs they need. If they have access to a SIEM, that may simply involve extracting the relevant datasets. However, many analysts, particularly those at smaller organizations or at a Managed Security Services Provider (MSSP) working on their behalf, don't have access to a SIEM. And even when a SIEM is available, almost every incident involves additional data that isn't being logged there.
Incident Responders use tools like KAPE or FireEye Redline to extract forensic artifacts and logs from various sources and combine them into CSV files. This process can be time-consuming, but it’s just the start of the problem. Often there are additional data artifacts that are outside of the reach of these tools.
Once all of the information is finally collected, the next step is to combine the log sources into a single, date-ordered file. Here’s where we hit the real crux of the problem. Consider the following log files:
Log file #1 - File changes:
Log file #2 - HTTP records:
Log file #3 - DNS records:
The data in each of these files has been simplified from around 25 columns to just six.
Each of these files has its own set of columns, headers, data formats, timestamps, etc. Before an analyst can combine the datasets and gain any meaningful insight into the course of events, they need to convert each file into a standard format.
As any IT analyst will testify, this process is far from straightforward. It can take an hour or more to complete… and must be repeated for practically every security incident.
The most popular tool for incident response timelining is Microsoft Excel—not because it’s well suited to the process, but because security analysts haven’t had access to anything better. More on that later.
The simplest approach is to standardize the timestamps across all files and combine the remaining columns within each dataset into a single summary column. You can do this with simple string concatenation. In Excel, that's either the & operator or the CONCATENATE function.
To combine each row (excluding the timestamp) into a single cell, use a simple formula:
=B2&" | "&C2&" | "&D2&" | "&E2&" | "&F2
Or alternatively, using CONCATENATE:
=CONCATENATE(B2," | ",C2," | ",D2," | ",E2," | ",F2)
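The same per-row summarization can be sketched outside Excel. Here's a minimal Python version; the file name and column layout are hypothetical, assuming only that the timestamp is the first column:

```python
import csv

def summarize_rows(path):
    """Collapse every column after the timestamp into a single
    ' | '-joined summary field, mirroring the Excel formula."""
    out = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        for row in reader:
            out.append([row[0], " | ".join(row[1:])])
    return out
```

Because the join ignores how many columns follow the timestamp, the same function works unchanged on each of the differently shaped log files.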
For ease of comprehension, you might also add a column noting which log source each row comes from. After this process, each dataset should look something like this:
Once you have all log files in this format, combine them in a single Excel file and sort the dataset by timestamp running from earliest to latest.
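The combine-and-sort step can likewise be sketched in Python. This assumes each log has already been reduced to [timestamp, summary] rows; the source names below are hypothetical:

```python
def merge_timelines(sources):
    """Merge summarized logs into one timeline, tagging each row with
    its log source and sorting earliest to latest by numeric timestamp."""
    combined = []
    for source_name, rows in sources.items():
        for timestamp, summary in rows:
            combined.append([float(timestamp), source_name, summary])
    combined.sort(key=lambda row: row[0])
    return combined

timeline = merge_timelines({
    "HTTP": [["200", "GET /index.html"]],
    "DNS": [["100", "query evil.example.com"]],
})
```

Sorting on the numeric timestamp, rather than a formatted date string, avoids the usual Excel trap of text-sorted dates.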
There’s one final problem: UNIX timestamps, common in security logs, mean very little to humans. Unfortunately, Excel has no native function to convert UNIX time to UTC or any other timezone. So again, you need a workaround. To make the conversion, you can use the following formula:
=(A2/86400)+DATE(1970,1,1)
UNIX time counts the number of seconds elapsed since 00:00:00 UTC on 1 January 1970 (the 'epoch'). To convert a UNIX timestamp into Excel time, the formula above divides the timestamp by the number of seconds in a day (86400) and adds the Excel serial value for the epoch (DATE(1970,1,1), which equals 25569). So, for example, the first timestamp in the list above converts like this:
1437839629.14031 ÷ 86400 = 16641.6623743
+ 25569 = 42210.6623743
When formatted as a date in Excel using the custom format: dd/mm/yyyy hh:mm:ss, this provides a human-readable date and time:
25/07/2015 15:53:49
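Outside Excel, the same conversion is a one-liner. A minimal Python sketch (the function name is ours):

```python
from datetime import datetime, timezone

def unix_to_utc(ts):
    """Render a UNIX timestamp (seconds since 1970-01-01 00:00:00 UTC)
    as dd/mm/yyyy hh:mm:ss in UTC."""
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%d/%m/%Y %H:%M:%S")
```

Running it on the timestamp from the worked example, unix_to_utc(1437839629.14031) returns the same "25/07/2015 15:53:49".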
Copy and paste (or autofill) the formula across all rows, and you finally have a usable dataset that includes human-readable timestamps:
Now, finally, you can go about the business of investigating the incident timeline.
Converting from UNIX time into UTC is annoying, but there aren’t many opportunities for mistakes. Unfortunately, the same can’t be said for converting between time zones.
Often, security analysts have to work with log datasets from more than one geographic location. While UNIX time is timezone agnostic, not all log sources use it. Instead, they may use a typical date and time entry. To make sense of a combined dataset, you’ll again need to convert all times into a standard time format—most likely, UTC.
This isn’t complicated, but it takes more time, and it’s easy to make mistakes.
You’ll need to determine how many hours each dataset is offset from UTC. For example, Tokyo operates on Japan Standard Time (JST), which is UTC+9, so converting JST to UTC means subtracting nine hours (a time adjustment of -9). Add columns to note the location, time adjustment, and new timestamp.
Once that’s done, use this simple formula to make the conversion:
=timestamp+(hours/24)
By replacing the placeholders with the relevant cell references, you can add the standardized UTC timestamp to each dataset. In the example below, column D contains the formula:
=A2+(C2/24)
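The same shift can be sketched in Python. Here the offset is taken as the zone's offset from UTC (e.g. 9 for JST) and subtracted, which is equivalent to storing a negative adjustment in the Excel version; the function name and date format are assumptions:

```python
from datetime import datetime, timedelta

def local_to_utc(local_str, utc_offset_hours, fmt="%d/%m/%Y %H:%M:%S"):
    """Convert a naive local timestamp string to UTC by subtracting the
    zone's offset from UTC (e.g. 9 for JST, which is UTC+9)."""
    local = datetime.strptime(local_str, fmt)
    return (local - timedelta(hours=utc_offset_hours)).strftime(fmt)
```

For example, local_to_utc("26/07/2015 00:53:49", 9) yields "25/07/2015 15:53:49". Note this applies a fixed offset only; it does not handle daylight saving time.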
As you can imagine, there are plenty of ways to go wrong here. Daylight saving time, in particular, causes problems, because different parts of the world enter and leave it at different times of year. You must be sure you're applying the correct time adjustment to each dataset for the specific date range it covers.
In the process described above, we didn’t even cover all the fiddly extra steps needed to make each timeline ‘work’. Anyone accustomed to Excel will be familiar with copying and pasting datasets into new tabs, altering, combining, or separating fields, formatting and reformatting cells, adding SUM, COUNT, and IF statements… the list goes on (and on, and on).
All told, building a single timeline file can take an hour or more. For particularly complex timelines that combine many different log sources, it can take much longer. And, when your job requires you to build a timeline for almost every incident you investigate… it becomes a huge time suck.
What if you could complete the entire timelining process in under 30 seconds? That’s precisely what Gigasheet’s incident response timeline tool allows you to do.
WAIT: Turned off by sales pitches? Had enough of the constant barrage of marketing messages that haunt the security world? Don’t worry. We aren’t trying to sell you anything. We're looking for beta testers to help us make Gigasheet—a billion cell spreadsheet designed for security data—into the ultimate tool for security analysts. Read to the end to register as a beta tester.
Here’s a quick walkthrough of the timelining process using Gigasheet:
Step 1: Upload all log files into Gigasheet (we’re using the same simplified logs as above as an example).
Step 2: In the ‘Your Files’ view, select all relevant log sources and click the Timeline button.
Step 3: Name each log source and select the timezone it was recorded in.
Step 4: Click Create Timeline.
That’s… well, that’s it.
Now you have the same file we built using Excel, and it took about 35 seconds. If you're working with dozens of log files, it could take a little longer for them all to upload (pro tip: zip the files before uploading; Gigasheet will decompress them on the fly)… but we're talking a few minutes MAX.
From here, you’re free to complete your incident investigation as usual. You can easily extract the timeline as a CSV file… or you could just use Gigasheet instead, which solves many of the frustrations security analysts face when working with security data in Excel or Google Sheets.
Gigasheet is a no-code, billion cell spreadsheet designed specifically for security datasets. It understands IP addresses, IoCs, email addresses, and other common security data, so you won’t waste time reformatting, splitting, or concatenating columns just to answer basic questions.
(You also won’t waste time building incident response timelines. Did we mention that already?)
If you’re used to working with spreadsheets, you’ll be up and running with Gigasheet in minutes… and seeing exactly where you’ll be saving time from then on. Gigasheet has all the spreadsheet functions you’d expect, plus built-in analytics for incident response and threat hunting to help you quickly identify patterns.
Not sold yet? Gigasheet also allows you to enrich IoCs with threat intelligence from leading vendors and works out-of-the-box with open source threat feeds.
We’re looking for beta testers right now to help us make Gigasheet the best it can be. We want to know exactly what problems you face as a security analyst so we can make sure Gigasheet is equipped to solve as many of them as possible.
If you’d like to help us make Gigasheet the ideal solution to your problems, sign up to join the hundreds of other analysts who use Gigasheet.