Appropriate Response Mechanisms When a Data Breach Occurs
By John Attala, Director, North America, Endace
There’s a huge imbalance between attackers and defenders when it comes to protecting the corporate network. Defenders must protect against a myriad of threats, while an attacker only needs to find one vulnerability to gain a foothold in the network. Once past the layers of real-time protection, sophisticated attackers can take their time to accomplish their objectives – whether that is disruption, intellectual property theft or fraud – remaining undetected, often for months.
Inevitably, a skilled attacker will make their way past real-time defenses, however good. The challenge is being able to detect them quickly when they do and respond before they have the time to cause major damage.
In 2014, Microsoft coined the phrase ‘Assume Breach,’ which advocates assuming that a breach has already occurred and acting accordingly. It emphasizes detection and response rather than focusing exclusively on prevention. This message, albeit slowly, seems to be starting to filter through.
Last year, Gartner reported that enterprise security spending is shifting from pure prevention towards detection and response, and predicted spending on enhancing detection and response capabilities would be a key priority for security buyers until the end of 2020.
However, judging by the number of breaches being reported, it appears this shift may still not be happening quickly enough. Organizations are regularly failing to detect intruders in time to prevent serious breaches from occurring.
And when businesses do report a breach, it frequently transpires that their initial assessment understated the breach’s impact. The media is full of stories reporting yet another revised estimate of the scope of a breach. It is not uncommon for breach revisions to continue for months after the initial report.
So why are these breaches happening? And what can be done about them?
The Tyranny of the Urgent
In many cases, the sheer volume of alerts that real-time security tools are generating is overwhelming security teams. So much so, the industry has invented the term “alert fatigue” to describe the problem.
According to McAfee Labs’ December 2016 Quarterly Threat Report:
“Most organizations are overwhelmed by alerts, and 93% are unable to triage all relevant threats. On average, organizations are unable to sufficiently investigate 25% of their alerts, with no significant variation by country or company size.”
“Because of the time needed to manually investigate each alert to determine whether it is really critical or a false positive, teams are falling behind on alerts – creating a huge backlog of unworked tickets. This is a strong reason why dwell time for breaches is over six months.”
The report goes on to say that almost half of the alerts generated by security tools (46%) are automatically classified as “critical”, nearly one third (31%) turn out to be false positives, and over half (52%) are mis-prioritized.
Curing Alert Fatigue
Put simply, the solution to alert fatigue is to improve the accuracy of the alerts being generated and reduce the time it takes to investigate them.
This means reducing or eliminating false positives and providing better context about alerts so they can be triaged and prioritized accurately. It requires knowing what vulnerabilities attackers might target, and what “crown jewels” most need to be protected so that teams can prioritize response to attacks focused on these.
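To make the idea concrete, here is a minimal sketch of that kind of triage logic. The asset names, severity weights, and score values are all illustrative assumptions, not a real product’s scoring model: alerts against known-vulnerable systems and “crown jewel” assets float to the top of the queue.

```python
# Hypothetical triage sketch: rank alerts so attacks on critical or
# exploitable assets are investigated first. All names and weights
# below are made-up assumptions for illustration.

CROWN_JEWELS = {"billing-db", "hr-fileserver"}    # assets that most need protecting
KNOWN_VULNERABLE = {"legacy-ftp", "billing-db"}   # e.g. from a vulnerability scan

def triage_score(alert: dict) -> int:
    """Higher score = investigate sooner."""
    score = {"low": 1, "medium": 2, "high": 3}[alert["severity"]]
    if alert["target"] in CROWN_JEWELS:
        score += 3    # attack aimed at a critical asset
    if alert["target"] in KNOWN_VULNERABLE:
        score += 2    # target is actually exploitable
    return score

alerts = [
    {"id": 1, "severity": "high", "target": "dev-sandbox"},
    {"id": 2, "severity": "medium", "target": "billing-db"},
]
for a in sorted(alerts, key=triage_score, reverse=True):
    print(a["id"], triage_score(a))
```

Note that the medium-severity alert against the billing database outranks the high-severity alert against a sandbox: context, not raw severity, drives the ordering.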
SIEM (Security Information and Event Management) tools can help, combining information from multiple sources and correlating it with the alerts raised by security tools to give a holistic picture of events. Increasingly, SIEM tools can also incorporate data such as Threat Intelligence (TI) feeds and data from vulnerability scans to help identify attacks against vulnerable systems.
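A minimal sketch of that enrichment step might look like the following. The IP addresses, hostnames, and CVE identifier are illustrative assumptions (the IPs come from the documentation-reserved 203.0.113.0/24 range), not real indicators:

```python
# Illustrative SIEM-style enrichment: before an analyst sees an alert,
# correlate its source IP against a threat-intelligence (TI) feed and
# attach the target's known vulnerabilities from a scanner's results.
# All IPs, hostnames, and CVEs here are made-up assumptions.

TI_FEED = {"203.0.113.7", "198.51.100.22"}       # known-bad IPs from a TI feed
VULN_HOSTS = {"web-01": ["CVE-2017-5638"]}       # from a vulnerability scan

def enrich(alert: dict) -> dict:
    """Return the alert with added context for faster, more accurate triage."""
    alert["known_bad_source"] = alert["src_ip"] in TI_FEED
    alert["target_vulns"] = VULN_HOSTS.get(alert["target"], [])
    return alert

a = enrich({"src_ip": "203.0.113.7", "target": "web-01"})
print(a["known_bad_source"], a["target_vulns"])
```

An alert from a known-bad source against a host with an unpatched vulnerability is plainly worth an analyst’s time; the same alert without either flag might reasonably wait.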
AI-based security tools are following a similar approach, typically ingesting data from multiple sources to give security analysts greater context around the security events that are detected, and even enabling automated response to many threats – which frees security teams to focus on more serious threats and to proactively hunt for evidence of intrusions.
The Need for Definitive Evidence
Improving the context around security alerts helps security teams reduce the “noise” and focus on the important threats. But investigating alerts still takes far too long, because assembling the relevant data to reconstruct events is slow – and investigations are often inconclusive due to lack of data.
This is where full packet capture data can help. Packets provide an indelible record of what an attacker does while they are on the network. Unlike log files, which can be deleted or doctored by an attacker, packet data can be recorded without an attacker knowing it is happening. In the event of data being exfiltrated by an attacker, stolen data can be accurately identified from the packets – allowing security teams to be certain what was taken and who was affected.
In addition, recorded network history can be “played back” to detection tools to look for evidence of past intrusions. This extends teams’ investigative capability, allowing them to conduct systematic threat hunting or to scan for evidence of “zero-day” attacks that may have occurred before new detection rules were available.
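The playback idea can be sketched in a few lines: re-scan stored traffic records against indicators of compromise (IoCs) published after the traffic was recorded. The record fields, hostnames, and IoC values below are illustrative assumptions standing in for a real capture store and TI report:

```python
# Hypothetical "playback" sketch: re-scan historical connection records
# against IoCs that were published only after the traffic was recorded,
# surfacing intrusions that predate the detection rules.
# Record fields and IoC values are illustrative assumptions.

NEW_IOCS = {"evil-c2.example", "203.0.113.66"}   # from a freshly published TI report

history = [
    {"ts": "2017-03-01T10:02:11", "host": "pc-14", "dest": "update.vendor.example"},
    {"ts": "2017-03-03T23:41:57", "host": "pc-14", "dest": "evil-c2.example"},
]

def retro_scan(records, iocs):
    """Return historical records whose destination matches a new IoC."""
    return [r for r in records if r["dest"] in iocs]

for hit in retro_scan(history, NEW_IOCS):
    print(hit["ts"], hit["host"], "->", hit["dest"])
```

The key point is that the scan runs against traffic recorded weeks earlier: without retained history there is nothing to play the new rules against.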
Accelerating Investigation and Response
The real benefit of network history comes from integrating it into existing security solutions such as monitoring tools or SIEMs. This allows analysts to pivot directly from an alert to the related recorded packets, streamlining the investigation process and reducing investigation times from hours or days to minutes.
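That pivot is, at its core, a filter over recorded traffic by the alert’s connection details and timestamp. The following is a minimal sketch under assumed field names and an in-memory stand-in for a packet store, not any vendor’s actual API:

```python
# Sketch of the alert-to-packets pivot: given an alert's endpoints and
# timestamp, pull only the matching recorded packets for investigation.
# Field names and the in-memory "capture" are illustrative assumptions.

def pivot(packets, alert, window_s=60):
    """Select packets between the alert's endpoints within +/- window_s seconds."""
    return [
        p for p in packets
        if {p["src"], p["dst"]} == {alert["src"], alert["dst"]}
        and abs(p["ts"] - alert["ts"]) <= window_s
    ]

capture = [
    {"ts": 1000, "src": "10.0.0.5", "dst": "203.0.113.9"},
    {"ts": 1030, "src": "203.0.113.9", "dst": "10.0.0.5"},  # reply direction
    {"ts": 5000, "src": "10.0.0.5", "dst": "10.0.0.8"},     # unrelated flow
]
alert = {"ts": 1010, "src": "10.0.0.5", "dst": "203.0.113.9"}
print(len(pivot(capture, alert)))
```

Matching on the unordered endpoint pair captures both directions of the flagged conversation, which is exactly what an analyst needs to reconstruct what happened.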
Context and Evidence: The Ultimate Investigation Toolkit
Enriching security tools with context and definitive evidence is a proven recipe for reducing the risk of security breaches. Just as DNA and CCTV have revolutionized criminal investigations in the physical world, packet capture delivers definitive evidence that can revolutionize the cybersecurity investigation process.
And, when a breach occurs, having packet data at hand ensures organizations can quickly and accurately assess the damage, identifying who is affected and responding appropriately.
About the Author
John Attala is the Director – North America for Endace, a world leader in high-speed network monitoring and recording technology. As the North America sales leader, John has played a pivotal role in launching and building Endace’s network monitoring business within North America. He has more than 20 years’ experience in selling networking and security solutions to Fortune 1000 companies and government accounts—bringing a deep understanding of the market, delivering a consultative, solution selling approach to solve complex problems and improving network security across the globe.