Prevention effectiveness – the phishing threat
By Andrew B. Goldberg, Chief Scientist, Inky Phish Fence
Phishing emails can take many forms, from massive email blasts valuing quantity over quality, to spear phishing and Business Email Compromise (BEC) attacks custom-tailored to maximize the probability of success. Regardless of form, all phishing emails are designed to trick the recipient into taking some action that hurts them or benefits the attacker.
Phishing is currently a massive threat and growing larger. According to Symantec, over half of all emails are spam and IBM claims that the number of spam emails increased 4x in 2016. In 2016, 1 in every 131 emails contained malware and over two-thirds of installed malware was delivered via email attachments.
While the phishing threat is a well-known and common focus of cybersecurity training, phishing attacks are still very effective. On average, 30% of phishing emails are opened by their intended recipient, and 12% of recipients will click on a malicious link or open a malicious attachment from a phishing email. As a result, an estimated 95% of successful cyber attacks targeting enterprises start as a spear phishing email.
The potential business impact of clicking on a phishing email can be significant. Obviously, phishing emails can be a vector for malware infections or a source of data loss, but financial and regulatory risks are also potential concerns. On average, a successful phishing attack costs an organization 1.6 million USD, which is a pretty big hit to take just because someone clicked a link or opened an attachment. With the introduction of the General Data Protection Regulation (GDPR) in Europe, the impact of a data breach has increased with stiff penalties for the loss of sensitive customer or employee data.
With all of the available cybersecurity training out there, it seems like phishing emails should no longer be an issue. The majority of phishing emails rely on commonly known attack vectors like malicious links and attachments containing malware. These types of attacks can be easily prevented by users willing to take the time to carefully inspect each link and attachment before clicking or downloading. However, no one has the time to put this amount of effort into each and every email that they receive. Here, we’ll talk about the tradeoffs between usability and security in preventing phishing attacks, and how anti-phishing software can be designed to improve security without harming usability.
Usability and phishing prevention
Usability is commonly considered to be the enemy of security. In general, being secure means taking extra steps to avoid falling for different attacks. This takes time and effort, which could otherwise be spent on other tasks. This is especially true of phishing, where the best ways to prevent most phishing attacks are commonly known, but cybersecurity guidance is rarely followed. This is because the sheer volume of emails that the average person receives in a day means that dealing with each one properly would take up a significant amount of time that could be used to actually do one’s job.
In this section, we’ll discuss the results of human factors and usability studies. Each one has a direct impact on phishing prevention, and can be used to design more effective and efficient anti-phishing software.
Worldviews and decision-making
Everyone has a way they view the world and their own preconceptions about “how the world works”. In general, it’s easier to make decisions that fit with our worldviews. Since employees are generally busy and tired, this means that when they are faced with a decision, they’re unlikely to think it through and more likely to just “go with their gut”.
This can have a serious impact on cybersecurity in general and phishing in particular. Phishing emails are specifically designed to look as legitimate as possible. This means that by default, most people will believe them and click on malicious links or download infected attachments. Most anti-phishing training is focused on getting people to take the steps necessary to protect themselves against phishing attacks (verifying senders and links, not enabling macros on documents, etc.). However, since this takes time and effort (which employees have in short supply), phishing is still an effective strategy for attackers. A key aspect of a phishing protection strategy is destroying the appearance of legitimacy of phishing emails (through warnings, etc.) so that users make the correct decision by default.
Fitts’s Law
Fitts’s Law is a scientific law based on the study of human movement. It states that the time it takes for a human to move to a given target grows with the distance to the target (from the starting location) and shrinks as the target gets wider. For example, it may take roughly equal time to accurately touch a narrow, near target and a wide, far target, but it will take longer to pinpoint a narrow, far target. This has been demonstrated to be true for both hand and eye movements.
Fitts’s Law is directly applicable to usability in general, and cybersecurity usability in particular. Since anti-phishing protection software isn’t perfect (some legitimate emails look a lot like spam and vice versa), the use of warning banners and reporting buttons are common anti-phishing techniques. Based on Fitts’s Law, these banners and buttons should be sized and placed to minimize the effort required for a user to read or click on them.
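As a rough illustration, the Shannon formulation of Fitts's Law (MT = a + b · log2(D/W + 1)) can be used to compare candidate layouts for banners and buttons. This is a minimal sketch; the constants `a` and `b` below are illustrative placeholders, since in practice they are fit empirically per device and user:

```python
import math

def movement_time(distance, width, a=0.1, b=0.15):
    """Estimate time (seconds) to acquire a target using the Shannon
    formulation of Fitts's Law: MT = a + b * log2(D/W + 1).
    a and b are device/user-specific constants (illustrative values here)."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A wide, nearby banner is faster to reach than a narrow, distant button.
near_wide = movement_time(distance=100, width=200)  # full-width banner at top of email
far_narrow = movement_time(distance=800, width=20)  # tiny link tucked in a corner
print(f"{near_wide:.3f}s vs {far_narrow:.3f}s")
```

The comparison makes the design guidance concrete: enlarging a warning banner or moving a "Report Phishing" button closer to where the user is already looking directly lowers the predicted acquisition time.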
Eye-tracking scanning patterns
Eye tracking is a common area of research in usability and ergonomics research. Everyone wants to know how people instinctively look at a page so that content can be placed for maximum impact. Based on this research, several common scanning patterns have been identified.
The F-shaped scanning pattern is a common one for web content. Users typically read the first few lines of an article, a line or so further down, and skim down the left side of the page. The read sections form the shape of the letter F (hence its name). Other common patterns include layer-cake (reading only headings), spotted (skimming the page briefly looking for links, bold, etc.), and committed (reading all content on a page).
Knowledge of scanning patterns is useful for improving usability for anti-phishing products because it helps to determine the optimal place to put warnings and other informative content. In general, most scanning patterns involve reading the top of the page (which is why warning banners are commonly placed there). Differentiating warnings and reporting buttons by size, color, bolding, etc. improves the probability that they will be noticed by users. In phishing, where a single click can cost a company millions, improving the visibility of banners by any possible means is important.
Improving phishing protection through increased usability
Usability is commonly considered to be in direct conflict with security. The only truly secure computer is one that’s unplugged and locked in a vault somewhere with the key thrown away. And, even then, there is still the possibility of safecrackers or tunneling. Regardless, you can’t achieve perfect security without rendering a system unusable, which means that some tradeoffs must be made to achieve a balance where a machine is capable of doing its job in an acceptably efficient manner while not decreasing security any more than is strictly necessary.
Protection against phishing emails is one of the most well-known areas where usability has to be weighed against security. Most phishing attacks are based on commonly known methods that can be defeated with a little bit of effort from the recipient. For example, malicious links don’t work if the recipient visits the target site directly (either by typing in the URL or finding it via a web search engine) and then navigates to the relevant page using internal links on the site. However, most people don’t want to take the time and effort to do this for every email, so standard phishing attacks remain effective.
Effective anti-phishing software will provide increased protection to users without negatively impacting the usability of their email. It is not uncommon for people to receive hundreds of emails in a day, so even small delays add up. In this section, we’ll discuss a few ways that anti-phishing software can improve security without negatively impacting user productivity.
Painless phishing reporting
When an end user identifies a potential phishing email, their default response is to ignore or delete it and move on with their day. By identifying the potential threat and avoiding it, they protect their organization against the attack with minimal disruption to their workflow.
However, while it is good that a particular user is sufficiently well-trained, vigilant, and paranoid to identify and respond to the potential threat, the same may not be true of all potential targets within the organization. To reach the target’s Inbox, the phishing email needed to evade all of the organization’s defenses and may have reached the Inboxes of other members of the organization.
In order for an organization’s network security team to respond properly to the threat, they need to be aware of it. Someone needs to report the phishing email, and quickly. According to a study by Verizon, the average time between a phishing attack’s launch and the first person clicking on a malicious link within the phishing email is only 82 seconds. The sooner the network security team can respond to a threat, the better an organization’s chances of mitigating it before damage is done.
In order to convince end users to report, rather than simply deleting phishing emails, it is key that reporting be quick and easy. Including an obvious “Report Phishing” button or link within an email or email client that takes care of forwarding the email to the security team and deleting it from the user’s Inbox is a great solution. It’s important to take into account Fitts’s Law mentioned above: the button should be placed and sized so that reporting an email does not require any more time or effort than deletion or marking as spam. By making reporting as easy as deleting, an organization can improve its protections against phishing attacks.
Informative threat banners
A banner across the top of an email indicating that the email is suspicious is a common method of protecting against phishing attacks. However, how effective is a generic banner that is attached to any email regardless of the suspected threat type and threat level?
As mentioned above, it is more difficult for humans to make decisions that are in conflict with their view of “how the world works”. If an email is obviously spam, a warning banner makes sense to the user for that email since, in their world, this email is suspicious and should be marked as such. Over time, their worldview includes the fact that their anti-phishing program distrusts certain emails based on certain attributes like embedded links. Since the program only tells them something that they already know (that a spam email is suspicious) and nothing more, the warning becomes background noise.
However, if an email is a well-crafted phishing or spear phishing attack, the target is more likely to believe in the authenticity of the email. If they’ve grown accustomed to ignoring warnings since they only appear on obvious spam (and possibly wrongly on legitimate emails), they’re more likely to dismiss the warning as their anti-phishing program being overzealous and making another mistake.
In order for anti-phishing warnings to be effective, they need to resist users becoming desensitized to them. The best way to accomplish this is to design them so that they provide useful information to the user, and reduce the cognitive load of trying to determine whether or not an email should be trusted. Rather than just saying that an email is suspicious, an anti-phishing program should tell the user why that label is applied and provide enough information for the user to make a decision on their own. If an email contains malicious links, the program should say so and warn the user not to click on them.
If an email looks like a Business Email Compromise (BEC) attack, explain what a BEC attack is and why this email looks like one. Rather than acting as an oracle labeling emails as malicious or not, an anti-phishing program should provide users with the information to make the decision whether or not to trust on their own. These banners should also be placed in a way that maximizes the probability that the user will see them while just scanning the page (based on the human factors research presented earlier).
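The idea of threat-specific, explanatory banners can be sketched in a few lines. This is a hypothetical illustration (the threat categories and banner copy below are invented for the example, not Inky's actual taxonomy): instead of one generic warning, each detected threat type maps to copy that tells the user what was found and what to do about it.

```python
# Hypothetical threat categories mapped to explanatory banner copy.
BANNER_TEMPLATES = {
    "malicious_link": ("This email contains a link to a site that does not "
                       "belong to the apparent sender. Do not click it."),
    "bec": ("This looks like a Business Email Compromise (BEC) attempt: it "
            "appears to impersonate a trusted contact and requests an urgent "
            "action. Verify the request through another channel."),
}

def build_banner(threat_type, detail=""):
    """Return threat-specific banner text instead of a generic warning."""
    base = BANNER_TEMPLATES.get(threat_type, "This email looks suspicious.")
    return f"\u26a0 {base} {detail}".strip()

print(build_banner("bec", "The sender's domain was registered two days ago."))
```

Because each banner explains the specific evidence, users learn what the tool knows rather than tuning out a one-size-fits-all warning.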
One of the primary threats with phishing emails is malicious links. It is not uncommon for phishers to take a legitimate email from a company and then change the links from the legitimate site to a site under their control. Since all visible parts of the email are legitimate, the look and feel of the email matches the recipient’s expectations of an email from that company and they are more likely to accept it as fact. An untrained user would be very likely to click on the links or buttons in the email.
Checking the validity of links within emails is a central part of most anti-phishing training. Using mouse pointer hovering, a user can confirm that a link’s target is in line with their expectations, i.e. a link pointing to a web page under a domain belonging to the message sender. Phishers know this and take advantage of the fact that the recipient is likely in a hurry and not scrutinizing links carefully (if at all). By registering similar-looking domains (like arnerica.com instead of america.com) or plausible-sounding ones (like company-customercare.com), phishers can increase the probability that a malicious link will be accepted as legitimate.
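Detecting such lookalike domains is one place where fuzzy string matching helps. The sketch below uses Python's standard-library `difflib` and an invented watch list; a real product would also normalize homoglyphs (e.g. "rn" vs "m", Cyrillic lookalike characters) and consult registration data:

```python
from difflib import SequenceMatcher

# Hypothetical watch list of frequently impersonated domains.
KNOWN_BRANDS = ["america.com", "paypal.com", "microsoft.com"]

def lookalike_score(domain, brand):
    """Similarity ratio in [0, 1]; 1.0 is an exact match."""
    return SequenceMatcher(None, domain, brand).ratio()

def flag_lookalikes(domain, threshold=0.85):
    """Return watched brands this domain closely resembles without matching."""
    return [b for b in KNOWN_BRANDS
            if domain != b and lookalike_score(domain, b) >= threshold]

print(flag_lookalikes("arnerica.com"))           # close to america.com
print(flag_lookalikes("company-customercare.com"))  # plausible-sounding, but no near match
```

Note the second case: a merely plausible domain like company-customercare.com is not similar to any watched brand, so edit-distance checks must be combined with contextual signals from the email itself.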
Common cybersecurity guidance for dealing with malicious links is for recipients to visit the supposed sender’s legitimate site (by typing in the URL or using a web search engine) and then navigate to the target webpage using internal links on the site. However, this is time-consuming and most people will just click the link if it looks good to them.
One way that anti-phishing programs can improve security without impacting usability is to make protection against malicious links transparent to the user. Using fuzzy string matching against domains known to be targeted by phishers, contextual clues from the email, or a URL blacklist, a program can determine whether a domain is legitimate. An inline anti-phishing solution can then rewrite the malicious URL to a benign equivalent and guarantee that a user’s click will not take them to a phisher’s site. By checking both at the time the email is received and at the time it is opened, the program can use the most up-to-date information and provide real-time protection.
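The rewriting idea can be sketched as follows. This is a simplified illustration, not a production implementation: `protect.example.com` is a hypothetical redirect gateway, the blocklist is a stand-in for a live threat feed, and real HTML email should be parsed rather than regex-matched.

```python
import re
from urllib.parse import quote, urlparse

BLOCKLIST = {"arnerica.com"}  # stand-in for a continuously updated threat feed
PROTECT_GATEWAY = "https://protect.example.com/redirect?url="  # hypothetical service

def rewrite_links(html):
    """Rewrite every href so that clicks route through a gateway, which can
    re-check the destination at click time (time-of-click protection)."""
    def _rewrite(match):
        original_url = match.group(1)
        return f'href="{PROTECT_GATEWAY}{quote(original_url, safe="")}"'
    return re.sub(r'href="([^"]+)"', _rewrite, html)

def is_blocked(url):
    """Gateway-side check of a URL's host against the current blocklist."""
    return urlparse(url).hostname in BLOCKLIST

email_html = '<a href="http://arnerica.com/login">Sign in</a>'
print(rewrite_links(email_html))
```

Because the gateway evaluates the destination when the user actually clicks, a link that was clean on delivery but later weaponized is still caught, which is the point of checking at both receipt time and open time.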
Choosing usable phishing protection
The main takeaway of this article is that users are efficient (i.e. lazy) and are prone to falling for phishing schemes because phishers take advantage of this fact. Most phishing attacks are based on techniques with a known defense; however, properly defending against phishing takes time and effort, which could be spent on other things. The key to effectively protecting against phishing attacks is making doing the “right” thing easier than doing the “wrong” thing.
In this article, we discussed usability research and how it relates to protecting against phishing attacks. Humans are predictable (a fact that phishers take advantage of) and anti-phishing products can take advantage of this fact to improve their protections. By designing warning banners and reporting links to maximize visibility and usability, an effective anti-phishing solution can improve the level of protection that it offers.
Beyond suggestions based on usability research, three concrete methods of providing highly usable phishing protection were also addressed. In order to maximize protection against phishing attacks, it is important to choose a phishing protection product that provides all of this functionality. By removing as much of the burden as possible from the user, effective anti-phishing software increases an organization’s security and the efficiency of its users.
About the Author
Andrew B. Goldberg is Chief Scientist at Inky Phish Fence. He leads development at Inky, an enterprise communications security platform, working to protect corporate email from new breeds of sophisticated phishing attacks. Andrew can be reached online at @inkymail and at our company website https://www.inky.com