
When blaming the user for a security breach is unfair – or just wrong

Dec 05, 2022 | 8 mins
Business IT Alignment | Employee Experience | Employee Protection

Training non-tech-savvy users to recognize phishing and other credential-based attacks is essential, but expecting employees to man the front lines against intrusions is a mistake, experts say. Harmony between staff psychology and frictionless security technology is the ideal to shoot for.

Credit: Thinkstock

In his career in IT security leadership, Aaron de Montmorency has seen a lot — an employee phished on their first day by someone impersonating the CEO, an HR department head asked to change the company’s direct deposit information by a bogus CFO, not to mention multichannel criminal engagement with threat actors attacking from social media to email to SMS text.

In these cases, the users almost fell for it, but something didn’t feel right. So, they manually verified by calling the executives who were being impersonated. De Montmorency, director of IT, security, and compliance with Tacoma, Washington-based Elevate Health, praises the instincts that stopped the attacks from causing financial or reputational damage. Yet, he contends that expecting users to be the frontline defense against rampant phishing, pharming, whaling, and other credential-based attacks increasingly taking place over out-of-band channels is a recipe for disaster.

“Of course, train your staff. The human element is the weakest link here. But don’t rely on training alone — or technology alone — to protect the organization. What you’re looking for is a balance,” de Montmorency says.

Protecting out-of-band usage

As attackers go after employees over out-of-band channels such as Zoom, Slack, and Teams, user education and technical controls must follow. Enterprises need visibility into what their users are clicking, downloading, uploading, or linking to in what can be dozens of collaborative platforms, he adds. His company uses SafeGuard Cyber to monitor east-west traffic and detect malicious activities being attempted over these channels. The tool is agentless and only requires a single user sign-on to access their platforms of choice, making it frictionless to users, which is one of the key criteria in getting user buy-in to necessary security controls.

A recent report by researchers at the University of Wisconsin–Madison dissects the many ways business collaboration platforms (BCPs) can be leveraged for app-to-app delegation attacks, user-to-app interaction hijacking, and app-to-user confidentiality violations. In their tests, researchers were able to send arbitrary emails on behalf of victims, merge code requests, launch fake video calls with loose security settings, steal private messages, and maintain a malicious presence even after app uninstallation. Using homemade scraping tools, the researchers estimated that 1,493 (61%) of the 2,460 Slack apps analyzed and 427 (33%) of the 1,304 Microsoft Teams apps analyzed were vulnerable to delegation attacks. Additionally, 1,266 (51%) of Slack apps use slash commands, which are vulnerable to both user-to-app and app-to-user violations.
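One baseline defense against spoofed slash-command and app-to-user traffic is verifying that each incoming request really came from the platform. The sketch below follows Slack's documented request-signing scheme (a `v0` HMAC-SHA256 over the timestamp and raw body, checked with a constant-time comparison); the function and variable names are illustrative, not from the report.

```python
import hashlib
import hmac
import time

def verify_slack_request(signing_secret: str, timestamp: str,
                         body: str, received_sig: str) -> bool:
    """Return True only if the request is fresh and correctly signed."""
    # Reject requests older than 5 minutes to blunt replay attacks.
    if abs(time.time() - int(timestamp)) > 60 * 5:
        return False
    # Slack signs "v0:<timestamp>:<raw body>" with the app's signing secret.
    basestring = f"v0:{timestamp}:{body}".encode()
    expected = "v0=" + hmac.new(
        signing_secret.encode(), basestring, hashlib.sha256
    ).hexdigest()
    # Constant-time comparison avoids leaking signature bytes via timing.
    return hmac.compare_digest(expected, received_sig)
```

A handler that drops any slash-command request failing this check never has to trust the command text itself, which closes off the simplest user-to-app spoofing path.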

These trends show how social engineering attacks have moved to where employees are working over collaborative platforms. That means security awareness education and security controls need to work in tandem to protect users, their devices, and their credentials from these ever-evolving threats, no matter where they’re working from. And they need to do so in a way that enables collaboration, rather than blocking it, according to experts.

Security as a matter of psychology

“In the past, we’ve treated our employees as extensions of our computers. We would lay down the law: ‘No, you cannot go to this site or use this social platform.’ But people are still human, and their number one priority is getting their work done, so they will go around draconian rules and blocking if they have to,” says Russell Spitler, co-founder and CEO of Nudge Security, which recently commissioned a study of 900 users titled Debunking the Stupid User Myth.

In the study, 67% of participants said they would not comply with these types of blocking interventions and would instead look for a workaround if blocking got in the way of doing their jobs. Conversely, the report states that if organizations empower their users to make more educated decisions, they could achieve twice the compliance rate of blocking interventions.

“If you can remove the psychological reasons not to do something with respect to security, you can hope the person will be an ally,” says Dr. Aaron Kay, a professor of management and psychology and neuroscience at Duke University who advised on the report. 

Kristofer Laxdal, CISO of Mississauga, Ontario-based Prophix, a financial performance management platform with more than 500 employees, agrees that punitive training methods and restrictive security controls cause more harm than good, especially as phishing, pharming, whaling, and other social engineering attacks aimed at leveraging privileged access only get worse. He also believes the industry is at an inflection point where security can improve the user experience rather than get in its way.

Make security frictionless for users

For example, Laxdal cites zero trust and passwordless computing as technological controls that take the onus off the employee while improving security and reducing risk. Onerous controls such as screen locks and timeouts, on the other hand, are causing users to covertly install mouse jigglers and keypress generators to keep their screens from locking up when their computers are idle because they don’t have time to keep logging in, he adds.

“Security practitioners have thrown in layers of technical controls and security awareness training. Yet, phishing, IP theft, pharming, and ransomware have gone on far too long,” he says. “So, while there is indeed a human component to security, the controls themselves need to be frictionless, because users are tired of inputting multiple logins to multiple systems. They’re also experiencing multifactor authentication fatigue. Technical controls need to be put in place to remove the issues that the end user is experiencing.”

Know thy users

The best place to start is understanding employee roles, resources, and access habits, Laxdal says. For example, financial workers should understand the specific risks to business accounts and social engineering attempts such as BEC scams that may target them. Development departments will have different risk areas to focus on; for example, their IP on hosted servers or malware hidden in public open-source libraries. HR, on the other hand, is dealing with PII (financial, banking, and healthcare information) that shouldn’t be shared over any channel, particularly given that anyone can impersonate a CEO and request files or transfers.

“All of these vectors are being used globally against information assets and are overwhelmingly credential-based attacks that are perpetrated through phishing. Users need to understand why and be part of that discussion with real-world examples,” Laxdal explains. “Sit down with your employees, ask about their typical day and access requirements. And understand each functional area of the business so you can design controls and training for their business.”

His company uses Ninjio, which combines behavioral analytics with security awareness training done in an anime style that, he says, is made compelling and engaging by the use of real-world hacks to show users what could happen if a user takes a dangerous action. Nudge Security also deploys analytics to identify when users stray off their approved platforms, engages the user by asking questions, and even assists them to securely set up the new platform with two-factor authentication and other secure enablement.

“If you want happy, compliant workers, they need to have agency in their decisions and feel that they are trusted and respected by their organizations,” adds Kay. “Deliver messages that facilitate that feeling. Be transparent about the reasons for your programs. And don’t frustrate users by ordering them to do things they don’t understand.”

Trust degrades over time

But just what is trust? What do you base it on, and how do you apply that to users, asks Winn Schwartau, an early infowar and security awareness pioneer. “I trust them not to steal from me based on what criteria? I trust that they’ve got the best interest of my company, based on what? I trust them not to click on malicious links or attachments based upon what? Because I trained them? I’ve been in this business a long time and I can tell you training doesn’t move the bar as much as people would like to believe.”

Schwartau believes that user education and training should be part of a holistic security program that starts with identifying critical assets, both physical and electronic, limits user access to only those systems they need, and applies "detection-in-depth" monitoring for abuse. He suggests employing a high-speed OODA (observe, orient, decide, act) loop for examining user behavior and aligning technical controls.
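An OODA-style monitoring loop of the kind Schwartau describes could be sketched as: observe an event, orient it against what is routine for that user, decide whether to alert, and act by recording it for the next cycle. The event types, scoring rule, and threshold below are illustrative assumptions, not Schwartau's specification.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Event:
    user: str
    action: str  # e.g. "login", "oauth_grant", "bulk_download"

# Actions treated as routine in this toy baseline (an assumption).
ROUTINE = {"login", "read", "send_message"}

def orient(history: Counter, event: Event) -> float:
    """Score how unusual this action is for this user (0 = routine)."""
    if event.action in ROUTINE:
        return 0.0
    # Rare, privileged actions score higher the less often they've been seen.
    seen = history[(event.user, event.action)]
    return 1.0 / (1 + seen)

def ooda_step(history: Counter, event: Event, threshold: float = 0.5) -> str:
    score = orient(history, event)                         # Orient
    decision = "alert" if score > threshold else "allow"   # Decide
    history[(event.user, event.action)] += 1               # Act: feed next loop
    return decision
```

The point of the loop structure is speed: each observation updates the baseline immediately, so the next decision already reflects it, which is what lets the defender's cycle keep pace with an attacker's.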

This, he adds, can help determine a soft initial level of trust. But scammers and social engineers continue to hone their tactics, so CISOs need to adapt and uplevel their user education and technical controls. Over time, trust goes down and risk increases in any environment, which he mathematically details in his book Analogue Network Security. “In many ways, it’s a total paradox,” he adds. “Employees are your greatest asset. And yet, employees are also your weakest link.”
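Schwartau's actual mathematics is in his book; as a purely illustrative stand-in for the qualitative claim that unverified trust decays over time, an exponential half-life model looks like this (all parameter names and values are assumptions for the sketch):

```python
def trust_at(t_days: float, initial_trust: float = 1.0,
             half_life_days: float = 180.0) -> float:
    """Toy model: trust halves every `half_life_days` unless re-verified.

    Illustrative only -- not Schwartau's formulation from
    Analogue Network Security.
    """
    return initial_trust * 0.5 ** (t_days / half_life_days)
```

Under any model of this shape, the operational consequence is the same: a trust decision made once (at hiring, at onboarding, at first login) must be periodically re-established, which is the intuition behind zero-trust architectures mentioned earlier.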