Hacked Opinions II

Hacked Opinions: The legalities of hacking – Jen Ellis

Rapid7's Jen Ellis talks about hacking regulation and legislation


Jen Ellis, from Rapid7, talks about hacking regulation and legislation with CSO in a series of topical discussions with industry leaders and experts.

Hacked Opinions is an ongoing series of Q&As with industry leaders and experts on a number of topics that impact the security community. The first set of discussions focused on disclosure and how pending regulation could impact it. This week CSO is posting the final submissions for the second set of discussions examining security research, security legislation, and the difficult decision of taking researchers to court.


CSO encourages everyone to take part in the Hacked Opinions series. If you have thoughts or suggestions for the third series of Hacked Opinions topics, or want to be included as a participant, feel free to email Steve Ragan directly.

What do you think is the biggest misconception lawmakers have when it comes to cybersecurity?

Jen Ellis, Vice President of Community and Public Affairs, Rapid7 (JE): Lawmakers often have a strong focus on increasing penalties and law enforcement authorities as a means of reducing cybercrime.

This approach rests on the assumption that the threat of negative repercussions can deter criminal behavior. With cybercrime, however, this often isn’t true. Numerous studies have shown that people decide whether to commit a crime based primarily on the likelihood of being caught, not the severity of the penalty. Given that a great many cyberattacks come from overseas, often from non-extradition countries, and sometimes with state sponsorship, the likelihood that harsh prison sentences will act as a deterrent is low.

An example of how this is playing out in practice is the current dialogue in Washington, D.C., around extending penalties for damage to critical infrastructure. An attack against critical infrastructure is certainly very serious; however, the kinds of actors who would undertake such an attack are unlikely to be swayed by the thought of going to prison.

Another challenge that faces lawmakers is the complexity and interconnectedness of our technical systems. It’s hard to legislate for a domain without clear boundaries and borders.

What advice would you give to lawmakers considering legislation that would impact security research or development?

JE: I would advise legislators to work with the security research community – include security professionals in the conversation. Explain the goals and challenges to them, and ask them to help find solutions. When each side shares its relevant domain expertise, and respects that of the other, we can best find a route forward that achieves the desired goal without causing unintended negative consequences.

If you could add one line to existing or pending legislation, with a focus on research, hacking, or other related security topic, what would it be?

JE: Ideally I’d like to add a definition of “authorization” to the Computer Fraud and Abuse Act, the main US anti-hacking law. The entire statute basically revolves around the concept of authorization – you’re either accessing a computer without authorization, or you are exceeding authorized access. Unfortunately, there is no definition or explanation provided, resulting in a huge amount of grey area and disagreement over what constitutes a violation of this law. It seems to me that people should be able to understand what a law is telling them not to do: you can’t ensure you don’t cross a line if there is no line. The challenge with creating a definition is that there is little agreement over what it should be.

As a result, I might be more inclined to propose removing a section of law, instead of adding a line. I’d like to strike section (g) of the CFAA as it is the part that authorizes civil action against CFAA violators.

Now, given what you've said, why is this one line so important to you?

JE: Given the ambiguity, noted above, over what constitutes a violation, enabling private entities to take civil action grants them an undue and dangerous amount of authority.

This is a particular problem because many technology providers use the threat of civil action under the CFAA to frighten away good-faith researchers. This can chill security research and, by extension, expose consumers to risk. Removing section (g) would not make the CFAA easier to understand and follow, but it would reduce the fallout from its lack of clear boundaries.

Do you think a company should resort to legal threats or intimidation to prevent a researcher from giving a talk or publishing their work? Why, or why not?

JE: There’s no simple answer to this, as it always depends on the details. Researchers disclosing findings in good faith should not be threatened by companies seeking only to protect their reputation and avoid pressure to issue updates.

On the other side, though, companies need recourse in situations where research is not undertaken in good faith – for example, when research is used as a means to extort the company.

What types of data (attack data, threat intelligence, etc.) should organizations be sharing with the government? What should the government be sharing with the rest of us?

JE: The government has access to a large amount of cyber threat information gathered through law enforcement investigations and national security efforts.

This information can help organizations understand how attacker methods are evolving and enable them to better defend against threats. It’s important that the government share this information in a timely and privacy-conscious way so organizations can continue to improve their security programs and defenses. Many organizations in the private sector also hold cyber threat information from the attacks they see against their own networks.

Sharing this information with each other similarly enhances the general level of security awareness and the ability of organizations to defend themselves. For example, the attack methodology we saw against Target last year was also used against a number of other retailers. Sharing information about what happened helped other retailers check whether they had appropriate mitigations in place, and investigate whether they had already been subject to the same type of attack.

In that example, as in many cases, the attackers were inside their victims’ networks for extended periods, so sharing the information early could have enabled investigators to catch attackers mid-attack and helped victims minimize the harm to their organizations.
