Contrast Security's Founder and CTO, Jeff Williams, talks about hacking regulation and legislation with CSO in a series of topical discussions with industry leaders and experts.

Hacked Opinions is an ongoing series of Q&As with industry leaders and experts on a number of topics that impact the security community. The first set of discussions focused on disclosure and how pending regulation could impact it. This second set of discussions examines security research, security legislation, and the difficult decision of taking researchers to court.

CSO encourages everyone to take part in the Hacked Opinions series. If you would like to participate, email Steve Ragan with your answers to the questions presented in this Q&A. The deadline is October 31, 2015. In addition, feel free to suggest topics for future consideration.

What do you think is the biggest misconception lawmakers have when it comes to cybersecurity?

Jeff Williams (JW): Lawmakers tend to think politically, which can lead to surprisingly unsophisticated thinking about systemic problems like cybersecurity, crumbling infrastructure, and global warming. The knee-jerk reaction to something unpopular is to create a legal regime that targets the bad actor. This might take the form of tort law, hacking back, financial penalties, and so on.

The misconception, the faulty assumption, is that we can accurately identify these bad actors. Even in highly publicized cases, like Sony and OPM, this so-called "attribution" problem is extremely labor intensive and doesn't produce compelling results. Given this, legislation targeting attackers is almost certainly ineffective political theater that won't actually protect anyone.

Even the focus on information sharing reeks of this bias. The concept is that if we can simply share information about attacks and respond faster, then we can win. It's a sort of military thinking.
But when the enemy is completely anonymous and untraceable, this strategy is doomed to failure.

Legislators would be much better served by focusing on encouraging better defenses. I think this is best achieved by creating visibility rather than creating liability for "less than rigorous" software development.

The idea is to fix the "asymmetric information" problem in the software market, and to encourage the market to produce strong code rather than attempt (and certainly fail) to legislate or regulate it.

What advice would you give to lawmakers considering legislation that would impact security research or development?

JW: Security research and development is a critical part of the cybersecurity ecosystem. Rather than worrying about the small amount of potential harm that this research might cause (which is real), lawmakers should focus on the enormous downside of preventing this research from being performed.

Currently, security research drives many of the processes that our companies and agencies use to keep themselves secure. Although there are only a small number of researchers, and they test only a tiny fraction of the software and devices on the market, their work pushes vendors and organizations alike to do better.

Preventing these researchers from doing their work would have a huge ripple effect, causing widespread insecurity across the country.

Again, the government should do everything it can to make security visible. This enables everyone, from producer to consumer, evaluator, buyer, and end user, to make informed decisions about what level of security they want.
Enabling the market to work is the only way to improve cybersecurity at a planetary scale.

If you could add one line to existing or pending legislation, with a focus on research, hacking, or other related security topics, what would it be?

JW: I'm going to focus on the "Breaking Down Barriers to Innovation Act of 2015," which attempts to fix some of the problems with the broad language in Section 1201 of the Digital Millennium Copyright Act (DMCA). That section prevents anyone from "circumventing" any "technological measure" that "effectively" controls access to a copyrighted work, and prohibits selling hardware or software tools that can break or bypass DRM.

The problem with this rule is that it's frequently used to threaten security researchers and to prevent innovative (if unexpected) uses of technology.

This proposed legislation helps to fix some of these problems, but it doesn't go nearly far enough. Fundamentally, I believe the public has an incredibly important interest in the software that they use. Yet they are prevented from checking that software to see if it is something they can trust.
The recent Jeep Cherokee and Volkswagen incidents are incredibly compelling reasons to empower everyone to do their own analysis.

So I would love to add this line to the "Breaking Down Barriers to Innovation Act of 2015":

"Notwithstanding anything in the DMCA, the prohibition against circumventing a technological measure shall only apply to circumvention carried out in furtherance of infringement of a protected work."

Now, given what you've said, why is this one line so important to you?

JW: This is important because it would allow anyone, researchers and the general public alike, to do their own security verification of the software that they are trusting everything (finances, healthcare, privacy, defense, even happiness) to.

The DMCA was designed primarily to prevent people from stealing content from DVDs. Making this change wouldn't affect the law's ability to be used for its intended purposes.

Do you think a company should resort to legal threats or intimidation to prevent a researcher from giving a talk or publishing their work? Why, or why not?

JW: Of course not. There's very little chance that this type of action will either 1) prevent security research from being revealed, or 2) recoup any losses related to the security research.

It is, however, a near certainty that the issue will generate a great deal of negative PR, so that even more people eventually find out about it. You also risk a retaliatory strike by groups like Anonymous and LulzSec, such as the one they launched against George Hotz (Geohot) after he hacked the Sony PlayStation and Sony pursued him legally.

If a researcher has discovered something, then you should respond as though the "bad guys" already know about it. Take responsibility, fix the issue, and get great PR for your security program. Use this as an opportunity to build trust with your clients.

What types of data (attack data, threat intelligence, etc.)
should organizations be sharing with the government? What should the government be sharing with the rest of us?

JW: The short answer is, "it doesn't matter." The idea here is that government-facilitated sharing of information about attacks will enable a sort of "herd immunity."

Well, that might work if we were actually any good at detecting attacks, but it turns out most attacks go on for months or years before being detected. The vast majority of these attackers are never going to be identified by the first company they hit, which means that sharing information about the attack won't help.

There's nothing wrong with sharing a little information, but don't think that this is going to make us any more resilient against attack. The information being shared consists of IP addresses and domains of suspected attackers and compromised computers. According to the 2015 Verizon Data Breach Investigations Report (DBIR), a lot of this information comes from honeypots.

These are systems placed on the Internet to trick hackers into attacking them, thereby gathering information about their sources and methods. Real attackers are likely to focus their attacks rather than blindly scan the Internet, so their information is unlikely to show up in the honeypots, and therefore won't get shared.

All the arguing about information sharing legislation is not a total waste, but almost. The crisis isn't really that we're not sharing information: the crisis is that we have huge numbers of systems that are basically unprotected against cyber attack. We need to create a software market that rewards organizations that put appropriate protections in place. That's a problem we are never going to solve with information sharing.

But there is a role for government in fixing the dreadful state of the software market. Currently, security is an afterthought. We need to change the incentives so that software companies are encouraged to produce secure code.
I'm not a fan of liability or taxation regimes. How about some legislation that requires companies to disclose some basic facts about the security of their software?

Things like: Was security testing done? Were developers trained in security? Are basic defenses in place? Are components free of known vulnerabilities? Many other industries have labels and data sheets that disclose this kind of information. Why not software? This is a powerful, non-intrusive way for the government to help fix the security of our nation's infrastructure.