The U.S. Food and Drug Administration (FDA) has, for the second time in two years, issued recommendations to improve the security of connected medical devices. Not mandates – recommendations.

Which immediately raises the question: Will anything non-binding put enough pressure on manufacturers to spend the time and money it will take to improve device security? That, as is frequently said, remains to be seen.

The FDA issued what it called “guidance” on the “postmarket management of cybersecurity for medical devices” at the end of last year. This follows “premarket” guidance that the agency issued two years earlier.

And while there is no legal requirement to implement them, some experts say the recommendations will still have the power to force change, noting that just because they are not mandates doesn’t mean they can’t have significant legal impact. All it would take to confirm that is a conversation with a lawyer about the implications of “best-practice” recommendations in a lawsuit over harm to a patient from a device that was hacked because of poor security.

But then there is Bruce Schneier, CTO of Resilient Systems and a privacy and encryption expert, who wondered in a blog post shortly after the postmarket guidance was published what the point was. Schneier, who has called for government regulation of the entire Internet of Things (IoT) industry, wrote that the guidelines contained “nothing particularly new or interesting; it reads like standard security advice: write secure software, patch bugs, and so on. Note that these are ‘non-binding recommendations,’ so I'm really not sure why they bothered.”

But that got some immediate blowback in his reader comment section.
A commenter identified as “Doug” said he had been in the device industry for 30 years, and that while the law regulating medical devices would not change, “the interpretation and enforcement will.

“By knowing what the FDA is thinking, we can adapt our design, validation, and manufacturing efforts to meet these expectations. Guidance documents drive much of what we do,” he wrote.

The agency itself, in a statement to CSO, said the guidance, while nonbinding, is an interpretation of regulations, which are binding. It said that failure to follow the agency's Quality System Regulation (QSR) “adulterates” devices, and can result in their “seizure or injunction.”

Several experts agreed that the guidance is worthwhile and should push manufacturers in the right direction. But none of them thinks that it is time, or will soon be time, for users of such devices to relax.

Obviously the stakes are high – a hack of an implantable or other connected device can cause much more harm than the theft of data or identity. It could kill. And that risk, while there are no reports yet of deaths caused by it, is real. Medical devices have been rigorously designed to work properly for years.
But most have not been designed with cybersecurity in mind, and many of them remain insecure to the point of potential catastrophe, as an audit of a health organization showed in 2014 – things like “lack of authentication …; weak passwords or default and hardcoded vendor passwords like ‘admin’ or ‘1234’; and embedded web servers and administrative interfaces that make it easy to identify and manipulate devices once an attacker finds them on a network.”

Andrew Ostashen, cofounder and principal security engineer at Vulsec, said he was at a hospital when a neonatal system “went offline from discovery scan through an assessment due to the fact the organization was not aware where the device was configured in the network.” That, he said, meant that hackers would be able to take the same system offline if they got inside the organization. In this case, “luckily the device was not in use at the time of the assessment. Otherwise, this could have been catastrophic,” he said.

And while physical harm to patients is clearly the most crucial risk to mitigate, there is also the risk of attackers hijacking a device in order to get inside a healthcare organization’s network. TrapX Labs, a cybersecurity defense vendor, said in a breach report at the end of last year that hijacked medical devices are being used as a back door to hospital networks. “Unfortunately, hospitals do not seem to be able to detect MEDJACK or remediate it,” said Moshe Ben-Simon, cofounder and vice president of services, in a press release.

So, how far will the recent FDA guidance move the security needle? It wouldn’t have to move much to make a difference. As Schneier noted, the new guidance does not break major new ground.
It covers what most experts call good risk management and security “hygiene.” The FDA said in its statement that its recommendations are “encompassed” by the QSR, which includes requirements for handling complaints, audit standards, corrective and preventive action, software validation, risk analysis, and servicing. The agency also calls for manufacturers to “apply the NIST (National Institute of Standards and Technology) voluntary cybersecurity framework, which includes the core principles of ‘Identify, Protect, Detect, Respond and Recover.’”

But the overall focus, which calls for manufacturers to maintain the security of devices throughout their entire life cycle, is significant since, as has been widely reported, those devices tend to have a development cycle of five years or more, and then useful lives of 10 to 20 years.

Patch headache

Following the recommendation therefore means designing in, from the start, the capability to patch and update vulnerabilities throughout the life cycle.

The FDA also addresses what has been one complaint of manufacturers – some critics call it an excuse – that if they update a device, they have to go through a certification process again. The new guidance makes it clear that routine patches and updates don’t need to be reported or reviewed by the FDA. Vulnerabilities don’t need to be reported unless they cause deaths or other adverse events, or can’t be patched within 60 days.
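Designing that patch capability in from the start also means securing the update path itself, since a hijacked updater hands an attacker exactly the code-execution channel it was built to provide. The sketch below is illustrative only – the firmware bytes, digest, and function names are invented for the example: it shows the minimal step of refusing any update package whose SHA-256 digest does not match the one the vendor published over a trusted channel. Production devices should go further and verify an asymmetric vendor signature on the package.

```python
# Minimal sketch: reject any update package whose digest does not match
# the vendor-published value. Names and data here are hypothetical.
import hashlib
import hmac

def verify_update(package: bytes, expected_sha256_hex: str) -> bool:
    """Return True only if the package's SHA-256 digest matches the digest
    the vendor published out of band (e.g., over an authenticated channel)."""
    actual = hashlib.sha256(package).hexdigest()
    # compare_digest avoids leaking the match position through timing
    return hmac.compare_digest(actual, expected_sha256_hex)

# Example: a well-formed package passes, a tampered one is rejected.
firmware = b"example firmware image v2.1"
good_digest = hashlib.sha256(firmware).hexdigest()

print(verify_update(firmware, good_digest))                # → True
print(verify_update(firmware + b"tampered", good_digest))  # → False
```

A digest check like this only protects integrity in transit; it does not authenticate the source, which is why signed firmware (and protecting the key that signs it) is the stronger requirement implied by the guidance's life-cycle focus.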
Manufacturers are, however, required to notify users, make changes that lower risk, and be members of an Information Sharing and Analysis Organization (ISAO), to which they must report the vulnerability and what they did to fix it.

Ted Harrington, executive partner at Independent Security Evaluators, noted that the long development cycle of such devices is primarily focused on the performance and safety of their mechanical elements, not the software. “The software itself can and should be evolved throughout the approval process, and must have an update mechanism to account for the evolutions in attack techniques, discovery of previously unknown flaws in operating systems and communication protocols, and other performance enhancements,” he said.

Of course, even a routine security update process needs security built in. Stephanie Domas, lead medical security engineer at Battelle DeviceSecure Services, said, “updating mechanisms by their nature take in new code, in some format, and save it to the device to execute. This makes them enticing targets for malicious actors – there have been several occasions where software updaters were hijacked for nefarious purposes.”

The FDA also recommended that all stakeholders in the industry join ISAOs to promote the sharing of threat information within the private sector and with government as well. It said ISAOs, with adequate privacy provisions in place, can help “detect, mitigate or recover from the effects of cyber threats …”

That last item drew some criticism from Dr.
Kevin Fu, CEO of Virta Labs and an associate professor at the University of Michigan, who recommended “caution and skepticism” regarding ISAOs in a letter last April on a draft of the guidelines. “The sharing of data is not useful if the data are not high quality,” he wrote, citing one case where a report of a vulnerability in one server prompted a hospital to use an even less secure server.

There is also some debate over the broader concept of government involvement in setting security standards for medical devices. Shawn Merdinger, an independent security researcher, said the market can be a more potent force for improving security than government regulation. He pointed to the move last fall by short-seller investment firm Muddy Waters to publicize research by MedSec Holdings that found flaws in pacemakers and defibrillators made by St. Jude Medical Inc., which drove the company’s stock price down.

That collaboration was sharply criticized in some of the IT media, and St. Jude sued both Muddy Waters and MedSec, but Merdinger noted that “the bottom line at this point appears to be that St. Jude is issuing patches, ICS-CERT is issuing advisories, and nobody has been arrested or otherwise shut down on the business side.” The point, he said, is that if the stock price of a company is threatened, “that means executive bonuses and shareholder value is impacted. Once you start taking away people's boat payments, it's a whole new ballgame.”

Harrington, for his part, said he is not a fan of government regulation in cybersecurity, for several reasons: “It takes too long to develop, is outdated by the time it becomes enacted, is too riddled with compromise and it attempts to apply a uniform security model to organizations that are innovating and thus by definition are not uniform,” he said. But he does not think the FDA’s guidance is useless.
He said it can “help align the various stakeholders – from device manufacturers, hospitals, patients, and the government – as to what is important. It provides a common language around which the discussion of security can be centered.”

Ostashen said government should play a role – a more aggressive one. “The FDA must set up regulations as strict as those for HIPAA (the Health Insurance Portability and Accountability Act, which mandates the protection of personal health information),” he said, adding that he expects cyber liability insurers to refuse to pay for damages if they believe an organization was negligent in not following best practices. “Medical device manufacturers need to be held accountable for being negligent,” he said. “Yes, the development of these devices can take five years, but secure architecture and development must be a conscious thought at the inception of the product.”

Overall, Domas said she applauds the FDA for “taking the issue of security in medical devices seriously.” She noted that the agency has been heavily involved in medical conferences and guidance working groups. “They have been soliciting feedback and buy-in from the whole medical ecosystem, including medical manufacturers, hospitals, and security researchers,” she said.

Harrington said that while it will be a long time before “end users can be fully relaxed and confident in the security posture of medical devices, I am optimistic that this will improve over time.”

And the FDA said, “we are starting to see a change in mindset among all stakeholders – manufacturers are realizing the importance of implementing comprehensive cybersecurity controls throughout a product’s lifespan.”