
by Senior Editor

SecTor 2010: Why security is the enemy of privacy

Oct 26, 2010
Compliance, Data and Information Security, Data Breach

Some might expect security to be the answer for privacy protection, but it's often part of the problem. Tracy Ann Kosa, a privacy impact assessment specialist with the government of Ontario, explains why at SecTor 2010.

TORONTO — As we noted in last year’s CSO article, “Six ways we gave up our privacy,” people are increasingly — and willingly — throwing their privacy to the wind, thanks to an addiction to Google apps, GPS devices, the BlackBerry, iPhone and Android, and social networking sites like Facebook and Twitter. Some security experts believe privacy is dead already.

But there’s a twist to the story, according to Tracy Ann Kosa, a privacy impact assessment specialist with the government of Ontario, Canada: security, seen by many as privacy’s last hope, has become part of the problem, she explained in her talk at the 2010 SecTor conference.

Also see “Slapped in the Facebook: social networking dangers exposed”

“Privacy advocates spend a lot of time refuting the high-profile discussions about the pending death of privacy, particularly online,” she said. “The focus would be better spent addressing the cause: security.”

She noted that personally identifiable information appears where people least expect it to, leaving an often brutally detailed virtual trail. Security procedures and technologies contribute to the problem because they often force the recording, monitoring and auditing of that information in the interest of protecting people from an unseen threat.

The systems we created to serve us have become the masters, and to regain some privacy we must rethink the way we use the machinery, she said.

One starting point might be to rethink security standards and regulations that have failed to protect privacy, and that have led to people being blamed for failures when the real problem was security technology that didn’t work as advertised, she said.

One example of failure, she noted, was that the Hannaford Brothers supermarket chain had been declared PCI compliant before it suffered a massive breach of personally identifiable customer information. “Why have the standard if it doesn’t work in the first place?” she asked.

She also cited an infrastructure vs. end user problem: “We tell people to put their data somewhere, that it’s safe and secure,” she said. “Then, when there is a breach it’s easier to turn to the information collector and blame them. ‘The infrastructure didn’t fail, you didn’t follow the rules.’ But there shouldn’t have been the ability for the end user to mess it up in the first place.”

Kosa mentioned a medical researcher who was studying women’s health records related to breast cancer. The system she used was breached, and she was blamed for “not using the system properly.” To that, Kosa said, “If we created this infrastructure, declare it secure and demand people use it, we can’t really blame them when it doesn’t work.”

Meanwhile, cameras are everywhere, and people can watch CCTV feeds from the comfort of their homes. But there are so many cameras that there aren’t enough people to monitor the activity in real time. The bad guys know this, and so the security technology ceases to be an effective deterrent.

“We spy on each other and talk about complex solutions to system problems that didn’t exist before we started collecting data,” she said.

Is there a solution? Perhaps not. But in recent CSO articles on the subject, security professionals have kicked around some ideas.

Educating younger people about what they are giving away is a good place to start, practitioners have said. Businesses could steer clear of services like Gmail when they have sensitive data to send. And consumers can demand that government agencies crack down on the privacy-eroding practices of private-sector companies.

Of course, for these things to happen, the people who have willingly given away their privacy up to this point have to want it.