Information security began largely as a concern for businesses and governments, which used computers and networks to manage information well before the rest of the public came online. But as the internet and smartphones connected people all over the world, new services appeared that processed more and more data from average consumers, and this evolving landscape created new ways for people’s lives to be disrupted.
For example, AOL released a dataset of user searches in 2006 for research purposes. While the company removed account information, journalists and researchers were able to identify individual users based on their search terms alone. Similar breaches of confidentiality fell under the rubric of privacy, since they aligned well with our understanding of privacy violations in the non-digital world: someone wrongfully gained access to previously secret information about a person.
But seeing privacy as simply security applied to user data fails to capture the full scope of what changed when everyone began engaging with the digital world. In the past, information moved around in fairly mechanical ways, requiring significant time and resources to spread broadly. Even as we developed technologies for broadcast and distribution, most people only engaged with them as viewers or consumers.
When we step into today’s online world, any of us can instantly send vast amounts of information anywhere on Earth. This opens up all sorts of new information flows that no longer follow the rules we had come to expect offline, leaving us to navigate a very different landscape. Most people do not understand how these new flows work, and they have to adapt as they go.
For direct communications, we have come up with new ways to express context and set norms, such as abbreviations and emojis. But while we had ways of recognizing when someone was staring at us in the offline world, others in the online world can more easily “stare” without detection. When Samuel Warren and Louis Brandeis penned “The Right to Privacy” in 1890, the newly portable camera evoked similar concerns for them about who was learning what and how they might use that knowledge.
Even then, the real issue involved more than simply keeping our private lives hidden. More broadly, it meant people should have a degree of control over information about themselves and their activities when broadcasting such information did not serve the public interest. In other words, the right to privacy was about recognizing people’s interests when knowledge about them is shared, and that they should have a say in such exchanges. This leads to my preferred definition of privacy in today’s online contexts: respecting people’s agency and interests when handling their information.
For companies handling personal data, this notion of privacy encompasses much more than simply ensuring personally identifiable information (PII) does not leak. You need to ask whether you should collect such data in the first place, how it actually gets used, and what your users expect you to know and do with it. These questions are often values-driven and difficult to capture in tidy metrics, but that does not diminish their importance. Because privacy involves people and their behaviors, it is naturally fuzzier and more organic than purely technical concerns.
It also means expertise in security does not automatically translate to expertise in privacy. Security vulnerabilities can certainly have an impact on privacy, and security techniques such as defense in depth remain useful in a privacy context. But security alone will not help answer the full range of privacy questions that arise when products or features alter information flows. You can violate someone’s privacy expectations in a very secure way.
If you are interested in learning more about privacy apart from its overlap with security, start with these resources from three excellent researchers:
- In 2010, danah boyd (founder of Data & Society, Principal Researcher at Microsoft Research) gave two talks on the public/private binary which remain highly relevant (links to transcripts): “Making Sense of Privacy and Publicity” at SXSW and “Privacy and Publicity in the Context of Big Data” at WWW2010
- More recently, Helen Nissenbaum (Professor of Information Science at Cornell Tech, creator of the “contextual integrity” framework) gave an interview on why we should not simply rely on consent for privacy (paywalled link, but you can find archives): “Stop Thinking About Consent: It Isn’t Possible and It Isn’t Right”
- Finally, Maritza Johnson (researcher at the International Computer Science Institute) gave a 10-minute talk at the 2018 OURSA conference on baking privacy into the user experience (video link): “Wait, how was I supposed to know?”