Library privacy from the perspective of “groups”

I am trying to think of privacy from the perspective of the individual, the group, and society as a whole.

When it comes to “groups”, there are so many different types:
Collective groups
Ascriptive groups
Ad-hoc groups

I am trying to figure out whether it is helpful or useful to think of privacy in relation to groups, particularly in a library context. Doing a bit of brainstorming, here’s a listing of different types of groups that I have come up with (some are clearly library-related, while others might be less obviously so):

Adult writers group
Biological group
BME/BAME group
Cultural group
Discussion group
ebook working group
Ethnic group
Family group
Focus group
Friends of the library group
Genealogy group
Information Assurance Group
Interest groups
Journals club
Knitters group
Library user group
Linguistic group
Lobby group
Marginalized groups
Patron privacy interest group
Peer group
Performing arts group
Political groups
Professional groups of librarians
Public library client groups
Reading group
Religious group
Social groups
Socio-economic group
Youth group
Vulnerable groups
-Domestic violence survivors
-Racial and ethnic minorities
-LGBT communities

What about “group” in the sense of a “legal personality” other than a natural person (a company, a trust, an association, a community benefit society, or, in the case of The London Library, an organisation with a royal charter)? http://www.londonlibrary.co.uk/images/PDFs/LLCharter.pdf

Thinking in terms of a “legal personality”, that could be:
the organisation running the library,
a library vendor,
or a third party.

Then a few thoughts about the nature of the group:
Common bond group (attachment based primarily on bonds among individual group members)
Common identity group (group attachment based primarily on direct attachments to the group identity)

Dynamic and fluid group
Stable group

Has common interest
Capable of joint action
Explicit common traits and purposes

Does the group’s identity survive even when there are changes to the individual membership constituting the group?

Does the group have:
Its own memories
Its own culture
Its own choices
Its own rites and customs

Why do people give away their privacy so easily?

I often wonder why people seem to give away their privacy so readily. And I am rapidly coming to the conclusion that privacy is a hugely complex topic, where there are no easy answers to any of the fundamental questions that one might ask.

So what follows are a few random thoughts.

Firstly, it’s a question of invisibility. Data about all of us is being collected all the time, but we don’t notice it happening because, for the most part, it is being done invisibly. And even if we have a vague feeling that our data is being collected, we don’t have any real sense of what data companies hold about us (especially when some of it has been inferred), nor do we know how that data is being used. Susser (2016) says that “information technology makes social self-authorship invisible and unnecessary, by making it difficult for us to know when others are forming impressions about us, and by providing them with tools for making assumptions about who we are which obviate the need for our involvement in the process”.

Secondly, we don’t truly understand the value of the data that is being collected. How many of us happily use the functionality of Twitter or Facebook to indicate that we “like” particular items? Do we stop to think about just how much those “likes” can reveal about us? Kosinski, Stillwell and Graepel (2013) say that “relatively basic digital records of human behavior can be used to automatically and accurately estimate a wide range of personal attributes that people would typically assume to be private”. They demonstrate this using Facebook Likes, showing how these can be used to automatically and accurately predict a range of highly sensitive personal attributes including sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender.
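To see how little machinery such predictions need, here is a minimal sketch of the kind of pipeline the paper describes: singular value decomposition of a sparse user-by-Like matrix, followed by plain logistic regression. The data below is randomly generated purely for illustration; this is not the authors’ code, and the real study of course used actual Facebook Likes rather than synthetic ones.

```python
# Minimal sketch of predicting a private attribute from "Likes",
# loosely following Kosinski, Stillwell and Graepel (2013):
# reduce a binary user-by-Like matrix with SVD, then fit a
# logistic regression. All data here is synthetic.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_likes = 2000, 500

# Entry (i, j) = 1 if user i "liked" item j (sparse, ~5% density).
likes = (rng.random((n_users, n_likes)) < 0.05).astype(float)

# A hidden binary attribute weakly correlated with a subset of Likes
# (a stand-in for, say, political views in the real study).
signal = likes[:, :20].sum(axis=1)
attribute = (signal + rng.normal(0, 1, n_users) > signal.mean()).astype(int)

# Dimensionality reduction, then a simple linear classifier.
components = TruncatedSVD(n_components=50, random_state=0).fit_transform(likes)
X_tr, X_te, y_tr, y_te = train_test_split(
    components, attribute, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```

The point is not the toy numbers but how short the pipeline is: nothing here goes beyond an introductory machine-learning course, yet run over real Likes this kind of model recovers attributes most people assume are private.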

Thirdly, it’s contextual. Wang, Yan, Lin and Cui (2017) talk of how contextual settings influence the way people manage their privacy, saying “we may reveal personal information to a stranger on a plane, which rarely happens in other situations”. Westin (1967, p. 34) says that anonymous relations give rise to what Georg Simmel called the “phenomenon of the stranger”, the person who “often received the most surprising openness – confidences which sometimes have the character of a confessional and which would be carefully withheld from a more closely related person”. In this aspect of anonymity, the individual can express himself freely because he knows the stranger will not continue in his life and that, although the stranger may give an objective response to the questions put to him, he can exert no authority or restraint over the individual.

Fourthly, there’s the question of “nudge design”. Internet giants deliberately design their interfaces to make people feel as comfortable as possible with sharing their information. How many people, for example, stop to question why a new website they want to use suggests signing in with their Google, Facebook, or Twitter account credentials? When LinkedIn tells people how filling in certain fields on their profile will increase their visibility, do they ever stop to think through the implications? They are not fortune tellers, able to foresee all the ways in which the site’s terms and conditions will be changed, eroding their privacy bit by bit.

Fifthly, there’s the reassuring language used. For example, you don’t need to worry about government surveillance and bulk datasets, because it’s “only the metadata”. One needs to step back and think of the bigger picture: it’s not just the metadata of one individual. Imagine how valuable the metadata of millions of people can be. I always think of metadata in aggregate as being as valuable as the content; indeed, one could go further and say that the aggregate metadata is actually even more valuable than the content.

A final point relates to the power imbalance of one individual up against internet giants, with all the might, influence and resources that they can muster. Even if one were to seek redress for a privacy harm one had experienced, is there a means to do so? The damage done to any single individual would be seen as minuscule, whereas collectively it’s a very different story. Max Schrems has gone to court for the right to bring a class action lawsuit against Facebook. He argues it is vital that the case be treated as a class action, because 25,000 individual lawsuits on user privacy would be “impossible” due to the financial burden on users and the inefficiency for judges.

REFERENCES

KOSINSKI, M., STILLWELL, D. and GRAEPEL, T., 2013. Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences of the United States of America, 110(15), pp. 5802-5805.

SUSSER, D., 2016. Information Privacy and Social Self-Authorship. Techné: Research in Philosophy and Technology.

WANG, L., YAN, J., LIN, J. and CUI, W., 2017. Let the users tell the truth: Self-disclosure intention and self-disclosure honesty in mobile social networking. International Journal of Information Management, 37(1), pp. 1428-1440.

WESTIN, A.F., 1967. Privacy and freedom. 1st edn. New York: Atheneum.

 

Wealth of privacy theories

Having initially chosen privacy in the context of library & information services for my PhD research, it wasn’t until I started to read more widely around the topic that I began to realise just how complex and wide-ranging the concept of privacy really is.

A good starting point for understanding the meaning of privacy is the set of key texts by Warren & Brandeis (1890), Prosser (1960), and Westin (1967).

In all my reading on the topic I have considered some 60+ different privacy theories, so there is certainly no shortage of them. Below I have picked out some of them. In some cases they shed light on privacy from the perspective of the individual, the group, and society; in the case of Neil Richards, I have picked his theory because it is highly relevant to the information profession; and I have chosen others because they offer a different or interesting perspective on privacy.

Anita Allen – unpopular privacy

Neil Richards – intellectual privacy

Irwin Altman – social interaction theory

Sandra Petronio – communication privacy management theory

Dinev & Hart – privacy calculus theory

Kahneman and Tversky – prospect theory

J. D. Elhai – anxiety model

Rogers – Protection motivation theory

Jeremy Bentham – panopticon

Edward Bloustein – individualistic theory

John Dewey – relationship between individual and society

Michel Foucault – surveillance (using panopticon metaphor)

Woodrow Hartzog – obscurity

Lawrence Lessig – code as law

Jens Erik Mai – datafication

Helen Nissenbaum – contextual integrity

S. C. Rickless – barrier theory

Luke Stark – emotional context of information privacy

Warren & Brandeis – right to be let alone

Cynthia Dwork – differential privacy

Judith Wagner DeCew – cluster concept of privacy

James Rachels & Charles Fried – theory of intimacy

Icek Ajzen – theory of planned behaviour

Model for ontological frictions (cf. @Floridi)

Many of the frictions that determine the ease/difficulty with which one can access personally identifiable information fall under the heading of technology.

Whilst technology accounts for many frictions, it certainly doesn’t cover all of them. So I have been trying to think about how to encapsulate the various frictions in a model.

Yesterday, I very tentatively posted a slide on Twitter about the opposing forces of privacy-invasive technologies versus privacy-enhancing technologies, and one response was simply to say “brilliant”. I had been quite wary of posting the slide, because it only tells part of the story. Other aspects might include privacy by design and differential privacy.
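Since differential privacy will keep cropping up, a worked example may help to show what it actually does. Below is a minimal sketch of the Laplace mechanism, the textbook way of answering an aggregate counting query with a formal privacy guarantee; the library-flavoured query and the numbers are my own invented illustration, not part of any existing system.

```python
# Minimal sketch of the Laplace mechanism for a counting query.
# A count changes by at most 1 when one person's record is added
# or removed (sensitivity = 1), so Laplace noise with scale
# 1/epsilon gives epsilon-differential privacy.
import numpy as np

rng = np.random.default_rng(42)

def dp_count(true_count: int, epsilon: float) -> float:
    """Return a noisy count satisfying epsilon-differential privacy."""
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Invented query: "How many users borrowed a given sensitive title?"
true_count = 137
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps:>4}: noisy count = {dp_count(true_count, eps):7.1f}")
```

A smaller epsilon means more noise and a stronger guarantee; the price is accuracy, which is precisely the kind of opposing-forces trade-off the slide was trying to capture.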

My efforts to come up with a suitable model are only in their initial stages, and right now it’s very much a work in progress.

Thinking of the technology segment, the nitty-gritty might cover things like the following (a small worked example of the first item, encryption, follows the list):

  • Encryption
  • Secure networks
    – VPNs
    – Separation of staff Wi-Fi from user Wi-Fi, etc.
  • Strong passwords
  • 2FA
  • Use of blocking to inhibit tracking mechanisms
  • Password protections/password encoding
  • Specifically devised protocols or services
  • Warning systems (for externally captured data)
  • Limited disclosure technology (e.g. Sudoweb, FaceCloak)
  • Pro-active information security measures
  • Network penetration testing
  • Limiting editing/access rights to those who really need them
  • Ensuring ability to undertake a forensic audit
  • Firewall
  • Proactively take measures to protect privacy
  • Clear cookies and browser history
  • Delete/edit something you posted in past
  • Set your browser to disable or turn off cookies
  • Adblockers
  • Add-ons to prevent tracking (Privacy Badger, Ghostery, etc.)
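To make the first item on the list concrete, here is a minimal sketch of encrypting a record at rest using the Fernet recipe from the Python cryptography package (symmetric, AES-based authenticated encryption). The loan record and field names are invented for illustration, and real deployments would need proper key management, which this sketch deliberately leaves out.

```python
# Minimal sketch: encrypting a sensitive record at rest with the
# Python "cryptography" package's Fernet recipe.
from cryptography.fernet import Fernet

# In production the key would come from a key-management system,
# never be generated ad hoc or stored alongside the data.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b'{"patron_id": 4711, "borrowed": "A sensitive title"}'
token = fernet.encrypt(record)    # ciphertext, safe to store or back up
original = fernet.decrypt(token)  # recovery requires the key

assert original == record
```

The friction here is exactly the kind Floridi describes: without the key, the effort required to reach the personal data inside the token becomes prohibitively high.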

The technology piece could be seen in the larger context of regulation – cf. Lessig’s code as law. So I then took David Haynes’s regulation model, which covers four elements (norms, law, code, and self-regulation), and tried to think from there about all of the other types of friction that aren’t covered by those headings.

It is definitely not easy to make sense of and categorise the other elements, not least because they aren’t necessarily mutually exclusive.

For example, I decided to create a heading for “Obscurity” to cover obscurity, practical obscurity and obfuscation. These can in many cases be achieved through technology, but not necessarily. Making a deliberate decision NOT to digitise a work would be a means of achieving practical obscurity, and of ensuring that access to its content was far more restrictive than it would ever have been had it been digitised. And if the work contained sensitive personal data about individuals, the decision not to digitise would have restricted the flow of information, and would therefore be one of the frictions that Floridi refers to as “informational friction” or “ontological friction”.

For the moment, other than the regulation segments, the headings I have come up with are:

  • Temporal
  • Spatial
  • Sensory
  • Nature of the data
  • Obscurity
  • Digital information literacy (thinking specifically of digital privacy literacy)
  • Digital privacy protection behaviour
  • and finally contextual integrity (Nissenbaum).

Privacy and orders of worth

Sometimes it is in the most unexpected of places that you find some of the most useful information. In a book entitled “From Categories to Categorization” (edited by Rodolphe Durand et al.) is a chapter on privacy which looks at the topic from another angle.

Bajpai and Weber (2017) analyse emerging notions of informational privacy in public discourse and policymaking in the United States.

They say that conceptions of privacy were tied to institutional orders of worth. Those orders offered theories, analogies and vocabularies that could be deployed to extrapolate the concept of privacy into new domains, to make sense of new technologies, and to shape policy.

Drawing on the work of Boltanski & Thévenot (2006 [1991]), they list the following orders of worth applied to privacy:

  • Inspired
  • Domestic
  • Fame
  • Civic
  • Market
  • Industrial

A question I would ask is whether privacy is now seen through the “market” perspective at the expense of other perspectives. The market world values competition, winning and self-interest, and devalues loss and scarcity; whereas the inspired world, for example, values spontaneity, independence and authenticity, and devalues habit, regulation and routine. So, in the inspired world, data must be free for creative use, and data controls are deeply personal decisions.

Bajpai and Weber say that, according to Habermas (1991, p. 319), a person’s life-world is divided into a private sphere (traditionally family, private household, and intimate relationships) and a public sphere (traditionally political and civic life, and public spaces). Clearly that distinction has come under strain, and I think it is misleading to see things in such black-and-white terms.

A particularly noteworthy observation of Bajpai and Weber is that organisational research on categories has drawn mostly on theories of what cognitive psychologists and anthropologists call “object concepts” at the expense of “abstract concepts” such as truth, rights, self, democracy or privacy.

They say that “informational privacy is an abstract concept that rests on translating ideas of privacy from the predigital to the digital era. The reformulation of privacy as informational privacy entails political struggles over epistemic control that are only weakly bounded by “objective” qualities of the category”.

“The policy actors involved in translating the concept of privacy to digital privacy are arguably less constrained by material properties of privacy practices and conventional understandings, as the technologies, practices, and conventions in the digital domain are less settled and rapidly emerging. Policy actors are then involved in creating subsequent constraints in the form of legal doctrine and public policy rather than responding to them”.

REFERENCES

BAJPAI, K. and WEBER, K., 2017. Privacy in public: Translating the category of privacy to the digital age. From Categories to Categorization: Studies in Sociology, Organizations and Strategy at the Crossroads. Emerald Publishing Limited, pp. 223-258.

HABERMAS, J., 1991. The structural transformation of the public sphere: an inquiry into a category of bourgeois society. Translated by T. Burger with F. Lawrence. Cambridge, MA: MIT Press.

UK data protection & privacy research in LIS sector

In the UK there are a number of initiatives in the LIS sector relating to data protection and privacy issues.

These include research by Jo Bailey in Sheffield to investigate data protection management in libraries. Specifically, this research will establish the level of training or support available to library and information professionals, the prevalence of specific policies in organisations, and the opinions and perspectives on data protection legislation amongst library and information professionals.

Research by David McMenemy, Nik Williams, and Lauren Smith on the chilling effect https://twitter.com/D_McMenemy/status/885085826023084037

The Carnegie UK Trust and CILIP are working together on a project relating to privacy https://www.carnegiecouncil.org/studio/briefings/20170510-privacy-in-a-digital-age-video-highlights

and https://www.carnegieuktrust.org.uk/project/balancing-privacy-public-benefit/

The distinction between public & private

The law tends to treat information in only two ways: either as public or as private.

But is the public / private dichotomy also a false dichotomy?

The dichotomy is challenged by Nissenbaum’s theory of contextual integrity.

Is the public/private distinction a quaint norm from an irrelevant past? (Showers 2015)

Many younger internet users see things in a far more nuanced way than simply in terms of public versus private, treating these categories as multiple and overlapping.

One might, for example, restrict access and visibility to family, to friends, or to employers.

It isn’t as straightforward as information being either public or private; nor is it enough to come up with a third category of “semi-public” without going on to develop these concepts further.

The lines between these designations are at times blurred, mutable, even non-existent.

Technology (such as data mining) creates interconnections between what were formerly separate spaces.

Is the information restricted by technological features?

  • Passwords
  • Privacy settings
  • etc

Where there are barriers to access, their very existence communicates, even to those with the right credentials, that there is a desire for, or an expectation of, privacy.

Solove (2004) argues that the secrecy paradigm “fails to recognise that individuals want to keep things private from some people but not from others”.

The two privacy torts that are most relevant to the public/private distinction are:

  1. Public disclosure of private facts (which limits liability to defendants who publicize information that is private, not of legitimate public concern, and disseminated in a highly offensive manner)
  2. Intrusion upon seclusion

When can a plaintiff reasonably expect information about himself to remain “private” after he has shared it with one or more persons?

A workable definition of online obscurity is needed.

REFERENCES

SHOWERS, B., ed., 2015. Library analytics and metrics: using data to drive decisions and services. London: Facet Publishing.

SOLOVE, D.J., 2004. The digital person: technology and privacy in the information age. New York: NYU Press.