Data protection laws are primarily geared to protecting the privacy of the individual, and more specifically, to protecting identifiable individuals.
Individual privacy protects a person’s individual autonomy, human dignity, personal freedom or interests. It is about protecting the right to control, manage, own or prevent access to information about the self.
When it comes to group privacy, one first has to ask whether there is indeed any such thing; and, if there is, whether it means that groups can have rights and responsibilities, including moral duties.
(Finn, Wright et al. 2013) identify seven types of privacy, one of which, privacy of association, includes group privacy:
- privacy of the person,
- privacy of behaviour and action,
- privacy of personal communication,
- privacy of data and image,
- privacy of thoughts and feelings,
- privacy of location and space and
- privacy of association (including group privacy).
Privacy of association addresses the rights people have to associate with anybody they wish to, without unauthorized monitoring or marginalization. This category also addresses the types of groups that individuals belong to but over which they have no control, for example, ethnicity or ancestry.
There are those who would argue that if a group does have a right to privacy, it merely consists of an aggregation of the privacy rights of each of the individuals who are members of the group; that, in effect, the focus is once again on the individual and not the group.
The real situation is far more complex than the reductionist view (i.e. the view that the privacy interests of the group can simply be reduced to the sum of the rights of the group's individual members) suggests.
Some proponents of group rights conceive of right-holding groups as moral entities in their own right, so that, as a right-holder, a group has a being and status which is analogous to those of an individual person.
I find it useful here to think of the concept of a legal personality, i.e. where the law recognises an entity along the lines of an individual. So, for example, a registered company has a legal personality, and as such it would technically be possible to libel a company.
People sometimes make the mistake of confusing group rights with group-differentiated rights:
“Group rights” in the ordinary sense of that phrase refer to rights possessed by the group qua group, rather than ones that are possessed by its members severally.
“Group-differentiated rights” refers to rights which people possess as members of a group, but which are in reality nothing more than individual rights. So, for example, if all residents of a particular area are entitled to membership of their local public library, this right is really a right of the individual right-holder rather than being a group right as such.
There are a number of different types of groups – collectives, ascriptive groups, and then there is another type which I will refer to as ad hoc groups (cf Mittelstadt 2017).
Collectives are groups which are intentionally joined due to collective interests, shared background or other explicit common traits and purposes. In a library context, this could cover a writers’ group, a focus group, a friends of the library group, a book group, or a journal club.
Ascriptive groups are groups whose membership is determined by inherited or accidentally developed characteristics. In other words, the group’s membership is pre-determined, because such a group cannot be intentionally joined or left without the boundaries of the group being redefined. It would, for example, cover an ethnic group.
Larry May says that “when a collection of persons displays either the capacity for joint action or common interest, then that collection of persons should be regarded as a group” (May 1989).
The third category – ad hoc groups – is one which I believe should be treated in its own right, because the nature of such groups creates a number of issues and problems that are best dealt with separately, and because big data and the extensive use of algorithms mean that these issues and problems arise on a huge scale which demands attention. “Ad hoc groups” refers to groups whose membership is assembled for a third party interest. The group is often created for a time- or purpose-limited period with volatile membership requirements.
The term “ad hoc groups” would cover groups whose identity consists of classifications and rules constructed by an algorithmic classification system. But it would be wrong to limit this group type purely to groups generated in the context of big data and algorithms.
Another example of ad hoc groups might be groups that are identified merely by having something in common that they have all read (for example, if the police were to come into the library demanding details of all the people who had recently been looking at books on explosives; or lie detectors; or satanism & the occult; or books on childbearing).
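The library example above can be sketched in code. The following is a minimal, hypothetical illustration: the patron IDs, subject headings and the flagging rule are all invented, and a real system would of course be an algorithmic classifier over far richer data. The point it shows is structural: the group is assembled entirely by a third party's rule, and its members neither joined it nor know it exists.

```python
# Hypothetical sketch: an "ad hoc group" assembled by a third party's rule
# over library circulation records. All IDs, subjects and the rule itself
# are invented for illustration.

# Each record links a (pseudonymous) patron ID to a subject heading.
loans = [
    ("patron_01", "explosives"),
    ("patron_02", "gardening"),
    ("patron_03", "explosives"),
    ("patron_01", "chemistry"),
    ("patron_04", "satanism"),
]

FLAGGED_SUBJECTS = {"explosives", "satanism"}

def ad_hoc_group(records, flagged):
    """Assemble a group purely from a classification rule.

    No member joined, or is even aware of, this grouping, and the
    membership is volatile: it changes whenever the rule or the
    underlying records change.
    """
    group = set()
    for patron, subject in records:
        if subject in flagged:
            group.add(patron)
    return group

print(sorted(ad_hoc_group(loans, FLAGGED_SUBJECTS)))
# → ['patron_01', 'patron_03', 'patron_04']
```

Changing `FLAGGED_SUBJECTS` redefines the group instantly, which is exactly the volatility of membership described above.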
Data protection practice depends largely on the concept of notice and consent, reflecting its focus on identifiable individuals. However, when it comes to algorithmically created groups, “How do you assess buy-in when fundamentally, because you can’t do individual consent, you are talking about community consent? And if you are doing community consent, that is gendered, class-based and ethnic in a way that presents even more dimensions of problems” (Floridi 2017, who cites the quotation from an interview with Nathaniel Raymond, Director, Signal Program on Human Security and Technology, Harvard University, 25.2.2015).
Ad hoc groups do not currently have any privacy rights or duties. I believe that for data protection law to reflect the way things work, this needs to be remedied. (Mittelstadt 2017) argues the case for granting a specific informational privacy right, a right to inviolate personality.
Privacy protections for ad hoc groups would need to cover:
- Unwarranted third party manipulation of identity
- Harms from automated decision-making based on profiling identities assembled by third parties. Such harms are less likely to be caused by access to personally identifiable information about individuals, and more likely to occur where authorities or corporations draw inferences about people at the group level. A good example of group harm is the well-known Facebook emotion experiment: profiling of this kind can be used to identify vulnerable individuals who are susceptible to advertising or to political messages.
- Oppressive or authoritarian powers being used to harm the group or suppress its activities.
What is clear to me is that proactive protections are required, and that is because ad hoc groups lack self-awareness, collective agency and identity. In other words, the members of an ad hoc group wouldn’t even be aware that such a grouping had been created, let alone have a means of acting collectively to challenge any harms arising from the use of that grouping.
Algorithmically generated groups do not have the individual as their main focus. Yet, (Barocas, Nissenbaum 2014) warn that “even when individuals are not ‘identifiable’, they may still be ‘reachable’, …may still be subject to consequential inferences and predictions taken on that basis”.
From an ethical perspective, I think of the principles of non-maleficence and beneficence. The Latin phrase “Primum non nocere”, which translates as “first, do no harm” and which is also referred to as non-maleficence, is a precept used in the sphere of bioethics. It is contrasted with its corollary of beneficence, which is used in a healthcare context to mean doing good; for example, researchers should have the welfare of their research participants uppermost in their minds as a goal of any research study or clinical trial. I believe that a comparable ethical duty should be imposed on data processors.
Not all groups bear “group rights”, that is rights which are possessed by the group qua group, as opposed to being merely an aggregation of the rights of the individual members of the group.
“An essential condition for many theorists is the integrity manifested by a group: a group must surmount a threshold of unity and identity if it is to be potentially capable of bearing rights” (Jones 2016)
(French 1984) distinguishes between “aggregate collectivities” and “conglomerate collectivities”:
- An aggregate collectivity is a mere collection of individuals such as a crowd or the people standing at a bus stop or a statistical category such as middle-income earners. If we were to ascribe either moral responsibility or moral rights to an aggregate, that responsibility and those rights would be reducible, without remainder, to the responsibilities and rights of the individuals who make it up.
- A conglomerate collectivity, by contrast, possesses a unified being since it is formally constituted as an organisation with an internal structure, rules, offices and decision procedures.
(Newman 2004) distinguishes between “sets” and “collectivities”:
- A set, like French’s aggregate, becomes a different set each time its membership changes. It has no identity separate from the individuals who make it up.
- A collectivity, by contrast, remains identifiable as the same collectivity even though its membership changes.
The theories of both French and Newman envisage the possibility that a group can have an identity which survives changes to the individual membership that constitutes it. Further, both treat such an identity as an essential prerequisite of the sorts of groups that can have group rights.
(Petronio, Altman 2002) differentiates three general patterns in how people manage group boundaries: inclusive boundary coordination, intersected boundary coordination, and unified boundary coordination:
- Inclusive boundary coordination refers to person A giving up privacy control to person B in order to get something in return (e.g., a patient talking about their eating habits to a doctor so the doctor can provide adequate consultation with regard to his or her health status).
- In intersected boundary coordination, the concealed information is perceived as comparable, and person A and B are considered as equals (e.g., two friends mutually disclosing the troubles they face at home).
- Unified boundary coordination is a pattern whereby everyone is in control of the private information, whilst no one really owns the information. Here, the power of person A over B or the equal sharing of information between person A and B is not the most important aspect (e.g., members of a sports club concealing that they have cheated during a game). Rather, “the body of private information typically found in this type of coordination often predates all members and new members make contributions, yet the information belongs to the body of the whole” (Petronio, Altman 2002).
Petronio conceptualizes group privacy management as coordinating unified boundaries, while individual privacy management is conceptualized as the coordination of privacy rules around the self.
I don’t believe that it is possible simply to look at things through the prism of individual, group, or society. Drawing such definite distinctions fails to address some of the problems that can arise from our interconnected world. There is the “network effect”, where the loss of privacy of one individual may have an impact upon the privacy of others. So, for example, imagine that a library holds a Christmas party for the benefit of its staff. At the party, someone takes a picture of a member of library staff looking a little bit worse for wear, and then proceeds to post the picture on their Facebook page. Imagine, further, that the picture features half a dozen staff members, albeit that the person who appears most prominently is the individual who looks a bit drunk. What if the photographer didn’t just post the photograph onto their social media account, but also tagged, and thereby identified, everyone who appeared in it?
Another example of the network effect might be the way in which some people share their email address book with others, whether knowingly or not, whereas others deliberately choose not to share information in that way. Now, imagine how that information could be used to generate information about someone who hadn’t directly shared their own. If Joe Bloggs appears in the address books that other people have shared, then companies would be able to build a partial picture of the contacts in Joe Bloggs’s own address book, even though he never shared it himself.
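This inference can be sketched very simply. The following is a hypothetical illustration (the names and the assumption that contact lists are roughly reciprocal are mine, not a description of any real company's method): from the address books that others have shared, one can infer likely entries in the address book of someone who shared nothing.

```python
# Hypothetical sketch of the "network effect": Joe Bloggs never shares his
# own address book, yet a partial picture of his contacts can be inferred
# from the address books that other people have shared. Names are invented.

shared_address_books = {
    "alice": ["joe.bloggs", "carol", "dave"],
    "bob":   ["joe.bloggs", "erin"],
    "carol": ["alice", "frank"],
}

def inferred_contacts(target, books):
    """Likely entries in `target`'s own (unshared) address book: everyone
    whose shared book lists `target`, on the assumption that contact
    lists tend to be reciprocal."""
    return sorted(owner for owner, contacts in books.items()
                  if target in contacts)

print(inferred_contacts("joe.bloggs", shared_address_books))
# → ['alice', 'bob']
```

Joe Bloggs took a deliberate decision not to share, yet the sharing decisions of Alice and Bob have eroded his privacy anyway, which is precisely the point about interconnectedness.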
I may not have explained this very well, so let me give a totally different example which illustrates this concept of interconnectedness.
An online vendor has a product consisting of trade data. The vendor has negotiated agreements directly with the statistical offices of many of the largest trading nations, but one or two statistical offices proved difficult over the contract terms and the price they wanted to charge for the data. The vendor is still able to obtain a certain amount of the data relating to those countries, because it can be derived from the countries whose statistical offices have done deals with the vendor: trade is a two-way process, so each country’s statistics will of course also cover the trade that country has with other countries, albeit that this would be thought of as “derived” data.
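The derivation works because every trade flow is reported twice, once by each side. A minimal sketch, with invented country codes and figures: country C has licensed nothing, yet its exports can be reconstructed from what its partners report importing from it.

```python
# Hypothetical sketch of "derived" trade data: country C's statistical
# office has not licensed its data, but C's trade can still be derived
# from the records of partners A and B, who have. Figures are invented.

licensed_flows = [
    # (reporter, partner, direction, value)
    ("A", "C", "imports", 120),   # A reports importing 120 from C
    ("B", "C", "imports", 80),    # B reports importing 80 from C
    ("A", "C", "exports", 50),    # A reports exporting 50 to C
]

def derive_exports(country, flows):
    """Derive a country's total exports from what its trading partners
    report importing from it (the mirror of its own missing records)."""
    return sum(value for reporter, partner, direction, value in flows
               if partner == country and direction == "imports")

print(derive_exports("C", licensed_flows))
# → 200, reconstructed without any data from C itself
```

In trade statistics this technique is known as using mirror statistics; the reconstruction is partial (it only covers C's trade with licensed partners) but, as the example shows, C's refusal to deal with the vendor does not keep its data out of the product.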
Privacy from the perspective of society as a whole
There is a challenge involved in balancing the individual’s right to privacy against the interests of society as a whole (e.g. to protect the country against terrorist activities, where the balance is between the safety of society and one person’s privacy, and where the likely outcome is that the security side of the scales will win).
The legal scholar Thomas Emerson states that privacy “is based upon premises of individualism, that society exists to promote the worth and the dignity of the individual… The right of privacy …is essentially the right not to participate in the collective life – the right to shut out the community” (Emerson 1970). But that is precisely what it is not. Privacy is not about cutting oneself off from society altogether. (Bernal 2013) says that privacy has a collective benefit, supporting coherent societies.
The idea of privacy as being all about the individual and their right to shut themselves off from society can be characterised as Kantian. Immanuel Kant focusses on free individual choice, where any beneficent action that interferes with or usurps the recipient’s free choice is wrong. His philosophy gives recognition to the individual whilst overlooking the idea of the welfare of everyone.
Where people take a utilitarian perspective and balance individual rights against the common good, it is very rare for the rights of the individual to prevail. The fundamental problem is that when one tries to balance individual rights with social responsibilities, individuality with community, that dichotomy treats the interests of the individual and the interests of society as being in conflict with each other, as though that were inevitable.
John Dewey takes a different view. His theory of the relationship between individual and society is one where the value of protecting individual rights emerges from their contribution to society. On Dewey’s view, society deliberately makes space for the individual precisely because of the social benefits of doing so. Dewey measures the value of rights based on “the contribution they make to the welfare of the community” (Dewey 1936).
Daniel Solove takes a similar view to Dewey. He says that “part of what makes a society a good place in which to live is the extent to which it allows people freedom from the intrusiveness of others. A society without privacy protection would be oppressive. When protecting individual rights, we as a society decide to hold back in order to receive the benefits of creating free zones for individuals to flourish” (Solove 2013).
He adds that “privacy should be understood as a societal value, not just an individual one” (Solove 2013).
Speaking up for the privacy of the individual within society is therefore a demonstration of the norms and values which characterise that society. Rather than a set of scales balancing the privacy of the individual on one side against the interests of society on the other, as though you could only have one at the expense of the other, it is a case of balancing societal interests on both sides of the scales.
BAROCAS, S. and NISSENBAUM, H., 2014. Big data’s end run around anonymity and consent. Cambridge University Press, NY.
BERNAL, P., 2013. Individual privacy vs. collective security? No! Paul Bernal’s blog, 17 October.
DEWEY, J., 1936. Liberalism and civil liberties. Social Frontier, 2, pp. 137-138.
EMERSON, T.I., 1970. The system of freedom of expression. Random House Trade.
FINN, R.L., WRIGHT, D. and FRIEDEWALD, M., 2013. Seven types of privacy. In: S. GUTWIRTH et al., eds, European Data Protection: Coming of Age. Springer Netherlands, p. 3.
FLORIDI, L., 2017. Group Privacy: A Defence and an Interpretation. In: L. TAYLOR, L. FLORIDI and B. VAN DER SLOOT, eds, Group Privacy: New Challenges of Data Technologies. Cham: Springer International Publishing, pp. 83-100.
FRENCH, P., 1984. Collective and corporate responsibility. Columbia University Press.
MAY, L., 1989. The morality of groups. University of Notre Dame Press.
MITTELSTADT, B., 2017. From Individual to Group Privacy in Big Data Analytics. Philosophy & Technology.
NEWMAN, D.G., 2004. Collective Interests and Collective Rights. The American Journal of Jurisprudence, 49(1), pp. 127-163.
PETRONIO, S. and ALTMAN, I., 2002. Boundaries of Privacy: Dialectics of Disclosure. State University of New York Press.
SOLOVE, D.J., 2013. Nothing to Hide: The False Tradeoff Between Privacy and Security. New Haven: Yale University Press.