To address privacy issues, we must take account of big data, algorithms & the like

There is a real risk that we will fail to adequately protect the privacy of the individual if we don’t address the potential risks and harms posed by algorithms, predictive analytics and the like.

This is a complex topic, and it’s hard to convey in a short, snappy, easily understandable way precisely what is at stake. But below I have made a few notes about some of the issues involved:

  • Big data
  • Open data – the risks of re-identification increase as new datasets emerge, and as new technological capabilities become possible
  • Data mining
  • Predictive analytics – the problem of there being a margin of error; and when we’re talking of “big data”, that means a lot of people affected (for example, a model that is 99% accurate, applied to 10 million people, still misclassifies 100,000 of them). Guessing rather than deriving analytics from hard facts has the same effect as using falsified information
  • Machine learning
  • Algorithms – Cathy O’Neil says that algorithms are opinions embedded in code. She comments that most people think algorithms are objective, true and scientific, but that that’s a marketing trick.
    • Algorithms going bad because of unintentional problems that reflect cultural biases
    • Algorithms going bad through neglect, e.g. auto-tagging of images in which black people were labelled as gorillas
    • Nasty but legal algorithms, e.g. showing advertisers how to target vulnerable teenagers
    • Nefarious and sometimes illegal algorithms, e.g. China’s social credit score, which has the potential to function as a way of keeping tabs on an individual’s political opinions
    • Algorithmic harms:
      • Tacit collusion (conscious parallelism), e.g. sellers tacitly colluding on posted prices to profit from “low value” and loyal customers
      • Hub-and-spoke – a hub-and-spoke framework may emerge when sellers use the same algorithm or the same data pool to determine price
      • Behavioural discrimination, e.g. behaviourally discriminating in favour of “high value” customers
  • Scale of privacy risks
  • Focus on groups: data analytic technologies are rarely focused on individuals, but are instead focused on the crowd of technology users
  • Networks
    • Social networks
    • Network effect – when present, the value of a product or service depends on the number of others using it
    • Metcalfe’s law – the value of a network grows roughly in proportion to the square of the number of its users (see the sketch after this list)
    • Infrastructure for tracking
    • Data can transfer around the globe in fractions of a second
    • Networked surveillance
    • VPNs
    • Tor browser relays
    • Contacts/friends lists – use of friends/contacts lists to map who someone associates with. If, unbeknownst to them, one of their contacts has a criminal record, is it fair for them to be negatively affected because of that association?
    • Networked devices
    • Internet of things (Bruce Schneier refers to the IoT as a world-size robot)
    • Networks not bounded by state boundaries – which limits the ability to take effective legal action
    • Social surveillance
    • Network friction (an agent’s network fluidity)
    • Network of agents (human, artificial, hybrid)
    • Interoperability has the effect of making the network bigger
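
Since the network effect and Metcalfe’s law underpin several of the notes above, here is a quick back-of-the-envelope sketch (my own illustration; the user counts are invented) of why the number of possible pairwise connections, and hence the proxied “value” of a network, grows quadratically:

```python
# Illustrative sketch of Metcalfe's law: network "value" is often proxied by
# the number of possible pairwise connections, n*(n-1)/2, which grows ~ n^2.
def pairwise_links(n: int) -> int:
    """Number of distinct pairs among n users."""
    return n * (n - 1) // 2

for n in (1_000, 2_000, 4_000):  # invented user counts, purely for illustration
    print(f"{n:>5} users -> {pairwise_links(n):>9,} possible links")
# Doubling the user base roughly quadruples the possible links -- one reason
# interoperability, by enlarging the effective network, also scales up the
# surface area for tracking and surveillance.
```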

Ontological frictions as a mindmap

Luciano Floridi (@floridi) envisages “ontological friction” as referring to the forces that oppose the flow of information within (a region of) the infosphere, and hence (as a coefficient) to the amount of work required for a certain kind of agent to obtain information.
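
Floridi leaves the “coefficient” reading informal. Purely as an illustrative sketch (my own notation, not Floridi’s), it could be written as:

```latex
% Illustrative notation only -- not Floridi's own formalism.
% W   : work required for agent a to obtain information i
% phi : ontological friction coefficient of (a region of) the infosphere
% c   : baseline cost of access in a frictionless environment
\[
  W(a, i) = \phi \cdot c(a, i), \qquad \phi \geq 0
\]
% phi close to 0 : a frictionless region -- personal data flows almost freely
% phi large      : information is obtained only with considerable effort
```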

I have been building a list of the different types of friction that will determine the ease or difficulty with which personal data can flow back and forth. The problem is that it is difficult to summarise everything on a single page.

I have started to map out the frictions in the form of a mindmap. This is very much a first attempt at doing so, and it will no doubt need plenty of refining. The link below is to a PDF document which will need to be enlarged in order to read the contents of the mindmap.

Ontological frictions mindmap

To what extent does big business set the privacy agenda?

– In terms of the language they use, do they use reassuring language to disguise their true intent?
– Why do we often hear that light-touch regulation is needed for them to innovate, as though regulation makes innovation impossible?
– In terms of sponsoring speakers at key privacy conferences, where those speakers don’t always declare that they are being sponsored
– In terms of nudge design
– Given that they wield huge influence, and have the potential to influence people’s emotions, voting intentions etc., are they behaving ethically? To what extent are they transparent? Accountable?
– Why do we get stories about their ability to target vulnerable individuals who are susceptible to advertising?
– Why do we see the wish of individuals to have their privacy respected presented as a negative thing: “Consumers around world sabotage business success every day by providing wrong info when asked for personal details” https://t.co/41oBjQg89J
– To what extent are governments and corporates not just active players in the personal data universe but also stimulators and shapers of the ecosystem?
– “We risk becoming digital peasants owned by software & advertising companies not to mention overreaching governments” (Fairfield, 2017)

Privacy: developing a theoretical framework

To say that the privacy landscape is complex is an understatement. How does one make sense of the multi-faceted concepts that it encompasses?

My research will be using @Floridi’s idea of ontological friction to consider the ways in which personal data is able to flow within the infosphere. However, in order to put ontological friction into context I have been trying to put together a wider theoretical framework for privacy:

  1. Having come up with an initial set of key concepts, I then developed each of these in more detail.
  2. I then undertook a textual analysis of the notes I have made over many months of reading around the topic, to double-check that I hadn’t missed anything.
  3. Finally, I re-read my notes to cross-check that I had picked out the important elements.

So far, the broad headings that the framework covers are listed below. Are there any that you think are missing?

  • Level of analysis: entity type
  • Legal status (natural person, legal personality etc)
  • Duties of data processors
  • Rights of data subjects
  • Ontological frictions
  • Data gathering
  • Content (PII, DII, Sensitive personal data)
  • Ownership, access and control
  • Purpose of processing
  • Uses
  • Risks & Harms
  • Intent
  • Organizational attitude to privacy
  • Remedies
  • Information Behaviour
  • Digital literacy/Training & Awareness
  • Public/private
  • Stakeholders

Just by way of example, the heading for Public/Private is given below in more detail. It’s fascinating the reaction you get when you mention to someone the different shades of privacy, and how it is far more complex than a simple dichotomy of being either public or private, with no subtleties in between those two extremes. As though that can’t be possible, and it must surely be one thing or the other.

PUBLIC / PRIVATE

  • Reasonable expectation of privacy?
  • Criteria listed in Murray v Express Newspapers [2008] EWCA Civ 446 (para 36):
    • The attributes of the claimant;
    • The nature of the activity in which the claimant was engaged;
    • The place at which it was happening;
    • The nature and purpose of the intrusion;
    • The absence of consent and whether it was known or could reasonably be inferred;
    • The effect on the claimant; and
    • The circumstances in which and the purposes for which the information came into the hands of the [person using it]
  • Context (people may want to keep things private from some people but not from others)
  • Is access restricted by technical means (password, privacy settings etc.)?
  • Was it only “public” because someone other than the data subject had already publicly disclosed private facts about them without their consent/knowledge?
  • Degrees of invisibility

Great example of how a group view can differ from that of every group member

I realise that I have an uphill battle to convince people that the group perspective is fertile ground for research as far as privacy is concerned.

Again and again, people seem to be asking: what can that add that you don’t get from thinking about the individual (and, in the case of a group, the individuals within it)?

So it was great to find the following example today. What I need now is a way of translating that across to a privacy context!

Let’s imagine that we have a group in the form of a grading committee, and that the committee is asked to mark the essay written by Joe Bloggs. Three members of the committee each look at Joe’s work, but they all come to different conclusions. One thinks his work is worth a grade A, another thinks it’s worth a grade B minus, and yet another thinks it is worth a grade C.

Together the group decide to award a grade B. But of course, if you stop and think about it, that wasn’t the view of any of the individual members who did the initial grading.
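
A minimal sketch of the arithmetic, assuming the committee simply averages grade points and rounds to the nearest letter (the grade-point scale and the rounding rule are my own illustrative assumptions):

```python
# Minimal sketch: a group verdict that matches no individual member's view.
# The grade-point scale and averaging rule are illustrative assumptions.
GRADE_POINTS = {"A": 4.0, "B+": 3.3, "B": 3.0, "B-": 2.7, "C": 2.0}

def committee_grade(votes):
    """Average the members' grade points, then snap to the nearest letter."""
    mean = sum(GRADE_POINTS[v] for v in votes) / len(votes)
    return min(GRADE_POINTS, key=lambda letter: abs(GRADE_POINTS[letter] - mean))

votes = ["A", "B-", "C"]       # three members, three different conclusions
print(committee_grade(votes))  # -> "B": the group's view, yet nobody's view
```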

I get the feeling that what I need to do is to find examples, stories that people can relate to  in order to get my point across. So my search continues.


Why it’s easy to (but we shouldn’t) dismiss group privacy as a useful prism for studying informational privacy

Why group privacy is by no means easy to study

  • Groups are dynamic entities
  • Groups come in endless numbers of sizes, compositions and natures
  • Groups are fluid
  • Some might argue that, with groups acting as moving targets and no clear or fixed ontology for them, there is little hope that a theory of group privacy will ever develop


Why we mustn’t dismiss the study of group privacy as pointless

  • A full understanding of group privacy will be required to ensure that our ethical and legal thinking can address the challenges of our time.
  • Groups need protection as entities, and this requires a new approach that goes beyond current approaches to data protection
  • Cases of direct harm occurring on a group basis such as the Facebook emotion experiment
  • The problem of group profiling contains recognisable elements of both privacy and data protection problems: people’s fundamental right to autonomy is being affected, but they are also consequently being made vulnerable to discrimination and personal danger
  • Group privacy based on the right to inviolate personality specifically aims to protect against (1) unwarranted third party manipulation of identity and (2) harms from automated decision-making based upon profiling identities assembled by third parties.
  • Decision-making in areas such as risk stratification, credit scoring, search and media filtration, market segmentation, employment, policing and criminal sentencing is now routinely informed by analytics (algorithmically generated groups; see the sketch after this list)
  • Groups can’t file complaints under the ECHR (and legal personalities can do so only in very limited circumstances)
  • Both an individual and group’s right to inviolate personality can be violated when identity is crafted externally, without either’s consent or awareness.
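
To make “algorithmically generated groups” concrete, here is a toy sketch (my own, using invented synthetic data) of how a routine clustering step sorts people into groups they never chose, never consented to, and are typically unaware of:

```python
# Toy sketch of an "algorithmically generated group": k-means clustering on
# invented behavioural data. Nobody in the dataset chose, or is aware of,
# the group they end up in.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic features per person: (monthly spend, site visits per week).
low_value = rng.normal(loc=[20.0, 2.0], scale=3.0, size=(50, 2))
high_value = rng.normal(loc=[80.0, 10.0], scale=3.0, size=(50, 2))
people = np.vstack([low_value, high_value])

groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(people)
# Each person now carries a group label that could drive pricing, credit or
# policing decisions -- without consent, awareness, or any practical way for
# the "members" to find one another and co-ordinate a response.
print(np.bincount(groups))  # e.g. [50 50]: two groups nobody opted into
```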


Examples of group rights

  1. the right of a cultural group that its culture should be respected and perhaps publicly supported
  2. the right of a linguistic group that its language should be usable and provided for in the public domain
  3. the right of a religious group that it should be free to engage in collective expressions of its faith and that its sacred sites and symbols should not be desecrated
  4. the right of a nation or a people as a group to self-determination


What we only get by studying the group dimension

  • In-group knowledge
  • The dynamic of the group protecting the privacy of its members
  • Metadata from a group provides far more value than just looking at the metadata of the individual (the network effect)
  • Whether all of the members of the group have the same understanding and the same commitment as to what should be kept private
  • Non-reducible identity (i.e. more than the sum of the identities of its members)
  • Group rights held by the group qua group, rights attributed to the group itself

Groups, rights, and choice theory

According to the choice theory of rights, to have a right is to have a choice, such that it makes sense to ascribe rights only to beings that are capable of choice (Jones, 2016).

Each approach to group rights supposes that ascribing a right to a group entails conceiving the group as a moral entity in its own right: the group must possess a moral standing that is not reducible to the standing of its members.

Bisaz (2012, p. 11) says that group rights are permanent and distinct from “affirmative action” (timely, limited and transitional preferential measures). He distinguishes between group rights and human rights.

Lerner (2003, pp. 39–41) lists a number of examples of group rights, such as:

  • The right of existence
  • The right of non-discrimination
  • The right to the preservation of the identity of the group
  • The right to special measures needed for the maintenance of the identity of the group
  • The right to decide who is entitled to membership in the group and the conditions of maintaining that membership
  • The right to establish institutions
  • The rights to communicate, federate and cooperate with similar groups
  • The right to impose duties upon the members of the group
  • The right (of some groups, in certain conditions) to the recognition of their legal personality
  • The right of self-determination

Bisaz (2012, p. 14) says that the choice theory emphasises the right-holder’s power of choice and their control over the correlative duty of another person: the power to claim performance, or a remedy for non-performance. Hence rights are seen as “protected choices”.

However, Bisaz believes that the theory has some serious flaws when it comes to duty rights, rights of children or inalienable rights.

According to the choice theory (Bisaz 2012, p. 14), the only requirement that right-holders have to fulfil is that they are theoretically capable of a choice. This requirement may close the door for minor children, as they can be seen as not capable of making a choice, but not for groups, as long as they can organise themselves in a way in which they are able to make a choice collectively.

But as Bisaz points out, from a conceptual point of view it is nonsensical to speak of human rights if a human being has no mental capacity to make a choice. This would be deeply counterintuitive, and a theory of the concept of rights that comes to such conclusions can only be limited in its explanatory force.

I would ask, for example, where someone who has been picked out as part of an algorithmically generated group has a choice. They are unlikely to know that they have been picked out, that they have been discriminated against, or who the other members of the group are, so they cannot get together to co-ordinate a response.

In short, then, the question of whether a group can organise itself in a way in which it is able to make a choice collectively is key.

REFERENCES

BISAZ, C., 2012. The concept of group rights in international law. Brill.

JONES, P., 2016. Group rights. The Stanford Encyclopedia of Philosophy.

LERNER, N., 2003. Group rights and discrimination in international law. Martinus Nijhoff Publishers.