Protecting #privacy: what we are up against

Virtually all web services that we use have a set of terms and conditions. These supposedly set out the basis on which we are entitled to use the service in question, and some websites won’t permit us to proceed any further until we have clicked the “I accept” button. There are plenty of reasons why this method of supposedly giving our “consent” is flawed:

  • The T&Cs are usually written in legalese; given their length and the complex legal language they use, they seem deliberately designed not to help us understand where we stand
  • The companies can change those T&Cs unilaterally, and make it very difficult for us to know when, whether, and how any changes are made. By reserving the right to change the terms at any time, they effectively obtain a blank cheque: we have no idea what the permission we grant will really be used for
  • One study estimated that reading all the website privacy policies encountered in one year would take 201 hours per American internet user, time valued at $3,534 per user annually (McDonald & Cranor, 2008–2009, p. 565)
  • These “click-use” agreements offer no opportunity to negotiate particular clauses within them
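The scale of that cost estimate is easy to sanity-check with a little arithmetic. The only inputs are the two figures quoted above from McDonald & Cranor; the implied hourly rate and the 2,000-hour working year are just illustrative derivations, not figures from the study:

```python
# Back-of-the-envelope check of the McDonald & Cranor estimate.
# Figures from the study: 201 hours per year per US internet user,
# with that time valued at $3,534 per user per year.
hours_per_year = 201
value_per_year = 3534

# The hourly value of reading time implied by those two numbers
implied_hourly_rate = value_per_year / hours_per_year
print(f"Implied hourly rate: ${implied_hourly_rate:.2f}")  # ≈ $17.58

# How much of a (roughly) 2,000-hour working year that reading would consume
share_of_work_year = hours_per_year / 2000
print(f"Share of a 2,000-hour working year: {share_of_work_year:.1%}")  # ≈ 10%
```

In other words, actually reading the policies we nominally consent to would eat roughly a tenth of a full-time working year.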

So, we are left being subjected to the policies of large internet companies who have a huge amount of power over individuals, as well as over businesses (think, for example, how many businesses have suffered a drop in trade when a big search engine’s algorithm is updated without any warning and their website no longer appears towards the top of the search results).

These companies often promote privacy only when it is in their business interests to do so. Their privacy settings are usually designed to ensure that people’s information is shared as widely as possible. Why, for example, must we always opt out of things, rather than being given an active and meaningful choice whereby we opt in only if we really want to? Why are the default settings always the ones that make our information public? The answer, of course, is that they know that such defaults maximise the likelihood (indeed make it a near certainty) that information will be shared. And just in case we do make the effort to change the privacy settings to protect our privacy, they play another little trick. One of the top internet/tech companies, which I won’t name, recently launched its most privacy-invading software yet. Having taken the time and trouble to reset all the privacy settings, I then discovered that a subsequent software upgrade overrode some of the settings I had deliberately changed from their defaults. They would, of course, have shared my information in the meantime, before I found out, and they conveniently omitted to tell me what they had done.

Meanwhile, in the UK we have the Investigatory Powers Bill, which has now been through most of its parliamentary stages and will receive royal assent very soon. This will make mass surveillance lawful, without anything like the accountability that such sweeping powers demand if they are to be proportionate. Indeed, the government doesn’t want information providers to tell users that their data has been accessed, to the point where it will be illegal for a company holding your data to tell you that it has passed your data on to the government. Governments won’t even say how many times they have requested data.

We are left with the idea of the Warrant Canary – see
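The idea behind a warrant canary is that a provider regularly republishes a dated statement along the lines of “as of <date>, we have received no secret government data requests”. Because the provider may be legally forbidden from announcing a request, its *silence* is the signal: if the statement stops being refreshed, users can infer that something may have happened. A minimal sketch of the client-side check, in Python (the canary wording, the ISO date format, and the 30-day freshness window are all assumptions for illustration, not any provider’s actual scheme):

```python
from datetime import date, timedelta

def canary_is_healthy(canary_text: str, today: date, max_age_days: int = 30) -> bool:
    """Return True if the canary carries a recent enough 'as of' date.

    A stale or missing date is treated as a dead canary, because the
    provider can let the statement lapse without saying anything explicit.
    """
    for line in canary_text.splitlines():
        if line.lower().startswith("as of "):
            stamp = line[6:16]  # expect an ISO date, e.g. "As of 2016-11-01, ..."
            try:
                published = date.fromisoformat(stamp)
            except ValueError:
                return False
            return today - published <= timedelta(days=max_age_days)
    return False  # no dated statement found at all: treat as dead

# Hypothetical canary text, purely for illustration
canary = "As of 2016-11-01, we have received no secret government data requests."
print(canary_is_healthy(canary, today=date(2016, 11, 20)))  # True: 19 days old
print(canary_is_healthy(canary, today=date(2017, 1, 15)))   # False: gone stale
```

The design point is that the check never needs the provider to *say* anything forbidden; it only measures how long the provider has stayed quiet.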