When I started my research and focused on spam, I intended to interview people at two main anti-spam organizations, Spamhaus and the London Action Plan. I thought I would be able to understand why and how they established their organizations and, importantly, how and why they categorize specific behaviors as spam. Despite my enthusiasm for spam, they did not want to participate or be interviewed. Epic fail, right? Wrong! As an investigative journalist I knew this meant one thing: I was on the right track. Instead, I decided to dedicate (some of) the chapter to one of my passions, analyzing legal documents through Critical Legal Studies (CLS), an approach that deconstructs legal texts (supposedly objective and truthful discourses) to reveal how power is constructed in them.
The second distortion story moves to the early 2000s and shows how the digital advertising industry constructed the media category of spam. By lobbying EU legislators and the Internet Engineering Task Force (IETF), the digital advertising industry and tech companies standardized the category of spam around any ‘problematic’ behavior that threatened their business. This required configuring spaces and people on the internet. So in this chapter I begin by showing how the digital advertising industry created a false analogy between private and public spaces online: spam was categorized as unsolicited communication in private space, while web cookies were categorized as wanted communication in public space. One of the key ways to create such an online trade-friendly territory was by deciding what counted as a ‘burden’ on this infrastructure, producing a particular rhythmedia that ordered legitimate communication while delegitimizing other kinds. The argument is that spam operates as a regulatory tool applied to any type of behavior that can interfere with the functioning of e-commerce.
One of the main things I show in this chapter is that, as researchers, we need to stop repeating what computer scientists and lawmakers say: that cookies are ‘just text files’ sent to your computer. By analyzing the IETF cookie standard, I show that cookies are a form of communication that turns people’s behavior into data – the message – communicated between non-humans (people’s browsers, publishers and multiple types of adtech) and operated by multiple actors. Electronic communication is protected, at least in EU law, by the e-Privacy Directive, now turned Regulation. This is why definitions are so important (we saw this in the recent article by The Markup about how specific political emails were categorized as unimportant and never reached people’s inboxes), and why lobbying to change them is in fact a battle over the way the internet functions and, importantly, how we understand it!
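The point that cookies are communication rather than inert text files can be illustrated with a minimal sketch (the header values and domain name here are invented for illustration). A Set-Cookie header, as standardized by the IETF in RFC 6265, is a structured message whose attributes control where, how long, and to whom the identifier travels back:

```python
from http.cookies import SimpleCookie

# A hypothetical Set-Cookie header of the kind the IETF standard (RFC 6265)
# describes: not an inert text file, but a structured message whose
# attributes govern the scope and lifetime of the communication.
header = 'uid=abc123; Domain=.example-adtech.com; Path=/; Max-Age=31536000'

cookie = SimpleCookie()
cookie.load(header)

morsel = cookie['uid']
print(morsel.value)       # the identifier the server assigned to this browser
print(morsel['domain'])   # every site under this domain receives it back
print(morsel['max-age'])  # persists for a year: a channel, not a one-off file
```

The attributes are the telling part: `Domain` and `Max-Age` turn a single response header into an ongoing exchange between the browser and whoever operates that domain.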
In the second part of the chapter, I show how people were (re)produced as data subjects by standardizing the way their behavior was measured. Advertisers are encouraged to listen to server logs, which helps to identify abnormal behaviors in four main ways: identifying users performing multiple sequential activities; users with the highest levels of activity; users whose interactions have consistent attributes; and ‘other suspicious activity’. These four criteria also imply guidelines for the behavioral traits of ‘legitimate’ digital bodies. According to such standards, human behavior is characterized as inconsistent, low-frequency activity and sporadic, singular actions. The issue of filtration points to the difficulty of measuring accurately and the need to control people’s behavior to avoid mistakes in calculations. This is precisely why it was so important for the advertising industry to make spam illegal through legislation: non-human behaviors not controlled by the advertising industry, such as spam, can damage the industry’s ability to make sense of online behavior measurement and create inaccurate profiles and audiences.
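The logic of those four criteria can be sketched in a toy filter over server-log entries. Everything here is invented (the log, the thresholds, the field names); the point is only to show how ‘human’ gets operationalized as irregular, low-frequency activity while regular, high-frequency activity gets flagged as non-human:

```python
from collections import Counter

# Hypothetical server-log entries: (user_id, seconds since that user's
# previous request). The values are invented for illustration.
log = [
    ('bot-1', 1), ('bot-1', 1), ('bot-1', 1), ('bot-1', 1), ('bot-1', 1),
    ('alice', 40), ('alice', 95), ('bob', 300),
]

requests_per_user = Counter(uid for uid, _ in log)
gaps_per_user = {}
for uid, gap in log:
    gaps_per_user.setdefault(uid, []).append(gap)

def is_suspect(uid, max_requests=4, min_gap_variation=5):
    # Toy versions of three of the four criteria (thresholds are invented);
    # the fourth, 'other suspicious activity', is left open-ended in the
    # standards themselves.
    gaps = gaps_per_user[uid]
    too_many = requests_per_user[uid] > max_requests            # highest activity
    too_regular = (len(gaps) > 1 and
                   max(gaps) - min(gaps) < min_gap_variation)   # consistent attributes
    sequential = too_many and too_regular                       # sequential activities
    return too_many or too_regular or sequential

print([uid for uid in requests_per_user if is_suspect(uid)])  # → ['bot-1']
```

Notice what the thresholds encode: the clockwork regularity of `bot-1` is filtered out, while the sporadic, inconsistent gaps of `alice` and `bob` pass as ‘legitimate’ digital bodies.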
The rhythm of communication in this online market changes and accelerates as non-human actors are introduced into its multiple channels. The name ‘real-time bidding’ is interesting because it in fact creates different temporalities: accelerated rhythms for trade, so fast that humans cannot comprehend or even notice them. In this way, the content and ads that people engage with change according to their behavior. Real-time bidding, which relies on ‘real-time processing’, disguises the fast-rhythm processes performed at the ‘back end’ by non-human actors in order to arrange the ‘front-end’ human experience. Real-time bidding echoes the slogan of the advertising network DoubleClick, a technology that “enables you to deliver the right message to the right person at the right time”. The ‘right’ people, spaces and timing are produced by a particular rhythmedia conducted by the digital advertising industry, which filters and reorders whoever does not fit its business model.
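The back-end rhythm being described can be made concrete with a purely illustrative sketch of an ad auction (bidder names and prices are invented; real exchanges use richer protocols such as OpenRTB, but the second-price logic is a common textbook form). The auction for a single ad slot resolves in a fraction of the time it takes a human to perceive the page at all:

```python
import time

# Invented bidders and their CPM bids for one ad slot; a real exchange runs
# an auction like this for every impression, before the page finishes loading.
bids = {'adtech-A': 2.10, 'adtech-B': 3.40, 'adtech-C': 1.75}

start = time.perf_counter()
# Second-price auction: the highest bidder wins but pays the runner-up's bid.
ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
winner = ranked[0][0]
price = ranked[1][1]
elapsed_ms = (time.perf_counter() - start) * 1000

print(winner, price)  # adtech-B wins, paying adtech-A's price of 2.1
print(f'auction resolved in {elapsed_ms:.4f} ms')
```

The elapsed time prints as a tiny fraction of a millisecond: a temporality made for machines, which is precisely why the ‘front-end’ human experience appears seamless.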
The final section of this distortion story examines how the European Union developed educational material to teach users to maintain their safety. It is a great example of how digital skills were aimed at keeping citizens passive, with limited understanding of how the internet works. The European Union’s Safer Internet Programs ran from 1999 until 2013, and taught people to report harmful content and to avoid actions that could harm the protection of reputation and intellectual property. Teaching citizens how the internet works, how to encrypt their communication or how to use more privacy-friendly services was never part of these programs. Not to mention teaching citizens about laws they could use to object, protest or negotiate things on the internet.
Control mechanisms came in the shape of browsers’ default settings, which prescribed the preferred way to behave, yet offered ‘empowerment’ and freedom of choice through laborious settings tools within browsers, which sometimes enabled users to reject third-party cookies. Other mechanisms of control came with the introduction of the ‘Agree/OK/I Consent’ notifications that websites had to present when users visited them. In this way, EU citizens were trained to click such buttons without knowing what cookies were, how they worked, who the entities operating them were, and, importantly, the consequences of this communication. These control mechanisms, then, and especially the notion of ‘consent’, trained people’s bodies to understand that they had power and choice by clicking. Importantly, people carried responsibility for the consequences of every action. The term ‘control’ here refers to controlling users’ behavior, not to giving users control.
This section led me to my current research project on citizens’ data literacies (check out our latest report, “Understanding citizens’ data literacies: thinking, doing & participating with our data“). It is important to examine these first attempts by government bodies to keep people’s understanding of the internet, and of what they can do in it, as narrow and controlled as possible. In my current project we developed the notion of ‘data participation’: the proactive practices that people can perform with their data and their communities. Data participation examines the collective and interconnected nature of data society. Through data participation, citizens seek opportunities to exercise their rights as well as to contribute to and shape their collective data experiences. Examples of data participation include actively contributing to online forums, using open data for the benefit of one’s community, helping others set up a secure password, engaging in privacy or disinformation debates, and taking steps to protect one’s personal information.