The practice of listening involves deciding what to focus on: tuning into a stream of sounds and distinguishing and separating specific elements. Listening can redraw the boundaries of space, challenging categories of private and public. As Tom Rice (2015) argues, listening gives the ability to tune in and out of spaces in a selective way; it “is understood to involve a deliberate channeling of attention toward a sound. . . . The term encompasses a wide variety of modes, qualities, or types of auditory attention” (p. 99). So instead of the commonly used terms ‘seeing’, ‘invisibility’ and ‘black-box’, I propose a different way of thinking about the multi-layered mediated territories we live in and use the term processed listening.
In computation, to process means to deploy a procedure, or several, on data according to specific protocols that can include (re)organization, removal, deletion, filtering, and adaptation. This term has inspired processed listening, which attends to the context of media knowledge production and includes monitoring, measuring, detecting, categorizing, and filtering. This mode of listening describes the way media workers selectively tune into different sources through the media apparatus, by using several tools (which can be automatic or manual), in different temporalities, to produce different kinds of knowledge for various purposes (mostly economic and political).
Measuring people’s behavior with different tools did not start with the internet. As I show in chapter 3, this was a project conducted by Bell Telephone, which measured three main things: New York City, people’s behavior, and their telephone operators’ bodies. In the picture above, Rogers Galt from Bell Telephone, accompanied by a representative from Johns-Manville (a company that manufactures insulation, roofing materials, and engineered products), used Bell’s tools to measure city noise in multiple locations. Here Bell’s engineers were the new experts who could decide what would be considered sound and what would be considered noise. Interested groups wanted to map the city and people’s behavior in order to restructure it for their economic benefit.
After circulating a questionnaire to New York City citizens, the Noise Abatement Commission mapped traffic noise across the city (as the image above shows). But of course cars were never banned or regulated; instead, specific groups of people who were considered a hazard to the economic endeavors of Bell and other interest groups were categorized as noise. This was a way to standardize and regulate street markets, and similar strategies were later deployed on the internet.
With the introduction of the web, listening capacities expanded drastically thanks to new automated tools and systems such as cookies, pixels, and real-time bidding. This meant that the time of the listening event stretched into a continuous process, creating a feedback loop of knowledge production that co-created different objects, subjects, and the architectures of these spaces. It also meant that listening was conducted across several spaces, following the sound of the subject through the different sites visited.
One of the most influential architectural features today is Facebook’s social plugins, an improved version of digital advertising’s cookies. Along with pixels, these listen to users’ behavior outside the territory, wherever a website, game, application, or other publisher integrates such tools. Social plugins listen to Facebook members and non-members alike, whether or not they are logged in, to create a database of behaviors.
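The cross-site mechanism described above can be sketched schematically. The following is a hypothetical, highly simplified illustration (not Facebook’s or any vendor’s actual implementation): every publisher page that embeds a third-party pixel or plugin triggers a request to the listening party, and because that request carries the listener’s own cookie identifier together with the referring page, visits to unrelated sites accumulate under a single profile, whether or not the visitor holds an account with the listener. The class and method names here are invented for illustration.

```python
# Hypothetical sketch of cross-site "listening" via an embedded pixel/plugin.
# All names (ThirdPartyListener, handle_pixel_request) are illustrative only.

import uuid
from collections import defaultdict

class ThirdPartyListener:
    """The party whose pixel/plugin is embedded on many publishers' pages."""

    def __init__(self):
        # cookie identifier -> list of pages where the visitor was heard
        self.profiles = defaultdict(list)

    def handle_pixel_request(self, cookie, referring_page):
        """Called each time any embedding page loads the pixel/plugin."""
        if cookie is None:
            # First contact: assign an identifier; the browser stores it
            # and sends it back on every later request, from any site.
            cookie = str(uuid.uuid4())
        self.profiles[cookie].append(referring_page)  # log this visit
        return cookie

# One browser visiting three unrelated sites that all embed the same pixel:
listener = ThirdPartyListener()
cookie = None
for page in ["news.example/story", "shop.example/shoes", "forum.example/thread"]:
    cookie = listener.handle_pixel_request(cookie, page)

# A single profile now spans all three sites, with no account or login needed.
```

The point of the sketch is structural: the visitor interacts with three different publishers, but the listening happens in one place, outside any of them.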
But it is important to understand that not everyone has the same listening capacities. The advertising industry, browsers, and Facebook want to control and limit people’s listening abilities. This means that people can access and experience the web and Facebook only in a restricted, narrow way, communicating with their computers and with other users without knowing what happens in the ‘back end’, in other layers.