Understanding the Power Behind Spam, Noise, and Other Deviant Media
Media Distortions is about the power behind the production of deviant media categories. It shows the politics behind categories we take for granted, such as spam and noise, and what they mean for our broader understanding of, and engagement with, media. The book synthesizes media theory, sound studies, science and technology studies (STS), feminist technoscience, and software studies into a new composition to explore media power. Media Distortions argues that sound is a particularly useful conceptual framework because of its ability to cross boundaries and strategically move between multiple spaces—an ability essential for multi-layered mediated spaces.
Drawing on repositories of legal, technical and archival sources, the book amplifies three stories about the construction and negotiation of the ‘deviant’ in media. The book starts in the early 20th century with Bell Telephone’s production of noise, tuning into the training of its telephone operators and its involvement with the Noise Abatement Commission in New York City. The next story jumps several decades to the early 2000s, focusing on web metric standardization in the European Union, and shows how the digital advertising industry constructed web cookies as legitimate communication while making spam illegal. The final story focuses on the recent decade and the way Facebook filters out antisocial behaviors to engineer a sociality that produces more value. These stories show how deviant categories redraw the boundaries between human and non-human, public and private spaces, and, importantly, social and antisocial.
6 Conclusion: Transducing the Deviant
If you have reached this part of the book, you probably know by now that spam is not just that junk folder in your email account. As I showed throughout the chapters, spam is much more than Nigerian princes or Monty Python’s (excellent) sketch. More than that, investigating ‘deviant’ media categories can tell us a lot about media. As Michel Foucault argues, if we want to “find out what our society means by sanity, perhaps we should investigate what is happening in the field of insanity. And what we mean by legality in the field of illegality” (Foucault, 1982: 780). These Media Distortions, these deviant behaviors and these irregularities, were exactly what I was questioning, challenging and re-telling. To make sense of the common-sense.
As I am writing this chapter, a new wave of research interest is emerging. It examines disconnections, antisocial behaviors, and researchers’ lack of access to media companies’ databases. Scholars in media studies are starting to ask how media, and the types of access they allow, enable or restrict our understanding of them. What types of antisocial behaviors are media companies trying to filter out, and why? This is a big step forward, because it acknowledges our limitations as researchers and amplifies the importance of types of behavior that might not get registered or receive any kind of cue. Deviance matters. It encourages us to think and examine beyond what is available to us, to listen deeper through the mediated layers of our lives. Interestingly, though, questions around spam are still not raised. Spam, it seems, is still not interesting or relevant to scholars.
Addressing these concerns, Robert Gehl’s latest book (2018) digs deeper into the Dark Web to explore the power of constructing legitimacy. Similar to this book, Gehl shows many similarities between the Dark Web and what he calls the ‘Clear Web’, the ‘normal’ web that most people use. As he argues, “the connotation of ‘darkness’ in Dark Web has more to do with encryption, anonymization, and leaving standard communication channels” (Gehl, 2018: 6). As Gehl shows, the difference between the ‘Dark’ and ‘Clear’ web is not about any black magic or blasphemy that happens in one as opposed to the other. In fact, these media have much more in common than people think, and the same activities happen in both. The difference is mainly about the ways of accessing and using these different mediated territories, and the anonymity of both the people and the websites. Hence, what stands behind the ‘darkness’ is alternative ways of communicating through similar media infrastructures.
Media Distortions shows there are always other ways to develop, engage with and understand media. What I show throughout this book is the power behind forcing you to listen to the same record over and over again, making you believe there is one way to experience media. The people and organizations that continuously create the distinction between categories, between what is deviant and what is not—they are my focus. It is about the way they negotiate, lobby for, come into conflict over, and establish these deviant categories, and about the consequences of doing so. Deviant media categories are about the struggles to determine what is human, normal, and social. They are about what makes us as individuals and as a society; they are about the default settings of our lives.
Because deviant categories constantly change, it might well be that a few years from now spam will be something else, as the new stream of research around ‘fake news’ and mis- and dis-information suggests, or maybe other problematic behaviors in media will emerge. And they will emerge. So what is important to take from this? Why is it important to examine spam or any other type of deviant media? The answer, as I showed throughout this book, is power. Who has the power to decide and enact what will be categorized as deviant? What is the rationale behind it? How are these boundaries negotiated? How do these categories influence people’s behaviors, feelings, preferences and tastes? How do these categories affect what we consider as human? How do these categories produce new territories? And how do these deviant categories shape how sociality is understood and performed?
These are important questions because they shape the way we understand and engage with media, and ultimately ourselves and our surroundings. It means that if I think that performing many actions at the same time is spam, and hence prohibited, then I might choose not to act like that, even though this type of action can be quite useful for protest and self-determination. For example, in 1998 the Electronic Disturbance Theater (EDT), an internet performance art activist group, used a website called FloodNet (developed by Brett Stalbaum and Carmin Karasic) for collective digital protest and artistic expression (Karasic, n.d.). The EDT’s aim was to use internet technology as a collective activist tool for non-violent resistance. They wanted to act in solidarity with the Zapatista rebels in Mexico by staging virtual sit-ins online. Inspired by street theater and political rallies, FloodNet would reload a URL repeatedly within a short period, and in doing so would slow down the website and its network server. Such disruptive activities would later be called spamming, hacking and especially Distributed Denial of Service (DDoS) ‘attacks’, because they disrupt the fast-paced rhythm of ‘real-time’ digital experience. Categories change, and so do their meanings, but our need to question, protest and negotiate how they are produced is fundamental to our political futures.
For example, in recent research conducted at Princeton University in the United States, Nathan Matias and colleagues (2018) examined how Facebook categorizes ads. As they argue, “Facebook detects political ads through machine learning algorithms and human reviewers”. As they show, there are many conflicts between the two main categories—political and issue advertisements—especially around election time. One of the main issues is that, because the biggest tech companies are based in America, their policies are shaped by American legal definitions and regulated by the Federal Election Commission (FEC). So while there is a clear definition of political ads, ‘issue ads’ are harder to define, and even ads about disabled veterans and national parks were prohibited. As Matias and colleagues argue:
If corporate political filters make enough mistakes, they could substantially impact American civic life. Public holidays, community centers, and news conversations knit together the civic fabric of democratic life, enabling Americans to understand each other and work together despite our differences. Each time a platform wrongly restricts a community announcement, this civic fabric weakens, with fewer people honoring American veterans, fewer relationships among neighbors, and less common understanding at a time of growing polarization. (Matias et al., 2018)
As Matias et al. argue, by reviewing so many advertisements, platforms become the policy enforcers of society’s norms and values, and when their main logic is profit (according to Statista, Facebook’s 2019 ad revenue was more than 69 billion dollars), this is a problem. As I showed in Chapter 4, not having a clear definition of spam was a powerful tool for the advertising industry. It enabled them to categorize anything that interfered with their business model as spam. And this is precisely why researching these topics is important. It is important because decisions of categorization, and their operationalization, affect our everyday lives. And because these platforms do not reveal the decision-making processes of the moderators of ads and content, we simply cannot know how much filtering our mediated experience goes through.
This book shows the ways media practitioners construct specific behaviors as deviant in different periods and territories, and what that means. Unlike many scholars in the history of science and in media and communication, I use sound concepts to theorize and conceptualize power relations in media, rather than vision, (in)visibility, and seeing. Two main theoretical and analytical tools guided Media Distortions: processed listening and rhythmedia. These sound concepts, I argue, are more suitable when examining media knowledge production and power relations because of their ability to cross boundaries (of bodies and spaces).
As ‘deviant’ media receive different categories and configurations in different periods and media, I outline broad strategies that show how power has been enacted. These broader strategies show longer lineages of ‘new’ media phenomena, while emphasizing the local adaptations such strategies take. This book’s main argument is that media practitioners in different periods have been using processed listening and rhythmedia as part of seven sonic epistemological strategies to (re)produce subjects and territories. The first three strategies are associated with processed listening: new experts, licensing and measurement; the next four strategies are related to rhythmedia: training of the body, restructuring territories, filtering, and de-politicizing. Through the three distortion stories, I illustrate how these strategies have been deployed in different ways and to different degrees, to show how power is put into action, as Foucault would phrase it (1982: 788). I demonstrate how such power came into action by restructuring territories and training people to become specific subjects.
Although Foucault never talked about media or lived to experience how networked territories such as the internet, the web, and social media platforms developed, his theory of governmentality and the axis of power/knowledge have been influential for this book. As the chapters in this book have chronologically progressed, the power of states was gradually delegated to commercial actors, and especially media companies, to produce knowledge about populations. This is not to say that states have stopped producing knowledge or lack power, but rather that media companies can be more powerful and have greater capacities. In a way, media companies and states are not separate entities, because more and more collaborations are being formed between governments and private companies as more state responsibilities get privatized.
One just needs to listen to the way companies like Google, Facebook, and Amazon have become gateways to almost every activity we do: searching for information, watching videos, listening to music, paying for things, connecting with employers, finding romantic or sex partners, keeping in contact or fighting with family and politicians, shopping, and maintaining our mental and physical health. As these companies become our primary mediators of life, it is important to remember that Facebook and Google’s main business model is advertising, while Amazon recently announced that it will also join the party: “Thanks to its wealth of data and analytics on consumer shopping habits, it can put ads in front of people when they are more likely to be hunting for specific products and to welcome them as suggestions rather than see them as intrusions” (Creswell, 2018). What these companies do not advertise is how they turn people into the product; that is kept silent.
Therefore, the way these platforms shape the kinds of things we can interact with, and consequently our subjectivities, our relations and our understanding of the world, should be challenged, questioned and examined. It is not only about what is spam, then; it is about what is filtered, (re)shaped, managed and (re)ordered through media. It is about the power these media companies have over every aspect of our lives, and even our deaths.1
Unlike several streams of actor-network theory, which argue that all elements have equal weight in a network, and following Raymond Williams, I believe that it is important to amplify the intentions behind media companies’ strategies. I show which entities—people or otherwise—have been conducting rhythmedia, and that there are organizations responsible for designing, managing, shaping, and using different strategies to create specific subjects. I do not believe any research is neutral or free from politics, and I can definitely say that this book takes a stand. As a researcher, but more importantly as a feminist and digital rights advocate, I do not intend to reproduce a soundtrack that suits big media companies. ‘Tech won’t save us – Unionize’ is not just a cool t-shirt slogan. It is the argument that guides this book.
Spam is political because many behaviors performed by women (such as breastfeeding), LGBTQ+ people (using names other than those given at birth) and people of color (Harlem house parties) are often categorized as inauthentic, noisy, and spammy by media companies. Categorizing such actions through processed listening, and shaping different territory architectures in a particular rhythmedia, has been designed to produce subjects that are congruent with a standard that is not their own. Importantly, it aims to discourage and de-politicize their actions, gatherings and expressions. The boundaries of what can be done, said, thought and embodied have been a constant battleground for redefining the meanings of our lives. Nevertheless, we must persist to resist.
This book shows how power has been enacted by actions deployed on actions in media, whether through modifying (physical or digital) territories to influence people’s behaviors, or through acting on people’s behaviors, or their friends’ behaviors, in the present with the ambition of influencing their future actions. In each of these stories, I show how power relations have been enacted in a process that was co-produced by humans and non-humans and conducted by a rhythmedia that mostly benefited particular actors. Conducting rhythmedia shapes people’s behaviors through the repetitive training of individual bodies and of populations as a whole. This rhythm is far from neutral, and understanding this means that we can also change the tune into a melody that makes sense to us.
Using sound concepts has been productive, especially in relation to multi-layered communication channel territories such as the web and Facebook. As I show in Chapters 4 and 5, in only two decades, the number of communication channels that have been developed and are operating in a new territory has increased immensely. There are multiple spaces operating simultaneously, conducted in different rhythms, and importantly—unregulated at the ‘back-end’. With Amazon’s Alexa, for example, the company’s listening capacities expand even further and add more layers where it can conduct processed listening to our behaviors and voices inside our homes, even if we are not using the phone or any other media device.
Both processed listening and rhythmedia constantly feed each other with knowledge that (re)produces subjects and the territories in which they live. In this way, they are never finished subjects or territories. This is precisely why spam was perceived as noise in the past and why cookies were not considered spam in the 2000s: in each setting the conditions changed, along with the politics that came into play.
Processed listening involves monitoring, measuring, categorizing, recording and archiving to produce a dynamic knowledge database. Each of these actions already produces, shapes, includes and excludes certain types of subject. Having more listening capacities gives you more power to penetrate more mediated layers and know more about people’s behaviors in different times and spaces. It creates a dynamic database/archive because there is a continuous process of listening, which adds more knowledge to create profiles and audiences. Remember Ghost in the Shell’s Puppet Master? As it argues in the film: “Man is an individual only because of his intangible memory. But memory cannot be defined, yet it defines mankind”. The creation of this database/archive produces a memory that media companies have of you, used to produce the options available to define you. But you have no access to this memory, or knowledge of it.
Processed listening involves strategies of new experts, licensing and measurement to create a database that can then be used for rhythmedia. In Chapter 3, I show how in 1929, Bell listened to people in multiple spaces across New York City, using the tools it developed—the audiometer and the noise meter—and its measuring unit, the decibel. Licensed by the Noise Abatement Commission (NAC), Bell became the new experts. The company’s media practitioners were able to measure, categorize and decide what the thresholds were for the normal, the healthy and the human, by defining anything that interfered with its business as noisy. Bell was joined by other interest groups from the NAC, but all of them relied on Bell’s metrics to categorize behaviors and spaces that interfered with their business or values as noisy.
The dynamic database Bell produced enabled it to reconfigure specific groups of people, behaviors and spaces so that the city of New York would be produced as a territory that suited Bell and the NAC’s goals. These goals included pushing the telephone apparatus and the services it provided. Importantly, the NAC project that was promoted across the city and media outlets standardized the way people thought about and understood sociality according to Bell’s measuring unit. This recurred in Chapter 5 with Facebook and its standardized unit, the Like. The ‘Like’ has become a way to describe popularity, desirability, attention, affection and a way to show you are a good ‘friend’. The production of knowledge in these cases, then, was not only the production of measuring tools, units and the drawing of noise maps; it also reproduced people as subjects who experienced, understood and performed their lives according to Bell and Facebook’s standards.
A second strategy was the surveys that New York City newspapers circulated to educate people into understanding their relations with other people and objects according to Bell’s rationale. These surveys also enabled the NAC to give controlled listening capacities to the city’s citizens so they could be trained and ‘empowered’ to identify noisy behaviors. People in New York did not have Bell’s measuring devices, so they could not measure and provide exact units. This did not matter so much; what mattered was training them to care about noise and to define and perform their relations according to the decibel. This shows how controlled listening capacities were also given to ‘normal people’ in a disciplinary mode that trained their bodies to become disciplined subjects. It also encouraged people to educate their peers by policing the noisy people (who were mostly foreign or black) or informing the authorities about them—deviant behaviors should be controlled and managed all the time. Ultimately, both the NAC and Facebook have used only the survey findings that suit their rationales, while ignoring others.
When it came to Bell’s operators, the company expanded its listening capacities, penetrating more spaces and trying to measure and record as much data as it could into a database. As Chapter 3 showed, Bell listened to its operators inside and outside their work hours, and inside and outside their workspaces. Bell stretched its listening capacities to collect as much information as possible about the operators’ lives, activities, preferences, bodies, hygiene habits and desires. With the Design for Living program, the boundaries of operators’ bodies, time and minds were re-drawn by Bell, and molded like objects. By organizing group meetings to talk about topics such as etiquette, money management, travel and hobbies, Bell wanted to create a specific default design for the operators’ lives. The company did this to gain more knowledge about their behaviors, desires and thoughts so they could be trained as more efficient and obedient communication channels and filters. The operators were an important element in the system which made sure people would experience ‘real-time’ communication with no interferences. But while they were supposed to decrease interferences, Bell made sure to interfere as much as possible in their lives.
The two events in Chapter 3, I argue, provided inspiration for Claude Shannon’s information theory and cybernetics’ conceptualization of noise and, importantly, automation. In both of these events, Bell’s engineers were the new experts who could operate the listening devices, measure people and spaces, and had the authority to categorize noisy behaviors or spaces. Operators were trained to detect malfunctions, understand what customers were saying, soothe their anger, filter noise from the signal, and predict future behaviors while applying their memory. They were part of the communication channel and its filter. Importantly, as they were able to fix the apparatus, like engineers, another key characteristic the operators embodied was feedback: the ability to adjust future conduct according to past knowledge. These functions were later partly delegated to automatic communication channels operated by technologies such as codes, algorithms, and protocols.
The more knowledge media practitioners produced, the more they could turn it into various types of product and service. These procedures were later delegated, in part, to automatic machines, which accelerated the listening process and thus the ability to produce subjects and territories. Automation also meant that it was easier to conceal the people, values and decisions that are still involved in these processes. What these stories show is the way that listening to people in multiple spaces can produce more knowledge about them, which in turn can feed various business models. Hence, the development of more layers of communication channels, described in Chapters 3 and 4, has been inspired by this intrusion into people’s lives, inside and outside work, as well as into their bodies and minds, and has become increasingly digital. The more layers were added, the more aspects of life were listened to and then monetized.
The early establishment of regimes of noise is then compared to the advanced electronic networks of the 21st century, where one of the main media territories that continues this project of automation is the web, as elaborated in Chapter 4. Around the 2000s, the number of media practitioners that deployed processed listening increased, and power relations were decentralized to involve more actors. Here, we tune into the involvement of the advertising industry and its various types of actors, such as advertising associations (IAB, EASA, FEDMA), advertising companies, advertising networks, advertising exchanges, Supply-Side Platforms (SSPs) and Demand-Side Platforms (DSPs).
These media practitioners were licensed by the European Commission’s soft law approach to be the new experts that could listen to people across multiple spaces on the internet and the web. These multiple practitioners conduct an automated online market that facilitates multi-layered communication channels. This online market territory was created following the transition of the web’s business models from subscription to free access to services. With this shift, the digital advertising industry became the main sponsor of the web and therefore a key player in this territory, gaining more power through its ability to listen and produce knowledge (profiles and audiences). Other actors, such as web browsers and publishers, were also licensed to listen, as they provided the territories and measuring tools to conduct most of these practices. Today these licenses are given through voluntary ‘self-regulation’ mechanisms to technology companies who, along with academics and governments, create multiple ‘ethics’ guidelines, principles and statements for artificial intelligence as a way to avoid actual regulation. (You can check the AlgorithmWatch AI Ethics Guidelines Global Inventory to understand the scope: https://algorithmwatch.org/en/project/ai-ethics-guidelines-global-inventory/).
Chapter 4 shows how penetrating people’s private spaces—their bodies—by tracking, monitoring, and monetizing their every move has been normalized. The advertising industry’s practices enable it to dis-embody people by treating them like objects, and in doing so de-humanize them. Both digital advertisers and others who work in the technology industry do not understand, or do not want to understand, the harms that their processed listening practices create, because they treat people as a sort of abstraction, just data objects rather than human beings. But as Chris Gilliard argues:
Privacy for marginalized populations has never been, and will never be an abstract. Being surveilled, whether by private actors, or the state, is often the gateway to very tangible harms–violence in the form of police brutality, incarceration, or deportation. And there can be more subliminal, insidious impacts, too… The norm-shifting involved around privacy works to benefit tech companies who profit immensely from labelling extraction as “sharing” and “community”. (Gilliard, 2019)
It is exactly the normalization of ‘extraction’ through media that this book aims to reveal. To know people on the web, the digital advertising industry standardized web metrics, including listening tools such as first- and third-party cookies, pixels and log files, as well as measurement units such as clicks, unique visitors and page impressions. Various practitioners from the advertising industry listened to individual bodies that were associated with their IP addresses through cookies and pixels. They also listened to them as populations in multiple spaces across the web to collect information about their preferences, behaviors and habits.
People’s behavioral traits were divided into groups according to gender, age, location, interests, marital status, health status and other characteristics. This knowledge was used to match them to particular profiles according to audience segmentation. These classifications of populations were then fed back to them through the shaping of architectures and objects according to pre-defined profiles. Although these data are never an accurate depiction of people’s personalities or traits, and in fact because of this, the power media companies have in shaping the options available to people in mediated spaces is dangerous. As Foucault argues in relation to governmentality, such strategies will “act either directly through large-scale campaigns, or indirectly through techniques that will make possible, without the full awareness of people … the directing of the flow of population into certain regions and activities” (Foucault, 1991: 100). In this way, the digital advertising industry produced ways of living.
A recent example of this is the Cambridge Analytica case. As part of Donald Trump’s election campaign in 2016, Project Alamo was particularly special. As Jamie Bartlett shows in his two-part documentary for BBC2, The Secrets of Silicon Valley, Trump’s campaign team used Facebook to target specific groups of people according to their traits and then tailor specific ads for them. “It wasn’t uncommon to have 35 to 45 thousands of these types of ads everyday”, said Theresa Hong, the Digital Content Director of Donald Trump’s Project Alamo. Cambridge Analytica targeted people according to ‘universes’, audience segments that were informed by things like: when was the last time people voted, who did they vote for, what type of car do they drive, and, importantly, what kinds of things do they look at when on the internet. They also examined emotional and personality traits, such as whether people are introverted, fearful or positive.
Cambridge Analytica developed its own archive, combining various databases from Facebook and others to create rich profiles and then tailor election ads aimed at influencing the most vulnerable audiences. The company was also able to constantly monitor the effectiveness of its messaging, giving it continuous feedback on engagement. This, in turn, enabled it to constantly update and improve different messages for different people’s profiles. By knowing people’s behaviors, the company also knew when was the ‘right’ time to show them these messages for maximum influence. It is difficult, and probably impossible, to know the impact of these messages on the people who saw them, but the intention, and the shaping of territory in specific ways, is something the company continuously pursued. And it was not the only one.
Listening to people as a population, as I showed in Chapter 4, also helped the advertising industry to statistically map behavior online and then draw the boundary between behavior categorized as human and behavior categorized as robotic. In this way, they decided how, and which, bodies count. The more knowledge they had about people’s behavior, the more they were able to categorize behaviors that did not suit their business model as robotic or spammy. In doing so, they were able to redraw the boundaries of what it meant to be human and ‘healthy’—a ‘self’ in the EU online territory, as computer scientists Forrest and Beauchemin (2007) would call it.
The listening ‘event’ never finishes because Facebook needs to keep selling people as products in its online market. This is why, as I show in Chapter 5, listening is conducted even when people log off, and even when they have not subscribed to the service at all. What people do outside of Facebook is valuable data, and the company considers every person a potential subscriber, soon to join. It is important to emphasize, though, that measurement is never quite accurate. On October 17, 2018, it was reported that Facebook hid inflated ad metrics from 2015 until 2016 (Rosenblatt, 2018). In fact, Facebook has a notorious reputation for skewing metrics, unsurprisingly mostly to its own advantage. So although I talk about the importance of accuracy in measurement, in practice it is quite difficult to achieve. Measurement of audiences has always been a difficult task, and the digital advertising industry managed to sell this fake dream of accurate profiles and audiences even though it could never properly prove it. In fact, the industry justifies its methods and lack of accuracy by claiming that more surveillance would achieve better profiling.
All the chapters show that processed listening was conducted to produce a dynamic database/archive, created by measuring, categorizing, recording and archiving behaviors and relations. This is an ongoing process because, to be used for monetization, the database needs to be as large and as up to date as possible. Power is enacted at each of these stages: from the type of measuring devices and who can operate and interpret them, to the units of measurement and the decisions about what to categorize and count, to what is archived in the database, who can access and analyze that knowledge, and, importantly, for what purposes.
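The measure, categorize, record and archive loop described above can be sketched in a few lines of illustrative code. All names, fields and thresholds here are my own invention, not any company’s actual system; the point is only to show where the categorization decision sits in the pipeline:

```python
# Toy version of the measure -> categorize -> record -> archive loop.
# The 'rate' field and the threshold of 10 are purely illustrative.
archive = []

def categorize(event):
    # The politics sit here: whoever writes this rule decides what counts
    # as 'deviant' behavior and what passes as 'legitimate'.
    return "deviant" if event["rate"] > 10 else "legitimate"

def process_listening(event):
    event["category"] = categorize(event)   # categorize the measured behavior
    archive.append(event)                   # record and archive it

# Three measured behaviors flowing through the pipeline.
for rate in (2, 50, 7):
    process_listening({"rate": rate})

print([e["category"] for e in archive])  # ['legitimate', 'deviant', 'legitimate']
```

Changing one line in `categorize` redraws the boundary for every behavior that passes through afterwards, which is exactly why access to these stages is a form of power.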
As danah boyd and Kate Crawford argue in regard to big data practices, this is “a new way to claim the status of quantitative science and objective method. It makes many more social spaces quantifiable. In reality, working with Big Data is still subjective, and what it quantifies does not necessarily have a closer claim on objective truth” (boyd and Crawford, 2012: 667). In this way, people are taken out of context and understood as data points to be assembled and reassembled, under a neo-liberal, market-driven logic that has profit as its main value. When the media practitioners discussed in the three stories gain knowledge about people and territories, they are able to temporally and spatially reorder them in a rhythmedia that benefits their business.
Rhythmedia has been enacted in all three stories in different ways, and it serves several purposes: one, restructuring the territory in a way that promotes a rhythm that increases value (and hence profit) for media companies; two, filtering out advertising practices that do not suit the dominant experts; three, producing specific temporalities (speed and frequencies of actions, prioritising specific times of the day/week/year, reordering and stretching work/leisure time) that benefit the media company in terms of efficiency and more value; four, preventing political gatherings by un-crowding and de-politicizing them; five, expanding listening capacities to gain more knowledge about people; and, importantly, six, reproducing people as particular subjects by training their bodies with repetitious actions. Rhythmedia means reconfiguring anything that interferes with, harms or burdens these businesses as deviant, noise or spam.
In Chapter 3, I show how rhythmedia was conducted by filtering street commerce and Black Americans’ behaviors to produce a different street rhythm, one that promoted big retail shopping centres. In New York, Black Americans’ behaviors were also listened to and defined as noisy by Bell and the NAC. Black Americans in Harlem challenged the spatial and temporal ordering of the white locals; their subjectivities and actions were produced as noisy, distorting the white order of life. They held loud parties during the night and placed loudspeakers in the windows of their houses, thus redrawing the boundaries of night and day, and of private and public spaces. Their behavior was constructed as noise, a threat to other bodies and minds, to the healthy rhythm of people in the city.
At the same time, because they were framed as noise, it was easier to police them, as they were monitored through the (processed) listening of authorities and citizens across the layers of the cityscape. Categorizing such activities as noise helped Bell to sell the telephone and its services by restructuring the streets to serve its own interests. This was achieved by promoting the retail stores that used the telephone to sell their products, which in turn promoted Bell by advertising the telephone as a necessary apparatus for shopping.
Specific rhythms were more valuable, and bodies were (re)shaped accordingly through repetitive training. The telephone operators’ rhythm had to be as fast as machines to be efficient and make more money for the company. Bell listened to the operators’ bodies, broke their actions into smaller segments and then reordered them to become more efficient communication channels, maintaining the real-time experience while simultaneously filtering interferences that harmed this feeling of immediacy. It trained their bodies in terms of their diet, how, when and at what pace they should move in the workplace, what they should wear and how they should speak. Bell also intervened in the operators’ leisure time, defining what they should read, their ‘social norms’, how they should spend their money and so on. Listening to its operators, Bell measured, categorized, recorded and archived everything it could about them, in order to restructure them into more efficient and obedient objects.
Listening to the operators, to foreign street commerce traders and to Black Americans in Harlem was conducted both on individual bodies and, more broadly, on groups of people such as peddlers and, importantly, workers’ unions. These populations’ behaviors interfered with the economic goals of Bell, retail stores, real estate agents and others from the NAC. Their rhythms did not bring value to these interest groups and, therefore, had to be controlled, filtered out and, ideally, eliminated. With both New York City’s infrastructure and the operators, another goal was to circumvent political gatherings of unions, and the aim was to un-crowd them. In this way, bodies and territories were reconfigured; disturbing rhythms were silenced.
In Chapter 4, I discuss the database created by digital advertisers, which produces profiles and audiences, transforming them into commodities that are traded in ‘real-time bidding’ (RTB). Here, advertisers construct their own ‘real-time’ in the new online market territory, in which individuals, audiences (population segments) and spaces are traded within milliseconds to the advertisers who offer the most money. With rhythmedia, every rhythm has a value. Commercial rhythms are constructed and promoted, and become the main engine that (re)produces new notions of time, subjectivities and territories, all orchestrated at the back-end. Here, I illustrate how real-time transactions are conducted by algorithms and automated systems, but are managed and given instructions by humans. As humans are the product, it is important to distinguish between behaviors that bring value, and will therefore be categorized as human, and non-profitable behaviors, which will be categorized as non-human or spam.
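A minimal sketch of such an auction may help make the mechanism concrete. The bidders, prices and profile fields below are hypothetical, and real RTB exchanges involve many more intermediaries, protocols and pricing rules than this toy first-price auction:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    price: float  # what the advertiser offers for this single impression

def run_auction(profile, bidders):
    """Toy first-price auction: each bidder prices an impression from the
    person's behavioral profile; the highest offer wins the impression."""
    bids = sorted((bidder(profile) for bidder in bidders),
                  key=lambda bid: bid.price, reverse=True)
    return bids[0]

# Hypothetical bidders pricing an impression from behavioral data.
def shoe_brand(profile):
    return Bid("shoe_brand", 2.5 if "running" in profile["interests"] else 0.1)

def news_site(profile):
    return Bid("news_site", 1.0)

winner = run_auction({"interests": ["running", "music"]},
                     [shoe_brand, news_site])
print(winner.advertiser)  # prints "shoe_brand"
```

The point of the sketch is the temporality: this whole exchange runs in milliseconds, before the page finishes loading, which is the ‘real-time’ the advertisers construct for themselves.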
Like any new market, this automated market demanded standardization and the authorization of the legitimate actors involved. Therefore, as I show in Chapter 4, it was crucial to illegalize specific unsolicited bulk communications and categorize them as spam. Despite spam and cookies having a similar rhythm, cookies were authorized by default (design) while spam was filtered out. The digital advertising industry did not want excessive behaviors, human or robotic, to interfere with the measurement of behaviors and, thus, with the efficient operation of the automated market it facilitated. A similar strategy appeared in Chapter 3, where people’s crowded (‘bulky’) behavior was promoted as long as it was part of legitimate commerce in retail stores. But when such crowded activities took place in the streets, conducted by peddlers or unions, they were deemed illegitimate and criminalized.
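The boundary-drawing between ‘human’ and ‘spam’ rhythms can be illustrated with a toy rate-based filter. The threshold and time window below are invented for illustration and do not reflect any real filter’s parameters; what matters is that the boundary is a design decision, not a natural fact:

```python
def classify_sender(timestamps, bulk_threshold=100, window_seconds=60):
    """Toy rhythm-based filter: a sender emitting more than bulk_threshold
    messages inside any window_seconds-long window is flagged as 'spam';
    every other rhythm passes as 'human'. Both parameters are illustrative."""
    timestamps = sorted(timestamps)
    for i in range(len(timestamps)):
        j = i
        # Count messages falling inside the window that starts at timestamps[i].
        while j < len(timestamps) and timestamps[j] - timestamps[i] <= window_seconds:
            j += 1
        if j - i > bulk_threshold:
            return "spam"
    return "human"

print(classify_sender([0.5 * k for k in range(300)]))  # 300 msgs in 150s -> "spam"
print(classify_sender([60.0 * k for k in range(10)]))  # one per minute -> "human"
```

Notice that a tracking cookie fired on every page view can easily exceed such a rate, which is precisely why, as the chapter argues, cookies had to be authorized by other means (defaults and standards) rather than by their rhythm.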
To authorize cookies, the advertising industry lobbied European Union legislators and the Internet Engineering Task Force (IETF) so that cookies would be considered legitimate communications. Browser settings helped in this standardization process by ignoring the IETF recommendations (which were later softened) and not giving people listening capacities to inspect their own bodies. Here, again, the ‘back’ and ‘front’ ends of browsers’ default settings drew territory-design boundaries of asymmetric power relations. This boundary determined who can listen to what: people cannot listen to what is being done to their bodies, while their bodies are a public listening space. In this way, the ‘back’ and ‘front’ ends also drew boundaries between human and robotic behaviors, which were conducted in different temporalities. The ‘back-end’ operates a multi-layered market at fast-paced rhythms so that companies can restructure the territories people experience in the ‘front-end’ and, consequently, produce their options of living.
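As a toy illustration of how such defaults operate, consider a minimal browser model that silently stores any cookie a responding server sets, first-party or third-party. This is a deliberately simplified sketch of the mechanism discussed above, not how any actual browser is implemented, and the site names are invented:

```python
class ToyBrowser:
    """Minimal model of the default described above: cookies set by any
    responding server are stored silently, with no prompt to the person."""
    def __init__(self, accept_third_party=True):  # acceptance was the default
        self.jar = {}
        self.accept_third_party = accept_third_party

    def visit(self, site, embedded_trackers=()):
        self.jar.setdefault(site, f"id-{site}")        # first-party cookie
        for tracker in embedded_trackers:              # third-party cookies
            if self.accept_third_party:
                self.jar.setdefault(tracker, f"id-{tracker}")

browser = ToyBrowser()
browser.visit("news.example", embedded_trackers=["ads.example"])
browser.visit("shop.example", embedded_trackers=["ads.example"])
# 'ads.example' now holds one identifier recognized across both sites,
# without the person ever being asked.
print("ads.example" in browser.jar)  # prints True
```

The asymmetry the chapter describes is visible even in this sketch: everything happens inside methods the person never sees, while the ‘front-end’ they experience shows only the pages they visited.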
People, then, are the start and end points of this feedback loop, which operates as a continuous process of subject production. The dynamic database/archive the advertising industry produces from people (according to profiles and segments of populations) also shapes the way the territory is reordered according to their actions at specific times. Ironically, during the period covered in Chapter 4, the digital advertising industry argued that it was too young to be regulated and that ‘innovation’ should not be stifled. Today, the same industry argues that it is too late to be regulated because this is the way things are.
These repetitive changes of the territory’s design aim to manipulate feelings by shaping different temporalities, and this was illustrated in a larger scope with Facebook’s ‘emotional contagion’ experiment in 2014. In 2017, Facebook also claimed to know when young teens are feeling down (worthless, insecure, anxious, etc.) according to their platform use, and then sold this vulnerable moment to advertisers who could target them and exploit their fragile emotional state (Machkovech, 2017). Trying to monetize people’s emotions is one of the ‘hottest’ debates at the moment with companies such as Verily, which is owned by Google, tracking the way people tap, type, and scroll on their phone to identify mental illnesses (Kaplan, 2018). Having more knowledge about people’s relations with other people, brands, content, objects and devices, whether these relations are silent or not, is used to influence their behaviors and feelings towards creating more value.
In their report on the way digital infrastructures create new opportunities for political manipulation, Anthony Nadler, Matthew Crain and Joan Donovan (2018) argue that the digital advertising industry and other intermediaries weaponize data collection and targeting to strategically influence individuals and groups:
[D]ata-driven advertising allows political actors to zero in on those believed to be the most receptive and pivotal audiences for very specific messages while also helping to minimize the risk of political blowback by limiting their visibility to those who might react negatively. (Nadler, Crain and Donovan, 2018: 1)
But the main influence here, as they argue, is not really changing people’s beliefs; rather, the goal of such campaigns is to stir emotions, anxieties and resentments around specific topics. These strong feelings subtly influence the political decisions of the most vulnerable people. But such infrastructures influence people not only in the context of politics with a big P, and not only the vulnerable. Rather, everyone is a target, because we all have moments when we are more emotionally receptive to specific messages and interactions. Every aspect of our lives is monetizable. And precisely here lies the power of these practices: they shape people’s behaviors and perceptions, whether about politics, economics, music, film, sex or love.
Another way that Facebook conducts rhythmedia is by using its other human processors, its paid workers—the content moderators. Similar to the telephone operators, content moderators also have to detect problematic content, people or brands, filtering them according to specific instructions (according to manuals that are updated constantly) and remember these actions so they can predict future problems. Content moderators are trained to become automatic machines, hidden from people and other actors; they are part of the communication channel but also its filter. They do this by deploying processed listening, tuning in and out of multiple locations in the back-end of the media apparatus, and performing various procedures to maintain the media as efficiently as possible and, importantly, profitable.
This is what Astra Taylor (2018) calls fauxtomation, a marketing strategy to make technologies seem ‘smart’ and, importantly, as if they operate with no human intervention. The purpose behind fauxtomation, as Taylor rightfully points out, is that it reinforces the perception that if specific work is unpaid or underpaid, then it is not valuable work. As she argues:
Automation is both a reality and an ideology, and thus also a weapon wielded against poor and working people who have the audacity to demand better treatment, or just the right to subsist. But if you look even closer, things get stranger still. Automated processes are often far less impressive than the puffery and propaganda surrounding them imply—and sometimes they are nowhere to be seen. Jobs may be eliminated and salaries slashed but people are often still laboring alongside or behind the machines, even if the work they perform has been deskilled or goes unpaid. (Taylor, 2018)
Both telephone operators and content moderators illustrate the politics behind undervalued and underpaid work of women and people of color and the way their work is silenced. But as I have shown, there is a decision-making process used by these human communication channels, and the filtering that these workers deploy has immense implications for the way we experience and understand media, and, importantly, ourselves and our surroundings. Their work can determine which people and behaviors are considered to be illegitimate, deviant, noisy and spam.
At the same time, in all three cases, people are educated to take care of their bodies. As I show in Chapter 3, citizens of New York City were trained to be quiet, and operators had to eat a specific diet and exercise to keep their bodies healthy. Chapter 4 shows how the EU Safer Internet Action Plans encouraged people to be aware of harmful and illegal content, rather than to understand how the internet works, how to encrypt their actions and how to protest collectively. Chapter 5 highlighted how Facebook encouraged people to report harmful content and harmful peers according to its community standards. People are produced as subjects who need to take care of their own bodies in a way that benefits the ‘health’ of the media companies. Here, Foucault’s notion of the management of the self emerges again: people are expected to take care of their own well-being. Because people produce value for media companies, they have to be kept in good condition for monetization and trade, and not be too noisy.
Only recently have more scholars and organizations called for ‘digital understanding’ (Doteveryone, 2018) and critical data literacies (Carmi et al., forthcoming), so that people would understand what is happening at the back-end. However, as I mentioned above, understanding these complex media territories is hard because people do not have the resources and processing capacities of technology companies. So calls for transparency and for the ability to be forgotten from the archives of the web and platforms (as the GDPR enables citizens in the EU to do) are only a partial and very limited solution. If we still have territories with inefficient systems of ‘consent’, then people’s agency, self-determination and autonomy are not going to be possible options of living. Such territories will still hold people responsible for exploitative practices conducted on their bodies and allow technology companies to treat people as objects to own and monetize.
As this book shows, order has its own rhythmedia, and it is often influenced by capitalist considerations. Media companies’ strategies are especially important as they are often intended to de-politicize behaviors and un-crowd gatherings or mass actions of people. Designing mediated territories according to the ideal of personalization is one consequence of such individualization. Personalization has been the main rhythmedia promoted by Bell, the digital advertising industry and Facebook to cater to their business models, which target individual people to produce them as products.
This type of infrastructure, which promotes the individualization of territory, also enables problematic algorithmic ordering, such as deceptive ads and (political) misinformation or disinformation, to go unnoticed and unchallenged because it runs under the radar of journalists, scholars, regulators, politicians and the general public. Many of the problems democratic societies experience today—fake news, micro-targeting, disinformation, profiling, propaganda, and more that will probably have been added by the time this book is published—are caused by personalized mediated territories. They have been enabled by the normalization of surveillance through processed listening, which creates huge databases that can be manipulated by the highest bidder.
At the same time, such personalized territories order a specific rhythmedia which discourages, filters and removes political mass actions by un-crowding people. This is what power sounds like. Nora Draper and Joseph Turow (2019) call this feeling of helplessness digital resignation: it is an “explanation for the inaction, limited actions, or inconsistent actions that individuals take in relation to their privacy concerns: they are resigned. That is, while these people feel dissatisfied with the pervasive monitoring that characterizes contemporary digital spaces, they are convinced that such surveillance is inescapable” (Draper and Turow, 2019: 2). They ask what contributes to this feeling and to the lack of willingness to engage in collective action. As I showed in this book, the main reason is these personalized territories, which produce a particular individual experience meant to ‘un-crowd’ collective and political action. The dominant rhythmedia that media companies orchestrate through personalization, then, produces asymmetric power whereby ‘sociality’ is individual, de-humanized, and raw material for intervention.
Such personalized territories include default settings, ‘consent’ boxes, and tempo-spatial orders customized for maximum profit and the illusion of control. Media companies’ framing of ‘control’ constructs people as ‘rational’ data-objects, individual things that can have ownership over themselves just like these companies do. It maintains people’s agency as reactive rather than proactive, limiting their imaginations, actions and understandings. Media can be designed to promote collective actions where people can debate and negotiate how to live their lives and what the deviant means in different contexts of their lives. This will be an ongoing process which involves negotiating between multiple values and interests, and is therefore not an easy task. However, considering where we stand now, it seems worth a try.
The rhythmedia that media (re)construct, then, influences the way people think, feel, act, rebel, desire, protest, and interact with one another. Precisely because of this, the way that ‘deviant’ and illegitimate behaviors are (not) defined, constructed or negotiated is about power. Such power manifestations have transitioned from the direct action of sovereign and disciplinary power to soft power, a more biopolitical strategy operating through indirection, flexibility and mutability. This book opened the labyrinth of the back-end, amplifying its multiple communication channels and the ways they distort our lives. But there are many more yet to be opened. The way people are measured through media can provide a lot of insight into the way (non)humans are (re)configured, and, as the power of platforms increases, we must be able to critique and reject the stories they try to tell and sell us. Nothing in the way media are created, developed and used is inevitable, just as spam can sometimes be a tasty and interesting thing to digest.
1. See Edina Harbinja’s work (for example, Harbinja, 2017) on what happens to people’s online data after they die.
Bartlett, J. (2018). The Secrets of Silicon Valley. BBC2.
boyd, d., & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679.
Bucher, T. (2018). If… Then: Algorithmic power and politics. Oxford, UK: Oxford University Press.
Carasic, C. (ND). Electronic Disturbance Theater and FloodNet Scrapbook. Available at: www.carminka.net/scrapbook/edt_scrapbook.htm (Accessed on 2 December 2019).
Carmi, E., Yates, J., Lockley, E., & Pawluczuk, A. (forthcoming). Rethinking digital literacy in the age of disinformation.
Creswell, J. (2018). Amazon sets its sights on the $88 billion online Ad market. New York Times. Available at: https://www.nytimes.com/2018/09/03/business/media/amazon-digital-ads.html (Accessed on 23 April 2019).
Doteveryone. (2018). People, power and technology: The 2018 digital understanding report. Available at: http://understanding.doteveryone.org.uk/ (Accessed on 22 March 2019).
Draper, N. A., & Turow, J. (2019). The corporate cultivation of digital resignation. New Media & Society, 1461444819833331.
Forrest, S., & Beauchemin, C. (2007). Computer immunology. Immunological Reviews, 216(1), 176–197.
Foucault, M. (1982). The subject and power. Critical Inquiry, 8(4), 777–795.
Foucault, M., Burchell, G., Gordon, C., & Miller, P. (1991). The Foucault effect: Studies in governmentality. Chicago, IL: University of Chicago Press.
Gehl, R. W. (2018). Weaving the Dark Web: Legitimacy on Freenet, Tor, and I2P. Cambridge, MA: MIT Press.
Gilliard, C. (2019). Privacy’s not an abstraction. Fast Company. Available at: https://www.fastcompany.com/90323529/privacy-is-not-an-abstraction (Accessed on 3 December 2019).
Harbinja, E. (2017). Post-mortem privacy 2.0: Theory, law, and technology. International Review of Law, Computers & Technology, 31(1), 26–42.
Kaplan, M. (2018). Happy with a 20% chance of sadness. Nature. Available at: https://www.nature.com/articles/d41586-018-07181-8 (Accessed on 23 April 2019).
Machkovech, S. (2017). Report: Facebook helped advertisers target teens who feel “worthless”. Arstechnica. Available at: https://arstechnica.com/information-technology/2017/05/facebook-helped-advertisers-target-teens-who-feel-worthless/ (Accessed on 23 April 2019).
Matias, N., Hounsel, A., & Hopkins, M. (2018). We tested Facebook’s Ad screeners and some were too strict. The Atlantic. Available at: https://www.theatlantic.com/technology/archive/2018/11/do-big-social-media-platforms-have-effective-ad-policies/574609/ (Accessed on 23 April 2019).
Nadler, A., Crain, M., & Donovan, J. (2018). Weaponizing the digital influence machine: The political perils of online Ad tech. Data & Society. Available at: https://datasociety.net/wp-content/uploads/2018/10/DS_Digital_Influence_Machine.pdf (Accessed on 23 April 2019).
Rosenblatt, J. (2018). Facebook accused of hiding inflated Ad metrics back in 2015. Bloomberg. Available at: https://www.bloomberg.com/news/articles/2018-10-17/facebook-accused-of-hiding-inflated-ad-metrics-back-in-2015 (Accessed on 23 April 2019).
Taylor, A. (2018). The automation Charade. Logic: A Magazine about Technology, 5. Available at: https://logicmag.io/05-the-automation-charade/ (Accessed on 23 April 2019).