Understanding the Power Behind Spam, Noise, and Other Deviant Media
Media Distortions is about the power behind the production of deviant media categories. It shows the politics behind categories we take for granted, such as spam and noise, and what they mean for our broader understanding of, and engagement with, media. The book synthesizes media theory, sound studies, science and technology studies (STS), feminist technoscience, and software studies into a new composition to explore media power. Media Distortions argues that sound is a particularly useful conceptual framework because of its ability to cross boundaries and move strategically between multiple spaces, a quality that is essential for studying multi-layered mediated spaces.
Drawing on repositories of legal, technical, and archival sources, the book amplifies three stories about the construction and negotiation of the ‘deviant’ in media. The book starts in the early 20th century with Bell Telephone’s production of noise, tuning into the training of its telephone operators and its involvement with the Noise Abatement Commission in New York City. The next story jumps several decades to the early 2000s, focusing on web-metric standardization in the European Union, and shows how the digital advertising industry constructed web-cookies as legitimate communication while making spam illegal. The final story focuses on the recent decade and the way Facebook filters out antisocial behaviors to engineer a sociality that produces more value. These stories show how deviant categories redraw the boundaries between human and non-human, public and private spaces and, importantly, social and antisocial.
5 Engineering the (anti)social
Within a relatively short amount of time, from 2004 onwards, it has become difficult to imagine our lives without social media. Whether you are an avid user or only an occasional one, platforms, and specifically Facebook (mainly in the West), have become inseparable from so many things we do every day: get reminders about birthdays, get invitations to events, text your friends, upload pictures, get updates about news and people’s (friends’?) lives, chat in private groups, and, if you’re really lucky, get the occasional dick pic. If you are single or just looking for sexual encounters, then in order to use dating apps like Tinder, Grindr, Bumble, and Feeld you will usually have to register with your Facebook account. Facebook has diffused into so many aspects of our lives that some people do not even consider themselves ‘on’ it when they use it. Although these platforms are free to use, there is a price to pay; we are just not made aware of it.
By now, I am sure that you want to throw away any book, article, or post that mentions Facebook. The company managed to go from the cool kid on the block to one of the most annoying and disputed ones. So why should you read another chapter about Facebook? The answer is that Facebook is just an example of bigger questions that we, as media scholars and members of society, need to ask. Examining Facebook opens up important questions about the kind of power media companies have. How do specific behaviors and people become categorized as deviant? Why do they receive this categorization? Under which conditions are they made deviant? And, importantly, what is at stake?
As the previous chapter showed, soft-law approaches have been the governing model for the EU internet since the end of the 1980s. Even softer approaches have been adopted in the USA and other regimes. As this kind of transfer of power from states to media companies races forward, we need to pause, listen to this soundtrack and ask—Is this the kind of society we want to live in? Do these companies have too much power? What other ways of using these technologies are possible? And what can we demand from these companies? We need to reclaim our technological future.
Trying to tune into Facebook has been quite challenging for researchers. Facebook does not reveal the rationale behind its algorithms, its ordering, or even its workers. Facebook does not give access to the way its various components function, which has earned it the label of a ‘walled garden’ (Berners-Lee, 2010). Several scholars from media and communication, digital sociology, and software studies have examined Facebook using various tools. It has been a challenge because Facebook has multiple layers that consist of software, algorithms and code, but also human workers. In addition, these elements are constantly changing, and some workers are outsourced (such as content moderators), so they are not technically considered direct Facebook workers. The company also collaborates, purchases, and affiliates with many other companies, which makes it difficult to understand how far it stretches its tentacles.
Because of such challenges, scholars have developed creative ways to examine platforms like Facebook. Carolin Gerlitz and Anne Helmond (2013), for example, examine Facebook from a ‘medium-specific’ approach, inspired by Richard Rogers (2013). In this method, they ‘follow the medium’ and, as part of the Digital Methods Initiative (DMI), have developed a tool called Tracker Tracker to try and track Facebook’s tracking techniques. But while the DMI methods make important contributions to the debates about platforms, they still provide only one aspect of them: the medium side. They do not account for the humans, both users and workers, who take part in the complex assemblage that is Facebook. Importantly, they do not question the medium’s tools, units, and standards. They take many things for granted, especially in our case—what spam is. Such deviant behaviors are not counted or included in the analysis as equal components of value. However, as the chapters above have shown, media’s infrastructure, design, ways of use, measurements, and units are all developed with specific values and intentions baked into them: nothing is inherently deviant; it is (re)produced.
Tackling some of these obstacles, Beverley Skeggs and Simon Yuill (2016) developed several methods and tools to ‘get inside’ Facebook and challenge the platform’s self-description as a ‘social network’. Importantly, they used rhythmanalysis as a way of understanding the relations between different elements. Specifically, they use rhythms of life, rather than networks, as a way of explaining what Facebook ‘does.’ At the same time, they investigated whether Facebook makes people do things by untangling forms of engagement, asking people about their use of the platform. Although they argue that Facebook is an epistemological platform that is performative, they focus mainly on ‘lifeness’, a term borrowed from Sarah Kember and Joanna Zylinska (2012). Therefore, they do not account for the way the divisions between rhythms of ‘life’ and ‘non-life’ have been rationalized, enacted, and negotiated.
This issue has been emphasized by Nicholas John and Asaf Nissenbaum (2019), who analyzed the APIs of 12 major social media platforms and found that they “do not enable individual users to obtain knowledge about negative actions on social media platforms” (John and Nissenbaum, 2019: 8). This new field of investigation into dis-connectivity (see also Karppi, 2018) points to these rhythmic irregularities and to how particular rhythms are encouraged while others are suppressed. As Media Distortions emphasizes—all rhythms count; the processes that turn some into ‘lifeness’ rhythms while filtering out others are what I am interested in amplifying. In other words, all the deviant, spammy, silent, and unwanted rhythms—all are counted and have value.
So how do we approach Facebook, then? As Taina Bucher argues with regard to the unhelpful use of the ‘black-box’ metaphor, instead of considering platforms as impossible to ‘see’—“[a]sk instead what parts can and cannot be known and how, in each particular case, you may find ways to make the algorithm talk” (Bucher, 2018: 64). To understand how Facebook orchestrates people, objects, and their relations through rhythmedia, I used five methods. First, following Bucher’s (2018) technography method, I conducted an auto-ethnography of my newsfeed to examine how it orders my experience, by checking how often the Top Stories and Most Recent preferences change. Second, I catalogued different terms of use sections for one year, to examine what kinds of arguments Facebook makes and how various definitions and explanations change over time. So yes, I am among the 1% who have read their terms and conditions.
To get a sense of how Facebook works, there is a need to go to the ‘back-end’ of the software in other ways. Therefore, third, I developed a method I call platform reverse engineering, meaning that I read platform companies’ research articles. In this context, I refer to the reverse engineering of software, and the attempt to analyze and identify its components and functions. As Chikofsky and Cross define it, reverse engineering is “the process of analysing a subject system to identify the system’s components and their interrelationships and create representations of the system in another form or at a higher level of abstraction” (1990: 15). By ‘reverse engineering’, I mean that I analyze these articles by searching for specific information that can reveal the way the platform develops its functions.
I focused on the rationale that guides the research (what are the company’s researchers trying to examine?), the tools and methods the company’s researchers use, and the way they conceptualize the platform and the people who use it. In this way, software and algorithms can be examined through details given by the companies that produce them. Facebook operates its own research center that employs in-house researchers to conduct research published in peer-reviewed journals, just like any academic research. This ‘archive’ (https://research.facebook.com/publications/) can also shed light on the motives, interests, and rationale that stand behind the company.
Fourth, I followed several pages that Facebook uses to announce news about its platform, mainly Facebook’s News Room, where it shares statements about its current and new features. Finally, I analyzed specialist technology websites, which provided in-depth understanding of things that Facebook did not reveal. The websites I analyzed included Wired, Slate, TechCrunch, and Salon.
Facebook was chosen as a case study because it is the most dominant social media platform (in terms of number of users, engagement, and revenues), and presents a new kind of digital territory that tries to colonise the whole web. If the previous chapter showed how multiple accelerated rhythm channels were introduced by third-party cookies, here a different kind of territorial restructuring is at play. This chapter illustrates how these channels are centralized back to a main node, which is Facebook. The chapter corresponds with the previous chapters, showing similarities and differences in governing, managing, controlling, and (re)producing people and territories by media practitioners with the use of seven strategies.
Social media platforms offer their services for free because they operate a multi-sided market where people’s behavior turns into the product (Zuboff, 2015) and is traded between multiple third-party companies, mainly advertisers. These media companies operate platforms which algorithmically sort, rank, classify, amplify, and filter different types of information and relations. Because the business model relies almost exclusively on advertising, the way that people and things are ordered is designed to cater to advertisers. This means that more engagement means more value, and more profit. This was confirmed by Facebook Chief Operating Officer Sheryl Sandberg, who, along with other social media platform representatives, was called to Capitol Hill on September 4, 2018 to answer questions about propaganda and voter manipulation. Sandberg agreed with Senator Kamala Harris’ suggestion that “the more people that engage on the platform, the more potential there is for revenue generation for Facebook” (Glaser, 2018). But it is not only more engagement that matters; it is a very particular engagement.
This chapter shows how the power relationship that Facebook establishes through its ability to listen to people in various spaces across the web enables it to define, construct, and manage what constitutes the ‘social’ and ‘sociality’. The chapter outlines Facebook’s filtering machines, which include human and non-human, paid and unpaid actors, trained in a feedback loop to behave in the appropriate way. In this way, Facebook determines what it means to be human and social on the web and beyond. It does so by listening and creating a dataset that includes all knowledge about people, and by rendering only what it considers to be ‘social’ as possible options of living in its territory and beyond. In this context, examining how deviant, ‘noisy’, and ‘spammy’ behaviors are constructed can tell us a great deal about what is considered normal or, in this case, how to engineer the social.
Facebook offers (new) ways and spaces for communication between human and non-human in the territory it produces. Facebook creates means for (self-)expression, action, participation, channels of communication, and the architecture that enables, controls, or restricts them. It structures mechanisms and tools that enable people to present themselves and interact with others in its territories by pushing specific formats of expression. At the same time, the platform also limits, restricts, reduces, and filters people’s options for action and expression, their way of living. This is similar to Bell’s operators, who had to express themselves through the ‘voice with the smile’, meaning in a positive way, much like the ‘Like’ button. By doing so, the service trains the (digital) body towards behaviors framed as, and reduced to, ‘positivity’. By stripping away contexts, nuances, and feelings from the way people can present or express themselves, Facebook de-politicizes its people through a biopolitical mechanism. Importantly, Facebook limits, constructs, shapes, manages, and commodifies the way humans and non-humans can behave within its territory and beyond.
Filtering is an important strategy for keeping Facebook’s multiple communication channels as productive and efficient as possible. Filtering in this context is conducted by human and non-human actors, paid and unpaid, who have different considerations and motivations but who are all ordered in a particular rhythmedia. In order to operate as good filters according to Facebook’s business model(s), all the elements involved, both human and non-human, need to go through training programs. Such training of the body is meant to make all actors internalize the correct ways of behaving in the platform’s territory, but it also turns them into educators of others who do not obey these standards.
The separation between signal and noise in this context is complicated, as what constitutes a disturbance is decided by multiple actors and is not restricted to those who create the medium. What needs to be filtered constantly changes, because what is considered an interference to the business model is also constantly in flux. Thus, filtering is a continuous process that adjusts according to new and emerging trends, legal cases, economic shifts, elections, and also the business development of Facebook, its affiliates, its subscribers, and all non-human actors involved. This is shown in the diagram below.
To filter unwanted content and behaviors and order its territory accordingly, Facebook (re)produces four main filtering mechanisms, which function in a recursive feedback loop (see Figure 5.1). The first two are Facebook’s non-human elements: Facebook’s architecture design, specifically the audience selector, sponsored stories, and social plugins; and Facebook’s algorithms, specifically the newsfeed and the Facebook Immune System. The other two filtering mechanisms are human elements. These include the free labor1 of its (human) subscribers, who perform as filtering machines in four ways: rating what is interesting by ‘Liking’ content (but not in an excessive way), reporting what is not interesting or is offensive/unwanted (which then enables users to ‘unfollow/see less/see first/favourite’ friends), filling out surveys, and listening to other users. The second group of human actors includes Facebook’s human labor workforce, which consists of low-waged, outsourced labor that conducts content moderation, as well as in-house raters called the Feed Quality Panel. Each filter will be discussed below according to the order outlined above.
As I have shown in the previous chapters, the architecture that media prescribe is not neutral. Facebook’s architecture is also not neutral, natural, or static; it is influenced by its business model and operated by the filtering mechanisms, including Facebook’s users, bidding for ads, newsfeed algorithms, and the platform’s content moderators. This section shows how Facebook’s powerful position is established through its ability to listen to people’s behavior within and outside its platform. This enables the company to produce knowledge, profiles, and audience segments that can then inform the design of specific features. By modifying its multiple communication channels and features, Facebook can shape, control, and manage people’s self-presentation, expression, actions, and the tools they can use. In this way, Facebook (re)produces subjects that, through architectural training of the body, behave in a way that creates more value for Facebook; it conducts rhythmedia.
According to Facebook’s Statement of Rights and Responsibilities, although the platform “provide[s] rules for user conduct, we do not control or direct users’ actions on Facebook” (Facebook, 2015). However, most of the research conducted by Facebook explicitly aims to influence people’s behavior to increase the value of the service. Following the public outrage after the ‘emotional contagion’ research was exposed in July 2014, Facebook’s Chief Technology Officer Mike Schroepfer argued that “[we] do this work to understand what we should build and how we should build it, with the goal of improving the products and services we make available each day” (Schroepfer, 2014). Building and changing the architecture, then, is done to ‘improve the products and services’, which are offered for payment to advertisers and companies. Therefore, engineering various elements of the platform should yield as much profit as possible from the free service it offers to its ‘normal’ subscribers.
Most of the research that Facebook conducts in the guise of academic research is intended to advise platform designers on how to create architectures in a way that will influence people’s behavior to benefit the company’s goals. As Facebook’s researchers argue, “Social networking sites (SNS) are only as good as the content their users share. Therefore, designers of SNS seek to improve the overall user experience by encouraging members to contribute more content” (Burke et al., 2009: 1). This ‘improvement’ comes in the shape of changing and influencing the architecture, the way people connect with their peers (Taylor et al., 2013), and their overall well-being (Burke et al., 2010; Burke and Develin, 2016). It also involves filtering problematic behaviors which can harm the platform, a task that can be conducted by its algorithms, its workers, and the people who use it (more on this in the following sections). The arguments from Facebook researchers show a clear intention to bring more value, mainly economic, to platforms. They advise on architecture changes to influence people’s behaviors and emotions towards more, and preferably positive, engagement, to cater to the advertisers who sponsor the platform.
A prime example of a design feature intended to influence people’s behavior on Facebook is the newsfeed. The newsfeed feature was launched on September 5, 2006, and provided a space where people could “get the latest headlines generated by the activity of your friends and social groups” (Sanghvi, 2006). However, it is also a way to motivate newcomers’ contributions on the platform through ‘social learning’. With the newsfeed, people learn how others behave on the service (Burke et al., 2009). Social learning, as Facebook researchers argue, is about listening to other people’s behavior without distraction, and then performing the same behavior. To have a space where people can learn the correct way to behave, Facebook introduced the newsfeed, which:
[A]llows newcomers to view friends’ actions, recall them later, and may make links to the tools for content contribution more salient … Social networking sites offer the opportunity to fine-tune the social learning metric, by taking into account friends’ actions and exactly which actions the newcomers were exposed to. (Burke et al., 2009: 2)
By introducing this key architectural feature, Facebook wanted to teach people how to behave on its platform according to its definition of sociality, learning by listening to peers’ behavior to create a desired rhythmedia. In many of Facebook’s research findings, platform designers are advised to encourage people to engage more by contributing content and interacting with other people or brands, by influencing either the architecture or people’s friends. Facebook researchers advise “nudging friends to contact another user” (Burke et al., 2011: 1), “engineer[ing] features which encourage sharing or make peer exposure a more reliable consequence of product adoption or use” (Taylor et al., 2013: 2), or “creating and optimising social capital flows on their services” (Burke et al., 2011: 9). Here, Facebook reconstructs its territory and nudges people and their peers towards more engagement, and hence more value for the company.
Metrics influence people’s behavior. For instance, in ‘Nosedive’, an episode of the science fiction television show Black Mirror, we see a character lose her shit because her rating is not high enough. In the episode, people rate other people on their everyday behavior, metrics that put a number on everything you do, from the smallest interaction with your coffee barista to weddings. These ratings change how people behave, think, and feel. Although the episode is intended as a satire on social media, it provides a poignant illustration of the power metrics can have over us, and it is not funny.
In 2014, two years before ‘Nosedive’ aired, the new media artist and scholar Benjamin Grosser (2014) was also intrigued by the power of metrics and wanted to understand what would happen if we removed them. Grosser developed a web extension, Facebook Demetricator (2012–present), which removes all metrics from Facebook’s interface, to examine how the lack of metrics influences people’s experience. For his research, Grosser interviewed people after they had used the Demetricator, and they said that their desire for more Likes, Shares, or interaction had decreased. Such metrics, as Grosser argues, construct an economically driven architecture that influences the way people feel and behave. Listening to other people’s metrics creates a competitive environment in which people want more. According to Grosser:
Facebook metrics employ four primary strategies to affect an increase in user engagement: competition, emotional manipulation, reaction, and homogenization … Through these strategies, metrics construct Facebook’s users as homogenized records in a database, as deceptively similar individuals that engage in making numbers go higher, as users that are emotionally manipulated into certain behaviors, and, perhaps more importantly, as subjects that develop reactive and compulsive behaviors in response to these conditions. In the process, these metrics start to prescribe certain kinds of social interactions. (Grosser, 2014)
This section looks precisely at the way Facebook’s architecture prescribes social interactions by focusing on some of these design features: specifically, the audience selector, sponsored stories, and social plugins. These features are not the full list of the service’s architecture; however, they provide examples whereby multiple practitioners conduct processed listening to people’s behavior in different spaces and times.
The Audience Selector feature offers people who use Facebook the ability to control which people can listen to them. Facebook elaborates on this feature by saying that “When you share something on your Timeline, use the audience selector to choose who it’s shared with. If you want to change who you shared something with after you post it, return to the audience selector and pick a new audience”. As Mark Zuckerberg argues in relation to such mechanisms, “Control was key” (Zuckerberg, 2011). He continues by arguing that this feature:
[M]ade it easy for people to feel comfortable sharing things about their real lives … With each new tool, we’ve added new privacy controls to ensure that you continue to have complete control over who sees everything you share. Because of these tools and controls, most people share many more things today than they did a few years ago. (Zuckerberg, 2011)
Features are introduced to persuade people to share more and hence increase the value of the platform. As I showed in the previous chapter, promoting people’s control through browser settings was meant to make users feel as though they were empowered. Such ‘control’ narratives were meant to encourage them to contribute more personal information, opening their bodies to more (processed) listening tentacles of web-cookies. Here, similar strategies are at play, whereby the Audience Selector feature is presented as a control and empowerment tool, redrawing an artificial line between private and public spaces. However, as with the cookie control mechanism, the responsibility for what happens with the information shared is passed to individual platform users. In this context, as well, while people provide information to different audiences, they still do not know how the company uses this data. So while they are offered more control over which other individuals within their network see their content, the control they have over the back-end aspects of the interface is quite limited.
People cannot control whether Facebook and other third-party companies listen to their behaviors, because they are not offered such an option. What people ‘share’, and the knowledge gathered about them, is also unclear, since this can be a wide range of inputs, visible or not, given by people and their relations to others. Facebook’s meaning of ‘public’ is outlined in the News Feed Privacy section:
If you’re comfortable making something you share open to anyone, choose Public from the audience selector before you post. Something that is Public can be seen by people who are not your friends, people off of Facebook, and people who view content through different media (new and old alike) such as print, broadcast (television, etc.) and other sites on the Internet. When you comment on other people’s Public posts, your comment is Public as well.
However, this definition was changed on November 13, 2014 to a much broader one, under the question ‘What information is public?’ In this newer version of what ‘public’ means, Facebook provides tools for people, but these have limitations when it comes to specific categories of information that will always be public. Moreover, Facebook’s default setting is always public, which means that to change this setting people must be aware of the consequences of information being public. If people do not feel comfortable with this, they must actively change the default settings, a task that, as will be shown below, is not necessarily respected by the service (also because the definitions of ‘public’ change over time). Importantly, as I have discussed above, people just do not change their default settings. Therefore, unlike the example Facebook gives in this definition, people do not need to ‘select’ public in the Audience Selector because this option is already selected for them.
The reason behind providing the Audience Selector as a feature of Facebook’s architecture is not to empower people to control the information they share, but rather the contrary: to encourage them to share more. Facebook’s research shows that while the company claims that the Audience Selector is a tool to empower people’s privacy, it is an architecture design solution to the problem of people who self-censor:
Understanding the conditions under which censorship occurs presents an opportunity to gain further insight into both how users use social media and how to improve SNSs to better minimize use-cases where present solutions might unknowingly promote value diminishing self-censorship. (Das and Kramer, 2013: 1, emphasis in original)
Here again, the rationale of Facebook is revealed through its researchers, who highlight that ‘improving’ social media means more engagement and hence more value. Facebook’s researchers Das and Kramer give the example of an undesirable behavior: a college student who self-censored by not posting an event to a group because she feared it might be spammy to her friends who were not in that group. This means that there is an attempt to change people’s perceptions of what they interpret as spammy activity and adapt it to what Facebook wants them to think about this activity—that it is not spam. The rationale behind this feature is to increase the value of the company by getting people to contribute as much information as possible (but not excessively, as I will show below) and, by doing so, to provide richer data that media practitioners can use to produce profiles to trade with. Other architecture design features are also meant to bring more value to Facebook; in the next section, this is done by using people’s friends as channels of advertising.
One of the things that Foucault emphasizes in relation to power is the ability to influence the actions of others, and Facebook’s Sponsored Stories is a great example of exactly that. As he argues, “Power relations are rooted in the system of social networks” (Foucault, 1982: 793). Sponsored Stories is a feature that was introduced in 2011. This feature shows advertisements on the newsfeed by using people’s peers’ identities, making it look as though they recommend a particular brand, but without their knowledge or consent. It is designed to look like a ‘normal’ post within the newsfeed (not on the right-hand side, which is a designated space for other advertisements), with people’s names and photos following their interaction with this brand (Like, Share, or Post). As Facebook describes in the Advertising and Facebook Content section, it “sometimes pairs ads with social context, meaning stories about social actions that you or your friends have taken”. People’s behaviors and interactions with other people, objects, pages, brands, and groups can be used to promote products and services without their knowledge or consent. In this way, people are not only the product but are also used as channels to promote other products, for free.
People are not allowed to monetize their own profiles on Facebook. As the platform makes clear in its ‘registration and account security’ section, ‘You will not use your personal timeline primarily for your own commercial gain, and will use a Facebook Page for such purposes’. Here, Facebook demands that people who want to make a profit from themselves do so through a licensed format, in the shape of Pages. Facebook, on the other hand, can monetize people’s actions, their friends, and their relations to other entities such as brands. As indicated under the section ‘advertisements and other commercial content served or enhanced by Facebook’, 2013 version:
You give us permission to use your name, profile picture, content, and information in connection with commercial, sponsored, or related content (such as a brand you like) served or enhanced by us. This means, for example, that you permit a business or other entity to pay us to display your name and/or profile picture with your content or information, without any compensation to you. If you have selected a specific audience for your content or information, we will respect your choice when we use it.
The argument made by the plaintiffs (and accepted by the court) is that by merely participating in the SNS, users create a measurable economic value … The maintenance of an online persona (updating photos, publishing posts, commenting, Liking, or simply moving in real space with location services activated on a mobile device) is redefined by users as a form of labour, since maintaining this online presence creates economic value in social media. (Fisher, 2015: 1118)
Beyond the information people provide by performing their everyday life on Facebook, such as liking, sharing, commenting and listening (discussed more below), they are encouraged to provide as many identifying details about themselves as possible, such as their location (their home and places they visit), workplace, education, phone number, family members, relationship status and so on. In addition, people are also encouraged to share their feelings and their preferences, such as favorite books, films, TV shows, music, etc. As Facebook says in its ‘Advertising and Facebook content’ section:
So we can show you content that you may find interesting, we may use all of the information we receive about you to serve ads that are more relevant to you. For example, this includes: information you provide at registration or add to your account or timeline, things you share and do on Facebook, such as what you like, and your interactions with advertisements, partners, or apps, keywords from your stories, and things we infer from your use of Facebook.
Facebook listens to people’s lives (both actions and ‘non-actions’, which are silent) but also ‘infers’ people’s profiles by analysing the previous behaviors it archives, and makes predictions. The more knowledge and the more accurate details people provide, the better Facebook and advertisers can target them or their friends in the future. Facebook argues that this is to serve relevant ads, selling personalization as the preferred way to experience its platform. To push people to provide more details, Facebook added a feature that asks friends to give more details. If Facebook users are not willing to behave in the desired way, then their social networks can be mobilized to help them do so.
Facebook’s researchers have conducted experiments to understand how different visual displays of Sponsored Stories, which they call ‘social advertising’, influence the way people respond to these ads. In this way, Facebook wants to examine what architecture design is needed to yield the best interactions with ads. According to Facebook’s researchers:
Sponsored story ad units resemble organic stories that appear in the News Feed when a peer likes a page. Similar to conventional WOM approaches, the story does not include an advertiser-generated message, and must be associated with at least one peer. The main treatment unit is therefore the number of peers shown. Since the ad units are essentially sponsored versions of organic News Feed stories, they follow the same visual constraints imposed by the News Feed: they must feature at least one affiliated peer, and a small version of the first peer’s profile photo is displayed in the leftmost part of the unit. (Bakshy et al., 2012: 7)
This description shows how design features are used to blur the difference between what Facebook calls ‘organic’ (more on the politics behind ‘organic’ below) and a Sponsored Story in two ways: through the appearance of a story and through positioning the sponsored story in the newsfeed. This is a spatial design very similar to newspapers, as the platform usually designates ad spaces on the right-hand side, which creates a separation between ads and the newsfeed. In this way, Facebook reorders the spaces that people have become accustomed to in order to influence them with advertisements. Interestingly, on November 14, 2014, Facebook’s newsfeed announcement argued that, in surveys the company conducted, it discovered that people want “to see more stories from friends and Pages they care about, and less promotional content” (Facebook, 2014). Sponsored Stories, which continue to exist in various forms to this day, are not stories from Pages that people Like, but rather stories, paid for by advertisers, that their peers Like. In this way, you have to listen to stories you are not necessarily interested in.
‘Social advertising’, which monetizes the interactions that people’s peers have with brands and products, uses social cues and is very similar to word-of-mouth marketing. For Facebook’s researchers, ‘a positive consumer response’ means that people have clicked on the ad or liked the product/organization. The researchers also examined the way the strength of the relationship between friends can influence people into higher engagement with ads. To do this, they measured the frequency of communications between people, which included commenting on or liking posts, but also sending private messages, within a period of 90 days. As will be elaborated in the ad auction section—time, the frequency of actions, and the repetitiveness of behaviors are key measurements for Facebook’s business model. They enable the platform to monetize people’s repetitive actions, and hence their preferred actions, relations and things: to orchestrate their rhythms toward more value.
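The researchers' tie-strength measure can be sketched as a simple score over a 90-day window of communications. A minimal sketch in Python follows; the function name, the interaction categories, and the weighting of private messages over Likes are my illustrative assumptions based on the description above, not the researchers' actual model.

```python
from datetime import datetime, timedelta

def tie_strength(interactions, now, window_days=90):
    """Illustrative tie-strength score: how often two people interacted
    (comments, likes, private messages) within the last 90 days."""
    cutoff = now - timedelta(days=window_days)
    recent = [i for i in interactions if i["time"] >= cutoff]
    # Hypothetical weights: private messages treated as stronger signals
    weights = {"comment": 2.0, "like": 1.0, "message": 3.0}
    return sum(weights.get(i["kind"], 0.0) for i in recent)

now = datetime(2012, 6, 1)
interactions = [
    {"kind": "like", "time": datetime(2012, 5, 20)},
    {"kind": "message", "time": datetime(2012, 5, 25)},
    {"kind": "comment", "time": datetime(2011, 12, 1)},  # outside the window
]
print(tie_strength(interactions, now))  # 4.0
```

The point of the sketch is the window: only recent, repeated communication counts, which is exactly why the frequency and repetitiveness of behaviors matter to the business model.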
As Facebook’s researchers argue, “social networks encode unobserved consumer characteristics, which allow advertisers to target likely adopters; and the inclusion of social cues creates a new channel for social influence” (Bakshy et al., 2012: 2, my emphasis). ‘Encoding’ here means conducting processed listening to create a database which can then be used to reorder things, people and their relations in ways that yield more profit. Social cues are the way Facebook conducts rhythmedia; they are architecture designs which produce people into communication channels to influence their peers. What these experiments show is that people’s behavior is measured, categorized and archived, to then be mobilized toward influencing their friends’ behavior. Following Foucault, here, power is enacted over people’s actions and, in particular, their relations with their peers through special architecture design. This is achieved by both Facebook and advertisers, who can listen to people’s characteristics, behavior and the strength of their ties to produce advertisements, and also to turn users into communication channels that can be mobilized for advertising.
The last principle in (the already deleted) Facebook Principles section advocated ‘One World’, meaning that Facebook’s service ‘should transcend geographic and national boundaries and be available to everyone in the world’. This principle is key to Facebook’s mission to render the world into its own media standards, including currency, legitimate/appropriate behavior, trade practices, and products. This practice is enabled through Facebook Connect, which was launched on December 4, 2008, and was the next step after social buttons were introduced in 2006. Facebook Connect turned the company into the digital territory’s central node through which data is communicated to and from the rest of the web, laying the groundwork for the social plugins’ integration with the rest of the web in 2010.
During Facebook’s third conference, f8, in April 2010, Facebook launched its Open Graph service and provided an Application Programming Interface (API). This meant that it literally and technically opened the platform and enabled third parties and their developers to receive data from Facebook. At the same time, these third parties fed their data back to Facebook, integrating into its Open Graph, and embedding it deeper within the web’s architecture. As Facebook argues, the Open Graph started with the Social Graph, which was:
[T]he idea that if you mapped out all the connections between people and the things they care about, it would form a graph that connects everyone together. Facebook has focused mostly on mapping out the part of the graph around people and their relationships. (Hicks, 2010)
With the Social Graph, the edges of connection ran between ‘friends’, who served as nodes within the network. With the Open Graph, however, these links went beyond friends to include various types of objects and activities conducted within Facebook’s territory and spanning out onto the rest of the web. This was done to stretch Facebook’s knowledge database beyond a confined space (of its platform), as in a disciplinary mode of governmentality, and onto wider spaces (the rest of the internet), as with biopolitics. Listening was stretched across multiple spaces both within and outside Facebook to produce richer profiles, and importantly—produce a new territory.
The Open Graph includes Facebook’s subscribers’ data, consisting of information they share and their behaviors, which are rendered and filtered according to Facebook’s architecture, tools, design and currency. As Taina Bucher explains:
Open Graph is modelled on RDF, a W3C recommended standard for marking up a webpage in order to be able to encode data in a universally recognisable way … This mark-up code turns external websites and digital objects into Facebook graph objects, understood as entities made legible by the Facebook platform. (Bucher, 2012b)
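The mark-up Bucher describes can be made concrete with the Open Graph protocol's `og:` meta tags, which annotate a webpage so that it becomes legible as a graph object. Below is a minimal sketch using Python's standard-library HTML parser; the movie example follows the kind of mark-up the Open Graph protocol documentation uses, and the class name is my own.

```python
from html.parser import HTMLParser

class OGParser(HTMLParser):
    """Collect Open Graph <meta property="og:..."> tags from a page --
    the mark-up that renders an external page legible as a graph object."""
    def __init__(self):
        super().__init__()
        self.graph_object = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:"):
            self.graph_object[prop[3:]] = attrs.get("content", "")

page = """<html><head>
<meta property="og:title" content="The Rock" />
<meta property="og:type" content="movie" />
<meta property="og:url" content="http://www.imdb.com/title/tt0117500/" />
</head></html>"""

p = OGParser()
p.feed(page)
print(p.graph_object)
# {'title': 'The Rock', 'type': 'movie', 'url': 'http://www.imdb.com/title/tt0117500/'}
```

Once a page carries these tags, any of its visitors' actions (a Like, a Share) can be recorded against a typed object in the graph rather than against an opaque URL.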
In this way, Facebook translates other websites, objects, and actions into its own standards, while people’s activities on these places are fed back to it. As Mark Zuckerberg argued, in 2010, when he introduced the Open Graph feature:
[W]e are making it so all websites can work together to build a more comprehensive map of connections and create better, more social experiences for everyone. We have redesigned Facebook Platform to offer a simple set of tools that sites around the web can use to personalize experiences and build out the graph of connections people are making. (Zuckerberg, 2010)
For Zuckerberg, being ‘social’ means that the ordering of people and objects is filtered through Facebook’s territory, measuring units and understanding of value—all according to his desired rhythmedia. Facebook orders people’s tempo-spatial experiences to create ‘personalization’ according to their profiles. Producing profiles and creating a dynamic database was done with social plugins. When websites, platforms, and apps install social plugins, they establish two-way communication channels between their territory and Facebook. So instead of websites linking to each other in a decentralized manner, as is the case with hyperlinks, there is a double process of decentralizing and recentralizing from and to Facebook. As Zuckerberg argues above, ‘social’ means personalized experiences, and these are produced by conducting processed listening across multiple spaces on the web and then reordering people’s experiences in a personalized manner with rhythmedia on Facebook.
In order to tailor the architecture to the person, the platform needs to know them well enough to produce spaces and times that fit their profile, but, importantly, ones which nudge them towards more engagement. Facebook’s Open Graph creates a particular type of ‘social’ compared to the previous (relatively) decentralized web, as the social graph has made it so that all roads go from and come back to Facebook, establishing the platform as the central node. The social plugins that Facebook launched at the start were the Like Button, the Activity Feed, Recommendations, the Like Box, the Login Button, Facepile, Comments, and the Live Stream. Facebook describes social plugins in the Other Websites and Applications sub-section under the Data Use Policy section:
Social plugins are buttons, boxes, and stories (such as the Like button) that other websites can use to present Facebook content to you and create more social and personal experiences for you. While you view these buttons, boxes, and stories on other sites, the content comes directly from Facebook. Sometimes plugins act just like applications. You can spot one of these plugins because it will ask you for permission to access your information or to publish information back to Facebook.
As this definition illustrates, there is no need to click on any button in order for the social plugin to communicate your behavior through multiple channels, as this is initiated by just loading a webpage. In 2010, Facebook announced that the Like button would cross territorial boundaries and take over the web by transforming the way people connect with websites, publishers and platforms outside Facebook. To emphasize the value of the Like button, Facebook provided data on the people who use it and argued that they are more engaged, have more friends, and are more active. Facebook argued that:
By showing friends’ faces and placing the button near engaging content (but avoiding visual clutter with plenty of white space), clickthrough rates improve by 3–5x … Many publishers are reporting increases in traffic since adding social plugins … people on their sites are more engaged and stay longer when their real identity and real friends are driving the experience through social plugins. (Facebook, 2010)
Different websites across the web were encouraged to embed social plugins into their architecture to gain more traffic and insights into people’s real identities. However, persuading publishers and websites to integrate social plugins took time. This is similar to Bell persuading department stores that using the telephone for purchasing would be better for them. At the same time, that practice helped to promote Bell through the co-operative advertising of showing telephone numbers in newspapers. By pushing websites to integrate social plugins, Facebook aimed to standardize and commodify people’s interactions with objects and other people, and their self-expression, and to make the rest of the web use its market currency.
As I showed in the previous chapter, while the advertising industry wanted to standardize listening tools and units that all digital advertisers, publishers and other companies should use, Facebook aimed to be the exclusive standard. This means that the web is filtered through Facebook’s social plugins in a recursive feedback loop that goes back and forth and adjusts itself according to the four mechanisms discussed in this chapter.
Social plugins and Facebook’s API render people’s digital lives, conducted outside Facebook’s territory, into its units and integrate them back into its platform, while gaining more knowledge about people’s actions across various spaces. This creates more value for Facebook. This kind of social engineering has become a primary tool for the biopolitical management of Facebook’s users, because by reproducing and filtering human (and non-human) interactions, the company seeks to create more value. For example, in its Information we receive and how it is used sub-section under the Data Use Policy section, Facebook indicates that:
We receive data whenever you visit a game, application, or website that uses Facebook Platform or visit a site with a Facebook feature (such as a social plugin), sometimes through cookies. This may include the date and time you visit the site; the web address, or URL, you’re on; technical information about the IP address, browser and the operating system you use; and, if you are logged in to Facebook, your User ID. Sometimes we get data from our affiliates or our advertising partners, customers and other third parties that helps us (or them) deliver ads, understand online activity, and generally make Facebook better. For example, an advertiser may tell us information about you (like how you responded to an ad on Facebook or on another site) in order to measure the effectiveness of—and improve the quality of—ads.
The time (date and specific time), physical location, type of browser, operating system and device you use all matter to Facebook for its database, as they did for the other advertisers discussed in the previous chapter. Here, Facebook has delegated some listening capacities to advertisers who, in turn, help the service to improve the ordering of ads by knowing more about its subscribers. This is done through every website, game and application, as well as every Facebook affiliate and advertising partner, that has integrated the social plugins. Data are communicated into Facebook and filtered through its currencies and ‘correct’ behaviors, receiving a classification that is then scanned by the Facebook Immune System algorithm (more on this below).
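The kind of record such a plugin-mediated channel communicates can be sketched from the policy's own list of fields (date and time, URL, IP address, browser and operating system, User ID when logged in). The function and field names below are my own illustrative choices, not Facebook's actual schema.

```python
from datetime import datetime, timezone

def plugin_impression(url, ip, user_agent, user_id=None):
    """Sketch of the record a social-plugin load might communicate,
    following the fields listed in Facebook's Data Use Policy."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "url": url,                # the page the person is on
        "ip": ip,                  # technical information about the visitor
        "user_agent": user_agent,  # browser and operating system
        "user_id": user_id,        # None for logged-out or non-member visitors
    }

record = plugin_impression(
    "https://example.com/article",
    "203.0.113.7",
    "Mozilla/5.0 (Windows NT 6.1)",
    user_id="1234567890",
)
print(record["url"])  # https://example.com/article
```

Note that the record is produced by merely loading the page; no click on the plugin is required, which is what makes this listening silent.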
As the previous chapter showed, one of the main web economies has been facilitated by cookies, whereby publishers and advertising networks opened a whole trading network in the back-end, which was silent to ‘ordinary people’. It is a network of accelerated-rhythm communication channels which are plugged into people’s bodies and create profiles based on their behaviors over time and all the time. There, publishers and website owners usually listened to people through cookies sent from their sites (first-party cookies) or from a group of sites facilitated by an advertising network (third-party cookies), which was still relatively decentralized. With Facebook there is a re-centralization of listening powers back to the platform, which listens to people’s behavior across the web, wherever there are social plugins.
The web economies that the digital advertising industry developed in the late 1990s, which flourished from measuring technologies and units such as cookies, web-bugs/pixels, clicks, impressions and hyperlinks, have merged together in Facebook’s territory and beyond. This is discussed in the Interactive Advertising Bureau (IAB)’s document ‘Social Media Ad Metrics Definitions’ (2009), in which it seeks to standardize social media metrics, arguing that it wants to:
[S]timulate growth by making the reporting of metrics for agencies and advertisers across multiple media partners more consistent. The IAB hopes that all players in the Social Media space will coalesce around these metrics to encourage growth through consistency. (IAB, 2009: 3)
In the document, all the previously used metrics appear again: unique visitors, page views, (return) visits, interaction rate, time spent, and video install (posting a link). Many other actions can now be listened to, however, through social plugins, in what the IAB calls ‘relevant actions taken’, which include: games played, videos viewed, uploads (e.g. images, videos), messages sent (e.g. bulletins, updates, emails, alerts), invites sent, newsfeed items posted, comments posted, friends reached, topics created, and number of shares (IAB, 2009: 8). Therefore, it is not only the ‘Like economy’, as Helmond and Gerlitz suggest (2013), but a mix of clicks and links and, most importantly, cookies combined with pixels (which are basically ‘web-bugs’, as discussed in the previous chapter) that allow multiple communication channels to function simultaneously in the ‘back-end’. These mechanisms allow Facebook to listen to people’s behavior across the entire web. These channels are all linked to Facebook, which produces both the architecture and the subjects, and therefore makes its territory a central node through which data is filtered.
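As a toy illustration, tallying the IAB's ‘relevant actions taken’ from a stream of categorized events amounts to simple counting in the back-end. The event names below merely echo the list quoted above; they are not an actual IAB or Facebook schema.

```python
from collections import Counter

# A hypothetical back-end event stream of 'relevant actions taken'
events = [
    "video_viewed", "comment_posted", "share", "video_viewed",
    "invite_sent", "share", "newsfeed_item_posted",
]

# Per-category tallies of the kind a metrics report would aggregate
metrics = Counter(events)
print(metrics["video_viewed"])  # 2
print(metrics["share"])         # 2
```

What the standard adds is not the counting itself but the agreement that every player reports these categories in the same way.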
While these websites and advertising companies produce people’s profiles by assigning what they consider to be anonymous IDs, Facebook already has profiles of users, as it forces them to use their real names. In doing so, Facebook has further developed cookies and provided a face and a name to the ID numbers that cookies provided in the past. At the same time, this technological development has helped Facebook to promote its service and standardize its own measuring unit, the Like. As Robert Gehl argues:
Facebook Connect is the ultimate expression of the standards-setting project of the IAB; after spending years building up a user base via network effects, Facebook’s IAB-inspired standardised datasets were opened up to marketers across the Web. Thus, social media templates have developed in large part as a result of the standardization of advertising practices established by the IAB. (Gehl, 2014: 108)
Facebook’s social plugins were a development inspired by the advertising industry, and specifically by the structure of advertising networks. The main architectural characteristic that Facebook developed was its position as the central node that orchestrates the rhythms of multi-layered communication channels. These channels simultaneously listen and produce subjects, which can later be targeted in ‘custom audiences’. Therefore, social plugins enhance Facebook’s listening capabilities by revealing people’s behavior inside and outside Facebook’s territory. With social plugins, Facebook can draw the Open Graph map of the web with richer profiles because it can listen to people’s behavior anywhere on the web and across devices.
At the same time, Facebook also filters the way people’s behavior will be categorized in the normality curve it structures. Instead of being an axis for advertising channels of communication, Facebook has changed what an ad network means by transforming the central node into a whole platform. This new, ever-mutating and expanding territory enables people to carry out their everyday lives, but these lives are constantly filtered through Facebook’s changing definitions of what it means to be ‘social’ and human, definitions that decide what has more value and hence more profit. Importantly, Facebook simultaneously conducts multiple communication channels, which cater for the different elements involved with this rhythmedia feedback loop, including: users, publishers, advertisers, advertising networks, and affiliates.
Facebook provides these third-party companies limited and controlled listening capacities, allowing them to produce data subjects. As the IAB’s metric standardization guide for social networks indicates, with Facebook Connect, “Web publishers are now able to build an even richer site experience by incorporating social features. These features include accessing user and friend data to customize the user’s experience and publishing user activity back to newsfeeds on social networks” (IAB, 2009: 7). However, advertising companies and publishers are restricted by Facebook in the kinds of listening they can deploy. In doing so, Facebook tries to shift the power relation and become a sort of advertising association that provides licenses to advertisers; deciding how much listening capacity they will have while, at the same time, setting its own standards for the measurement of people.
The section Facebook Ad Tracking Policy, which appeared under the umbrella of the Facebook Ads section, was removed in December 2014. It outlined the kinds of listening advertisers can and cannot conduct. Facebook also restricts advertisers that bid on subjects’ data with techniques such as ‘Impression Tracking Data’, ‘Third Party Ad Tracker’, and ‘Click Tracking Data’. All of these are measuring units, discussed in the previous chapter, that were developed by advertising associations. With such terms, Facebook establishes that, now, all of these must be authorized, licensed and filtered through its own units and communication channels. As the policy shows, such companies were obliged to be certified with Facebook by 2011, presumably in order not to make a profit on its subscribers’ data behind Facebook’s back (Figure 5.2).
Only Facebook’s measuring tools and units are authorized to produce data subjects; while all other players, from publishers, advertisers, apps, games, etc., need to adopt and listen to these data subjects in the same standardized and yet limited manner. As illustrated in Facebook’s Advertising Guidelines:
In no event may you use Facebook advertising data, including the targeting criteria for a Facebook ad, to build or augment user profiles, including profiles associated with any mobile device identifier or other unique identifier that identifies any particular user, browser, computer or device.
In this way, Facebook aims to produce data subjects and the meaning of sociality as a standard that everyone else needs to adjust to, while only the company has access to the full dataset. While other advertisers were restricted by Facebook in producing subjects, the company does not restrict itself, creating profiles from a wide range of sources, even of those who are not subscribed to the platform. In October 2011, Byron Acohido, a journalist for USA Today, revealed that users are being listened to across the web even if they have logged out and even if they have not subscribed to Facebook. According to van Dijck, Acohido
[F]ound out that Facebook tracks loyal users as well as logged-off users and non-members by inserting cookies in your browser. These cookies record the time and date you visit a website with a Like button or a Facebook plug-in, in addition to IP addresses … When confronted with these findings, Facebook claimed it was using these tactics for security reasons, but, obviously, tracking these kinds of correlations could also become a tempting business model. (van Dijck, 2013: 53)
This business model had already been used by the advertising industry for more than a decade, and Facebook has developed it further. In fact, Facebook has repeatedly argued that creating profiles for non-members is a bug. A good example of this is Facebook’s announcement on June 21, 2013 of a bug fix that jeopardized six million users, but on the way exposed the fact that the platform was building ‘shadow profiles’ by listening to people’s contact lists or address books on their phones and uploading them to Facebook (Facebook, 2013). According to tech journalist Violet Blue, “Facebook was accidentally combining user’s shadow profiles with their Facebook profiles and spitting the merged information out in one big clump to people they ‘had some connection to’ who downloaded an archive of their account with Facebook’s Download Your Information (DYI) tool” (Blue, 2013). But as we know with most of Facebook’s bugs: it’s not a bug, it’s a feature.
This ‘bug’ was revealed to be part of Facebook’s business strategy on May 26, 2016, when Facebook argued that it wanted to bring ‘better’ ads by “expanding Audience Network so publishers and developers can show better ads to everyone—including those who don’t use or aren’t connected to Facebook” (Bosworth, 2016). Whether or not ‘everyone’ wanted better ads was beside the point, apparently. On the same day, Facebook also changed its ad privacy control, resetting people’s preferences to opt in even if they had clearly indicated they wanted to opt out. In this way, Facebook changes people’s options for living online to fit its own version of sociality.
According to Arnold Roosendaal (2011), Facebook sends a unique user ID cookie when a person first creates an account. As Facebook indicates in its Data Use Policy, a User ID is “a string of numbers that does not personally identify you, while a username generally is some variation of your name. Your User ID helps applications personalize your experience by connecting your account on that application with your Facebook account. It can also personalize your experience by accessing your basic info, which includes your public profile and friend list”. According to Roosendaal, when users attempt to log in to the service from a different device, Facebook sends a temporary (session) cookie, which, after they log in, is replaced with the same unique user ID, allowing the service to link the same person across different devices. In this way, Roosendaal argues, Facebook knows who a user is even before they fill in their username and password. This is a similar technique to ad networks’ practice of cookie-synching, whereby the network can identify users by synching their cookie communications across multiple websites. The user ID, then, is the data subject that Facebook produces, but since the company has people’s names it can match the ‘anonymous’ numbers to people’s identities.
Therefore, people’s behavior across the web, apps and devices, specifically where social plugins and pixels are installed, is being listened to by Facebook and connected to their Facebook profiles, which include their real names. In doing so, Facebook wants to make sure it listens to the same body because it needs accurate production of data subjects that can then be monetized, either by selling them or influencing their peers. According to Roosendaal, Facebook also sends cookies to non-members, which creates ‘shadow profiles’; so that if and when this person creates a Facebook account, the history of their behavior that has been archived thus far will be synched to their unique user ID cookie and a Facebook profile. The data subject is in a constant process of production, with Facebook’s memory.
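The linking logic Roosendaal describes can be sketched as follows: browsing history gathered under a temporary session cookie is merged into the persistent user-ID cookie at login, so the same person is recognized across sessions and devices, while browsers with no account simply accumulate a ‘shadow’ history awaiting a future merge. All the names here are my illustrative assumptions, not Facebook's implementation.

```python
class CookieLinker:
    """Sketch of cookie-based identity linking: a persistent user-ID
    cookie replaces the temporary session cookie at login, merging any
    history gathered before identification."""
    def __init__(self):
        self.histories = {}  # cookie id -> list of page visits

    def visit(self, cookie_id, url):
        self.histories.setdefault(cookie_id, []).append(url)

    def login(self, session_cookie, user_id):
        # Replace the temporary session cookie with the persistent
        # user-ID cookie, folding pre-login history into the profile.
        history = self.histories.pop(session_cookie, [])
        self.histories.setdefault(user_id, []).extend(history)

linker = CookieLinker()
linker.visit("session-abc", "news.example/story")  # pre-login browsing
linker.login("session-abc", "uid-42")              # same device, now identified
linker.visit("uid-42", "shop.example/item")        # post-login browsing
print(linker.histories["uid-42"])
# ['news.example/story', 'shop.example/item']
```

A ‘shadow profile’ in this sketch is simply a history keyed to a cookie that has not yet been claimed by a `login` call.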
With social plugins, Facebook has expanded the listening process even further to capture all the temporalities of people’s actions within and outside its territory. In the next section, I focus on another non-human filter—algorithms. I illustrate the way Facebook uses algorithms to reorder people’s spatial and temporal configuration; to influence their behaviors by encouraging specific ones it prefers (sociality) and filtering and removing ones that can harm the business model of the service (spam). In short, how Facebook conducts rhythmedia.
Facebook operates several algorithms that have different purposes. According to Tarleton Gillespie (2014), algorithms are procedures that take input data and process them into a desired output by using specific calculations that instruct the steps to be taken. Because algorithms rely on input data, meaning people’s behavior, the bigger the database, the more ‘relevantly’ they can operate (whatever relevance may mean to the company that deploys them). Therefore, Facebook’s social plugins are a way for the company to listen to people’s behaviors beyond its platform and produce a richer database/archive that its algorithms can use. As Gillespie (2014) argues, algorithms “not only help us find information, they provide a means to know what there is to know and how to know it, to participate in social and political discourse, and to familiarize ourselves with the publics in which we participate. They are now a key logic governing the flows of information on which we depend” (2014: 167). In this sense, algorithms are one of the tools media practitioners use to conduct rhythmedia, a way to reorder and shape people’s temporal and spatial boundaries. Algorithms want to know us to order us.
However, as Taina Bucher argues, algorithms “do not merely have power and politics; they are fundamentally productive of new ways of ordering the world. Importantly, algorithms do not work on their own but need to be understood as part of a much wider network of relations and practices” (Bucher, 2018: 20). These networks that Bucher points at are precisely the four filtering mechanisms I outlined above. It is important to remember that the filtering mechanisms I describe are interrelated, entangled, and feed one another; they do not operate by themselves. Interestingly, though, Bucher examines mostly people’s and journalists’ engagements with Facebook’s algorithms, with hardly any consideration of the way such orderings are influenced by Facebook’s main source of income—advertisers. It is precisely this ordering that I focus on now. The two algorithms that will be discussed in this section are the newsfeed algorithm, usually termed EdgeRank, and the Facebook Immune System (FIS). As with any platform, these algorithms may no longer exist by the time this book is out; they may have been tweaked, changed, or divided into other algorithms. The main point here is not their names but rather what they do, and what the rationale behind them is.
Facebook’s newsfeed algorithm is meant to organize and present things according to a specific tempo-spatial order that is calculated from various factors. Facebook argues that EdgeRank’s calculations operate according to three main parameters: affinity, weight, and time decay. Mimicking the advertising network DoubleClick’s motto mentioned above, Facebook argues that its newsfeed’s goal “is to deliver the right content to the right people at the right time so they don’t miss the stories that are important to them” (Backstrom, 2013). Because people do not have enough time to go over all of the stories, Facebook wants to optimize their experience and reorder their time right. But what is ‘right’ for Facebook is not necessarily what is right for people. As Facebook says, its “ranking isn’t perfect, but in our tests, when we stop ranking and instead show posts in chronological order, the number of stories people read and the likes and comments they make decrease” (Backstrom, 2013). Ranking, then, leads to more engagement, and this is what Facebook sells as the right thing for you. Since engagement is important for the ongoing production of data subjects, any sign of a decrease in such actions is something the company would like to avoid. Therefore, content and interactions on Facebook are not presented in chronological order.
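The three parameters are often summarized in secondary accounts as a product of affinity, weight, and time decay, summed over a story’s interactions (its ‘edges’). The sketch below illustrates that commonly cited formulation only; the weights, the half-life decay curve, and all names are my illustrative assumptions, not Facebook’s proprietary implementation:

```python
from dataclasses import dataclass

@dataclass
class Edge:
    """One interaction ('edge') connecting a viewer to a story."""
    affinity: float   # closeness between viewer and story creator (0..1)
    weight: float     # interaction type, e.g. a comment counts more than a like
    age_hours: float  # how long ago the interaction happened

def time_decay(age_hours: float, half_life: float = 24.0) -> float:
    """Older interactions count for less; the real decay curve is not public."""
    return 0.5 ** (age_hours / half_life)

def edgerank(edges: list[Edge]) -> float:
    """Score a story as the sum of affinity * weight * decay over its edges."""
    return sum(e.affinity * e.weight * time_decay(e.age_hours) for e in edges)

# A fresh comment from a close friend outranks an old like from a stranger,
# which is why chronological order and ranked order diverge.
story_a = [Edge(affinity=0.9, weight=3.0, age_hours=2.0)]
story_b = [Edge(affinity=0.2, weight=1.0, age_hours=48.0)]
assert edgerank(story_a) > edgerank(story_b)
```

The point the sketch makes concrete is that ‘time decay’ does not restore chronology: recency is only one multiplier among several, so a sufficiently ‘engaging’ old story can be resurfaced above a newer one.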
Since timing is so crucial to producing data subjects and sociality, Facebook’s newsfeed algorithm produces a certain temporality that engineers all these elements in the ‘right’ way. During the year-long auto-ethnography I conducted on Facebook’s desktop website from November 2013 until November 2014, Facebook changed my newsfeed preference from Most Recent to Top Stories 71 times against my wishes. These changes occurred mostly when I did not visit Facebook frequently, and sometimes it changed my preference several times on the same day if I visited the platform many times that day. The platform listened to my daily rhythms and changed and adapted the architecture accordingly. I received an experience that I actively chose not to have.
The design of the newsfeed sorting is confusing: to make the choice, the user needs to press the ‘sort’ button and discover the two options. More effort and steps have to be taken to change to Most Recent because the default setting is always Top Stories. When Most Recent is chosen, the newsfeed displays a sentence at the top that tries to persuade the user to come back to the desired feature: ‘Viewing most recent—Back to top stories’. So despite arguing in its post about the newsfeed that the way it shows content is by ‘letting people decide who and what to connect with’ (Backstrom, 2013), Facebook constantly ignored my explicit wishes and changed the sorting back to the default. My default settings were chosen for me.
This matter was disclosed in the Controlling what you see in Newsfeed section, in a small note at the bottom saying—‘Your News Feed will eventually return to the Top Stories view’. However, this statement only started to appear on July 27, 2014. In this way, people are repetitively nudged through default design to learn how to behave in Facebook’s ‘right’ way. What Facebook does here is orchestrate people’s territory by re-ordering things in a way that the company thinks might yield more engagement. “An algorithm”, as Bucher argues, “essentially indicates what should happen when, a principle that programmers call ‘flow of control’, which is implemented in source code or pseudocode” (Bucher, 2018: 22, emphasis in original). It is precisely the when that Facebook aims to orchestrate, but instead of the passive concept of ‘flow’ I use rhythmedia, which, as Williams argues, brings back the intention. In other words, Facebook conducts rhythmedia in a way that intends to influence people’s behavior towards more engagement and hence more knowledge production.
Another way to encourage people to engage more by reordering time on the platform is to resurface older posts on the newsfeed. This change to the newsfeed algorithm was announced on August 6, 2013, when Facebook argued that its “data suggests that this update does a better job of showing people the stories they want to see, even if they missed them the first time” (Backstrom, 2013). According to Backstrom, tests showed an increase of 5% in Likes, Comments, and Shares for ‘organic’ stories and an 8% increase in Page engagement. More engagement on its platform produces more value, so Facebook instructs its algorithm accordingly—emotions such as nostalgia can be manipulated to bring more profit. This notion was probably inspired by Facebook’s research two years earlier, which suggested that:
[S]ince much of the content on social media services has an ephemeral nature, disappearing from view a few weeks after it was shared, a final means of stimulating communication could be the resurfacing of prior content. For relationships that have been inactive for some time, services could choose to highlight prior interactions, such as a status update or photos with comments. These stories could spur nostalgic memories and create a context to re-engage. (Burke et al., 2011: 9)
The researchers try to argue that the ephemeral design of social media is some sort of ‘natural’, objective, and organic way platforms operate, and not something they engineered. But while Facebook tries to present this conduct as ‘stimulating’, ‘highlighting’ and ‘re-engaging’, what is actually happening is a calculated manipulation of time and emotions to increase engagement; our past has value for the future. In this way, although people are supposedly given the option to engage with only the most recent and ‘fresh’ posts and photos, Facebook pushes its own ‘right’ way of what might be more (emotionally) engaging through specific instructions to its algorithm. Importantly, Facebook constantly restructures its territory, features and algorithms to push people into more engagement on the platform, as this gives it more data to listen to, enabling it to produce richer data subjects for monetization.
Harnessing our past, and the emotions and hence behaviors it can ‘stimulate’, is a strategy to package the future of possible engagement. Platforms like Facebook try to be the producers of time, to be able to control and shape it according to their business model. As Facebook researchers argue, the newsfeed “algorithmically ranks content from potentially hundreds of friends based on a number of optimization criteria, including the estimated likelihood that the viewer will interact with the content” (Bernstein et al., 2013: 2). Ranking, then, is also influenced by predictions of people’s future engagement. As the digital advertising industry has understood since the late 1990s, predictions about future actions can be made more accurate by analysing people’s past behavior. At the same time, it also means that our past dictates our future, and this is dangerous in ways that are hard to predict.
For example, as Julia Angwin has revealed, Facebook’s advertising system enables advertisers to discriminate against people according to their race (or ‘ethnic affinity’, as the platform calls it). Angwin and Terry Parris Jr. show how they managed to create ads for housing while excluding Black, Hispanic, and other “ethnic affinities” from seeing these ads. In a separate article, Angwin showed how people could be excluded from seeing specific job ads according to their age (Angwin et al., 2017) and other characteristics. Additionally, as Karen Hao (2019) shows, Facebook’s ad system also discriminates according to gender, by showing job ads for nurses and secretaries to a higher fraction of women. People with demographic (race, gender, sexual preference, socio-economic condition) and other characteristics (such as disability) that deviate from an ideal norm are filtered out as they do not yield value or profit. They are excluded from specific options of living. Through this act of filtering, Facebook conducts a rhythmedia that orders people’s personalized spaces according to their past and, by doing so, prescribes their future.
Another important input that is measured and calculated by the newsfeed algorithm is the speed of people’s mobile networks or Wi-Fi connections. This input is especially relevant for people who come from developing countries and whose connections are slow or less stable. As Chris Marra, Emerging Markets Product Manager, and Alex Sourov, Emerging Markets Engineering Manager at Facebook, argue, “if you are on a slower internet connection that won’t load videos, News Feed will show you fewer videos and more status updates and links” (Marra and Sourov, 2015). This is a way for Facebook to listen to ‘lesser able’ bodies and restructure the territory in a way that enables them to engage as well. It enables them to still be reproduced as data subjects without being irritated by slow or absent access.
However, there are other factors that instruct algorithms’ calculations of the inputs they use: the advertisements that advertisers and brands pay and bid for to be ranked higher on people’s newsfeeds. This is usually marked by a semantic distinction, where sorting influenced by ad payment is called ‘paid’ as opposed to ‘organic’, which is supposed to be the naturally sorted feed. Facebook Product Management Director on the ads team, Fidji Simo, argues that:
The value for advertisers is a combination of how much they bid for their ad as well as the probability that their ad will achieve the objective the advertiser sets for it—whether that’s a click, a video view, an impression or anything along those lines. Value for users is determined by how high quality the post is and whether it will impact the user experience. (Lynley, 2014)
‘Ad placement’ is carried out in a careful way whereby the end goal is to influence people towards a specific behavior. It is a calculative game of encouraging advertisers to bid as much as possible while not driving away people, especially since most users prefer not to have ads on their newsfeed (Levy, 2015). This factor, of brands or advertising companies paying to appear, and preferably higher, on people’s newsfeeds, is not described as part of Facebook’s newsfeed algorithm calculations. In its How News Feed Works section, Facebook presents several questions about the functions of its algorithm, specifically addressing the question, ‘How does my News Feed determine which content is most interesting?’. Facebook answers: “The News Feed algorithm uses several factors to determine top stories, including the number of comments, who posted the story, and what type of post it is (ex: photo, video, status update, etc.)”. There is no mention of ‘organic’, ‘paid’, or the bidding of advertisers, brands and other third-party companies. However, as shown elsewhere in this book, they are a vital component in the way the newsfeed algorithm operates. Facebook’s relationship with advertisers is complex, as they are the main funders of the platform and yet Facebook cannot afford to give them too much power. This intricate dynamic is evaluated below.
Paying to be ordered by Facebook’s newsfeed algorithm means that advertisers need to act in congruence with what Facebook defines as legitimate advertising practices. An example of this surfaced in a video called Facebook Fraud, published by the Veritasium2 project on February 10, 2014. In the video, Derek Muller, the creator of this YouTube channel, shows how he tried to promote his page in the authorized—licensed—way, using Facebook’s Promote Page. Muller discovered that of the approximately 80,000 Likes he got following his purchase, most came from Asia, and that these ‘paid users’ clicked on a wide variety of brands and entities to avoid detection. However, these clicks did not result in engagements, which made the page, as Muller stated, ‘useless’. This was because these paid users, human or non-human, were not Commenting, Sharing, or Liking the content on his page, which signalled to the newsfeed algorithm that this content should be less prominent. This would then affect people who had engaged, since the Veritasium Page would not appear on their newsfeed. Consequently, even the engaged people would not interact on his page since they would see it much less frequently or not at all.
The Promote Page service contrasts with buying Likes, an illegitimate business model whereby organizations and individuals can buy Likes through ‘click-farms’. These organizations hire low-paid workers from Asia to click on specific links/Pages/YouTube channels to increase the number of Likes/views of a post or video and, therefore, show a fake popularity counter for a brand. On October 3, 2014, Facebook’s Site Integrity Engineer Matt Jones advised Pages not to buy fraudulent Likes:
Fraudulent likes are going to do more harm than good to your Page. The people involved are unlikely to engage with a Page after liking it initially. Our algorithm takes Page Engagement rates into account when deciding when and where to deliver a Page’s legitimate ads and content, so Pages with an artificially inflated number of likes are actually making it harder on themselves to reach people they care about most. (Jones, 2014, my emphasis)
Although Facebook argues that buying fake Likes is an ‘artificial’ behavior that will harm a Page’s performance or business goals, its own service acts in the exact same way. Similar to the politics of categorization shown through examining spam and cookies in the previous chapter, the only difference between the Promote Page and click-farm methods is who licenses them, and who categorizes them and how. Facebook authorizes its own paid service for getting more Likes, whereas organizations that are not Facebook but conduct the same practice are labelled illegitimate ‘click-farms’. Facebook legitimizes its practices with a license to make its own definitions, in the same way as the IAB and other advertising associations. The service can draw the line of legitimacy in its territory and standardize its trade practices, which benefits its business model. By doing so, it retains a monopoly over the production of territories and data subjects and the way they are ordered. Importantly, this is how it regulates rhythms in its territory. One of these regulation processes was to make a distinction between paid and unpaid ‘reach’, which it calls ‘organic’.
Recently, the term ‘organic’ has become a catchphrase in Silicon Valley’s terminology. This term is usually taken to mean that things are ‘naturally’ ordered according to people’s engagement on the platform. As I have shown so far, however, there is nothing natural about the production of knowledge through media, and this is not a new thing. The way that media practitioners have been conducting processed listening and rhythmedia has precisely targeted this notion of feeling natural, of experiencing things in ‘real-time’ rather than as technologically mediated.
Strategies of making ordering feel ‘organic’ were discussed in Chapter 3, in Bell’s attempt to present its decibel as an objective representation of the ordering of sounds in New York City. The telephone operator training programs were also meant to provide a ‘real-time’ mediation, turning operators into efficient communication channels operating as fast as machines, decreasing noise and delays. In Chapter 4, the organic ‘ordering’ was conducted by advertisers and publishers who traded people in the automated online market while hiding the multi-layered communication channels of Real-Time Bidding at the back-end, facilitated by cookies and through the default browser design. ‘Organic’ has always been about ordering things and their relations while concealing the decision-making processes behind them. It is about creating asymmetric power through mediated territories. There is nothing organic about rhythmedia.
For Facebook, the distinction between ‘organic’ and paid is used to sell a service that makes profit from advertisers and brands by intervening in the newsfeed’s algorithmic ordering. Facebook argues that there is a difference between organic reach and paid reach: “[o]rganic reach is the total number of unique people who were shown your post through unpaid distribution. Paid reach is the total number of unique people who were shown your post as a result of ads” (Facebook, 2016). As this definition illustrates, organic reach is a combination of the advertising industry’s measuring standards: unique visitors and page impressions. What Facebook implies is that when companies do not pay or bid for ads, there is no intervention in the ordering of the newsfeed algorithm. However, as discussed above, Facebook constantly changes both its design and algorithms to influence people’s behavior towards more engagement.
Further light is shed on paid versus organic reach by Facebook’s announcement on February 11, 2015, launching ‘relevance scores’ for ads. This feature calculates a score between 1 and 10, which Facebook bases on the positive and negative feedback it foresees an ad receiving from a target audience. This service, argues Facebook, helps advertisers in several ways: “It can lower the cost of reaching people. Put simply, the higher an ad’s relevance score is, the less it will cost to be delivered. This is because our ad delivery system is designed to show the right content to the right people, and a high relevance score is seen by the system as a positive signal” (Facebook, 2015). Previous metrics standards of the advertising industry are used by Facebook to predict the future actions of its people—relevance is packaged as a product, personalization as an ideal experience.
According to Facebook, ‘positive’ interactions depend on the ad’s objective, but generally relate to views (impressions), clicks or conversions,3 whereas ‘negative’ interactions relate to users hiding the ad or reporting it. Whether positive or negative, all actions count, as they give an indication of relevance to a particular user. In this way, even actions which the platform encourages people not to take, and which will not be ordered, still count and have value. However, this feature comes with a caveat. Facebook makes clear that, although the use of this relevance score might reduce advertisers’ costs, they still need to bid high to be delivered successfully to their desired audience:
Of course, relevance isn’t the only factor our ad delivery system considers. Bid matters too. For instance, if two ads are aimed at the same audience, there’s no guarantee that the ad with an excellent relevance score and low bid will beat the ad with a good relevance score and high bid … As has long been the case on Facebook, the most important factor for success is bidding based on the business goal you hope to meet with an ad. (Facebook, 2015)
The higher the bid, the higher a business’s chance of success, or, in the territory’s terms, a prioritized position and timing on the newsfeed. Bidding on Facebook, as it explains in its ‘Ad auction’ section addressed to advertisers, is a combination of three key factors: advertisers’ bids, estimated action rates, and ad quality and relevance. This means that bidding is a key element in the way that Facebook’s newsfeed orders things.
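The interplay of these three factors can be sketched as a toy auction. The combination rule below (bid weighted by the predicted action rate, plus a quality term) is my assumption for illustration; Facebook’s actual ‘total value’ formula and weightings are proprietary:

```python
from dataclasses import dataclass

@dataclass
class AdCandidate:
    name: str
    bid: float              # advertiser's bid in currency units
    est_action_rate: float  # predicted probability of the desired action
    quality: float          # ad quality / relevance signal

def total_value(ad: AdCandidate) -> float:
    # Hypothetical combination of the three factors; the real
    # weighting between bid, action rate and quality is not public.
    return ad.bid * ad.est_action_rate + ad.quality

def run_auction(ads: list[AdCandidate]) -> AdCandidate:
    """The slot goes to the highest combined value, not the highest bid."""
    return max(ads, key=total_value)

# A lower bid can still win if the predicted engagement is high enough.
ads = [
    AdCandidate("high bid, poor fit", bid=5.0, est_action_rate=0.01, quality=0.02),
    AdCandidate("low bid, good fit", bid=1.0, est_action_rate=0.10, quality=0.08),
]
winner = run_auction(ads)
```

The sketch captures the point made above: because predicted engagement multiplies the bid, the auction structurally rewards whatever content is expected to provoke a response.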
Moreover, it shows how Facebook continues and develops another digital advertising industry market tool—real-time bidding—and renders it into its own standard. This is another indication of how Facebook wants to be the central hub for advertising across the web, while forcing all other players to adopt its standards and measuring devices. As I mentioned in the previous chapter, real-time bidding is a set of automated systems which enable different actors in the advertising industry to buy and sell ‘ad inventory’ (people and spaces) at the ‘back-end’ by bidding within milliseconds to shape people’s ‘real-time’ experience at the front-end. All these systems cater to advertisers who, since the dot-com bubble crash, have become the main income source for social media platforms. The ad matching system of real-time bidding “examines all the ad campaigns placed by different advertisers in a particular time interval, their bids, and runs an auction to determine which ads are selected” (Andreou et al., 2018: 3). Time is important because it is the way to place an ad at ‘the right time’ on people’s newsfeeds:
Facebook has a piece of ad real estate that it’s auctioning off, and potential advertisers submit a piece of ad creative, a targeting spec for their ideal user, and a bid for what they’re willing to pay to obtain a desired response (such as a click, a like, or a comment). Rather than simply reward that ad position to the highest bidder, though, Facebook uses a complex model that considers both the dollar value of each bid as well as how good a piece of clickbait (or view-bait, or comment-bait) the corresponding ad is. (Martinez, 2018)
This rhythmedia strategy illustrates that one of the ad auction’s main purposes is to push people into action; baiting for more engagement. Such ‘baiting’ is also the same mechanism that promotes mis- and disinformation and other sensational material which attracts a lot of ‘engagement’ and at the same time threatens our societies. As Siva Vaidhyanathan argues:
One of the keys to the success of “fake news” is that often these pieces were designed expertly to play both to the established habits of rapid sharers of Facebook content and to Facebook’s EdgeRank algorithm. They reinforced existing beliefs among a highly motivated subset of Facebook users. Absurd or controversial posts are likely to be shared and cheered by those willing to believe them and dismissed, commented upon, argued about, and shared by those who dismiss the veracity of those posts. If someone sees an obviously fraudulent claim on a Friend’s Facebook site and responds to it, it’s likely to flare a long and angry argument among different camps. As we know all too well, Facebook is designed to amplify that sort of engagement. So the pieces spread (Vaidhyanathan, 2018: 184).
The ordering of things in specific times and spaces is used to influence people’s behavior towards a specific action, a ‘desired response’. Therefore, their actual validity, truthfulness, or factuality is irrelevant here as long as they are more engaging. An important component of the bidding is ‘estimated action rates’, which is the data gathered from listening to people’s behaviors. Such measurement indicates how many times, at what times, and at what frequency people engage with things and other people (as discussed above with Sponsored Stories) on the platform. The data are assembled into a dynamic archive by conducting processed listening to people’s actions in multiple spaces within and outside Facebook. These data, people’s past rhythms, then feed the ordering of ads on people’s newsfeed to influence their future behavior towards more engagement.
Platforms use algorithms to, as Foucault would argue, enact power over people’s actions. In the case of Facebook, the company bases its “estimates on the previous actions of the person you’re trying to reach and your ad’s historical performance data. We recommend optimizing for an action that happens at least 15–25 times per week (though more than that is better) for best results”. People’s most repetitive actions can be harnessed and used as an indicator of an estimated future action in the bidding, for a better rhythmedia. This is precisely why it is important to create a database of people’s behaviors that is constantly produced, because this creates an endless source of revenue. That dataset is produced from the ongoing processed listening to people’s pace, the frequency of their actions, the time of the day/week they make these actions, and the time spent on specific objects and relations. As Shoshana Zuboff argues on this new business model:
This entails another shift in the source of surveillance assets from virtual behavior to actual behavior, while monetization opportunities are refocused to blend virtual and actual behavior. This is a new business frontier comprised of knowledge about real-time behavior that creates opportunities to intervene in and modify behavior for profit … This new phenomenon produces the possibility of modifying the behaviors of persons and things for profit and control. (Zuboff, 2015: 84)
Facebook has been measuring people’s actions and time spent on specific things to get these ‘monetization opportunities’ even if there is no visible indication of them (such as liking, sharing, or commenting). The company measures how often people have interacted with things and people in different time intervals (Backstrom, 2013). Furthermore, the platform has been measuring not only which videos people watch but how long they watch them (Welch and Zhang, 2014). The platform also considers the time spent on stories (Yu and Tas, 2015), taking “into account the amount of time people spend on a particular story relative to other content in their News Feed” (Wang and Zhou, 2015). Here, Facebook illustrates how the amount of time people spend on things is statistically measured and compared to their engagement with other things. Just like the digital advertising industry, Facebook constructs specific time-based measuring rules that indicate a person’s frequent action in relation to another person or object. When the duration and rhythm of actions are higher than those of other actions, this is an indication of a preference which can be commodified and traded in the ad auction.
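The idea of measuring dwell time ‘relative to other content’ can be made concrete with a minimal sketch: each story’s time-spent is normalized against the viewer’s own average, so attention is read comparatively rather than in absolute seconds. The function and story names are illustrative assumptions, not Facebook’s actual metric:

```python
def relative_dwell(times: dict[str, float]) -> dict[str, float]:
    """Express time spent on each story relative to the viewer's own mean,
    so a long pause on one story stands out against that person's baseline."""
    mean = sum(times.values()) / len(times)
    return {story: t / mean for story, t in times.items()}

# Seconds spent on three stories in one session (hypothetical data).
dwell = {"story_a": 2.0, "story_b": 4.0, "story_c": 12.0}
scores = relative_dwell(dwell)
# story_c receives twice this viewer's average attention and would be
# read as a preference signal, even without a like, share, or comment.
```

The comparative framing matters: a slow reader is not mistaken for an engaged one, because each behavior is measured against that person’s own rhythm.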
“Just understanding time is huge”, as Mark Rabkin, Facebook’s VP of engineering for ads, says, “[w]e want to understand whether you’re interested in a certain thing generally or always. Certain things people do cyclically or weekly or at a specific time and it’s helpful to know how this ebbs and flows” (Rabkin quoted in Mannes, 2017). Such frequency-based rules help produce predictions that can be packaged into products. As the company argues, such measurements can “control the amount you spend on each audience, decide when they will see your ads, and measure their response. The ad delivery system will optimize delivery for the best-performing ad in an ad set” (Facebook Business, 2014). People’s behaviors and temporal orderings are commodified and traded to the highest bidder. However, Facebook knows that people do not want to see ads on their newsfeed. On November 14, 2014, Facebook made an announcement:
People told us they wanted to see more stories from friends and Pages they care about, and less promotional content … What we discovered is that a lot of the content people see as too promotional is posts from Pages they like, rather than ads. This may seem counterintuitive but it actually makes sense: News Feed has controls for the number of ads a person sees and for the quality of those ads (based on engagement, hiding ads, etc.), but those same controls haven’t been as closely monitored for promotional Page posts.
Facebook promises to instruct its newsfeed algorithm to decrease the ‘organic’ reach of Pages’ promotional content. In other words, by saying that promotional organic-reach posts will decrease, Facebook hints that Pages need to purchase and/or bid for ‘paid’ reach to be ordered on people’s newsfeeds. One of the ‘traits’ of these overly promotional posts is ‘Posts that reuse the exact same content from ads’. Brands that aim to emphasize their messages can repeat the same messages, once when they pay for them through Facebook’s paid services and again when they post them for free. However, this creates what Facebook considers to be excessive rhythm, a burden on the system. Here, Facebook trains brands and advertisers not to share excessively, just as it does with its subscribers (more on this below). In this way, it regulates certain rhythms by pushing companies to buy and bid rather than repeat the same posts both as paid promotions and as free ones. Beyond the regulation of advertisers, the platform also regulates its subscribers by establishing what a healthy body is. To do that it uses its Facebook Immune System algorithm.
On November 10, 2011, Facebook revealed its National Cybersecurity Awareness Month Recap and the Facebook Immune System (FIS) algorithm. During October, Facebook celebrated cyber security by announcing several new security features, the most important of which was FIS: “We have invested tremendous human, engineering, and capital resources to build a system for detecting and stopping those that target our service, while protecting the people who use it. We call it the Facebook Immune System (FIS) because it learns, adapts, and protects in much the same way as a biological immune system” (Facebook, 2011).
According to Facebook’s researchers (Stein et al., 2011), FIS is a machine learning algorithm that scans all the behaviors performed by people on Facebook to classify them according to specific categories and detect anomalies. As of March 2011, the researchers were conducting “25B checks per day, reaching 650K per second at peak” (Stein et al., 2011: 1). In this way, people’s behaviors are listened to and statistically measured, examined and categorized in ‘real time’ to create a normality curve of the healthy human body. Bodies with irregular rhythms are deemed sick or non-human and categorized as spam. This shows how, when an irregular behavior occurs in terms of its frequency and rhythms (compared with others), Facebook can infer that this is an unwanted ‘spammy’ behavior. This categorization relies on the platform’s definition of what a normal and legitimate behavior is:
Algorithmically, protecting the graph is an adversarial learning problem. Adversarial learning differs from more traditional learning in one important way: the attacker creating the pattern does not want the pattern to be learned. For many learning problems the pattern creator wants better learning and the interests of the learner and the pattern creator are aligned and the pattern creator may even be oblivious to the efforts of the learner. (Stein et al., 2011: 1)
Presenting itself as ‘the learner’, Facebook suggests that it has the same interests as the ‘pattern creators’, the people who use the platform. However, as I discussed above, there is a set of behavioral norms embedded in the platform’s affordances. Despite my wish for a recent, chronologically organized newsfeed, Facebook repeatedly changed my newsfeed preferences against my wishes. Therefore, there are other factors fed into this machine learning calculation which are not mentioned.
The FIS consists of five mechanisms: Policy Engine, Classifier Services, Feature Extraction Language (FXL), Dynamic Model Loading, and Feature Loops (Floops). The first step is the Policy Engine, which applies to people’s actions all the relevant policies engineered into the algorithmic calculations by Facebook: “decision about how and when to respond can depend on business or policy considerations. For example, an action in one region might be more creepy or undesirable than in another region” (Stein et al., 2011: 6). In this way, the Policy Engine conducts rhythmedia on people, features and their connections to express the local business logic and respond accordingly – constructing the deviant is contextual. Responses include, for example, “blocking an action, requiring an authentication challenge, and disabling an account” (Stein et al., 2011: 3). Intervening in specific times and spaces, then, is important for the frictionless operation of the platform presented as a ‘real-time’ experience.
The Classifier Services categorize people’s behaviors according to the Policy Engine’s guidelines and update the system accordingly. This means that the company holds the power to decide which people and actions are legitimate on its platform and which are not. The Floops component is the dynamic archive discussed above, which stores and retrieves data about people’s behaviors. It is “a shared memory about past observations and classifications” (Stein et al., 2011: 7). Floops implement three mechanisms—Inner, Middle and Outer—to listen to people’s actions in different time intervals, capturing the valuable repetitions. The Inner Floop counts the number of times a specific action is made within a defined period of time: “For example, the number of times a URL has been posted on a channel in the past hour” (Stein et al., 2011: 7). The repetitive rhythms of posting are fed as inputs for classification of whether they harm or benefit Facebook.
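The Inner Floop’s per-hour counting can be pictured as a sliding-window counter. The sketch below is in the spirit of Stein et al.’s example (how many times has a URL been posted in the past hour?); the class, its structure, and the example URL are my illustrative assumptions, not Facebook’s implementation:

```python
from collections import defaultdict, deque

class InnerFloopCounter:
    """Sliding-window counter: how many times has a key (e.g. a URL)
    been observed in the last `window` seconds? Illustrative sketch only."""
    def __init__(self, window: float = 3600.0):
        self.window = window
        self.events: dict[str, deque] = defaultdict(deque)

    def observe(self, key: str, now: float) -> int:
        """Record one occurrence of `key` at time `now` (seconds) and
        return the count of occurrences inside the current window."""
        q = self.events[key]
        q.append(now)
        while q and q[0] <= now - self.window:  # evict events that aged out
            q.popleft()
        return len(q)

counter = InnerFloopCounter(window=3600.0)
counter.observe("http://example.com/posted-link", now=0.0)
counter.observe("http://example.com/posted-link", now=1800.0)
count = counter.observe("http://example.com/posted-link", now=4000.0)
# the event at t=0 has aged out of the one-hour window, so count == 2
```

Such a counter produces exactly the kind of rhythm data the chapter describes: not what was posted, but how often and at what tempo, fed onward for classification.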
The Middle Floop applies more complex operations beyond counting, specifically focusing on IP addresses and URLs, which help in understanding where a behavior comes from and in establishing whether the actor is a human, a bot, or a hired click worker. The Outer Floop uses Memcache, a distributed memory object caching system meant to speed up the dynamic ordering of algorithmically mediated platforms. Behaviors across the web are logged daily to the Memcache, and in this way, the Outer Floop understands whether an action was performed by many people across multiple spaces. This enables it to detect harmful rhythms (as defined by Facebook) conducted outside Facebook and to act upon them within the platform by filtering them out.
The advantage of the FIS algorithm is its fast update for new models and policies: ‘[a]ttackers change behavior a lot faster than people change their buying patterns’ (Stein et al., 2011: 3). For example, the researchers provide a timeline of phishing to show how time and frequency play important roles in detecting ‘attackers’. Such ‘abnormal’ behavior is detected through spikes of high-frequency similar behavior, which is inferred as deviant. Rhythms and time are extremely important, then, for ensuring Facebook and its Open Graph remain ‘safe’; but also, as discussed above, they help in producing a knowledge database of people’s behavior that can be monetized. Understanding whether someone is a human, a bot or a hired click worker is key here, as people’s ‘estimated action rates’ are an important metric in ad auctions.
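The spike detection described here can be read as comparing a behavior’s current frequency against its historical baseline. A sketch of that logic, where the threshold factor and minimum-activity floor are illustrative assumptions rather than values from Stein et al.:

```python
from statistics import mean

def is_spike(hourly_history, current_count, factor=3.0, floor=10):
    """Flag a behavior whose frequency in the current hour far exceeds
    its baseline over previous hours: the kind of 'abnormal' high-frequency
    pattern inferred as deviant."""
    if current_count < floor:  # too little activity to judge
        return False
    baseline = mean(hourly_history) if hourly_history else 0.0
    return current_count > factor * max(baseline, 1.0)

# A phishing-like burst: a link normally shared about twice an hour
# is suddenly shared 40 times in the current hour.
history = [2, 1, 3, 2, 2]
print(is_spike(history, 40))  # True: flagged as a deviant rhythm
print(is_spike(history, 4))   # False: below the activity floor
```

Note that what counts as a ‘spike’ is entirely a policy choice encoded in the threshold, which is precisely the power the chapter is describing.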
The social graph in this case functions as more than a dynamic archive; it not only continuously stores people’s behaviors but also orchestrates their rhythms; it “contains user information and facilitates connections between people to share information. It has two basic properties valuable to attackers. It stores information and it is a powerful viral platform for the distribution of information” (Stein et al., 2011: 3). This ‘facilitation’ and these ‘powerful viral’ features are enacted by rhythmedia, which conducts the way sociality is orchestrated. Repetitive behaviors are key to FIS’s operations, because they enable Facebook to learn people’s preferences and to orchestrate when and where people, objects and their relations will connect or disconnect on the platform, according to a rhythm that yields more value.
Importantly, the FIS algorithm uses two main elements to protect the Open Graph: first, global knowledge; and second, users’ feedback (such as reporting violations, as discussed below). User feedback means processed listening to people’s behavior while they engage on and with the platform. This could be ‘explicit’ behavior, such as marking something as spam, or ‘implicit’ behavior, such as deleting a post.
‘Implicit’ feedback is, as mentioned above, every type of action or just ‘living’ on the platform—clicking, viewing, pausing, posting excessively, deleting a post, unfriending/unliking, visiting a profile, writing on Messenger, hovering over something, logging patterns (device, location, time of day, duration, operating system, broadband), etc.—all of which have value for Facebook’s dynamic database. This indicates that Facebook treats any kind of action on its platform as valuable data. This listening is conducted within Facebook’s territory and outside of it (‘global knowledge’), thanks to its social plugins, which listen to, measure, collect and categorize people’s behavior across the web, creating a database from which to conduct rhythmedia according to Facebook’s business model and advertiser bidding. So silent actions such as deleting posts have value for Facebook, even if they are not heard by other people.
Measuring people’s behavior within Facebook is not enough to understand people’s everyday rhythms, and that is why the company also uses ‘Global knowledge’, meaning “the system has knowledge of aggregate patterns and what is normal and unusual” (Stein et al., 2011: 2). The dataset is never finished and is constantly changing, which means that Facebook can adjust its strategies and algorithm according to people’s behavior by tweaking different features that suit its business model.
Facebook relies on its subscribers’ feedback (loop) to maintain the service’s equilibrium. Thus, training its subscribers to behave in particular ways and encouraging them to report and Like is paramount for the smooth functioning of the dynamic territory. Facebook researchers argue that spammy behavior depends on culture and region, and that, generally, “the working definition of spam is simply interactions or information that the receiver did not explicitly request and does not wish to receive. Both classifiers and the educational responses need to be tuned for locale and user” (Stein et al., 2011: 4). Interestingly, when people do not want a certain interaction with Facebook (newsfeed sorting, for example), this action is not registered as spam. This is because Facebook has its own definition of unwanted behavior within its territory, and such behaviors are categorized according to that definition, not according to people’s understanding. The researchers identify three main causes that can jeopardize the Open Graph: compromised accounts,4 fake accounts and creepers. I will focus on the latter two, as they show Facebook’s approach to securing its territory and training the bodies of its subscribers to become well-behaved filters.
The most interesting threat to the Open Graph is creepers. Creeper, as mentioned in Chapter 2, was also the name of one of the first computer viruses, which spread during the 1970s through ARPANET. This category of people cannot be found in any of Facebook’s terms, in the Help section, or in Facebook’s posts on FIS. The likely reason for this is that creepers are ‘normal’ people. As Stein et al. (2011) describe this spammer category:
Creepers are real users that are using the product in ways that create problems for other users. One example of this is sending friend requests to many strangers. This is not the intended use of the product and these unwanted friend requests are a form of spam for the receivers. (Stein et al., 2011: 4)
But this can be fixed, argue the researchers, because the company has discovered that “the best long-term answer is education” (Ibid). So although sending friend requests to people is one of the core actions promoted by the platform, when the frequency is too high the behavior is categorized as spammy. Thus, training people towards Facebook’s desired rhythms is paramount to the frictionless functioning of the service. Rhythms are extremely important, then, for ensuring Facebook remains ‘safe’; but they also help in producing a database of people’s behavior that can be monetized for advertising purposes.
Because it profits from people’s rhythms, Facebook does not make actions such as ‘disconnectivity’ (unfriending, unfollowing, unliking, leaving a group, etc.) available to others. As Nicholas John and Asaf Nissenbaum show in their analysis of 12 social media APIs, “the pattern of excluding disconnectivity data from APIs is indicative of an overarching logic” (John and Nissenbaum, 2019: 9). This logic means that the company does not want people to be ‘educated’ about rhythms that do not bring value, rhythms it defines as ‘anti-social’. Such ‘negative’ actions are nevertheless still valuable for Facebook, as they inform the company about people’s rhythms and what motivates or discourages their engagements.
On the other hand, Facebook does not want to provide advertisers with data that could help them understand people’s rhythms in the way Facebook does. ‘Negative’ behaviors are also valuable and provide important input for Facebook on how to order its platform. Therefore, the platform offers data on disconnectivity as a service that advertisers must pay and bid for, as discussed above, through the ‘negative signals’ of the Relevance Score. By educating people to behave in a desired rhythm and educating advertisers to pay more, Facebook conducts rhythmedia towards a sociality that yields more value. The multiple ways in which Facebook produces data subjects by training their bodies will be explored in the following section.
This section examines how Facebook continuously shapes people’s behavior in its territory, while controlling, prohibiting, and engineering behaviors it considers dangerous or problematic for its business model. I argue that Facebook produces people as multiple subjects: the communication channel, as well as the producer (sender), the consumer (receiver), and the message. The main subject people are produced into is the filter, which helps to maintain the equilibrium of Facebook.
Each of these subjects requires training of the body to understand the desired way to behave. One element of training is the architecture (how things connect or disconnect and how movement is orchestrated) and design (the options for expression and relation) provided by platforms (as discussed above), which guide people in how to present themselves and interact with others. Another element of the training program is filters: people are encouraged to indicate in various ways what interests them and what does not. People do this in four ways: liking, reporting, answering surveys, and listening. I elaborate on these below.
The Like button was introduced on February 9, 2009, in a post where Facebook compared the button to a rating system, with the “new ‘Like’ feature to be the stars, and the comments to be the review” (Chan, 2009). The Like thus becomes a numerical sorting unit that can be monetized and exchanged. Importantly, Liking is a form of filtering that helps Facebook understand what people find more interesting than other things across its territory. People become filtering machines by indicating what they find worthy of a Like. The motivation behind the Like (interest, liking, parody, sympathy, care, etc.) does not matter since, for Facebook, the fact that a person has dedicated time to click on a particular post or piece of content means that they are filtering and ranking what is worth their engagement.
By doing so, the service strips the nuance, context, ambiguity, and feelings that make people human. It educates people to think in quantified, simplified ways about themselves and their relations with others; it produces them as data subjects narrowed to the platform’s metrics. This activity is then used as an ‘engagement’ metric that Facebook can provide to advertisers managing their Pages.
The Like button enables a quantified, standardized, comparable exchange unit/currency, whereby an aspect, or several, of human expressions and interactions can be measured, analyzed and become a product. Clicks, as discussed in the previous chapter, were one of the first metrics in the web economy, which advertisers have been using since the late 1990s. What Facebook is trying to do, however, is more akin to what Bell tried to do, as discussed in Chapter 3, in making the decibel the standard unit over the ‘sone’. Facebook, similarly, has tried to make the Like standardized across the internet. In Chapter 4, advertising companies also debated the meaning and method of measuring clicks and came to an agreement through the IAB standardization project. Facebook aims to disrupt this and push its own definition of measuring and producing subjects. All objects, people, their behaviors, and interactions could be measured and represented by the Like button.
As discussed in the previous chapter, spam is most commonly described as a form of excess, a burden on the system, and this notion returns when examining Facebook as well. In a post about the importance of keeping activity on Facebook authentic, Matt Jones, Facebook’s Site Integrity Engineer, argues that the service limits the number of Likes an account can make in order to turn this spammy activity (liking many times) into an inefficient practice. When an account Likes things many times, at an unusually high frequency, the service makes sure the account is legitimate. This is because:
[B]usinesses and people who use our platform want real connections and results, not fakes. Businesses won’t achieve results and could end up doing less business on Facebook if the people they’re connected to aren’t real. It’s in our best interest to make sure that interactions are authentic. (Jones, 2014)
The rhythm of behaviors, as seen with the FIS algorithm, becomes an indicator of authenticity and of being human. A high frequency of actions is an indication that the entity doing the Liking is not a real person but a robot or a click-farm worker, as discussed above.
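The mechanism Jones describes, a cap on Like frequency with a legitimacy check past the cap, can be sketched as follows; the hourly limit and response labels are hypothetical, chosen only to illustrate the policy response:

```python
from collections import defaultdict, deque

class LikeRateLimiter:
    """Per-account cap on Likes per hour; exceeding it triggers a check
    that the account is operated by a real person."""

    def __init__(self, max_per_hour=100):  # illustrative cap, not Facebook's
        self.max_per_hour = max_per_hour
        self.likes = defaultdict(deque)

    def on_like(self, account_id, now):
        q = self.likes[account_id]
        q.append(now)
        while q and now - q[0] > 3600:  # keep only the past hour
            q.popleft()
        if len(q) > self.max_per_hour:
            return "authentication_challenge"  # make sure the account is legitimate
        return "allow"

limiter = LikeRateLimiter(max_per_hour=3)
responses = [limiter.on_like("acct1", t) for t in range(5)]
# first three Likes are allowed; the fourth and fifth trigger the challenge
```

The design choice worth noting is that the same action (a Like) switches category from legitimate to spammy purely on the basis of its rhythm.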
Sharing on Facebook also has its limitations. In the Graphic Content section of its community standards, Facebook warns its users to use its most advocated action—Sharing—‘in a responsible manner’. The service warns its subscribers to “[a]lways think before you post. Just like anything else you post on the web or send in an email, information you share on Facebook can be copied or re-shared by anyone who can see it”. Facebook not only promotes self-censorship regarding the kind of content people should share, but also urges users to carefully consider the audience they are sharing with, and whether the content is appropriate. With Facebook’s privacy settings defaulting to all posts being public, people are encouraged to perform active self-censorship, rather than content being private to begin with and the user then choosing to share it with a wider audience.
Both ‘Like-baiting’ and frequently circulated content are about increasing the distribution of things, which is the main activity that Facebook encourages, prioritizes, and monetizes. But this activity should be regulated according to what can yield the most value. Repetitive behaviors create surplus on Facebook’s newsfeed, as they do not add new interactions and might confuse the algorithm and the measurement of people’s behaviors by feeding it ‘double’ data relations.
Importantly, controlling Pages’ excessive attempts to monetize people’s engagement is another way for Facebook to regulate its internal market according to its own rhythmedia. It does so by prioritizing Pages that pay and bid to be ordered at the top of people’s newsfeeds. Just as in the previous chapter, rhythms that bring profit to media companies, such as web-cookies or, in this case, paying and bidding to be ordered on the newsfeed, are legitimized, while similar practices by other advertisers or people who do not bring profit to the big companies are categorized as spam. In this way, both people and Pages are policed, disciplined, and managed to behave in rhythms that Facebook considers legitimate.
Another example of restricting and controlling behaviors on Facebook is a change concerning excessive use of the ‘Hide’ option. People on Facebook are permitted to Hide posts, meaning that they will not see the particular post, and they can choose either to see no posts from that person or just to see fewer posts from that friend. On July 31, 2015, Facebook released a post addressing the phenomenon of people who ‘hide too much’. According to Sami Tas, Software Engineer, and Meihong Wang, Engineering Manager:
[S]ome people hide almost every post in their News Feed, even after they’ve liked or commented on posts. For this group of people, ‘hide’ isn’t as strong a negative signal, and in fact they may still want to see similar stories to the ones they’ve hidden in the future. To do a better job of serving this small group, we made a small update to News Feed so that, for these people only, we don’t take ‘hide’ into account as strongly as before. As a result, this group of people has started seeing more stories from the Pages and friends they are connected to than in the past. Overall, this tweak helps this group see more of the stuff they are interested in. (Tas and Wang, 2015)
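The update Tas and Wang describe amounts to down-weighting the ‘hide’ signal for the small group that hides almost everything. A sketch of that weighting, where the cutoff and weight values are my illustrative assumptions:

```python
def hide_signal_weight(hide_rate, normal_weight=-1.0, damped_weight=-0.2,
                       heavy_hider_cutoff=0.5):
    """Return the negative-feedback weight one 'hide' contributes to ranking.
    `hide_rate` is the fraction of newsfeed stories this person hides;
    for people who hide almost every post, a hide is a weaker signal."""
    if hide_rate >= heavy_hider_cutoff:
        return damped_weight  # 'hide' not taken into account as strongly
    return normal_weight

# A typical person's hide counts fully against a story's score...
print(hide_signal_weight(0.05))  # -1.0
# ...while a heavy hider's hide barely moves it.
print(hide_signal_weight(0.9))   # -0.2
```

The sketch makes visible the point that follows: the same click means different things to the algorithm depending on who performs it, and the person is never told which regime they are in.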
While people use the options offered by Facebook’s design, the service ‘nudges’ them towards its own interpretation of how to use them. People are not explicitly notified of such ‘nudge’ mechanisms; rather, Facebook either ignores their selected preferences (of hiding content) or adjusts architectural options. In this way, Facebook conducts rhythmedia, altering people’s possible choices to suit its business model. What is at stake here, therefore, is the way Facebook produces data subjects through architecture and algorithmic designs that prescribe their options for living on the platform, and consequently controls and produces behaviors accordingly.
An excess of Likes, Hides, or Shares can have a negative influence on the accuracy of Facebook’s newsfeed algorithm, because it statistically measures and calculates such actions to establish people’s newsfeed orderings. Thus, for each action to be as valuable as Facebook intends it to be in the process of filtering data, the service needs to police what it considers to be irregular rhythms of being. It can do so by categorizing this problematic activity as spam. Just as Bell developed its A Design for Living program to educate the telephone operators, Facebook tries to educate people, advertisers, companies and its algorithms by training their bodies for the desired behavior in its territory. Training in the form of social and algorithmic engineering is something Facebook is very interested in, and it also serves as a biopolitical tool to direct and manage people in a specific direction.
Another way for people to provide information that can help filter content and behaviors on Facebook is surveys. Facebook sends people surveys in two main ways: first, by positioning surveys at the bottom right-hand side of the platform; and second, by occasionally circulating surveys that appear across the whole screen when people enter the platform, to better understand what they think about the newsfeed. Contrary to the surveys conducted in New York City in the 1920s, here the ways the data are processed and used are concealed from people. It is difficult to know exactly how the data derived from the surveys inform Facebook’s algorithmic or architectural changes. I received the second type of survey three times during the data collection period: on October 30, 2013, July 2, 2014 and July 13, 2014. The first survey, from 2013, presented ten different kinds of post, and I had to rate whether I wanted to see more of such posts on Facebook using a five-star scale.
The other two surveys were delivered in July 2014, after the exposure of Facebook’s emotion experiment. The July 13 survey presented 15 posts and asked the same question of each: ‘How much do you agree with this statement? This post feels like an advert’, with five response options: strongly disagree, disagree, neither agree nor disagree, agree or strongly agree. All of the posts were from Facebook Pages, some of which I had already Liked and some not (such as Amazon.com). Several posts were shown from the same Page I Liked, such as Resident Advisor (an electronic dance music magazine). The July 2, 2014 survey differed from the others in that it asked questions about ‘the Facebook experience’, focusing particularly on the Facebook Graph Search feature launched on July 15, 2013. What these surveys show is that Facebook needs humans to improve its algorithms. Behind the ‘organic’ experience it tries to sell, there are people who work for the platform—either its subscribers or its hidden workers, for free or for low wages—to fine-tune the algorithm and provide the contextual meaning that is so needed.
On December 4, 2015, Sami Tas, a software engineer at Facebook, and Ta Virot Chiraphadhanakul, Data Scientist, published a post about the thousands of surveys conducted every day to understand the reasons for the popularity of videos. As they argue:
We survey tens of thousands of people every day, and for the story surveys, we ask them if they prefer a particular viral post to another post. With this update, if a significant amount of people tell us they would prefer to see other posts more than that particular viral post, we’ll take that into account when ranking, so that viral post might show up lower in people’s feeds in the future, since it might not actually be interesting to people. (Tas and Chiraphadhanakul, 2015)
What Facebook’s data scientists argue here is that ‘viral’ stories are anomalies, and that, since anomalies can push the newsfeed algorithm towards what they consider a biased ordering, special measures are needed for such unusual rhythmic behaviors. Therefore, increased rhythm (termed ‘high volume’) on Facebook needs to go through another human filtering mechanism that helps Facebook understand whether this anomaly is legitimate (and preferably paid for) or a hoax. Since the results of the two kinds of survey are never publicly published or available to anyone but Facebook, it is difficult to establish how, why and when such anomalies occur, whether they are anomalies to begin with, and whether they are treated as legitimate or illegitimate. But what is clear is that while Facebook uses many indicators to understand how people behave, some of them are silent.
Behaviors on Facebook do not have to make a sound; they can be silent, or not be considered an ‘action’ at all. Taina Bucher (2012a) argues that an Edge, one of Facebook’s newsfeed algorithm criteria, means any interaction with an object on Facebook. This can be done through the social plugins that Facebook provides, such as the Like, Share, or Comment. This also explains the name of its primary sorting algorithm, EdgeRank, which orders, sorts, and filters objects and people according to their interactions and the value assigned to each of them.
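EdgeRank’s scoring rule, as Facebook engineers publicly sketched it around 2010, sums over an object’s edges the product of affinity, edge-type weight, and time decay. A minimal sketch of that rule; all the weights, the decay constant, and the example values are my own illustrative assumptions:

```python
import math

def edgerank(edges, now):
    """Score an object as the sum over its edges of
    affinity * edge-type weight * time decay."""
    type_weights = {"comment": 4.0, "like": 1.0, "click": 0.5}  # illustrative
    score = 0.0
    for edge in edges:
        affinity = edge["affinity"]  # closeness between viewer and creator, 0..1
        weight = type_weights[edge["type"]]
        decay = math.exp(-(now - edge["time"]) / 86400.0)  # fades over ~a day
        score += affinity * weight * decay
    return score

# A fresh comment from a close friend outranks an old Like from a distant one:
close_fresh = edgerank([{"type": "comment", "affinity": 0.9, "time": 90_000}], now=100_000)
distant_old = edgerank([{"type": "like", "affinity": 0.1, "time": 0}], now=100_000)
```

Because every interaction, silent or loud, can be an edge, the formula shows how profile visits or hovers could feed ranking without ever producing a visible cue.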
But precisely because an Edge is any interaction, the filtering and ordering of people and objects is also determined by actions and relationships that do not receive any cues. For example, if I visit one of my friends’ profiles, EdgeRank will ‘know’ that I am interested in this friend and show me more posts from her on the newsfeed. This is elaborated in the sub-section of Facebook’s Data Use Policy titled Other information we receive about you:
We receive data about you whenever you use or are running Facebook, such as when you look at another person’s timeline, send or receive a message, search for a friend or a Page, click on, view or otherwise interact with things, use a Facebook mobile app, or make purchases through Facebook.
Any kind of interaction on Facebook undergoes processed listening (including people’s devices, their internet connection speed, location, etc.). The platform listens to people even when their actions are not visible to others, and this then informs the filtering mechanisms of the newsfeed and FIS algorithms. Because such actions can only be listened to by Facebook, it is difficult to know which actions are silent, which are not, and why.
Cristina Alaimo and Jannis Kallinikos (2017), for example, call the actions people perform on social media (posting or uploading) ‘social data’. By doing so, they automatically adopt the way the platform defines sociality while disregarding many other types of possible actions. As John and Nissenbaum argue, “the theory researchers develop might be shaped by the kinds of empirical materials the tools at their disposal are able to give” (John and Nissenbaum, 2019: 9). Unfortunately, Alaimo and Kallinikos also make this problematic assumption by arguing that social media conceal what they call ‘negative’ actions such as disconnecting, unliking, muting a conversation and unfriending. As shown here, all actions count, and some are similar to ‘positive’ actions; it is their frequency that harms the business models of these companies. A main action that these companies encourage is listening, which receives negative connotations through the notorious nickname of lurking. It is nevertheless endorsed as a form of sociality, but one that does not receive visual or audio cues.
Facebook’s researchers have been interested in understanding people’s listening practices in quantitative ways to encourage them to engage more and thus bring more value to the service. In an article called ‘Quantifying the Invisible Audience in Social Networks’, Bernstein et al. (2013) argue that they want to understand the way people perceive their invisible audiences. They argue that this knowledge can help ‘science’ and ‘design’ to influence content production and self-expression on Facebook’s territory, or in other words, increase engagement and hence profit:
The core result from this analysis is that there is a fundamental mismatch between the sizes of the perceived audience and the actual audience in social network sites. This mismatch may be impacting users’ behavior, ranging from the type of content they post, how often they post, and their motivations to share content. The mismatch also reflects the state of social media as a socially translucent rather than socially transparent system. Social media must balance the benefits of complete information with appropriate social cues, privacy and plausible deniability. (Bernstein et al., 2013: 8)
The reproduction of territory and data subjects must be balanced; the strategies Facebook deploys on people must be subtle enough not to scare them away, yet still influence them and their peers to spend more time engaging on the platform. The researchers undertook this study to understand whether design changes providing quantitative metrics, showing people the actual audience that has seen their posts, would benefit the platform. It shows that Facebook is concerned with which metrics to show to encourage more engagement, and will change the architecture accordingly, producing asymmetric power relations through an architecture that restricts people from listening deeper. Here, we discover that concealing metrics is the preferred interface design, as showing them might have damaging effects. ‘Damaging’ does not necessarily mean negative actions, but rather actions that will not yield more engagement.
Using Facebook’s data logs, the researchers (Bernstein et al., 2013) compared the actual audience of people’s posts with surveys asking those people how many they thought had been exposed to their posts. Bernstein and colleagues’ method reveals that, like web browsers, Facebook has server logs documenting every kind of behavior within its territory. With this dynamic archive, Facebook has greater listening capacities and, therefore, more knowledge about its members. In turn, this makes Facebook’s listening capacities the most powerful, because only the service can access such datasets. Facebook’s researchers point to the limitation of data logs as a measuring tool, saying that, “depending on how the instrument is tuned, it might miss legitimate views or count spurious events as views” (Bernstein et al., 2013: 8). As with Bell’s measuring devices, measurement depends on the media practitioner’s expertise in operating the listening tools and inferring data from them.
All actions count, whether they are silent or make a sound. It is the actions that make noise, a disturbance to the business model, that need to be controlled, managed and, hopefully, eliminated. It is Facebook that decides what noise is; this definition, however, keeps changing according to its business model, the advertisers who bid, its subjects, journalistic articles and the territory.
Bucher (2012a) argues that one factor driving people’s behavior on Facebook is the threat of invisibility, of not being considered important enough. But people are also encouraged to behave silently. For example, on the right-hand side, in the ‘Chat’ option, Facebook shows when people’s friends last visited the platform, thereby enabling people to ‘monitor’ friends’ behavior without their knowing. In fact, inasmuch as Facebook rewards people by making them or their interactions louder, the service also promotes interactions that can broadly be called ‘listening’.
Such listening practices are not heard by other people, but they are heard by Facebook, which measures, categorizes, and archives these insights as valuable data in its server logs. Facebook could have easily implemented the possibility to show people who has looked at their profile, as it has done with its messaging feature, Messenger. This latter option shows a read receipt, by marking the bottom of the messaging space with one tick, including the date and time it was read.
Listening makes people feel more empowered, as they, too, have the capability to know people and things. What these features also do is normalize a certain kind of listening, one associated with spying. It also shows that Facebook manages a particular rhythmedia, whereby it aims to amplify certain actions over others; these can be both silent and loud, because everything counts in large amounts.
Another way to turn people into filters is through reporting. Different social media platforms have different mechanisms for reporting content, sometimes also called ‘flagging’. This mechanism allows people to inform the service that a particular piece of content or behavior is unwanted for various reasons: being hateful or abusive, violent, sexual, harmful, infringing copyright, etc. According to Crawford and Gillespie:
[T]he flag represents a little understood yet significant marker of interaction between users, platforms, humans, and algorithms, as well as broader political and regulatory forces. Multiple forces shape its use: corporate strategies, programming cultures, public policy, user tactics and counter-tactics, morals, habits, rhetorics, and interfaces. (Crawford and Gillespie, 2016: 410)
They argue that, by not allowing a debate about the values in their services, platforms control public discourse, including how and what should be debated and what should be heard in their territories. This is also illustrated in the limited form of communication such ‘flags’ allow. Facebook, for example, provides very limited means for people to report content. It provides only categories that can benefit its business model. In the 2015 version of Facebook’s community standards, it indicates that:
Our global community is growing every day, and we strive to welcome people to an environment that is free from abusive content. To do this, we rely on people like you. If you see something on Facebook that you believe violates our terms, please report it to us. We have dedicated teams working around the world to review things you report to help make sure Facebook remains safe.
Facebook’s subscribers are expected to serve as quality assurance (QA) for the ‘community standards’, for free; however, individual users were not involved in the creation of these community standards, and are not included in the mechanisms keeping their accounts safe. In the How to Report Things section, people are given illustrations and step-by-step guidance on reporting abusive and spammy content. In another section of the Community Standards, dedicated to safety information and resources, people are advised to “[l]earn how to recognize inappropriate content and behavior and how to report it”. Here, as in the previous chapter regarding educating EU citizens, people are expected to learn to be responsible and to educate themselves and others to keep Facebook safe.
Facebook encourages people to report things that are not listed in its terms or community standards through the social reporting feature, which was introduced on March 10, 2013. Social reporting means that, if someone does not like something that is posted on their newsfeed, they can ask that friend to remove it. By doing so, people are regulating, controlling and managing each other in a biopolitical way. This then serves a second purpose of helping Facebook to define and enforce ‘good’ behavior. This is a way to educate people to train one another to behave in a specific way within Facebook’s territory.
Reporting, then, allows Facebook to show that it cares about what people want and to have another filtering mechanism for the kind of things it should not order on the newsfeed, thus helping to tune the algorithms. As with many other platforms, after people report to Facebook, they do not know what happens with the report, or how many other people have also reported the same thing. On April 26, 2012, Facebook launched its Support Dashboard feature, which allows people to know when their report has been received and also gives an indication of why an action was taken or not with regard to the report. Facebook, however, does not reveal how many people have reported something (post, Page, or person). Such information can persuade people to complain and even lead to them rebelling against certain decisions (for example, removal of female nipples or mothers who breastfeed).
In the 2015 community standards, the company addressed this by saying that, “[t]he number of reports does not impact whether something will be removed. We never remove content simply because it has been reported a number of times”. This statement, however, leaves out what does impact its decisions. Just as Bell de-politicized its rebellious operators by offering counselling rather than allowing them to unionize, Facebook uses similar strategies here. The platform’s personalized experience discourages mass action: people cannot know how many others reported, objected or complained about something, and so cannot take it forward to Facebook, journalists, municipalities, courts, or governments.
In the spam section (under the security section), people are encouraged to report spam: “By doing so, you will be playing an important role in helping us protect other people from scams”. But people are also given advice on how to keep their digital bodies safe and clean from spam by using various methods such as protecting passwords, not sharing login information, not clicking suspicious links, updating browsers and running antivirus software. Maintaining a healthy body, as Bell ensured with various diet and exercise regimes for its operators, is crucial for subjects who function as communication channels and filters. While people are encouraged to report, what happens to the reports is handled by Facebook’s hidden processors: Commercial Content Moderators (CCM) and Feed Quality Panel (FQP).
Facebook employs different kinds of workers to help maintain its multiple communication channels, to produce a profitable trade territory. Workers include newsfeed ranking engineers, data scientists, software engineers, product ←231 | 232→managers, researchers, security officers and many others. Along with employees whose workplaces are Facebook’s offices, there are others who are less prominent. These workers reside in other places and, sometimes, are not officially declared as Facebook employees: first, Facebook’s cheap, outsourced labor, known as content moderators; and second, Facebook’s raters, known as the Feed Quality Panel. Their work is crucial to filtering unwanted behaviors from Facebook, but they are kept hidden for several reasons: to naturalize their work as part of the ‘organic’ and natural algorithmic processes, to create the feeling of ‘real-time’ uninterrupted experience, to ensure they are not accountable for their work, to prevent them from having to disclose their working criteria and ethics, and to save money. In this section I illuminate their function as filters.
Several decades after the automation of telephone operators’ work, other hidden workers have been produced as the communication channel and processors: what Sarah Roberts has termed Commercial Content Moderators (CCM) (Roberts, 2016). According to Nick Summers (2009), this ‘internal police force’ sat in Facebook’s offices in the United States and, in 2009, consisted of approximately 150 people. By 2018, about 7,500 CCMs were working for Facebook (Koebler and Cox, 2018). As Summers observes, “[p]art hall monitors, part vice cops, these employees are key weapons in Facebook’s efforts to maintain its image as a place that’s safe for corporate advertisers” (Summers, 2009). One of the first times Facebook discussed these hidden workers was on June 19, 2012 (the link is no longer available), when it revealed on its Safety Page information about the processes that happen in the ‘back end’ of the platform after people report things:
[T]o effectively review reports, User Operations (UO) is separated into four specific teams that review certain report types—the Safety team, the Hate and Harassment team, the Access team, and the Abusive Content team. When a person reports a piece of content, depending on the reason for their report, it will go to one of these teams.
Although these teams have existed for several years, Facebook does not elaborate on the work of content moderation. These positions were not found in the Help section when I searched for them. In fact, there is no information about what they do, what training they go through, what their work conditions are, what kinds of guidelines they receive, and so on. To this day it is quite difficult to find ←232 | 233→information from Facebook about CCMs. As Gillespie argues, “[f]or more than a decade, social media platforms have presented themselves as mere conduits, obscuring and disavowing the content moderation they do” (Gillespie, 2018b). As I show in this book, this has been an ongoing strategy to conceal the decision-making processes conducted at the back-end.
CCMs are hired, according to journalist Adrian Chen (2014), by Facebook through outsourced third-party companies. These workers are usually hired in the Philippines, as the country’s relationship with the United States means workers understand American social conventions; importantly, they are also cheap labor.
Social media’s growth into a multibillion dollar industry, and its lasting mainstream appeal, has depended in large part on companies’ ability to police the borders of their user … companies like Facebook and Twitter rely on an army of workers employed to soak up the worst of humanity in order to protect the rest of us. And there are legions of them—a vast, invisible pool of human labor. (Chen, 2014)
CCMs, as Roberts (2016) argues, are employed by social media platforms to filter problematic content. To perform their job, they have to conduct processed listening to people’s behaviors, separating signal from noise as categorized by the platforms they work for. According to Chen, there are at least two kinds of content moderators: ‘active moderators’, who filter posts in real time; and ‘reactive moderators’, who filter only content that people have reported as offensive. The list of problematic content categories mirrors the community standards: ‘pornography, gore, minors, sexual solicitation, sexual body parts/images, racism’. In this way, CCMs conduct processed listening to filter out what Facebook considers antisocial, so that it can maintain its title as a social network.
When things are reported by people, they are sent to the outsourced CCM teams and then go through three filtering processes: one, content can be ‘confirmed’ as offensive, erasing it from both the person’s account and all of Facebook; two, content can be ‘unconfirmed’, meaning it is not deemed offensive and stays on the platform; or three, ‘escalation’, which means content goes through a higher level of filtering by being sent to Facebook’s own employees (Chen, 2012). This internal team, called Risk and Response, deals with “the hardest and most time-sensitive types of content,” and works “with the policy and communications teams to make tough calls” (Koebler & Cox, 2018). All of these procedures happen at the back-end, hidden from ‘normal’ users, because of the specially designed asymmetric listening architecture.←233 | 234→
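The three-way outcome of this filtering process can be sketched schematically. This is purely an illustration of the logic described by Chen (2012); the function and variable names are my own, not Facebook’s:

```python
from enum import Enum

class Decision(Enum):
    CONFIRMED = "confirmed"      # removed from the account and all of Facebook
    UNCONFIRMED = "unconfirmed"  # not deemed offensive; stays on the platform
    ESCALATED = "escalated"      # sent to Facebook's internal Risk and Response team

def moderate_report(report, violates_standards, needs_higher_review):
    """Hypothetical sketch of the three filtering outcomes (Chen, 2012)."""
    if needs_higher_review(report):
        return Decision.ESCALATED
    if violates_standards(report):
        return Decision.CONFIRMED
    return Decision.UNCONFIRMED

# Example: a report matching a banned category is 'confirmed' (removed).
decision = moderate_report(
    {"category": "gore"},
    violates_standards=lambda r: r["category"] in {"pornography", "gore", "racism"},
    needs_higher_review=lambda r: r["category"] == "time-sensitive",
)
```

The point of the sketch is that each report exits through exactly one of three gates, all of them invisible to the person who filed it.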
This is a human cleansing device, or as one content moderator describes it: “Think like that there is a sewer channel … and all of the mess/dirt/waste/shit of the world flow towards you and you have to clean it” (Chen, 2012). Such decisions happen within seconds, and the content moderators are trained, just like Bell’s operators, to make decisions about sensitive and problematic materials as fast as machines/algorithms.
The moderation training manual that Chen (2014) revealed, titled “Abuse Standards 6.1: Operation Manual for Live Content Moderators”, provides insights into the work procedures that CCMs have to follow. Facebook’s first Abuse Standards document was drafted in 2009. Three years later, Version 6.1 is a 17-page manual giving workers instructions on how to respond to people’s reports and other kinds of content. Once something is reported (content, people, Pages or behaviors), CCMs have to determine the identity of the person by deploying the “name match policy”. This means they need to tune into people’s profiles to verify whether the person who reported and the person in the comments/post/image are the same. This processed listening practice is hidden and conducted without the knowledge of ordinary users, as no visual or audio cues signal that someone is tuning into their private space.
CCMs also need to determine the context of the content: whether the intent behind it is humor, insult, solicitation, or political. Then, the moderators have to distinguish between different types of violations and decide how to filter them (remove, suspend, or escalate). At the end of the processing procedure, if content was filtered out, people are notified but given limited information regarding the rationale behind the decision or the means to appeal it. Finally, CCMs have to adjust their performance according to previous situations. In this way, CCMs have to know how to respond in each of these scenarios, take each category into account, and apply region-specific considerations.
CCMs work in offices that feel like the ‘production line’ of a factory, given that they are expected to process hundreds of reports per hour, as a Facebook content moderator in Germany revealed (Punsmann, 2018). Every aspect of their work is calculated, including their breaks. They are trained to work in an extremely repetitive sequence that demands a fast rhythm and important decisions made within seconds during long shifts. This is to avoid latency, “a measure of the time delay introduced by a particular element in a computer system,” as Lilly Irani (2015: 726) shows in relation to micro-workers’ task completion speed. She argues that technology designers believe that a “good” design is one that is immediate, and that such “assumptions drive efforts to maximize ‘task velocity’ so ←234 | 235→human computation can fulfil expectations of interactive computer technologies” (Ibid). CCMs describe this feature of their work as an automation of actions, which increases alienation:
The moderator has not only to decide whether reported posts should be removed or kept on the platform but navigated into a highly complex hierarchy of actions. The mental operations are evaluated as being too complex for algorithms. Nevertheless moderators are expected to act as a computer. The search for uniformity and standardization, together with the strict productivity metrics, lets not much space [sic] for human judgment and intuition. At the end of the ramp-up process, a moderator should handle approximately 1300 reports every day which let him/her in average only a few seconds to reach a decision for each report. (Punsmann, 2018)
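The arithmetic behind Punsmann’s figure can be made explicit. Assuming, purely for illustration, an eight-hour working day (the actual shift length is not given in the source), 1,300 reports a day leaves roughly 22 seconds per decision:

```python
reports_per_day = 1300   # figure reported by Punsmann (2018)
shift_hours = 8          # assumption for illustration; actual shift lengths vary
shift_seconds = shift_hours * 60 * 60

# Average time available for each moderation decision, with no breaks counted
seconds_per_report = shift_seconds / reports_per_day
print(round(seconds_per_report, 1))  # roughly 22 seconds per report
```

Any break, and any report requiring the ‘highly complex hierarchy of actions’ Punsmann describes, only shortens the time left for the rest.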
Thus, Facebook hires human processors and provides them with guidelines that create a structured workflow, similar to the way in which algorithms are given instructions. As these work procedures show, CCMs conduct processed listening by monitoring, detecting, categorizing, filtering, and reporting different types of things, which require a fast decision-making process. Such actions happen within seconds, and the content moderators are trained, just like Bell’s operators, to make decisions about sensitive and problematic materials as fast as automated machines.
These human interfaces of machines are supposed to have memory and adjust their behaviors according to past performance. They are trained to work like machines and to embody the communication channel and filters. Their rhythms are supposed to be as close to robotic as possible, so that the rhythmedia of these media territories will feel ‘organic’ and not interfered with. They are also cheap, replaceable labor, kept hidden from the subscribers of the service, and at the same time they help keep its competitive edge over other companies.
Hidden in the back-end of the media apparatus, humans employed by media companies operate as part of the communication channel and tune in and out of various spaces in the media architecture. One of the main similarities among these types of media workers is their need to make transitions between different layers of the media infrastructure they operate. While telephone operators tune in and out of subscribers’ lines as well as the overall telephone infrastructure, CCMs do the same with different users, pages, groups, events, and so on. If they find problematic things, they then have to go through specific protocols and filter them out according to local considerations. They conduct this processed listening without interrupting the normal subscribers’ experience, as this is done in the back-end to conceal the processes that are taking place.←235 | 236→
There is a decision-making process used by these human communication channels. Their work determines which people and behaviors are illegitimate, deviant, noisy, or spam. By doing so, media companies avoid having important discussions about the way they establish what counts as a disturbance or an illegitimate behavior or group of people. They shift the responsibility to automation, something they supposedly have no control over because it functions in an automated, engineered, and objective way—just following orders. These decisions have immense social, cultural, political, and economic implications that are kept hidden and unaccounted for. As Jillian York and Corynne McSherry from the Electronic Frontier Foundation (EFF) argue:
The engineers who designed the platforms we use on a daily basis failed to imagine that one day they would be used by activists to spread word of an uprising…or by state actors to call for genocide. And as pressure from lawmakers and the public to restrict various types of speech—from terrorism to fake news—grows, companies are desperately looking for ways to moderate content at scale. They won’t succeed—at least if they care about protecting online expression even half as much as they care about their bottom line. (York and McSherry, 2019)
They outline four reasons why content moderation is problematic: 1) it is a dangerous job, but we cannot let robots do it instead; 2) it is inconsistent and confusing; 3) it can cause real-life harms to both workers and users (such as censoring LGBTQ+ people and content, calling out racism, and deleting women’s health businesses for being too sexual); 4) appeals are broken and lack transparency. What drives most of these problems is Facebook’s business model, which does not always align with marginalized groups’ or society’s best interest. Facebook’s rhythmedia, the way it orchestrates a certain type of sociality, can be noisy to us, because profit is not the value that should be the soundtrack of our societies. Alongside CCMs there are other people who tune the algorithms and are kept hidden from society: the Feed Quality Panel.
Alongside paid content moderators, Facebook also hires people to fill out surveys to gain a better understanding of what people categorize as interesting in their newsfeeds and the reasons behind this. As mentioned above, Facebook frequently sends its unpaid workers—its subscribers—surveys regarding newsfeed functionality. People are neither rewarded for filling out these surveys nor ←236 | 237→given information about the results and what is done with them afterwards, so the incentives to complete them are quite low. On August 18, 2014, Facebook began a special project in Knoxville involving 30 paid workers in their 20s and 30s (already indicating whose categorization values matter most) completing surveys to improve the newsfeed. According to Steven Levy (2015), “Facebook has expanded the project to 600 people around the country, working four hours a day from home. Those numbers will soon expand to the thousands” (Levy, 2015). Facebook revealed this group, which it calls the Feed Quality Panel (FQP):
As part of our ongoing effort to improve News Feed, we ask over a thousand people to rate their experience every day and tell us how we can improve the content they see when they check Facebook—we call this our Feed Quality Panel. We also survey tens of thousands of people around the world each day to learn more about how well we’re ranking each person’s feed. We ask people to rate each story from one to five stars in response to the question ‘how much did you want to see this story in your News Feed?’ From this research using a representative sample of people, we are able to better understand which stories people would be interested in seeing near the top of their News Feed even if they choose not to click, like or comment on them—and use this information to make ranking changes. (Zhang and Chen, 2016)
Human filters, as Facebook’s newsfeed managers demonstrate, are paramount to the functioning of Facebook. Algorithms have a limited ability to decipher importance, context and nuance, and especially what influences users to behave in one way or another. Here Facebook wants to tune in closer and listen to people’s behavior, to understand the rationale behind their actions and make better categorizations (for 20- and 30-year-olds, that is). These efforts do not imply that Facebook will change these metrics or reveal the data gathered, as that is part of the competitive edge it has established with its database—it has centralized its power. ‘Improving’ is a problematic term because it is not clear what it means—improving what? For whom? And for what purpose? As shown above, despite people’s actions against various algorithmic or architectural designs, Facebook pushes its own rhythmedia rationale.
The work of the FQP is very similar to the work people do on Facebook; they have to go to their personal accounts and decide which stories they like on their newsfeed. But, in order to ‘justify’ their salary, they have to do more than that. These workers access a special version of Facebook and are presented with 30 newsfeed stories specifically tailored to their account. Contrary to the ‘normal’ ←237 | 238→version of Facebook, here the stories on the newsfeed are not ranked but randomly scattered. The raters then have to simulate how they would ‘normally’ engage with the story: ignore it, Comment, Share, Like, or follow the links. After that, they need to answer eight questions elaborating on how they felt about the story. To finish the story’s feedback, they need to write a paragraph describing their overall tendencies towards the story (Levy, 2015).
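The rater’s workflow described by Levy can be summarized as a simple data structure. The field names and shape below are my own shorthand for the procedure, not Facebook’s internal schema:

```python
from dataclasses import dataclass

@dataclass
class StoryRating:
    """One rater's feedback on a single, unranked newsfeed story (per Levy, 2015).

    Field names are illustrative, not Facebook's actual schema."""
    story_id: int
    simulated_action: str  # "ignore", "comment", "share", "like", or "follow_link"
    answers: list          # responses to the eight follow-up questions
    summary: str           # free-text paragraph on overall tendencies

def rate_session(stories, rate_fn):
    """A rater works through the ~30 randomly ordered stories in one sitting."""
    return [rate_fn(s) for s in stories]

# Illustrative session: a rater who ignores every story
ratings = rate_session(
    stories=range(30),
    rate_fn=lambda s: StoryRating(s, "ignore", ["n/a"] * 8, "not relevant to me"),
)
```

What matters analytically is that each story yields structured behavior (the simulated action) plus qualitative context (the answers and summary), which is exactly the nuance the ranking algorithms lack.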
According to Will Oremus (2016), this project was led by Adam Mosseri, Facebook’s VP of newsfeed, who, together with his team, realized the value of the qualitative input they received from human feedback filters. Therefore, Mosseri expanded the project across the United States and overseas. The FQP is meant to give context and meaning to people’s listening behaviors; it aims to understand what people like without them Liking it in the way Facebook offers. It shows that Facebook understands that standardized metrics and measurements do not provide enough insight into people’s lives, preferences, and desires. This research helps the company make a better rhythmedia, meaning ordering things in a way that will lead to more engagement and less deterrence from ads. This is because the most interesting finding the FQP revealed was that people do not appreciate ads in their newsfeed:
[T]he testers’ evaluations showed that Facebook still has a long way to go before reaching its stated goal of making sponsored stories (i.e., ads) as welcome and useful as other posts in the News Feed. ‘It’s as expected,’ says Eulenstein. ‘In general, commercial content is less desirable than other forms of content’. (Levy, 2015)
Eulenstein’s statement is important because it reveals that not all findings from such surveys are taken into account. Crucially, it indicates that Facebook knows people do not like ads on their newsfeed. However, since people’s wishes clash with its business model, their opinions about ads matter only insofar as Facebook decides to produce ad content in less intrusive ways. Facebook will train people, through various algorithmic and architectural designs as well as some help from CCMs, to change their behaviors in relation to advertisements. This could be a reason why the results of such surveys are never published or opened to the public. These surveys, then, try to develop a better understanding of the kinds of stories people prefer and how they order them, to give more context to their listening practices and to know how to better shape, phrase, present and embed ads as ‘organic’ stories. Importantly, this helps Facebook link, order, and filter people and things in particular spaces and times towards more engagement and less intrusive ads.←238 | 239→
Of this book, this chapter was the most difficult to write. At the moment of the final.final.final2019.doc edit of the chapter, in April 2019, at least two scandals about Facebook were being published each week. It is tempting to include all of them, because each one reveals a different aspect of the mammoth the company has become. This week’s highlights were the revelation that Facebook had been “ ‘unintentionally’ uploading the address books of 1.5 million users without consent, and says it will delete the collected data and notify those affected” (Hern, 2019), as well as a leak of internal Facebook documents from between 2011 and 2015 indicating that the company wanted to sell access to people’s data. In this latter case, the interesting revelation was something that internet researchers have been pointing out for more than a decade: Facebook says one thing and does another. In particular, the article showed the company’s PR strategy of diverting the design-changes narrative and framing it around user trust, not competition or making more profit:
Where privacy is mentioned, it is often in the context of how Facebook can use it as a public relations strategy to soften the blow of the sweeping changes to developers’ access to user data. The documents include several examples suggesting that these changes were designed to cement Facebook’s power in the marketplace, not to protect users. (Solon and Farivar, 2019)
Similar to the previous chapter, ‘control’, ‘safety’, and ‘trust’ are used against people, not for them. Facebook takes care of its own financial control and safety—creating trust among people in order to profit from their data. Importantly, just like the digital advertising industry in the previous chapter, Facebook wants to position itself as a key player in the digital territory. As much as these leaked documents, and many others before them, fascinated me, I wanted to publish this book and needed to put a pin in my own interest and need to consume as much information as possible about Facebook’s scandals—even if that was overwhelming and a challenge for an information junkie such as myself.
No matter how much Facebook fucks up, and it does so spectacularly, people still use it. Some people thought that after the Cambridge Analytica/Brexit scandal people would #DeleteFacebook, but many are still using it. As I mention in the introduction, it does not matter if it is Facebook, Google, Amazon or Microsoft. Before these companies we had IBM, Netscape, MySpace and, of course, the Bell Company. What is important in this chapter is showing how similar strategies were deployed on people to shape their behaviors towards a desired type of sociality that yields more profit. It is about amplifying such strategies, showing that ←239 | 240→nothing is naturally ordered, and understanding that things can be different; we need to demand a different rhythm of sociality.
In this chapter, I focus on the four filtering mechanisms that Facebook enacts simultaneously to shape a specific sociality with rhythmedia. The main participants in Facebook’s multiple communication channels are Facebook itself (including its architecture, algorithms, and social plugins), the service’s affiliates and advertising partners, websites, applications, games, content moderators, the Feed Quality Panel and, lastly, its subscribers. Facebook’s strategy is to maintain the equilibrium of its multi-layered communication channels by filtering what it considers the appropriate way to behave. There are four main filtering mechanisms: two non-human, its architecture design and algorithms, and two human, its low-paid workers and, most importantly, its subscribers. All of these elements inform each other in a recursive feedback loop in which rhythmedia is conducted by Facebook and resonates in different capacities and intensities.
The first part of the chapter showed the way that Facebook restructures its territory in specific ways to influence and change people’s behaviors, to yield more engagement and thus more value for the service. With the audience selector, the company tries to make people feel as though they can control who sees their posts and, by doing so, persuades them to share more content. The Sponsored Stories feature is intended to influence people’s friends to engage with brands, producing users into communication channels and monetizing their relations with their friends. Such architecture designs are intended to influence people to behave and to influence their peers in various ways, which, as Facebook researchers show, is the main purpose of the platform. Here, Foucault’s notion of power enacted on actions, and specifically on people’s relations, is put into action.
The most influential architecture feature is social plugins, an improved version of digital advertising’s cookies combined with pixels, which perform processed listening to people’s behavior outside the territory, wherever a website, game, application or other publisher integrates these tools. Social plugins listen to Facebook members and non-members, whether or not they are logged in, to create a database of behaviors. Here, Facebook develops the ad-network technology and turns the platform into a place where people can perform their everyday lives while, at the same time, stretching its tentacles through cookies and pixels across the whole internet. Whereas in Chapter 4 these channels were relatively decentralized between publishers, advertising networks and advertising exchanges, in this chapter, Facebook introduces a recentralisation of the communication channels to and from its territory.←240 | 241→
In doing so, Facebook provides licenses to the advertising industry to use its measuring tools and units and gives controlled listening capacities to them. It also allows advertisers to conduct small-scale research on subscribers but forbids companies from producing data subjects from the platform’s data. Facebook also provides itself with a license to act in ways that, when conducted by others, are deemed illegitimate. In this way, Facebook operates as an advertising association, dictating how ads should be designed, measured and even what kind of text and images they should have. By doing so, Facebook orchestrates the way that people and their interactions are filtered through the web. Facebook becomes a central node for a knowledge database that produces subjects according to its business model.
Behavior is extremely important to the production of data subjects, because knowing when and where people do things enables Facebook to predict the ‘estimated action rates’, an important factor in its ad auction bidding. Listening to behaviors is also important for statistically analysing people’s normal behaviors, which can help in identifying when there is an abnormal rhythm. As Foucault argues, statistics are harnessed for knowing a population and managing deviant phenomena. This is done with Facebook’s FIS algorithm, which categorizes behaviors to create a normality curve that can assist the service in detecting what it defines as abnormal behavior. This curve is constantly changing according to Facebook’s business model and what kinds of behaviors it perceives as being able to harm its value.
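Facebook has publicly described its ad auction as ranking ads by a total value that weights the advertiser’s bid by the estimated action rate, the predicted probability that a given person will take the desired action. A minimal sketch with made-up numbers, deliberately omitting the ad-quality term the real auction also includes:

```python
def total_value(bid, estimated_action_rate):
    """Simplified auction score: bid weighted by predicted likelihood of action.

    The real auction also factors in ad quality/relevance; omitted here."""
    return bid * estimated_action_rate

# Hypothetical: a lower bid can win if the predicted action rate is higher,
# which is why behavioral prediction is so valuable to the auction.
ads = {
    "ad_a": total_value(bid=2.00, estimated_action_rate=0.01),
    "ad_b": total_value(bid=1.00, estimated_action_rate=0.05),
}
winner = max(ads, key=ads.get)
```

The sketch shows why listening to behavior pays: better predictions of what a person will do translate directly into which ads win the auction.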
As shown above, the three main spam-related activities, according to Facebook’s researchers, are fake profiles, creepers and compromised accounts. All these activities are categorized as spam because they can create multiple/inauthentic profiles of people, or undisciplined subjects who can distort accurate knowledge production, which can harm Facebook’s business model. The main characteristics of such ‘spammy’ behaviors are identical behavioral pattern and volume, which means their rhythms are the same and thus easier to spot as irregular. In this way, and similar to the digital advertising industry’s web metric standardization, the boundary between the healthy, human body and the problematic, robotic one is enacted. But such definitions are constantly changing, and the ‘right’ rhythmedia is always in the process of production.
However, it is important to keep in mind that Facebook’s measurements and enforcement do not always work. As Jonathan Albright, Director of the Tow Center at Columbia University, shows in a series of articles called the “Micro-Propaganda Machine”, some Pages have managed to skew metrics while Facebook did not suspend or remove problematic Pages. As he ←241 | 242→argues, these Pages were “exploiting the platform’s measurement features and post-engagement metrics—behaviors that seem to have resulted in the inflation of the numbers of likes, shares, and video views reported by these pages” (Albright, 2018). This pattern of failing to properly enforce its rules against Pages and actors who have exploited the platform is no coincidence. Engagement, whether it is disinformation, hate, racism or sexism, brings profit, and for Facebook that is the main value. ‘Authenticity’ of behavior is only spammy when it harms the business model and brand reputation; therefore, as Albright points out, Facebook only removed problematic Pages after it was sued for inflating metrics in October 2018 (Welch, 2018).
The human filters who are in charge of removing problematic things on Facebook are also employed by the company but operate as silent processors. This architecture design is chosen so that people will think this is the ‘organic’ way the algorithms operate: a real-time experience. In this way, Facebook avoids being accountable for the decision-making these workers do on its behalf. The first type of worker is the content moderator, who removes content that has been reported by people or that is forbidden according to the guidelines Facebook gives such employees. These workers are usually low-waged and have to operate within seconds, making their behavior as similar to algorithms as possible. Their rhythm, like that of the telephone operators, must be fast and efficient, machine-like.
Such experiences of immediacy, selling ‘real-time’, are exactly the reason why content moderators are not celebrated as a branding device like the telephone operators. However, CCMs keep social media’s competitive edge. So, “despite Facebook investing heavily in artificial intelligence and more automated means of content moderation, the company acknowledges that humans are here to stay” (Koebler and Cox, 2018). As automation of services and immediacy become the standard of experience on the internet, it is important for companies such as Facebook (but also others such as Google and Amazon) to argue that their algorithms are operating without any human intervention.
Content moderators work in ‘factories’, located either in people’s homes (if they are Mechanical Turk workers) or in special centers in Asia where many workers are crammed into small cubicles and cannot talk with one another. Their work is made to feel ‘alienating’ (as one moderator put it above) by the interface design, which, like that of the telephone operators and of the people who use these services, orders them in individualized ways. This is meant to de-politicize them, to prevent workers, whether they are free-labor users or low-waged content moderators, from organizing and unionizing. In this way, personalization is a powerful strategy to restructure the territory in order to un-crowd people from political discussion, thought and action.
The other workers are known as the Feed Quality Panel, and they are meant to provide more meaningful input about people’s behavior. By doing so, they help Facebook expand its listening capacities and learn how to modify different design and algorithmic features to push as many advertisements as possible without irritating users. People, then, are paramount to the functioning of Facebook because the service cannot count solely on algorithms and architecture design in order to operate its medium.
The last filtering machine is people themselves, who are reproduced into several data subjects, most of the time without their knowledge: the sender, receiver, producer, message, communication channel and, most importantly, the filter. Therefore, they must be trained to behave according to Facebook’s conception of correct behavior and to use the tools Facebook provides for their ‘intended use’, as the company puts it. People are also meant to understand their relations according to Facebook’s measuring units, which the platform hopes will encourage them to participate more.
At the same time, Facebook also encourages listening actions that do not receive visible cues, since these give the platform more information about how to restructure the territory to yield more value. People are the most valuable asset here, as they train the newsfeed algorithm to be more tailored to their interests with every like, survey completed, and report of an uninteresting topic. However, it is important to note that, although people’s feedback loops are important for the development of Facebook as a multi-layered communication medium, including its algorithms and architecture, that feedback will only be taken into account if it serves the Facebook business model.
Importantly, the way Facebook’s territory is ordered is influenced not only by algorithms but also by the users, ‘shadow users’, Facebook’s product managers, sites that embed social plugins, spammers, journalists, legislators, Facebook’s affiliates and, potentially, other actors. It involves both human and non-human actors. The weight, relevance and impact of each of these actors can change and mutate for various reasons and under various conditions, and not only because of a change to the newsfeed algorithm. Giving more weight to the agency of algorithms takes agency away from humans, outsourced workers, material and immaterial constellations, changing business models and deals, and the complex processes between all of these.
Concealing these rhythmedia considerations helps avoid questions about how this ordering affects the way people understand their subjectivities, politics, news and other topics, and how they can behave in these territories. For example, in 2015 (Eulenstein and Scissors, 2015) and 2016 (Backstrom, 2016) Facebook made algorithmic tweaks to prioritize engagements with friends and family, which significantly decreased people’s interactions with credible news outlets. According to Jennifer Grygiel, this rhythmedia may have shaped people’s opinions and voting behavior in the 2016 US election (Grygiel, 2019). As more platforms come under government scrutiny, understanding and revealing rhythmedia practices can help citizens demand regulation and change in the way such companies order people’s mediated experiences. People should be able to decide what rhythms they want, because being social goes beyond the individual: it’s not personal.
1. Free labour in the context of new media is a concept developed by Tiziana Terranova (2000). Coining the term even before platforms appeared and exacerbated this work ‘opportunity’, Terranova captured the way people work in digital environments voluntarily, for free, while feeling enjoyment and being exploited.
2. Veritasium is an educational science YouTube channel, created by Derek Muller in 2011.
3. Conversion in advertising means that the user has performed some kind of action that was desired/requested by the advertiser, usually visiting the external website linked to the ad; i.e. the advertiser has managed to ‘convert’ the behavior of the user due to the ad.
4. Compromised accounts ‘are accounts where the legitimate owner has lost complete or partial control of their credentials to an attacker. The attacker can be a phisher either automated or human, or a malware agent of some form’ (Stein et al., 2011: 3).
Alaimo, C., & Kallinikos, J. (2017). Computing the everyday: Social media as data platforms. The Information Society, 33(4), 175–191.
Andreou, A., Venkatadri, G., Goga, O., Gummadi, K. P., Loiseau, P., & Mislove, A. (2018). Investigating ad transparency mechanisms in social media: A case study of Facebook’s explanations. In The network and distributed system security symposium (NDSS), San Diego, CA.
Angwin, J., and Parris, T. (2016). Facebook Lets Advertisers Exclude Users by Race. Propublica. Available at: https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race (Accessed on 2 December 2019).
Angwin, J., Scheiber, N., and Tobin, A. (2017). Facebook Job Ads Raise Concerns About Age Discrimination. New York Times. Available at: https://www.nytimes.com/2017/12/20/business/facebook-job-ads.html (Accessed on 2 December 2019).
Backstrom, L. (2013). News feed FYI: A window into news feed. Available at: https://www.facebook.com/business/news/News-Feed-FYI-A-Window-Into-News-Feed (Accessed on 22 April 2019).
Bakshy, E., Eckles, D., Yan, R., & Rosenn, I. (2012). Social influence in social advertising: Evidence from field experiments. In Proceedings of the 13th ACM conference on electronic commerce (pp. 146–161). New York, NY: ACM.
Berners-Lee, T. (2010). Long live the web. Scientific American, 303(6), 80–85.
Bernstein, M. S., Bakshy, E., Burke, M., & Karrer, B. (2013). Quantifying the invisible audience in social networks. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 21–30). New York, NY: ACM.
Blue, V. (2013). Anger mounts after Facebook’s ‘shadow profiles’ leak in bug [Online] Available at: http://www.zdnet.com/article/anger-mounts-after-facebooks-shadow-profiles-leak-in-bug/ (Accessed on 22 April 2019).
Boland, B. (2014). Organic reach on Facebook: Your questions answered. Facebook for Business. Available at: https://www.facebook.com/business/news/Organic-Reach-on-Facebook (Accessed on 22 April 2019).
Bosworth, A. (2016). Bringing better Ads to people. Facebook Newsroom. Available at: https://newsroom.fb.com/news/2016/05/bringing-people-better-ads/ (Accessed on 22 April 2019).
Bucher, T. (2012a). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164–1180.
Bucher, T. (2012b). A technicity of attention: How software ‘makes sense’. Culture Machine, 13.
Bucher, T. (2016). The algorithmic imaginary: Exploring the ordinary effects of Facebook algorithms. Information, Communication & Society, 4462, 1–15.
Bucher, T. (2018). If… Then: Algorithmic power and politics. Oxford, UK: Oxford University Press.
Burke, M., & Develin, M. (2016, February). Once more with feeling: Supportive responses to social sharing on Facebook. In Proceedings of the 19th ACM conference on computer-supported cooperative work & social computing (pp. 1462–1474). New York, NY: ACM.
Burke, M., Marlow, C., & Lento, T. (2009). Feed me: Motivating newcomer contribution in social network sites. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 945–954). New York, NY: ACM.
Burke, M., Marlow, C., & Lento, T. (2010). Social network activity and social well-being. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 1909–1912). New York, NY: ACM.
Chan, H. K. (2009). ‘I like this’. Facebook Newsroom. Available at: https://www.facebook.com/notes/facebook/i-like-this/53024537130/ (Accessed on 22 April 2019).
Chen, A. (2012). Inside Facebook’s outsourced anti-porn and Gore Brigade, where ‘camel toes’ are more offensive than ‘crushed heads’. Available at: https://gawker.com/5885714/inside-facebooks-outsourced-anti-porn-and-gore-brigade-where-camel-toes-are-more-offensive-than-crushed-heads (Accessed on 22 April 2019).
Chen, A. (2014). The laborers who keep dick pics and beheadings out of your Facebook feed. Available at: https://www.wired.com/2014/10/content-moderation/ (Accessed on 22 April 2019).
Chikofsky, E. J., & Cross, J. H. (1990). Reverse engineering and design recovery: A taxonomy. IEEE Software, 7(1), 13–17.
Crawford, K., & Gillespie, T. (2016). What is a flag for? Social media reporting tools and the vocabulary of complaint. New Media & Society, 18(3), 410–428.
Das, S., & Kramer, A. (2013, June). Self-censorship on Facebook. In Seventh international AAAI conference on weblogs and social media.
Eulenstein, M., and Scissors, L. (2015). Balancing Content from Friends and Pages. Facebook Newsroom. Available at: https://newsroom.fb.com/news/2015/04/news-feed-fyi-balancing-content-from-friends-and-pages/ (Accessed on 15 October 2019).
Facebook. (2010). The value of a liker. Facebook Newsroom. Available at: https://www.facebook.com/note.php?note_id=150630338305797 (Accessed on 22 April 2019).
Facebook. (2011). National cybersecurity awareness month updates. Facebook Security. Available at: https://www.facebook.com/notes/facebook-security/national-cybersecurity-awareness-month-updates/10150335022240766/ (Accessed on 22 April 2019).
Facebook. (2012). What happens after you click ‘Report’. Facebook Safety. Available at: https://www.facebook.com/notes/facebook-safety/what-happens-after-you-click-report/432670926753695 (The link is no longer active).
Facebook. (2013). Important message from Facebook’s White Hat Program. Facebook Security. Available at: https://www.facebook.com/notes/facebook-security/important-message-from-facebooks-white-hat-program/10151437074840766 (Accessed on 22 April 2019).
Facebook. (2014). Reducing overly promotional page posts in news feed. Facebook Newsroom. Available at: https://newsroom.fb.com/news/2014/11/news-feed-fyi-reducing-overly-promotional-page-posts-in-news-feed/ (Accessed on 22 April 2019).
Facebook. (2015a). Statement of rights and responsibilities. Available at: https://www.facebook.com/legal/terms (Accessed on 22 April 2019).
Facebook. (2015b). Showing relevance scores for Ads on Facebook. Facebook Business. Available at: https://www.facebook.com/business/news/relevance-score (Accessed on 22 April 2019).
Fisher, E. (2015). Class struggles in the digital frontier: Audience labour theory and social media users. Information, Communication & Society, 18, 1–15.
Gehl, R. W. (2014). Reverse engineering social media. Philadelphia, PA: Temple University Press.
Gerlitz, C., & Helmond, A. (2013). The like economy: Social buttons and the data-intensive web. New Media & Society, 15(8), 1348–1365.
Gillespie, T. (2014). The relevance of algorithms. In Media technologies: Essays on communication, materiality, and society (p. 167). Cambridge, MA: MIT Press.
Gillespie, T. (2018a). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. New Haven, CT: Yale University Press.
Gillespie, T. (2018b). Moderation is a Commodity. Techdirt. Available at: https://www.techdirt.com/articles/20180206/09403839164/moderation-is-commodity.shtml (Accessed on 13 August 2018).
Glaser, A. (2018). Why Does Facebook Always Need a Shove to Deal With Hate Speech? Slate. Available at: https://slate.com/technology/2018/09/kamala-harris-grilled-sheryl-sandberg-about-facebooks-struggles-with-hate-speech.html (Accessed on 9 September 2018).
Grosser, B. (2014). What do metrics want? How quantification prescribes social interaction on Facebook. Computational Culture: A Journal of Software Studies, 4. Available at: http://computationalculture.net/article/what-do-metrics-want (Accessed on 22 April 2019).
Grygiel, J. (2019). Should Facebook have a “quiet period” of no algorithm changes before a major election? Niemanlab. Available at: https://www.niemanlab.org/2019/07/should-facebook-have-a-quiet-period-of-no-algorithm-changes-before-a-major-election/ (Accessed on 15 October 2019).
Hao, K. (2019). Facebook’s ad-serving algorithm discriminates by gender and race. Technology Review. Available at: https://www.technologyreview.com/s/613274/facebook-algorithm-discriminates-ai-bias/ (Accessed on 15 October 2019).
Helmond, A. (2015). The platformization of the web: Making web data platform ready. Social Media + Society, 1(2). Available at: https://journals.sagepub.com/doi/full/10.1177/2056305115603080 (Accessed on 22 April 2019).
Hern, A. (2019). Facebook uploaded email contacts of 1.5 m users without consent. The Guardian. Available at: https://www.theguardian.com/technology/2019/apr/18/facebook-uploaded-email-contacts-of-15m-users-without-consent (Accessed on 22 April 2019).
Hicks, M. (2010). Building the social web together. Facebook. Available at: https://www.facebook.com/notes/facebook/building-the-social-web-together/383404517130/ (Accessed on 22 April 2019).
Interactive Advertising Bureau. (2009). Social media Ad metrics definitions. Available at: https://www.iab.com/guidelines/social-advertising-best-practices/ (Accessed on 22 April 2019).
Irani, L. (2015). The cultural work of microwork. New Media & Society, 17(5), 720–739.
John, N. A., & Nissenbaum, A. (2019). An agnotological analysis of APIs: Or, disconnectivity and the ideological limits of our knowledge of social media. The Information Society, 35(1), 1–12.
Jones, M. (2014). Keeping Facebook activity authentic. Facebook Security. Available at: https://www.facebook.com/notes/facebook-security/keeping-facebook-activity-authentic/10152309368645766 (Accessed on 22 April 2019).
Karppi, T. (2018). Disconnect: Facebook’s affective bonds. Minneapolis, MN: University of Minnesota Press.
Kember, S., & Zylinska, J. (2012). Life after new media: Mediation as a vital process. Cambridge, MA: MIT Press.
Koebler, J., & Cox, J. (2018). The impossible job: Inside Facebook’s struggle to moderate two billion people. Motherboard. Available at: https://motherboard.vice.com/en_us/article/xwk9zd/howfacebook-content-moderation-works (Accessed on 22 April 2019).
Levy, S. (2015). How 30 random people in Knoxville may change your Facebook News Feed. Available at: https://medium.com/backchannel/revealed-facebooks-project-to-find-out-what-people-really-want-in-their-news-feed-799dbfb2e8b1 (Accessed on 22 April 2019).
Lynley, M. (2014). This is how an Ad gets placed in your Facebook News Feed: A peek under the hood of one of Facebook’s most important algorithms. Available at: https://www.buzzfeed.com/mattlynley/this-is-how-an-ad-gets-placed-in-your-facebook-news-feed?utm_term=.klG0Nm3BWV#.iwLMaPQgdV (Accessed on 22 April 2019).
Mager, A. (2012). Algorithmic ideology: How capitalist society shapes search engines. Information, Communication & Society, 15(5), 769–787.
Mannes, J. (2017). Machine intelligence is the future of monetization for Facebook. Tech Crunch. Available at: https://techcrunch.com/2017/04/21/machine-intelligence-is-the-future-of-monetization-for-facebook/ (Accessed on 22 April 2019).
Marra, C., & Souroc, A. (2015). News Feed FYI: Building for all connectivity. Facebook Newsroom. Available at: https://newsroom.fb.com/news/2015/10/news-feed-fyi-building-for-all-connectivity/ (Accessed on 22 April 2019).
Martinez, A. G. (2018). How Trump conquered Facebook—Without Russian ads. Wired. Available at: https://www.wired.com/story/how-trump-conquered-facebookwithout-russian-ads (Accessed on 22 April 2019).
Oremus, W. (2016). Who controls your Facebook Feed. Slate. Available at: http://www.slate.com/articles/technology/cover_story/2016/01/how_facebook_s_news_feed_algorithm_works.3.html (Accessed on 22 April 2019).
Owens, E., & Turitzin, C. (2014). News feed FYI: Cleaning up news feed spam. Available at: https://newsroom.fb.com/news/2014/04/news-feed-fyi-cleaning-up-news-feed-spam/ (Accessed on 22 April 2019).
Punsmann, G. B. (2018). Three months in hell. Süddeutsche Zeitung. Available at: http://sz-magazin.sueddeutsche.de/texte/anzeigen/46820/Three-months-in-hell (Accessed on 15 September 2019).
Roberts, S. T. (2016). Commercial content moderation: Digital laborers’ dirty work. In S. U. Noble & B. Tynes (Eds.), The intersectional Internet: Race, sex, class and culture online (pp. 147–159). New York, NY: Peter Lang.
Rogers, R. (2013). Digital methods. Cambridge, MA: MIT Press.
Roosendaal, A. (2010). Facebook tracks and traces everyone: Like this! Social Science Research Network. Available at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1717563. (Accessed on 27 August 2019).
Sanghvi, R. (2006). Facebook gets a Facelift. https://www.facebook.com/notes/facebook/facebook-gets-a-facelift/2207967130/ (Accessed on 22 April 2019).
Schroepfer, M. (2014). Research at Facebook. Facebook Newsroom. Available at: http://newsroom.fb.com/news/2014/10/research-at-facebook/ (Accessed on 22 April 2019).
Skeggs, B., & Yuill, S. (2016). The methodology of a multi-model project examining how Facebook infrastructures social relations. Information, Communication & Society, 19(10), 1356–1372.
Solon, O., & Farivar, C. (2019). Mark Zuckerberg leveraged Facebook user data to fight rivals and help friends, leaked documents show. Available at: https://www.nbcnews.com/tech/social-media/mark-zuckerberg-leveraged-facebook-user-data-fight-rivals-help-friends-n994706 (Accessed on 22 April 2019).
Stein, T., Chen, E., & Mangla, K. (2011). Facebook immune system. In Proceedings of the 4th workshop on social network systems. New York, NY: ACM.
Summers, N. (2009). Facebook’s ‘Porn Cops’ are key to its growth. Newsweek. Available at: https://www.newsweek.com/facebooks-porn-cops-are-key-its-growth-77055 (Accessed on 22 April 2019).
Tas, S., & Wang, M. (2015). News feed FYI: A better understanding of ‘Hide’. Facebook Newsroom. Available at: https://newsroom.fb.com/news/2015/07/news-feed-fyi-a-better-understanding-of-hide/ (Accessed on 22 April 2019).
Taylor, S. J., Bakshy, E., & Aral, S. (2013). Selection effects in online sharing: Consequences for peer adoption. In Proceedings of the fourteenth ACM conference on electronic commerce (pp. 821–836). New York, NY: ACM.
Vaidhyanathan, S. (2018). Antisocial media: How Facebook disconnects us and undermines democracy. New York, NY: Oxford University Press.
Van Dijck, J. (2013). The culture of connectivity: A critical history of social media. New York, NY: Oxford University Press.
Wang, M., & Zhou, Y. (2015). Taking into account more actions on videos. Facebook Newsroom. Available at: https://newsroom.fb.com/news/2015/06/news-feed-fyi-taking-into-account-more-actions-on-videos/ (Accessed on 22 April 2019).
Welch, B., & Zhang, X. (2014). Showing better videos. Facebook Newsroom. Available at: https://newsroom.fb.com/news/2014/06/news-feed-fyi-showing-better-videos/ (Accessed on 22 April 2019).
Welch, C. (2018). Facebook may have knowingly inflated its video metrics for over a year. The Verge. Available at: https://www.theverge.com/2018/10/17/17989712/facebook-inaccurate-video-metrics-inflation-lawsuit (Accessed on 3 December 2019).
York, J., & McSherry, C. (2019). Content Moderation is Broken. Let Us Count the Ways. EFF. Available at: https://www.eff.org/deeplinks/2019/04/content-moderation-broken-let-us-count-ways (Accessed on 5 December 2019).
Yu, A., & Tas, S. (2015). News Feed FYI: Taking into account time spent on stories. Facebook Newsroom. Available at: http://newsroom.fb.com/news/2015/06/news-feed-fyi-taking-into-account-time-spent-on-stories/ (Accessed on 22 April 2019).
Zhang, C., & Chen, S. (2016). News Feed FYI: Using qualitative feedback to show relevant stories. Facebook Newsroom. Available at: http://newsroom.fb.com/news/2016/02/news-feed-fyi-using-qualitative-feedback-to-show-relevant-stories/ (Accessed on 22 April 2019).
Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75–89.
Zuckerberg, M. (2010). Building the social web together. The Facebook Blog. Available at: https://www.facebook.com/notes/facebook/building-the-social-web-together/383404517130/ (Accessed on 22 April 2019).
Zuckerberg, M. (2011). Our commitment to the Facebook community. Available at: https://www.facebook.com/notes/facebook/our-commitment-to-the-facebook-community/10150378701937131/ (Accessed on 22 April 2019).