The Rhetorical Implications of Data Aggregation: Becoming a “Dividual” in a Data-Driven World

Abstract

Social media platforms have experienced increased scrutiny following scandals like the Facebook–Cambridge Analytica revelations. Nevertheless, these scandals have not deterred the general public from using social media, even as these events have motivated critique of the privacy policies users agree to in order to access them. In this article, we argue that approaches to teaching data and privacy in the classroom would benefit from attending to social media privacy policies and the rhetorical implications of data aggregation: not only what these policies say, but also what cultural, social, and economic impacts they have and for whom. We consider what it means for users to have “meaningful access” and offer an investigative framework for examining data aggregation through three areas of data literacy: how data is collected, how data is processed, and how data is used. We posit Cheney-Lippold’s “measurable types” as a useful theoretical tool for examining data’s complex, far-reaching impacts and offer an assignment sequence featuring rhetorical analysis and genre remediation.

Introduction: Gaining “Meaningful Access” to Privacy Policies

There is an increasing need to attend to the role social media plays in our society as more of the work of maintaining relationships moves to online platforms. While platforms like Facebook and YouTube have experienced increased public scrutiny, a 2019 Pew Research Center study found that social media usage remained relatively unchanged from 2016 to 2018, with seven out of ten adults reporting they rely on social media platforms to get information (Perrin and Anderson 2019). International data-collection scandals like Cambridge Analytica and numerous congressional hearings on Big Tech’s power in the United States have not deterred the general public from using social media. Everyday users are increasingly aware that their privacy is compromised by using social media platforms, and many agree that Silicon Valley needs more regulation (Perrin and Anderson 2019; Pew Research Center 2019). Yet many of these same users continue to rely on social media platforms like Facebook, Twitter, and TikTok to inform themselves on important issues in our society.

Early teacher-scholars within the subfield of Computers and Writing worked within a fairly limited scope. They urged learning with and critiquing digital technologies that were more transparent because of their newness—visible technologies such as word-processing programs and computer labs. But today’s teachers and students must contend with a more ubiquitous and hidden field—the entire distributed and networked internet of personalized content based on internet surveillance strategies and data aggregation. The array of websites and apps students encounter in college includes learning management systems (Canvas, Blackboard, Google Classroom, Moodle), cloud storage spaces (Dropbox, OneDrive), project management tools (Basecamp, Trello), communication platforms (Slack, Teams), search engines (Google, Bing), professional and social branding (LinkedIn), online publishing (Medium, WordPress), social media (Facebook, Twitter, YouTube, Instagram, TikTok, Tumblr, WhatsApp, Snapchat), and all the various websites and apps students use in classrooms and in their personal lives. Each one of these websites and apps publishes a privacy policy that is accessible through small hyperlinks buried at the bottom of the page or through a summary notice of data collection in the app.

Usually long and full of legalese, privacy policies are often ignored by students (and most users) who simply click “agree” instead of reading the terms. As a result, users know little about the policies they have agreed to in order to keep using social media platforms. As Obar and Oeldorf-Hirsch find in their study “The Biggest Lie on the Internet: Ignoring the Privacy Policies and Terms of Service Policies of Social Networking Services,” undergraduate students in the U.S. find privacy policies to be “nothing more than an unwanted impediment to the real purpose users go online—the desire to enjoy the ends of digital production” (Obar and Oeldorf-Hirsch 2020, 142). To this point, the 2019 Pew Research Center survey “Americans and Digital Knowledge” found that only 48% of Americans understood how privacy policies function as contracts between themselves and a website concerning the use of their data. Through their alluring affordances and obscure privacy policies, social media platforms hinder users’ ability to meaningfully engage with the data exploitation these platforms rely on.

Americans have long turned to policy for contending with sociocultural issues. While breaches of user privacy energize the public, the scale of social media platforms makes it difficult to fully comprehend these violations of trust; as long as social media works as we expect it to, users rarely question what social media platforms are doing behind the scenes. As mentioned earlier, privacy policies are also oftentimes long, jargon-filled, and unapproachable to the average user. How many of us can say we have read, let alone comprehended, all of the fine print of the privacy policies of the platforms we choose to engage on every day? Doing so requires what digital rhetorics scholar Adam J. Banks refers to in Race, Rhetoric, and Technology as “meaningful access,” or access not only to the technology itself but also to the knowledge, experience, and opportunities necessary to grasp its long-term impacts and the policies guiding its development and use (Banks 2006, 135). Meaningful access as a concept works against restrictive processes such as digital redlining,[1] in which access (and thus meaningful access) is restricted for certain users based on the filtering preferences of their internet access provider. Privacy policies are obtainable, but they are not truly accessible: users may be able to pull up these documents, but they cannot make meaningful, useful sense of them.

Teachers and students need to rhetorically engage with social media privacy policies in order to learn about data and privacy: we need to understand not only what these policies say, but also what impacts they have and for whom.[2] We also need to determine who has meaningful access and why that might be. As Angela M. Haas (2018) explains, rhetoric concerns the cultural, social, economic, and political implications of how we “negotiate” information; she specifies digital rhetoric as concerned with the “negotiation of information” when we interface with technology. Safiya Umoja Noble (2018) develops a related argument in Algorithms of Oppression: How Search Engines Reinforce Racism, suggesting that search engine algorithms reflect the values and biases of those who create them; because algorithmic processes extend into hiring practices and mortgage lending evaluations, big-data practices reproduce pre-existing social inequities. To really grasp these platforms and the policies that govern them, we need to learn about data generation and its wide-reaching, real-world impact on how we connect and interact with other people.

By learning to critically engage with the policies that shape their digital experiences, students develop an important skill set: they can identify how social media algorithms use data collected from users to direct attention in ways that serve the platforms more than the users themselves, generating clicks, repeat usage, and thus revenue from ad impressions rather than providing the content the user actually seeks. Students might also think about the ways these privacy policies structure the information-filtering and data-collection functions on which these platforms depend, even as such policies fail to protect users from the socio-economic and racial disparities their algorithmic infrastructures re-entrench (Gilliard and Culik 2016). To this end, it can be useful to introduce concepts like data aggregation and digital redlining, which equip users with a better understanding of how data collection works and of its far-reaching rhetorical effects. It is thus important to understand privacy policies as a writing genre, a typified form of writing that accomplishes a desired rhetorical action (e.g., providing social media platforms with the legal framework to maximize data usage).

As writing studies scholars Irene L. Clark and Andrea Hernandez (2011) explain, “When students acquire genre awareness, they are not only learning how to write in a particular genre. They gain insight into how a genre fulfills a rhetorical purpose” (66–67). By investigating the genre of privacy policies, students gain both transferable skills and crucial data literacy that will serve them as writers, media consumers, and, more basically, as citizens. Working within this niche genre provides insights both into the rhetoric of privacy policies per se, as well as into the use of rhetoric and data aggregation for social manipulation.

One way to deepen student understanding of a genre is through remediation, or the adaptation of the content of a text into a new form for a potentially different audience (Alexander and Rhodes 2014, 60). Remediations require both a comprehension of the original text’s content and an awareness of the intended audience’s experience engaging with that text. Remediation provides students with an opportunity to put their knowledge into practice regardless of the resulting form. For example, a privacy policy could be remediated as an infographic that focuses on key ideas from the policy concerning data usage and explains them in ways a lay public with little prior knowledge could understand.

Ultimately, a multi-pronged approach is required to gain meaningful access to privacy policies. In the following section, we provide a framework with terms and questions that consider how data is collected, processed, and used. We direct attention to digital studies scholar John Cheney-Lippold’s theory of “measurable types,” the algorithmic categories created from aggregated user data, as a framework in our development of an assignment sequence that tasks students with performing two remediations—one that focuses on making information more digestible and another that centers long-term effects. The primary audience for this article is instructors who are new to digital surveillance and big-data concepts and are looking to orient themselves with theory as they create assignments about this emerging issue for their classroom.

How Is Data Collected, Processed, and Used?

Data is the fuel that keeps our social media platforms running. Fortunately for companies like Facebook, Twitter, and TikTok, data is generated and captured constantly on the internet. Every website we visit, every story we share, every comment we post generates data. Some of this information comes in the form of cookies, small files a website places in your browser to keep track of the pages you view and what you click on while visiting them. Capturing user behavior across the internet is accomplished largely through third-party “tracking cookies,” which differ from the “session cookies” that primarily keep a single visit working (maintaining a login or a shopping cart, for example) and typically expire when the browser closes. Tracking cookies persist and follow users from site to site, and they are so important to a platform like Facebook’s business model that the company maintains a separate policy for them: “We use cookies to help us show ads and to make recommendations for businesses and other organizations to people who may be interested in the products, services or causes they promote” (Facebook n.d.). Big Tech companies and their advertising partners use this information to infer users’ interests from their online behaviors.
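
To make the distinction concrete, the sketch below (a minimal illustration in Python, using hypothetical cookie headers and site names rather than any real platform’s) classifies incoming Set-Cookie headers by whether they persist beyond the browser session and whether they are scoped to a third-party domain, the two traits that make tracking cookies useful for following users across sites.

```python
from http.cookies import SimpleCookie

# Hypothetical Set-Cookie headers, as a browser might receive them while
# loading a single page that embeds a third-party advertising script.
RAW_HEADERS = [
    "sessionid=abc123; Path=/; HttpOnly",                              # first-party, expires with the session
    "ad_id=42e1f; Domain=.tracker.example; Max-Age=31536000; Secure",  # third-party, persists for a year
]

FIRST_PARTY = "news.example"  # the site the user actually typed into the address bar

def classify(raw_header, first_party=FIRST_PARTY):
    """Label a cookie as session vs. persistent and first- vs. third-party."""
    cookie = SimpleCookie()
    cookie.load(raw_header)
    name, morsel = next(iter(cookie.items()))
    persistent = bool(morsel["expires"] or morsel["max-age"])
    domain = morsel["domain"] or first_party
    third_party = not domain.lstrip(".").endswith(first_party)
    return (f"{name}: {'persistent' if persistent else 'session-only'}, "
            f"{'third-party' if third_party else 'first-party'} ({domain})")

for header in RAW_HEADERS:
    print(classify(header))
# sessionid: session-only, first-party (news.example)
# ad_id: persistent, third-party (.tracker.example)
```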

Our internet activity on social media platforms creates metadata, another form of data web companies collect and use to track our online activity.[3] Metadata is not the content of our posts and messages but the information about who and/or what we interact with and how often those interactions occur. While quantitative forms of information may appear more trustworthy and objective, this seemingly neutral data has in actuality been stripped of important rhetorical context. Digital humanities scholar Johanna Drucker (2014) suggests that we refer to data as “capta,” since data is not information that perfectly represents whatever was observed so much as information that is “captured” with specific purposes in mind. Capta cannot fully stand in for us, but it can be used to compare us to other users who “like” and “share” similar things. Metadata is therefore valuable to collect because it reveals what we do online more efficiently than the meaning of our content alone. Rather than try to understand what we are communicating, computers process this quantified information and use it to calculate the probability that we will engage with certain media and buy certain products (van Dijck and Poell 2013, 10). So, even though data collection requires us to give up our privacy, the stakes may seem relatively low considering that we are presumably getting “free” access to the platform in exchange. Coming to terms with how data impacts our society requires understanding the ostensibly predictive capacities of data aggregation, because the data we consciously share is never separate from other data, including data from other users and the data we don’t realize we are sharing (e.g., location and time).
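
A small sketch can show how little content is needed for metadata to become descriptive. The log below is hypothetical and deliberately simplified (Python standard library only): it records only who interacted with what, how, and when, yet simple counts over it already produce the kind of machine-comparable profile described above.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Interaction:
    """One metadata record: no post text, only the shape of the activity."""
    user: str
    target: str     # the account, page, or post acted upon
    action: str     # 'like', 'share', 'comment', 'view'
    timestamp: int  # seconds since epoch

# Hypothetical capta: captured for the purpose of comparison, not representation.
log = [
    Interaction("user_a", "page_gaming", "like", 1_700_000_000),
    Interaction("user_a", "page_gaming", "share", 1_700_000_600),
    Interaction("user_a", "page_news", "view", 1_700_001_200),
    Interaction("user_b", "page_gaming", "like", 1_700_002_000),
]

# Counting interactions per (user, target) pair quantifies behavior without
# ever reading a single message.
profile = Counter((i.user, i.target) for i in log)
print(profile.most_common())
# [(('user_a', 'page_gaming'), 2), (('user_a', 'page_news'), 1), (('user_b', 'page_gaming'), 1)]
```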

Data is what powers social media platforms, but their rhetorical power comes from how data is processed into predictions about our behavior online. On its own, our individual data cannot generate accurate recommendations; data aggregation makes recommendations possible by establishing patterns “made from a population, not one person” (Cheney-Lippold 2017, 116).[4] These “dividual” identities, as digital studies scholar Cheney-Lippold explains via digital theorist Tiziana Terranova (2004), are the algorithmic classifications of individual users based on the data generated and processed about them. Indeed, we each have our own personal preferences, but we are also interested in what captures the attention of the larger public: we care about the most recent YouTube sensation or the latest viral video. When platforms like YouTube make video recommendations, they are comparing data collected from your viewing behavior to a massive cache of data aggregated from the viewing behavior of many other users.
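
The sketch below illustrates this aggregation logic in miniature, using a made-up user–item matrix and a basic nearest-neighbor comparison; it is a toy example of the general principle, not a reconstruction of any platform’s actual recommender.

```python
import numpy as np

# Hypothetical aggregate data: rows are users, columns are videos, 1 = watched/liked.
# The last row is "you".
videos = ["cats", "chess", "speedruns", "cooking"]
likes = np.array([
    [1, 0, 1, 0],   # user 1
    [1, 0, 1, 1],   # user 2
    [0, 1, 0, 1],   # user 3
    [1, 0, 0, 0],   # you: so far, only "cats"
])

def cosine(a, b):
    """Similarity between two behavior vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

you = likes[-1]
# Compare "you" against the aggregated population, not against your own history.
similarity = np.array([cosine(you, other) for other in likes[:-1]])
# Score unseen videos by the behavior of the users most similar to you.
scores = similarity @ likes[:-1]
recommendations = [v for v, s, seen in zip(videos, scores, you) if not seen and s > 0]
print(recommendations)  # ['speedruns', 'cooking'], driven by the two most similar users
```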

A primary use of data is in the personalization of online experiences. Social media platforms function under the assumption that we want our online experience to be customized and that we are willing to give up our data to make that happen. Personalization may appear to increase our access to information because it helps us filter through the infinite content available to us, but in actuality it has to restrict what we pay attention to in order to work. This filtering can result in digital redlining, which limits the information users have access to based on the filtering preferences of internet access providers (Gilliard and Culik 2016). Internet service providers shape users’ online experiences through both privacy policies and acceptable use policies. Not unlike the way banks used racist strategies to limit minority access to physical spaces, internet service providers (including universities) employ “acceptable use policies” to limit engagement with information pre-categorized as “inappropriate”; such filtering helps explain why various users might have very different perceptions of the same event. Practices like digital redlining reveal how personalization, however desirable, comes at the cost of weakening the consistent, shared information we rely on to reach consensus with other people. Ultimately, we embrace data aggregation and content personalization without considering their full implications for how we connect and communicate with one another and for how businesses and governments see and treat us.

Using Measurable Types to Investigate Privacy Policies

One helpful tool for analyzing how algorithms construct online experiences for different users is Cheney-Lippold’s concept of “measurable types.” Measurable types are algorithmically generated norms, or “interpretations of data that stand in as digital containers of categorical meaning” (Cheney-Lippold 2017, 19). Like dividual identities, measurable types are ever-changing categories created from aggregate user data without any actual input from the user. Essentially, measurable types assign users to categories that have very real impacts on them, but they do so from data collected according to definitions the users themselves know nothing about. The insidiousness of measurable types lies in how they automatically draw associations from user behaviors without providing any opportunity for users to critique or correct the “truths” scraped from their dividual data. For instance, most users might not see any adverse effects of being labeled a “gamer”; however, being classified as a “gamer” measurable type could also algorithmically align users with members of the #gamergate movement,[5] resulting in misogynist content spilling into their digital experiences. In this way, measurable types remove humans from the processes that operationalize their data into consequential algorithmic decisions made on their behalf.
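
The following sketch is a deliberately crude, entirely hypothetical illustration of how a measurable type might be assigned and then acted upon: arbitrary thresholds applied to behavioral signals, with no input from, or visibility to, the person being classified. All names and numbers are invented for illustration.

```python
# A hypothetical behavioral profile built from aggregated signals.
profile = {
    "hours_watching_game_streams": 11.5,
    "gaming_pages_followed": 4,
    "purchases_tagged_gaming": 0,
}

def assign_measurable_types(p):
    """Assign categories using thresholds the user never sees or agrees to."""
    types = set()
    if p["hours_watching_game_streams"] > 10 or p["gaming_pages_followed"] >= 3:
        types.add("gamer")  # the definition of "gamer" belongs to the platform, not the user
    return types

def targetable_audiences(types):
    """Downstream systems act on the label, not the person."""
    # Being a "gamer" silently places the user in adjacent ad and content audiences.
    adjacency = {"gamer": ["esports_ads", "gaming_forums", "gamergate_adjacent_content"]}
    return [aud for t in types for aud in adjacency.get(t, [])]

types = assign_measurable_types(profile)
print(types)                        # {'gamer'}
print(targetable_audiences(types))  # ['esports_ads', 'gaming_forums', 'gamergate_adjacent_content']
```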

Every social media platform has its own privacy policy “written for the express purpose of protecting a company or website operator from legal damages,” which outlines the data-collection practices permissible on the site and governs its use (Beck 2016, 70). As a framework, measurable types guide analysis of these policies with specific attention to the implications of how data is collected, processed, and used. Students in first-year courses in composition and technical communication, along with those studying communications, information technology, computer science, and education, are well suited to investigate these digital policy documents because many of them are social media users already. Analyzing privacy policies for social media platforms through the measurable types framework reveals to students that these policies are about more than simply their experience on the platform. In addition to prescribing user actions on these sites, the policies also directly impact students’ online experiences because they govern how data from activity on the platform is generated, aggregated, and then repurposed into measurable types. Privacy policies exist among a constellation of Terms of Service (ToS) documents, which offer robust opportunities to examine the impact data aggregation has for different entities and users. In other words, to really grapple with how a privacy policy works, it is helpful to examine a wide array of ToS documents in order to become familiar with these genres of digital policy.

The assignment sequence we offer for working with measurable types and social media privacy policies in the writing classroom includes an initial rhetorical analysis followed by two remediations. The rhetorical analysis assignment tasks students with examining choices within the privacy policy (e.g., temporality, transparency, and language) to demonstrate how critical information is relayed and to offer suggestions for making the policy more accessible for various audiences. While the goal of the two remediations together is “meaningful access”—not just understanding the policy itself but also the long-reaching impacts it will have—the first remediation focuses primarily on making the policy more comprehensible. Through a series of in-class activities, students learn about data aggregation, digital redlining, and measurable types before moving into a second, more intensive remediation in which they investigate the consequences of big data and their social media usage. Ultimately, using measurable types as a framework throughout this assignment sequence gives students a path to learn how their actions online shape not only their future experiences on the internet but also the constellation of user experiences in their local community and around the world.

Privacy policy rhetorical analysis and initial remediation

When performing a rhetorical analysis of a social media privacy policy, begin with heuristics to work through genre conventions: how audience, exigence, structure, form, and intention work to shape a genre and the social actions it encapsulates (Miller 2015, 69). Which users and non-users does this document potentially impact? How do specific rhetorical choices affect how critical information is taken up? What is the intent of the people who write and design these documents, and of the companies that publish them? Examining and discussing rhetorical choices within the privacy policy reveals how it addresses complex concepts such as data collection and aggregation—issues students must grapple with throughout the assignment sequence. The goal is to begin working through the aforementioned terminology to inform remediations that emphasize the rhetorical changes students would implement to make the policy more accessible for various audiences.

When approaching the genre for remediation, students should identify the changes they will implement to make the social media privacy policy more transparent and readable. Once they have identified those changes, they can choose a genre for the remediation; we imagine students might produce infographics, flyers, zines, podcasts, videos, and other genres during this part of the assignment sequence. Since social media privacy policies impact many students directly, ask them to consider what they would do to make the document’s information more accessible and digestible for users like themselves. Students could perform usability tests, hold focus groups, and ask peers (in class and in other classes) for feedback. They should also consider the temporality, transparency, and language of the document. When was the last time the policy was updated? What methods of data collection might be opaque or otherwise inaccessible to users? What rhetorical arguments does the policy make? Answering these questions helps students develop a sense of what it means to be an engaged digital citizen: the more comfortable they are analyzing the dynamics of these policies, the more likely they are to see themselves as digital citizens navigating the complexities of a data-driven digital society. Students focus more directly on how this data is used, and to what ends, in the second remediation, which considers the social, political, and economic implications of digital privacy and data aggregation.

Expanding the scope to amplify measurable types

The exchange of our personal information for accessing services online is among the most complex issues we must address when considering how data use is outlined in social media privacy policies. Therefore, students should build upon their initial remediation, paying attention to the far-reaching implications of practices like data aggregation which lead to data commodification. Cheney-Lippold’s measurable types help us understand how our online experiences are cultivated by the processes of big data—the information you have access to, the content you are recommended, the advertisements you are shown, and the classification of your digital footprint (Beck 2016, 70). The following classroom activities expand the scope of these conversations beyond social media privacy policies towards larger conversations concerning big data by making measurable types visible.

According to the Pew Research Center (2019), 90% of adults in the United States have access to the internet; however, this does not mean that all users get the same information. What we access online is curated by algorithmic processes, creating variable, often inequitable experiences. Digital redlining concerns the information you have access to online: like personalization, discussed above, it is “not only about who has access but also about what kind of access they have, how it’s regulated, and how good it is” (Gilliard and Culik 2016). Analysis should therefore center on the access issues that privacy policies could address, helping users better understand the myriad ways social media platforms limit access just as much as they distribute it. Since digital redlining creates different, inequitable experiences arranged according to measurable types, it is easy to observe, as Gilliard and Culik do, how this frequent practice extends beyond social media privacy policies and into our everyday lives. Even simple, familiar online actions like engaging with mainstream search engines (e.g., Google) can demonstrate how different measurable types yield different results.

The techniques used to investigate social media privacy policies are transferable to any policy about data collection. For example, Google is often criticized for mismanaging user privacy, just as social media platforms like Facebook face scrutiny for not protecting users’ information. To examine the cultural, economic, social, and political impacts of user privacy on Google, students can perform some basic searches while logged out of Google services and note the results that appear on the first few pages. Then, students can log into their Google accounts and compare how personalized results differ not only from the previous search results but also from the results provided to friends, family, and their peers. What information is more widely shared? What information feels more restricted and personalized? These questions help us process how measurable types contribute to differences in search results even among those in our own communities.

Internet advertisements are another way to see measurable types at work online. As with the Google searches above, we can easily observe the differences in the advertisements shown to one user compared to others, since search engine results have a considerable amount of bias built into them (Noble 2018). Moreover, visiting websites from different interest groups across the internet allows you to see how the advertisements shown on those pages are derived from the measurable types you belong to and from how you (knowingly or unknowingly) interact with the various plugins and trackers active on the sites you visit. Comparing how the advertisements on the same webpage differ among students builds awareness of how algorithmic identities differ among users and of what these advertisements suggest about each of them as a person or consumer—the composite of their measurable types. Facebook also has a publicly accessible ad database that allows anyone to view advertisements circulating on the platform, along with information about their cost, potential reach, and the basic demographics of users who actually viewed them.[6] Advertisements present various sites for analysis and are a useful place to start when determining what data must have been collected about us, because they provide a window into the measurable types we are assigned.
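
Instructors who want students to query the ad database programmatically can do so through Facebook’s Ad Library API. The sketch below is hedged: the endpoint, version number, and field names follow Meta’s public documentation at the time of writing and may have changed, and the access token is a placeholder obtained through a Facebook developer account.

```python
import requests

# Hedged sketch of querying Facebook's public Ad Library API (the "ad database"
# mentioned above). Verify the current API version and field names against
# Meta's documentation before use; ACCESS_TOKEN is a placeholder.
ACCESS_TOKEN = "YOUR_TOKEN_HERE"
ENDPOINT = "https://graph.facebook.com/v18.0/ads_archive"

params = {
    "search_terms": "climate",
    "ad_reached_countries": "['US']",
    "fields": "page_name,ad_creative_bodies,spend,impressions,demographic_distribution",
    "access_token": ACCESS_TOKEN,
}

response = requests.get(ENDPOINT, params=params)
for ad in response.json().get("data", []):
    # Spend, impressions, and demographic breakdowns hint at which
    # measurable types an advertiser paid to reach.
    print(ad.get("page_name"), ad.get("spend"), ad.get("demographic_distribution"))
```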

Internet advertisers are not the only stakeholders interested in data related to our measurable types. Governments are as well, invested as they are in assessing and managing risks to national security as they define it.[7] For instance, otherwise mundane internet activity (keyword searches, sharing content, and so on) could be a factor in a user being placed on a no-fly list. Artist and technologist James Bridle refers to these assigned algorithmic identities as “algorithmic citizenship,” a new form of citizenship in which your allegiance and your rights are continuously “questioned, calculated, and rewritten” by algorithmic processes using the data they capture from your internet activity writ large (Bridle 2016).[8] Algorithmic citizenship relies on users’ actions across the internet, whereas most users might reasonably assume that data collected on a social media platform would be contained and used only for that platform. Like citizenship in any country, algorithmic citizenship comes with its own set of consequences when a citizen deviates from an established norm. Not unlike the social ostracism a civilian faces from their community when they break laws, or appear to break laws, a user’s privacy and access are scrutinized when the user does not conform to the behavioral expectations overseen by government surveillance agencies like the National Security Agency (NSA).

Performing advanced remediations to account for algorithm-driven processes

Thinking through concepts like algorithmic citizenship and digital redlining helps us acknowledge the disproportionate impacts of algorithm-driven processes on users beyond the white, often heteronormative people for whom the technology was designed. Addressing algorithmic oppression on a theoretical level avoids settling for the short-sighted, strictly technological solutions to problems that are inherently social and cultural, a valuable perspective to consider for the second remediation. Therefore, in developing a second privacy policy remediation, students should consider not only their own experiences but the experiences of others in ways that mimic the aforementioned expansion from the individual to the dividual. This part of the assignment sequence promotes thinking about how online experiences are not equitable for all users by prompting students to investigate their measurable types and offer remediations that account for digital access issues like digital redlining or algorithmic citizenship. Some investigations into these digital modes of oppression will operate at the local, community level while others will operate at the much larger, societal level. Students might consider how their online shopping habits could influence where a new bus line is implemented in a future “smart city,” or how their internet browsing actions could influence which measurable types get flagged automatically for an invasive search by the TSA on their next flight overseas.

Students may choose to remediate the privacy policy into genres similar to those of the initial remediation assignment (e.g., infographics, videos). However, immersion in these policies for an extended time, over multiple and increasingly intensive inquiries, clarifies how social media privacy policies extend the digital divide perpetuated by inequitable access to technology and critical digital literacies. Concepts and questions to consider for this remediation include meaningful access, data aggregation, and digital tracking and surveillance techniques. Who has access to certain information and who does not? What user data is shared with different stakeholders and why? What data is being collected and stored? What norms are perpetuated in the development of technology and technological systems? This final assignment in the sequence provides a means to examine the material consequences of big-data technologies: the critical role measurable types play and the algorithmic processes that make them possible. In performing this work, we can better comprehend how data collection and aggregation enable systematic marginalization in our social, political, and economic infrastructures.

Discussion and Further Implications

Learning outcomes vary across classrooms, programs, and institutions, but instructors who choose to teach about data aggregation and social media privacy policies should focus on critical objectives related to genre analysis and performance, cultural and ethical (rhetorical) context, and demonstrating transferable knowledge. Focusing on each of these objectives when assessing remediations of privacy policies in the writing classroom helps students learn and master these concepts. Importantly, the magnitude of the grade matters: genre remediations of privacy policies should be among the highest-weighted assignments in a writing course, if not the highest, because of the complex concepts involved and the rigor of writing required to perform the work. Instructors should create and scaffold various lower-stakes assignments and activities throughout a sequence, unit, or course that build toward the aforementioned learning outcomes.

While scholars in rhetoric and composition have long theorized the nature of genre, instructors should emphasize that privacy policies, like all genres, are social constructs (Miller 2015). Assessment should focus on how well students analyze and perform in the genre of the privacy policy during their remediations. Assessing how well students perform in a genre like the privacy policy challenges them to understand the rhetorical context and inequity of digital surveillance; moreover, it helps them develop transferable knowledge they can draw on when performing in other genres, in other disciplines, and as they go out and make an impact on the world. Instructors who teach about privacy policies should highlight knowledge transfer as a learning objective because it prepares students to take the skills they develop in the writing classroom and deploy them in other classes and in their careers.

As mentioned earlier, many students have minimal experience with privacy policies because most do not read them and because hardly any have performed in the genre. Admittedly, unless students are planning careers as technical communicators, technologists, or entrepreneurs, they will probably not perform in this genre again. Even the entrepreneurs in your classes will more than likely take the approach of outsourcing the composition of their start-up’s privacy policy. Regardless of their future experiences with genre and remediation, this assignment sequence extends students’ critical thinking about data aggregation beyond their immediate classroom context and into their online and offline worlds.

Data: Beyond the Confines of the Classroom

We recommend analyzing social media privacy policies as a way to provoke meaningful interactions between students and the digital communities to which they belong. With so many documents to analyze, students should not feel restricted to the privacy policies of mainstream social media platforms like Facebook and Twitter but should also interrogate fringe platforms like Parler and emerging platforms like TikTok. We have focused on extending conversations about digital privacy, data aggregation, digital redlining, and algorithmic citizenship, but there are other concepts and issues worthy of thorough investigation. For example, some students might highlight the intersection of digital policing techniques and mass incarceration in the United States by analyzing the operational policies of police departments that implement digital technologies like body cams and the privacy policies of the companies they partner with (like the body cam company Axon). Others might focus on how data manipulation impacts democracy domestically and abroad by analyzing how social media platforms were used to plan the insurrection at the U.S. Capitol on January 6, 2021, and the meteoric rise of fringe “free speech” platforms like MeWe and Gab in the days following the insurrection.

Working through privacy policies and data concepts is tedious but necessary: we cannot let these challenging issues dissuade us from having important discussions or analyzing complex genres. Foregrounding the immediate impact a social media privacy policy has on our experiences in higher education highlights data aggregation’s larger impacts on our lives beyond the classroom. What are the real-world, rhetorical implications of abstract concepts like digital data collection and digital privacy? The answer is inevitably messy and often results in uncomfortable conversations; however, understanding how and why data collection, aggregation, and manipulation contribute to systemic oppression provides a valuable opportunity to look far beyond the classroom and to make smart, informed decisions concerning our present and future digital experiences with social media platforms.

Notes

[1] Scholars Chris Gilliard and Hugh Culik (2016) propose the concept of “digital redlining” as a social phenomenon whereby effective access to digital resources is restricted for certain populations by institutional and business policies, in a process that echoes the economic inequality enforced by mortgage banks and government authorities who denied crucial loans to Black neighborhoods throughout much of the 20th century.

[2] Stephanie Vie (2008), for instance, described over a decade ago a “digital divide 2.0,” whereby people’s lack of critical digital literacy denies them equitable access to digital technologies, particularly Web 2.0 tools and technologies, despite having physical access to the technologies and services themselves.

[3] Facebook creator Mark Zuckerberg is not lying when he says that Facebook users own their content, but he also does not clarify that what Facebook is actually interested in is your metadata.

[4] Aggregate data does not mean more accurate data, because data is never static: it is dynamically repurposed. This process can have disastrous results when haphazardly applied to contexts beyond the data’s original purpose. We must recognize and challenge the ways aggregate data can wrongly categorize the most vulnerable users, thereby imposing inequitable experiences online and offline.

[5] #gamergate was a 2014 misogynistic digital aggression campaign meant to harass women working within and researching gaming, framed by participants as a response to unethical practices in videogame journalism.

[6] Facebook launched its ad library (https://www.facebook.com/ads/library/) in 2019 in an effort to increase transparency around political advertisement on the platform.

[7] Perhaps the most recognizable example of this is the Patriot Act (passed October 26, 2001), which grants broad and asymmetrical surveillance powers to the U.S. government. Title V, for example, specifically removes obstacles to investigating terrorism, powers that extend into digital spaces.

[8] This is what Estee Beck (2015) refers to as the “invisible digital identity.”

Bibliography

Alexander, Jonathan, and Jacqueline Rhodes. 2014. On Multimodality: New Media in Composition Studies. Urbana, Illinois: Conference on College Composition and Communication/National Council of Teachers of English.

Banks, Adam Joel. 2006. Race, Rhetoric, and Technology: Searching for Higher Ground. Mahwah, New Jersey: Lawrence Erlbaum.

Beck, Estee. 2015. “The Invisible Digital Identity: Assemblages of Digital Networks.” Computers and Composition 35: 125–140.

Beck, Estee. 2016. “Who Is Tracking You? A Rhetorical Framework for Evaluating Surveillance and Privacy Practices.” In Establishing and Evaluating Digital Ethos and Online Credibility, edited by Moe Folk and Shawn Apostel, 66–84. Hershey, Pennsylvania: IGI Global.

Bridle, James. 2016. “Algorithmic Citizenship, Digital Statelessness.” GeoHumanities 2, no. 2: 377–81. https://doi.org/10.1080/2373566X.2016.1237858.

CBC/Radio-Canada. 2018. “Bad Algorithms Are Making Racist Decisions.” Accessed June 18, 2020. https://www.cbc.ca/radio/spark/412-1.4887497/bad-algorithms-are-making-racist-decisions-1.4887504.

Cheney-Lippold, John. 2017. We Are Data: Algorithms and the Making of Our Digital Selves. New York: New York University Press.

Clark, Irene L., and Andrea Hernandez. 2011. “Genre Awareness, Academic Argument, and Transferability.” The WAC Journal 22, no. 1: 65–78. https://doi.org/10.37514/WAC-J.2011.22.1.05.

Dijck, José van, and Thomas Poell. 2013. “Understanding Social Media Logic.” Media and Communication 1, no. 1: 2–14. https://doi.org/10.12924/mac2013.01010002.

Drucker, Johanna. 2014. Graphesis: Visual Forms of Knowledge Production. MetaLABprojects. Cambridge, Massachusetts: Harvard University Press.

Facebook. n.d. “Data policy.” Accessed March 28, 2021. https://www.facebook.com/about/privacy.

Gilliard, Christopher, and Hugh Culik. 2016. “Digital Redlining, Access, and Privacy.” Common Sense Education. Accessed June 16, 2020. https://www.commonsense.org/education/articles/digital-redlining-access-and-privacy.

Haas, Angela M. 2018. “Toward a Digital Cultural Rhetoric.” In The Routledge Handbook of Digital Writing and Rhetoric, edited by Jonathan Alexander and Jacqueline Rhodes, 412–22. New York, New York: Routledge.

Miller, Carolyn R. 2015. “Genre as Social Action (1984), Revisited 30 Years Later (2014).” Letras & Letras 31, no. 3: 56–72.

Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.

Obar, Jonathan A., and Anne Oeldorf-Hirsch. 2020. “The Biggest Lie on the Internet: Ignoring the Privacy Policies and Terms of Service Policies of Social Networking Services.” Information, Communication & Society 23, no. 1: 128–47. https://doi.org/10.1080/1369118X.2018.1486870.

Perrin, Andrew, and Monica Anderson. 2019. “Share of U.S. Adults Using Social Media, Including Facebook, Is Mostly Unchanged Since 2018.” Pew Research Center.

Pew Research Center. 2019. “Internet/Broadband Fact Sheet.” June 12, 2019. Accessed March 20, 2021. https://www.pewresearch.org/internet/fact-sheet/internet-broadband/.

Terranova, Tiziana. 2004. Network Culture: Politics for the Information Age. London, UK; Ann Arbor, Michigan: Pluto Press.

Vie, Stephanie. 2008. “Digital Divide 2.0: ‘Generation M’ and Online Social Networking Sites in the Composition Classroom.” Computers and Composition 25, no. 1: 9–23. https://doi.org/10.1016/j.compcom.2007.09.004.

Acknowledgments

We would like to thank our Journal of Interactive Technology and Pedagogy reviewers for their insightful feedback. We are particularly indebted to Estee Beck and Dominique Zino. This article would not have been possible without Estee’s mentorship and willingness to work with us throughout the revision process.

About the Authors

Charles Woods is a Graduate Teaching Assistant and PhD candidate in rhetoric, composition, and technical communication at Illinois State University. His research interests include digital privacy, biopolitical technologies, and digital rhetorics. His dissertation builds a case against the use by American law enforcement of direct-to-consumer genetic technologies as digital surveillance tools, and positions privacy policies as a dynamic rhetorical genre instructors can use to teach about digital privacy and writing. He has contributed to Computers & Composition, Writing Spaces, and The British Columbian Quarterly, among other venues. He hosts a podcast called The Big Rhetorical Podcast.

Noah Wilson is a Visiting Instructor of Writing and Rhetoric at Colgate University and a PhD candidate in Syracuse University’s Composition and Cultural Rhetoric program. His research interests include posthuman ethos, algorithmic rhetorics, and surveillance rhetorics. His dissertation addresses recent trends in social media content-recommendation algorithms, particularly how they have led to increased political polarization in the United States and the proliferation of radicalizing conspiracy theories such as QAnon and #Pizzagate. His research has appeared in Rhetoric Review, Rhetoric of Health & Medicine, Disclosure, and other venues.
