Soft Surveillance: Social Media Filter Bubbles as an Invitation to Critical Digital Literacies

Abstract

This webtext presents the rationale, scaffolding, and instructions for an assignment intended for First-Year Writing (FYW) students: the Filter Bubble Narrative. We pose this assignment in response to Lyon’s (2017) call to analyze “soft surveillance” situations and Gilliard’s (2019) challenge to critically analyze platform-perpetuated surveillance norms with students. We suggest that social media is a particularly productive space to focus student attention on soft surveillance given social media’s ubiquitous presence in society and in students’ lives. Moreover, through their social media use, FYW students have developed an array of digital literacies (Selfe and Hawisher 2004) as prosumers (Beck 2017) that are so engrained in their everyday existences that they haven’t held them up for critical scrutiny (Vie 2008). Through Pariser’s (2012) concept of the “filter bubble,” students engage in scaffolded activities to visualize the effects of algorithmic surveillance and to trace and reassemble the data-driven identities that social media platforms have constructed for them based on their own user data. The final deliverable is a multimodal narrative through which students critically examine and lay claim to their own data in ways that may inform their future use of social media and open opportunities to confront soft surveillance.

David Lyon (2017) argued that we live in a surveillance culture, a way of living under continual watch “that everyday citizens comply with—willingly and wittingly, or not” (825). Lyon (2006) previously stressed that such a pervasively visible cultural existence extends beyond notions of the “surveillance state” and the “panopticon” to forms of seemingly “soft and subtle” surveillance that produce “docile bodies” (4). Drawing upon the work of Gary Marx (2003; 2015), Lyon (2017) argued that such “soft surveillance” is seemingly less invasive and may involve individuals willingly surrendering data, perhaps through “public displays of vulnerability” (832) that are common online via cookies, internet service providers (ISPs), and social media sites. Contemporary surveillance culture is therefore less out there and more everywhere, less spy guys and big brother and much more participatory and data-driven.

In higher education, scholars like Hyslop-Margison and Rochester (2016) and Collier and Ross (2020) have argued that surveillance has always existed through “data collection, assessment, and evaluation, shaping the intellectual work, and tracking the bodies and activities of students and teachers” (Collier and Ross 2020, 276). However, the COVID-19 pandemic has accelerated and contributed to the ways that academic activity is surveilled via proprietary learning management systems and audio/video conferencing software that track clicks and log-ins while simultaneously hoarding student/user data (Atteneder and Collini-Nocker 2020). Responding to and potentially resisting such prevalent surveillance, no matter how soft, therefore requires “a careful, critical, and cultural analysis of surveillance situations” (Lyon 2017, 836). However, as Gilliard’s (2019) “Privacy’s not an abstraction” stressed, “precisely because ideas about privacy have been undermined by tech platforms like Facebook and Google, it is sometimes difficult to have these discussions with students” (para. 16). We will argue that social media news feeds are just the kind of surveillance situations that need critical attention, in writing classrooms, in service of students’ critical digital literacies.

Critical Digital Literacies in the Age of Algorithmic Surveillance

Along with many other scholars writing about technology and classroom practice before us (Selber 2004; Selfe 1999; Takayoshi and Huot 2003; Vie 2008), we suggest that critical is a keyword for theory as well as for application in our networked, digital age, and one that does not emerge automatically from incorporating the latest digital technologies in classrooms. In fact, by incorporating technologies into our classrooms, we are often contributing to surveillance culture, as Collier and Ross (2020) note. A critical orientation, we argue, can help.

In “Critical Digital Pedagogy: a Definition,” Jesse Stommel (2014) defined critical pedagogy “as an approach to teaching and learning predicated on fostering agency and empowering learners (implicitly and explicitly critiquing oppressive power structures)” (para. 4). Critical digital pedagogy, he argued, stems from this foundation, but localizes the impact of instructor and student attention to the “nature and effects” of digital spaces and tools (Stommel 2014, para. 14). In adapting the aims of critical pedagogy to the digital, what emerges is a clear distinction between doing the digital in instrumental fashion (e.g., to develop X skill) and doing the digital critically (e.g., to transform one’s being through X). A critical digital literacies approach to surveillance might suggest:

a willingness to speculate that some of the surveillance roles we have come to accept could be otherwise, along with an acknowledgment that we are implicated in what Lyon terms ‘surveillance culture’ (2017) in education. What can we do with that knowledge, and what culture shifts can we collectively provoke? (Collier and Ross 2020, 276)

As Selber (2004) and Noble (2018) have argued, digital technologies and platforms are made by humans who have their own biases and intentions, and those same biases and intentions may become part of the architecture of the technology itself—regardless of intentions or visibility. Other scholars, like Haas (1996) and O’Hara et al. (2002), therefore cautioned against perpetuating what is often called “The Technology Myth” by calling teacher-scholars to look critically “at the technology itself” instead of through it (Haas 1996, xi). Without a critical perspective, students and instructors may fail to question the politics, ideologies, and rhetorical effects of their digital tools, spaces, and skills, what Selber (2004) defined as critical literacy in a digital age. We argue that there may be no better space to engage students in critical digital practice than the online spaces they visit daily, often multiple times per hour: social media news feeds.

Social Media News Feeds as a Space for Critical Digital Practice

In a report for Pew Research Center titled “Social Media Outpaces Print Newspapers in the U.S. as a News Source,” Elisa Shearer (2018) revealed that 18-to-29-year-olds are four times as likely to go to social media for news compared to those aged 65 and older. Social media applications, which are frequently accessed via mobile devices, are therefore incredibly popular with college-age students (Lutkewitte 2016) and should be seen for what they are: “technology gateways”, or the primary places where users practice digital literacies (Selfe and Hawisher 2004, 84). However, as Vie (2008) argued, even frequent users may still need to further develop “critical technological literacy skills” (10) since “comfort with technology does not imply … they can understand and critique technology’s societal effects” (12). In order to open up awareness and areas of resistance, we suggest students should be introduced to, and offered opportunities to interrogate, the ways in which their self-selected, or curricularly-mandated, technologies surveil them. Here, we aim to focus their attention on the ways they are softly surveilled via algorithms operating behind the scenes of their social media platforms. Specifically, Gilliard (2019) cautioned that “the logic of digital platforms … treats people’s data as raw material to be extracted” and put to use by individuals for a variety of purposes—malicious, benign, and in-between. Moreover, Beck (2017) argued that it has become normative for social media applications, and the companies that control them, to employ algorithmic surveillance to track all user data and personalize experiences based on that data. Indeed, these seemingly invisible mechanisms further “soften” attitudes toward surveillance that may result in sharing personal details so publicly on social media (Marx 2015; Lyon 2017).

One consequence of algorithmic surveillance on social media is what Pariser (2012) coined the “filter bubble.” Filter bubbles are created through algorithmic content curation, which reverberates users’ pre-existing beliefs, tastes, and attitudes back to them on their own feeds and thereby isolates them from diverse viewpoints and content (Nguyen et al. 2014, 677). For example, YouTube recommends videos we might like, Facebook feeds us advertisements for apparel that is just our style, and Google rank-orders search results—all based on our own user data. In many ways, the ideas and information we consume are “dictated and imposed on us” by algorithms that limit our access to information and constrain our agency (Frank et al. 2019, Synopsis section). After all, as Beck (2017) argued, these filter bubbles that are curated by algorithmic surveillance constitute an “invisible digital identity” about individuals (45). And as Hayles (1999) argued, our identities are hybridized and may be seen as “an amalgam, a collection of heterogeneous components, a material-informational entity whose boundaries undergo continuous construction and reconstruction” (Hayles 1999, 3). This suggests that an individual’s online activity and interaction with other digital actors in online spaces, which results in an algorithmic curation of a unique filter bubble, is a material instantiation of their embodied identity(ies).
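
To make the curation mechanism described above concrete, here is a minimal, hypothetical sketch in Python (with invented post topics and click data, not any platform's actual algorithm): candidate posts are ranked purely by a user's prior engagement, so topics the user never clicked quietly sink out of view.

```python
# A minimal, illustrative sketch (not any platform's actual algorithm) of how
# engagement-based ranking can produce a filter bubble: items similar to what
# a user already clicked get ranked higher, so the feed narrows over time.
from collections import Counter

# Hypothetical interaction history: topics of posts this user has engaged with.
click_history = ["soccer", "soccer", "coffee", "writing", "soccer", "coffee"]

# Candidate posts the platform could show next, tagged by topic (invented).
candidates = [
    {"id": 1, "topic": "soccer"},
    {"id": 2, "topic": "city council election"},
    {"id": 3, "topic": "coffee"},
    {"id": 4, "topic": "climate policy"},
    {"id": 5, "topic": "writing"},
]

topic_weights = Counter(click_history)  # more past clicks -> higher weight

def score(post):
    # Rank purely by predicted engagement (past clicks on the same topic).
    return topic_weights.get(post["topic"], 0)

feed = sorted(candidates, key=score, reverse=True)
for post in feed:
    print(post["id"], post["topic"], "score:", score(post))
# Topics the user never clicked (the election, climate policy) sink to the
# bottom: they are "edited out" of the bubble without the user deciding.
```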

We therefore maintain that turning students’ attention to their own filter bubbles on social media, a space where they may have already developed an array of literacies, means they can attempt to reconcile the distinction between their digital literacies and critical digital literacies as part of reassembling their data with their body. Indeed, the difference between digital literacies and critical digital literacies is particularly problematic in social media spaces. After all, social media are themselves sites of converging roles and agencies, where users are both producer and consumer (Beck 2017) and, as Jenkins (2006) suggested, sites “where the power of the media producer and the power of the media consumer interact in unpredictable ways” (2). We therefore ask, as William Hart-Davidson did in his foreword to the 2017 edited collection Social Writing/Social Media: Publics, Presentations, and Pedagogies, “What if we took it [SM] seriously?” (xiii). What if instructors acted intentionally to shift students from instrumental users and information consumers to thinking critically about social media? What opportunities for agency might be revealed through concerted and critical attention to how they are algorithmically surveilled and reconstituted?

As Rheingold (2012) suggested, students who know what the tools are doing and “know what to do with the tools at hand stand a better chance of resisting enclosure” (218). For us, a critical digital pedagogy that fosters critical digital literacies is the antidote to the “enclosure” Rheingold references and a way to more holistically and critically understand agency online. Noble’s (2018) term algorithmic oppression also offers insight into the deleterious effects of unchecked algorithmic curation where, in the case of Google search, in particular, “technology ecosystems… are structuring narratives about Black women and girls” in ways that deepen inequality and reinforce harmful stereotypes (33). Jenkins (2006), too, noted that in networked systems “not all participants are created equal” (3) and that corporations have more power than individual consumers (3).

How can students therefore develop the critical literacies to resist or subvert the market-driven forces that seek to disempower them and make their algorithmic identities invisible? Beck (2017) suggested that writing classrooms are a valuable space to try to do so, as “[o]ften times writing courses provide students with the means to consider possibilities for positive change to policy, procedure, and values—all with the power to enact such change through writing” (38). In other words, working with students to trace their online footprints and the activities that contribute to the curation of their filter bubbles may offer students the opportunity to look critically at their digital practices through their own digital practices. Though our interventions will be imperfect amidst corporate-controlled algorithmic agents, Hayles (1999) and Latour (2007) nevertheless stressed that our informational lives are materially part of our identity, and that we do have opportunities for transforming our networked agency. Though “our lives, relationships, memories, fantasies, desires also flow across media channels” (Jenkins 2006, 17), creating data that gets funneled through algorithms for corporate or partisan profit, we can intervene. More importantly, perhaps, so can our students.

One place to begin is to reunite our digital fingerprints and our bodies through narrative, through storytelling. Hayles (1999) argued for “us[ing] the resources of narrative itself, particularly its resistance to various forms of abstraction and disembodiment” (22). We agree and have developed the Filter Bubble Narrative assignment sequence to put theory into practice. We use the term narrative in a capacious sense that recognizes the agency and positionality a writer has to arrange events or data, to tell a story, and the connective, reflective tissue that makes narrative a structure for meaning-making and future action. By investigating and storifying the effects of algorithmic curation and soft surveillance, we defragment our identity and construct a hybrid, a Haylesian posthuman assembled from a Latourian tracing. In short, through the Filter Bubble Narrative assignment sequence, we hope to offer students opportunities to act to create an embodied, expansive identity, one that is both designable and pre-designed as an interaction between humans and algorithms.

In order to encourage students to critically interrogate these filter bubbles and therefore how they’re algorithmically surveilled online, this webtext presents a scaffolded assignment, the Filter Bubble Narrative, as an example of how instructors and students might put soft surveillance under a microscope. However, unlike the hotly debated Kate Klonick assignment that involved gathering data from non-consenting research subjects conversing in public places (see Klonick’s New York Times Op-Ed “A ‘Creepy’ Assignment: Pay Attention to What Strangers Reveal in Public”), our assignment and its scaffolding invite students to investigate the technologies that they already use and that surveil them, “willingly and wittingly, or not” (Lyon 2017, 825). We think this practice is superior to “reproducing the conditions of privacy violations” that Hutchinson and Gilliard argue against and that are enacted in assignments that involve others, especially without their knowing consent (as cited in Gilliard 2019, para. 9). However, we recognize that some students may not use social media at all, and we do not support the mandatory creation of social media accounts for academic purposes. Therefore, alternative assignments should be made available, as needed.

The Filter Bubble Narrative Assignment Sequence

Taken together, the assignments in this sequence aim to develop students’ critical digital literacies surrounding surveillance by creating opportunities for students to pay attention to the invisible algorithms that surveil them and personalize the information and advertising they see on their social media feeds, ultimately creating filter bubbles. Students will also be encouraged to investigate opportunities for agency within their filter bubbles through narrative and technical interventions like disabling geolocation within apps, adjusting privacy settings, and seeking out divergent points of view, among other strategies.

The assignment sequence culminates in a multimodal writing assignment, the Filter Bubble Narrative (see Appendix A). The choice to call this project a filter bubble narrative is meant to create some intertextuality between existing first-year writing (FYW) courses that may ask students to write literacy narratives, a common FYW narrative genre included in many of our colleagues’ courses and textbooks. Doing so will hopefully allow instructors to find familiar ground from which to intentionally modify more traditional assignments and to intentionally develop their critical digital pedagogies as well as their students’ critical digital literacies.

Given the widespread move to online and hybrid modes of instruction in higher education due to the COVID-19 pandemic, we intentionally designed our Filter Bubble unit for online delivery via discussion boards, though this is not strictly necessary. And though we outline a multi-week sequence of low-stakes assignments as scaffolding for the Filter Bubble Narrative, we also anticipate that instructors will modify the timeline and assignments to suit local teaching and learning contexts. Finally, in addition to fostering critical digital literacies, these assignments take into consideration the WPA’s (2014) Outcomes Statement for First-Year Composition, the guidelines Scott Warnock (2009) outlines in Teaching Writing Online, and a variety of scholarly voices that recognize that opportunities for multimodal composition are essential to developing twenty-first-century literacies (Alexander and Rhodes 2014; Cope, Kalantzis and the New London Group 2000; Palmeri 2012; Yeh 2018).

Scaffolding the filter bubble narrative

During the first week of the Filter Bubble unit, students first read Genesea M. Carter and Aurora Matzke’s (2017) chapter “The More Digital Technology the Better” in the open textbook Bad Ideas About Writing and then submit a low-stakes summary/response entry in their digital writing journals. Additionally, students watch the preview episode (5:12) of Crash Course Navigating Digital Information hosted by John Green on YouTube (CrashCourse 2018). This ten-video course was created in partnership with MediaWise, The Poynter Institute, and The Stanford History Education Group. Then, students engage in an asynchronous discussion board structured by the following questions:

(Q1.) John Green from Crash Course suggests that we each experience the internet a little differently, that content is “personalized and customized” for us. What do you make of that? How is the information that you consume online personalized for you? Do you see this personalization as a form of surveillance? Or not?

(Q2.) Co-authors Genesea M. Carter and Aurora Matzke define digital literacy as “students’ ability to understand and use digital devices and information streams effectively and ethically” (321). Let’s interrogate that definition a bit, making it more particular. What constitutes “effective” and/or “ethical” understanding and use?

After answering the prescribed questions, students conclude their post with their own question about the video or chapter for their classmates to answer, as replying to two or more students is a requirement for most discussion boards.

During the second week, students watch the social media episode (16:51) of the Crash Course Navigating Digital Information series (CrashCourse 2019). After watching, students submit a low-stakes mapping activity in their digital writing journals where they map what’s in their bubble by taking screenshots of the news stories, advertisements, and top-level posts they encounter in their social media feeds. Then, students engage in an asynchronous discussion board structured by the following questions:

(Q1.) Given what you found from investigating the kinds of news stories, advertisements, and top-level posts in your social media feeds, what parts of your identity are in your filter bubble? Where do you see your interests? For example, Jessica sees a lot of ads for ethically made children’s clothing, Rothy’s sustainably made shoes, and YouTube Master Classes about writing. It seems that her filter bubble is constructed in part from her identity as an environmentalist and writing professor. Joel, on the other hand, sees ads for Star Wars merchandise and solar panel incentive programs, suggesting his filter bubble is constructed from his identity as a Star Wars fan and a homeowner who needs a new roof.

(Q2.) What parts of your identity, if any, are not represented in your filter bubble?

(Q3.) How do you feel about what’s there, what’s not, and how that personalization came to be? How is your identity represented similarly or differently across digital sites and physical places?

As mentioned previously, students conclude their post with their own question about the video or discussion board topic for their classmates to answer.

In the first half of the third week, students read the Filter Bubble Narrative assignment sheet (see Appendix A) and engage in a first thoughts discussion, a practice adapted from Ben Graydon at Daytona State College. Here, students respond to one or more of the following questions after reading the Filter Bubble Narrative assignment sheet:

(Q1.) Connect the writing task described in the project instructions with one or more of your past writing experiences. When have you written something like this in the past? How was this previous piece of writing similar or different?

(Q2.) Ask a question or questions about the project instructions. Is there anything that doesn’t make sense? That you would like your instructor and classmates to help you better understand?

(Q3.) Describe your current plans for this project. How are you going to get started (explain your ideas to a friend, make an outline, just start writing, etc.)? What previously completed class activities and content might you draw on as you compose this project? What upcoming activities might help you compose this project?

In the second half of the third week, students begin knitting together the story of their filter bubble. Additionally, they engage in an asynchronous discussion board structured by the following question:

(Q1.) What can you do to take a more active role in constructing your identity and “ethically” and “effectively” (Carter and Matzke 2017, 321) navigating your information feeds?

As mentioned previously, students conclude their post with their own question, but for this discussion board topic we offer this alternative:

(Q2.) If you’d like recommendations from your classmates about steps you can take within your apps and/or feeds and pages that might diversify or productively challenge your current information landscape, let us know. If you’d rather we not send you recommendations, that’s okay, too. Go ahead and ask any other topic-related question you’ve got.

The fourth week is spent composing a full-length draft of the Filter Bubble Narrative, which students submit to a peer review discussion board for peer feedback and to an assignment folder for instructor feedback at the beginning of the fifth week.

During the fifth week, while peer review is in progress and the instructor reviews drafts, students submit a low-stakes reflection in their digital writing journals that investigates how their ideas about digital literacy have changed (or not), especially in relation to Carter and Matzke’s (2017, 321) definition of effective and ethical use of digital technologies, as well as what they’ve learned about themselves, about surveillance, and about writing multimodally.

Limitations & risks

We acknowledge that the Filter Bubble Narrative comes with certain limitations and risks. First, while we suggest that this assignment and its scaffolding may offer potential pathways for students to develop critical digital literacies that may result in further awareness of and even resistance to forms of soft surveillance, we are also aware that those practices may be ultimately out of reach. After all, as various scholars discussed above have noted (see Beck 2017; Gilliard 2019; Noble 2018), social media platforms frequently take action to purposefully obscure their very mechanisms for surveillance, which is part of the process of softening resistance (Lyon 2006; 2017; Marx 2003; 2015). Without careful critical attention to such processes, instructors and students may be misled into seeing this assignment as a transaction of the skills necessary to resist all forms of soft surveillance. While students may become more aware of and deliberate about how they perceive or interact with their filter bubble, this does not render the surveillors and their surveillance inert.

Second, some students may be unable or unwilling to draw on their own social media use for this assignment. As we mentioned in an earlier section, not all students engage with social media, and others may have broader concerns about privacy. After all, parts of the assignment and its scaffolding, as described above, ask students to disclose information about their own social media use—information they may wish to keep private from their instructors. Students therefore should be reminded that they do not have to disclose any information they do not wish to, and they should be guided through alternative assignment designs (e.g., fictionalizing their filter bubble contents).

Conclusion

We’ve offered the Filter Bubble unit as one way to smooth the journey from an instructor’s critical digital pedagogy to students’ critical digital literacies. Instead of merely sketching this assignment for Journal of Interactive Technology and Pedagogy readers, we wanted to offer a student-directed deliverable, an assignment sheet (see Appendix A), as a way to recognize that “documents do things,” as Judith Enriquez (2020) argued in “The Documents We Teach By.” These things that documents do are many and varied. Our teaching materials are a material representation of our teaching and learning values and of our identities as critical digital pedagogues. And, perhaps most importantly, they have rhetorical effects on our students. Thus, it’s important that we offer student-centered instantiations of critical digital pedagogy along with scholarly-ish prose aimed at other teacher-scholars. Moreover, as students engage with this assignment, we hope to be able to offer information about its efficacy in regard to critical digital literacies. Further, student reflections about this assignment are needed and forthcoming, as are notes about alterations we’ll make based on student-instructor collaborations.

In closing, just as we must look at technologies instead of through them in order to perceive soft surveillance and engender critical digital literacies, we must do the same with our teaching documents (Enriquez 2020). We hope that our Filter Bubble Narrative deliverable is a teaching and learning document that instructors can critically look at in order to consider ways to work together with students to reassemble a richer and more critical understanding of online identities within our algorithmically curated social media news feeds. Beyond understanding, we also hope that teachers and students will act to mitigate soft surveillance and filter bubble effects and to become ethical agents with (and even developers of) algorithmic technologies.

References

Alexander, Jonathan, and Jacqueline Rhodes. 2014. On Multimodality: New Media in Composition Studies. Urbana: Conference on College Composition and Communication/National Council of Teachers of English.

Atteneder, Helena, and Bernhard Collini-Nocker. 2020. “Under Control: Audio/Video Conferencing Systems Feed ‘Surveillance Capitalism’ with Students’ Data.” In 2020 13th CMI Conference on Cybersecurity and Privacy (CMI) – Digital Transformation – Potentials and Challenges (51275), 1–7. https://doi.org/10.1109/CMI51275.2020.9322736.

Beck, Estee. 2017. “Sustaining Critical Literacies in the Digital Information Age: The Rhetoric of Sharing, Prosumerism, and Digital Algorithmic Surveillance.” In Social Writing/Social Media: Publics, Presentations, and Pedagogies, edited by Douglas Walls and Stephanie Vie, 37–51. Fort Collins: The WAC Clearinghouse and University Press of Colorado.

Carter, Genesea M., and Aurora Matzke. 2017. “The More Digital Technology, the Better.” In Bad Ideas About Writing, edited by Cheryl E. Ball & Drew M. Loewe, 320–324. Morgantown: West Virginia University Libraries. https://textbooks.lib.wvu.edu/badideas/badideasaboutwriting-book.pdf.

Collier, Amy, and Jen Ross. 2020. “Higher Education After Surveillance?” Postdigital Science and Education 2, no. 2: 275–79. https://doi.org/10.1007/s42438-019-00098-z.

Cope, Bill, Mary Kalantzis, and the New London Group, eds. 2000. Multiliteracies: Literacy Learning and the Design of Social Futures. New York: Routledge.

CrashCourse. 2018. “Crash Course Navigating Digital Information Preview.” YouTube, December 18, 2018. https://www.youtube.com/watch?v=L4aNmdL3Hr0&list=PL8dPuuaLjXtN07XYqqWSKpPrtNDiCHTzU&index=2.

CrashCourse. 2019. “Social Media: Crash Course Navigating Digital Information #10.” YouTube, March 12, 2019. https://www.youtube.com/watch?v=M5YKW6fhlss&list=PL8dPuuaLjXtN07XYqqWSKpPrtNDiCHTzU&index=12.

Enriquez, Judith. 2020. “The Documents We Teach By.” Hybrid Pedagogy. https://hybridpedagogy.org/the-documents-we-teach-by/.

Frank, Daniel, Firasat Jabeen, Eda Ozyesilpinar, Joshua Wood, and Nathan Riggs. 2019. “Collaboration && / || Copyright.” Kairos 24, no. 1. https://kairos.technorhetoric.net/24.1/praxis/frank-et-al/.

Gilliard, Chris. 2019. “Privacy’s Not an Abstraction.” Fast Company, March 25, 2019. https://www.fastcompany.com/90323529/privacy-is-not-an-abstraction.

Haas, Christina. 1996. Writing Technology: Studies on the Materiality of Literacy. Mahwah: L. Erlbaum Associates.

Hart-Davidson, William. 2017. “Availability Matters (and So Does This Book): A Foreword.” In Social Writing/Social Media: Publics, Presentations, and Pedagogies, edited by Douglas Walls and Stephanie Vie, ix–xiii. Fort Collins: The WAC Clearinghouse and University Press of Colorado.

Hayles, Katherine. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.

Hyslop-Margison, Emery, and Ramonia Rochester. 2016. “Assessment or Surveillance? Panopticism and Higher Education.” Philosophical Inquiry in Education 24, no. 1: 102–109.

Jenkins, Henry. 2006. Convergence Culture: Where Old and New Media Collide. New York: New York University Press.

Klonick, Kate. 2019. “A ‘Creepy’ Assignment: Pay Attention to What Strangers Reveal in Public.” New York Times, March 8, 2019. https://www.nytimes.com/2019/03/08/opinion/google-privacy.html.

Latour, Bruno. 2007. Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.

Lutkewitte, Claire. 2012. Web 2.0 Applications for Composition Classrooms. Southlake: Fountainhead Press.

Lyon, David. 2006. “The Search for Surveillance Theories.” In Theorizing Surveillance: The Panopticon and Beyond, edited by David Lyon, 3–20. London: Routledge.

Lyon, David. 2017. “Digital Citizenship and Surveillance | Surveillance Culture: Engagement, Exposure, and Ethics in Digital Modernity.” International Journal of Communication 11: 824–42.

Marx, Gary. 2003. “A Tack in the Shoe: Neutralizing and Resisting the New Surveillance.” Journal of Social Issues 59, no. 2: 369–390.

Marx, Gary. 2015. “Surveillance Studies.” In International Encyclopedia of the Social & Behavioral Sciences, 2nd ed., edited by J. D. Wright, 733–741. http://dx.doi.org/10.1016/B978-0-08-097086-8.64025-4.

Nguyen, Tien T., Pik-Mai Hui, F. Maxwell Harper, Loren Terveen, and Joseph A. Konstan. 2014. “Exploring the Filter Bubble: The Effect of Using Recommender Systems on Content Diversity.” In Proceedings of the 23rd International Conference on World Wide Web, 677–86.

Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.

O’Hara, Kenton, Alex Taylor, William Newman, and Abigail J. Sellen. 2002. “Understanding the Materiality of Writing from Multiple Sources.” International Journal of Human-Computer Studies 56, no. 3: 269–305. https://doi.org/10.1006/ijhc.2001.0525.

Palmeri, Jason. 2012. Remixing Composition: A History of Multimodal Writing Pedagogy. Carbondale: Southern Illinois University Press.

Pariser, Eli. 2012. The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. New York: Penguin Publishing Group.

Rheingold, Howard. 2012. “Participative Pedagogy for a Literacy of Literacies.” In The Participatory Cultures Handbook, edited by A. Delwiche & J. J. Henderson, 215–19. London: Routledge.

Selber, Stuart A. 2004. Multiliteracies for a Digital Age. Carbondale: Southern Illinois University Press.

Selfe, Cynthia L. 1999. “Technology and Literacy: A Story about the Perils of Not Paying Attention.” College Composition and Communication 50, no. 3: 411–36. https://doi.org/10.2307/358859.

Selfe, Cynthia L., and Gail E. Hawisher. 2004. Literate Lives in the Information Age: Narratives of Literacy from the United States. Mahwah: Lawrence Erlbaum Associates.

Shearer, Elisa. 2018. “Social Media Outpaces Print Newspapers in the U.S. as a News Source.” Pew Research Center. https://www.pewresearch.org/fact-tank/2018/12/10/social-media-outpaces-print-newspapers-in-the-u-s-as-a-news-source/.

Stommel, Jesse. 2014. “Critical Digital Pedagogy: A Definition.” Hybrid Pedagogy. https://hybridpedagogy.org/critical-digital-pedagogy-definition/.

Takayoshi, Pamela, and Brian Huot. 2003. Teaching Writing with Computers: An Introduction. Boston: Houghton Mifflin.

Vie, Stephanie. 2008. “Digital Divide 2.0: ‘Generation M’ and Online Social Networking Sites in the Composition Classroom.” Computers and Composition 25, no. 1: 9–23. https://doi.org/10.1016/j.compcom.2007.09.004.

Warnock, Scott. 2009. Teaching Writing Online: How and Why. Urbana: National Council of Teachers of English.

WPA. 2014. “WPA Outcomes Statement for First-Year Composition (V3.0).” Last modified July 17, 2014. http://wpacouncil.org/aws/CWPA/asset_manager/get_file/350909?ver=3890.

Yeh, Hui-Chin. 2018. “Exploring the Perceived Benefits of the Process of Multimodal Video Making in Developing Multiliteracies.” Language Learning & Technology 22, no. 2: 28–37.

Appendix A: Filter Bubble Narrative Assignment Sheet

Background

In “Social Media: Crash Course Navigating Digital Information,” host John Green says filter bubbles mean “we are surrounded by voices we already know and [are] unable to hear from those we don’t” (8:36). We can also think of filter bubbles as echo chambers that reverberate our existing beliefs, tastes, and attitudes.

Let’s read just a bit more about filter bubbles on Wikipedia, which is a solid site for general, introductory information about almost anything. Please skim this article now: Wikipedia on Filter bubbles.

Next, please watch the following TED talk by Eli Pariser, who invented the term “filter bubble”: Beware Online Filter Bubbles. It’s about 9 minutes long.

Whaddya think? Pariser defines the term “filter bubble” like this: “your filter bubble is your own personal, unique universe of information that you live in online. And what’s in your filter bubble depends on who you are, and it depends on what you do. But the thing is that you don’t decide what gets in. And more importantly, you don’t actually see what gets edited out” (4:06). Additionally, Pariser offers a visual depiction of filter bubbles (at 4:33). Here, the media corporations around the circle are curating, or selecting, what information you encounter on your social media feeds. You see only what’s inside as you passively scroll and click. You’re in a filter bubble. This is in contrast to all the information that you could see on the Web, as represented by the colorful circles that lie outside of the algorithms’ restrictive membrane. Since your filter bubble is unique to you, and created based on your clicking, buying, and browsing data, we might say that it represents part of who you are, part of your identity, both online and offline.

For example, when John Green illustrates his otherwise invisible filter bubble (12:15), we see a particular collection of activities, topics, beliefs, and values; we see parts of his identity (See Figure 1 below).

Image of John Green's filter bubble (John Green is the host of "Social Media: Crash Course Navigating Digital Information") that contains his image and a variety of his interests and identity markers surrounding him: soccer, pizza, Harry Potter, coffee, family, a cross, etc.
Figure 1. Illustration of John Green’s filter bubble. Source: “Social Media: Crash Course Navigating Digital Information” hosted by John Green.

The algorithms running behind Green’s social media feeds personalize his online experience so that the advertising, news stories, and shared content Green encounters hold his attention, a valuable commodity for advertisers and groups or corporations pushing particular angles. I wonder, what’s in your filter bubble? And how does what’s in there represent who you are, your identity, both online and off?

Further, what might you do, as Eli Pariser and John Green both mention in their respective videos, to affect what’s in your bubble in ways that help you move toward your best future self, the aspirational version of yourself (5:12), instead of in ways that reinforce your “more impulsive, present selves” (5:15)? The goal of this project is to investigate and tell the story of your filter bubble as a representation of your identity and to reflect (and maybe act) upon what you find.

Assignment Guidelines

Your Filter Bubble Narrative should tell the story of your filter bubble as a reflection of your identity, both online and off. In composing this story, you should

  • Describe what’s in your filter bubble and how that’s connected to your interests, values, and beliefs on and offline (or not);
  • Discuss how you feel about algorithmic personalization, in general, and your specific filter bubble as a representation of your identity;
  • Sketch out what, if anything, you might do in the future to affect what’s in your filter bubble and/or how you might “ethically” and “effectively” (Carter and Matzke 2017, 321) navigate what’s in there using the strategies Green and Pariser discuss in their videos, as well as other strategies you use or have heard about.

You’ll need to make this story multimodal, which means that in addition to alphabetic writing, you should use at least one other mode of communication. For example, you might communicate using images, video, and/or sound. You can create these texts yourself or use (and cite) items from the Web or elsewhere. Please include at least 500 words of written text and at least 3 visual or audio elements. As for the audience and genre, you have some flexibility here. You might want to write your piece for an undergraduate publication like Young Scholars in Writing or Stylus, UCF’s journal of first-year writing. Alternatively, you might write for Medium, a web-based publishing platform where your piece might be tagged #technology #digitalliteracy #self. Or maybe you’re thinking of starting your own blog and this could be your first entry. In any case, you want to consider the audience your publication site addresses (beyond your classmates and me) as you compose.

About the Authors

Jessica Kester is a Professor of English in the School of Humanities and Communication and the Quanta-Honors College at Daytona State College (DSC). She also co-founded and coordinated a Writing Across the Curriculum and Writing in the Disciplines program (WAC/WID) at DSC from 2013 until 2019. Her work has previously appeared in Across the Disciplines and Currents in Teaching and Learning.

Joel Schneier is a Lecturer and Composition Coordinator at the University of Central Florida in the Department of Writing & Rhetoric. He earned a PhD in Communication, Rhetoric, & Digital Media from North Carolina State University in 2019. His research focuses on the intersections of digital literacies, mobile communication, writing, and sociolinguistics, and he has published in Frontiers in Artificial Intelligence, New Media & Society, and Mobile Media & Communication, among others.

The Rhetorical Implications of Data Aggregation: Becoming a “Dividual” in a Data-Driven World

Abstract

Social media platforms have experienced increased scrutiny following scandals like the Facebook–Cambridge Analytica revelations. Nevertheless, these scandals have not deterred the general public from using social media, even as these events have motivated critique of the privacy policies users agree to in order to access them. In this article, we argue that approaches to teaching data and privacy in the classroom would benefit from attending to social media privacy policies and the rhetorical implications of data aggregation: not only what these policies say, but also what cultural, social, and economic impacts they have and for whom. We consider what it means for users to have “meaningful access” and offer an investigative framework for examining data aggregation through three areas of data literacy: how data is collected, how data is processed, and how data is used. We posit Cheney-Lippold’s “measurable types” as a useful theoretical tool for examining data’s complex, far-reaching impacts and offer an assignment sequence featuring rhetorical analysis and genre remediation.

Introduction: Gaining “Meaningful Access” to Privacy Policies

There is an increasing need to attend to the role social media plays in our society as more of the work of maintaining relationships moves to online platforms. While platforms like Facebook and YouTube have experienced increased public scrutiny, a 2019 Pew Research Center study found that social media usage remained relatively unchanged from 2016 to 2018, with seven out of ten adults reporting they rely on social media platforms to get information (Perrin and Anderson 2019). International data-collection scandals like Cambridge Analytica and numerous congressional hearings on Big Tech’s power in the United States have not deterred the general public from using social media. Everyday users are increasingly aware that their privacy is compromised by using social media platforms, and many agree that Silicon Valley needs more regulation (Perrin and Anderson 2019; Pew Research Center 2019). Yet, many of these same users continue to rely on social media platforms like Facebook, Twitter, and TikTok to inform themselves on important issues in our society.

Early teacher-scholars within the subfield of Computers and Writing worked within a fairly limited scope. They urged learning with and critiquing digital technologies that were more transparent because of their newness—visible technologies such as word-processing programs and computer labs. But today’s teachers and students must contend with a more ubiquitous and hidden field—the entire distributed and networked internet of personalized content based on internet surveillance strategies and data aggregation. The array of websites and apps students encounter in college includes learning management systems (Canvas, BlackBoard, Google Classroom, Moodle), cloud storage spaces (DropBox, OneDrive), project management tools (Basecamp, Trello), communication platforms (Slack, Teams), search engines (Google, Bing), professional and social branding (LinkedIn), online publishing (Medium, WordPress), social media (Facebook, Twitter, YouTube, Instagram, TikTok, Tumblr, WhatsApp, SnapChat), and all the various websites and apps students use in classrooms and in their personal lives. Each one of these websites and apps publishes a privacy policy that is accessible through small hyperlinks buried at the bottom of the page or through a summary notice of data collection in the app.

Usually long and full of legalese, privacy policies are often ignored by students (and most users) who simply click “agree” instead of reading the terms. This means users are less knowledgeable about the privacy policies they agree to in order to continue using social media platforms. As Obar and Oeldorf-Hirsch find in their study “The Biggest Lie on the Internet: Ignoring the Privacy Policies and Terms of Service Policies of Social Networking Services,” undergraduate students in the U.S. find privacy policies to be “nothing more than an unwanted impediment to the real purpose users go online—the desire to enjoy the ends of digital production” (Obar and Oeldorf-Hirsch 2020, 142). To this point, the 2019 Pew Research Center survey “Americans and Digital Knowledge” found that only 48% of Americans understood how privacy policies function as contracts between themselves and a website concerning the use of their data. Through their alluring affordances and obscure privacy policies, social media platforms hinder users’ ability to meaningfully engage with the data exploitation these platforms rely on.

Americans have long turned to policy for contending with sociocultural issues. While breaches of user privacy energize the public, the scale of social media platforms makes it difficult to fully comprehend these violations of trust; as long as social media works as we expect it to, users rarely question what social media platforms are doing behind the scenes. As mentioned earlier, privacy policies are also oftentimes long, jargon-filled, and unapproachable to the average user. How many of us can say we have read, let alone comprehended, all of the fine print of the privacy policies of the platforms we choose to engage on every day? Doing so requires what digital rhetorics scholar Adam J. Banks refers to in Race, Rhetoric, and Technology as “meaningful access,” or access to not only the technology itself but also to the knowledge, experience, and opportunities necessary to grasp its long-term impacts and the policies guiding its development and use (Banks 2006, 135). Meaningful access as a concept can work against restrictive processes such as digital redlining[1] or restricting access (thus eliminating meaningful access) from certain users based on the filtering preferences of their internet access provider. Privacy policies are obtainable, but they are not truly accessible: users may be able to obtain these documents, but they don’t have a meaningful, useful sense of them.

Teachers and students need to rhetorically engage with social media privacy policies in order to learn about data and privacy: we need to understand not only what these policies say, but also what impacts they have and for whom.[2] We also need to determine who has meaningful access and why that might be. As Angela M. Haas (2018) explains, rhetoric concerns the cultural, social, economic, and political implications of when we “negotiate” information; she specifies digital rhetoric as concerned with the “negotiation of information” when we interface with technology. Safiya Umoja Noble develops a related argument in Algorithms of Oppression: How Search Engines Reinforce Racism, suggesting that internet search engine algorithms reflect the values and biases of those who create them and that, because algorithmic processes extend into hiring practices and mortgage-lending evaluations, big-data practices reproduce pre-existing social inequities. We need to learn about data generation and its wide-reaching, real-world impact on how we connect and interact with other people to really grasp these platforms and the policies that govern them.

By learning to critically engage with the policies that shape their digital experiences, students develop an important skill set: they can identify the ways social media platform algorithms use data collected from users to direct their attention in ways that may matter more to the platforms than to the users themselves—generating clicks, repeat usage, and thus revenue from ad impressions, rather than providing the content the user actually seeks. Students might also think about the ways these privacy policies structure the information-filtering and data-collection functions on which these platforms depend, while such policies likewise fail to protect users from the potential socio-economic and racial disparities their algorithmic infrastructures re-entrench (Gilliard and Culik 2016). To this end, it can be useful to introduce concepts like data aggregation and digital redlining, which can equip users with a better understanding of how data collection works and its far-reaching rhetorical effects. In this way, it is important to understand privacy policies as a writing genre, a typified form of writing that accomplishes a desired rhetorical action (e.g., providing social media platforms with the legal framework to maximize data usage).

As writing studies scholars Irene L. Clark and Andrea Hernandez (2011) explain, “When students acquire genre awareness, they are not only learning how to write in a particular genre. They gain insight into how a genre fulfills a rhetorical purpose” (66–67). By investigating the genre of privacy policies, students gain both transferable skills and crucial data literacy that will serve them as writers, media consumers, and, more basically, as citizens. Working within this niche genre provides insights both into the rhetoric of privacy policies per se, as well as into the use of rhetoric and data aggregation for social manipulation.

One way to deepen student understanding of a genre is through remediation, or the adaptation of the content of a text into a new form for a potentially different audience (Alexander and Rhodes 2014, 60). Remediations require both a comprehension of the original text’s content and an awareness of the intended audience’s experience engaging with that text. Remediation provides students with an opportunity to put their knowledge into practice regardless of the resulting form. For example, a privacy policy could be remediated as an infographic that focuses on key ideas from the policy concerning data usage and explains them in ways a lay public with little prior knowledge could understand.

Ultimately, a multi-pronged approach is required to gain meaningful access to privacy policies. In the following section, we provide a framework with terms and questions that consider how data is collected, processed, and used. We direct attention to digital studies scholar John Cheney-Lippold’s theory of “measurable types,” the algorithmic categories created from aggregated user data, as a framework in our development of an assignment sequence that tasks students with performing two remediations—one that focuses on making information more digestible and another that centers long-term effects. The primary audience for this article is instructors who are new to digital surveillance and big-data concepts and are looking to orient themselves with theory as they create assignments about this emerging issue for their classroom.

How Is Data Collected, Processed, and Used?

Data is the fuel that keeps our social media platforms running. Fortunately for companies like Facebook, Twitter, and TikTok, data is generated and captured constantly on the internet. Every website we visit, every story we share, every comment we post generates data. Some of this information comes in the form of cookies, or small files installed on your computer to keep track of the pages you view and what you click on while visiting them. Capturing user behavior across the internet is accomplished largely through third-party “tracking cookies,” which are different from the “session cookies” used primarily to keep track of a single visit (for example, keeping you logged in as you move between pages). Session cookies typically expire when you close your browser and are not used to profile you across sites. Tracking cookies, on the other hand, are so important to a platform like Facebook’s business model that they have a whole separate policy for them: “We use cookies to help us show ads and to make recommendations for businesses and other organizations to people who may be interested in the products, services or causes they promote” (Facebook n.d.). Big Tech companies and their advertising partners use this information to infer what users’ interests might be based on their online behaviors.
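
As a rough illustration of the distinction above, the following Python sketch parses two hypothetical Set-Cookie headers (the names, values, and domains are invented): a first-party session cookie with no expiration date, and a third-party tracking cookie scoped to an advertising domain with a far-future expiration so the same identifier can follow a user across sites.

```python
# Illustrative sketch only: two hypothetical Set-Cookie headers (invented names,
# values, and domains) showing how a first-party session cookie differs from a
# long-lived third-party tracking cookie.
from http.cookies import SimpleCookie

# A session cookie: scoped to the site you are visiting, with no Expires
# attribute, so it is discarded when the browser session ends.
session_header = "sessionid=abc123; Path=/; Secure; HttpOnly"

# A tracking cookie: set by a third-party advertising domain embedded in the
# page, with a far-future expiration so the same identifier persists across
# every site that embeds that third party.
tracking_header = (
    "visitor_id=9f8e7d6c; Domain=.ads.example-tracker.com; Path=/; "
    "Expires=Fri, 01 Jan 2027 00:00:00 GMT; Secure"
)

for label, header in [("session", session_header), ("tracking", tracking_header)]:
    cookie = SimpleCookie()
    cookie.load(header)
    for name, morsel in cookie.items():
        print(label, name,
              "| domain:", morsel["domain"] or "(site you visited)",
              "| expires:", morsel["expires"] or "(end of browser session)")
```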

Our internet activity on social media platforms creates metadata, which is another form of data web companies collect and use to track our online activity.[3] Metadata is not the content of our posts and messages, but the information about who and/or what we interact with and how often those interactions occur. While quantitative forms of information may appear more trustworthy and objective, in actuality this seemingly neutral data has been stripped of important rhetorical context. Digital humanities scholar Johanna Drucker suggests that we refer to data as “capta,” since data is not information that perfectly represents whatever was observed as much as it is information that is “captured” with specific purposes in mind. Capta cannot fully stand in for us, but it can be used to compare us to other users who “like” and “share” similar things. The collection of metadata is therefore valuable because it reveals what we do online more efficiently than the meaning of our content alone. Rather than try to understand what we are communicating, computers instead process this quantified information and use it to calculate the probability that we will engage with certain media and buy certain products (van Dijck and Poell 2013, 10). So, even though data collection requires us to give up our privacy, the stakes may seem relatively low considering that we are presumably getting “free” access to the platform in exchange. Coming to terms with how data impacts our society requires understanding the ostensibly predictive capacities of data aggregation, because the data we consciously share is never separate from other data, including data from other users and the data we don’t realize we are sharing (e.g., location, time, etc.).
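
The following sketch, with invented values, illustrates the content/metadata distinction described above: a single sentence of post content sits alongside a much richer set of machine-readable metadata fields that can be aggregated and compared across users.

```python
# A sketch of the distinction drawn above, with invented values: the *content*
# of a post versus the *metadata* generated around it. Even if the content were
# ignored, the metadata alone supports aggregation and comparison across users.
post_content = "Had a great time at the rally downtown today!"

post_metadata = {
    "author_id": "u_48291",          # who posted
    "timestamp": "2021-03-14T17:42:09Z",
    "geotag": (28.5384, -81.3789),   # where it was posted from
    "device": "iPhone; iOS 14.4",
    "mentions": ["u_10077"],         # who/what the post interacts with
    "hashtags": ["#rally"],
    "likes": 42,                     # how often those interactions occur
    "shares": 7,
}

# Platforms can compare this metadata to that of millions of other posts to
# infer interests, location patterns, and social ties: "capta" collected with
# specific purposes in mind, stripped of the post's rhetorical context.
print(len(post_metadata), "metadata fields captured alongside one sentence of content")
```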

Data is what powers social media platforms, but their rhetorical power comes from how data is processed into predictions about our behavior online. Our individual data alone cannot produce accurate recommendations, so data aggregation makes recommendations possible by establishing patterns “made from a population, not one person” (Cheney-Lippold 2017, 116).[4] These “dividual” identities, as digital studies scholar Cheney-Lippold explains via digital theorist Tiziana Terranova (2004), are the algorithmic classifications of individual users based on the data generated and processed about them. Indeed, we each have our own personal preferences, but we are also interested in what captures the attention of the larger public: we care about the most recent YouTube sensation or the latest viral video. When platforms like YouTube make video recommendations, they are comparing data collected from your viewing behavior to a massive cache of data aggregated from the viewing behavior of many other users.
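
A minimal sketch of aggregation-driven recommendation, assuming a toy dataset of invented users and “liked” videos rather than any platform's real system: the items suggested to “you” come from the overlapping behavior of similar users in the population, not from your data alone.

```python
# A minimal collaborative-filtering sketch (invented data, not any platform's
# code): recommendations come from patterns in a *population* of users, not
# from one person's data alone.
likes = {
    "you":    {"video_a", "video_b", "video_c"},
    "user_2": {"video_a", "video_b", "video_d"},
    "user_3": {"video_b", "video_c", "video_d", "video_e"},
    "user_4": {"video_f", "video_g"},
}

def jaccard(a, b):
    # Overlap between two users' like-sets (0 = nothing shared, 1 = identical).
    return len(a & b) / len(a | b)

you = likes["you"]
scores = {}
for other, their_likes in likes.items():
    if other == "you":
        continue
    sim = jaccard(you, their_likes)
    for item in their_likes - you:          # items you haven't engaged with yet
        scores[item] = scores.get(item, 0) + sim

# The "dividual" at work: you are recommended what users who resemble your
# data trail have already engaged with.
for item, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(item, round(score, 2))
```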

A primary use of data is in the personalization of online experiences. Social media platforms function under the assumption that we want our online experience to be customized and that we are willing to give up our data to make that happen. Personalization may appear to increase our access to information because it helps us filter through the infinite content available to us, but in actuality it has to restrict what we pay attention to in order to work. This filtering can result in digital redlining, which limits the information users have access to based on the filtering preferences of internet access providers (Gilliard and Culik 2016). Internet service providers shape users’ online experiences through both privacy policies and acceptable use policies. Not unlike how banks used racist strategies to limit minority access to physical spaces, internet service providers (including universities) employ “acceptable use policies” to limit engagement with information pre-categorized as “inappropriate,” which helps explain why various users might have very different perceptions of the same event. Practices like digital redlining reveal how personalization, albeit potentially desirable, comes at the cost of weakening the consistent, shared information we rely on to reach consensus with other people. Ultimately, we embrace data aggregation and content personalization without considering their full implications for how we connect and communicate with one another and how businesses and governments see and treat us.
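
The sketch below, using invented article categories and provider policies, illustrates the filtering dynamic described above: the same pool of content passed through different providers' “acceptable use” blocklists yields different information landscapes for different users.

```python
# An illustrative sketch (invented categories and policies) of the filtering
# dynamic described above: the same pool of content, passed through different
# providers' "acceptable use" filters, yields different information landscapes.
articles = [
    {"title": "Local election guide",       "category": "news"},
    {"title": "Harm-reduction health info", "category": "health"},
    {"title": "LGBTQ student resources",    "category": "lgbtq"},
    {"title": "Campus sports recap",        "category": "sports"},
]

# Hypothetical provider policies: categories pre-labeled "inappropriate".
provider_blocklists = {
    "home_isp":       set(),                # filters nothing
    "campus_network": {"health"},           # blocks harm-reduction content
    "filtered_wifi":  {"health", "lgbtq"},  # blocks even more
}

for provider, blocked in provider_blocklists.items():
    visible = [a["title"] for a in articles if a["category"] not in blocked]
    print(provider, "->", visible)
# Two users researching the same topic on different networks end up with
# different "shared facts": one mechanism behind digital redlining.
```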

Using Measurable Types to Investigate Privacy Policies

One helpful tool for analyzing how algorithms construct online experiences for different users is Cheney-Lippold’s concept of “measurable types.” Measurable types are algorithmically generated norms, or “interpretations of data that stand in as digital containers of categorical meaning” (Cheney-Lippold 2017, 19). Like dividual identities, measurable types are ever-changing categories created from aggregate user data without any actual input from the user. Essentially, measurable types assign users to categories that have very real impacts on them, yet those categories are built from data collected according to definitions users know nothing about. The insidiousness of measurable types is how they automatically draw associations from user behaviors without providing any opportunity for users to critique or correct the “truths” scraped from their dividual data. For instance, most users might not see any adverse effects of being labeled a “gamer”; however, being classified as a “gamer” measurable type could also algorithmically align users with members of the #gamergate movement,[5] resulting in misogynist content spilling into their digital experiences. In this way, measurable types remove humans from the processes that operationalize their data into consequential algorithmic decisions made on their behalf.
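A deliberately reductive sketch can help students see what it means for a category to be defined without user input. Everything below, including the signal names, thresholds, and labels, is hypothetical; actual measurable types are produced by far more complex and continually shifting models, but the basic asymmetry is the same: the definitions belong to the platform, not the user.

```python
# A simplified, hypothetical sketch of a "measurable type": a category assigned
# from behavioral data using definitions the user never sees and cannot correct.

def assign_measurable_types(signals):
    """Map raw behavioral signals to opaque category labels."""
    types = set()
    if signals.get("hours_streaming_games_per_week", 0) > 4:
        types.add("gamer")                      # the platform's definition, not the user's
    if signals.get("political_articles_clicked", 0) > 10:
        types.add("politically_engaged")
    if "gamer" in types and signals.get("follows_gaming_forums", False):
        # a downstream association the user never agreed to (cf. the #gamergate example)
        types.add("gaming_subculture_audience")
    return types

user_signals = {"hours_streaming_games_per_week": 6, "follows_gaming_forums": True}
print(assign_measurable_types(user_signals))
# {'gamer', 'gaming_subculture_audience'} -- categories with real consequences,
# produced without the user's input or knowledge.
```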

Every social media platform has its own privacy policy, “written for the express purpose of protecting a company or website operator from legal damages,” which outlines the data-collection practices permissible on the site and governs its use (Beck 2016, 70). The measurable types framework guides analysis of these policies with specific attention to the implications of how data is collected, processed, and used. Students in first-year composition and technical communication courses, along with those studying communications, information technology, computer science, and education, are well suited to investigate these digital policy documents because many of them are already social media users. Analyzing social media privacy policies through the measurable types framework reveals to students that these policies are about more than their experience on the platform. In addition to prescribing user actions on these sites, the policies directly shape students’ online experiences because they govern how data from platform activity is generated, aggregated, and then repurposed into measurable types. Privacy policies also exist among a constellation of Terms of Service (ToS) documents, which offer robust opportunities to examine the impact data aggregation has for different entities and users. In other words, to really grapple with how a privacy policy works, it is helpful to examine a wide array of ToS documents and become familiar with these genres of digital policy.

The assignment sequence we offer for working with measurable types and social media privacy policies in the writing classroom includes an initial rhetorical analysis followed by two remediations. The rhetorical analysis assignment tasks students with examining choices within the privacy policy (e.g., temporality, transparency, and language) to demonstrate how critical information is relayed and to offer suggestions for making the policy more accessible for various audiences. While the goal of the two remediations together is “meaningful access” (not just understanding the policy itself but also its long-reaching impacts), the first remediation focuses primarily on making the policy more comprehensible. Through a series of in-class activities, students learn about data aggregation, digital redlining, and measurable types before moving into a second, more intensive remediation in which they investigate the consequences of big data and their social media usage. Ultimately, using measurable types as a framework throughout the assignment sequence presents students with a path to learn how their actions online shape not only their future experiences on the internet but also the constellation of user experiences in their local communities and around the world.

Privacy policy rhetorical analysis and initial remediation

When performing a rhetorical analysis of a social media privacy policy, begin with heuristics that work through genre conventions: how audience, exigence, structure, form, and intention shape a genre and the social actions it encapsulates (Miller 2015, 69). Which users and non-users does this document potentially impact? How do specific rhetorical choices affect how critical information is taken up? What is the intent of the people who write and design these documents, and of the companies that publish them? Examining and discussing rhetorical choices within the privacy policy reveals how it addresses complex concepts such as data collection and aggregation, issues that are critically important for students to engage throughout the assignment sequence. The goal is to begin working through the aforementioned terminology to inform remediations that emphasize the rhetorical changes students would implement to make the policy more accessible for various audiences.

When approaching the genre for remediation, students should first identify the changes they will implement to make the social media privacy policy more transparent and readable; they can then choose the genre of the remediation. We imagine students might produce infographics, flyers, zines, podcasts, videos, and other genres during this part of the assignment sequence. Since social media privacy policies impact many students directly, ask them to consider what they would do to make the document’s information more accessible and digestible for users like themselves. Students could perform usability tests, hold focus groups, and ask peers (in class and in other classes) for feedback. Students should also consider the temporality, transparency, and language of the document. When was the policy last updated? What methods of data collection might be opaque or otherwise inaccessible to users? What rhetorical arguments does the policy make? Answering these questions helps students develop a sense of what it means to be an engaged digital citizen: the more comfortable they are analyzing the dynamics of these policies, the more likely they are to see themselves as digital citizens navigating the complexities of a data-driven digital society. In the second remediation, which considers the social, political, and economic implications of digital privacy and data aggregation, students will focus more on how this data is used and to what ends.

Expanding the scope to amplify measurable types

The exchange of our personal information for access to online services is among the most complex issues we must address when considering how data use is outlined in social media privacy policies. Therefore, students should build upon their initial remediation by attending to the far-reaching implications of practices like data aggregation, which lead to data commodification. Cheney-Lippold’s measurable types help us understand how our online experiences are cultivated by the processes of big data: the information you have access to, the content you are recommended, the advertisements you are shown, and the classification of your digital footprint (Beck 2016, 70). The following classroom activities expand the scope of this work beyond social media privacy policies to larger conversations about big data by making measurable types visible.

According to the Pew Research Center (2019), 90% of adults in the United States have access to the internet; however, this does not mean that all users get the same information. What we access online is curated by algorithmic processes, creating variable and often inequitable experiences. Digital redlining concerns the information you have access to online. Like the personalization discussed earlier, digital redlining is “not only about who has access but also about what kind of access they have, how it’s regulated, and how good it is” (Gilliard and Culik 2016). Analysis should therefore center on the access issues that privacy policies could address, helping users better understand the myriad ways social media platforms limit access just as much as they distribute it. Since digital redlining creates different, inequitable experiences arranged according to measurable types, it is easy to observe, as Gilliard and Culik do, how this frequent practice extends beyond social media privacy policies and into our everyday lives. Even simple, familiar online actions such as using a mainstream search engine like Google can demonstrate how different measurable types yield different results.

The techniques used to investigate social media privacy policies are transferable to any policy about data collection. For example, Google is often criticized for mismanaging user privacy, just as social media platforms like Facebook face scrutiny for not protecting users’ information. To examine the cultural, economic, social, and political impacts of user privacy on Google, students can perform some basic searches while logged out of Google services and note the results that appear on the first few pages. Then, students can log into their Google accounts and compare how personalized results differ not only from their previous search results but also from the results provided to friends, family, and peers. What information is more widely shared? What information feels more restricted and personalized? These questions help us process how measurable types contribute to differences in search results even among those in our own communities.
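For instructors who want a lightweight way to structure this comparison, the sketch below assumes students have copied first-page result URLs by hand into two lists (no scraping or API access is assumed) and simply reports what the two searches share and where they diverge. The `compare_results` function and the example URLs are placeholders.

```python
# A small helper sketch for the classroom exercise above: students paste in the
# result URLs they recorded from a logged-out search and a logged-in search,
# and the function reports overlap and differences between the two.

def compare_results(logged_out, logged_in):
    out, inn = set(logged_out), set(logged_in)
    return {
        "shared": sorted(out & inn),              # widely distributed results
        "only_when_logged_out": sorted(out - inn),
        "only_when_logged_in": sorted(inn - out), # likely personalized results
        "overlap_ratio": len(out & inn) / max(len(out | inn), 1),
    }

logged_out = ["example.com/a", "example.com/b", "news.example/c"]
logged_in = ["example.com/a", "shop.example/d", "news.example/c"]
print(compare_results(logged_out, logged_in))
```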

Internet advertisements are another way to see measurable types at work online. As with the Google searches above, we can easily observe differences in the advertisements shown to one user compared to others, especially since search engine results have a considerable amount of bias built into them (Noble 2018). Moreover, visiting websites from different interest groups across the internet allows you to see how the advertisements shown on those pages are derived from the measurable types you belong to and from how you (knowingly or unknowingly) interact with the various plugins and trackers active on the sites you visit. Comparing how the advertisements on the same webpage differ among students develops an awareness of how algorithmic identities vary among users and what these advertisements suggest about each of them as a person or consumer: the composite of their measurable types. Facebook also has a publicly accessible ad database that allows anyone to view advertisements circulating on the platform along with information about their cost, potential reach, and the basic demographics of users who actually viewed them.[6] Advertisements present various sites for analysis and are a useful place to start when determining what data must have been collected about us because they provide a window into the measurable types we are assigned.

Internet advertisers are not the only stakeholders interested in data related to our measurable types. Governments are as well, invested as they are in assessing and managing risks to national security as they define it.[7] For instance, certain search engine queries and otherwise mundane internet activity (keyword searches, sharing content, etc.) could be a factor in a user being placed on a no-fly list. Artist and technologist James Bridle refers to these assigned algorithmic identities as “algorithmic citizenship,” a new form of citizenship in which your allegiance and your rights are continuously “questioned, calculated, and rewritten” by algorithmic processes using the data they capture from your internet activity writ large (Bridle 2016).[8] Algorithmic citizenship relies on users’ actions across the internet, whereas most users might reasonably assume that data collected on a social media platform would remain contained and used only for that platform. However, algorithmic citizenship, like citizenship in any country, comes with its own set of consequences when a citizen deviates from an established norm. Not unlike the social ostracism a civilian faces from their community when they break, or appear to break, the law, a user’s privacy and access are scrutinized when they don’t conform to the behavioral expectations overseen by government surveillance agencies like the National Security Agency (NSA).

Performing advanced remediations to account for algorithm-driven processes

Thinking through concepts like algorithmic citizenship and digital redlining helps us acknowledge the disproportionate impacts of algorithm-driven processes on users beyond the white, often heteronormative people for whom the technology was designed. Addressing algorithmic oppression on a theoretical level avoids settling for short-sighted, strictly technological solutions to problems that are inherently social and cultural, a valuable perspective for the second remediation. In developing a second privacy policy remediation, therefore, students should consider not only their own experiences but also the experiences of others, in ways that mimic the aforementioned expansion from the individual to the dividual. This part of the assignment sequence promotes thinking about how online experiences are not equitable for all users by prompting students to investigate their measurable types and to offer remediations that account for digital access issues like digital redlining and algorithmic citizenship. Some investigations into these digital modes of oppression will operate at the local, community level, while others will operate at a much larger, societal level. Students might consider how their online shopping habits could influence where a new bus line is implemented in a future “smart city,” or how their internet browsing could influence which measurable types get flagged automatically for an invasive search by the TSA on their next flight overseas.

Students may choose to remediate the privacy policy into genres similar to those of the initial remediation assignment (e.g., infographics, videos). However, immersion in these policies for an extended time, over multiple and increasingly intensive inquiries, clarifies how social media privacy policies extend the digital divide perpetuated by inequitable access to technology and to critical digital literacies. Concepts and questions to consider for this remediation include meaningful access, data aggregation, and digital tracking and surveillance techniques. Who has access to certain information and who does not? What user data is shared with different stakeholders, and why? What data are collected and stored? What norms are perpetuated in the development of technology and technological systems? This final assignment in the sequence provides a means to examine the material consequences of big-data technologies: the critical role measurable types play and the algorithmic processes that make them possible. In performing this work, we can better comprehend how data collection and aggregation enable systematic marginalization in our social, political, and economic infrastructures.

Discussion and Further Implications

Learning outcomes vary across classrooms, programs, and institutions, but instructors who choose to teach about data aggregation and social media privacy policies should focus on critical objectives related to genre analysis and performance, cultural and ethical (rhetorical) context, and demonstrating transferable knowledge. Focusing on each of these objectives when assessing remediations of privacy policies in the writing classroom helps students learn and master these concepts. Importantly, the weight of the grade matters: genre remediations of privacy policies should be among the highest-weighted assignments in a writing course, if not the highest, because of the conceptual knowledge and rigor of writing required to perform the work. Instructors should create and scaffold lower-stakes assignments and activities throughout a sequence, unit, or course that support the aforementioned learning outcomes.

While scholars in rhetoric and composition have long theorized the nature of genre, instructors should emphasize that privacy policies, as a genre, are a form of social action (Miller 2015). Assessment should focus on how well students analyze and perform in the genre of the privacy policy during their remediations. Assessing how well students perform in a genre like the privacy policy challenges them to understand the rhetorical context and inequity of digital surveillance; moreover, it helps them develop transferable knowledge they can draw on when performing in other genres and disciplines and as they go out to make an impact on the world. Instructors who teach about privacy policies should highlight knowledge transfer as a learning objective because it prepares students to take up the skills they develop in the writing classroom and deploy them in other classes and in their careers.

As mentioned earlier, many students have minimal experience with privacy policies because most do not read them and hardly any have performed in the genre. Admittedly, unless students are planning careers as technical communicators, technologists, or entrepreneurs, they will probably not perform in this genre again; even the entrepreneurs in your classes will more than likely outsource the composition of their start-up’s privacy policy. Regardless of their future experiences with genre and remediation, this assignment sequence extends students’ critical thinking about data aggregation beyond their immediate classroom context and into their online and offline worlds.

Data: Beyond the Confines of the Classroom

We recommend analyzing social media privacy policies as a way to provoke meaningful interactions between students and the digital communities to which they belong. With so many documents to analyze, students should not feel restricted to the privacy policies of mainstream social media platforms like Facebook and Twitter; they might also interrogate fringe platforms like Parler and emerging platforms like TikTok. We have focused on extending conversations about digital privacy, data aggregation, digital redlining, and algorithmic citizenship, but other concepts and issues are worthy of thorough investigation. For example, some students might highlight the intersection of digital policing techniques and mass incarceration in the United States by analyzing the operational policies of police departments that implement digital technologies like body cams and the privacy policies of the companies they partner with (such as the body cam company Axon). Others might focus on how data manipulation impacts democracy domestically and abroad by analyzing how social media platforms were used to plan the insurrection at the U.S. Capitol on January 6, 2021, and the meteoric rise of fringe “free speech” platforms like MeWe and Gab in the days following the insurrection.

Working through privacy policies and data concepts is tedious but necessary: we cannot let these challenging issues dissuade us from having important discussions or analyzing complex genres. Foregrounding the immediate impact a social media privacy policy has on our experiences in higher education highlights data aggregation’s larger impacts on our lives beyond the classroom. What are the real-world, rhetorical implications of abstract concepts like digital data collection and digital privacy? The answer is inevitably messy and often results in uncomfortable conversations; however, understanding how and why data collection, aggregation, and manipulation contribute to systemic oppression provides a valuable opportunity to look far beyond the classroom and to make smart, informed decisions about our present and future digital experiences with social media platforms.

Notes

[1] Scholars Chris Gilliard and Hugh Culik (2016) propose the concept of “digital redlining” as a social phenomenon whereby effective access to digital resources is restricted for certain populations by institutional and business policies, in a process that echoes the economic inequality enforced by mortgage banks and government authorities who denied crucial loans to Black neighborhoods throughout much of the 20th century.

[2] Stephanie Vie (2008), for instance, described over a decade ago a “digital divide 2.0,” whereby people’s lack of critical digital literacy denies them equitable access to digital technologies, particularly Web 2.0 tools and technologies, despite having physical access to the technologies and services themselves.

[3] Facebook creator Mark Zuckerberg is not lying when he says that Facebook users own their content, but he also does not clarify that what Facebook is actually interested in is your metadata.

[4] Aggregate data does not mean more accurate data, because data is never static: it is dynamically repurposed. This process can have disastrous results when haphazardly applied to contexts beyond the data’s original purpose. We must recognize and challenge the ways aggregate data can wrongly categorize the most vulnerable users, thereby imposing inequitable experiences online and offline.

[5] #gamergate was a 2014 misogynistic digital aggression campaign meant to harass women working within and researching gaming, framed by participants as a response to unethical practices in videogame journalism.

[6] Facebook launched its ad library (https://www.facebook.com/ads/library/) in 2019 in an effort to increase transparency around political advertisement on the platform.

[7] Perhaps the most recognizable example of this is the Patriot Act (passed October 26, 2001), which grants broad and asymmetrical surveillance power to the U.S. government. For example, Title V specifically removes obstacles to investigating terrorism, powers that extend to digital spaces.

[8] This is what Estee Beck (2015) refers to as the “invisible digital identity.”

Bibliography

Alexander, Jonathan, and Jacqueline Rhodes. 2014. On Multimodality: New Media in Composition Studies. Urbana: Conference on College Composition and Communication/National Council of Teachers of English.

Banks, Adam Joel. 2006. Race, Rhetoric, and Technology: Searching for Higher Ground. Mahwah, New Jersey: Lawrence Erlbaum.

Beck, Estee. 2015. “The Invisible Digital Identity: Assemblages of Digital Networks.” Computers and Composition 35: 125–140.

Beck, Estee. 2016. “Who is Tracking You? A Rhetorical Framework for Evaluating Surveillance and Privacy Practices.” In Establishing and Evaluating Digital Ethos and Online Credibility, edited by Moe Folk and Shawn Apostel, 66–84. Hershey, Pennsylvania: IGI Global.

Bridle, James. 2016. “Algorithmic Citizenship, Digital Statelessness.” GeoHumanities 2, no. 2: 377–81. https://doi.org/10.1080/2373566X.2016.1237858.

CBC/Radio-Canada. 2018. “Bad Algorithms Are Making Racist Decisions.” Accessed June 18, 2020. https://www.cbc.ca/radio/spark/412-1.4887497/bad-algorithms-are-making-racist-decisions-1.4887504.

Cheney-Lippold, John. 2017. We Are Data: Algorithms and the Making of Our Digital Selves. New York: New York University Press.

Clark, Irene L., and Andrea Hernandez. 2011. “Genre Awareness, Academic Argument, and Transferability.” The WAC Journal 22, no. 1: 65–78. https://doi.org/10.37514/WAC-J.2011.22.1.05.

Dijck, José van, and Thomas Poell. 2013. “Understanding Social Media Logic.” Media and Communication 1, no. 1: 2–14. https://doi.org/10.12924/mac2013.01010002.

Drucker, Johanna. 2014. Graphesis: Visual Forms of Knowledge Production. MetaLABprojects. Cambridge, Massachusetts: Harvard University Press.

Facebook. n.d. “Data policy.” Accessed March 28, 2021. https://www.facebook.com/about/privacy.

Gilliard, Christopher, and Hugh Culik. 2016. “Digital Redlining, Access, and Privacy.” Common Sense Education. Accessed June 16, 2020. https://www.commonsense.org/education/articles/digital-redlining-access-and-privacy.

Haas, Angela M. 2018. “Toward a Digital Cultural Rhetoric.” In The Routledge Handbook of Digital Writing and Rhetoric, edited by Jonathan Alexander and Jacqueline Rhodes, 412–22. New York, New York: Routledge.

Miller, Carolyn R. 2015. “Genre as Social Action (1984), Revisited 30 Years Later (2014).” Letras & Letras 31, no. 3: 56–72.

Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.

Obar, Jonathan A., and Anne Oeldorf-Hirsch. 2020. “The Biggest Lie on the Internet: Ignoring the Privacy Policies and Terms of Service Policies of Social Networking Services.” Information, Communication & Society 23, no. 1: 128–47. https://doi.org/10.1080/1369118X.2018.1486870.

Perrin, Andrew, and Monica Anderson. 2019. “Share of US adults using social media, including Facebook, is mostly unchanged since 2018.” Pew Research Center.

Pew Research Center. 2019. “Internet/Broadband Fact Sheet.” June 12, 2019. Accessed March 20, 2021. https://www.pewresearch.org/internet/fact-sheet/internet-broadband/.

Terranova, Tiziana. 2004. Network Culture: Politics for the Information Age. London, UK; Ann Arbor, Michigan: Pluto Press.

Vie, Stephanie. 2008. “Digital Divide 2.0: ‘Generation M’ and Online Social Networking Sites in the Composition Classroom.” Computers and Composition 25, no. 1: 9–23. https://doi.org/10.1016/j.compcom.2007.09.004.

Acknowledgments

We would like to thank our Journal of Interactive Technology and Pedagogy reviewers for their insightful feedback. We are particularly indebted to Estee Beck and Dominique Zino. This article would not have been possible without Estee’s mentorship and willingness to work with us throughout the revision process.

About the Authors

Charles Woods is a Graduate Teaching Assistant and PhD candidate in rhetoric, composition, and technical communication at Illinois State University. His research interests include digital privacy, biopolitical technologies, and digital rhetorics. His dissertation builds a case against the use by American law enforcement of direct-to-consumer genetic technologies as digital surveillance tools, and positions privacy policies as a dynamic rhetorical genre instructors can use to teach about digital privacy and writing. He has contributed to Computers & Composition, Writing Spaces, and The British Columbian Quarterly, among other venues. He hosts a podcast called The Big Rhetorical Podcast.

Noah Wilson is a Visiting Instructor of Writing and Rhetoric at Colgate University and a PhD candidate in Syracuse University’s Composition and Cultural Rhetoric program. His research interests include posthuman ethos, algorithmic rhetorics, and surveillance rhetorics. His dissertation addresses recent trends in social media content-recommendation algorithms, particularly how they have led to increased political polarization in the United States and the proliferation of radicalizing conspiracy theories such as QAnon and #Pizzagate. His research has appeared in Rhetoric Review, Rhetoric of Health & Medicine, Disclosure, and other venues.

Composite profile for 'Ima Student.' The profile starts with an introduction and an image of the Colombian flag in the top third of the profile. In the bottom section, the profile has numbered three literacies with three headings. The first heading is Colombian Spanish and, it includes a link to a YouTube video demonstrating the literacy. The profile also includes descriptions about Ima Student’s social media literacy and their music literacy.
0

Diluting the Dominance of SAE: A Multiliteracies Profile Sequence and Assignment

L. Corinne Jones

This Multimodal Profile, supplemental paper, and assignment sequence was designed to help students build a bridge between their social media compositions and their academic compositions to promote high-road transfer and value students’ multiliteracies.

Read more… Diluting the Dominance of SAE: A Multiliteracies Profile Sequence and Assignment
