What agency does a dummy have? The ventriloquist has all the control, making the dummy's, uhh, life(?) simpler.

In conversations about online privacy, we often dance around, but fail to address, the implications for human agency. We note how the platforms we use filter what we see. But we stop short of highlighting the effect that filtering has on our awareness of situations warranting response. We talk about personalized ads on websites. But we stop short of addressing the ability of those ads to short-circuit our usual defenses against compulsive consumption. We assert that phones shouldn’t be used before bed. But we stop short of calling our phones or our social platforms addictions, every bit as in need of intervention as a gambling habit. We proclaim the hazards of texting while driving. But then our phone manufacturers develop car interfaces that simplify and foreground messaging applications, making it even easier for drivers to distract themselves.

Our push for personalized technology asks us to relinquish more and more of our ever-depleting cache of personal privacy. Our push for privacy, then, must first confront a challenge of our own making: We choose—whether willingly or unwittingly—to hand agency to other entities in exchange for perceived simplification. But with simplicity often comes limitation.

Balancing Simplicity and Ability

Ask any Android user their thoughts on Apple devices, and you’ll get an earful about limitations, oversimplification, and restrictions. Android users appreciate their ability to customize, configure, and change things to their hearts’ content. From their perspective, ability outweighs simplicity. They see the complexity and non-uniformity of Android systems as a benefit because it lets them do what they want, the way they want, when they want.

On the other hand, ask an Apple user their thoughts on Android devices, and you’ll get a similar earful, this time about complexity, difficulty, and risk. Apple users appreciate being able to pay little attention to configuration while expecting things to work simply and correctly with little effort. “It just works” has long been a catchphrase of the ecosystem, going back at least to the introduction of USB and the promise of “plug and play,” which never really worked on Windows devices. Apple users sacrifice the ability to customize every aspect of their devices in exchange for trusting that things will work with few surprises.

Notifications and Mental Agency

Speaking of surprises, the notification system on our phones also demonstrates both the simplicity/ability tension and the privacy/personalization tension. We like the simplicity of having all important messages go through our phones so we have a single source for any important information. But using our phones for so much limits our ability to walk away from them or ignore them when our focus should be elsewhere (on sleep, class, or driving, for instance). We give up control of our attention in order to consolidate notifications. And then those notifications control us.

In “Understanding and Maintaining Your Privacy When Writing with Digital Technologies,” Lindsey C. Kim discusses, admittedly in different terms, the idea of mental privacy: freedom from intrusion by notifications (246–47). If people knocked on our doors at home all the time, we’d be driven mad by the intrusion and the expectation that we attend to all those visitors. But somehow, because the equivalent of a door into our awareness is in our pocket or on our wrist, we no longer limit who can knock. Our preoccupation with notifications leads to zombie campuses where students are physically present but mentally distant. And what effect does all this interruption have on our attention spans?

How many people, especially students, use Focus mode on iOS and macOS? How many automate it? I suspect classes would be viscerally different if everyone disabled notifications during class.

Algorithms Direct Agency

We’ve all heard about the control algorithms have over our daily lives. Oftentimes these algorithms work in the background, making decisions about us without our awareness. Sometimes, though, their effects can be fairly obvious, and sometimes they can work to influence how we ourselves make decisions.

Corporate Controls

It’s funny to see how desperately companies cling to features that make them money. Take, for instance, Amazon Prime—something that’s essentially pure profit for the company because their logistics infrastructure means they can deliver most products quickly for the same price as a delayed shipment. If a customer cancels Prime, Amazon loses both revenue (the subscription fee) and its perceived edge (the ease and speed of delivery). Without Prime, Amazon’s services have to compete with local vendors. Thus, the process of canceling a Prime membership requires clicking “cancel” no fewer than four times—they make really, really, really, really sure you want to cancel.

Similarly, Google makes money from tailored, targeted ads—what they call “personalized” ads. An ad costs more if Google can say you’re more likely to click on it. For example, an ad for a fish fry on a Friday during Lent is going to be much more valuable shown to a devout Catholic than to an atheist who loves steak every weekend. Targeting an ad makes the target more susceptible to whatever the ad sells. We are the targets, and it’s fair to call these campaigns attacks. Targeted ads predict what we’re most likely to give in to and make it that much easier for us to give in. Turning off “ad personalization” makes Google’s targeting far less effective and its ads far less valuable. When a user tries to disable the “feature,” Google asks for confirmation. It presents the decision as one where the user is about to lose out on something. But in fact, the user is about to regain some agency over their browsing and their choices.
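To make the economics concrete, here’s a toy sketch of the math (my own illustration with made-up numbers, not Google’s actual pricing model). The same impression is worth ten times as much when the platform can predict the click:

```python
# Toy model: what one ad impression is worth to an advertiser.
# All numbers are hypothetical; this is not Google's pricing logic.

def expected_value(bid_per_click: float, p_click: float) -> float:
    """Expected value of one impression: bid times predicted click probability."""
    return bid_per_click * p_click

BID = 2.00  # the advertiser pays $2.00 per click on the fish-fry ad

# Untargeted: the platform knows little about the viewer.
generic = expected_value(BID, 0.005)

# Targeted: a devout Catholic, on a Friday, during Lent.
targeted = expected_value(BID, 0.05)

print(f"generic impression:  ${generic:.3f}")   # $0.010
print(f"targeted impression: ${targeted:.3f}")  # $0.100
```

That multiplier is the whole business case for surveillance: the more a platform knows about us, the more each impression sells for.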

Political Controls

Agency over our choices with ads is one thing. But limiting our agency with regard to reality, conspiracy, and elections gets downright scary. In “Messy & Chaotic Learning,” Martha Fay Burtis addresses the choices higher ed made as the Internet gained traction. She illustrates how our decision to limit student agency by adopting the LMS instead of tilde spaces echoes the all-too-familiar agency-limiting approaches taken by today’s social media platforms. Burtis draws a connecting thread between these limiting systems and our ability to critically navigate information online. She follows that thread directly to the 2016 U.S. election. She says, “In the aftermath, we find that we live in a post truth world filled with fake news and alternative facts.”

Though the shock may have hit us in the aftermath of that election, the trouble certainly didn’t start there. Remember Cambridge Analytica? We’ve known for years that folks can muck about with elections. It didn’t take 45’s usurpation of the popular vote to reveal the trickery. I think the difference is that now we feel like it’s happening to us, while we watch. We’re unable to gaze through our feeds without the sense that our feeds are, well, feeding us. It’s suddenly become unnerving to doomscroll. What Burtis warns us about is this: if schools force students to see the web only through the lens of an LMS, they won’t be prepared to face the rest of the web after graduation.

Platforms

Burtis says that the LMS “is obviously influencing pedagogy”—an understatement that downplays the insidiousness of the technology. It does more than influence. The LMS forces a pedagogy upon me. It rules certain options out entirely, and it offers defaults that can be difficult (or impossible) to change or resist. For instance, grade distribution graphs are displayed by default in Canvas. Those graphs compare students with one another, deflecting attention away from learning and onto competition. They can prompt feelings of inadequacy as well. To turn them off, I must, one class at a time, go to Settings, scroll all the way down, click the tiny “more options” link, then check the box beside “Hide grade distribution graphs from students.” By default, every Canvas class promotes internal competition on the basis of grades.
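For what it’s worth, clicking through that buried checkbox one course at a time isn’t the only route. Canvas exposes a REST API, and as best I can tell, its course-settings endpoint accepts a hide_distribution_graphs flag. Here’s a minimal sketch, assuming a personal access token; the domain and token below are placeholders:

```python
# Sketch: hide grade distribution graphs in every course I teach.
# Assumes Canvas's REST API course-settings endpoint; domain and token are placeholders.
import requests

CANVAS_DOMAIN = "https://canvas.example.edu"
API_TOKEN = "REPLACE_WITH_PERSONAL_ACCESS_TOKEN"
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}

# List courses where I'm the teacher (first page only; real use needs pagination).
courses = requests.get(
    f"{CANVAS_DOMAIN}/api/v1/courses",
    headers=HEADERS,
    params={"enrollment_type": "teacher", "per_page": 100},
).json()

for course in courses:
    resp = requests.put(
        f"{CANVAS_DOMAIN}/api/v1/courses/{course['id']}/settings",
        headers=HEADERS,
        json={"hide_distribution_graphs": True},
    )
    print(course.get("name", course["id"]), resp.status_code)
```

Even then, the default tells the story: the competitive graph is what students see unless someone who knows about the buried setting (or the API) goes out of the way to turn it off.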

Beyond that default, Canvas won’t let me go without grading. I’ve tried. Last semester, I gave formative feedback and indicated only whether an assignment was complete. Students read “you did this” (meaning a C) as “you did this perfectly” (meaning a 100%), and we all suffered as a result. I can’t display grades as anything but a percentage. I can’t make something required but unscored. I’m only allowed to grade in a system that demands a 100-point scale when a five-level distinction would do. Even that distinction is arbitrary and irrelevant to learning. Yet I am forced to pretend there’s a difference between an 83% and an 84%. In a writing class.

Agency in Digital Spaces

Any technology we use is a technology we choose. We must make that choice conscientiously, knowing that when we gain simplicity, we lose control. When we let platforms access our data, we lose control of how that data is managed. When we use systems to simplify our work, we lose control of how we do that work. Most critically, when we let these systems influence our decision-making, we lose control of our futures.

3 Responses
  1. Robin DeRosa

    I think the issue I have with this post is mostly about the representation of Martha’s work. If someone weren’t familiar with the full 2017 piece that you quote from, or with Martha’s longtime work with DoOO and moving faculty and students away from the LMS, it might seem like Martha is actually one of the folks who needs to wake up to the importance of privacy concerns related to tech, and edtech in particular, or to how crucial it is for faculty and students to make critically informed decisions when they choose tools and platforms. But I just watched Martha create an entire PD program around just this with her Design Forward initiative. So I agree with much of this post, but it’s weirdly cherry-picking in how it represents her work, and though I am sure she is not flustered by it, it may do newer readers a disservice if they are looking for leaders and good work to follow as we all navigate these waters.

    1. Thanks for taking me to task on my presentation of Martha’s ideas. You’re absolutely right that I cherry-picked an odd detail. I tried to point to something that students would find familiar in the hopes it would encourage them to connect her assertions with their memories.

      But you’re also absolutely right that I should have positioned her as more aware and forward-thinking than I had initially portrayed her. That was an error of carelessness on my part. I’m grateful you pointed it out, and I apologize if she read the first version. (Hi Martha! Still love you!) I’ve revised the text to better reflect her position, her reputation, and her wisdom, and I hope I’ve better contextualized her work as a result.

      Again, I appreciate your watchful eye keeping me in line.