Everyone’s least favorite company, Meta, is in the news again, this time due to a wrongful death lawsuit placing the blame for the tragic suicide of a child on the seductive design and victimization risks of social media platforms. This heartbreaking event has given rise to a renewed chorus of policymakers and observers calling for regulation to restrain the exploitative and creep-enabling business practices of social media giants. Enter into this debate the always thoughtful Cory Doctorow, whose recent op-ed in The Guardian anticipates the imminent decline and fall of Meta, the parent company of Facebook, Instagram, and WhatsApp. It’s an excellent essay that offers multiple forms of evidence of doom, primarily a list of compelling reasons for the company’s own staff to head toward the exits. I wish I believed his analysis (I want to believe), as I wish for the declining fortunes of most major tech companies.

Somewhat in step with Doctorow, I am hostile to “the advertising model” – the extractive practice of profiting from the labor of unpaid users who create most social media platforms’ content, a relationship made all the more exploitative and society-corroding by the associated practice of sorting users into echo chambers for advertisers. However, unlike Doctorow, I suspect that Meta is more like Boris Johnson than Samson (pride, the fall, yadda yadda); like our current PM, Meta may be fated to be ever embattled yet shockingly resilient. The company is so incredibly wealthy and enormous that it may have decades of reinventions and rebrandings left before dissolving or at least falling from power. Key is the essential dependence of so many people across the world on its flagship platform, Facebook, even as many of us loathe that dependence and search desperately for kinder alternatives. Sadly, though, there are few viable options. As Doctorow concedes, it’s all about the switching costs.
In this case, there is no existing or emerging alternative that ticks as many boxes as Facebook, including the crucial two: a critical mass of users and a range of ‘essential’ features. TikTok may be huge and popular, but its appeal is limited primarily to entertainment – only one of the boxes, and far from the most important. A key and sadly telling aspect is the reliance of artists, activists, charities, and businesses large and small on Facebook. The far less efficient ways that these groups used to reach their target audiences in the beforetimes have largely faded from memory, thoroughly replaced by Facebook and only Facebook. The communications strategy of every new brand or cause begins with the platform and then ripples out to other channels. Those of us who loudly argue to “quit Facebook” always have to face not only our own switching costs in terms of personal network loss (my risks are small but not insignificant) but those of entire sectors, including those we cherish – or at least want to exist, even if that existence is thoroughly propped up by a company that arguably represents much of what so many are working against. I have often said – on Facebook – that until the forces of liberation migrate away from Facebook, they risk being snuffed out at their moment of achievement by a single keystroke. The rejection of predominant forms of oppression – what the vast majority of activist causes exist to promote – would pose an existential threat to Meta’s bottom line and to the wealth and power of its leadership if it were even close to being realized. What CEO in their right mind would allow emancipatory movements to flourish on the company’s private domain unless they were certain (and perhaps complicit in ensuring) that the cause will not soon be attained?
The revolution will not be ‘liked.’ And yet, we remain dependent on the infrastructures of these corrupt and corrupting systems to recruit allies in the creative and activist struggles that envision a better collective future. Meta isn’t going anywhere unless and until a truly viable and liberating alternative exists. For now, we’re stuck with them.
The short answer is “yes.” The concept of “privilege” is probably familiar to you. It describes the advantages, both subtle and obvious, that flow toward certain groups in society and away from others. It is a collection of unearned rewards, benefits, and/or advantages triggered by affiliation with the dominant side of a power system. We experience this in the United States primarily as white privilege, male privilege, and heterosexual privilege. Being non-white, female, or queer – among other classifications – has been amply demonstrated to reduce one’s opportunities in the world. One need only examine prison statistics to see how this plays out for people of color, or the evidence of pay disparities by gender to see how it plays out for women. So, how does this play out in the digital world?
We have taken to calling the experience of our online lives the “information society.” It is an increasingly apt description because it accurately describes how our social, political, and legal lives are migrating out of physical space and into the digital. It should not be surprising that, as both the positive and negative human inclinations found in the material world find expression in the digital world, the impulse to separate and segment society is finding its way there as well. The inequities that comfort some groups in modern society and oppress others are not absent from the digital world. In fact, they are expressed in more subtle and pernicious ways and may prove even harder to combat. By now, you probably know a thing or two about state surveillance thanks to the disclosures of former NSA contractor Edward Snowden. Maybe you were already somewhat concerned about corporate surveillance too – the practices of all sorts of entities collecting information about your browsing habits, cell phone use, and so on. While most of us probably think mainly about how all this data collection affects our own lives, something that is obscured behind the basic problems of unregulated surveillance is how the dramatic increase in surveillance capabilities and data processing affects some people much more than others.
The journalist Natasha Singer has written a number of articles about the profiling and scoring of consumers based on what can be discovered about them via their online activities and habits. As Singer illustrates, new scoring algorithms – which work similarly to traditional credit scores, but without any regulatory restraint – have the power to influence everything from your eligibility for a loan or a job to who you date. These are proprietary systems that operate with a great deal more information about you than you might imagine. The numbers you dial on your phone, the websites you visit, the purchases you make (or decline), where you drive your car or swipe a transit pass, plus thousands of other data points gleaned from the increasingly transparent nature of our transactional and information-seeking lives, are aggregated, crunched, and calculated for efficient ad-targeting…and social sorting. You could be tagged a hot prospect for a great deal on airline tickets, or you could be identified as a credit or security risk and denied an apartment rental, all through an entirely opaque and unaccountable web of algorithms and hidden interactions.
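To make the mechanism concrete, here is a deliberately simplistic toy sketch of how such a scoring pipeline might operate. Everything in it – the data points, the weights, the thresholds, the labels – is invented for illustration; the real systems are proprietary, undisclosed, and vastly more complex. The point is only to show how hidden weights turn surveilled behavior into a score, and how the same score feeds both ad-targeting and social sorting.

```python
# Hypothetical, invented weights -- the consumer never sees these,
# and no real scoring vendor's model is being reproduced here.
WEIGHTS = {
    "late_night_browsing": -5,
    "payday_loan_site_visits": -20,
    "transit_swipes_per_week": 2,
    "premium_travel_searches": 15,
}

def opaque_score(profile: dict) -> int:
    """Aggregate surveilled data points into a single numeric score."""
    base = 500  # arbitrary starting point, as with traditional credit scores
    return base + sum(WEIGHTS.get(k, 0) * v for k, v in profile.items())

def sort_consumer(score: int) -> str:
    """Social sorting: one pipeline both targets ads and denies services."""
    if score >= 520:
        return "hot prospect: airline deal"
    if score < 480:
        return "flagged: credit/security risk"
    return "no action"

# Two invented consumers whose ordinary behavior diverges into
# opportunity for one and a black mark for the other.
frequent_flyer = {"premium_travel_searches": 3, "transit_swipes_per_week": 5}
night_owl = {"late_night_browsing": 4, "payday_loan_site_visits": 2}

print(sort_consumer(opaque_score(frequent_flyer)))  # hot prospect: airline deal
print(sort_consumer(opaque_score(night_owl)))       # flagged: credit/security risk
```

Note that neither consumer can see the weights, contest a data point, or even learn that a score exists – which is precisely the opacity and unaccountability at issue.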
While you may or may not be concerned about this for your own life opportunities, it can have disastrous effects on certain segments of society. Consider the intersection of race with the technology of modern policing. As one example, body-worn cameras used by the police (which hold out much promise for addressing police/civilian violence) have raised concern due to everything else that is collected by those cameras during police interactions, and the web of technology that can act on that data. Police body-cams don’t only capture video of suspects, but also of the people around them. Being the neighbor of a troublemaker dramatically increases your chances of playing a role in a police video. Facial recognition software has advanced to astonishing accuracy and will only get better. The increasing availability of police data to the public, joined with facial recognition software and digital scoring algorithms, suggests a scenario in which one’s “scores” could be downgraded simply for living next door to a police target, resulting in curtailed opportunities and choices. Even if this example of “by-catch” seems far-fetched to you, consider at least that the actual targets of police activity are also being unfairly impacted in new ways by technology. Racial bias in arrests and convictions is well established and pretty hard to dispute. This increased likelihood of being targeted by the police based on race now also means an increase in data about people of color entering the data stream and staying there pretty much forever, where it can permanently damage one’s future prospects. Add in the web of parsing and scoring algorithms, and a picture begins to emerge of race-based algorithmic discrimination.
Race is not the only privilege factor that surfaces inequities in the information society. Women are having a very different experience of information technology than men. Revenge porn, the unauthorized publication of nude or pornographic imagery, typically made public by disgruntled ex-boyfriends, overwhelmingly affects women and can have profound social and economic consequences, as well as put women in physical danger. Revenge porn victims have been fired, denied employment opportunities, socially ostracized, and stalked. While some states are getting around to criminalizing the practice, many of the websites that publish revenge porn are out of the reach of law enforcement. Just as with criminal incident data, it can be nearly impossible to remove every instance of the material once it shows up somewhere online. Women experience other forms of harassment as well, including threatening behavior by “trolls,” as happened in the Gamergate controversy earlier this year.
When a person or organization captures information about you and then uses it against you, that is a privacy-based informational harm. It is also an exercise of power. What opaque and proprietary monitoring systems know about you empowers them to manipulate and control you, while simultaneously diminishing your autonomy. We are all subject to this form of disempowerment at the hands of those who operate the technology upon which we are increasingly reliant. But lacking relative power at the outset, as the unprivileged do, places some people in a far more vulnerable position, one they are unlikely to be able to extricate themselves from.
For the affluent and privileged, the market is responding to increased awareness of, and sensitivity to, ubiquitous surveillance. For example, there are costly smartphones available that ship “hardened” with enhanced encryption and protection against “data leakage.” A rapidly growing industry called “reputation management” is aimed at businesses and the affluent, enabling them to manage their online profiles in order to minimize the damage from any sort of bad behavior. Unprivileged people are more likely to have black marks against their profiles, and are also likely to lack the economic means to buy expensive phones and reputation-cleanup services, while the privileged can avoid discriminatory surveillance practices and pay to maintain squeaky-clean online profiles. This means that existing social divides will only get deeper and more entrenched. In the information society, the unprivileged are increasingly captured and shamed by demeaning and punitive data, and can do little but become even more marginalized.