One Nation Under Surveillance, Power and Privilege, Tech and Society

The Dystopian Path to Bicycle Safety

As an information ethicist who is generally skeptical about digital products and services whose business model is surveillance, I was struck with some serious internal conflict by a recent story about ‘Safe Lanes,’ an app for reporting cars and trucks parked in bike lanes. You see, in addition to being an academic, I am a regular bike commuter. Like other urban bicyclists in the United States, I experience a mix of exhilaration and fear on my commute, where inattentive or obnoxious drivers and inadequate bike lanes can make biking feel very unsafe. I used to think risk-taking was the price of urban biking and took some pleasure in dodging cars and powering through my commute. But having racked up decades of scrapes and scares, my sense of adventure is waning. While Seattle drivers are relatively decent about giving way to bikes (it’s a pretty ‘sporty’ city after all), collisions–sometimes fatal–between cars and bikes are alarmingly routine. The city’s department of transportation has added a lot of bike lanes since I’ve lived here, but enforcement of bicyclists’ right of way is nearly non-existent in my experience. By way of example, an intersection by a police precinct in Seattle’s Capitol Hill neighborhood has a marked space reserved for bikes waiting for the light. That space is frequently occupied by drivers–in police cars. As I know from having had the privilege to bike in places like The Netherlands and Denmark, rigorously enforced bike lanes are a game changer for getting more people (of all ages and genders) out of cars and onto bikes. The dramatic increase in bike lanes in US cities in the last few decades has been an incredible boon, but lax enforcement leaves many folks wary of using them.

Enter Safe Lanes, an app that uses smartphone hardware to capture the image and location of vehicles blocking bike lanes. The app uses technology similar to police license plate readers to identify the plate number of the car and sends this information to traffic enforcement. While options exist for bicyclists to report blocking cars through other means, like calling the police on a non-emergency number, most of us cannot be bothered to do this. It’s time-consuming, and even if one calls, there is a sense that nothing is likely to happen, or not soon enough. While I am the kind of neighbor who reports litter and graffiti, calling in a parking nuisance is pretty unsatisfying. The idea of Safe Lanes is to make reporting bike lane blockers fast and easy, thereby increasing the likelihood that a report will actually lead to a ticket and maybe even change driver behavior. Even for a surveillance-averse person like me, the allure of punishing drivers who make me feel unsafe is very powerful. I was tempted.
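
For readers curious about the mechanics, here is a minimal sketch of what such a reporting pipeline might look like. Safe Lanes has not published its implementation, so everything here–the helper names, the report fields, the fake plate recognizer–is my own invention; a real app would use a dedicated ALPR (automatic license plate recognition) library where the stub sits.

```python
# A hypothetical sketch of a Safe Lanes-style reporting pipeline.
# All names and values here are illustrative, not the app's actual code.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Report:
    plate: str        # license plate number read from the photo
    lat: float        # GPS latitude where the photo was taken
    lon: float        # GPS longitude
    timestamp: str    # capture time, ISO 8601

def recognize_plate(photo_path: str) -> str:
    """Stand-in for an ALPR library of the kind police plate readers use.
    A real implementation would locate the plate in the image and OCR it."""
    return "ABC1234"  # placeholder result for illustration

def file_report(photo_path: str, lat: float, lon: float) -> Report:
    plate = recognize_plate(photo_path)
    report = Report(plate, lat, lon,
                    datetime.now(timezone.utc).isoformat())
    # In the real app, the report would be forwarded to traffic
    # enforcement and, notably, retained for display to other users.
    return report

print(file_report("blocked_lane.jpg", 47.6205, -122.3212))
```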

But Safe Lanes doesn’t just stop at taking a user’s report and forwarding it to the authorities. Once a car or truck has been reported through the app, the image remains visible in the app for all Safe Lanes users to see, along with statistics about how many times a vehicle has been reported. In other words, it’s not just a reporting app but a shaming app. The persistence and display of the user-generated reports, superimposed on a city map, carry with them the implication that, in addition to being diligent citizens who report wrongdoing, we are also expected to join a community of fellow reporters and participate in communal rage by staring with righteous indignation at the offending Priuses and UPS trucks reported by others. And at the end of the day, Safe Lanes joins an alarming number of apps and systems that make city streets into forums of monitoring and control. As discussed in a story about the app that appeared in CityLab, “The illusion of privacy in the public sphere may have always been an illusion, but with many more eyes and lenses trained on the streets, the age-old practice of ‘being seen’ can evolve quickly into being shared, and being stored. And perhaps being unfairly tried and convicted in the court of public opinion.”

Add to this that these apps are most likely to promote the values and worldview of a particular class of urban dweller: the much maligned “tech bros” amassing in places like Seattle, San Francisco, and other popular US cities. While the goal here, bicycle safety, is not particularly controversial or necessarily classist, it is troubling that we so easily trust some techie with programming skills with the authority to shape public behavior by releasing an app. There are myriad assumptions built into this app and its capabilities. One is that bicyclists have rights – I subscribe to that one. But let’s consider who the most likely targets of the app are: low-wage ride-share and delivery drivers. They are, after all, the folks whose livelihoods depend on hurrying passengers and packages around on crowded city streets. I don’t want to excuse anyone who makes it unsafe to bike in the city – I do have actual skin in the game here – but in an age of rapidly gentrifying cities, there is something repugnant about affluent city dwellers using information technology to name and shame people with much less social and economic power. All the ingredients are here for something that appears, at first, to be liberating for an arguably vulnerable group – bicyclists trying not to die. But it also joins an increasingly oppressive assemblage of information systems marketed to “concerned citizens” for the purpose of monitoring, shaming, and controlling others. Ugh! Safe Lanes, you had me at “improve bicycle safety,” but lost me at “participate in a surveillance dystopia in which no minor infraction goes unnoticed or unpunished.”

Perhaps this is just what happens when we impoverish and abandon public institutions in favor of entrepreneurial techno-solutionism. What if, rather than hiding in our phones and relying on commercial products to mediate our participation in public life, we actually spoke to each other and our elected officials – in actual public forums – where we could advocate for better bike lane enforcement or demand money for driver education programs? What if, rather than relying on apps to shame people into compliance with the behavioral paradigms imagined by technologists who happen to like biking, we worked on being less suspicious and more patient with each other while also developing safe bicycle networks? I know, I know. This is asking a lot of humans – particularly American humans. But there has to be a better way of improving urban life than weaponizing information technology…doesn’t there?

One Nation Under Surveillance, Our Corporate Overlords, Tech and Society

Consuming Surveillance

Our consumption habits are the root cause of pervasive surveillance, the erosion of democracy, and the threat of environmental disaster. Consumption is the main culprit driving the digital invasion that seeks to gather data about every aspect of our lives, from our browsing habits to our heartbeats. How is this so? Let me break it down. First, consider that the biggest source of surveillance for most (not all) people is advertising. All that logging, tracking, and predicting going on through the use of seemingly every device and at every transaction is designed to hone micro-targeted advertising and other forms of precision marketing. Every search, every website visit, every app on your phone, that wearable device measuring your steps and your sleep, the chatty digital assistant that plays your favorite songs and dims your lights, social media (of course), your “cloud” – all of these are sites of persistent and increasing collection of what Harvard professor Shoshana Zuboff calls our “behavioral surplus.” As we act in the world, the evidence of those actions is gathered up. That is the “surplus,” and it contributes to what Zuboff calls a “hidden text” that describes the movements of our lives like a shadow. What is contained in that text is invisible to you and me, but luminous and valuable for others.

The reason for harvesting our behavioral surplus is to sell things – to us and to people we resemble. And not just the things we need, plus a few things we want, but an ever-increasing amount of these things. As it happens, our personal rates of consumption have been steadily increasing for at least a century. So much so that by the end of the twentieth century, Americans were consuming more than 17 times what they did in 1900, leading us to consume, at present estimates, between one-fifth and one-third of the world’s resources despite having only about one-twentieth of the world’s people.
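
To see how lopsided those proportions are, here is the back-of-the-envelope arithmetic implied by the figures just quoted (a rough calculation, using only the estimates above):

```python
# Rough arithmetic implied by the consumption estimates quoted above.
us_population_share = 1 / 20   # about one-twentieth of the world's people
resource_share_low = 1 / 5     # low estimate of US resource consumption
resource_share_high = 1 / 3    # high estimate

per_capita_low = resource_share_low / us_population_share    # 4.0
per_capita_high = resource_share_high / us_population_share  # ~6.7

print(f"Per capita, Americans consume roughly {per_capita_low:.1f} to "
      f"{per_capita_high:.1f} times the world average.")
```

In other words, by these estimates the average American consumes something like four to seven times their proportional “share” of the world’s resources.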

The United States is not alone. As many formerly low-consumption countries have become wealthier, they have dramatically increased their consumption as well. This follows a certain logic: An increase in aggregate wealth provides the incentive for businesses to provide goods and services in what free-market fans would term a “virtuous cycle,” where profitable production creates the financial capital to pay higher wages, which leads to more disposable income and demand for more goods and services. This apparent lifting of all boats might be fine if there were infinite resources to use as raw materials and as fuel for transportation and production, but there are not. Industrial growth has decimated the planet, depleting its resources, polluting the water and air, and generally leading us toward ecological disaster. And yet, we carry on as if this were not the case. The boats are indeed lifting, but only because of the runoff from melting glaciers.

But how does this lead to surveillance? The phenomenal growth in consumption has been accompanied by a similarly phenomenal growth in consumer choice. In virtually every product category, there can be dozens or even thousands of options. Each producer or service provider really wants your dollar, and they have to fight for it. Advertising is a multi-billion-dollar industry designed to help close this deal, and many developments in information technology have been brought to bear as the tools of this particular trade. The development of increasingly invasive and secretive surveillance techniques to capture the minutiae of our online and connected lives has been especially useful. As Zuboff tells it, Google pioneered the exploitation of behavioral surplus by employing sophisticated techniques, including artificial intelligence – first to analyze search queries. Search, as we now well know, turned out to be a remarkable wellspring of information about what people think, do, and plan to do. Google’s ingenuity led the company to figure out how to go beyond merely observing our habits to making very good predictions about them. They appear to be working on plans to go further and simply command our choices by manipulating what we see and when we see it, and they are not alone. Just as Google and its parent company Alphabet have developed a wide range of quality online tools and services – from maps to translators to word processing and data storage systems – to keep tabs on us at all times, Facebook has similarly figured out how to keep us glued to our screens as a (last) means of maintaining social ties while harvesting, and then trafficking in, our behavioral surplus.

It is no coincidence that the rise in consumer surveillance has been accompanied by new and troubling forms of state surveillance. It makes sense really. Technology companies, having discovered how to write the hidden text of our lives, found a willing customer for that text in our increasingly paranoid governments. Most of the surveillance technology used by local law enforcement is bought off-the-shelf from commercial firms large and small, including Amazon. The company built to service our every consumptive whim and need has expanded well beyond its retail position to sell all manner of surveillance equipment. In particular, the company is actively trying to corner the market in selling facial recognition systems to federal agencies, who are enthusiastic buyers. Meanwhile, recordings picked up by Alexa have found their way into criminal trials, demonstrating the effective demolition of the public/private divide through always-on, connected home devices, especially those standing by to take your retail orders. The phones we carry, the smart appliances in our homes, the vehicles we ride in – all these and more offer up the details of our lives to any buyer, public or private. Quaint concepts like search warrants and the expectation of privacy just wither away while we buy, buy, buy. Meanwhile, federal law enforcement has been contracting with Google, Microsoft, and other tech giants for technical services for decades. All of these companies are banking on both the commercial and governmental business opportunities made possible by the stuff they specialize in: machine learning, facial recognition, data analytics, and so on. The techniques they designed to target advertising are easily converted to techniques for even darker forms of targeting. Predicting what you’ll buy is not all that different from predicting what else you’ll do, and oh-so-many people want to know.

None of this is your fault of course. There are economic and social forces well beyond our individual control that have created the retail-surveillance state we now find ourselves in. What is true is that you, me, and everyone we know were easy marks. We want things. We crave convenience, efficiency, services, systems, tools – anything to impose order on a demanding world. Our lives are busy and our social connections have gone digital. Our health is concerning, so we track our fitness. Stores are a hassle and they’re disappearing anyway, so we shop online. There are too many movies to choose from, so we let Netflix choose. And of course, only the bravest or most stubborn among us can live without a smartphone. As we appear to gain a little space, a little human contact, a little leisure, we also lose – our privacy, our agency, and our planet. We live our life stories tucked into the bosom of our technological affordances and retail pleasures. Yet the story of our lives only seems to be written by ourselves. The rest of the story is a second text, a hidden text.

Our Corporate Overlords, Tech and Society

Feeling the Feels: Artificial Intelligence and the Question of Empathy

When I was a child, our family went through a series of shitty televisions. This was before most people had internet connections or personal computers, when a TV was a suburban family’s most immediate link to the outside world. So, TVs mattered. We didn’t have much money, and a family friend, who happened to be an electronics tinkerer and a bit of a hoarder, would periodically trash-pick old TVs and give them to us. They were big boxy things with glowing tubes inside. The hand-me-down TVs worked for a while and then they didn’t. Towards the end, there was typically a period in which some amount of banging on the side or top of the TV would get it to work or improve the picture for a little while, until we had to do it again. Often the banging would be an act of frustration or anger. It felt like the TV was doing something intentionally, holding out on us, magnifying our helplessness and deprivation. I sometimes cried out while hitting the thing. It was cathartic and I was miserable. And here’s the thing…some of those TVs gave the impression that they felt something. The picture would seem to twist or flash in response to the banging. Sometimes a good pounding on a nearly dead TV would produce satisfying sparks or smoke. It was as though the television felt pain. Not simply physical pain, but emotional hurt. Like we could make it share the sadness and frustration we were feeling in the same way an abuser or a bully inflicts pain in order to “share” his wretchedness with others.

The televisions didn’t feel pain of course and, as a fairly well-adjusted adult, I no longer beat up on defenseless technology…often. It wouldn’t matter much if I did though (except for the replacement costs and some troubling implications about my mental health) because machines do not feel. Increasingly, we have technologies that see, hear, smell, and can even sense touch and pressure, but they do not now, nor will they ever, have emotional feeling. And because they cannot experience emotion, specifically emotional pain, they are incapable of empathy. A well-crafted machine can certainly imitate emotion. Siri or Alexa can choose a voice-modulating algorithm to communicate concern, mockery, or hurt, and Jibo, the adorable table-top robot, has endearing face-like expressions and can coo like a precocious child, but it is all just algorithmic fakery. The machine feels nothing. It simply chooses a response type from a library of code and executes it without being itself affected in any way.

Why does it matter? There are certainly plenty of people who think it doesn’t, or at least not very much. Luciano Floridi, a philosophy professor at Oxford, has suggested that we overemphasize the specialness of human agency and reason. He revisits the famous “Turing Test,” which basically holds that if a computer can perform well enough to convince you that it is human, then it can no longer be thought of as merely a machine. Floridi wants us to realize that as machines—artificial intelligence—become capable of doing an increasing number of human-like things, the set of skills and practices we believe can be entrusted only to humans will become progressively smaller and may eventually vanish. The Science and Technology Studies scholar Bruno Latour has similarly suggested that we need to view objects as mundane as seat-belt chimes and hydraulic door closers as having human-like “agency” because such objects act in the world and shape human action. This line of thinking appears to be propelling the development of products and services that give machines significant power over people’s lives, based on assertions that machines perform as well as or better than humans in many tasks. Getting a job? Software is already being used to choose from a pool of candidates based on their resumes and social media profiles, using machine learning algorithms to make assertions about future performance and “fit.” There are bots that can even conduct job interviews. These approaches are being attempted well beyond hiring, for a broad range of decisions, from who should get a kidney to who should go to jail and for how long.

There are very likely numerous scenarios in which machines can do a better job than fallible, biased, or oblivious humans. There may even be demonstrable cases where software overcomes racist and sexist trends in key decision spaces. Even if this is true, there are also a lot of things a machine will never be able to do. Why? Because they lack empathy. Empathy is a core human trait that differentiates us from machines and motivates important human values like charity and the desire to relieve suffering. Unlike many animals, we even feel empathy for members of non-human species. We routinely agonize over what pets and farm animals feel and we worry about the injustice of picture windows as experienced by birds in flight. We also appear to empathize with inanimate objects like cars, and yes, devices with artificial intelligence, such as those smart assistants and even military robots. Humans are overflowing with empathy. It is empathy that causes us to care about homelessness in our cities, the poor health of strangers, and the fates of victims of far-off wars and famines even when we aren’t in direct contact with their struggles.

Empathy is particularly important to our human futures. We cannot simply decide who should live or die, who should suffer, or who deserves a second chance based on code libraries and the capability of giving you bad news with the appearance of regret. Being human means eventually having the experience of pain, which contributes to an ability to empathize with the suffering of others. Machines cannot have these experiences. This is why humans must always be a central part of important decisions that concern the well-being of other humans. But artificial intelligence has been proposed to make determinations in just those realms, such as in military matters. Despite vacuous claims about achieving “humane war” and other insane concepts, war is and should be entirely lacking in humanity. Machines will not improve it, only distance human actors from having to confront its awfulness. Similar discussions about handing off difficult decisions to artificial intelligence need to come to a full stop whenever they involve determinations about human fates. Will the autonomous car allow its owner to die to save others? Who will be liable when the healthcare robot amputates the wrong limb? Liability isn’t the only issue here, and neither is the project of figuring out how to program “morality” into AI. The autonomous device can never “care” who lives or dies, which limb should be removed, or how much anyone suffers. It can only make a calculation and then act in the world without paying an emotional price. Like a sociopath. This is not how moral decisions are made. Only humans can truly care about anyone or anything, and that is the fundamental basis of moral agency. Artificial intelligence, quite literally, has no skin in the game. And this is why artificial intelligence can never replace us.

Our Corporate Overlords, Power and Privilege, Tech and Society

Silicon Valley Joins the Culture War?

This past Sunday the web host and domain name registrar GoDaddy bowed to months of pressure from activists and told their longtime customer, The Daily Stormer, to go find another host for their website. On Monday, Google similarly denied a home to the white supremacist, Nazi-aligned website. A denigrating post about a young activist killed by an apparent neo-Nazi at a white nationalist rally in Charlottesville was the final straw that forced GoDaddy’s hand and, presumably, that of Google. In related news, The Los Angeles Times reported over the weekend that other Silicon Valley service providers, such as the short-term rental company Airbnb and the crowdfunding site Patreon, were blocking the use of their services by various “far-right” groups, forcing them to find other providers and, in some cases, to create their own. We should be proud to see Silicon Valley coming to the rescue and fighting on the right side of the culture war, right?

If only it were so simple.

There are several hard questions that should be asked about banning or allowing white nationalists and a whole range of other haters to use the internet to spread their messages, including those messages that strike fear into the hearts of many other users. There are also many important questions—or demands—that should be posed to Silicon Valley firms and our elected leaders to define what should and should not be construed as free expression and to place the burden of policing that in the right set of hands.

The legal basics involved here are the U.S. Constitution and the “safe harbor” provision of the Communications Decency Act. The free expression guarantees of the First Amendment are frequently cited by white nationalist types as the legitimizing bases for their demonstrations and published hate speech, but they do not apply to services operated by non-governmental entities. In other words, the popular services of the internet, such as Airbnb, Twitter, YouTube, etc., have virtually no obligations under the U.S. Constitution. This means that internet platforms can block or allow nearly any type of expression, unless such expression is specifically addressed by law. Section 230 of the Communications Decency Act, otherwise known as the “safe harbor” provision, basically absolves providers of internet services, including ISPs, web hosts, media streaming services, and others, of liability for how their customers use those services. If a customer contributes content, it’s on the customer; the internet service is not to be construed as the “publisher” of the content. Section 230 has exceptions of course, but hate speech isn’t one of them.

In the wake of the violence in Charlottesville, many social media commentators were rightly upset at the various enablers of sites like The Daily Stormer and the many other services that provide any sort of comfort to white nationalists. However, as a legal matter, the sites have no constitutional or statutory obligation to do anything, and for the most part, they haven’t. A search on Facebook for “white pride” or similar terms will reveal lists of pages dedicated to white nationalists and the people who love them (really – there’s a white pride dating group). Google has come under fire for failing to prevent its search auto-complete algorithms from completing sentences like “Muslims are…” with “terrorists” and “the holocaust is” with “a hoax,” and similarly unhelpful constructions (which they’ve since improved). Twitter and Facebook have both come under fire from users and commentators who loudly complain that the platforms do far too little to prevent some of their users from engaging in relentless harassment, even when it includes threats. GoDaddy had been under pressure for months to distance itself from The Daily Stormer but chose to do nothing until some magic line was crossed this past weekend.

Silicon Valley’s failure to embody the role of steward for civil society should not be surprising, however. For one thing, it is not exactly clear whether it actually makes sense to empower corporations to carry the water of a society’s moral duties. Of course we want corporations to act morally, but as the power of corporations increases—particularly the corporations that are most visible in the internet/mobile sector—the power of non-commercial society seems to be decreasing. (The American companies Alphabet and Apple are together worth over $1.4 trillion.) The key questions to consider here include: Are we comfortable leaving the decisions about who gets to speak and who does not to enormous institutions that are generally unaccountable to society? How can we be sure that such choices will be made in the best interests of the public rather than to meet narrow, short-term business objectives? Given the increasing importance of the major internet platforms, such as Facebook and YouTube, as accessible and powerful venues for expression of all kinds, it seems obvious that the platform operators must bear some responsibility for what that expression does, regardless of how the law and regulatory environment are currently arranged. Yet it is sadly unclear how to make the execution of that responsibility align with the cherished values of electoral democracy and civil society. What is clear to me is that “the market” is not a sufficient incentive structure to ensure that socially beneficial speech and other forms of expression are adequately nurtured and protected.

In a democracy, elected officials such as mayors and governors have the power to determine what constitutes protected expression on the streets of our towns and cities. They will do this imperfectly of course, as seems to be the case in Charlottesville, where the city was warned repeatedly about the risk of violence. What is key here is that elected officials serve at the pleasure of their constituents and, in a functioning democracy, such constituents choose those officials and play at least a supporting role in the decisions they make. This process is by no means unproblematic, but it is well-established and can be influenced by the moral intuitions of voters and activists. Meanwhile, you and me and everyone we know have zero say about what GoDaddy does. Whoever guides the decisions there is far less likely to do so out of moral compunction or the fear of losing political power. Sure, we can get a Twitter gang together to shame them into taking an action against some hate publication, but that is not democracy. For one thing, GoDaddy is only likely to respond when they see money on the line. They didn’t see that until The Daily Stormer did something so vile in the wake of an attention-grabbing murder that they figured they couldn’t pretend to be “neutral” anymore without paying some price. And even though the end result was positive in my view, it wasn’t a democratic action and it will have no impact on what anyone else does. GoDaddy, Google, and Amazon Web Services probably host many hundreds of other websites that promote hatred and will continue to do so.

This should not be surprising because this is the reality of what Silicon Valley does, particularly when it is enabled by a mix of free-market utopians and free-speech maximalists: It enables bullies. The information industry is built on a winner-take-all model that relentlessly removes revenue from communities and traditional capitalist activities, such as media distribution and street-level retail, and redistributes it into increasingly fewer (and richer) hands. In the service of this business model, Silicon Valley oracles celebrate ego-driven entrepreneurialism and denigrate steady jobs, equality, wealth-sharing, and any sort of collective action. The type of freedom that Silicon Valley celebrates is freedom for the strong, freedom for the already-got-some, and includes a full-throated claim that only a pure meritocracy that denies the inequities of history is fair or legitimate. Oddly enough, as revealed by the infamous memo by James Damore, Silicon Valley has no problem promoting discredited stereotypes about women and other “less-thans” who supposedly aren’t genetically wired to live up to the narrow standards of the engineering elite. All of this is important to note because when Silicon Valley boosters throw around bromides about the value of a free and open internet, what they mean might not be what you think it means.

Don’t get me wrong: I support a free and open internet, but I have my own definition of what that is. For one thing, when I think about free speech, I don’t think about it as completely unfettered, louder-is-freer free speech. Constitutional free speech isn’t wholly free and neither should it be online. For me, free speech is only effective when it doesn’t silence another legitimate speaker. When a cruel or threatening Twitter troll chases an LGBT activist or a game developer off of the platform (or out of their home), that is not an exercise in free speech, that is simply intolerable bullshit. It is vile and immoral behavior that deserves condemnation and little to no protection from the authorities. Now before you go and accuse me of hypocrisy by citing the acts of anti-fascists whose stated aims include silencing neo-Nazi types, note that I wrote that sentence about my view of free expression with care. I have some very specific ideas about what makes a speaker “legitimate.” Just as I claim that free speech is not speech that silences others, I also hold that a legitimate speaker is one who does not aim to deny basic human rights to others. White nationalists fail that test, as do speakers who denigrate women or describe others as inhuman and unworthy to live. Despite whatever non-violent benevolence some white supremacists may claim to espouse, history shows that the end goal of white supremacy is the exclusion, enslavement, or annihilation of non-Whites, including many categories of people with light skin who are otherwise deemed undesirable (Muslims, Jews, LGBT folks, etc.). This is not debatable. Nazi Germany happened. Hundreds of years of African slave trade happened. While we might group hardcore leftists in with other historical rights-deniers like Stalin’s Soviet Union, there is zero evidence that anti-fascists have gulags in mind as the end goal of their activism. Driving racial hatred back into the margins would probably suffice.

This leaves us with a conundrum. We have left the barn door open and allowed Silicon Valley to move the popular venues of expression from the community stage and the city street to their proprietary platforms, where they are guided not by constitutional or democratic principles but by terms of service strategically designed to maximize profits and offset risk. This is not the formula for achieving a civil society. Extremist speech that is permitted to eddy and coalesce in seemingly ungovernable online spaces builds momentum and eventually spills out onto the streets and potentially into violence, as we have witnessed in Charlottesville, in Kansas, and in Portland. Free expression hasn’t lost its value or importance, but we can no longer allow governments to leave the management of free expression in the hands of those least qualified to handle it—internet platform providers. Broad and generous interpretations of the CDA safe harbor provisions and a misplaced application of constitutional principles to venues that bear no constitutional obligations—or really any obligations to anyone—have pushed society to the brink of chaos. It’s time we focused not only on how to keep the internet “free and open” but also on how to make it fair and inclusive, and ultimately, just.

Who is actually made safer or freer by safe harbors? Do you feel safer? I, for one, do not.

Our Corporate Overlords, Technology and The Law

Packingham: The Danger of Confusing Cyberspace with Public Space

A recently decided Supreme Court case has triggered a debate about how much (or how little) governments can regulate the use of online spaces. Specifically, in Packingham v. North Carolina, a case about a state’s prohibition on social media use by sex offenders, the court has weighed in with an opinion that would seem to suggest that social media sites and services are no different from streets or parks where the First Amendment is concerned. While I tentatively agree with the majority that the government should not issue sweeping restrictions on internet access based on an individual’s criminal record, justifying this position by portraying internet sites and services as public space is misleading and, in my opinion, dangerously naïve. As if he had just read the collected essays of John Perry Barlow, Justice Anthony Kennedy writes in the majority opinion: “in the past there may have been difficulty in identifying the most important places (in a spatial sense) for the exchange of views, today the answer is clear. It is cyberspace…” Kennedy correctly asserts that ‘cyberspace’ plays an increasingly important role in people’s lives, but he overlooks how the spaces and places provided by the internet are fundamentally different from those that can more accurately be described as public spaces, such as streets and parks.

On Facebook, for example, users can debate religion and politics with their friends and neighbors or share vacation photos. On LinkedIn, users can look for work, advertise for employees, or review tips on entrepreneurship. And on Twitter, users can petition their elected representatives and otherwise engage with them in a direct manner. Indeed, Governors in all 50 States and almost every Member of Congress have set up accounts for this purpose…In short, social media users employ these websites to engage in a wide array of protected First Amendment activity (emphasis added).

Like the many observers who have written paeans to the free-wheeling uses and democratizing potential of the internet, the majority in Packingham demonstrates an ill-informed exuberance about the freedoms enjoyed by users of social media platforms. Even Justice Samuel Alito, in his concurrence with the majority, criticizes what he calls the court’s “loose rhetoric,” stating, “there are important differences between cyberspace and the physical world…” Yet Alito only criticizes the breadth of Kennedy’s claims while similarly failing to recognize the myriad ways our civil rights cannot be asserted on the internet. The resulting opinion promotes a popular but inaccurate narrative about the beneficence and neutrality of the internet in general, and social media platforms in particular.

Let’s be abundantly clear: social media sites and services are not public spaces and those who use them are not free to use them as they please. Social media platforms are wholly owned and tightly controlled by commercial entities who derive profit from how they are used. While, as is argued in Packingham, governments may be limited as to the extent they can tailor regulations over the access or use of an internet resource, social media users are already subject to the potentially sweeping choices made by site operators. Through a combination of architecture (code) and policies (terms of service), social media users are guided and constrained in what they can do or say. Twitter, Facebook, and other platforms routinely block users and delete content that would most likely be considered protected speech if it took place in a public venue. So, while we can probably agree that social media platforms have become central to the social lives of many millions of people, this means only that these services are popular. It does not make them public.

Justice Kennedy attempted to link the free speech rights that have been upheld in cases concerning other venues, such as airports, with the rights that should be available on the internet. While I do not disagree that the full extent of our constitutional protections should be available in online venues, the generally unregulated status of the internet and the commercial ownership of most of its infrastructure mean that cyberspace bears very little resemblance to ‘realspace.’ Airports, for example, are public institutions operated by government agencies. A social media site—almost the entire internet now—is more like a shopping mall. Social media platforms reproduce features of life in public places like city streets in much the same way that shopping malls mimic the interactive spaces they have come to supplant. A mall is neither street nor park. Different rules—and laws—apply to malls. When the Mall of America near Minneapolis shut down a Black Lives Matter protest in December, the mall operators were able to assert their property rights over the expressive and assembly rights of the protestors. A municipality would have risked a civil rights lawsuit had it broken up a peaceful protest on a city sidewalk or in a public park.

Packingham is a case about constitutional rights that overlooks the increasing privatization of those rights. It is also part of a larger problem of misrepresenting cyberspace as a zone of freedom. This transformation in our relationships to rights, and in our perceptions about those rights, is aided by the invisibility of power online. Facebook, Twitter, etc., by providing expressive spaces in which their users supply the visible content, do not much appear to us as actors in this drama. We are led to believe that they simply provide appealing services that we get to use so long as we follow some seemingly benign ground rules. We fail to recognize that those rules are designed not for the best interests of users, but for the goals of the platforms themselves and their advertisers. Facebook in particular has worked hard to encourage dramatic changes in human social behavior that have enabled it to gain deep knowledge about its users and to monetize that knowledge.

Justice Kennedy’s opinion is especially irksome because, while it purports to preserve important rights as our lives migrate online, it overlooks the distressing trend of privatization of the very rights that the constitution promotes. Yes, we may engage in first amendment activities online without undue interference by government officials, but the ability to do so is not guaranteed by the government because the government is barely involved. Ever since the internet ceased being a project of the Department of Defense, most of it has been privately owned, and the government has avoided regulating most of the activities that take place there. While it may be true that an unregulated internet is a good thing, a side effect of this approach has been the growth of enormously powerful online businesses based on manipulating and spying on users and profiting from the resulting data. Every single communication and transaction that takes place on the internet passes through infrastructure belonging to dozens, even hundreds, of private companies, any of which may assert its own combination of architectural and policy restrictions on how that infrastructure is used. Where it suits a company to operate with total neutrality and openness, it does so. Where it does not, the company acts in whatever manner suits the bottom line. Facebook, for example, is frequently lauded for its capacity to support political organizing as well as other modes of first amendment activity. But if Facebook decided tomorrow to block access to an NAACP page or to prevent the use of its messaging system to organize a legal street protest, there is nothing but the potential for consumer backlash to prevent it from doing so. If Google decided to choose the next U.S. president by subtly shaping “personalized” search results, there is no law on the books to prevent it. Packingham says nothing about this kind of power over free expression, which dwarfs that of the government when it comes to online activity. Until the government and the courts begin to address the privatization of our rights online, court opinions celebrating our online freedoms will continue to ring hollow while amplifying perceptions of government irrelevance in the internet age.


Our Corporate Overlords, Power and Privilege, Tech and Society

The Commodification of People

Among the many ways so-called Big Data is influencing our lives, quantification and predictive analytics are beginning to play a significant role in how people are selected for opportunities, such as jobs, homes, romance, sex, insurance, and so on, replacing the vagaries of human judgment with seemingly objective and reliable analytic scorecards and labels. The same profusion of data that flows from your interactions with the networked and surveilled world, and which results in all those “personalized” ads you routinely encounter, can also be used to evaluate and grade you as a person. Your daily experiences and interactions with websites, mobile apps, credit card processors, eBook readers, cell-phone carriers, security cameras, etc. leave data trails that are routinely and tirelessly hoovered up to supply the information economy with the raw material of user profiling (but you already knew that, right?). But beyond the now familiar goal of these activities – to simply sell you stuff – lies a larger information dream: using data about you to thoroughly understand what makes you tick and using that understanding to predict your future. Opportunity gatekeepers, such as landlords and employers, find this dream very attractive. Business objectives drive gatekeepers to seek out any and all means to maximize efficiency in their operations and reduce their levels of uncertainty and risk. Quantifying people into gradable categories, like bushels of rice with consistent and predictable quality, is an intoxicating product offering for decision makers, and the data industry is prepared to meet (and create) that demand. By aggregating your prior preferences and behaviors and comparing them to the preferences, behaviors, and choices of thousands of similar people, a motivated data processor and her algorithms attempt to make a range of predictions about your life, getting out ahead of the uncertainties of evaluating people based on what they self-report or provide through their chosen references.

But there’s a problem. Quantifying people is not nearly as easy as quantifying grain. Quantification requires standardization, but people aren’t standardized, and the data collection methods we have for analyzing people aren’t perfect. So, shortcuts have to be made and proxies must be used to reduce the rich complexity of human experience into discrete buckets. The first reduction comes in the form of the data that is used. Despite the fact that our lives are increasingly observed and analyzed, the domains and methods of observation come pre-loaded with certain biases. Tracking what books you read with a Kindle (or other eBook reader) requires, first, that you own a Kindle or use the Kindle app. This already eliminates that data point from consideration for all the people who stubbornly continue to read printed books or who choose to spend their limited incomes in more practical ways. Here we see how the way one chooses to engage with the data ecology might shape her profile. The varied choices people make about participating in social media are similarly influential in profile development, as evidenced by the increasing number of data products that use social media data as inputs (see this for a chilling example).
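
A toy illustration of how that first reduction plays out. The names, the single feature, and the scoring rule here are all invented for this sketch; no actual vendor’s model is being described:

```python
# Invented example: a "reading engagement" feature derived solely from
# e-book telemetry. People who read on paper generate no signal at all,
# so the model cannot tell a devoted print reader from a non-reader.
profiles = [
    {"name": "kindle_reader", "ebook_minutes_per_week": 300},
    {"name": "print_reader",  "ebook_minutes_per_week": None},  # no device
]

def reading_score(profile) -> float:
    minutes = profile["ebook_minutes_per_week"]
    if minutes is None:
        return 0.0            # absence of data is silently scored as zero
    return min(minutes / 300, 1.0)

for p in profiles:
    print(p["name"], reading_score(p))
# kindle_reader 1.0
# print_reader 0.0  <- possibly identical reading habits, invisible to the profile
```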

The data industry also makes use of the open records policies of government agencies to build their profiles. Some types of public records, such as arrest records, tend to reflect negatively on people of color and the poor. For example, there is an abundance of evidence that drug and weapons laws are routinely violated by people across demographic lines, but African American men are more likely to be arrested and convicted for violations (see this, and this for examples). As a result, evaluating people based on their criminal histories doesn’t necessarily tell the kind of nuanced story that leads to complete knowledge. These two examples (and there are many others) suggest that the construction of the data regime may not be quite as objective and reliable for judging people as we think. In fact, it appears to favor people of privilege – those who can afford to participate richly in the data economy (and choose to) and those for whom readily available derogatory data is less likely to be discovered.

In addition to understanding how the formation of user profiles might be flawed and unfair, I am also interested in why economic and social gatekeepers are so keen on using analytics to make decisions about people in the first place. And this brings me to the work of Albert Borgmann, who writes about the “hyperactivity” of modern society. Borgmann describes a hyperactive society as one that is constantly “mobilized” against the perceived threat of economic ruin. This mobilization has three key features: the suspension of civility, rule of a vanguard, and the subordination of civilians. It is in that third feature that I detect what I would label the “precarity” of the modern worker. Despite our cultural mythologies in the U.S. and elsewhere about how hard work and dedication inevitably lead to riches and success, and in spite of the tremendous wealth our society has created, we have seen in recent decades increasing social and economic inequality and the loss of stable work opportunities for ordinary people due to changes in a variety of structural economic conditions. There are many reasons for these changes, but one of the results is that those with the power to make important decisions about our lives seem to have considerably more power and incentive now to exploit what Borgmann refers to as the “disposability of the noncombatant work force.” In short, the incentives are high to reduce the work force as much as possible, and the moral precepts of capitalism do not offer much resistance to doing so. The resulting precarity of work in our society leads to increased competition among workers. In order to survive in this mobilized society, we are basically forced to compete for increasingly scarce resources rather than to join together to challenge the sources (real and imagined) of the scarcity.

While Borgmann tells us something about the societal forces that contribute to interpersonal competition for scarce opportunities, another author, James Carey, sheds light on how information systems have provided the means to commodify human beings. Writing in 1989 (but eerily prescient), Carey examined the dramatic social and economic changes wrought by the first electronic mass communication medium: the telegraph. The telegraph was the first technology capable of detaching information from physical objects and constraints, increasing the ability of traders of every stripe to abstract physical objects into symbols for exchange. With the telegraph, information about the world could travel much faster than any messenger or machine, breaking down prior barriers of time and space. This change in the temporal and physical reach of communication increased a business person’s pool of potential partners, making direct personal experience with each one impossible. As a result, new methods of evaluating strangers had to emerge. This can be linked to another of Carey’s observations about a separate byproduct of electronic communication: the commodification of goods. Carey argues that the emergence of the commodities futures markets was tied to the linking of buyers and sellers regionally and nationally by the telegraph. It became possible to trade goods, such as bushels of wheat, by lots aggregated from dozens or hundreds of sources rather than dealing directly with the individual producers. This practice required the development of standardized grading systems that could be applied to quantities of goods from diverse sources. These seemingly unrelated byproducts of communications technology–the emergence of impersonal business dealings requiring new methods of personal assessment and the invention of the commodities trade that massed and standardized diverse goods into quality categories–set the stage for the emerging commodification of people. In the modern setting, the ability to post a job ad or a dating profile potentially viewable by millions of people means that the “seller” must be able to rapidly sort through dozens, hundreds, or thousands of applicants. Judging candidates individually becomes impossible. Here we see the origins of the reputation industry and the commodification of people: Why not employ algorithms to sort them into quality categories as if they were bushels of grain?
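
To make the analogy concrete, here is a deliberately crude sketch of commodity-style grading applied to job applicants. The cutoffs and scores are invented; the point is only that whatever nuance produced each number is discarded, and only the grade circulates:

```python
# Invented example: applicants reduced to one score and binned into grades,
# much as telegraph-era commodity markets graded lots of wheat.
def grade(score: float) -> str:
    if score >= 0.8:
        return "Grade A"   # fast-tracked
    if score >= 0.5:
        return "Grade B"   # maybe interviewed
    return "Rejected"      # never seen by a human

applicant_scores = {"applicant_1": 0.91, "applicant_2": 0.62,
                    "applicant_3": 0.40}
for name, score in applicant_scores.items():
    print(name, grade(score))
```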

How this operates in practice is complex, but one thing is certain: the precarity of position and the perception that resources are scarce motivate people to sacrifice their own freedoms to gain an edge. People will give up their privacy and otherwise adjust their lives to please opportunity gatekeepers in order to get ahead. A telling example comes from the insurance market, where, in exchange for rate reductions, people install data devices in their cars that monitor and report their driving habits to insurers. Even more invasively, people are sharing the data collected by their health-tracking wearables for similar incentives. Economists call this practice “signaling.” While granting explicit consent to monitor specific activities is a very obvious type of signaling, there are other means of signaling that are a bit more complex, though not too complex for analytics algorithms to notice. Social media activity provides a rich assortment of signals about one’s life, including family composition, health events, employment satisfaction, and financial stability, among others. A few banks are confident enough about what they can learn from social media that they are basing credit decisions on it (see this and this). As the practice of monitoring social media use to assess one’s worthiness for loans and other opportunities becomes commonplace, it’s not hard to imagine how that may influence how people use social media and therefore how they socialize in general.

There are many reasons why this matters. For one, it represents a progressive rebalancing of information flows. Economists have long rued the “information asymmetry” in buyer-seller transactions, in which the seller uses her deeper knowledge of a good for sale to the potential disadvantage of the buyer. However, one man’s market inefficiency is another’s defense in a world of outsized power imbalances. If the seller is an applicant for a job at a large corporation, she is arguably arrayed against the titanic power of the modern firm. Being able to assume some measure of control over the hiring process could be the last semi-free act of her career. Meanwhile, the corporation’s goals are to avoid risk, by choosing the candidate least likely to harm the firm, and to increase efficiency, by streamlining the process of choosing from among a pool of candidates. Commodification of the candidate serves the corporation well, but may disadvantage the candidate if she cannot control the sources and biases of the information used to categorize her. As the reputation industry matures and more and more choices about who gets what opportunity are determined by abstracting people into symbols and treating them like graded commodities, the progressive disempowerment of people seeking opportunities may emerge as the crowning achievement of information technology: the commodification of precarious lives.

Our Corporate Overlords, Tech and Society

Blame the Election On Facebook (in part anyway)

Donald Trump won the U.S. presidential election last night. This is terrible news for the country and I am horrified by his victory. In particular, I’m having trouble processing his obviously widespread support given the many negative attributes he has displayed throughout his life and during the election. While we’re looking around for whom to blame for how things turned out (and there will be plenty of finger-pointing), I believe Facebook, Twitter, Google, etc. and the entire culture of information “personalization” should be counted among the blameworthy. True, there are a number of complex sociological factors in play in any election, and the combination of Trump’s celebrity appeal and populist messaging seems to have had a powerful effect on a lot of people. But here’s the thing: Many of us were unprepared for this result. We looked on with wonder as Trump won the primaries and became a popular candidate in the general election, and we were shocked by the outcome. Did you, like me, experience ongoing disbelief over Trump’s consistently competitive poll numbers even after allegations of sexual assault and the array of other deeply negative revelations about him? If so, it might be because you and I live in a media bubble built out of algorithmic profiling, an echo chamber designed to soothe us with an overwhelming number of messages that we agree with, or that come pretty dang close.

When you view your Facebook “newsfeed,” you’re not viewing every post of every person you are connected to on the network. Facebook’s learning algorithms access thousands of data points about your past behavior on Facebook and your interactions with other websites, merchants, and mobile services to identify your tastes and preferences. The resulting newsfeed you and I see contains only the posts that Facebook believes are the most “relevant” to each of us. On Facebook, we’re connected mainly with people we have identified as “friends,” and what we hear from them (and they from us) is winnowed down into a preference-focused feed that may not even include the contrary views of people we know. When we perform searches on Google, another collection of personalization algorithms massages our search results to conform to what the system believes each of us wants to see. Twitter users can choose whose posts to follow, enabling them to curate their information sources into narrowly defined subjects and communities. In my Twitter feed, I follow only academics and research institutions working in my field, plus a few journalists and news sources whose reporting appeals to me.
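
The filtering logic can be sketched in a few lines. This is a cartoon, not Facebook’s proprietary ranking model (which weighs thousands of signals), and the posts, topics, and click counts are invented; but it captures the basic move, which is that predicted engagement decides what survives into the feed:

```python
# Cartoon version of preference-based feed filtering. All data invented.
posts = [
    {"id": 1, "topic": "politics_left",  "author": "friend_a"},
    {"id": 2, "topic": "politics_right", "author": "friend_b"},
    {"id": 3, "topic": "cycling",        "author": "friend_c"},
]

# The user's past behavior: clicks per topic, a stand-in for the
# thousands of data points a real ranking model would consume.
past_clicks = {"politics_left": 40, "cycling": 25, "politics_right": 1}

def predicted_engagement(post) -> int:
    return past_clicks.get(post["topic"], 0)

# Keep only the top-scoring posts; the dissenting friend quietly vanishes.
feed = sorted(posts, key=predicted_engagement, reverse=True)[:2]
print([p["id"] for p in feed])   # -> [1, 3]
```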

Getting news this way is completely different from traditional journalism, where the goal, ideally, is to provide readers and viewers with a diverse range of ideas and multiple viewpoints. On commercial information services, the information we receive is narrowly restricted and designed to please each of us individually. (Much of this will not be news to anyone who has read Eli Pariser’s “The Filter Bubble.”) The goal of customizing our various search results, feeds, and follows is to keep us online, staying engaged with whichever service we’re using, clicking links, viewing ads, buying things. The more time we remain engaged this way, the more information about our preferences and inclinations can be harvested and translated into advertising dollars. The result of all this customization is that each of us experiences information flows very different from those of people who disagree with us – flows designed to keep each consumer engaged and to limit any feelings of discomfort. If you think Hillary Clinton is dishonest, that belief is likely reflected in your online media choices and personalizations, and you’re unlikely to see posts or articles that champion her as a person of integrity.

Democratic deliberation requires the airing of a plurality of ideas and room for meaningful debate on the merits. It is still true that people are more likely to find common ground and back down from extreme positions if given the chance to truly understand each other. It is also true that customized information sources are as likely as not to include easily disputed rumors and distortions that would become apparent if more viewpoints were available for consideration. This is not what is happening. Unfortunately for democratic deliberation, the discomforting effects of stories and worldviews that don’t conform to our biases are bad for the online business model. If the goal is to keep people where they are, engaged and consuming what you’re offering, it doesn’t make business sense to question or challenge them and their version of reality.

Our media elites used to do a decent job of providing us with a plurality of views. Traditional journalism is far from perfect; media biases and filters are not new. But there were (and still are) journalistic institutions dedicated to reporting more fact than rumor and to presenting multiple viewpoints on contentious questions. When that system was more functional than it is now, while I might not agree with the opposing viewpoint, I could at least come to understand and engage with it. Similarly, people on the other side of an issue might come to understand a piece of my truth. But traditional journalism is in decline. Fewer of us are relying on well-established media sources that can legitimately claim to be objective or balanced. One outgrowth of this is that some of the remaining media institutions have become clownish and shallow, more interested in salacious gossip and in pleasing political leaders in return for “access” than in soberly analyzing their views and statements.

As my old friend David Newhoff points out in his blog, viewing the world through the filter of commercial information platforms, including social media, makes it “very hard to distinguish between being vigilantly informed and hysterically manipulated.” As more of us come to get most of our political news from these platforms, whose shared mission is to harvest and monetize information from us rather than to inform us, we will continue to fail at gaining a thorough understanding of what comes blasting out of the fire hose. Still more problematic, we’ll also continue to fail to truly understand what the other side actually thinks, resorting instead to caricatures and hyperbole. We are going to see this filtering effect repeated again and again, with the result that we become weaker advocates for our causes and candidates. This is not making us smarter. It is making us naive and vulnerable.

While we’re busy pontificating (myself included) on social media about our views and sharing our carefully curated information tidbits with our online followers and friends, remember that this narrowly focused information sharing is a central problem for political discourse. Despite the potential for sharing our views with more people than most of us could have hoped to before these platforms existed, the intentional limiting of our feeds and searches by platform operators means that what we say, do, and seek in the information space is not likely to escape the comfort of our individual echo chambers. We’re just yelling at ourselves while generating revenue for others and carving out ever-tinier slices of an increasingly subjective reality.
