Everyone’s least favorite company Meta is in the news again, this time due to a wrongful death lawsuit placing the blame for the tragic suicide of a child on the seductive design and victimization risks of social media platforms. This heart-breaking event has given rise to a renewed chorus of policy makers and observers calling for regulation to restrain the exploitative and creep-enabling business practices of social media giants. Enter into this debate the always thoughtful Cory Doctorow, whose recent op-ed in The Guardian anticipates the imminent decline and fall of Meta, the parent company of Facebook, Instagram, and WhatsApp. It’s an excellent essay that offers multiple forms of evidence of doom, primarily a list of compelling reasons for the company’s own staff to head toward the exits.

I wish I believed his analysis (I want to believe), as I wish for the declining fortunes of most major tech companies. Somewhat in step with Doctorow, I am hostile to “the advertising model” – the extractive practice of profiting from the labor of unpaid users who create most social media platforms’ content, a relationship made all the more exploitative and society-corroding by the associated practice of sorting users into echo-chamber segments for advertisers. However, unlike Doctorow, I suspect that Meta is more like Boris Johnson than Samson (pride, the fall, yadda yadda); like our current PM, Meta may be fated to be ever-embattled yet shockingly resilient. The company is so incredibly wealthy and enormous that it may have decades of reinventions and rebrandings left before dissolving or at least falling from power. Key is the essential dependence of so many people across the world on its flagship platform Facebook, even as many of us loathe that dependence and search desperately for kinder alternatives. Sadly though, there are few viable options. As Doctorow concedes, it’s all about the switching costs. In this case, there is no existing or emerging alternative that ticks as many boxes as Facebook, including the crucial two: a critical mass of users and its range of ‘essential’ features. TikTok may be huge and popular, but its appeal is limited primarily to entertainment – only one of the boxes and far from the most important.

A key and sadly telling aspect is the reliance of artists, activists, charities, and businesses large and small on Facebook. The far less efficient ways that these groups used to reach their target audiences in the beforetimes have largely faded from memory and have been thoroughly replaced by Facebook and only Facebook. The communications strategy of every new brand or cause begins with the platform and then spreads out to other channels.

Those of us who loudly argue to “quit Facebook” always have to face not only our own switching costs in terms of personal network loss (my risks are small but not insignificant) but those of entire sectors, including those we cherish – or at least want to exist even if that existence is thoroughly propped up by a company that arguably represents much of what so many are working against. I have often said – on Facebook – that until the forces of liberation migrate away from Facebook they risk being snuffed out at their moment of achievement by a single keystroke. The rejection of predominant forms of oppression — what the vast majority of activist causes exist to promote — would pose an existential threat to Meta’s bottom line and the wealth and power of its leadership if it were even close to being realized.
What CEO in their right mind would allow emancipatory movements to flourish on their own private domain unless they were certain (and perhaps complicit in ensuring) that the cause will not soon be attained? The revolution will not be ‘liked.’ And yet, we remain dependent on the infrastructures of these corrupt and corrupting systems to recruit allies in the creative and activist struggles that envision a better collective future. Meta isn’t going anywhere unless and until a truly viable and liberating alternative exists. For now, we’re stuck with them.

My Journey – Tips for New Brits
When I moved from Seattle to London to take a job in late 2020, I thought I was mentally prepared to make a new life here. I knew it would be an adjustment to pick up and leave a place I had called home for more than three decades, where I had years of friendships, familiar places, and everyday habits. I also sensed that moving to a European city would be different from moving to, say, Portland. The timing of my move also happened to coincide with the UK officially leaving the European Union (Brexit), at the start of one of the longest and strictest pandemic lockdowns in the world, and only months after the tragic death of my spouse, who was meant to accompany me on this adventure. The lure of a good job opportunity after completing my PhD and the need for a change compelled me to make the move, but all of these factors were on my mind as I winnowed down my possessions to a fraction of what I had, fixed up the house I intended to rent out and eventually sell, and tried to figure out where I was going to live in an enormous city I had visited only a few times as a tourist and where I knew next to no one.

Cutting to the chase, it was all much harder than I thought it would be, and yet I have managed to stick it out and to embrace this new chapter in my life. I am a couple of months into my second year in London. I learned many things during the first year, and I continue to learn and grow from this experience. I have compiled a mix of practical tips and strategies that have worked for me and that may be of help to you.
I’ve broken this out into several chapters, which will be updated as time permits.
- Moving to faraway places
- Finding a place to live
- Getting your stuff to the UK (coming soon)
- Things to do BEFORE you move (coming soon)
- Communication and technology (coming soon)
- Eating and drinking (coming soon)
- The weather (coming soon)
- Errata (coming soon)
Moving to faraway places
Deciding to move to a new country is one thing, but making a life there is another. As you probably know, you can’t easily work in a foreign country without the right type of visa. Getting permission to work is not trivial (which is also true in the US). In my case, I came to the UK because I was hired as a postdoctoral fellow at a research institution, and they were willing to sponsor me for a work visa. The type of visa I have at present is a Tier 2 (General) visa, which is for skilled workers (similar to the H1-B visa in the United States). This may be one of the easier work visas to acquire, but it is by no means easy. The employer, job type, and salary all have to meet particular standards. And it is not inexpensive. Visas of all types have become increasingly expensive in the UK, in part, because the Conservative government is fairly hostile to immigrants. This is true for all immigrants but more so if you come from non-white and/or Muslim-majority countries. There are a range of seemingly punitive charges that are non-negotiable and subject to the political winds. My employer completed the sponsorship paperwork but left me to apply for the visa and jump through the hoops. I wound up hiring an immigration lawyer based in the US who handles UK visas. All in all, the process set me back a few grand. You might be in a better position with a large employer who will do the paperwork for you, but no matter how much support you may have, be prepared to manage bureaucracy and wait longer than you think you should.

It is possible of course for US citizens to come to the UK without a proper visa. Tourists are permitted to stay for 6 months, and that status is renewed by simply leaving the country and coming back in (a quick trip to France, for example). It might be possible to find work without a visa, but it is illegal and risky. And of course, it’s possible to work remotely for a US employer and get paid into a US bank, which I have seen some people do. It’s not a very stable or secure existence, but it’s an option. A significant obstacle there is moving money from the US to the UK. I (will) have another post about that, but simply assume that it is more complicated than it should be and, at times, quite expensive and aggravating.

A curious thing – once I was approved for the visa, I wasn’t eligible to work until I was physically in the country and able to collect my national ‘biometric’ ID. Only with that in hand can an employer in the UK legally pay you. Working remotely might be an option for you, but you will still have to make a trip to finalise getting your visa.
One of the things that still catches me up short is what it means not to be a citizen where I live. I took for granted my citizenship in the US and, thanks to the vast privileges of that citizenship, I didn’t have to worry about it much when I traveled to other countries. But here in the UK where I work and live, I am aware of my somewhat reduced status. I can’t vote and I have fewer rights in general. My ability to live here is contingent; my work is tied to my visa and my visa to my work. I can’t afford to put either at risk, whether by performing poorly at work or by getting caught committing crimes. Readers who have lived as immigrants, especially in the US, may well be rolling their eyes here, which is fair. My white skin and Americanness continue to be significant advantages, particularly in the UK. So, while I feel chastened by my immigration status, it is mainly a passing concern. I fit in pretty well. Of course there are cultural aspects I still have to master, but being from the US excuses me from some social expectations. My gaffes are mainly seen as comic rather than insulting. I’m unlikely to be alienated for failing to be more British. I can at least be thankful for that. And in general I find that, unlike, say, the French, the British seem on the whole friendly and accommodating. I can’t guarantee your experience with that, however.
Finding a place to live
Unless you already have a good friend or family member in the place you’re moving to who is willing to serve as your guide, agent, or first housemate, finding a place to live can be a huge challenge. I can only report on the challenges of moving to London. Some of this will be applicable to moving elsewhere in Europe or to other parts of the UK, but some of it is very London specific.
The first challenge I encountered was navigation. If, as a tourist visiting an unfamiliar city, you’ve ever used an online map to find a hotel or Airbnb that seemed close to where you wanted to be, only to find out that the distances were in reality enormous, you already know one of the key risks of choosing poorly. London in particular is a very large city and one that is not all that easy to get around. There is no ‘grid’ and, while there are some dense and concentrated bits, a lot of interesting things are scattered very far apart. A typical London life can be spread out across numerous, distant neighbourhoods (it’s a little like LA that way). Despite its incredible tube and rail services, it can take ages to get from one neighbourhood to another – and a lot of neighbourhoods are not well served by trains. There are buses everywhere, including those cute double-deckers, but London’s narrow streets and endless traffic mean they can be very, very slow. Same with taxis and hire cars; they’re plentiful but getting anywhere takes a while. I tend to rely on my bike, which makes most trips faster than other options (fortunately London is pretty flat), but the distances and the weather do not always make that a pleasant option.
I didn’t know London very well before I moved here. I didn’t really know the names or characters of neighbourhoods besides some of the famous ones (those, of course, tend to be the most expensive). I was lucky to have an American friend who had lived in London for a spell a while back, and she was able to at least help me narrow my options. But no one came forward to say “this neighbourhood is IT.” I think it’s because London is a rather strange city with a lot of variety, both close into the centre and farther afield. Depending on what makes you feel ‘at home’, you might find it miles away from areas that others crave to be near. Centuries of mycelium-like growth and subsequent consolidation of thousands of years of human settlement, plus massive WWII bombings and atrocious civic planning have produced a modern city that is a riot of architectures, sleekness, decay, densities, and ‘vibes’. I have yet to identify my ‘ideal’ neighbourhood based on any familiar template (e.g., Seattle, SF, Boston, NYC). There are of course very picturesque areas of Victorian gingerbread homes and costume drama-worthy high streets, but those are likely to be in places like West London where the housing values are most suitable to oligarchs and sheiks.
Which brings me to my second major consideration, which is cost. London is one of the most expensive cities in the world. Moving here from expensive US cities, like Seattle or New York, might prepare you for it, but it doesn’t solve the problem. Unless you’re emigrating to be a banker or have a large trust fund, it’s hard on your own to afford anything within easy commute of the things that make London most exciting. Fortunately, there are Facebook groups for people looking to share spaces, including the dwindling supply of warehouse spaces that were once the cornerstone of London’s scene (and are being quickly replaced by bland apartment blocks). Single occupancy places that are not stratospherically expensive do exist, but they tend to be found in distant neighbourhoods that might make it that much harder for a newcomer to feel like they’re part of the city.
I managed to solve the navigation and cost problem fairly well. I was lucky to have a friend and colleague who had moved to Oxford about a year before and was now interested in moving to London. We decided to team up as flatmates. Though neither of us knew London, my friend could at least take a train in on a weekend and look at flats I identified online, and that is how we found our place. First, the other friend who had lived in London gave me some very broad parameters. Second, I used a few of the online services (current favourite ‘rightmove.co.uk’) and looked at maps and available flats based on pictures and prices. I used my bad Airbnb experiences to look very carefully at commute times from where prospective places were to where I expected to go. Third, I ran leads by the formerly-London friend, who gave me ‘snoozy’ ratings to indicate how dull or interesting a particular neighbourhood might be. Recall that there is a LOT of variety here, and zones that should be interesting because they are close to something interesting might be really bland (sort of like Brooklyn) or lack much in the way of decent shops and other amenities.

Ultimately we found a ‘council flat’ in Rotherhithe, around the corner from an Overground stop, also not far from two tube stops, and very close to the Thames. Council flats are generally very utilitarian and can be quite ugly, but there are a TON of them in London–built hastily during and in the first few decades after WWII–and that is what housing looks like for many Londoners. Our flat is comfortable and well-positioned in the estate, with a view towards a pretty church and a very old pub. In fact, our immediate neighbourhood was rather unscathed by bombs and has a number of well-preserved cute bits. By and large though, it’s pretty snoozy compared with other parts of London. But it’s not hard to get places, the price is right, and I think we landed well.
My advice, if you don’t have anyone to help you choose: prioritise being near a tube line that can get you to somewhere central – like Soho or King’s Cross – in less than an hour. North of the river is both hipper and higher class than south of the river (though there is plenty of hipness to the south, e.g., Bermondsey, Peckham, Brixton). East London is gritty and cool, West London is posh, most of Central London is businessy or posh, and North London is family-oriented. These are wild generalisations for which numerous exceptions exist. If you have a job lined up, use a mapping app to figure out how long a bike, tube, or Overground trip to get there would take (a rough scripted version of this check is sketched below). Aim for 30-40 minutes tops. Use Google Street View to look at the streets near the flat to get a sense of what’s there and how far away a decent food shop might be. Ideally, connect with a local person to go and walk it for you. And most of all, be prepared to have chosen poorly on your first try and then take time, once you’re here, to explore the city. It’s all part of the adventure.
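If you want to compare several candidate flats at once, that commute check can be scripted rather than clicked through one flat at a time. Below is a minimal sketch, assuming Transport for London’s public Unified API (api.tfl.gov.uk), its journey planner endpoint, and a `journeys[*].duration` field reported in minutes, as described in TfL’s public documentation; the postcodes and the app key are placeholders, not recommendations.

```python
# Rough sketch: compare public-transport commute times from candidate flats
# to one destination using TfL's Unified API. The endpoint and response fields
# are assumptions based on TfL's public docs; postcodes and app key are placeholders.
import requests

APP_KEY = "your-tfl-app-key"   # hypothetical key, registered at api.tfl.gov.uk
DESTINATION = "N1C 4TB"        # e.g., a workplace near King's Cross

candidate_flats = {
    "Rotherhithe": "SE16 7PX",
    "Peckham": "SE15 4QL",
    "Walthamstow": "E17 7JN",
}

for name, postcode in candidate_flats.items():
    url = f"https://api.tfl.gov.uk/Journey/JourneyResults/{postcode}/to/{DESTINATION}"
    resp = requests.get(url, params={"app_key": APP_KEY})
    if resp.status_code != 200:
        # Ambiguous addresses can return a disambiguation response instead of journeys.
        print(f"{name}: could not resolve journey (HTTP {resp.status_code})")
        continue
    journeys = resp.json().get("journeys", [])
    if journeys:
        fastest = min(j["duration"] for j in journeys)  # duration is in minutes
        print(f"{name}: ~{fastest} min door to door")
    else:
        print(f"{name}: no journeys returned")
```

Anything consistently over the 30-40 minute mark is probably worth dropping from your shortlist.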
Hyperbole Unboled
(or: Yes, AI is Coming For Your Job)
The industrial revolution is not over. It just has a shiny new(ish) face as artificial intelligence. Despite what you may think AI is, and despite the disappearance of smokestacks and export surpluses from places that used to be the image of industrialization, the industrial project is still very much active. It continues to remake societies through technologies that reduce or eliminate human labor and to carry out the interests of consumption and international domination. And in so doing, it is presented to us shrouded in misleading narratives about innovation and progress while doing tremendous damage to the natural world.
There is a great deal of hyperbole about artificial intelligence. On one end of the hyperbolic spectrum are dystopic predictions of systems or machines that are so much smarter than humans that, upon becoming ‘self-aware,’ they will decide that humanity is dispensable and set about enslaving or destroying us. We can call this the ‘Terminator Doctrine.’ On the other side are grand hopes that AI will solve our knottiest problems, from racist policing to environmental catastrophe, by making important decisions using a pure rationality freed from human passions. We can call this the ‘Perfectionist Doctrine.’ Both of these doctrines are appealing for different reasons, but they are bullshit for the same reason, which is that they fail to account for what AI is actually for.
The appeal of hyperbolic AI is that it conjures a future of gleaming machines endowed with an intelligence that is ruthlessly emotionless and capable of seeing a bigger picture than pathetically limited humans. Such images tempt us through our insecurities and frustrations about the messiness of human relations and individual desires for perfection. A mix of science fiction and sober (if naïve) academics have laid the groundwork for these fantasies, and that is what they are: fantasies. Despite what researchers, boosters, and lazy journalists like to assert, AI is not here to save us. AI exists for one reason and one reason only: to reduce labor costs. Virtually every type of AI that attracts investment or a customer base automates some process that humans currently do with the promise of requiring fewer of them to do it. This is what makes AI just the latest wave of industrialization.
For every heartening story about AI that detects tumors or provides way-finding for visually-impaired people, there is the real mission of AI, which is to carry out the whim and will of corporations and militaries. If the adoption of AI did not promise incredible riches or world domination, there would be no AI to support accessibility or cure disease. There’s not enough money in that.
Sure, AI research and development is monstrously expensive and AI systems consume eye-watering amounts of electricity and other precious resources, which makes the up-front investment significant. But employing humans is far more expensive overall. The math of AI is not just the computation required to determine whether an image shows a pumpkin or a bicycle; it is the math of industrialists looking to bring down the most significant cost they face – employing humans, a cost made far costlier when those humans demand safe working conditions and a fair share of company profits. And, as with the manufacturing dimension of industry, AI is a planet-killer. The amount of electricity and raw materials required by sophisticated AI systems is staggering, and the environmental costs of extracting, processing, and later disposing of the toxic ingredients in digital equipment fall mainly on poorer nations.
The appeal of AI – its selling point – is that it performs ‘better’ than humans at various tasks, like handing out prison sentences that are not explicitly racist. Unfortunately, time and again we find that technology doesn’t eliminate our social problems but instead repackages them. The only algorithm that could be free of racism or other fundamental flaws would have to be so exquisitely designed that the cost and effort to produce and maintain it would probably defeat the efficiencies it is meant to create. Arguably we’d be better off continuing our efforts to attend to the core social problems we face as human beings than trying to replace ourselves with something ‘better.’
One way to understand what AI is for is to consider who it is for. The simple answer is that AI is for the people and entities who invest the most money into its development and who stand to earn the most from its acceptance. Despite utopian visions of an apolitical AI that frees us from drudgery and enlightens us with wisdom, virtually all AI arrives in our lives with an agenda; one set by its producers to serve either themselves or a target customer who is probably not you. Consider Siri, that helpful bot in your phone or laptop. Siri does what a giant corporation wants it to do. Quite often you’ll notice that it carries out ‘your’ wishes by employing products and services made by the same company. Curious that. If Siri happens to help you in the process, well, that is only because there is potential profit for the company in doing so. It is not a strings-free gift to you; it is a marketing mechanism. Much more directly profitable AI includes systems for selecting job candidates, setting insurance rates, or producing robot soldiers. The target customers for these systems have enormous budgets, making them extremely profitable to serve. The most exciting and innovative AI is not designed for you and me. We are paltry customers for AI by comparison.
If you are uncertain about this argument, I suggest you conduct a simple exercise. Every time you hear about an AI, be it a military drone, a grammar checker, or a companion for elderly people, ask yourself how many human hours would be required to perform the same task. Couple that with the question about how much money could possibly be made from AI that only exists to relieve some form of human suffering, however defined. Once you consider the economics of AI, the answers become clear. And those answers will most likely reveal what AI is for: it’s here to take your job.
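If it helps, here is the exercise as back-of-envelope arithmetic. This is only a sketch with entirely hypothetical numbers (the wages, headcounts, development and compute costs are invented for illustration); the point is to show the comparison an executive weighing automation is actually making.

```python
# Back-of-envelope sketch of the exercise above, using made-up numbers.
def annual_labor_cost(workers, hourly_wage, hours_per_year=2000, overhead=1.3):
    """Rough fully loaded cost of employing people (overhead covers benefits, space, etc.)."""
    return workers * hourly_wage * hours_per_year * overhead

def annual_ai_cost(upfront_dev, amortization_years, compute_per_year, remaining_labor):
    """Amortized development cost, plus operating costs, plus the humans still needed."""
    return upfront_dev / amortization_years + compute_per_year + remaining_labor

# Hypothetical example: 50 claims processors vs. an automated triage system
# that still needs 10 people to review edge cases.
humans_only = annual_labor_cost(workers=50, hourly_wage=25)
with_ai = annual_ai_cost(
    upfront_dev=5_000_000,
    amortization_years=5,
    compute_per_year=400_000,
    remaining_labor=annual_labor_cost(workers=10, hourly_wage=25),
)
print(f"human-only: ${humans_only:,.0f}/yr   automated: ${with_ai:,.0f}/yr")
# With these invented figures, the 'savings' are the 40 displaced jobs,
# which is exactly the point of the exercise.
```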

Let Someone(thing) Else Decide
When times are hard and all around us is evidence of human frailty, many of us dream of reorganizing the world into an ideal state using logic and purely rational thinking. The past year has been a hard one for me. I suffered a devastating personal loss, I limped to the conclusion of my PhD, and I moved thousands of miles away to chase a job while leaving behind the communities that have long sustained me. In the background have been all the shared and personal hardships of a global pandemic along with a climate disaster and deteriorating geopolitical conditions. Add the acceleration of real and imagined conspiracies and deceits and some truly scary and disappointing world leaders and stir. One could be forgiven for losing hope that we–by which I mean any substantial collective body–are capable of making good decisions for ourselves and the world.
Recently, I was talking about artificial intelligence with some truly smart thoughtful people, including my friend and colleague David Newhoff, who posed this provocation: “I have always been skeptical, if not cynical, about ceding human agency to technology, and this despite being rather cynical about human beings. Lately, as we are forced to accept just how fragile liberal democracy is—because it is clearly too difficult for many humans to preserve—I admit that I have lately wondered whether risking governance by AI would not be preferable…“
It’s important to note that David is no tech-utopian–quite the opposite actually–and this provocation is meant to express skepticism about human nature rather than offer a sincere proposal. But I think we can all relate to sometimes wishing there was something better than us to take over the job of taking care of things. Recent evidence suggests that despite centuries of post-enlightenment faith in human reason, we have not achieved the flourishing of humanity that was promised based purely on our best intentions and essential “good will.” Lately, we haven’t even lived up to the very imperfect ideals of western democracy which, having failed to achieve race or gender justice, not to mention lasting economic justice in the centuries since it took hold, seems like it could be heading us towards oligarchic despotism through the ballot box. In light of these anxieties, isn’t it tempting to believe that a utility-maximizing AI would make better choices about desperate migrants, the climate disaster, and the distribution of food, medicine, and education than short-sighted and selfish humans? Couldn’t such systems nudge or direct us away from a colonial, white-supremacist, billionaire-centric future to one of liberation and shared prosperity?
You already know the answer I am heading for here, which is ‘no.’ And the reason is deceptively simple: our technologies–all of them–are not better than us. They are us. To quote Tarleton Gillespie, “technology is culture rendered durable.” In other words, everything that we experience as culture, including all that is both right and wrong with society, is present and reproduced in our technology, and this is true no matter how “smart” or “intelligent” that technology appears to be. In particular, we often get consumed by visions of artificial intelligence technologies as animate, quasi-humanoids that execute decisions using pure logic, with complete information, and to the highest standard of care. This description is even true of the dystopic visions, which typically retain the idea that AI beings are so much smarter and more capable than humans even as they plot our destruction. While it may someday be the case that such beings walk among us as our guides or rulers (either beneficent or malign), we are so very far from that. So-called “super-intelligent” technology does not currently exist and may never exist as it has been so richly imagined in popular narratives. Even if such beings do eventually appear, we have far more mundane technologies influencing our futures right now. Among these are automated decision systems like recidivism-risk prediction systems and fraud detectors used in government, along with recommender and social media algorithms that shape and channel human understanding and engagement with the world. These are built by humans, dependent on data and designs constructed by humans, and exist to carry out human-authored objectives.
Sure, there is tempting promise in intelligent technologies. AI-assisted medical diagnostics, disaster management, and learning technologies may prove themselves and contribute to societal uplift and reduced suffering. However, where a well-designed and thoughtfully implemented system might overcome some specific human bias or failure, there is tremendous risk (and numerous examples) of creating or exacerbating other harms. The main source of failure in technology systems is not simply that they are ‘broken.’ Rather, it is that each one is built upon something–either human or machine–that came before it, including prior technologies and the choices that went into making those. As Susan Leigh Star offers, new technologies are “built on an installed base.” Nothing springs from a vacuum from which some ideal of perfection might be reached. This is as true of our technologies as it is of our political and social systems. Every technology is an embodiment of human history and culture.
As with human politics, the work of achieving anything good (or at least ‘better’) with technology is a mix of reflection and accountability. By reflecting we come to understand what systems actually do in the world and then apply pressure to push them in positive directions. A key difference between human decisions and automated ones is not their internal mechanisms but their accessibility. Humans can make terrible choices, but we have far more experience addressing that by holding humans accountable and demanding explanations. We have far less experience doing this with machine learning algorithms. Technological development and assessment can only be done by a tiny class of technological elites (coders), and even they struggle to decipher the reasoning behind the decisions made by the most sophisticated systems. Consequently, an AI system that appears to be “better” than a human is probably also one that most of us cannot understand.

Our socio-economic-political systems are very imperfect and can be opaque, but they are also always evolving. These human systems are co-produced works that many more of us can take part in. And they are the key ingredients of our technologies, from the choices about what to build to who is granted the privilege of using them and for what purpose. Ideally (and often) social evolutions are driven by what some plurality of people actually want, or something like it. While this is somewhat true of technology, the close relationship between technological innovation and capital means that most advanced technologies are built and operated by a very tiny subset of society and done so within a market-driven, dominating worldview. Technology has politics, as the saying goes. When we stop investing in improving the human systems that are the sources of misery and focus instead on improving technology in the hopes that it will do better than us, we miss the point about what technology is. If it is us, then the work of social justice continues to be the work of improving ourselves. Only through that work can we hope to create technologies that further it.
The Model is Broken
The major social media platforms, especially YouTube and Facebook, are on track to help anti-vaxxers prevent the eradication of COVID-19 once a vaccine becomes available. A news story this week in The Guardian reports on a survey in which one in six Britons say they are likely to resist a vaccine for the novel coronavirus. That’s more than 16% of the UK population, which is a lot of people. I suspect we can count on similar figures in the United States, which would mean there are a staggering number of people likely to refuse vaccination during an international pandemic. This is not simply an inevitable product of a world of conflicting ideas of goodness and health. This is the result of social media enabling the amplification of irresponsible content to generate ad dollars. Anyone who has been paying attention will have noted the rising information power of social media influencers who traffic in potentially deadly conspiracy theories on a range of topics from pedophile rings to chemtrails. On the vaccine front, it was only a handful of months ago in 2019 (remember 2019?) that we witnessed the disastrous irresponsibility of social media platforms contributing to a deadly measles epidemic in Samoa. While there is plenty of blame for the individuals hawking healthcare conspiracies for attention or book sales, their power to encourage awful decisions would be marginal at best without dramatic amplification by platforms like YouTube and Facebook. Monetizing attention without bearing responsibility for the consequences is the business model of the internet, and the model is broken. It is the legacy of a short list of consequential policy and business decisions over the last 25 years that were not inevitable and whose effects are proving to be disastrous for the fabric of society and the well-being of everyone.
In case you aren’t up on this history and policy landscape, permit me some space here to break it down. The original version of the internet was a project of the Department of Defense, which wanted a resilient and decentralized means of communication that would function to some degree even if large parts of the country were a smoking ruin. However, once the original concept was transformed into a consumer network and a successful business model emerged, the decentralized nature of the internet faded quickly. Between the world-spanning popularity of Facebook and YouTube and the gigantic cloud computing infrastructure provided by Amazon, today’s internet is hardly decentralized. Control over information flows resides in fewer private hands than ever before.

Much of the wealth creation and consolidation in online businesses is the result of the Telecommunications Act of 1996, an enormous piece of legislation in the United States that was partly conceived to allow the consolidation of old media. As part of the Telecommunications Act, Congress also passed the Communications Decency Act (CDA), a family-friendly law intended to promote the content filtering of pornography. The CDA includes a liability shield provision in the form of this sentence: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230(c)(1)). By differentiating providers of an “interactive computer service” from publishers (e.g., newspapers, publishing houses, television stations), services that host media and other material provided by users are generally not held responsible for what that content actually is. This lets user-driven platforms like YouTube, Facebook, 4Chan, etc. off the hook because they are not “publishers,” just conduits for user-generated content. So, things that are very unlikely to appear in a newspaper–bomb-making instructions, obscenity, threats, false claims about vaccines–can appear online.

Another interesting development along the way was the model – the so-called advertising model. When the internet was still new, it was not free. Netscape, the dominant web browser of the mid-1990s, was boxed software you were expected to pay for. America Online and CompuServe were the era’s popular information portals and messaging services and required monthly fees. Even email accounts cost money. A “war” between Netscape and Microsoft in the late 1990s changed everything. Microsoft promoted their struggling web browser, Internet Explorer, by making it free. This abrupt change dramatically shifted the industry’s business model and wiped Netscape out of existence (the Mozilla Foundation is its surviving legacy). Other startups took note, and paid services rapidly declined in favor of free ones – email, video sharing, resume hosting, etc. Facebook, Twitter, and the rest emerged in this now-familiar environment in which platforms and apps are available for free and supported by advertising dollars. With Section 230 on their side, the platforms can host the most inflammatory content their users upload and then sell ads right next to it. Another refinement to the model was personalization. The surveillance features of the internet are built right in, making it incredibly easy to build profiles of end users to target them with ads. But with personalization – in which no two users have the exact same experience online – targeted advertising not only profiles people but constructs audiences.
By placing users with others who share similar beliefs and interests, those beliefs and interests are reinforced, creating more engagement with pleasing content, which provides a rich target for advertisers. To paraphrase Safiya Noble’s critique of Google Search, social media is not whatever you think it is; it is an advertising system. Advertising is the reason every platform exists. Advertising guides every decision and ultimately influences what you read, view, and click. This is why it appears that the people making the decisions at Facebook and YouTube don’t care if a particular video promotes volunteering in your community or bombing it. If it sells ads and people don’t object strongly enough to threaten the revenue stream, then it’s welcome on their sites. While many sites have instituted content moderation to cull the very worst, they only do so to the limit of consumer revolt. (If Facebook and YouTube thought they could get away with hosting puppy-torture videos without an advertiser revolt, they would.)
Returning to the anti-vaxxer story, we humans are obviously flawed and readily receptive to “red pill” conspiracies, which long predate the internet. The world is confusing, chaotic, and sometimes evil, and conspiracies offer reassuring answers to hard questions. Many authors and outlets hawking “hidden truths” are effective because they employ the trick of wrapping a lie inside an apparent truth. Do vaccines cause autism? The answer is definitively no–or at least highly unlikely. But it’s easy to wrap a scary untruth inside a package of compelling evidence. For example, it is arguably true that Big Pharma has indeed put public health at risk for profit. This doesn’t make vaccines bad for you, but the real ethical failings of the institutions that govern our lives make demands on our ability to tell the difference between legitimate and illegitimate stories about them. The Trump presidency has demonstrated this. Take a contentious issue with a legitimate basis, such as the idea that Washington DC elites have not demonstrated sincere concern for the livelihoods of vast swaths of the population for decades, and then remix that with lies that shift the blame onto “job-taker” scapegoats and you can sell the public on moral failures like the deportation of asylum seekers and refugees.

The complexity of contentious issues is one reason why responsible publishers are valuable and badly needed. Holding information outlets responsible ensures that important stories are vetted to some standard before being released to the world instead of just unleashing a firehose from which the loudest, most inflammatory voices dominate. Making outlets theoretically responsible for their content doesn’t guarantee truthfulness or objectivity. The New York Times and the BBC have much to answer for in their histories of coverage, but whatever appears in those venues has to be approved by somebody who is willing to accept consequences. That means something even if the results aren’t always satisfying. Furthermore, everyone sees the same New York Times and the same BBC, which means we can all discuss a somewhat singular story and use it as a basis for rational discourse – including a discourse that doubts the official line.

Meanwhile, in the responsibility-free zone of Facebook and YouTube, a zone that reaches more people than any other media source, LGBTQ+ indoctrination conspiracies, deep state fairy tales, and the dire warnings of anti-vaxxers flow into the world, placing marginalized people at risk and doing tremendous damage to social cohesion. The sheer volume of irresponsible content creates an impossible challenge for people trying to make sense of things on their own. Worse still, personalization of online experience means that a significant number of controversial and false stories are seen only by those who are most likely to be susceptible to them, further dragging people down into conspiracy caves and shielding them from views that might broaden perspective. This affects people from any political or ideological perspective. While there is some truth to the notion that every idea deserves to be expressed somewhere, I do not endorse the notion that all ideas deserve equal time. No single one of us has the entire truth, but we can’t assemble truths into a rational whole by swimming in an ocean of lies. Personalization demonstrates the utter hypocrisy of claims that the solution to “bad” speech is more speech. There is far too much speech dumped on people for them to make rational choices with any regularity.
And with personalization, most people are not given a real choice in any case.
Bringing up the topic of social media curation and responsibility naturally leads to questions about how to solve the current mess. I have a few ideas. First, we have to move away from believing that the status quo has to be this way. The world of information existed before the big platforms, and there will likely be a different information order 25 years from now. Also, we have to move away from the belief in free-speech absolutism. Every freedom has limits and with each freedom comes responsibility. Simply banging the drum of “liberty” without a plan does not produce a workable society. Similarly, attempts at solutions that get bogged down in “well, how will Facebook do it?” completely miss the point. We need to aim higher than simply fixing a few things, securing a couple of promises, and then just accepting more of the same. You and I should not concern ourselves with whether Facebook can manage it.

Next, I have to say that CDA §230 has outlived its usefulness. It was not intended to produce giant and totalizing communications platforms that are accountable to no one (except advertisers, and we cannot count on them as arbiters of justice). This is tricky because it is certainly likely that §230 has contributed to formerly marginalized voices being heard. The Black Lives Matter movement was never going to get much sympathetic coverage in the Washington Post or on the Nightly News. The hands-off approach of major platforms literally gave BLM a platform to push racial justice into the mainstream, and the world is better off for it. There is a risk that in the absence of §230, we might lose some opportunities to hear from marginalized voices in the future. But that’s a big ‘might’ and a big risk to take hoping for the best. Meanwhile, the corrosive effects of oppressive and deceptive information are tangible. More people have been killed by right-wing extremists in the United States in the last several years than by jihadists or left-leaning extremists. Observers correlate the recent rise in hate group activity with their unhindered presence on social media. I believe we can and must act to be better stewards of speech without submitting to slippery slope arguments about how all free expression will be lost. The key is to treat Facebook and YouTube (and others) as publishers. Make them, and those that follow, responsible for what they host and profit from. Really, this is not a stretch. By personalizing the user experience and filtering out the most objectionable content, the big platforms are already acting like publishers. We could carve out some limited §230 protections for platforms of a more limited scope while holding the most profitable accountable as a cost of doing business.

Inevitably, folks will ask the functional question: How could the biggest platforms possibly take responsibility for all of the content on their enormous sites? The short answer is: Not our problem! Managing a gigantic platform is the responsibility of those who profit from it. With great scale comes great responsibility. I suggest that YouTube, for example, could just slow the hell down and employ an enormous citizen advisory board to curate the site. Sure, it might deny us the privilege of seeing every single video of people singing about international Jewish banking conspiracies and lessen the amount of content they host. However, even if they cut their content down to a tenth of what it is now, there would still be a staggering amount of it. Next, it’s time to apply new limitations on advertising.
We already regulate advertising practices on old media, and it’s time to do something about new media. Targeted advertising is, after all, the model, and it drives much of what is broken. The “innovation” of micro-sliced affinity audiences and advertiser self-service, while quite profitable, leads to a range of routine abuses, like ad categories for “Jew haters” and others that enable housing discrimination. What if we did what has worked for generations and let everyone see the same damn ad? Money would still be made. Speech would still happen. We don’t owe them their ad dollars as much as they owe us a society.
These are modest proposals, and readers will likely find flaws. The point is that something must change. The platforms will not willingly walk away from the money currently on the table even if it destroys the very fabric of society, even if it prevents the resolution of the worst pandemic in modern history. So long as money keeps changing hands and funneling into Silicon Valley coffers, the broken model won’t change. It’s up to us to demand something better.
The Dystopian Path to Bicycle Safety
As an information ethicist who is generally skeptical about digital products and services whose business model is surveillance, I was struck with some serious internal conflict by a recent story about ‘Safe Lanes,’ an app for reporting cars and trucks parked in bike lanes. You see, in addition to being an academic, I am a regular bike commuter. Like other urban bicyclists in the United States, I experience a mix of exhilaration and fear on my commute, where inattentive or obnoxious drivers and inadequate bike lanes can make biking feel very unsafe. I used to think risk-taking was the price of urban biking and took some pleasure in dodging cars and powering through my commute. But having racked up decades of scrapes and scares, I find my sense of adventure waning. While Seattle drivers are relatively decent about giving way to bikes (it’s a pretty ‘sporty’ city after all), collisions–sometimes fatal–between cars and bikes are alarmingly routine. The city’s department of transportation has added a lot of bike lanes since I’ve lived here, but enforcement of the right of way of bikers is nearly non-existent in my experience. By way of example, an intersection by a police precinct in Seattle’s Capitol Hill neighborhood has a marked space reserved for bikes waiting for the light. That space is frequently occupied by drivers–in police cars. As I know from having had the privilege to bike in places like The Netherlands and Denmark, rigorously enforced bike lanes are a game changer for getting more people (all ages and genders) out of cars and onto bikes. The dramatic increase in bike lanes in US cities in the last few decades has been an incredible boon, but lax enforcement leaves many folks wary of using them.
Enter Safe Lanes, an app that uses smart phone hardware to capture the image and location of vehicles blocking bike lanes. The app uses technology similar to police license plate readers to identify the plate number of the car and sends this information to traffic enforcement. While options exist for bicyclists to report blocking cars through other means, like calling the police on a non-emergency number, most of us cannot be bothered to do this. It’s time-consuming, and even if one calls, there is a sense that nothing is likely to happen or not soon enough. While I am the kind of neighbor who reports litter and graffiti, calling in a parking nuisance is pretty unsatisfying. The idea of Safe Lanes is to make reporting bike lane blockers fast and easy, thereby increasing the likelihood that a report will actually lead to a ticket and maybe even change driver behavior. Even for a surveillance-averse person like me, the allure of punishing drivers who make me feel unsafe is very powerful. I was tempted.
But Safe Lanes doesn’t just stop at taking a user’s report and forwarding it to the authorities. Once a car or truck has been reported through the app, the image remains visible in the app for all Safe Lanes users to see, along with statistics about how many times a vehicle has been reported. In other words, it’s not just a reporting app but a shaming app. The persistence and display of the user-generated reports superimposed on a city map carries with it the implication that, in addition to being diligent citizens who report wrongdoing, we are also expected to join a community of fellow reporters and participate in communal rage, staring with righteous indignation at the offending Priuses and UPS trucks reported by others. And at the end of the day, Safe Lanes joins an alarming number of apps and systems that make city streets into forums of monitoring and control. As discussed in a story about the app that appeared in CityLab, “The illusion of privacy in the public sphere may have always been an illusion, but with many more eyes and lenses trained on the streets, the age-old practice of ‘being seen’ can evolve quickly into being shared, and being stored. And perhaps being unfairly tried and convicted in the court of public opinion.”
Add to this that these apps are most likely to promote the values and worldview of a particular class of urban dweller: the much maligned “tech bros” amassing in places like Seattle, San Francisco, and other popular US cities. While the goal here, bicycle safety, is not particularly controversial or necessarily classist, it is troubling that we so easily trust some techie with programming skills with the authority to shape public behavior by releasing an app. There are myriad assumptions built into this app and its capabilities. One is that bicyclists have rights – I subscribe to that one. But let’s consider who the most likely targets of the app are: low wage ride share and delivery drivers. They are, after all, the folks whose livelihoods depend on hurrying people and passengers around on crowded city streets. I don’t want to excuse anyone making it unsafe to bike in the city; I do have actual skin in the game here, but in an age of rapidly gentrifying cities, there is something repugnant about affluent city dwellers naming and shaming people with much less social and economic power using information technology. All the ingredients are here for something that appears, at first, to be liberating for an arguably vulnerable group – bicyclists trying not to die. But it also joins an increasingly oppressive assemblage of information systems marketed to “concerned citizens” for the purpose of monitoring, shaming, and controlling others. Ugh! Safe Lanes, you had me at “improve bicycle safety,” but lost me at “participate in a surveillance dystopia in which no minor infraction goes unnoticed or unpunished.”
Perhaps this is just what happens when we impoverish and abandon public institutions in favor of entrepreneurial techno-solutionism. What if, rather than hiding in our phones and relying on commercial products to mediate our participation in public life, we actually spoke to each other and our elected officials – in actual public forums – where we could advocate for better bike lane enforcement or demand money for driver education programs? What if, rather than relying on apps to shame people into compliance with the behavioral paradigms imagined by technologists who happen to like biking, we worked on being less suspicious and more patient with each other while also developing safe bicycle networks? I know, I know. This is asking a lot of humans – particularly American humans. But there has to be a better way of improving urban life than weaponizing information technology…doesn’t there?
Consuming Surveillance
Our consumption habits are the root cause of pervasive surveillance, the erosion of democracy, and the threat of environmental disaster. They are the main culprit driving the digital invasion that seeks to gather data about every aspect of our lives, from our browsing habits to our heartbeats. How is this so? Let me break it down. First, consider that the biggest source of surveillance for most (not all) people is advertising. All that logging, tracking, and predicting going on through the use of seemingly every device and at every transaction is designed to hone micro-targeted advertising and other forms of precision marketing. Every search, every website visit, every app on your phone, that wearable device measuring your steps and your sleep, the chatty digital assistant that plays your favorite songs and dims your lights, social media (of course), your “cloud” – all of these are sites of persistent and increasing collection of what Harvard professor Shoshana Zuboff calls our “behavioral surplus.” As we act in the world, the evidence of those actions is gathered up. That is the “surplus,” and it contributes to what Zuboff calls a “hidden text” that describes the movements of our lives like a shadow. What is contained in that text is invisible to you and me, but luminous and valuable for others.
The reason for harvesting our behavioral surplus is to sell things – to us and to people we resemble. And not just the things we need, plus a few things we want, but an ever-increasing amount of these things. As it happens, our personal rates of consumption have been steadily increasing for at least a century. So much so that by the end of the twentieth century, Americans were consuming more than 17 times what they did in 1900, leading us to consume, at present estimates, between one-fifth and one-third of the world’s resources despite having only about one-twentieth of the world’s people.
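A quick per-capita check of those figures makes the imbalance concrete:

$$\frac{1/5}{1/20} = 4 \qquad\qquad \frac{1/3}{1/20} \approx 6.7$$

In other words, taking the estimates above at face value, the average American consumes roughly four to seven times the global per-capita share of resources.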
The United States is not alone. As many formerly low-consumption countries have become wealthier, they have dramatically increased their consumption as well. This follows a certain logic: an increase in aggregate wealth provides the incentive for businesses to provide goods and services in what free-market fans would term a “virtuous cycle,” where profitable production creates the financial capital to pay higher wages, which leads to more disposable income and demand for more goods and services. This apparent lifting of all boats might be fine if there were infinite resources to use as raw materials and as fuel for transportation and production, but there are not. Industrial growth has decimated the planet, depleting its resources, polluting the water and air, and generally leading us toward ecological disaster. And yet, we carry on as if this were not the case. The boats are indeed lifting, but only because of the runoff from melting glaciers.
But how does this lead to surveillance? The phenomenal growth in consumption has been accompanied by a similarly phenomenal growth in consumer choice. In virtually every product category, there can be dozens or even thousands of options. Each producer or service provider really wants your dollar, and they have to fight for it. Advertising is a multi-billion-dollar industry designed to help close this deal, and many developments in information technology have been brought to bear as the tools of this particular trade. The development of increasingly invasive and secretive surveillance techniques to capture the minutiae of our online and connected lives has been especially useful. As Zuboff tells it, Google pioneered the exploitation of behavioral surplus by employing sophisticated techniques, including artificial intelligence. At first this was used to analyze search queries. Search, as we now well know, turned out to be a remarkable wellspring of information about what people think, do, and plan to do. Google’s ingenuity led the company to figure out how to go beyond merely observing our habits to making very good predictions about them. They appear to be working on plans to go further and simply command our choices through manipulating what we see and when we see it, and they are not alone. Just as Google and its parent company Alphabet have developed a wide range of quality online tools and services – from maps to translators to word processing and data storage systems – to keep tabs on us at all times, Facebook has similarly figured out how to keep us glued to our screens as a (last) means of maintaining social ties while also harvesting, and then trafficking in, our behavioral surplus.
It is no coincidence that the rise in consumer surveillance has been accompanied by new and troubling forms of state surveillance. It makes sense really. Technology companies, having discovered how to write the hidden text of our lives, found a willing customer for that text in our increasingly paranoid governments. Most of the surveillance technology used by local law enforcement is bought off-the-shelf from commercial firms large and small, including Amazon. The company, built to service our every consumptive whim and need, has expanded well beyond its retail position to sell all manner of surveillance equipment. In particular, the company is actively trying to corner the market in selling facial recognition systems to federal agencies, who are enthusiastic buyers. Meanwhile, recordings picked up by Alexa have found their way into criminal trials, demonstrating the effective demolition of the public/private divide through always-on, connected home devices, especially those standing by to take your retail orders. The phones we carry, the smart appliances in our homes, the vehicles we ride in, all these and more offer up the details of our lives to any buyer, public or private. Quaint concepts like search warrants and the expectation of privacy just wither away while we buy, buy, buy. Meanwhile, federal law enforcement has been contracting with Google, Microsoft, and other tech giants for technical services for decades. All of these companies are banking on both the commercial and governmental business opportunities made possible by the stuff they specialize in: machine learning, facial recognition, data analytics, and so on. The techniques they designed to target advertising are easily converted to techniques for even darker forms of targeting. Predicting what you’ll buy is not all that different from predicting what else you’ll do, and oh-so-many people want to know.
None of this is your fault of course. There are economic and social forces well beyond our individual control that have created the retail-surveillance state we now find ourselves in. What is true is that you, me, and everyone we know were easy marks. We want things. We crave convenience, efficiency, services, systems, tools – anything to impose order on a demanding world. Our lives are busy and our social connections have gone digital. Our health is concerning, so we track our fitness. Stores are a hassle and they’re disappearing anyway, so we shop online. There are too many movies to choose from, so we let Netflix choose. And of course, only the bravest or most stubborn among us can live without a smartphone. As we appear to gain a little space, a little human contact, a little leisure, we also lose – our privacy, our agency, and our planet. We live our life stories tucked into the bosom of our technological affordances and retail pleasures. Yet, the story of our lives only seems to be written by ourselves. The rest of the story is a second text, a hidden text.

Feeling the Feels: Artificial Intelligence and the Question of Empathy
When I was a child, our family went through a series of shitty televisions. This was before most people had internet connections or personal computers, when a TV was a suburban family’s most immediate link to the outside world. So, TVs mattered. We didn’t have much money, and a family friend, who happened to be an electronics tinkerer and a bit of a hoarder, would periodically trash-pick old TVs and give them to us. They were big boxy things with glowing tubes inside. The hand-me-down TVs worked for a while and then they didn’t. Towards the end, there was typically a period in which some amount of banging on the side or top of the TV would get it to work or improve the picture for a little while, until we had to do it again. Often the banging would be an act of frustration or anger. It felt like the TV was doing something intentionally, holding out on us, magnifying our helplessness and deprivation. I sometimes cried out while hitting the thing. It was cathartic and I was miserable. And here’s the thing…some of those TVs gave the impression that they felt something. The picture would seem to twist or flash in response to the banging. Sometimes a good pounding on a nearly dead TV would produce satisfying sparks or smoke. It was as though the television felt pain. Not simply physical pain, but emotional hurt. Like we could make it share the sadness and frustration we were feeling in the same way an abuser or a bully inflicts pain in order to “share” his wretchedness with others.
The televisions didn’t feel pain of course and, as a fairly well-adjusted adult, I no longer beat up on defenseless technology…often. It wouldn’t matter much if I did though (except for the replacement costs and some troubling implications about my mental health) because machines do not feel. Increasingly, we have technologies that see, hear, smell, and can even sense touch and pressure, but they do not now, nor will they ever, have emotional feeling. And because they can’t experience emotion, specifically emotional pain, they are incapable of empathy. A well-crafted machine can certainly imitate emotion. Siri or Alexa can choose a voice-modulating algorithm to communicate concern, mockery, or hurt, and Jibo, the adorable table-top robot, has endearing face-like expressions and can coo like a precocious child, but it is all just algorithmic fakery. The machine feels nothing. It simply chooses a response type from a library of code and executes it without being itself affected in any way.
Why does it matter? There are certainly plenty of people who think it doesn’t, or at least not very much. Luciano Floridi, a philosophy professor at Oxford, has suggested that we overemphasize the specialness of human agency and reason. He revisits the famous “Turing Test,” which basically holds that if a computer can perform well enough to convince you that it is human, then it can no longer be thought of as merely a machine. Floridi wants us to realize that as machines—artificial intelligence—become capable of doing an increasing number of human-like things, the set of skills and practices that we believe can only be entrusted to humans will become progressively smaller and may eventually vanish. The Science and Technology Studies scholar Bruno Latour has similarly suggested that we need to view objects as mundane as seat-belt chimes and hydraulic door closers as having human-like “agency” because such objects act in the world and shape human action. This line of thinking appears to be propelling the development of products and services that give machines significant power over people’s lives, based on assertions that machines perform as well as or better than humans at many tasks. Getting a job? Software is already being used to choose from a pool of candidates based on their resumes and social media profiles, using machine learning algorithms to make assertions about future performance and “fit.” There are bots that can even conduct job interviews. These approaches are being attempted well beyond hiring, for a broad range of decisions, from who should get a kidney to who should go to jail and for how long.
There are very likely numerous scenarios in which machines can do a better job than fallible, biased, or oblivious humans. There may even be demonstrable cases where software overcomes racist and sexist trends in key decision spaces. Even if this is true, there are also a lot of things machines will never be able to do. Why? Because they lack empathy. Empathy is a core human trait that differentiates us from machines and motivates important human values like charity and the desire to relieve suffering. Unlike many animals, we even feel empathy for members of non-human species. We routinely agonize over what pets and farm animals feel, and we worry about the injustice of picture windows as experienced by birds in flight. We also appear to empathize with inanimate objects like cars, and yes, devices with artificial intelligence, such as those smart assistants and even military robots. Humans are overflowing with empathy. It is empathy that causes us to care about homelessness in our cities, the poor health of strangers, and the fates of victims of far-off wars and famines even when we aren’t in direct contact with their struggles.
Empathy is particularly important to our human futures. We cannot simply decide who should live or die, who should suffer, or who deserves a second chance based on code libraries and the capability of giving you bad news with the appearance of regret. Being human means eventually having the experience of pain, which contributes to an ability to empathize with the suffering of others. Machines cannot have these experiences. This is why humans must always be a central part of important decisions that concern the well-being of other humans. But artificial intelligence has been proposed to make determinations in just those types of realms, such as in military matters. Despite vacuous claims about achieving “humane war” and other insane concepts, war is and should be recognized as entirely lacking in humanity. Machines will not improve it, only distance human actors from having to confront its awfulness. Similar discussions about handing off difficult decisions to artificial intelligence need to come to a full stop whenever they involve determinations about human fates. Will the autonomous car allow its owner to die to save others? Who will be liable when the healthcare robot amputates the wrong limb? Liability isn’t the only issue here, and neither is the project of figuring out how to program “morality” into AI. The autonomous device can never “care” who lives or dies, which limb should be removed, or how much anyone suffers. It can only make a calculation and then act in the world without paying an emotional price. Like a sociopath. This is not how moral decisions are made. Only humans can truly care about anyone or anything, and that is the fundamental basis of moral agency. Artificial intelligence, quite literally, has no skin in the game. And this is why artificial intelligence can never replace us.
Silicon Valley Joins the Culture War?
This past Sunday the web host and domain name registrar GoDaddy bowed to months of pressure from activists and told their longtime customer, The Daily Stormer, to go find another host for their website. On Monday, Google similarly denied a home to the white supremacist, Nazi-aligned website. A denigrating post about a young activist killed by an apparent neo-Nazi at a white nationalist rally in Charlottesville was the final straw that forced GoDaddy’s hand and, presumably, that of Google. In related news, The Los Angeles Times reported over the weekend that other Silicon Valley service providers, such as the short-term rental company Airbnb and the crowdfunding site Patreon, were blocking the use of their services by various “far-right” groups, forcing them to find other providers and, in some cases, to create their own. We should be proud to see Silicon Valley coming to the rescue and fighting on the right side of the culture war, right?
If only it were so simple.
There are several hard questions that should be asked about banning or allowing white nationalists and a whole range of other haters to use the internet to spread their messages, including those messages that strike fear into the hearts of many other users. There are also many important questions—or demands—that should be posed to Silicon Valley firms and our elected leaders to define what should and should not be construed as free expression and to place the burden of policing it in the right set of hands.
The legal basics involved here are the U.S. Constitution and the “safe harbor” provision of the Communications Decency Act. The free expression guarantees of the First Amendment are frequently cited by white nationalist types as the legitimizing bases for their demonstrations and published hate speech, but they do not apply to services operated by non-governmental entities. In other words, the popular services of the internet, such as Airbnb, Twitter, YouTube, etc., have virtually no obligations under the U.S. Constitution. This means that internet platforms can basically block or allow nearly any type of expression, unless such expression is specifically addressed by law. Section 230 of the Communications Decency Act, otherwise known as the “safe harbor” provision, basically absolves providers of internet services, including ISPs, web hosts, media streaming services, and others, of liability for how their customers use those services. If a customer contributes content, it’s on the customer; the internet service is not to be construed as the “publisher” of the content. The CDA’s Section 230 has exceptions of course, but hate speech isn’t one of them.
In the wake of the violence in Charlottesville, many social media commentators were rightly upset at the various enablers of sites like The Daily Stormer and the many other services that provide any sort of comfort to white nationalists. However, as a legal matter, the sites have no constitutional or statutory obligation to do anything, and for the most part, they haven’t. A search on Facebook for “white pride” or similar terms will reveal lists of pages dedicated to white nationalists and the people who love them (really – there’s a white pride dating group). Google has come under fire for failing to prevent its search auto-complete algorithms from completing sentences like “Muslims are…” with “terrorists” and “the holocaust is” with “a hoax,” and similarly unhelpful constructions (which it has since improved). Twitter and Facebook have both come under fire from users and commentators who loudly complain that the platforms do far too little to prevent some of their users from engaging in relentless harassment, even when it includes threats. GoDaddy had been under pressure for months to distance itself from The Daily Stormer but chose to do nothing until some magic line was crossed this past weekend.
Silicon Valley’s failure to embody the role of stewardship for civil society should not be surprising, however. For one thing, it is not exactly clear whether it actually makes sense to empower corporations to carry the water of a society’s moral duties. Of course we want corporations to act morally, but as the power of corporations increases—particularly the corporations that are most visible in the internet/mobile sector—the power of non-commercial society seems to be decreasing. (The American companies Alphabet and Apple are worth, together, over $1.4 trillion.) The key questions to consider here include: Are we comfortable leaving the decisions about who gets to speak and who does not to enormous institutions that are generally unaccountable to society? How can we be sure that such choices will be made in the best interests of the public rather than to meet narrow, short-term business objectives? Given the increasing importance of the major internet platforms, such as Facebook and YouTube, as accessible and powerful venues for expression of all kinds, it seems obvious that the platform operators must bear some responsibility for what that expression does, regardless of how the law and regulatory environment are currently arranged. Yet it is sadly unclear how to make the execution of that responsibility align with the cherished values of electoral democracy and civil society. What is clear to me is that “the market” is not a sufficient incentive structure to ensure that socially beneficial speech and other forms of expression are adequately nurtured and protected.
In a democracy, elected officials such as mayors and governors have the power to determine what constitutes protected expression on the streets of our towns and cities. They will do this imperfectly of course, as seems to be the case in Charlottesville where the city was warned repeatedly about the risk of violence. What is key here is that elected officials serve at the pleasure of their constituents and, in a functioning democracy, such constituents choose those officials and play at least a supporting role in the decisions they make. This process is by no means unproblematic, but the process is well-established and can be influenced by the moral intuitions of voters and activists. Meanwhile, you and me and everyone we know have zero say about what GoDaddy does. Whoever guides the decisions there is far less likely to do so out of moral compunction or the fear of losing political power. Sure, we can get a Twitter gang together to shame them into taking an action against some hate publication, but that is not democracy. For one thing, GoDaddy is only likely to respond when they see money on the line. They didn’t see that until The Daily Stormer did something so vile in the wake of an attention-grabbing murder that they figured they couldn’t pretend they were “neutral” anymore without paying some price. And even though the end result was positive in my view, it wasn’t a democratic action and it will have no impact on what anyone else does. GoDaddy, Google, and Amazon Web Services probably host many hundreds of other websites that promote hatred and will continue to do so.
This should not be surprising because this is the reality of what Silicon Valley does, and particularly what it does when it is enabled by a mix of free-market utopians and free-speech maximalists: It enables bullies. The information industry is built on a winner-take-all model that relentlessly removes revenue from communities and traditional capitalist activities, such as media distribution and street-level retail, and redistributes it into increasingly fewer (and richer) hands. In the service of this business model, Silicon Valley oracles celebrate ego-driven entrepreneurialism and denigrate steady jobs, equality, wealth-sharing, and any sort of collective action. The type of freedom that Silicon Valley celebrates is freedom for the strong, freedom for the already-got-some, and includes a full-throated claim that only a pure meritocracy that denies the inequities of history is fair or legitimate. Oddly enough, as revealed by the infamous memo by James Damore, Silicon Valley has no problem promoting discredited stereotypes about women and other “less-thans” who supposedly aren’t genetically wired to live up to the narrow standards of the engineering elite. All of this taken together is important to note because when Silicon Valley boosters throw around bromides about the value of a free and open internet, what they mean might not be what you think it means.
Don’t get me wrong: I support a free and open internet, but I have my own definition of what that is. For one thing, when I think about free speech, I don’t think about it as completely unfettered, louder-is-freer free speech. Constitutional free speech isn’t wholly free and neither should it be online. For me, free speech is only effective when it doesn’t silence another legitimate speaker. When a cruel or threatening Twitter troll chases an LGBT activist or a game developer off of the platform (or out of their home), that is not an exercise in free speech, that is simply intolerable bullshit. It is vile and immoral behavior that deserves condemnation and little to no protection from the authorities. Now before you go and accuse me of hypocrisy by citing the acts of anti-fascists whose stated aims include silencing neo-Nazi types, note that I wrote that sentence about my view of free expression with care. I have some very specific ideas about what makes a speaker “legitimate.” Just as I claim that free speech is not speech that silences others, I also hold that a legitimate speaker is one who does not aim to deny basic human rights to others. White nationalists fail that test, as do speakers who denigrate women or describe others as inhuman and unworthy to live. Despite whatever non-violent benevolence some white supremacists may claim to espouse, history shows that the end goal of white supremacy is the exclusion, enslavement, or annihilation of non-Whites, including many categories of people with light skin who are otherwise deemed undesirable (Muslims, Jews, LGBT folks, etc.). This is not debatable. Nazi Germany happened. Hundreds of years of African slave trade happened. While we might group hardcore leftists in with other historical rights-deniers like Stalin’s Soviet Union, there is zero evidence that anti-fascists have gulags in mind as the end goal of their activism. Driving racial hatred back into the margins would probably suffice.
This leaves us with a conundrum. We have left the barn door open and allowed Silicon Valley to move the popular venues of expression from the community stage and the city street to their proprietary platforms, where they are guided not by constitutional or democratic principles but by terms of service strategically designed to maximize profits and offset risk. This is not the formulation for achieving a civil society. Extremist speech that is permitted to eddy and coalesce in seemingly ungovernable online spaces builds momentum and eventually spills out onto the streets and potentially into violence, as we have witnessed in Charlottesville, in Kansas, and in Portland. Free expression hasn’t lost its value or importance, but we can no longer allow governments to leave the management of free expression in the hands of those least qualified to handle it—internet platform providers. Broad and generous interpretations of the CDA safe harbor provisions and a misplaced application of constitutional principles to venues that bear no constitutional obligations—or really any obligations to anyone—have pushed society to the brink of chaos. It’s time we focused not only on how to keep the internet “free and open” but also on how to make it fair and inclusive, and ultimately, just.
Who is actually made safer or freer by safe harbors? Do you feel safer? I, for one, do not.
Packingham: The Danger of Confusing Cyberspace with Public Space
A recently decided Supreme Court case has triggered a debate about how much (or little) governments can regulate the use of online spaces. Specifically, in Packingham v. North Carolina, a case about a state prohibition on social media use by sex offenders, the court has weighed in with an opinion that would seem to suggest that social media sites and services are no different from streets or parks where the First Amendment is concerned. While I tentatively agree with the majority that the government should not issue sweeping restrictions on internet access based on an individual’s criminal record, justifying this position by portraying internet sites and services as public space is misleading and, in my opinion, dangerously naïve. As if he had just read the collected essays of John Perry Barlow, Justice Anthony Kennedy writes in the majority opinion: “in the past there may have been difficulty in identifying the most important places (in a spatial sense) for the exchange of views, today the answer is clear. It is cyberspace…” Kennedy correctly asserts that ‘cyberspace’ plays an increasingly important role in people’s lives, but he overlooks how the spaces and places provided by the internet are fundamentally different from those that can more accurately be described as public spaces, such as streets and parks.
On Facebook, for example, users can debate religion and politics with their friends and neighbors or share vacation photos. On LinkedIn, users can look for work, advertise for employees, or review tips on entrepreneurship. And on Twitter, users can petition their elected representatives and otherwise engage with them in a direct manner. Indeed, Governors in all 50 States and almost every Member of Congress have set up accounts for this purpose…In short, social media users employ these websites to engage in a wide array of protected First Amendment activity…(emphasis added).
Echoing the many observers who have written paeans to the free-wheeling uses and democratizing potential of the internet, the majority opinion in Packingham demonstrates an ill-informed exuberance about the freedoms enjoyed by users of social media platforms. Even Justice Samuel Alito, in his concurrence with the majority, criticizes what he calls the court’s “loose rhetoric,” stating, “there are important differences between cyberspace and the physical world…” Yet Alito only criticizes the breadth of Kennedy’s claims while similarly failing to recognize the myriad ways our civil rights cannot be asserted on the internet. The resulting opinion promotes a popular but inaccurate narrative about the beneficence and neutrality of the internet in general, and social media platforms in particular.
Let’s be abundantly clear: social media sites and services are not public spaces and those who use them are not free to use them as they please. Social media platforms are wholly owned and tightly controlled by commercial entities who derive profit from how they are used. While, as is argued in Packingham, governments may be limited in the extent to which they can regulate access to or use of an internet resource, social media users are already subject to the potentially sweeping choices made by site operators. Through a combination of architecture (code) and policies (terms of service), social media users are guided and constrained in what they can do or say. Twitter, Facebook, and other platforms routinely block users and delete content that would most likely be considered protected speech if it took place in a public venue. So, while we can probably agree that social media platforms have become central to the social lives of many millions of people, this means only that these services are popular. It does not make them public.
Justice Kennedy attempted to link the free speech rights that have been upheld in cases concerning other venues, such as airports, with the rights that should be available on the internet. While I do not disagree that the full extent of our constitutional protections should be available in online venues, the generally unregulated status of the internet and the commercial ownership of most of its infrastructure mean that cyberspace bears very little resemblance to ‘realspace.’ Airports, for example, are public institutions operated by government agencies. A social media site—almost the entire internet now—is more like a shopping mall. In much the same way that social media platforms reproduce features of life in public places like city streets, shopping malls only mimic the interactive spaces they have come to supplant. A mall is neither street nor park. Different rules—and laws—apply to malls. When the Mall of America near Minneapolis shut down a Black Lives Matter protest in December, the mall operators were able to assert their property rights over the expressive and assembly rights of the protestors. A municipality would have risked a civil rights lawsuit had it broken up a peaceful protest on a city sidewalk or in a public park.
Packingham is a case about constitutional rights that overlooks the increasing privatization of those rights. It is also part of a larger problem of misrepresenting cyberspace as a zone of freedom. This transformation in our relationships to rights, and our perceptions about those rights, is aided by the invisibility of power online. Facebook, Twitter, and the rest, by providing expressive spaces in which their users supply the visible content, do not much appear to us as actors in this drama. We are led to believe that they simply provide appealing services that we get to use so long as we follow some seemingly benign ground rules. We fail to recognize that those rules are not designed for the best interests of users, but for the goals of the platforms themselves and their advertisers. Facebook in particular has worked hard to encourage dramatic changes in human social behavior that have enabled it to gain deep knowledge about its users and to monetize that knowledge.
Justice Kennedy’s opinion is especially irksome because, while it purports to preserve important rights as our lives migrate online, it overlooks the distressing trend of privatization of the very rights that the constitution promotes. Yes, we may engage in first amendment activities online without undue interference by government officials, but the ability to do so is not guaranteed by the government because the government is barely involved. Ever since the internet ceased being a project of the Department of Defense, most of it has been privately owned and the government has avoided regulating most of the activities that take place there. While it may be true that an unregulated internet is a good thing, a side effect of this approach has been the growth of enormously powerful online businesses based on manipulating and spying on users and profiting from the resulting data. Every single communication and transaction that takes place on the internet passes through infrastructure belonging to dozens, even hundreds, of private companies, any of which may assert their own combinations of architectural and policy restrictions on how that infrastructure is used. Where it suits a company to operate with total neutrality and openness, it does so. Where it does not, the company acts in whatever manner suits the bottom line. Facebook, for example, is frequently lauded for its capacity to support political organizing as well as other modes of first amendment activity. But if Facebook decided tomorrow to block access to an NAACP page or to prevent the use of its messaging system to organize a legal street protest, there is nothing but the potential for consumer backlash to prevent it from doing so. If Google decided to choose the next U.S. president by subtly shaping “personalized” search results, there is no law on the books to prevent it. Packingham says nothing about this kind of power over free expression, which dwarfs that of the government when it comes to online activity. Until the government and the courts begin to address the privatization of our rights online, court opinions celebrating our online freedoms will continue to ring hollow while amplifying perceptions of government irrelevance in the internet age.