
The Model is Broken

The major social media platforms, especially YouTube and Facebook, are on track to help anti-vaxxers prevent the eradication of COVID-19 once a vaccine becomes available. A news story this week in The Guardian reports on a survey finding that one in six Britons is likely to resist a vaccine for the novel coronavirus. That’s roughly 16% of the UK population – millions of people. I suspect we can count on similar figures in the United States, which would mean a staggering number of people are likely to refuse vaccination during an international pandemic. This is not simply an inevitable product of a world of conflicting ideas of goodness and health. It is the result of social media enabling the amplification of irresponsible content to generate ad dollars. Anyone who has been paying attention will have noted the rising informational power of social media influencers who traffic in potentially deadly conspiracy theories on topics ranging from pedophile rings to chemtrails. On the vaccine front, it was only a handful of months ago, in 2019 (remember 2019?), that we witnessed the disastrous irresponsibility of social media platforms contributing to a deadly measles epidemic in Samoa. While plenty of blame belongs to the individuals hawking healthcare conspiracies for attention or book sales, their power to encourage awful decisions would be marginal at best without dramatic amplification by platforms like YouTube and Facebook. Monetizing attention without bearing responsibility for the consequences is the business model of the internet, and the model is broken. It is the legacy of a short list of consequential policy and business decisions over the last 25 years – decisions that were not inevitable and whose effects are proving disastrous for the fabric of society and the well-being of everyone.

In case you aren’t up on this history and policy landscape, permit me some space here to break it down. The original version of the internet was a project of the Department of Defense, which wanted a resilient and decentralized means of communication that would function to some degree even if large parts of the country were a smoking ruin. However, once the original concept was transformed into a consumer network and a successful business model emerged, the decentralized nature of the internet faded quickly. Between the world-spanning popularity of Facebook and YouTube and the gigantic cloud computing infrastructure provided by Amazon, today’s internet is hardly decentralized. Control over information flows resides in fewer private hands than ever before. Much of the wealth creation and consolidation in online businesses is the result of the Telecommunications Act of 1996, an enormous piece of legislation in the United States that was partly conceived to allow the consolidation of old media. Alongside the Telecommunications Act, Congress passed the Communications Decency Act (CDA), a family-friendly law intended to promote the filtering of pornographic content. The CDA includes a liability shield in the form of this sentence: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. §230(c)(1)). By differentiating providers of an “interactive computer service” from publishers (e.g., newspapers, publishing houses, television stations), the law ensures that services hosting media and other material provided by users are generally not held responsible for what that content actually is. This lets user-driven platforms like YouTube, Facebook, and 4chan off the hook because they are not “publishers,” just conduits for user-generated content.
So, things that are very unlikely to appear in a newspaper – bomb-making instructions, obscenity, threats, false claims about vaccines – can appear online. Another pivotal development along the way was the emergence of the so-called advertising model. When the internet was still new, it was not free. One of the first widely available web browsers, Netscape Navigator, was boxed software you had to pay for. America Online and CompuServe, the era’s popular information portals and messaging services, required monthly fees. Even email accounts cost money. A “war” between Netscape and Microsoft in the late 1990s changed everything. Microsoft promoted its struggling web browser, Internet Explorer, by making it free. This abrupt change in strategy dramatically shifted the industry’s business model and wiped Netscape out of existence (the Mozilla Foundation is its surviving legacy). Other startups took note, and paid services rapidly declined in favor of free ones – email, video sharing, resume hosting, etc. Facebook, Twitter, and the rest emerged in this now-familiar environment in which platforms and apps are available for free and supported by advertising dollars. With Section 230 on their side, the platforms can host the most inflammatory content their users upload and then sell ads right next to it. Another refinement to the model was personalization. The surveillance features of the internet are built right in, making it incredibly easy to build profiles of end users and target them with ads. But with personalization – in which no two users have the exact same experience online – targeted advertising not only profiles people but constructs audiences. By placing users alongside others who share similar beliefs and interests, those beliefs and interests are reinforced, creating more engagement with pleasing content, which in turn provides a rich target for advertisers.
To paraphrase Safiya Noble’s critique of Google Search, social media is not whatever you think it is; it is an advertising system. Advertising is the reason every platform exists. Advertising guides every decision and ultimately influences what you read, view, and click. This is why it appears that the people making the decisions at Facebook and YouTube don’t care if a particular video promotes volunteering in your community or bombing it. If it sells ads and people don’t object strongly enough to threaten the revenue stream, then it’s welcome on their sites. While many sites have instituted content moderation to cull the very worst, they only do so to the limit of consumer revolt. (If Facebook and YouTube thought they could get away with hosting puppy-torture videos without an advertiser revolt, they would.)

Returning to the anti-vaxxer story, we humans are obviously flawed and readily receptive to “red pill” conspiracies, which long predate the internet. The world is confusing, chaotic, and sometimes evil, and conspiracies offer reassuring answers to hard questions. Many authors and outlets hawking “hidden truths” are effective because they employ the trick of wrapping a lie inside an apparent truth. Do vaccines cause autism? The scientific consensus is a clear no. But it’s easy to wrap a scary untruth inside a package of compelling evidence. For example, it is true that Big Pharma has at times put public health at risk for profit. This doesn’t make vaccines bad for you, but the real ethical failings of the institutions that govern our lives make demands on our ability to tell the difference between legitimate and illegitimate stories about them. The Trump presidency has demonstrated this. Take a contentious issue with a legitimate basis, such as the idea that Washington DC elites have not demonstrated sincere concern for the livelihoods of vast swaths of the population for decades, and then remix it with lies that shift the blame onto “job-taker” scapegoats, and you can sell the public on moral failures like the deportation of asylum seekers and refugees. The complexity of contentious issues is one reason why responsible publishers are valuable and badly needed. Holding information outlets responsible ensures that important stories are vetted to some standard before being released to the world, instead of unleashing a firehose from which the loudest, most inflammatory voices dominate. Making outlets responsible for their content doesn’t guarantee truthfulness or objectivity. The New York Times and the BBC have much to answer for in their histories of coverage, but whatever appears in those venues has to be approved by somebody who is willing to accept the consequences.
That means something even if the results aren’t always satisfying. Furthermore, everyone sees the same New York Times and the same BBC, which means we can all discuss a somewhat singular story and use it as a basis for rational discourse – including a discourse that doubts the official line. Meanwhile, in the responsibility-free zone of Facebook and YouTube, a zone that reaches more people than any other media source, LGBTQ+ indoctrination conspiracies, deep state fairy tales, and the dire warnings of anti-vaxxers flow into the world, placing marginalized people at risk and doing tremendous damage to social cohesion. The sheer volume of irresponsible content creates an impossible challenge for people trying to make sense of things on their own. Worse still, the personalization of online experience means that many controversial and false stories are seen only by those most likely to be susceptible to them, further dragging people down into conspiracy caves and shielding them from views that might broaden their perspective. This affects people of every political and ideological persuasion. While there is some truth to the notion that every idea deserves to be expressed somewhere, I do not endorse the notion that all ideas deserve equal time. No single one of us has the entire truth, but we can’t assemble truths into a rational whole by swimming in an ocean of lies. Personalization demonstrates the utter hypocrisy of claims that the solution to “bad” speech is more speech. There is far too much speech dumped on people for them to make rational choices with any regularity. And with personalization, most people are not given a real choice in any case.

Bringing up the topic of social media curation and responsibility naturally leads to questions about how to solve the current mess. I have a few ideas. First, we have to move away from believing that the status quo has to be this way. The world of information existed before the big platforms, and there will likely be a different information order 25 years from now. We also have to move away from the belief in free-speech absolutism. Every freedom has limits, and with each freedom comes responsibility. Simply banging the drum of “liberty” without a plan does not produce a workable society. Similarly, attempts at solutions that get bogged down in “well, how will Facebook do it?” completely miss the point. We need to aim higher than fixing a few things, securing a couple of promises, and then accepting more of the same. You and I should not concern ourselves with whether Facebook can manage it. Next, I have to say that CDA §230 has outlived its usefulness. It was not intended to produce giant and totalizing communications platforms that are accountable to no one (except advertisers, and we cannot count on them as arbiters of justice). This is tricky because §230 has likely helped formerly marginalized voices be heard. The Black Lives Matter movement was never going to get much sympathetic coverage in the Washington Post or on the nightly news. The hands-off approach of the major platforms literally gave BLM a platform to push racial justice into the mainstream, and the world is better off for it. There is a risk that in the absence of §230, we might lose some opportunities to hear from marginalized voices in the future. But that’s a big “might,” and a big risk to take while hoping for the best. Meanwhile, the corrosive effects of oppressive and deceptive information are tangible. More people have been killed by right-wing extremists in the United States in the last several years than by jihadists or left-leaning extremists.
Observers correlate the recent rise in hate group activity with their unhindered presence on social media. I believe we can and must act to be better stewards of speech without submitting to slippery-slope arguments about how all free expression will be lost. The key is to treat Facebook and YouTube (and others) as publishers. Make them, and those that follow, responsible for what they host and profit from. Really, this is not a stretch. By personalizing the user experience and filtering out the most objectionable content, the big platforms are already acting like publishers. We could carve out limited §230 protections for platforms of smaller scope while holding the most profitable accountable as a cost of doing business. Inevitably, folks will ask the functional question: how could the biggest platforms possibly take responsibility for all of the content on their enormous sites? The short answer is: not our problem! Managing a gigantic platform is the responsibility of those who profit from it. With great scale comes great responsibility. I suggest that YouTube, for example, could simply slow the hell down and employ an enormous citizen advisory board to curate the site. Sure, it might deny us the privilege of seeing every single video of people singing about international Jewish banking conspiracies, and it would lessen the amount of content they host. But even if they cut their content down to a tenth of what it is now, there would still be a staggering amount of it. Next, it’s time to apply new limitations on advertising. We already regulate advertising practices in old media, and it’s time to do something about new media. Targeted advertising is, after all, the model, and it drives much of what is broken. The “innovation” of micro-sliced affinity audiences and advertiser self-service, while quite profitable, leads to a range of routine abuses, like ad categories for “Jew haters” and others that enable housing discrimination.
What if we did what has worked for generations and let everyone see the same damn ad? Money would still be made. Speech would still happen. We don’t owe them their ad dollars as much as they owe us a society. 

These are modest proposals, and readers will likely find flaws. The point is that something must change. The platforms will not willingly walk away from the money currently on the table, even if it destroys the very fabric of society, even if it prevents the resolution of the greatest pandemic in modern history. So long as money keeps changing hands and funneling into Silicon Valley coffers, the broken model won’t change. It’s up to us to demand something better.

