Our Corporate Overlords, Technology and The Law

The Model is Broken

The major social media platforms, especially YouTube and Facebook, are on track to help anti-vaxxers prevent the eradication of COVID-19 once a vaccine becomes available. A news story this week in The Guardian reports on a survey finding that one in six Britons is likely to resist a vaccine for the novel coronavirus. That’s more than 16% of the UK population, which is a lot of people. I suspect we can count on similar figures in the United States, which would mean a staggering number of people likely to refuse vaccination during an international pandemic. This is not simply an inevitable product of a world of conflicting ideas of goodness and health. This is the result of social media enabling the amplification of irresponsible content to generate ad dollars. Anyone who has been paying attention will have noted the rising information power of social media influencers who traffic in potentially deadly conspiracy theories on a range of topics, from pedophile rings to chemtrails. On the vaccine front, it was only a handful of months ago in 2019 (remember 2019?) that we witnessed the disastrous irresponsibility of social media platforms contributing to a deadly measles epidemic in Samoa. While there is plenty of blame for the individuals hawking healthcare conspiracies for attention or book sales, their power to encourage awful decisions would be marginal at best without dramatic amplification by platforms like YouTube and Facebook. Monetizing attention without bearing responsibility for the consequences is the business model of the internet, and the model is broken. It is the legacy of a short list of consequential policy and business decisions over the last 25 years that were not inevitable and whose effects are proving disastrous for the fabric of society and the well-being of everyone.

In case you aren’t up on this history and policy landscape, permit me some space here to break it down. The original version of the internet was a project of the Department of Defense, which wanted a resilient and decentralized means of communication that would function to some degree even if large parts of the country were a smoking ruin. However, once the original concept was transformed into a consumer network and a successful business model emerged, the decentralized nature of the internet faded quickly. Between the world-spanning popularity of Facebook and YouTube and the gigantic cloud computing infrastructure provided by Amazon, today’s internet is hardly decentralized. Control over information flows resides in fewer private hands than ever before. Much of the wealth creation and consolidation in online businesses is the result of the Telecommunications Act of 1996, an enormous piece of legislation in the United States that was partly conceived to allow the consolidation of old media. However, along with the Telecommunications Act, Congress passed the Communications Decency Act (CDA), a family-friendly law intended to promote the content filtering of pornography. The CDA includes a liability shield provision in the form of this sentence: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230(c)(1)). By differentiating providers of an “interactive computer service” from publishers (e.g., newspapers, publishing houses, television stations), services that host media and other material provided by users are generally not held responsible for what that content actually is. This lets user-driven platforms like YouTube, Facebook, 4Chan, etc. off the hook because they are not “publishers,” just conduits for user-generated content.
So, things that are very unlikely to appear in a newspaper–bomb-making instructions, obscenity, threats, false claims about vaccines–can appear online. Another consequential development along the way was the so-called advertising model. When the internet was still new, it was not free. Netscape Navigator, the first widely popular web browser, was boxed software you had to pay for. America Online and CompuServe were the era’s popular information portals and messaging services, and both required monthly fees. Even email accounts cost money. A “browser war” between Netscape and Microsoft in the late 1990s changed everything. Microsoft promoted its struggling web browser, Internet Explorer, by making it free. This abrupt change dramatically shifted the business model and wiped Netscape out of existence (the Mozilla Foundation is its surviving legacy). Other startups took note, and paid services rapidly declined in favor of free ones – email, video sharing, resume hosting, etc. Facebook, Twitter, and the rest emerged in this now-familiar environment in which platforms and apps are available for free and supported by advertising dollars. With Section 230 on their side, the platforms can host the most inflammatory content their users upload and then sell ads right next to it. Another refinement to the model was personalization. The surveillance features of the internet are built right in, making it incredibly easy to build profiles of end users and target them with ads. But with personalization – in which no two users have the exact same experience online – targeted advertising not only profiles people but constructs audiences. When users are grouped with others who share similar beliefs and interests, those beliefs and interests are reinforced, creating more engagement with pleasing content, which provides a rich target for advertisers.
To paraphrase Safiya Noble’s critique of Google Search, social media is not whatever you think it is; it is an advertising system. Advertising is the reason every platform exists. Advertising guides every decision and ultimately influences what you read, view, and click. This is why it appears that the people making the decisions at Facebook and YouTube don’t care if a particular video promotes volunteering in your community or bombing it. If it sells ads and people don’t object strongly enough to threaten the revenue stream, then it’s welcome on their sites. While many sites have instituted content moderation to cull the very worst, they only do so to the limit of consumer revolt. (If Facebook and YouTube thought they could get away with hosting puppy-torture videos without an advertiser revolt, they would.)

Returning to the anti-vaxxer story, we humans are obviously flawed and readily receptive to “red pill” conspiracies, which long predate the internet. The world is confusing, chaotic, and sometimes evil, and conspiracies offer reassuring answers to hard questions. Many authors and outlets hawking “hidden truths” are effective because they employ the trick of wrapping a lie inside an apparent truth. Do vaccines cause autism? The answer is definitively no. But it’s easy to wrap a scary untruth inside a package of compelling evidence. For example, it is arguably true that Big Pharma has indeed put public health at risk for profit. This doesn’t make vaccines bad for you, but the real ethical failings of the institutions that govern our lives make demands on our ability to tell the difference between legitimate and illegitimate stories about them. The Trump presidency has demonstrated this. Take a contentious issue with a legitimate basis, such as the idea that Washington DC elites have not demonstrated sincere concern for the livelihoods of vast swaths of the population for decades, and then remix that with lies that shift the blame onto “job-taker” scapegoats, and you can sell the public on moral failures like the deportation of asylum seekers and refugees. The complexity of contentious issues is one reason why responsible publishers are valuable and badly needed. Holding information outlets responsible ensures that important stories are vetted to some standard before being released to the world, instead of just unleashing a firehose in which the loudest, most inflammatory voices dominate. Making outlets theoretically responsible for their content doesn’t guarantee truthfulness or objectivity. The New York Times and the BBC have much to answer for in their histories of coverage, but whatever appears in those venues has to be approved by somebody who is willing to accept consequences.
That means something even if the results aren’t always satisfying. Furthermore, everyone sees the same New York Times and the same BBC, which means we can all discuss a somewhat singular story and use it as a basis for rational discourse – including a discourse that doubts the official line. Meanwhile, in the responsibility-free zone of Facebook and YouTube, a zone that reaches more people than any other media source, LGBTQ+ indoctrination conspiracies, deep state fairy tales, and the dire warnings of anti-vaxxers flow into the world, placing marginalized people at risk and doing tremendous damage to social cohesion. The sheer volume of irresponsible content creates an impossible challenge for people trying to make sense of things on their own. Worse still, personalization of online experience means that a significant number of controversial and false stories are seen only by those who are most likely to be susceptible to them, further dragging people down into conspiracy caves and shielding them from views that might broaden perspective. This affects people from any political or ideological perspective. While there is some truth to the notion that every idea deserves to be expressed somewhere, I do not endorse the notion that all ideas deserve equal time. No single one of us has the entire truth, but we can’t assemble truths into a rational whole by swimming in an ocean of lies. Personalization demonstrates the utter hypocrisy of claims that the solution to “bad” speech is more speech. There is far too much speech dumped on people for them to make rational choices with any regularity. And with personalization, most people are not given a real choice in any case.

Bringing up the topic of social media curation and responsibility naturally leads to questions about how to solve the current mess. I have a few ideas. First, we have to move away from believing that the status quo has to be this way. The world of information existed before the big platforms, and there will likely be a different information order 25 years from now. Also, we have to move away from the belief in free-speech absolutism. Every freedom has limits, and with each freedom comes responsibility. Simply banging the drum of “liberty” without a plan does not produce a workable society. Similarly, attempts at solutions that get bogged down in “well, how will Facebook do it?” completely miss the point. We need to aim higher than simply fixing a few things, securing a couple of promises, and then just accepting more of the same. You and I should not concern ourselves with whether Facebook can manage it. Next, I have to say that CDA §230 has outlived its usefulness. It was not intended to produce giant and totalizing communications platforms that are accountable to no one (except advertisers, and we cannot count on them as arbiters of justice). This is tricky because it is certainly likely that §230 has helped formerly marginalized voices be heard. The Black Lives Matter movement was never going to get much sympathetic coverage in the Washington Post or on the Nightly News. The hands-off approach of major platforms literally gave BLM a platform to push racial justice into the mainstream, and the world is better off for it. There is a risk that in the absence of §230, we might lose some opportunities to hear from marginalized voices in the future. But that’s a big ‘might’ and a big risk to take hoping for the best. Meanwhile, the corrosive effects of oppressive and deceptive information are tangible. More people have been killed by right-wing extremists in the United States in the last several years than by jihadists or left-leaning extremists.
Observers correlate the recent rise in hate group activity with these groups’ unhindered presence on social media. I believe we can and must act to be better stewards of speech without submitting to slippery-slope arguments that all free expression will be lost. The key is to treat Facebook and YouTube (and others) as publishers. Make them, and those that follow, responsible for what they host and profit from. Really, this is not a stretch. By personalizing the user experience and filtering out the most objectionable content, the big platforms are already acting like publishers. We could preserve some §230 protections for platforms of more limited scope while holding the most profitable accountable as a cost of doing business. Inevitably, folks will ask the functional question: How could the biggest platforms possibly take responsibility for all of the content on their enormous sites? The short answer is: Not our problem! Managing a gigantic platform is the responsibility of those who profit from it. With great scale comes great responsibility. YouTube, for example, could just slow the hell down and employ an enormous citizen advisory board to curate the site. Sure, it might deny us the privilege of seeing every single video of people singing about international Jewish banking conspiracies and lessen the amount of content they host. However, even if they cut their content down to a tenth of what it is now, there would still be a staggering amount of it. Next, it’s time to apply new limitations on advertising. We already regulate advertising practices on old media, and it’s time to do something about new media. Targeted advertising is, after all, the model, and it drives much of what is broken. The “innovation” of micro-sliced affinity audiences and advertiser self-service, while quite profitable, leads to a range of routine abuses, like ad categories for “Jew haters” and others that enable housing discrimination.
What if we did what has worked for generations and let everyone see the same damn ad? Money would still be made. Speech would still happen. We don’t owe them their ad dollars as much as they owe us a society. 

These are modest proposals, and readers likely will find flaws. The point is that something must change. The platforms will not willingly walk away from the money currently on the table, even if it destroys the very fabric of society, even if it prevents the resolution of the worst pandemic in modern history. So long as money keeps changing hands and funneling into Silicon Valley coffers, the broken model won’t change. It’s up to us to demand something better.


Packingham: The Danger of Confusing Cyberspace with Public Space

A recently decided Supreme Court case has triggered a debate about how much (or little) governments can regulate the use of online spaces. Specifically, in Packingham v. North Carolina, a case about a state prohibition on social media use by sex offenders, the court has weighed in with an opinion that would seem to suggest that social media sites and services are no different from streets or parks where the First Amendment is concerned. While I tentatively agree with the majority that the government should not issue sweeping restrictions on internet access based on an individual’s criminal record, justifying this position by portraying internet sites and services as public space is misleading and, in my opinion, dangerously naïve. As if he had just read the collected essays of John Perry Barlow, Justice Anthony Kennedy writes in the majority opinion: “in the past there may have been difficulty in identifying the most important places (in a spatial sense) for the exchange of views, today the answer is clear. It is cyberspace…” Kennedy correctly asserts that ‘cyberspace’ plays an increasingly important role in people’s lives, but he overlooks how the spaces and places provided by the internet are fundamentally different from those that can more accurately be described as public spaces, such as streets and parks.

“On Facebook, for example, users can debate religion and politics with their friends and neighbors or share vacation photos. On LinkedIn, users can look for work, advertise for employees, or review tips on entrepreneurship. And on Twitter, users can petition their elected representatives and otherwise engage with them in a direct manner. Indeed, Governors in all 50 States and almost every Member of Congress have set up accounts for this purpose…In short, social media users employ these websites to engage in a wide array of protected First Amendment activity” (emphasis added).

Like many observers who have written paeans to the free-wheeling uses and democratizing potential of the internet, the majority opinion in Packingham demonstrates an ill-informed exuberance about the freedoms enjoyed by users of social media platforms. Even Justice Samuel Alito in his concurrence with the majority criticizes what he calls the court’s “loose rhetoric,” stating, “there are important differences between cyberspace and the physical world…” Yet Alito only criticizes the breadth of Kennedy’s claims while similarly failing to recognize the myriad ways our civil rights cannot be asserted on the internet. The resulting opinion promotes a popular but inaccurate narrative about the beneficence and neutrality of the internet in general, and social media platforms in particular.

Let’s be abundantly clear: social media sites and services are not public spaces and those who use them are not free to use them as they please. Social media platforms are wholly owned and tightly controlled by commercial entities who derive profit from how they are used. While, as is argued in Packingham, governments may be limited as to the extent they can tailor regulations over the access or use of an internet resource, social media users are already subject to the potentially sweeping choices made by site operators. Through a combination of architecture (code) and policies (terms of service), social media users are guided and constrained in what they can do or say. Twitter, Facebook, and other platforms routinely block users and delete content that would most likely be considered protected speech if it took place in a public venue. So, while we can probably agree that social media platforms have become central to the social lives of many millions of people, this means only that these services are popular. It does not make them public.

Justice Kennedy attempted to link the free speech rights that have been upheld in cases concerning other venues, such as airports, with the rights that should be available on the internet. While I do not disagree that the full extent of our constitutional protections should be available in online venues, the fact of the generally unregulated status of the internet and the commercial ownership of most of its infrastructure means that cyberspace bears very little resemblance to ‘realspace.’ Airports, for example, are public institutions operated by government agencies. A social media site—almost the entire internet now—is more like a shopping mall. In much the same way that social media platforms reproduce features of life in public places like city streets, shopping malls only mimic the interactive spaces they have come to supplant. A mall is neither street nor park. Different rules—and laws—apply to malls. When the Mall of America in Bloomington, Minnesota shut down a Black Lives Matter protest in December 2014, the mall operators were able to assert their property rights over the expressive and assembly rights of the protestors. A municipality would have risked a civil rights lawsuit had it broken up a peaceful protest on a city sidewalk or in a public park.

Packingham is a case about constitutional rights that overlooks the increasing privatization of those rights. It is also part of a larger problem of misrepresenting cyberspace as a zone of freedom. This transformation in our relationships to rights, and our perceptions about those rights, is aided by the invisibility of power online. Facebook, Twitter, etc., by providing expressive spaces in which their users supply the visible content, do not appear to us much as actors in this drama. We are led to believe that they simply provide appealing services that we get to use so long as we follow some seemingly benign ground rules. We fail to recognize that those rules are not designed for the best interests of users, but for the goals of the platforms themselves and their advertisers. Facebook in particular has worked hard to encourage dramatic changes in human social behavior that have enabled it to gain deep knowledge about its users and to monetize that knowledge.

Justice Kennedy’s opinion is especially irksome because, while it purports to preserve important rights as our lives migrate online, it overlooks the distressing trend of privatization of the very rights that the constitution promotes. Yes, we may engage in first amendment activities online without undue interference by government officials, but the ability to do so is not guaranteed by the government because the government is barely involved. Ever since the internet ceased being a project of the Department of Defense, most of it has been privately owned, and the government has avoided regulating most of the activities that take place there. While it may be true that an unregulated internet is a good thing, a side effect of this approach has been the growth of enormously powerful online businesses based on manipulating and spying on users and profiting from the resulting data. Every single communication and transaction that takes place on the internet passes through infrastructure belonging to dozens, even hundreds, of private companies, any of which may assert its own combination of architectural and policy restrictions on how that infrastructure is used. Where it suits a company to operate with total neutrality and openness, it does so. Where it does not, the company acts in whatever manner suits the bottom line. Facebook, for example, is frequently lauded for its capacity to support political organizing as well as other modes of first amendment activity. But if Facebook decided tomorrow to block access to an NAACP page or to prevent the use of its messaging system to organize a legal street protest, there is nothing but the potential for consumer backlash to prevent it from doing so. If Google decided to choose the next U.S. president by subtly shaping “personalized” search results, there are no laws on the books to prevent it. Packingham says nothing about this kind of power over free expression, which dwarfs that of the government when it comes to online activity.
Until the government and the courts begin to address the privatization of our rights online, court opinions celebrating our online freedoms will continue to ring hollow while amplifying perceptions of government irrelevance in the internet age.

 


Copyright Shaming Won’t Work

I’ve been reading Cory Doctorow’s latest book, Information Doesn’t Want to Be Free, and it’s a good read. There is a lot of good stuff in it, even if I don’t exactly agree with his insistence that the world is as good as ever for artists who want to create work and get paid for it, despite the abundance of new ways to share that work online. Doctorow offers a fresh perspective on the “copyfight,” and his foundational arguments are compelling.

Including this one: shaming and prosecuting people for copying digital artworks without permission is futile and it’s mean. I’m not talking about people who are making money from ripping and reselling exabytes of digital art. (I refuse to resort to the stark and depressing term “content” to describe what humans make to express themselves.) I’m talking about the threats and actions being taken against individuals who acquire media from unauthorized places or in unauthorized ways for their own use, and who share it with friends, which is what most “piracy” is. Don’t get me wrong, I am very concerned about artists getting paid. I tried to make a living as an artist for many years, and I know many people who do it now. I want them to get paid for their work, and I struggle with how complicated it has gotten for that to happen, but this is nothing new. Being a professional artist has always been very, very hard. I also think that the companies that publish and distribute creative work deserve to get paid. But the problem we’re facing is not simply that a bunch of people are sitting at home copying files and thereby cutting into the incomes of artists and their enablers; the entire ecosystem of arts and entertainment has changed radically, and the villains in the highway robbery known as the “entertainment business” are still the same villains as 20 years ago–the major record labels, the movie studios, and, as Doctorow points out, the newly powerful “intermediaries” like Apple and Amazon. Working adversarially at times, and in concert at others, “the industry” has reimagined important parts of the arts business model in ingenious and artist-cheating ways that offer few, if any, additional benefits to “honest” consumers. While reinventing the business, the industry has gone to astonishing lengths to create new crimes and to increase the seriousness of old ones, and its motivations have virtually nothing to do with artists.
It’s unfortunate that most working artists have to labor under the overbearing advocacy of the arts and entertainment industry, because artists still do have rights to assert. It’s just not clear that they would choose to assert them the way Sony and Universal and Amazon do.

I have copied music and movies. Lots of them. So have you, I imagine. In the prime of my music-making and consuming life in the 1980s, 90s, and early 2000s, I inhaled new music. I bought a great deal of music, but I also made cassette tapes and burned CDs of other people’s records, tapes, and CDs. I watched entire seasons of The Sopranos on VHS tapes lovingly mailed by a girlfriend’s mom. These things were and still are illegal, and I had some vague understanding that this might be “wrong,” but I didn’t care. Why? Because I was still paying good money for the stuff all the time. I was (and still am) supporting artists in myriad ways, and a lot of people still do, albeit in new ways that may not be tabulated as “units” like the old days. A key shift has occurred in the relationship between consumers and creative products. In the days when most media arrived in some sort of package, I felt I had complete ownership rights over what I bought. Total control of it once it was in my hands. The emerging business model now is “licensing.” You pay to use some intangible media, but you can’t do anything else (legally) with it, like share it with your spouse or friends. This is a significant paradigm shift for consumers who have been passing around books for hundreds of years, and recorded media for over a century. This radical shift in how creative works are bought and paid for–especially the new limits inherent in the deal–is totally frustrating for people who just want to buy a record and then lend it to friends. The business model of “use it for a moment and it’s gone” wasn’t built for us, and it pisses us off and makes us fairly untroubled about breaking the rules so that it does work for us. This is exacerbated by the fact that the same technology that makes media increasingly intangible also makes it much easier to share. This is not the fault of the sharers, and the industry has also benefited, finding lots of new ways to make money from the same intangibility.
Copying files remains very easy to do, and since it’s easy, we’re all going to use the objects we have at our disposal because that’s what free, imaginative people do. They don’t wring their hands and recite honor codes handed down by corporations when they want to hear a song. That’s why I owned a tape-to-tape deck in the 1980s and used it to copy music. Trying to convince people not to use the tools in front of them is simply untenable. Making us all outlaws over it is a Kafkaesque absurdity.

Here is a scenario: My wife Sarah and I take a trip together and bring our Kindles, each loaded up with four books. Sarah finishes a book and says “you’ve got to read this.” I say “great, I just finished my book. Give me yours.” The problem is that it’s stuck on her Kindle, and she wants to read one of her other books. She’s not much interested in the rest of mine. Here is where the trouble starts: I just want to borrow a fucking book. I don’t want to manage user accounts or visit an Amazon website to arrange a 14-day loan, or whatever laughably paltry solution they offer. If, at that moment, someone handed me a little box that could make the two Kindles share books, but it ran an illegal program and using it violated some non-negotiable terms of service I was forced to agree to in order to have any technology in my hands at all, I would very likely say “fuck it” and “yes please.” In one form or another, this is what is happening everywhere, except that it’s a lot simpler to copy using the Internet, and it’s not going to get any harder. Making it a crime is not the solution.

This is a rant that does not offer solutions to the bottom-line challenges for artists, and I know that. There are serious problems with the business of being a working artist in the 21st century, and file-sharing does play a role. But it’s technically infeasible to stop people from sharing media files, and the sharing is as old as the means to record and preserve the work. It’s incredibly easy to right-click and choose “save,” which is why it feels like, at worst, a “thought crime” and not a real crime at all. No amount of shaming is going to stop that, and prosecuting people only demonstrates a very scary sort of corporate power that we are seeing more and more throughout society. There are other ways to get people to pay for art, and many of them reward the artist directly rather than through huge entertainment corporations. Live performance, pay-what-you-will schemes, merchandising, etc. These aren’t optimal money-makers for all creative artists, but they work for some, and arguably have given rise to new types of creativity. Whether or not the new deal rescues art-as-we-knew-it is an open question, but I don’t think we can continue to rely on the arts-business models of the 1950s and 60s. It was a good time, but things have changed.
