Our Corporate Overlords, Tech and Society

Hyperbole Unboled

(or: Yes, AI is Coming For Your Job)

The industrial revolution is not over. It just has a shiny new(ish) face: artificial intelligence. Despite what you may think AI is, and despite the disappearance of smokestacks and export surpluses in places that used to be the image of industrialization, industry is still very much active. It continues to remake societies through technologies that reduce or eliminate human labor and to carry out the interests of consumption and international domination. And in so doing, it is presented to us shrouded in misleading narratives about innovation and progress while doing tremendous damage to the natural world.

There is a great deal of hyperbole about artificial intelligence. At one end of the hyperbolic spectrum are dystopic predictions of systems or machines so much smarter than humans that, upon becoming ‘self-aware,’ they will decide humanity is dispensable and set about enslaving or destroying us. We can call this the ‘Terminator Doctrine.’ At the other end are grand hopes that AI will solve our knottiest problems, from racist policing to environmental catastrophe, by making important decisions using pure rationality freed from human passions. We can call this the ‘Perfectionist Doctrine.’ Both doctrines are appealing for different reasons, but they are bullshit for the same reason: they fail to account for what AI is actually for.

The appeal of hyperbolic AI is that it conjures a future of gleaming machines imbued with an intelligence that is ruthlessly emotionless and capable of seeing a bigger picture than pathetically limited humans. Such images tempt us through our insecurities, our frustrations with the messiness of human relations, and our individual desires for perfection. A mix of science fiction and sober (if naïve) academics has laid the groundwork for these fantasies, and that is what they are: fantasies. Despite what researchers, boosters, and lazy journalists like to assert, AI is not here to save us. AI exists for one reason and one reason only: to reduce labor costs. Virtually every type of AI that attracts investment or a customer base automates some process that humans currently do, with the promise of requiring fewer of them to do it. This is what makes AI just the latest wave of industrialization.

For every heartening story about AI that detects tumors or provides way-finding for visually impaired people, there is the real mission of AI, which is to carry out the whim and will of corporations and militaries. If the adoption of AI did not promise incredible riches or world domination, there would be no AI to support accessibility or cure disease. There’s not enough money in that.

Sure, AI research and development is monstrously expensive, and AI systems consume eye-watering amounts of electricity and other precious resources, which makes the up-front investment significant. But employing humans is far more expensive overall. The math of AI is not just the computation required to determine whether an image shows a pumpkin or a bicycle; it is the math of industrialists looking to bring down the most significant cost they face – employing humans – a cost made far costlier when those humans demand safe working conditions and a fair share of company profits. And, as with the manufacturing dimension of industry, AI is a planet-killer. The amount of electricity and raw materials required for sophisticated AI systems is staggering, and the environmental costs to extract, process, and later dispose of the toxic ingredients in digital equipment fall mainly on poorer nations.

The appeal of AI – its selling point – is that it performs ‘better’ than humans at various tasks, like handing out prison sentences that are not explicitly racist. Unfortunately, time and again we find that technology doesn’t eliminate our social problems but instead repackages them. The only algorithm that could be free of racism or other fundamental flaws would have to be so exquisitely designed that the cost and effort to produce and maintain it would probably defeat the efficiencies it is meant to create. Arguably we’d be better off continuing our efforts to attend to the core social problems we face as human beings than trying to replace ourselves with something ‘better.’

One way to understand what AI is for is to consider who it is for. The simple answer is that AI is for the people and entities who invest the most money in its development and who stand to earn the most from its acceptance. Despite utopian visions of an apolitical AI that frees us from drudgery and enlightens us with wisdom, virtually all AI arrives in our lives with an agenda – one set by its producers to serve either themselves or a target customer who is probably not you. Consider Siri, that helpful bot in your phone or laptop. Siri does what a giant corporation wants it to do. Quite often you’ll notice that it carries out ‘your’ wishes by employing products and services made by the same company. Curious, that. If Siri happens to help you in the process, that is only because there is potential profit for the company in doing so. It is not a strings-free gift to you; it is a marketing mechanism. Much more directly profitable AI includes systems for selecting job candidates, setting insurance rates, or producing robot soldiers. The target customers for these systems have enormous budgets, making them extremely profitable to serve. The most exciting and innovative AI is not designed for you and me. We are paltry customers for AI by comparison.

If you are uncertain about this argument, I suggest a simple exercise. Every time you hear about an AI, be it a military drone, a grammar checker, or a companion for elderly people, ask yourself how many human hours would be required to perform the same task. Couple that with the question of how much money could possibly be made from an AI that exists only to relieve some form of human suffering, however defined. Once you consider the economics of AI, the answers become clear. And those answers will most likely reveal what AI is for: it’s here to take your job.

Our Corporate Overlords, Technology and The Law

The Model is Broken

The major social media platforms, especially YouTube and Facebook, are on track to help anti-vaxxers prevent the eradication of COVID-19 once a vaccine becomes available. A news story this week in The Guardian reports on a survey finding that one in six Britons would be likely to refuse a vaccine for the novel coronavirus. That’s more than 16% of the UK population, which is a lot of people. I suspect we can count on similar figures in the United States, which would mean there are a staggering number of people likely to refuse vaccination during an international pandemic. This is not simply an inevitable product of a world of conflicting ideas of goodness and health. It is the result of social media enabling the amplification of irresponsible content to generate ad dollars. Anyone who has been paying attention will have noted the rising information power of social media influencers who traffic in potentially deadly conspiracy theories on topics ranging from pedophile rings to chemtrails. On the vaccine front, it was only a handful of months ago in 2019 (remember 2019?) that we witnessed the disastrous irresponsibility of social media platforms contributing to a deadly measles epidemic in Samoa. While there is plenty of blame for the individuals hawking healthcare conspiracies for attention or book sales, their power to encourage awful decisions would be marginal at best without dramatic amplification by platforms like YouTube and Facebook. Monetizing attention without bearing responsibility for the consequences is the business model of the internet, and the model is broken. It is the legacy of a short list of consequential policy and business decisions over the last 25 years – decisions that were not inevitable and whose effects are proving disastrous for the fabric of society and the well-being of everyone.

In case you aren’t up on this history and policy landscape, permit me some space here to break it down. The original version of the internet was a project of the Department of Defense, which desired a resilient and decentralized means of communication that would function to some degree even if large parts of the country were a smoking ruin. Once the original concept was transformed into a consumer network and a successful business model emerged, however, the decentralized nature of the internet faded quickly. Between the world-spanning popularity of Facebook and YouTube and the gigantic cloud computing infrastructure provided by Amazon, today’s internet is hardly decentralized. Control over information flows resides in fewer private hands than ever before. Much of the wealth creation and consolidation in online businesses is the result of the Telecommunications Act of 1996, an enormous piece of legislation in the United States that was partly conceived to allow the consolidation of old media. Along with the Telecommunications Act, Congress passed the Communications Decency Act (CDA), a family-friendly law intended to promote the content filtering of pornography. The CDA includes a liability shield in the form of this sentence: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230(c)(1)). By differentiating providers of an “interactive computer service” from publishers (e.g., newspapers, publishing houses, television stations), services that host media and other material provided by users are generally not held responsible for what that content actually is. This lets user-driven platforms like YouTube, Facebook, and 4chan off the hook because they are not “publishers,” just conduits for user-generated content.
So, things that are very unlikely to appear in a newspaper – bomb-making instructions, obscenity, threats, false claims about vaccines – can appear online. Another important development along the way was the so-called advertising model. When the internet was still new, it was not free. Netscape Navigator, the first widely successful commercial web browser, was boxed software you had to pay for. America Online and CompuServe, the era’s popular information portals and messaging services, required monthly fees. Even email accounts cost money. A “war” between Netscape and Microsoft in the late 1990s changed everything. Microsoft promoted its struggling web browser, Internet Explorer, by making it free. This abrupt change in strategy dramatically shifted the business model and wiped Netscape out of existence (the Mozilla Foundation is its surviving legacy). Other startups took note, and paid services rapidly declined in favor of free ones – email, video sharing, resume hosting, and so on. Facebook, Twitter, and the rest emerged in this now-familiar environment in which platforms and apps are available for free and supported by advertising dollars. With Section 230 on their side, the platforms can host the most inflammatory content their users upload and then sell ads right next to it. Another refinement to the model was personalization. The surveillance features of the internet are built right in, making it incredibly easy to build profiles of end users and target them with ads. But with personalization – in which no two users have the exact same experience online – targeted advertising not only profiles people but constructs audiences. By placing users with others who share similar beliefs and interests, those beliefs and interests are reinforced, creating more engagement with pleasing content, which provides a rich target for advertisers.
To paraphrase Safiya Noble’s critique of Google Search, social media is not whatever you think it is; it is an advertising system. Advertising is the reason every platform exists. Advertising guides every decision and ultimately influences what you read, view, and click. This is why it appears that the people making the decisions at Facebook and YouTube don’t care if a particular video promotes volunteering in your community or bombing it. If it sells ads and people don’t object strongly enough to threaten the revenue stream, then it’s welcome on their sites. While many sites have instituted content moderation to cull the very worst, they only do so to the limit of consumer revolt. (If Facebook and YouTube thought they could get away with hosting puppy-torture videos without an advertiser revolt, they would.)

Returning to the anti-vaxxer story, we humans are obviously flawed and readily receptive to “red pill” conspiracies, which long predate the internet. The world is confusing, chaotic, and sometimes evil, and conspiracies offer reassuring answers to hard questions. Many authors and outlets hawking “hidden truths” are effective because they employ the trick of wrapping a lie inside an apparent truth. Do vaccines cause autism? The answer is no; the claim has been thoroughly discredited. But it’s easy to wrap a scary untruth inside a package of compelling evidence. For example, it is arguably true that Big Pharma has put public health at risk for profit. This doesn’t make vaccines bad for you, but the real ethical failings of the institutions that govern our lives make demands on our ability to tell the difference between legitimate and illegitimate stories about them. The Trump presidency has demonstrated this. Take a contentious issue with a legitimate basis, such as the idea that Washington DC elites have not demonstrated sincere concern for the livelihoods of vast swaths of the population for decades, and remix it with lies that shift the blame onto “job-taker” scapegoats, and you can sell the public on moral failures like the deportation of asylum seekers and refugees. The complexity of contentious issues is one reason why responsible publishers are valuable and badly needed. Holding information outlets responsible ensures that important stories are vetted to some standard before being released to the world instead of just unleashing a firehose from which the loudest, most inflammatory voices dominate. Making outlets theoretically responsible for their content doesn’t guarantee truthfulness or objectivity. The New York Times and the BBC have much to answer for in their histories of coverage, but whatever appears in those venues has to be approved by somebody who is willing to accept consequences.
That means something, even if the results aren’t always satisfying. Furthermore, everyone sees the same New York Times and the same BBC, which means we can all discuss a somewhat singular story and use it as a basis for rational discourse – including a discourse that doubts the official line. Meanwhile, in the responsibility-free zone of Facebook and YouTube, a zone that reaches more people than any other media source, LGBTQ+ indoctrination conspiracies, deep state fairy tales, and the dire warnings of anti-vaxxers flow into the world, placing marginalized people at risk and doing tremendous damage to social cohesion. The sheer volume of irresponsible content creates an impossible challenge for people trying to make sense of things on their own. Worse still, the personalization of online experience means that a significant number of controversial and false stories are seen only by those most likely to be susceptible to them, further dragging people down into conspiracy caves and shielding them from views that might broaden their perspective. This affects people of every political and ideological persuasion. While there is some truth to the notion that every idea deserves to be expressed somewhere, I do not endorse the notion that all ideas deserve equal time. No single one of us has the entire truth, but we can’t assemble truths into a rational whole while swimming in an ocean of lies. Personalization demonstrates the utter hypocrisy of claims that the solution to “bad” speech is more speech. Far too much speech is dumped on people for them to make rational choices with any regularity. And with personalization, most people are not given a real choice in any case.

Bringing up the topic of social media curation and responsibility naturally leads to questions about how to solve the current mess. I have a few ideas. First, we have to move away from believing that the status quo has to be this way. The world of information existed before the big platforms, and there will likely be a different information order 25 years from now. We also have to move away from free-speech absolutism. Every freedom has limits, and with each freedom comes responsibility. Simply banging the drum of “liberty” without a plan does not produce a workable society. Similarly, attempts at solutions that get bogged down in “well, how will Facebook do it?” completely miss the point. We need to aim higher than fixing a few things, securing a couple of promises, and then accepting more of the same. You and I should not concern ourselves with whether Facebook can manage it. Next, I have to say that CDA §230 has outlived its usefulness. It was not intended to produce giant and totalizing communications platforms that are accountable to no one (except advertisers, and we cannot count on them as arbiters of justice). This is tricky, because it is likely that §230 has helped formerly marginalized voices be heard. The Black Lives Matter movement was never going to get much sympathetic coverage in the Washington Post or on the nightly news. The hands-off approach of the major platforms literally gave BLM a platform to push racial justice into the mainstream, and the world is better off for it. There is a risk that in the absence of §230 we might lose some opportunities to hear from marginalized voices in the future. But that’s a big ‘might’ and a big risk to take hoping for the best. Meanwhile, the corrosive effects of oppressive and deceptive information are tangible. More people have been killed by right-wing extremists in the United States in the last several years than by jihadists or left-leaning extremists.
Observers correlate the recent rise in hate group activity with their unhindered presence on social media. I believe we can and must act to be better stewards of speech without submitting to slippery-slope arguments about how all free expression will be lost. The key is to treat Facebook and YouTube (and others) as publishers. Make them, and those that follow, responsible for what they host and profit from. Really, this is not a stretch. By personalizing the user experience and filtering out the most objectionable content, the big platforms are already acting like publishers. We could carve out limited §230 protections for platforms of smaller scope while holding the most profitable accountable as a cost of doing business. Inevitably, folks will ask the functional question: How could the biggest platforms possibly take responsibility for all of the content on their enormous sites? The short answer is: Not our problem! Managing a gigantic platform is the responsibility of those who profit from it. With great scale comes great responsibility. For YouTube, I suggest, they could just slow the hell down and employ an enormous citizen advisory board to curate the site. Sure, it might deny us the privilege of seeing every single video of people singing about international Jewish banking conspiracies and lessen the amount of content they host. But even if they cut their content down to a tenth of what it is now, there would still be a staggering amount of it. Next, it’s time to apply new limitations on advertising. We already regulate advertising practices in old media, and it’s time to do something about new media. Targeted advertising is, after all, the model, and it drives much of what is broken. The “innovation” of micro-sliced affinity audiences and advertiser self-service, while quite profitable, leads to a range of routine abuses, like ad categories for “Jew haters” and others that enable housing discrimination.
What if we did what has worked for generations and let everyone see the same damn ad? Money would still be made. Speech would still happen. We don’t owe them their ad dollars as much as they owe us a society. 

These are modest proposals, and readers will likely find flaws. The point is that something must change. The platforms will not willingly walk away from the money currently on the table, even if it destroys the very fabric of society, even if it prevents the resolution of the worst pandemic in modern history. So long as money keeps changing hands and funneling into Silicon Valley coffers, the broken model won’t change. It’s up to us to demand something better.

One Nation Under Surveillance, Our Corporate Overlords, Tech and Society

Consuming Surveillance

Our consumption habits are the root cause of pervasive surveillance, the erosion of democracy, and the threat of environmental disaster. Consumption is the main culprit driving the digital invasion that seeks to gather data about every aspect of our lives, from our browsing habits to our heartbeats. How is this so? Let me break it down. First, consider that the biggest source of surveillance for most (not all) people is advertising. All that logging, tracking, and predicting going on through seemingly every device and at every transaction is designed to hone micro-targeted advertising and other forms of precision marketing. Every search, every website visit, every app on your phone, that wearable device measuring your steps and your sleep, the chatty digital assistant that plays your favorite songs and dims your lights, social media (of course), your “cloud” – all of these are sites of persistent and increasing collection of what Harvard professor Shoshana Zuboff calls our “behavioral surplus.” As we act in the world, the evidence of those actions is gathered up. That is the “surplus,” and it contributes to what Zuboff calls a “hidden text” that describes the movements of our lives like a shadow. What is contained in that text is invisible to you and me, but luminous and valuable for others.

The reason for harvesting our behavioral surplus is to sell things – to us and to people we resemble. And not just the things we need, plus a few things we want, but an ever-increasing amount of these things. As it happens, our personal rates of consumption have been steadily increasing for at least a century. So much so that by the end of the twentieth century, Americans were consuming more than 17 times what they did in 1900, leading us to consume, at present estimates, between one-fifth and one-third of the world’s resources despite having only about one-twentieth of the world’s people.

The United States is not alone. As many formerly low-consumption countries have become wealthier, they have dramatically increased their consumption as well. This follows a certain logic: an increase in aggregate wealth gives businesses an incentive to provide goods and services in what free-market fans would term a “virtuous cycle,” where profitable production creates the financial capital to pay higher wages, which leads to more disposable income and demand for more goods and services. This apparent lifting of all boats might be fine if there were infinite resources to use as raw materials and as fuel for transportation and production, but there are not. Industrial growth has decimated the planet, depleting its resources, polluting the water and air, and generally leading us toward ecological disaster. And yet we carry on as if this were not the case. The boats are indeed lifting, but only because of the runoff from melting glaciers.

But how does this lead to surveillance? The phenomenal growth in consumption has been accompanied by a similarly phenomenal growth in consumer choice. In virtually every product category there may be dozens or even thousands of options. Each producer or service provider really wants your dollar, and they have to fight for it. Advertising is a multi-billion-dollar industry designed to help close this deal, and many developments in information technology have been brought to bear as the tools of this particular trade. The development of increasingly invasive and secretive surveillance techniques to capture the minutiae of our online and connected lives has been especially useful. As Zuboff tells it, Google pioneered the exploitation of behavioral surplus by employing sophisticated techniques, including artificial intelligence – first to analyze search queries. Search, as we now well know, turned out to be a remarkable wellspring of information about what people think, do, and plan to do. Google’s ingenuity led the company to figure out how to go beyond merely observing our habits to making very good predictions about them. They appear to be working on plans to go further and simply command our choices by manipulating what we see and when we see it, and they are not alone. Just as Google and its parent company Alphabet have developed a wide range of quality online tools and services – from maps to translators to word processing and data storage systems – to keep tabs on us at all times, Facebook has figured out how to keep us glued to our screens as a (last) means of maintaining social ties while harvesting, and then trafficking in, our behavioral surplus.

It is no coincidence that the rise in consumer surveillance has been accompanied by new and troubling forms of state surveillance. It makes sense, really. Technology companies, having discovered how to write the hidden text of our lives, found a willing customer for that text in our increasingly paranoid governments. Most of the surveillance technology used by local law enforcement is bought off the shelf from commercial firms large and small, including Amazon. The company built to service our every consumptive whim and need has expanded well beyond its retail position to sell all manner of surveillance equipment. In particular, Amazon is actively trying to corner the market in selling facial recognition systems to federal agencies, who are enthusiastic buyers. Meanwhile, recordings picked up by Alexa have found their way into criminal trials, demonstrating the effective demolition of the public/private divide by always-on, connected home devices, especially those standing by to take your retail orders. The phones we carry, the smart appliances in our homes, the vehicles we ride in – all these and more offer up the details of our lives to any buyer, public or private. Quaint concepts like search warrants and the expectation of privacy just wither away while we buy buy buy. Federal law enforcement, meanwhile, has been contracting with Google, Microsoft, and other tech giants for technical services for decades. All of these companies are banking on both the commercial and governmental business opportunities made possible by the stuff they specialize in: machine learning, facial recognition, data analytics, and so on. The techniques they designed to target advertising are easily converted into techniques for even darker forms of targeting. Predicting what you’ll buy is not all that different from predicting what else you’ll do, and oh-so-many people want to know.

None of this is your fault, of course. Economic and social forces well beyond our individual control have created the retail-surveillance state we now find ourselves in. What is true is that you, me, and everyone we know were easy marks. We want things. We crave convenience, efficiency, services, systems, tools – anything to impose order on a demanding world. Our lives are busy and our social connections have gone digital. Our health is concerning, so we track our fitness. Stores are a hassle and they’re disappearing anyway, so we shop online. There are too many movies to choose from, so we let Netflix choose. And of course, only the bravest or most stubborn among us can live without a smartphone. As we appear to gain a little space, a little human contact, a little leisure, we also lose – our privacy, our agency, and our planet. We live our life stories tucked into the bosom of our technological affordances and retail pleasures. Yet the story of our lives only seems to be written by ourselves. The rest of the story is a second text, a hidden text.

Our Corporate Overlords, Tech and Society

Feeling the Feels: Artificial Intelligence and the Question of Empathy

When I was a child, our family went through a series of shitty televisions. This was before most people had internet connections or personal computers, when a TV was a suburban family’s most immediate link to the outside world. So TVs mattered. We didn’t have much money, and a family friend, who happened to be an electronics tinkerer and a bit of a hoarder, would periodically trash-pick old TVs and give them to us. They were big boxy things with glowing tubes inside. The hand-me-down TVs worked for a while, and then they didn’t. Towards the end, there was typically a period in which some amount of banging on the side or top of the TV would get it to work or improve the picture for a little while, until we had to do it again. Often the banging was an act of frustration or anger. It felt like the TV was doing something intentionally, holding out on us, magnifying our helplessness and deprivation. I sometimes cried out while hitting the thing. It was cathartic, and I was miserable. And here’s the thing… some of those TVs gave the impression that they felt something. The picture would seem to twist or flash in response to the banging. Sometimes a good pounding on a nearly dead TV would produce satisfying sparks or smoke. It was as though the television felt pain. Not simply physical pain, but emotional hurt. Like we could make it share the sadness and frustration we were feeling, the same way an abuser or a bully inflicts pain in order to “share” his wretchedness with others.

The televisions didn’t feel pain, of course, and, as a fairly well-adjusted adult, I no longer beat up on defenseless technology… often. It wouldn’t matter much if I did, though (except for the replacement costs and some troubling implications about my mental health), because machines do not feel. Increasingly, we have technologies that see, hear, smell, and even sense touch and pressure, but they do not now, nor will they ever, have emotional feeling. And because they can’t experience emotion, specifically emotional pain, they are incapable of empathy. A well-crafted machine can certainly imitate emotion. Siri or Alexa can choose a voice-modulating algorithm to communicate concern, mockery, or hurt, and Jibo, the adorable table-top robot, has endearing face-like expressions and can coo like a precocious child, but it is all just algorithmic fakery. The machine feels nothing. It simply chooses a response type from a library of code and executes it without being itself affected in any way.

Why does it matter? There are certainly plenty of people who think it doesn’t, or at least not very much. Luciano Floridi, a philosophy professor at Oxford, has suggested that we overemphasize the specialness of human agency and reason. He revisits the famous “Turing Test,” which basically holds that if a computer can perform well enough to convince you that it is human, then it can no longer be thought of as merely a machine. Floridi wants us to realize that as machines—artificial intelligence—become capable of doing an increasing number of human-like things, the set of skills and practices we believe can be entrusted only to humans will become progressively smaller and may eventually vanish. The Science and Technology Studies scholar Bruno Latour has similarly suggested that we need to view objects as mundane as seat-belt chimes and hydraulic door closers as having human-like “agency” because such objects act in the world and shape human action. This line of thinking appears to be propelling the development of products and services that give machines significant power over people’s lives, based on assertions that machines perform as well as or better than humans at many tasks. Getting a job? Software is already being used to choose from a pool of candidates based on their resumes and social media profiles, using machine learning algorithms to make assertions about future performance and “fit.” There are even bots that can conduct job interviews. These approaches are being attempted well beyond hiring, for a broad range of decisions, from who should get a kidney to who should go to jail and for how long.

There are very likely numerous scenarios in which machines can do a better job than fallible, biased, or oblivious humans. There may even be demonstrable cases where software overcomes racist and sexist trends in key decision spaces. Even if this is true, there are also a lot of things a machine will never be able to do. Why? Because they lack empathy. Empathy is a core human trait that differentiates us from machines and motivates important human values like charity and the desire to relieve suffering. Unlike many animals, we even feel empathy for members of non-human species. We routinely agonize over what pets and farm animals feel and we worry about the injustice of picture windows as experienced by birds in flight. We also appear to empathize with inanimate objects like cars, and yes, devices with artificial intelligence, such as those smart assistants and even military robots. Humans are overflowing with empathy. It is empathy that causes us to care about homelessness in our cities, the poor health of strangers, and the fates of victims of far-off wars and famines even when we aren’t in direct contact with their struggles.

Empathy is particularly important to our human futures. We cannot simply decide who should live or die, who should suffer, or who deserves a second chance based on code libraries and the capability of giving you bad news with the appearance of regret. Being human means eventually having the experience of pain, which contributes to an ability to empathize with the suffering of others. Machines cannot have these experiences. This is why humans must always be a central part of important decisions that concern the well-being of other humans. But artificial intelligence has been proposed to make determinations in just those types of realms, such as in military matters. Despite vacuous claims about achieving “humane war” and other insane concepts, war is and should be entirely lacking in humanity. Machines will not improve it, only distance human actors from having to confront its awfulness. Similar discussions about handing off difficult decisions to artificial intelligence need to come to a full stop whenever they involve determinations about human fates. Will the autonomous car allow its owner to die to save others? Who will be liable when the healthcare robot amputates the wrong limb? Liability isn’t the only issue here, and neither is the project of figuring out how to program “morality” into AI. The autonomous device can never “care” who lives or dies, which limb should be removed, or how much anyone suffers. It can only make a calculation and then act in the world without paying an emotional price. Like a sociopath. This is not how moral decisions are made. Only humans can truly care about anyone or anything, and that is the fundamental basis of moral agency. Artificial intelligence, quite literally, has no skin in the game. And this is why artificial intelligence can never replace us.

Our Corporate Overlords, Power and Privilege, Tech and Society

Silicon Valley Joins the Culture War?

This past Sunday the web host and domain name registrar GoDaddy bowed to months of pressure from activists and told their longtime customer, The Daily Stormer, to go find another host for their website. On Monday, Google similarly denied a home to the white supremacist, Nazi-aligned website. A denigrating post about a young activist killed by an apparent neo-Nazi at a white nationalist rally in Charlottesville was the final straw that forced GoDaddy’s hand and, presumably, that of Google. In related news, The Los Angeles Times reported over the weekend that other Silicon Valley service providers, such as the short-term rental company Airbnb and the crowdfunding site Patreon, were blocking the use of their services by various “far-right” groups, forcing them to find other providers and, in some cases, to create their own. We should be proud to see Silicon Valley coming to the rescue and fighting on the right side of the culture war, right?

If only it were so simple.

There are several hard questions that should be asked about banning or allowing white nationalists and a whole range of other haters to use the internet to spread their messages, including those messages that strike fear into the hearts of many other users. There are also many important questions—or demands—that should be posed to Silicon Valley firms and our elected leaders to define what should and should not be construed as free expression and to place the burden of policing that in the right sets of hands.

The legal basics involved here are the U.S. Constitution and the “safe harbor” provision of the Communications Decency Act. The free expression guarantees of the First Amendment are frequently cited by white nationalist types as the legitimizing bases for their demonstrations and published hate speech, but they do not apply to services operated by non-governmental entities. In other words, the popular services of the internet, such as Airbnb, Twitter, YouTube, etc., have virtually no obligations under the U.S. Constitution. This means that internet platforms can block or allow nearly any type of expression, unless such expression is specifically addressed by law. Section 230 of the Communications Decency Act, otherwise known as the “safe harbor” provision, absolves providers of internet services, including ISPs, web hosts, media streaming services, and others, of liability for how their customers use those services. If a customer contributes content, it’s on the customer; the internet service is not to be construed as the “publisher” of the content. Section 230 has exceptions of course, but hate speech isn’t one of them.

In the wake of the violence in Charlottesville, many social media commentators were rightly upset at the various enablers of sites like The Daily Stormer and the many other services that provide any sort of comfort to white nationalists. However, as a legal matter, the sites have no constitutional or statutory obligation to do anything, and for the most part, they haven’t. A search on Facebook for “white pride” or similar terms will reveal lists of pages dedicated to white nationalists and the people who love them (really – there’s a white pride dating group). Google has come under fire for failing to prevent its search auto-complete algorithms from completing sentences like “Muslims are…” with “terrorists” and “the holocaust is” with “a hoax,” and similarly unhelpful constructions (which they’ve since improved). Twitter and Facebook have both come under fire from users and commentators who loudly complain that the platforms do far too little to prevent some of their users from engaging in relentless harassment, even when it includes threats. GoDaddy had been under pressure for months to distance itself from The Daily Stormer but chose to do nothing until some magic line was crossed this past weekend.

Silicon Valley’s failure to embody the role of steward for civil society should not be surprising, however. For one thing, it is not exactly clear whether it actually makes sense to empower corporations to carry the water of a society’s moral duties. Of course we want corporations to act morally, but as the power of corporations increases—particularly the corporations that are most visible in the internet/mobile sector—the power of non-commercial society seems to be decreasing. (The American companies Alphabet and Apple are worth, together, over $1.4 trillion.) The key questions to consider here include: Are we comfortable leaving the decisions about who gets to speak and who does not to enormous institutions that are generally unaccountable to society? How can we be sure that such choices will be made in the best interests of the public rather than to meet narrow, short-term business objectives? Given the increasing importance of the major internet platforms, such as Facebook and YouTube, as accessible and powerful venues for expression of all kinds, it seems obvious that the platform operators must bear some responsibility for what that expression does, regardless of how the law and regulatory environment are currently arranged. Yet it is sadly unclear how to make the execution of that responsibility align with the cherished values of electoral democracy and civil society. What is clear to me is that “the market” is not a sufficient incentive structure to ensure that socially beneficial speech and other forms of expression are adequately nurtured and protected.

In a democracy, elected officials such as mayors and governors have the power to determine what constitutes protected expression on the streets of our towns and cities. They will do this imperfectly of course, as seems to be the case in Charlottesville, where the city was warned repeatedly about the risk of violence. What is key here is that elected officials serve at the pleasure of their constituents and, in a functioning democracy, such constituents choose those officials and play at least a supporting role in the decisions they make. This process is by no means unproblematic, but it is well-established and can be influenced by the moral intuitions of voters and activists. Meanwhile, you and I and everyone we know have zero say about what GoDaddy does. Whoever guides the decisions there is far less likely to do so out of moral compunction or the fear of losing political power. Sure, we can get a Twitter gang together to shame them into taking an action against some hate publication, but that is not democracy. For one thing, GoDaddy is only likely to respond when they see money on the line. They didn’t see that until The Daily Stormer did something so vile in the wake of an attention-grabbing murder that they figured they couldn’t pretend to be “neutral” anymore without paying some price. And even though the end result was positive in my view, it wasn’t a democratic action and it will have no impact on what anyone else does. GoDaddy, Google, and Amazon Web Services probably host many hundreds of other websites that promote hatred and will continue to do so.

This should not be surprising, because this is the reality of what Silicon Valley does, and particularly what it does when it is enabled by a mix of free-market utopians and free-speech maximalists: It enables bullies. The information industry is built on a winner-take-all model that relentlessly removes revenue from communities and traditional capitalist activities, such as media distribution and street-level retail, and redistributes it into increasingly fewer (and richer) hands. In the service of this business model, Silicon Valley oracles celebrate ego-driven entrepreneurialism and denigrate steady jobs, equality, wealth-sharing, and any sort of collective action. The type of freedom that Silicon Valley celebrates is freedom for the strong, freedom for the already-got-some, and includes a full-throated claim that only a pure meritocracy that denies the inequities of history is fair or legitimate. Oddly enough, as revealed by the infamous memo by James Damore, Silicon Valley has no problem promoting discredited stereotypes about women and other “less-thans” who supposedly aren’t genetically wired to live up to the narrow standards of the engineering elite. All of this is important to note because when Silicon Valley boosters throw around bromides about the value of a free and open internet, what they mean might not be what you think it means.

Don’t get me wrong: I support a free and open internet, but I have my own definition of what that is. For one thing, when I think about free speech, I don’t think about it as completely unfettered, louder-is-freer free speech. Constitutional free speech isn’t wholly free and neither should it be online. For me, free speech is only effective when it doesn’t silence another legitimate speaker. When a cruel or threatening Twitter troll chases an LGBT activist or a game developer off of the platform (or out of their home), that is not an exercise in free speech, that is simply intolerable bullshit. It is vile and immoral behavior that deserves condemnation and little to no protection from the authorities. Now before you go and accuse me of hypocrisy by citing the acts of anti-fascists whose stated aims include silencing neo-Nazi types, note that I wrote that sentence about my view of free expression with care. I have some very specific ideas about what makes a speaker “legitimate.” Just as I claim that free speech is not speech that silences others, I also hold that a legitimate speaker is one who does not aim to deny basic human rights to others. White nationalists fail that test, as do speakers who denigrate women or describe others as inhuman and unworthy to live. Despite whatever non-violent benevolence some white supremacists may claim to espouse, history shows that the end goal of white supremacy is the exclusion, enslavement, or annihilation of non-Whites, including many categories of people with light skin who are otherwise deemed undesirable (Muslims, Jews, LGBT folks, etc.). This is not debatable. Nazi Germany happened. Hundreds of years of African slave trade happened. While we might group hardcore leftists in with other historical rights-deniers like Stalin’s Soviet Union, there is zero evidence that anti-fascists have gulags in mind as the end goal of their activism. Driving racial hatred back into the margins would probably suffice.

This leaves us with a conundrum. We have left the barn door open and allowed Silicon Valley to move the popular venues of expression from the community stage and the city street to their proprietary platforms, where they are guided not by constitutional or democratic principles but by terms of service strategically designed to maximize profits and offset risk. This is not the formulation for achieving a civil society. Extremist speech that is permitted to eddy and coalesce in seemingly ungovernable online spaces builds momentum and eventually spills out onto the streets and potentially into violence, as we have witnessed in Charlottesville, in Kansas, and in Portland. Free expression hasn’t lost its value or importance, but we can no longer allow governments to leave the management of free expression in the hands of those least qualified to handle it—internet platform providers. Broad and generous interpretations of the CDA safe harbor provisions and a misplaced application of constitutional principles to venues that bear no constitutional obligations—or really any obligations to anyone—have pushed society to the brink of chaos. It’s time we focused not only on how to keep the internet “free and open” but also on how to make it fair and inclusive, and ultimately, just.

Who is actually made safer or freer by safe harbors? Do you feel safer? I, for one, do not.

Our Corporate Overlords, Technology and The Law

Packingham: The Danger of Confusing Cyberspace with Public Space

A recently decided Supreme Court case has triggered a debate about how much (or little) governments can regulate the use of online spaces. Specifically, in Packingham v. North Carolina, a case about a state’s prohibition on social media use by sex offenders, the court has weighed in with an opinion that would seem to suggest that social media sites and services are no different than streets or parks where the First Amendment is concerned. While I tentatively agree with the majority that the government should not issue sweeping restrictions on internet access based on an individual’s criminal record, justifying this position by portraying internet sites and services as public space is misleading and, in my opinion, dangerously naïve. As if he had just read the collected essays of John Perry Barlow, Justice Anthony Kennedy writes in the majority opinion: “in the past there may have been difficulty in identifying the most important places (in a spatial sense) for the exchange of views, today the answer is clear. It is cyberspace…” Kennedy correctly asserts that ‘cyberspace’ plays an increasingly important role in people’s lives, but he overlooks how the spaces and places provided by the internet are fundamentally different from those that can more accurately be described as public spaces, such as streets and parks.

On Facebook, for example, users can debate religion and politics with their friends and neighbors or share vacation photos. On LinkedIn, users can look for work, advertise for employees, or review tips on entrepreneurship. And on Twitter, users can petition their elected representatives and otherwise engage with them in a direct manner. Indeed, Governors in all 50 States and almost every Member of Congress have set up accounts for this purpose…In short, social media users employ these websites to engage in a wide array of protected First Amendment activity (emphasis added).

Like many observers who have written paeans to the free-wheeling uses and democratizing potential of the internet, the majority opinion in Packingham demonstrates an ill-informed exuberance about the freedoms enjoyed by users of social media platforms. Even Justice Samuel Alito in his concurrence with the majority criticizes what he calls the court’s “loose rhetoric,” stating, “there are important differences between cyberspace and the physical world…” Yet Alito only criticizes the breadth of Kennedy’s claims while similarly failing to recognize the myriad ways our civil rights cannot be asserted on the internet. The resulting opinion promotes a popular but inaccurate narrative about the beneficence and neutrality of the internet in general, and social media platforms in particular.

Let’s be abundantly clear: social media sites and services are not public spaces and those who use them are not free to use them as they please. Social media platforms are wholly owned and tightly controlled by commercial entities who derive profit from how they are used. While, as is argued in Packingham, governments may be limited as to the extent they can tailor regulations over the access or use of an internet resource, social media users are already subject to the potentially sweeping choices made by site operators. Through a combination of architecture (code) and policies (terms of service), social media users are guided and constrained in what they can do or say. Twitter, Facebook, and other platforms routinely block users and delete content that would most likely be considered protected speech if it took place in a public venue. So, while we can probably agree that social media platforms have become central to the social lives of many millions of people, this means only that these services are popular. It does not make them public.

Justice Kennedy attempted to link the free speech rights that have been upheld in cases concerning other venues, such as airports, with the rights that should be available on the internet. While I do not disagree that the full extent of our constitutional protections should be available in online venues, the generally unregulated status of the internet and the commercial ownership of most of its infrastructure mean that cyberspace bears very little resemblance to ‘realspace.’ Airports, for example, are public institutions operated by government agencies. A social media site—almost the entire internet now—is more like a shopping mall. In much the same way that social media platforms reproduce features of life in public places like city streets, shopping malls only mimic the interactive spaces they have come to supplant. A mall is neither street nor park. Different rules—and laws—apply to malls. When the Mall of America near Minneapolis shut down a Black Lives Matter protest in December, the mall operators were able to assert their property rights over the expressive and assembly rights of the protestors. A municipality would have risked a civil rights lawsuit had it broken up a peaceful protest on a city sidewalk or in a public park.

Packingham is a case about constitutional rights that overlooks the increasing privatization of those rights. It is also part of a larger problem of misrepresenting cyberspace as a zone of freedom. This transformation in our relationships to rights, and our perceptions about those rights, is aided by the invisibility of power online. Facebook, Twitter, etc., by providing expressive spaces in which their users supply the visible content, do not much appear to us as actors in this drama. We are led to believe that they simply provide appealing services that we get to use so long as we follow some seemingly benign ground rules. We fail to recognize that those rules are designed not for the best interests of users, but for the goals of the platforms themselves and their advertisers. Facebook in particular has worked hard to encourage dramatic changes in human social behavior that have enabled it to gain deep knowledge about its users and to monetize that knowledge.

Justice Kennedy’s opinion is especially irksome because, while it purports to preserve important rights as our lives migrate online, it overlooks the distressing trend of privatization of the very rights that the constitution promotes. Yes, we may engage in First Amendment activities online without undue interference by government officials, but the ability to do so is not guaranteed by the government because the government is barely involved. Ever since the internet ceased being a project of the Department of Defense, most of it has been privately owned and the government has avoided regulating most of the activities that take place there. While it may be true that an unregulated internet is a good thing, a side effect of this approach has been the growth of enormously powerful online businesses based on manipulating and spying on users and profiting from the resulting data. Every single communication and transaction that takes place on the internet passes through infrastructure belonging to dozens, even hundreds, of private companies, any of which may assert its own combination of architectural and policy restrictions on how that infrastructure is used. Where it suits a company to operate with total neutrality and openness, it does so. When it does not, it acts in whatever manner suits the bottom line. Facebook, for example, is frequently lauded for its capacity to support political organizing as well as other modes of First Amendment activity. But if Facebook decided tomorrow to block access to an NAACP page or to prevent the use of its messaging system to organize a legal street protest, there is nothing but the potential for consumer backlash to prevent it from doing so. If Google decided to choose the next U.S. president by subtly shaping “personalized” search results, there are no laws on the books to prevent it. Packingham says nothing about this kind of power over free expression, which dwarfs that of the government when it comes to online activity.
Until the government and the courts begin to address the privatization of our rights online, court opinions celebrating our online freedoms will continue to ring hollow while amplifying perceptions of government irrelevance in the internet age.


Our Corporate Overlords, Power and Privilege, Tech and Society

The Commodification of People

Among the many ways so-called Big Data is influencing our lives, quantification and predictive analytics are beginning to play a significant role in how people are selected for opportunities, such as jobs, homes, romance, sex, insurance, and so on, replacing the vagaries of human judgment with seemingly objective and reliable analytic scorecards and labels. The same profusion of data that flows from your interactions with the networked and surveilled world, and which results in all those “personalized” ads you routinely encounter, can also be used to evaluate and grade you as a person. Your daily experiences and interactions with websites, mobile apps, credit card processors, eBook readers, cell-phone carriers, security cameras, etc. leave data trails that are routinely and tirelessly hoovered up to supply the information economy with the raw material of user profiling (but you already knew that, right?). But beyond the now familiar goal of these activities to simply sell you stuff lies a larger information dream: Using data about you to thoroughly understand what makes you tick and using that understanding to predict your future. Opportunity gatekeepers, such as landlords and employers, find this dream very attractive. Business objectives drive gatekeepers to seek out any and all means to maximize efficiency in their operations and reduce their levels of uncertainty and risk. Quantifying people into gradable categories like bushels of rice with consistent and predictable quality is an intoxicating product offering for decision makers, and the data industry is prepared to meet (and create) that demand.
By aggregating your prior preferences and behaviors and comparing them to the preferences, behaviors, and choices of thousands of similar people, a motivated data processor and her algorithms attempt to make a range of predictions about your life, getting out ahead of the uncertainties of evaluating people based on what they self-report or provide through their chosen references.
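As a rough illustration of the technique, here is a minimal nearest-neighbor sketch of this kind of prediction. The feature names, the data, and the “outcome” scores are all invented for illustration; real reputation systems are far more elaborate (and opaque), but many rest on the same logic of judging you by the people your data resembles:

```python
import math

# Invented behavioral vectors: (late_payments, job_changes, posts_per_week),
# each paired with an observed outcome score. Purely illustrative data.
past_profiles = [
    ((0, 1, 3), 0.9),
    ((4, 5, 20), 0.3),
    ((1, 2, 5), 0.8),
    ((5, 6, 30), 0.2),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict(profile, k=2):
    """Average the outcomes of the k most similar past profiles."""
    nearest = sorted(past_profiles, key=lambda p: distance(profile, p[0]))[:k]
    return sum(score for _, score in nearest) / k

# A new person is judged by the company she keeps in feature space.
print(round(predict((1, 1, 4)), 2))
```

Note that the person being scored never supplies the training data, never sees it, and cannot contest it.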

But there’s a problem. Quantifying people is not nearly as easy as quantifying grain. Quantification requires standardization, but people aren’t standardized and the data collection methods we have for analyzing people aren’t perfect. So, shortcuts have to be made and proxies must be used to reduce the rich complexity of human experience into discrete buckets. The first reduction comes in the form of the data that is used. Despite the fact that our lives are increasingly observed and analyzed, the domains and methods of observation come pre-loaded with certain biases. Tracking what books you read with a Kindle (or other eBook reader) requires, first, that you own a Kindle or use the Kindle app. This already eliminates that data point from consideration for all the people who stubbornly continue to read printed books or who choose to spend their limited incomes in more practical ways. Here we see how the way one chooses to engage with the data ecology might shape her profile. The varied choices people make about participating in social media are similarly influential in profile development, as evidenced by the increasing number of data products that use social media data as inputs (see this for a chilling example).

The data industry also makes use of the open records policies of government agencies to build their profiles. Some types of public records, such as arrest records, tend to reflect negatively on people of color and the poor. For example, there is an abundance of evidence that drugs and weapons laws are routinely violated by people across demographic lines, but African American men are more likely to be arrested and convicted for violations (see this, and this for examples). As a result, evaluating people based on their criminal histories doesn’t necessarily tell the kind of nuanced story that leads to complete knowledge. These two examples (and there are many others) suggest that the construction of the data regime may not be quite as objective and reliable for judging people as we think. In fact, it appears to favor people of privilege – those who can afford to participate richly in the data economy (and choose to) and those for whom readily-available derogatory data is less likely to be discovered.

In addition to understanding how the formation of user profiles might be flawed and unfair, I am also interested in why economic/social gatekeepers are so keen on using analytics to make decisions about people in the first place. And this brings me to the work of Albert Borgmann, who writes about the “hyperactivity” of modern society. Borgmann describes a hyperactive society as one that is constantly “mobilized” against the perceived threat of economic ruin. This mobilization has three key features: the suspension of civility, rule of a vanguard, and the subordination of civilians. It is in that third feature that I detect what I would label the “precarity” of the modern worker. Despite our cultural mythologies in the U.S. and elsewhere about how hard work and dedication inevitably lead to riches and success, and in spite of the tremendous wealth our society has created, we have seen in recent decades increasing social and economic inequality and the loss of stable work opportunities for ordinary people due to changes in a variety of structural economic conditions. There are many reasons for these changes, but one of the results is that those with the power to make important decisions about our lives seem to have considerably more power and incentive now to exploit what Borgmann refers to as the “disposability of the noncombatant work force.” In short, the incentives are high to reduce the work force as much as possible, and the moral precepts of capitalism do not offer much resistance to doing so. The resulting precarity of work in our society leads to increased competition among workers. In order to survive in this mobilized society, we are basically forced to compete for increasingly scarce resources rather than to join together to challenge the sources (real and imagined) of the scarcity.

While Borgmann tells us something about societal forces that contribute to interpersonal competition for scarce opportunities, another author, James Carey, sheds light on how information systems have provided the means to commodify human beings. Writing in 1989 (but eerily prescient), Carey examined the dramatic social and economic changes wrought by the first electronic mass communication medium: the telegraph. The telegraph was the first technology capable of detaching information from physical objects and constraints, increasing the ability of traders of every stripe to abstract physical objects into symbols for exchange. With the telegraph, information about the world could travel much faster than any messenger or machine, breaking down prior barriers of time and space. This change in the temporal and physical reach of communication increased a business person’s pool of potential partners, making direct personal experience with each one impossible. As a result, new methods of evaluating strangers had to emerge. This can be linked to another of Carey’s observations about a separate byproduct of electronic communication: the commodification of goods. Carey argues that the emergence of the commodities futures markets was tied to the linking of buyers and sellers regionally and nationally by the telegraph. It became possible to trade goods, such as bushels of wheat, by lots aggregated from dozens or hundreds of sources rather than dealing directly with the individual producers. This practice required the development of standardized grading systems that could be applied to quantities of goods from diverse sources. These seemingly unrelated byproducts of communications technology–the emergence of impersonal business dealings requiring new methods of personal assessment and the invention of the commodities trade that massed and standardized diverse goods into quality categories–set the stage for the emerging commodification of people.
In the modern setting, the ability to post a job ad or a dating profile potentially viewable by millions of people means that the “seller” must be able to rapidly sort through dozens, hundreds, or thousands of applicants. The ability to judge candidates individually becomes impossible. Here we see the origins of the reputation industry and commodification of people: Why not employ algorithms to sort them into quality categories as if they were bushels of grain?
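The grain analogy is nearly literal. A minimal sketch, assuming (hypothetically) that each applicant has already been reduced to a single numeric score; the thresholds, grade labels, and applicant names are all invented for illustration:

```python
# A hypothetical commodity-style grading of applicants: once each person
# has been reduced to a score, sorting thousands of them into quality
# grades is trivial -- which is precisely the appeal to a gatekeeper.
# Thresholds and labels are invented, not taken from any real product.

GRADE_THRESHOLDS = [(0.8, "Grade A"), (0.6, "Grade B"), (0.4, "Grade C")]

def grade(score: float) -> str:
    """Assign a quality grade, exactly as one would grade lots of wheat."""
    for threshold, label in GRADE_THRESHOLDS:
        if score >= threshold:
            return label
    return "Rejected"

applicants = {"applicant_001": 0.91, "applicant_002": 0.55, "applicant_003": 0.12}
for name, score in applicants.items():
    print(name, grade(score))
```

The efficiency is real; so is the erasure. Everything that made each applicant an individual disappeared before this function was ever called.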

How this operates in practice is complex, but one thing is certain: the precarity of position and the perception that resources are scarce motivate people to sacrifice their own freedoms to gain an edge. People will give up their privacy and otherwise adjust their lives to please opportunity gatekeepers in order to get ahead. A telling example comes from the insurance market where, in exchange for rate reductions, people install data devices in their cars that monitor and report their driving habits to insurers. Even more invasive, people are sharing the data collected by their health-tracking wearables for similar incentives. Economists call this practice “signaling.” While granting explicit consent to monitor specific activities is a very obvious type of signaling, there are other means of signaling that are a bit more complex, but not too complex for analytics algorithms to notice. Social media activity provides a rich assortment of signals about one’s life, including family composition, health events, employment satisfaction, and financial stability, among others. A few banks are confident enough about what they can learn from social media that they are basing credit decisions on it (see this and this). As the practice of monitoring social media use to assess one’s worthiness for loans and other opportunities becomes commonplace, it’s not hard to imagine how that may influence how people use social media and therefore how they socialize in general.
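A toy sketch of the insurance example may make the trade concrete. The base premium, behavior thresholds, and pricing rule below are entirely invented, since real telematics pricing models are proprietary; the point is only the shape of the bargain, surveillance in exchange for a discount:

```python
# Hypothetical usage-based insurance "signaling": a driver trades monitored
# driving data for a rate discount. All numbers here are invented.

BASE_PREMIUM = 1200.0  # dollars per year, purely illustrative

def telematics_premium(hard_brakes_per_100mi: float, night_miles_pct: float) -> float:
    """Return a premium discounted for 'good' monitored behavior."""
    discount = 0.15  # maximum discount simply for consenting to be watched
    if hard_brakes_per_100mi > 2:
        discount -= 0.05  # penalized for aggressive braking
    if night_miles_pct > 0.25:
        discount -= 0.05  # penalized for risky night driving
    return BASE_PREMIUM * (1 - max(discount, 0.0))

print(telematics_premium(1.0, 0.10))  # the well-behaved (and observed) pay less
```

Notice that the largest share of the discount rewards the act of consenting to monitoring itself, which is exactly the dynamic described above.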

There are many reasons why this matters. For one, it represents a progressive rebalancing of information flows. Economists have long rued the “information asymmetry” in buyer-seller transactions, in which the seller uses her deeper knowledge of a good for sale to the potential disadvantage of the buyer. However, one man’s market inefficiency is another’s defense in a world of outsized power imbalances. If the seller is an applicant for a job at a large corporation, she is arrayed against the titanic power of that institution. Being able to assume some measure of control over the hiring process could be the last semi-free act of her career. Meanwhile, the corporation’s goals are to avoid risk, by choosing the candidate least likely to harm the firm, and to increase efficiency, by streamlining the process of choosing from among a pool of candidates. Commodification of the candidate serves the corporation well, but may disadvantage the candidate if she cannot control the sources and biases of the information used to categorize her. As the reputation industry matures, and more and more choices about who gets what opportunity are determined by abstracting people into symbols and treating them like graded commodities, the growing disempowerment of opportunity seekers will emerge as the crowning achievement of information technology: the commodification of precarious lives.

Our Corporate Overlords, Tech and Society

Blame the Election On Facebook (in part anyway)

Donald Trump won the U.S. presidential election last night. This is terrible news for the country and I am horrified by his victory. In particular, I’m having trouble processing his obviously widespread support given the many negative attributes he has displayed throughout his life and during the election. While we’re looking around for whom to blame for how things turned out (and there will be plenty of finger-pointing), I believe Facebook, Twitter, Google, etc. and the entire culture of information “personalization” should be counted among the blameworthy. True, there are a number of complex sociological factors in play in any election, and the combination of Trump’s celebrity appeal and populist messaging seems to have had a powerful effect on a lot of people. But here’s the thing: many of us were unprepared for this result. We looked on with wonder as Trump won the primaries and went on to become a popular candidate in the general election. Did you, like me, experience ongoing shock and disbelief over Trump’s consistently competitive poll numbers even after allegations of sexual assault and an array of other deeply negative revelations about him? If so, it might be because you and I live in a media bubble built out of algorithmic profiling, an echo chamber designed to soothe us with an overwhelming number of messages that we agree with, or that are pretty dang close.

When you view your Facebook “newsfeed,” you’re not viewing every post of every person you are connected to on the network. Facebook’s learning algorithms access thousands of data points about your past behavior on Facebook and your interactions with other websites, merchants, and mobile services to identify your tastes and preferences. The resulting newsfeed each of us sees contains only the posts that Facebook believes are the most “relevant” to us. On Facebook, we’re connected mainly with people we have identified as “friends,” and what we hear from them (and they from us) is winnowed down into a preference-focused feed that may not even include the contrary views of people we know. When we perform searches on Google, another collection of personalization algorithms massages our search results to conform to what the system believes each of us wants to see. Twitter users choose whose posts to follow, enabling them to curate their information sources into narrowly defined subjects and communities. In my Twitter feed, I follow only academics and research institutions working in my field, plus a few journalists and news sources whose reporting appeals to me.
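The mechanics described above can be sketched in a few lines. This is a deliberately simplified toy, not Facebook's actual ranking system – the function names, topic labels, and weights are all invented – but it shows the essential move: score posts against a profile of inferred preferences, then show only what clears a threshold.

```python
# Hypothetical sketch of preference-driven feed filtering.
# All names, topics, and weights are invented for illustration.

def relevance(post: dict, user_prefs: dict) -> float:
    """Score a post by how well its topics match the user's inferred preferences."""
    return sum(user_prefs.get(topic, 0.0) for topic in post["topics"])

def build_feed(posts: list, user_prefs: dict, threshold: float = 0.5) -> list:
    """Return only posts scoring above the threshold, highest first.
    Everything below the cutoff -- including contrary views -- never appears."""
    scored = [(relevance(p, user_prefs), p) for p in posts]
    return [p for s, p in sorted(scored, key=lambda x: -x[0]) if s > threshold]

# A profile inferred from past clicks: positive weights for things
# the user engages with, negative for things that cause discomfort.
prefs = {"candidate_x": 0.9, "cats": 0.6, "candidate_y": -0.8}

posts = [
    {"id": 1, "topics": ["candidate_x"]},
    {"id": 2, "topics": ["candidate_y"]},  # the contrary view
    {"id": 3, "topics": ["cats"]},
]

feed = build_feed(posts, prefs)
print([p["id"] for p in feed])  # [1, 3] -- post 2 is silently filtered out
```

Note what the user experiences: not a visible act of censorship, but a feed in which the disagreeable post simply never existed.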

Getting news this way is completely different from traditional journalism, where the goal, ideally, is to provide readers and viewers with a diverse range of ideas and multiple viewpoints. On commercial information services, the information we receive is narrowly restricted and designed to please each of us individually. (Much of this will not be news to anyone who has read Eli Pariser’s “The Filter Bubble.”) The goal of customizing our various search results, feeds, and follows is to keep us online, engaged with whichever service we’re using, clicking links, viewing ads, buying things. The more time we remain engaged this way, the more information about our preferences and inclinations can be translated into advertising dollars. The result of all this customization is that each of us experiences an information flow very different from that of the people who disagree with us – a flow designed to keep each consumer engaged and to limit any feelings of discomfort. If you think Hillary Clinton is dishonest, that belief is likely reflected in your online media choices and personalizations, and you’re unlikely to see posts or articles that champion her as a person of integrity.

Democratic deliberation requires the airing of a plurality of ideas and room for meaningful debate on the merits. It is still true that people are more likely to find common ground and back down from extreme positions if given the chance to truly understand each other. It is also true that customized information sources are as likely as not to include easily disputed rumors and distortions that would be exposed if more viewpoints were available for consideration. But that is not what is happening. Unfortunately for democratic deliberation, the discomforting effects of stories and worldviews that don’t conform to our biases are bad for the online business model. If the goal is to keep people where they are, engaged and consuming what you’re offering, it doesn’t make business sense to question or challenge them and their version of reality.

Our media elites used to do a decent job of providing us with a plurality of views. Traditional journalism is far from perfect; media biases and filters are not new. But there were (and still are) journalistic institutions dedicated to reporting more fact than rumor and to presenting multiple viewpoints on contentious questions. When that system was more functional than it is now, while I might not agree with the opposing viewpoint, I could at least come to understand and engage with it. Similarly, people on the other side of an issue might come to understand a piece of my truth. But traditional journalism is in decline. Fewer of us are relying on well-established media sources that can legitimately claim to be objective or balanced. One outgrowth of this is that some of the remaining media institutions have become clownish and shallow, more interested in salacious gossip and in pleasing political leaders in return for “access” than in soberly analyzing their views and statements.

As my old friend David Newhoff points out in his blog, viewing the world through the filter of commercial information platforms, including social media, makes it “very hard to distinguish between being vigilantly informed and hysterically manipulated.” As more of us come to get most of our political news from these platforms, whose shared mission is to harvest and monetize information from us rather than to inform us, we will continue to fail at gaining a thorough understanding of what comes blasting out of the fire hose. Still more problematic, we’ll also continue to fail to truly understand what the other side actually thinks, resorting instead to caricatures and hyperbole. We are going to see this filtering effect repeated again and again, and it will make us weaker advocates for our causes and candidates. This is not making us smarter. It is making us naive and vulnerable.

While we’re busy pontificating (myself included) on social media about our views and sharing our carefully curated information tidbits with our online followers and friends, remember that this narrowly focused information sharing is a central problem for political discourse. Despite the potential for sharing our views with more people than most of us could have hoped to before these platforms existed, the intentional limiting of our feeds and searches by platform operators means that what we say, do, and seek in the information space is not likely to escape the comfort of our individual echo chambers. We’re just yelling at ourselves while generating revenue for others and carving out ever-tinier slices of an increasingly subjective reality.

Our Corporate Overlords, Philosophy

Is Western Philosophy Irrelevant?

In a thought-provoking opinion piece by Robert Frodeman and Adam Briggle, the authors argue that modern philosophy has engineered its own irrelevance by retreating into elite research universities where it has struggled, and largely failed, to compete with the positivist natural sciences around which the modern university was built. Rather than remaining a democratically situated facet of daily life and a familiar component of all manner of intellectual inquiry, Philosophy, as a discipline, surrendered to the institutional pressures to specialize and to endlessly “produce” new knowledge.

I agree that one reason Philosophy has been marginalized is the same disciplinary, epistemological tension that animates ongoing debates about scientific rigor and results between scholars of the “hard sciences,” like Chemistry and Computer Science, and the “soft sciences,” such as Sociology and, of course, Philosophy. However, I would offer that the rising eminence of the applied philosopher, aka “the ethicist,” as a feature of research teams, board rooms, and news columns counters some of the trends cited by the authors. Applied Ethics is, in my opinion, a move towards the re-democratization of philosophy, and it has risen in eminence due to growing discomfort with the abuses of increasingly powerful and influential institutions (e.g., Enron, Wall Street, etc.). The main limitation there, though, is the same one that has neutered moral accountability within the natural sciences: capitalism. Boeing, Amazon, and Google may employ ethicists (unlikely at Amazon, actually), but can their work make those institutions “good”? Or do they simply provide cover and PR talking points for business as usual?

So, my question back to these authors, which is perhaps addressed in their book: where is there actually space in our society for philosophers outside of the research university? The academy appears to be the last refuge for anyone who doubts that a society based entirely on free markets and social Darwinism is necessarily a good society (and that refuge is shrinking). Who would have the philosophers now except for self-help book readers, cable news-segment producers, and public relations “damage control” teams? Maybe it’s not just that Philosophy felt the need to ape and compete with the natural sciences and so retreated to its own ivory tower; maybe it was driven out of industrialized consumer society, dismissed as vaguely interesting and quaint, but mainly inconvenient. (Modern religions could lodge a similar complaint, but their troubles are complicated by widespread hypocrisy and intolerance.) Academic institutions may not be ideal or particularly democratic places, but they do preserve the realm of free thinking about ideal societal models in an era when self-interest, greed, and accumulation appear to be the ascendant societal values, while harmony, charity, and compassion are viewed as weak.

It’s become fashionable to trash the academy as an ossified, siloed, nearly irrelevant institution that produces far less “value” than the resources it takes in. Many of those critiques are of course quite valid. Universities and their cultures are not at all above reproach and should be the subject of ongoing scrutiny by their communities. But these critiques occur at the same moment that bootstrap-pulling libertarians demand that our educational institutions make a “business case” for their existence and prove their “relevance.” The creeping corporatization of universities, particularly grant-driven-but-otherwise-cash-strapped public universities, is, in part, a quest to delegitimize the types of inquiry that do not lead to the creation of new consumer products but instead challenge existing power structures by questioning the current arrangement of society. Like the dissent of Socrates and Martin Luther, independent inquiry and public declarations of dissent make the powerful very uncomfortable. That is the value of so-called “academic freedom” which, when it actually functions and enables truly free expression, allows thoughtful people to boldly challenge the status quo. Philosophy deserves a place in society. Blaming the discipline for its balkanization may, in part, be a case of blaming the victim.

Our Corporate Overlords, Tech and Society

Groundwork to a Rhetoric of Technology

I recently had the opportunity to learn about a field known as the “rhetoric of science,” which is the study of the discourse around scientific topics. While the word “rhetoric” is often thought of as a pejorative, here it is a neutral term that broadly describes how we go about trying to persuade each other to a point of view using words and, quite often, do so by targeting emotions and assumptions. Each of us uses rhetoric pretty much daily, from technical arguments about politics to mundane negotiations about household responsibilities. We spend a great deal of time in conversation “making a point,” which is another way of saying that we try to persuade others that we’re right about something. In studying the rhetoric of science, scholars seek to understand how science is described, debated, and understood (and frequently misunderstood). Rhetorics of science frequently affect how specific research or an entire science is perceived, and they can affect how future work is conducted. An example is the examination of how topics like climate change and human evolution, both of which are firmly settled questions within the scientific community, have been successfully portrayed by activists as ongoing debates. Another example is the study of how simple metaphors are used to describe deeply technical topics like genomics. (Is DNA really a “map” or a “blueprint” of a gene?) Consider the recent controversy concerning Planned Parenthood and the alleged sale of fetal tissue to researchers for profit. A fiercely politicized discourse has been employed to depict a fairly routine activity—the use of human tissue for research—as something deeply nefarious. The rhetoric of science is a fascinating research area, one that we are all engaged in whenever we consume media on scientific topics, which happens with increasing frequency thanks to the ease with which information and misinformation rapidly spreads via social media and cable news programs.

My brief introduction to the rhetoric of science caused me to redirect my thinking about how we talk about information systems and technology. If you know me or have been reading this blog, it probably won’t surprise you to learn that I have been labeled a technology “skeptic.” There’s an example of a rhetorical move right there. I think “skeptic”—which has a mix of connotations, some of them pretty negative—isn’t quite the right word. What I feel is that common portrayals of modern technology in our public discourse lack a satisfying amount of questioning and thoughtfulness. This shouldn’t be too surprising since, after all, most of what we hear about popular technologies comes, either directly or through proxies, from the giant corporations that make them. We learn most of what we know about iPhones from Apple (and its many allies), social media from Facebook (and its many admirers), and so on. Those who craft the messages we most often hear on these topics exploit the fact that most of us are readily impressed by sleek designs and technological novelty. While the twittersphere may contain an abundance of contrarian voices on technology topics, you kind of have to want to hear them to find them, and even then, credibility is hard to establish. It’s easy to dismiss critics as uninformed, puritanical, or simply “no fun” (consider this blog, for example). I think it’s safe to say that the most consistent and well-crafted information we hear about most technologies comes from marketers. Whether it arrives in the form of a slick advertisement, or through something more viral, like blog posts and cable news appearances by various spokespersons and consultants, the major channels of communication still favor those with the deepest pockets and largest marketing infrastructures. There are people who get paid really well to spin great stories about a direct link between new technologies and human flourishing and they do it very well. 
Even seemingly neutral information sources like news programs often lack introspection, choosing instead to offer breathless “reviews” of new technologies that withhold any criticism that might alienate an audience generally awestruck by the latest gadgets and apps.

What I’m planning to do with all this is to start looking at some of the common tropes and stylistic moves that tech-evangelists use to convince their audiences about the seeming promise and inevitability of tech-mediated living. Examining the metaphors is one way to do this. Words like “interactive” and “disruptive” deserve closer inspection. So does the term “social media,” for that matter. In each case, we should be asking: what do these words mean to an audience? And do they accurately describe the states and changes they are employed to describe? The use of terms that invoke freedom and choice has a long history of association with market-based thinking, and such terms have become even more pervasive in the rhetoric of Silicon Valley. What is a “free” app, exactly? What range of “choices” do we actually have in selecting and using information technology?

My main concern is that as we move from living and interacting in physical space and in real time—on the street, in the park, in the auditorium—to online existences where we interact using social media, augmented reality, gaming, and so on, we are moving into spaces not only mediated by technology but easily manipulated by the corporations that make the technology. Interactive spaces made by corporations are not agenda-less spaces. They contain (and are) rhetorics designed to persuade. One look at the default screen of an iPhone offers numerous clues as to the priorities of Apple, which likely do not conform entirely to yours. Wherever possible, companies that can hold your attention will seek to convince you to use more of their products and services and, as often as possible, will reinforce their tech-focused, consumerist worldviews. As more and more of the information we receive is “curated” for us by the algorithms that select, say, your personalized Google search results, there is a real risk that powerful voices will dominate and hijack your access to information. Consider for a moment what could happen if Eric Schmidt, Google’s longtime chief executive, decided that he really wanted Donald Trump to be the next president. How much tinkering would it take to subtly change your search results to present the most sympathetic accounts of Trump and his views? Technology companies have access to enormously powerful rhetorical tools. Our actual freedoms and choices may well depend on how attentive and aware we are of that.

This is a topic I plan to return to. For now, I invite you to do what I’m doing, which is to listen closely to the words that get used in any conversation about information systems and technology (including mine) and seek out the meta in the conversation. What shorthand is used to describe complex, socially impactful developments? How are contrary voices characterized? You may start to make some very interesting observations. You may find yourself becoming something of a critic. Who knows? You might even become a “Luddite.”