Power and Privilege, Tech and Society

Let Someone(thing) Else Decide

When times are hard and all around us is evidence of human frailty, many of us dream of reorganizing the world into an ideal state using logic and purely rational thinking. The past year has been a hard one for me. I suffered a devastating personal loss, I limped to the conclusion of my PhD, and I moved thousands of miles away to chase a job while leaving behind the communities that have long sustained me. In the background have been all the shared and personal hardships of a global pandemic along with a climate disaster and deteriorating geopolitical conditions. Add the acceleration of real and imagined conspiracies and deceits and some truly scary and disappointing world leaders and stir. One could be forgiven for losing hope that we–by which I mean any substantial collective body–are capable of making good decisions for ourselves and the world.

Recently, I was talking about artificial intelligence with some truly smart, thoughtful people, including my friend and colleague David Newhoff, who posed this provocation: “I have always been skeptical, if not cynical, about ceding human agency to technology, and this despite being rather cynical about human beings. Lately, as we are forced to accept just how fragile liberal democracy is—because it is clearly too difficult for many humans to preserve—I admit that I have lately wondered whether risking governance by AI would not be preferable.”

It’s important to note that David is no tech-utopian–quite the opposite, actually–and this provocation is meant to express skepticism about human nature rather than offer a sincere proposal. But I think we can all relate to sometimes wishing there were something better than us to take over the job of taking care of things. Recent evidence suggests that despite centuries of post-enlightenment faith in human reason, we have not achieved the flourishing of humanity that was promised based purely on our best intentions and essential “good will.” Lately, we haven’t even lived up to the very imperfect ideals of western democracy, which, having failed to achieve race or gender justice, not to mention lasting economic justice, in the centuries since it took hold, seems like it could be heading us towards oligarchic despotism through the ballot box. In light of these anxieties, isn’t it tempting to believe that a utility-maximizing AI would make better choices about desperate migrants, the climate disaster, and the distribution of food, medicine, and education than short-sighted and selfish humans? Couldn’t such systems nudge or direct us away from a colonial, white-supremacist, billionaire-centric future to one of liberation and shared prosperity?

You already know the answer I am heading for here, which is ‘no.’ And the reason is deceptively simple: our technologies–all of them–are not better than us. They are us. To quote Tarleton Gillespie, “technology is culture rendered durable.” In other words, everything that we experience as culture, including all that is both right and wrong with society, is present and reproduced in our technology, and this is true no matter how “smart” or “intelligent” that technology appears to be. In particular, we often get consumed by visions of artificial intelligence technologies as animate, quasi-humanoids that execute decisions using pure logic, with complete information, and to the highest standard of care. This description is even true of the dystopic visions, which typically retain the idea that AI beings are so much smarter and more capable than humans even as they plot our destruction. While it may someday be the case that such beings walk among us as our guides or rulers (either beneficent or malign), we are so very far from that. So-called “super-intelligent” technology does not currently exist and may never exist as it has been so richly imagined in popular narratives. Even if such beings do eventually appear, we have far more mundane technologies influencing our futures right now. Among these are automated decision systems like recidivism-risk prediction systems and fraud detectors used in government, along with recommender and social media algorithms that shape and channel human understanding and engagement with the world. These are built by humans, dependent on data and designs constructed by humans, and exist to carry out human-authored objectives.

Sure, there is tempting promise in intelligent technologies. AI-assisted medical diagnostics, disaster management, and learning technologies may prove themselves and contribute to societal uplift and reduced suffering. However, where a well-designed and thoughtfully implemented system might overcome some specific human bias or failure, there is tremendous risk (and there are numerous examples) of creating or exacerbating other harms. The main source of failure in technology systems is not simply that they are ‘broken.’ Rather, it is that each one is built upon something–either human or machine–that came before it, including prior technologies and the choices that went into making those. As Susan Leigh Star offers, new technologies are “built on an installed base.” Nothing springs from a vacuum from which some ideal of perfection might be reached. This is as true of our technologies as it is of our political and social systems. Every technology is an embodiment of human history and culture.

As with human politics, the work of achieving anything good (or at least ‘better’) with technology is a mix of reflection and accountability. By reflecting we come to understand what systems actually do in the world, and then we apply pressure to push them in positive directions. A key difference between human decisions and automated ones is not their internal mechanisms but their accessibility. Humans can make terrible choices, but we have far more experience addressing that by holding humans accountable and demanding explanations. We have far less experience doing this with machine learning algorithms. Technological development and assessment can be done only by a tiny class of technological elites (coders), and even they struggle to decipher the reasoning behind the decisions made by the most sophisticated systems. Consequently, an AI system that appears to be “better” than a human is probably also one that most of us cannot understand.

Our socio-economic-political systems are very imperfect and can be opaque, but they are also always evolving. These human systems are co-produced works that many more of us can take part in. And they are the key ingredients of our technologies, from the choices about what to build to who is granted the privilege of using them and for what purpose. Ideally (and often) social evolutions are driven by what some plurality of people actually want, or something like it. While this is somewhat true of technology, the close relationship between technological innovation and capital means that most advanced technologies are built and operated by a very tiny subset of society, and done so within a market-driven, dominating worldview. Technology has politics, as the saying goes. When we stop investing in improving the human systems that are the sources of misery and focus instead on improving technology in the hopes that it will do better than us, we miss the point about what technology is.
If technology is us, then the work of social justice continues to be the work of improving ourselves. Only through that work can we hope to create technologies that further it.

