
Feeling the Feels: Artificial Intelligence and the Question of Empathy

When I was a child, our family went through a series of shitty televisions. This was before most people had internet connections or personal computers, when a TV was a suburban family's most immediate link to the outside world. So, TVs mattered. We didn't have much money, and a family friend, who happened to be an electronics tinkerer and a bit of a hoarder, would periodically trash-pick old TVs and give them to us. They were big boxy things with glowing tubes inside. The hand-me-down TVs worked for a while and then they didn't. Towards the end, there was typically a period in which some amount of banging on the side or top of the TV would get it to work or improve the picture for a little while, until we had to do it again. Often the banging was an act of frustration or anger. It felt like the TV was doing something intentionally, holding out on us, magnifying our helplessness and deprivation. I sometimes cried out while hitting the thing. It was cathartic and I was miserable. And here's the thing…some of those TVs gave the impression that they felt something. The picture would seem to twist or flash in response to the banging. Sometimes a good pounding on a nearly dead TV would produce satisfying sparks or smoke. It was as though the television felt pain. Not simply physical pain, but emotional hurt. Like we could make it share the sadness and frustration we were feeling, in the same way an abuser or a bully inflicts pain in order to "share" his wretchedness with others.

The televisions didn't feel pain of course and, as a fairly well-adjusted adult, I no longer beat up on defenseless technology…often. It wouldn't matter much if I did, though (except for the replacement costs and some troubling implications about my mental health), because machines do not feel. Increasingly, we have technologies that see, hear, smell, and can even sense touch and pressure, but they do not now, nor will they ever, have emotional feeling. And because they can't experience emotion, specifically emotional pain, they are incapable of empathy. A well-crafted machine can certainly imitate emotion. Siri or Alexa can choose a voice-modulating algorithm to communicate concern, mockery, or hurt, and Jibo, the adorable table-top robot, has endearing face-like expressions and can coo like a precocious child, but it is all just algorithmic fakery. The machine feels nothing. It simply chooses a response type from a library of code and executes it without being itself affected in any way.
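
To make that last point concrete, here is a minimal, purely illustrative sketch (in Python, and nothing like any real assistant's code) of what "choosing a response type from a library" amounts to: detect a label, look up a canned line, say it. Nothing in the exchange touches the machine itself.

```python
# Purely illustrative: a toy "empathy" routine of the kind described above.
# This is not Siri's or Alexa's actual code; it only shows that imitating
# concern can amount to a lookup in a library of canned responses.

RESPONSE_LIBRARY = {
    "sad": "I'm so sorry to hear that. That sounds really hard.",
    "angry": "I understand your frustration. Let me see what I can do.",
    "happy": "That's wonderful news!",
}

def respond(detected_emotion: str) -> str:
    # Choose a response type from the library and "execute" it.
    # No internal state changes, nothing is felt; the machine is unaffected.
    return RESPONSE_LIBRARY.get(detected_emotion, "I see. Tell me more.")

print(respond("sad"))  # sounds concerned, feels nothing
```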

Why does it matter? There are certainly plenty of people who think it doesn't, or at least not very much. Luciano Floridi, a philosophy professor at Oxford, has suggested that we overemphasize the specialness of human agency and reason. He revisits the famous "Turing Test," which basically holds that if a computer can perform well enough to convince you that it is human, then it can no longer be thought of as merely a machine. Floridi wants us to realize that as machines (artificial intelligence) become capable of doing an increasing number of human-like things, the set of skills and practices we believe can only be entrusted to humans will become progressively smaller and may eventually vanish. The Science and Technology Studies scholar Bruno Latour has similarly suggested that we need to view objects as mundane as seat-belt chimes and hydraulic door closers as having human-like "agency" because such objects act in the world and shape human action. This line of thinking appears to be propelling the development of products and services that give machines significant power over people's lives, based on assertions that machines perform as well as or better than humans at many tasks. Getting a job? Software is already being used to choose from a pool of candidates based on their resumes and social media profiles, using machine learning algorithms to make predictions about future performance and "fit." There are bots that can even conduct job interviews. These approaches are being attempted well beyond hiring, for a broad range of decisions, from who should get a kidney to who should go to jail and for how long.

There are very likely numerous scenarios in which machines can do a better job than fallible, biased, or oblivious humans. There may even be demonstrable cases where software overcomes racist and sexist trends in key decision spaces. Even if this is true, there are also a lot of things machines will never be able to do. Why? Because they lack empathy. Empathy is a core human trait that differentiates us from machines and motivates important human values like charity and the desire to relieve suffering. Unlike many animals, we even feel empathy for members of non-human species. We routinely agonize over what pets and farm animals feel, and we worry about the injustice of picture windows as experienced by birds in flight. We also appear to empathize with inanimate objects like cars and, yes, devices with artificial intelligence, such as those smart assistants and even military robots. Humans are overflowing with empathy. It is empathy that causes us to care about homelessness in our cities, the poor health of strangers, and the fates of victims of far-off wars and famines even when we aren't in direct contact with their struggles.

Empathy is particularly important to our human futures. We cannot simply decide who should live or die, who should suffer, or who deserves a second chance based on code libraries and the capability of delivering bad news with the appearance of regret. Being human means eventually having the experience of pain, which contributes to an ability to empathize with the suffering of others. Machines cannot have these experiences. This is why humans must always be a central part of important decisions that concern the well-being of other humans. Yet artificial intelligence has been proposed to make determinations in just those realms, such as in military matters. Despite vacuous claims about achieving "humane war" and other insane concepts, war is and should be entirely lacking in humanity. Machines will not improve it; they will only distance human actors from having to confront its awfulness. Similar discussions about handing off difficult decisions to artificial intelligence need to come to a full stop whenever they involve determinations about human fates. Will the autonomous car allow its owner to die to save others? Who will be liable when the healthcare robot amputates the wrong limb? Liability isn't the only issue here, and neither is the project of figuring out how to program "morality" into AI. The autonomous device can never "care" who lives or dies, which limb should be removed, or how much anyone suffers. It can only make a calculation and then act in the world without paying an emotional price. Like a sociopath. This is not how moral decisions are made. Only humans can truly care about anyone or anything, and that is the fundamental basis of moral agency. Artificial intelligence, quite literally, has no skin in the game. And this is why artificial intelligence can never replace us.


2 thoughts on "Feeling the Feels: Artificial Intelligence and the Question of Empathy"

  1. The same was said not too long ago about animals: that they couldn't feel emotions. But books like Beyond Words: How Animals Think and Feel, and experiences like Koko and her kittens and her response to Robin Williams's death, have put those claims to a serious test. If computers can be programmed so sophisticatedly that they exhibit empathy, who is to say they are not experiencing it? Are we ourselves not a sort of sophisticated program? Where is this empathy? In what space does it exist? I think it lies in our past. In our experiences and memories. It exists in the fiction we enjoy, in broken old TVs and novels. Those things program us in a certain way.


    • What differentiates the case of animals from that of artificial beings is that we now know that animals always had feelings. Humans (not all of course) just failed to recognize what was there. Humans make AI, so we can know exactly what it’s made of and what kind of “mind” it possesses. While I do not myself believe it, it might be true that someday AI will be so complex and capable that we’ll have to admit that it “feels” something. But chances are we will have tasked AI with making very important decisions long before that occurs. Compounding this, the human habit of anthropomorphizing anything that appears to be alive means that we’ll likely deceive ourselves about the sentience of AI much earlier than we rationally should, leading to significant epistemic risks that include potentially entrusting decisions to machines that require emotional responses they can only imitate.

