Should we learn morals from AI?

In a fascinating article published at 3QuarksDaily, moral philosopher Michael Klenk raises the possibility that we could improve the rather obviously sorry state of human moral decision making by turning to Artificial Intelligence. He envisions two types of “moral apps” that may be developed in the future to help us navigate our ethical thickets: ethically aligned artificial advisors and ethically insightful artificial advisors. Klenk wisely concludes by the end of the article that “both ethically aligned and ethically informed artificial advisors are a long shot away from expertise on morality,” and that they both present significant problems of implementation. Still, he seems generally favorable to the notion. I’m not, and here is why.

Let’s begin with the simplest type of moral app envisioned by Klenk, what he calls ethically aligned artificial advisors. The idea is simple and, at first glance, obviously on the mark: just as we are now used to asking the apps on our “smart” (really: fast data processing) phones for advice about, say, where to go for dinner, or what movie to watch, so we should be able to ask a moral advisor where, for instance, the nearest vegetarian restaurant is, assuming we have decided for ethical reasons that we want to eat vegetarian.

Such apps already exist, and one of my favorites is put out by the Monterey Bay Aquarium in California: Seafood Watch. The app allows you to locate restaurants serving sustainable seafood, identify suitable alternatives to your favorite entrees, and so forth. But I’m not sure why Klenk even considers this type of app a moral advisor in the first place. He says that the value is already set by the designers of the app, but more importantly the value is set by the user who downloads the app in the first place. You have already made the decision to eat vegetarian, or pescatarian, for ethical (or health, or other) reasons. The app is simply providing you with hopefully reliable data to enact that decision. You are not deputizing anything other than the search for factual information to the program. Indeed, the program hardly counts as AI, since there is no machine learning going on here: the original information is provided by the experts at the Aquarium.

A variation on this would be an app that crowdsources your ethical issues, like those programs that take hundreds or thousands of user-submitted ratings for restaurants, combine them with your own dining history as recorded by the app, and provide you with handy “recommendations” on where to eat next. Setting aside the difficulties of implementing this for ethical problems — which are far more difficult than restaurant menus and ratings to translate into machine databases — you would have made the dangerous decision to deputize your ethics to an unknown (and non-transparently sampled) crowd of strangers. Such deputizing is already questionable when it comes to movies and restaurants. After all, why should you equate the majority opinion (especially, again, one that is sampled in ways that are likely suspect or unrepresentative of the general population) with a quality opinion? That way you’ll find yourself watching superhero movies all the time, or eating the same stuff that everyone else eats, whether you like it or not. Sure, that will save you time. But allowing an anonymous crowd to make moral decisions for you? That seems dangerously irresponsible.
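To see how little is going on under the hood, here is a deliberately minimal sketch of the kind of crowd-based recommender described above. The restaurant names, the ratings, and the simple average-plus-familiarity weighting are all hypothetical stand-ins for whatever a real app actually does; this is an illustration of the logic, not any particular product.

```python
# Toy sketch of a crowd-based recommender (all names and ratings hypothetical).
# The "advice" is nothing more than an average of strangers' ratings,
# lightly nudged toward places that already appear in your own history.

crowd_ratings = {                        # strangers' ratings, 1-5 stars
    "Vegan Bistro":  [5, 4, 4, 5, 3],
    "Burger Palace": [5, 5, 4, 5, 5],
    "Sushi Corner":  [4, 4, 3],
}

my_history = {"Vegan Bistro", "Sushi Corner"}    # places I have eaten at before

def recommend(ratings, history):
    scores = {}
    for place, stars in ratings.items():
        avg = sum(stars) / len(stars)             # the crowd's verdict
        bonus = 0.5 if place in history else 0.0  # nudge toward the familiar
        scores[place] = avg + bonus
    # the "recommendation" is simply whatever the (weighted) crowd likes most
    return max(scores, key=scores.get)

print(recommend(crowd_ratings, my_history))       # -> Burger Palace
```

Nothing in this aggregation tracks whether the crowd is right about anything; it only tracks what the crowd happens to prefer, which is precisely the worry with deputizing moral decisions to strangers.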

It is also entirely at odds with the very concept of a moral (or ethical, I’m using the two words interchangeably here) decision. But I will get back to this crucial point after we discuss Klenk’s second, and far more interesting (and problematic!), category of moral app: ethically insightful artificial advisors.

Klenk defines this second type of program as one capable of discovering moral truths on its own, thus providing new knowledge to its users. But wait, what? “Discovering” moral truths? I realize that this is still somewhat controversial in moral philosophy, but moral truths are not “out there” to be discovered. They are human constructs. To be sure, they are not arbitrary human constructs, and they are more or less constrained by objective facts, for instance about how the world works, and about human nature.

That’s why answering moral questions is so darn difficult: they are not (entirely) factual questions; they rely on judgments, which are themselves the result of a chain of reasoning. And the very same facts about a given moral conundrum may be compatible with a number of answers about what to do.

For instance: is a given abortion permissible? To answer in an ethically reasonable fashion one would have to have access to some empirical facts, for instance about the medical condition of the mother, or about when the fetus develops the biological ability to experience pain. But these facts are then filtered by one’s value system, which is based on certain axioms, at least some of which may be non-negotiable, like “personhood begins at conception,” or “a woman has the right to complete control of her body.” In both cases, the bit about neuro-developmental biology becomes irrelevant: facts are trumped by axioms. Also, the conceptual space that separates moral axioms and pertinent facts has to be bridged by proper arguments connecting those axioms and facts to one’s final decision. This process may very well be compatible with more than one way of reasoning about those facts and axioms, which means that reasonable people may arrive at different conclusions.

Let me be clear: I’m not advocating moral relativism here, I’m just trying to provide a sketch that will make it easier to appreciate why answering moral questions is sometimes so complicated, and — more importantly — why there isn’t a single “truth” out there to be discovered. Moreover, that truth is most certainly not axiom-independent, pace Kant and his dream of a universal ethics.

How does Klenk think these ethically insightful artificial advisors would work? How would they arrive at the alleged moral truths? “A promising route to morals ex Machina goes via a bottom-up approach: Machine learning. An ethically insightful artificial advisor would aim to extract moral principles and values from learning data gathered from historical texts, survey responses, and eventually, observations of actions. We could learn these principles and values and ultimately make better moral judgements.”
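To make concrete what such a bottom-up approach amounts to, here is a deliberately crude sketch. The labeled “verdicts,” the word-counting, and the whole toy classifier are hypothetical illustrations, not Klenk’s proposal or any real system, which would use vastly larger corpora and far more sophisticated models. The structure, however, is the same: the machine extracts patterns from human-labeled judgments and projects them onto new cases.

```python
# Deliberately crude sketch of "bottom-up" moral learning (all data hypothetical).
# A real system would use large corpora and modern language models, but the
# basic structure is the same: extract patterns from human-labeled judgments.

from collections import Counter

# Tiny corpus of past human verdicts (the "historical texts / survey responses").
training_data = [
    ("breaking a promise to a friend",  "impermissible"),
    ("lying to gain an advantage",      "impermissible"),
    ("donating time to a soup kitchen", "permissible"),
    ("keeping a promise at some cost",  "permissible"),
]

# Count how often each word appears under each label.
word_counts = {"permissible": Counter(), "impermissible": Counter()}
for text, label in training_data:
    word_counts[label].update(text.lower().split())

def judge(statement):
    """Score a new case by which label's vocabulary it overlaps with more."""
    words = statement.lower().split()
    scores = {
        label: sum(counts[w] for w in words)
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(judge("breaking a promise to the taxman"))   # -> impermissible
```

Whatever the human labelers got wrong, the program faithfully echoes back; nowhere in this process is a new moral “truth” discovered.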

Klenk immediately realizes some of the problems inherent in such a proposal, beginning with good ol’ GIGO: garbage in, garbage out. If the data provided to the computer by combing the history of human ethical decision making is garbage, because humans are bad at ethical decision making (the problem these apps are supposed to be addressing in the first place!), then it doesn’t matter how sleek your user interface is: the app will still be spewing out garbage.

But the problem is far more radical than that. On top of my already mentioned skepticism about the very existence of moral truths, the ethically insightful artificial advisors would still not be doing anything very different from the first type of app we considered: combing a database for examples and arriving at some sort of consensus advice based on that sample. The likelihood of discovering moral truths that way is about the same as the likelihood of discovering truths about movie making, or restaurant cooking. Not good.

Klenk goes on to make an even worse suggestion: “Perhaps our best shot would be to start training the machine with the carefully considered ethical opinions of ethics professors, plus the entire canon of academic moral philosophy (I would say that as a moral philosopher, wouldn’t I)?” Oh boy. Klenk must never have seen an episode of The Good Place, featuring the highly ineffective Professor (of moral philosophy) Chidi Anagonye, who overthinks every problem so much that he can’t make even the simplest decision in life. Or, more seriously, Klenk hasn’t considered the damning empirical evidence that professors of moral philosophy are no more ethical than other academics (the reason, I think, is that these people study the theory but are entirely uninterested in, even disdainful of, the practice).

Finally, assuming that our soon-to-be artificial moral overlords were somehow able to discover new moral truths, we would likely not understand how they did it, just as currently operating neural networks produce the results they are trained for (say, an accurate face recognition algorithm) in ways that are largely opaque even to the programmers of those networks. This is problematic in any application of neural networks and machine learning, because presumably we don’t just want the results, we want to learn how to arrive at those results. But the problem is greatly compounded when it comes to ethics, because of the fundamental issue that I briefly mentioned early on and to which it is now time to return: the whole notion of an AI that provides us with moral advice is fundamentally misguided once we consider what ethics is.

The relevant divide that makes me far more critical than Klenk of the very idea of AI ethical advisors is the same divide that separates ancient and modern conceptions of ethics. I don’t know which of these Klenk subscribes to, but I would be surprised if he turned out not to be a Kantian deontologist or a Utilitarian. These modern (in a philosophical sense: they originated, respectively, in the 18th and 19th centuries) approaches are universalist in nature, that is, they assume that there are answers to moral questions that are applicable regardless of people, places, and situations. Moreover, modern moral philosophy is organized around the notion that the crucial question to ask is of the type: is action X right or wrong?

Contrast this with the ancient Greco-Roman approach known as virtue ethics, which has made a most welcome comeback of late, both inside and outside the walls of academia. Virtue ethics, of which Stoicism is one example, is situational: the very same action could turn out to be virtuous or not, depending on the circumstances and, crucially, the motivation of the actor. Say, for instance, that next Sunday I volunteer at the local soup kitchen. Is this the right thing to do? Deontologists would say yes, because it is the sort of behavior that we would wish to be universal: the world would be a better place if everyone volunteered some of their time to help others. A Utilitarian would agree, but for different reasons: my help in the kitchen will increase people’s happiness and decrease their level of pain or suffering.

But in virtue ethics it also matters why I am volunteering in the first place: if I do it motivated by genuine concern for my fellow human beings, then the action is virtuous. But if I do it because I need a line about community outreach in my resume, so that I can apply for a new job, then the action is not virtuous, even though it may be beneficial to others.

Crucially, then, virtue ethics shifts the focus from the action considered in isolation to the character of the moral actor. Ultimately, the appropriate question is not “is X right or wrong?” but “is X improving or undermining my character, is it making me a better or worse person?”

And that’s why AI-based ethical advice is out of the question for the virtue ethicist: if I simply follow what an app tells me, I am not actively working on my character, I’m just content to mindlessly check “X is right” boxes. That’s unethical, because it diminishes my own worth as a human being.
