
Can we ever teach AI, whether via argumentation or intuition, to do the morally right thing? That's what moral philosophers, like our columnist Katleen Gabriels, are racking their brains about.

A question that preoccupies me as a moral philosopher is to what extent artificial intelligence (AI) is capable of making moral judgments. To address that question, of course, we first need to know how humans arrive at moral judgments. Unfortunately, no consensus on that exists. Moral psychologist Jonathan Haidt argues that our moral reasoning is guided first and foremost by our intuition. ‘Reason is, and ought only to be, the slave of the passions,’ as philosopher David Hume put it in the 18th century.

Haidt presented test subjects with a taboo scenario about a brother and sister who have sex with each other one time only. The obvious objections are addressed within the scenario itself: the siblings use contraceptives (the birth control pill and a condom) and it happens with mutual consent. Most respondents intuitively disapprove of the scenario and only then seek arguments to support that intuition. If respondents are given more time to think about it and are also presented with substantiated counterarguments, they become more likely to accept it. A calm conversation and well-founded arguments can change people's gut instincts and their judgments; when the conversation is open and marked by mutual understanding and affection, people are more willing to change their minds.

‘Play’ as a form of intuition
Machine learning and deep learning open up opportunities for AI to develop a kind of moral ‘intuition’: feed it data and let algorithms search for patterns in that data. The word intuition is not really the right one, because AI always comes down to calculations. As in the case of AlphaGo, you could confront an algorithm with millions of scenarios, in this instance about morality, have it ‘play’ through them (a form of self-play), and let it learn from its mistakes. The AI will find a pattern, for example about right and wrong, and can consequently develop a kind of intuition. It remains extremely important to look critically at how AI discovers patterns. After all, not every pattern is desirable: an AI could also develop preferences based on, say, popularity rather than moral relevance.
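To make that concrete, here is a minimal, hypothetical sketch, not AlphaGo's actual method, of how a pattern-finding algorithm could acquire such an ‘intuition’. Everything in it is invented for illustration: the scenario features (consent, harm, popularity), the labels, and the data. A simple perceptron nudges its weights toward whatever separates the examples it is shown:

```python
# Hypothetical sketch: a toy perceptron "learns" a moral label from
# hand-made scenario features. Features, data, and labels are invented.

# Each scenario is (features, label): features = (consent, harm, popularity),
# label = 1 ("acceptable") or 0 ("unacceptable").
SCENARIOS = [
    ((1.0, 0.0, 0.2), 1),  # consensual, harmless, unpopular
    ((1.0, 0.0, 0.9), 1),  # consensual, harmless, popular
    ((0.0, 1.0, 0.8), 0),  # non-consensual, harmful, yet popular
    ((0.0, 1.0, 0.1), 0),  # non-consensual, harmful, unpopular
]

def train(scenarios, epochs=200, lr=0.1):
    """Perceptron rule: nudge weights toward the features of the labels seen."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in scenarios:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

w, b = train(SCENARIOS)
print("learned weights (consent, harm, popularity):", [round(wi, 2) for wi in w])
# The model weights whatever feature happens to separate the labels.
```

If the acceptable scenarios in the training data happened to coincide with the popular ones, the same loop would learn to weight popularity instead of harm: the learned ‘intuition’ simply mirrors whatever pattern separates the labels, which is exactly why the choice of training data matters.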

No consensus
The morally right thing to do, under any circumstances, is whatever there are the best reasons for doing, while giving equal weight to the interests of each individual who will be affected. Quite apart from the question of whether AI will ever be able to do this, no consensus exists on what those “best reasons” are. That certainly complicates the choice of which data we should use to train AI with. The theory, and more specifically the definition of morality, that you adhere to and subsequently train AI with will determine the outcome, in this case the moral judgment. When you connect ethics and AI, you inevitably have to make choices, and those choices determine the direction of that moral judgment. In short: for now, this question remains highly speculative.
