The increasing attention being paid to artificial intelligence raises important questions about its integration with the social sciences and with humanity, according to David De Cremer, founder and director of the Centre on AI Technology for Humankind at the National University of Singapore Business School. He is the author of the recent book, Leadership by Algorithm: Who Leads and Who Follows in the AI Era?
While AI today is good at repetitive tasks and can replace many managerial functions, it could over time acquire the “general intelligence” that humans have, he said in a recent interview with AI for Business (AIB), a new initiative at Analytics at Wharton. Headed by Wharton operations, information and decisions professor Kartik Hosanagar, AIB is a research initiative that focuses on helping students expand their knowledge and application of machine learning and understand the business and societal implications of AI.
According to De Cremer, AI will never have “a soul” and it cannot replace human leadership qualities that let people be creative and have different perspectives. Leadership is required to guide the development and applications of AI in ways that best serve the needs of humans. “The job of the future may well be [that of] a philosopher who understands technology, what it means to our human identity, and what it means for the kind of society we would like to see,” he noted.
An edited transcript of the interview appears below.
AI for Business: A lot is being written about artificial intelligence. What inspired you to write Leadership by Algorithm? What gap among existing books about AI were you trying to fill?
David De Cremer: AI has been around for quite some time. The term was coined in 1956 and inspired a “first wave” of research until the mid-1970s. But since the beginning of the 21st century, more direct applications have become clear and have changed our attitude towards the “real” potential of AI. This shift was especially fueled by events where AI started to compete with world champions in chess and the Chinese game Go. Most of the attention went, and still goes, to the technology itself: that the technology acts in ways that seem intelligent, which is also a simple definition of artificial intelligence.
It seems intelligent in ways that humans are intelligent. I am not a computer scientist; my background is in behavioral economics. But I did notice that the integration between the social sciences, humanity, and artificial intelligence was not getting as much attention as it should. Artificial intelligence is meant to create value for a society populated by humans; the end users will always be humans. That means AI must act, think, read, and produce outcomes in a social context.
AI is particularly good at repetitive, routine tasks and at thinking systematically and consistently. This already implies that the tasks and jobs most likely to be taken over by AI are those relying on hard skills, not so much on soft skills. In a way, this observation corresponds with what is called Moravec’s paradox: what is easy for humans is difficult for AI, and what is difficult for humans seems rather easy for AI.
An important conclusion, then, is that training our soft skills will become even more important in the future, not less, as many assume. I wanted to explain that because there are many signs today, especially since COVID-19, that we are required to adapt more and more to new technologies. As such, that puts the use and influence of AI in our society in a dominant position. As we are becoming more aware, we are moving into a society where people are being told by algorithms what their taste is, and, without questioning it too much, most people comply easily. Given these circumstances, it no longer seems a wild fantasy that AI may be able to take a leadership position, which is why I wanted to write the book.
“We are moving into a society where people are being told by algorithms what their taste is, and, without questioning it too much, most people comply easily.”
AIB: Is it possible to develop AI in a way that makes technology more efficient without undermining humanity? Why does this risk exist? Can it be mitigated?
De Cremer: I believe it is possible. This relates to the topic of the book as well. [It is important] that we have the right kind of leadership. The book is not only about whether AI will replace leaders; I also point out that humans have certain unique qualities that technology will never have. It is difficult to put a soul into a machine. If we could do that, we would also understand the secrets of life. I am not too optimistic that it will [become reality] in the next few decades, but we have an enormous responsibility. We are developing AI, machines that can do things we would never have imagined years ago.
At the same time, because of our unique qualities of taking perspective, thinking proactively, and reasoning in the abstract, it is up to us how we are going to use it. If you look at leadership today, I do not see much consensus in the world. We are not paying enough attention to training our leaders – our business leaders, our political leaders, and our societal leaders. We need good leadership education. Training starts with our children. [It is about] how we train them to appreciate creativity, work together with others, take each other’s perspectives, and learn the kind of responsibility that holds our society together. So yes, we can use machines for good if we are clear about what our human identity is and the value we want to create for a humane society.
AIB: Algorithms are becoming an important part of how work is managed. What are the implications?
De Cremer: An algorithm is a model that makes data intelligent, meaning it helps us to recognize the trends that are happening in the world around us and that are captured by means of our data collections. When analyzed well, data can tell us how to deal with our environment in a better and more efficient manner. This is what I’m trying to do in the business school: seeing how we can make our business leaders more tech-savvy in understanding how, where, and why to use algorithms and automation for more efficient decision-making.
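To make this idea concrete, here is a minimal sketch of what “recognizing a trend in data” can look like in practice. The monthly sales figures are invented toy data, and a simple least-squares fit stands in for the far richer models real companies use.

```python
# A minimal sketch of "making data intelligent": fitting a trend to
# observations so a decision-maker can see where things are heading.
# The monthly sales figures below are invented toy data.
import numpy as np

months = np.arange(12)                        # 12 months of observations
sales = np.array([100, 104, 103, 110, 112,    # a noisy upward trend
                  115, 117, 121, 125, 124, 130, 133])

# Ordinary least squares: fit sales = slope * month + intercept.
slope, intercept = np.polyfit(months, sales, deg=1)

print(f"Estimated trend: {slope:+.1f} units per month")
print(f"Forecast for month 12: {slope * 12 + intercept:.0f}")
```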
Many business leaders have problems making business cases for why they should use AI. They are struggling to make sense of what AI can bring to their companies. Today most of them are influenced by surveys showing that as a business you have to engage in AI adoption because everyone else is doing it. But how it can benefit your own unique company is often less well understood.
Every company has data that is unique to it. You must work with that in terms of [shaping] your strategy, and in terms of the value that your company can and wishes to create. For this to be achieved, you also have to understand the values that define your company and that make it different from your competitors. We are not doing a good job training our business leaders to think like this. Rather than making them think that they should become coders themselves, they should focus on becoming a bit more tech-savvy so they can pursue their business strategy — in line with their values — in an environment where technology is part of the business process.
This implies that our business leaders must understand what exactly an algorithm does, but also what its limits are, what its potential is, and especially where in the company’s decision-making chain AI can be used to promote productivity and efficiency. To achieve this, we need leaders who are tech-savvy enough to combine that knowledge with their extensive knowledge of business processes, to maximize efficiency for the company and for society. It is there that I see a weakness in many business leaders today.
Without a doubt, AI will become the new co-worker. It will be important for us to decide where in the business process you automate, where it is possible to take humans out of the loop, and where you definitely keep humans in the loop to make sure that automation and the use of AI don’t lead to a work culture where people feel they are being supervised by a machine or treated like robots. We must be sensitive to these questions. Leaders build cultures, and in doing this they communicate and represent the values and norms the company uses to decide how work needs to be done to create business value.
AIB: Are algorithms replacing the human mind as machines replaced the body? Or are algorithms and machines amplifying the capabilities of the mind and body? Should humans worry that AI will render the mental abilities of humans obsolete or simply change them?
De Cremer: That is one of the big philosophical questions. We can refer to Descartes here, [who formulated the] mind-body [problem]. With the Industrial Revolution, we can say that the body was replaced by the machine. Some people believe that with artificial intelligence the mind will now be replaced as well. So, body and mind are basically taken over by machines.
“We can use machines for good if we are clear about what our human identity is and the value we want to create for a humane society.”
As I outlined in my book, there is more to it than that. We also know that the body and mind are connected. What connects them is the soul. And that soul is not a machine. The machine at this moment has no real grasp of what it means to understand its environment or how meaning can be inferred from it. Even more important in light of the idea of humanity and AI, a machine does not think about humans, or what it means to be a human. It does not care about humans. If you die today, AI does not worry about that.
So, AI does not have a connection to reality in terms of understanding semantics and deeply felt emotions. AI has no soul. That is essential for body and mind to function. We say that one plus one is three if you want to make a great team. But in this case, if machines replace the body and then the mind, we still have one plus one is two; we do not have three, we don’t have the magic. Because of that, I do not believe AI is replacing our mind.
Secondly, the simple definition that I postulated earlier is that artificial intelligence represents behaviors or decisions made by a machine that seem intelligent. That definition is based on the idea that machine intelligence is able to imitate the intelligent behavior that humans show. But the fact that machines seem able to act like humans does not mean that we are talking about the same kind of intelligence and existence.
When we look at machine learning, it is modeled after the neural networks of the brain. But we also know, for example, that neuroscience still understands little, maybe not even 10%, of how the brain works. So, we cannot say that we know everything, put that in a machine, and argue that it replicates the human mind completely.
The simplest example I always use is that a computer works in ones and zeroes, but people do not work in ones and zeroes. When we talk about ethics with humans, things are rarely black or white; they are mostly gray. As humans, we are able to make sense of that gray area because we have developed an intuition, a moral compass, in the way we grew up and were educated. As a result, we can make sense of ambiguity. Computers at the moment cannot do that. Interestingly, efforts are being made today to see whether we can train machines the way we educate children. If that succeeds, then machines will come closer to dealing with ambiguity as we do.
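The contrast between ones-and-zeroes and gray areas can be illustrated with a small sketch: a boolean rule forces a hard yes/no, while a graded score can represent degrees of a judgment. The speeding example and its thresholds here are invented for illustration.

```python
# A minimal sketch of the "gray area" point: a boolean rule gives a
# hard yes/no, while a graded score can represent ambiguity.
# The speeding scenario and thresholds are invented toy values.

def is_speeding_boolean(speed_kmh: float, limit: float = 50.0) -> bool:
    """Ones-and-zeroes view: any excess, however small, is a violation."""
    return speed_kmh > limit

def speeding_severity(speed_kmh: float, limit: float = 50.0) -> float:
    """Graded view: 0.0 = clearly fine, 1.0 = clearly reckless,
    with a gray zone in between (here, up to 20 km/h over the limit)."""
    excess = speed_kmh - limit
    return min(max(excess / 20.0, 0.0), 1.0)

for speed in (49, 51, 60, 75):
    print(speed, is_speeding_boolean(speed),
          round(speeding_severity(speed), 2))
```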
AIB: What implications do these questions have for leadership? What role can leaders play in encouraging the design of better technology that is used in wiser rather than smarter ways?
“That machines seem able to act in ways like humans does not mean that we are talking about the same kind of intelligence and existence.”
De Cremer: I make a distinction between managers and leaders. When we talk about running an organization, you need both management and leadership. Management provides the foundation for companies to work in a stable and orderly manner. We have procedures so we can make things a little bit more predictable. Since the early 20th century, as companies grew in size, you had to manage companies and [avoid] chaos. Management is thus the opposite of chaos. It is about structuring and [bringing] order to chaos by employing metrics to assess whether goals and KPIs are achieved in more or less predictable ways. In a way, management as we know it is a status quo-maintaining system.
Leadership, however, is not focused on the status quo but rather deals with change and the responsibility to give direction amid the chaos that comes along with change. That is why it is important for leadership to be able to adapt, to be agile, because once things change, as a leader you are looked upon to [provide solutions]. That is where our abilities to be creative, to think proactively, to understand what value people want to see, and to adapt come in, ensuring that this kind of value is achieved when change sets in.
AI will be extremely applicable to management because management is consistent, focused on the status quo, and, because of its repetitiveness, in essence a pretty predictable activity … and this is basically also how an algorithm works. AI is already doing this kind of work by predicting the behavior of employees: whether they will leave the company, or whether they are still motivated to do their job. Many managerial decisions are areas where I see algorithms playing a big role. It starts with AI as an advisor, providing information, but then slowly moves into management jobs. I call this management by algorithm: MBA. Theoretically and from a practical point of view, this will happen, because AI as we know it today in organizations is good at working with stationary data sets. It has a problem, however, dealing with complexity. This is where AI, as we know it today, falls short on the leadership front.
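As a minimal sketch of the kind of managerial prediction De Cremer describes, the following code fits a simple attrition model. The features (tenure, engagement, overtime) and the data are invented for illustration; a real system would be trained on a company’s own HR records.

```python
# A minimal sketch of "management by algorithm": predicting whether an
# employee is likely to leave. Features and data are invented toy values.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features: [tenure in years, engagement score 0-10, overtime hrs/week]
X = np.array([
    [0.5, 3.0, 12], [1.0, 4.5, 10], [6.0, 8.0, 2], [4.0, 7.5, 3],
    [0.8, 2.5, 15], [7.0, 9.0, 1], [2.0, 5.0, 8], [5.5, 8.5, 2],
])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])  # 1 = left the company, 0 = stayed

model = LogisticRegression().fit(X, y)

# Score a hypothetical new case: short tenure, low engagement, heavy overtime.
new_employee = np.array([[1.2, 3.5, 11]])
risk = model.predict_proba(new_employee)[0, 1]
print(f"Estimated attrition risk: {risk:.0%}")
```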
Computer scientists working in robotics and on self-driving cars say the biggest challenge for robots is interacting with people, making physical contact, and coordinating their movements with the execution of tasks. Basically, it is more difficult for robots to work within the context of teams than to send a robot to Mars. The reason is that the more complex the environment, the more likely it is that robots will make mistakes. And because we are less tolerant of robots inflicting harm on humans, it becomes a dangerous activity to have autonomous robots and vehicles interacting with humans.
Leadership is about dealing with change. It is about making decisions that you know are valuable to humans. You need to understand what it means to be a human, to have human concerns, to be compassionate, and to be humane. At the same time, you need to be able to imagine and be proactive, because in a changing situation your strategy may need to be adjusted to create the same value. You need to be able to reason about this in the abstract, and AI is not able to do that.
AIB: I am glad you brought up the question of compassion. Do you believe that algorithm-based leadership is capable of empathy, compassion, curiosity, or creativity?
“[Artificial intelligence] has a problem dealing with complexities. This is where AI, as we know it today, falls short on the leadership front.”
De Cremer: Startups and scientists are working on what we call “affective AI.” Can AI detect and feel emotions? Conceptually it is easy to understand. So, yes, AI will be able to detect emotions, as long as we have enough training data available. Of course, emotions are complex, also to humans, so really understanding what emotions signify to the human experience is something AI will not be able to do, at least for decades to come. As I said before, AI does not understand what it means to be human, so the emotional-intelligence perspective on what makes us human is clearly a limit for machines. That is also why we call it artificial intelligence. It is important to point out that we can also say that humans have an AI; I call that authentic intelligence.
At this moment, AI does not have authentic intelligence. People believe that AI systems cannot have authentic emotions and an authentic sense of morality. It is impossible because they do not have the empathic and existential qualities people are equipped with. Also, I am not too sure that algorithms will achieve authentic intelligence easily, given that they do not have a soul. So, if we cannot infuse them with a common sense that corresponds to the common sense of humans, one that can make sense of gray zones and ambiguity, I don’t think they can develop a real sense of empathy that is authentic and genuine.
What they can learn — and that is because of the imitation principle — is what we call surface-level emotions. They will be able to respond: they will scan your face, listen to the tone of your voice, identify categories of emotions, and respond in the ways humans usually respond. That is a surface-level understanding of the emotions humans express. And I do believe that this ability will help machines be effective in most interactions with humans.
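A minimal sketch of such surface-level recognition: map a few measurable voice features to the nearest emotion category. The features, centroids, and labels below are invented; a real affective-AI system would learn them from large amounts of training data.

```python
# A minimal sketch of "surface-level" emotion recognition: mapping a few
# measurable voice features to an emotion category. All values are
# invented toy data; real systems learn these from training data.
import numpy as np

# Toy feature space: [pitch (Hz), loudness (dB), speaking rate (syll/s)]
EMOTION_CENTROIDS = {
    "happy":   np.array([220.0, 65.0, 5.0]),
    "sad":     np.array([160.0, 50.0, 3.0]),
    "angry":   np.array([210.0, 75.0, 6.0]),
    "neutral": np.array([180.0, 60.0, 4.0]),
}

def classify_emotion(features: np.ndarray) -> str:
    """Label the sample with the nearest emotion centroid."""
    return min(EMOTION_CENTROIDS,
               key=lambda e: np.linalg.norm(features - EMOTION_CENTROIDS[e]))

sample = np.array([215.0, 72.0, 5.8])   # a loud, fast, high-pitched voice
print(classify_emotion(sample))          # -> "angry" (a surface label only)
```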
Why will it work? Because as humans we are very attuned to the ability of our interaction partners to respond to our emotions. So almost immediately and unconsciously, when someone pays attention to us, we reciprocate. Recognizing surface-level emotions would already do the trick. The deeper-level emotions correspond with what I call authentic intelligence, which is genuine, and an understanding of those types of emotions is what is needed to develop friendships and long-term connections. AI as we know it today is not even close to such an ability.
With respect to creativity, it is a similar story. Creativity means bringing forward a new idea, something that is new and meaningful to people. It solves a problem in a way that is useful and makes sense to people. AI can play a role there, especially in identifying something new. Algorithms are much faster than humans at connecting information because they can scan, analyze, and observe trends in data much faster than we do. So, in the first stage of creativity, yes, AI can bring together things we know to create a new combination much faster and better than humans can. But humans will be needed to assess whether the new combination makes sense for the problems humans want to solve. Creative ideas gain value when they become meaningful to people, and therefore human supervision will be needed as the final step in the creativity process.
“One of the concerns we have today is that machines are not reducing inequality but enhancing it.”
Let me illustrate this point with an example: experiments have been conducted in which AI was given a set of ingredients to make pizzas. Some of the pizzas turned out to be attractive to humans, but others ended up being products that humans were unlikely to eat, like pineapple with Marmite. Marmite is popular in the U.K., and according to the commercials people either love it or hate it, so it’s a difficult ingredient. AI, however, does not think about whether humans will like such products or find them useful; it just identifies new combinations. So, the human will always be needed to determine whether such ideas will, at the end of the day, be useful and regarded as a meaningful product.
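The two-stage process the pizza example points to can be sketched in a few lines: a machine enumerates candidate combinations exhaustively, and a human review step decides which ones are meaningful. The ingredient list and the veto below are invented stand-ins for human judgment.

```python
# A minimal sketch of the pizza example: a machine enumerates novel
# ingredient combinations quickly, but a human review step decides
# which ones are actually meaningful. Ingredients are invented toy data.
from itertools import combinations

ingredients = ["mozzarella", "tomato", "pineapple", "marmite",
               "basil", "ham", "chocolate"]

# Stage 1 (machine): generate every 3-ingredient combination.
candidates = list(combinations(ingredients, 3))
print(f"{len(candidates)} candidate pizzas generated")

# Stage 2 (human-in-the-loop): people veto combinations that make no
# sense as food. The veto list here stands in for human judgment.
human_vetoes = {frozenset(["pineapple", "marmite", "chocolate"])}

approved = [c for c in candidates if frozenset(c) not in human_vetoes]
print(f"{len(approved)} survive human review")
```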
AIB: What are the limits to management by algorithm?
De Cremer: When we look at it from the narrow point of view of management, there are no limits. I believe that AI will be able to do almost any managerial task in the future. That is because of the way we define management: as being focused on creating stability, order, consistency, and predictability by means of metrics (e.g., KPIs).
AIB: How can we move towards a future where algorithms may not lead but still be at the service of humanity?
De Cremer: First, all managers and leaders will have to understand what AI is. They must understand AI’s potential and its limits — where humans must jump in and take responsibility. Humanity is important. We have to make sure that people do not look at technology only from a utility perspective, where it makes a company run more efficiently by reducing costs: hiring fewer employees, or no longer training people to do certain tasks.
I would like to see a society where people become much more reflective. The job of the future may well be [that of] a philosopher…one who understands technology, what it means to our human identity, and what it means for the kind of society we would like to see. AI also makes us think about who we are as a species. What do we really want to achieve? Once we make AI a coworker, once we make AI a kind of citizen of our societies, I am sure that awareness of the idea of “us versus them” will come to direct the debates and discussions about the kind of institutions, organizations, and society we would like to see. I called this awareness the “new diversity” in my book. Humans versus non-humans, or machines: it makes us think about who we are, and we need that to determine what kind of value we want to create. That value will determine how we are going to use our technology.
One of the concerns we have today is that machines are not reducing inequality but enhancing it. For example, we all know that AI, in order to learn, needs data. But is data widely available to everyone or only to a select few? Well, if we look at the usual suspects – Amazon, Facebook, Apple and so forth – we see that they own most of the data. They applied a business model in which the customer became the product itself. Our data are valuable to them. As a result, these companies can run more sophisticated experiments, which are needed to improve AI – which means that technology is also in the hands of a few. Democracy of data does not exist today. Given that one important future direction in AI research is to make AI more powerful in terms of processing and predicting, a certain fear obviously exists that if we do not manage AI well, and we don’t think about [whether] it is good for society as a whole, we may run into risks. Our future must be one where everyone can be tech-savvy, but not one that eliminates our concerns and reflections on human identity. That is the kind of education I would like to see.