Chapter · September 2021
DOI: 10.1007/978-981-16-2309-7_3
    The Artificial Intelligence Challenge
and the End of Humanity
Chenyang Li
The title of this essay has a twofold meaning, as does the word “end.” The word
“end” means the last part of an extended thing or a period of time. A cessation.
“End” also means the purpose and goal of an effort or a course of action. Bearing in
mind this double meaning, I will spend this essay arguing that firstly, the emergence
and the rapid advancement of Artificial Intelligence technology means the end of
humanity in an important sense. We will irreversibly lose the special status that we
have claimed to possess. I will secondly argue that we should develop AI technology
to serve our purposes and, to that end, we should make advanced AI beings as ethical
as possible.
This essay consists of four sections in response to the four clusters of issues
proposed in the Berggruen AI research project. The first section argues that, in an
important sense, the AI challenge means that humanity is at the exit door when it
comes to its essential distinctiveness. The second section examines the implications
of such a change both as a form of progress and alienation. Section three addresses
questions about how to make AI beings moral. The last section explores how AI
technology may affect Chinese philosophy in important ways.
Failure in Search for the Human Essence
According to legend, the Temple of Apollo at the sacred site of Delphi in ancient
Greece displayed the inscription “Know Thyself” (γνῶθι σεαυτόν). This motto has
been taken to mean that we humans should seek to understand what we are. The
ancient Greek philosopher Socrates followed this motto in his lifetime search (in the
C. Li (B)
School of Humanities, Nanyang Technological University, 48 Nanyang Drive, HSS-03-89,
Singapore 639818, Singapore
e-mail: cyli@ntu.edu.sg
© CITIC Press Corporation 2021
B. Song (ed.), Intelligence and Wisdom,
https://doi.org/10.1007/978-981-16-2309-7_3
fifth century BC) to find out who and what he was. Such an effort is really about the
nature of humanity. We humans have a universal need to try to understand who or
what we are. One common underlying presumption is a belief in humanity’s distinctiveness, a conviction that humanity is different in some essential way from all other
forms of being in the universe. The “essential” requirement in this effort is important,
as the eighteenth-century German philosopher Georg W. F. Hegel famously said. He
argued that even if humans were the only beings with earlobes, that would not mean
that having earlobes is an essential differentia for being human. Moreover, humans
do not want to be merely distinctive, but distinctive in uplifting ways, in ways that
make us not only unique but also special. Thinking along these lines, for example,
Socrates developed his philosophy of the human soul. However, if we look at human
history, such a wish has seemed forever unfulfilled. We had thought that we were special
because our earth sat at the center of the universe, and hence so did we. But that center did not hold. The sixteenth-century Copernican Revolution ruthlessly took that status from us. The Renaissance mathematician
Nicolaus Copernicus’s discovery showed that our earth is not the center but merely a
planet that revolves around the sun. At the time, his theory was strongly opposed by
religious adherents because it was perceived as a threat to the status and hence the
special identity of humanity. Since then, we have had to recognize that we were never at
the center of the universe. We had also thought that we are special because we were
made in God’s image. The nineteenth century British naturalist Charles Darwin took
that comforting thought away from us with his evolution theory. Darwin argued that
humanity had evolved from earlier species just as other species have. He said
that how we look (our image) is an outcome of evolution, not of a special design
by a higher power. And for some, the final blow, psychologically at least, was delivered when the nineteenth century German philosopher Friedrich Nietzsche declared
that: “God is dead!” The slogan implies that the divine is no longer a viable option
for humanity to ground our special status in. We have to create meaning in our own
lives. For a long time, possessing rationality seemed to be the only special thing we
had left that set us apart from all other beings. But even that special feature has been
challenged. The nineteenth- to twentieth-century Austrian neurologist Sigmund Freud, known as the
“father of psychoanalysis,” argued that the human mind consists of three components: the conscious, the preconscious, and the unconscious. The conscious, within
which rationality presumably resides, represents but a small tip of a large iceberg
containing the entire human psyche. The operation of the human mind is influenced
largely by the preconscious and the unconscious. In other words, rationality does not
play a major role in the operation of the human mind. As a result, in Freud’s view
humans can hardly be defined as a rational animal. We may or may not accept Freud’s
theory. But his challenge demonstrates that we cannot take it as a given that humans
are rational animals. It is something that needs to be established. Some may take self-consciousness as a distinctive human characteristic. Recently, scientists at Columbia
University discovered that some AI beings have self-awareness, posing a direct
challenge to a long-held belief about a human monopoly over self-consciousness.1
1 Bodkin (2019).
So, where does this series of events leave us? We are still searching for answers to
the eternal question of “Know Thyself.” Freud’s theory has been dismissed by many
in their attempt to hold on to rationality as a final refuge. Rationality is a form of
intelligence. Thinking and acting rationally is a function of intelligence.2 Most of us
have been holding and/or hoping that at least we humans are more intelligent than
all other beings. Indeed, our search for answers to the question of “Know Thyself”
can be seen as humans using intelligence to create myths about the distinctiveness
of humanity. At least in this regard, humans have been superior to all other beings.
In other words, we have been distinctive in creating myths about our distinctiveness.
But now, the moment has finally come: the AI challenge to the distinctiveness of
human intelligence. Advancing AI technology will demonstrate that we humans are
neither unique in possessing high intelligence nor the most intelligent beings in the
world. Some advanced AI beings already surpass humans in intelligence. AlphaZero
can beat the best human chess players. Some argue that we are approaching the point
of singularity when AI technology overpowers humanity in intelligence.3
The AI challenge in intelligence is not merely a matter of intelligence levels.
Unlike all other evolved natural species that have been compared with humanity,
AI technology is not a natural occurrence and it can be adjusted expeditiously to
match human capacities. The ancient Chinese philosopher Mencius famously said
that humans are distinct from animals because we have a humane heart that enables us
not to bear to see suffering. Advanced AI can now be programmed to stop proceeding
with their tasks when they “perceive” suffering. They can even be programmed to
take action to reduce suffering. Think of medical robots created to care for patients,
for example. The third century BC Confucian philosopher Xunzi argued that humans
are different from other species because we can form society (qun 群). But that is
no longer a human distinction. Military combat robots can certainly coordinate their
actions. Indeed, social coordination is a must if they are going to be effective on
the battlefield. The rise of AI technology has significantly reduced any distance
between the human and non-human world. The fluidity of AI technology has made
any attempted claim on human distinctiveness increasingly implausible, if not utterly
impossible. Unlike our compared parties in nature, AI beings can be “customized
to order,” so to speak. Anything that has been considered special and unique about
humanity can be duplicated in AI technology.
The emergence and rapid advancement of AI technology makes a satisfactory
answer to the question of “Know Thyself” as elusive as ever. This compels us to
rethink the question itself. Perhaps, to “know thyself” is not, or should not be, about
looking for humanity’s distinctiveness. Perhaps it is about discovering that humanity
is not so distinctive and learning to live with the consequences of such a discovery.
Daoism is perhaps more ready to accept such a conclusion than many other philosophies. The Chinese Daoist text Daodejing records the ancient sage Laozi’s insight
that “understanding others is intelligence and understanding thyself is wisdom.” Both
understanding others and understanding oneself are about understanding humanity.
2 For a discussion of the relation between rationality and intelligence, see Baron (1985).
3 Kurzweil (2005).
Yet, he does not seem to stress the distinctiveness of humanity. Between humanity
and other existing things in the world, there are differences without distinction.
It is entirely possible, and even likely, that our future lies in integrating with
advanced AI technology rather than maintaining our distinctiveness. The futurist
Ray Kurzweil said: “We’re going to literally merge with this technology, with AI, to
make us smarter. It already does. These devices are brain extenders and people really
think of it that way, and that’s a new thing.”4 By using nanotechnology, we will be
able to connect AI devices to the nerve systems of our brains, enabling AI-enhanced
brains to operate much faster and more powerfully. Philosophers have already started
contemplating the validity of “extended mind.”5 AI technology has now provided
scientific and technological evidence for its feasibility and reality, in the form of
extended brains. Humanity becomes a hybrid: a mix of what our biological species
has to offer and what we decide to adopt from the AI technology. In that respect,
“humanoid” does not only denote a non-human being that resembles humans, but also
a human being mixed with non-human components. The current Chinese expression
for advanced AI systems, “jiqi ren 机器人”—literally “machine-man/woman”—
may be more appropriate for signifying such hybrid beings than its current usage for
AI devices. A regular AI device is not a ren 人 until it integrates with a human.
In the meantime, AI technology progresses along with biological engineering technology. Recently, scientists at Cornell University succeeded in a bottom-up construction of dynamic biomaterials powered by an artificial metabolism that represents a
combination of irreversible biosynthesis and dissipative assembly processes. Using
this material, they were able to program an emergent locomotion behavior resembling
a slime mold. Dynamic biomaterials possess properties such as autonomous pattern
generation and continuous polarized regeneration. It is reported that: “Dynamic
biomaterials powered by artificial metabolism could provide a previously unexplored route to realize ‘artificial’ biological systems with regenerating and self-sustaining characteristics.”6 Metabolism and biosynthesis are characteristics of life.
Dan Luo, professor of biological and environmental engineering at Cornell University’s College of Agriculture and Life Sciences, said: “We are introducing a brand-new, lifelike material concept powered by its very own artificial metabolism. We are
not making something that’s alive, but we are creating materials that are much more
lifelike than have ever been seen before.”7 Shogo Hamada, a member of the Cornell
research team, added that: “We are at a first step towards building lifelike robots
by artificial metabolism.”8 When this kind of new technology is combined with AI
technology, it will meet us from the opposite end of humans with “extended brains.”
We may well see artificial organic AI beings coming to us as we see ourselves in a
mirror. The distinction between humans and AI beings will further diminish.
4 https://www.wired.com/story/ray-kurzweil-on-turing-tests-brain-extenders-and-ai-ethics/.
Accessed on 11 December 2018.
5 Clark and Chalmers (1998).
6 Hamada et al. (2019).
7 Hayes (2019).
8 Ibid.
If the above argument holds, then we are facing the real possibility that we will
not only never find the answer to the ancient question “Know Thyself” in distinctive
ways, but also witness the end of humanity as we know it. “Humanity” has always had
a dual meaning. On the one hand, it stands for the biological species homo sapiens.
On the other, it is a value-laden idea, standing for an ideal. Even though the primary
existence of humanity is always as the biological species, humans have never been
satisfied with being described as something like “featherless biped” beings, even
though humans are perhaps the only such beings in the world. We have always
wanted more than that. The call to “Know Thyself” is to urge us to pursue humanity
in the value-laden sense. The end of humanity does not mean that humans will cease
to exist as a biological species, but that humanity will forever lose its distinctiveness
and uniqueness, its “essence.” The Greek word for essence, τὸ τί ἦν εἶναι, literally
means “the what-it-was-to-be” for a thing (or in Aristotle’s shorter version, τὸ τί ἐστι,
“the what-it-is”). Perhaps humans do not have such an essence after all. That kind of
humanity, one with a special essence, has ended. Humans will no doubt continue to
search for answers to “Know Thyself,” but we may have to view human “essence”
as a moving target rather than something that is a given, there to be discovered. For
humanity, the upcoming AI era might be described as a loss of innocence or loss of
self-regarded maturity. Either way, humanity, in its traditional sense, is finished.
Progress as Alienation
By blurring the distinction between humanity and non-human existents, AI technology allows us to extend human existence into new territories, not only spatially
but also existentially. That is, we have become more than what our natural species
has evolved to be and has to offer. In an important sense, we are what we create
ourselves to be. Scientists now contemplate whether humanity has gone beyond the
Holocene epoch (the time since the last Ice Age) and entered the Anthropocene epoch
when humans will transform nature. As 1995 Nobel Laureate Paul J. Crutzen and
his coauthor C. Schwägerl put it: “It’s no longer us against ‘Nature.’ Instead, it’s we
who decide what nature is and what it will be.”9
In view of the AI challenge, we
may add that it is not only about nature anymore; it is also about ourselves: to a large
extent, we will decide what we are and what we will be. Such a gigantic leap away
from our species’ natural state could be viewed as both progress and alienation:
a blessing and a curse. It could be called progress because we will be tremendously
enabled and empowered. We will be able to accomplish things that have previously
existed only in the realms of mythologies and dreams. As long as change makes
humans more adaptable to living environments, it can be regarded as progress.
It can also be alienation in the sense that we have forever left our natural home base.
The very idea of alienation is grounded on some kind of “essence,” something that we
humans are, should, or ought to be. As I have shown earlier in this essay, the presumed
9 Crutzen and Schwägerl (2011).
human essence has been largely elusive, something that we have presumed to possess,
taken comfort in thinking about, but have never discovered despite our perpetual
efforts throughout history. We can nevertheless speak of alienation from what we
are, or have been, supposed to be. We are no longer in the image of humanity as we
used to hold for ourselves. We are now very different. Just as today’s smart phone
generation cannot imagine life without their gadgets, once humanity has become
hybrid, we will no longer be able to return to our biological state. In this sense, such
alienation cannot be reversed. It is eternal.
These two claims relating to progress and alienation may sound contradictory. But
they take place in the same process. Progress always involves some kind of loss. Let’s
take one example: ancient stories depict a lovesick girl or boy in a torment of yearning
for a physical letter from his or her lover. Modern communication technologies have
put an end to such scenarios. So now, novelists have to find other ways to eulogize
such human sentiments. Here’s another. It is very easy to travel overnight from New
York to Shanghai. In doing so, we have lost the poetic experience of traveling with
fellow passengers on a month-long journey by sea. The ancient book, the Zhuangzi
records an interesting story. Confucius’s disciple Zigong had a conversation with a
gardener, who watered his garden by fetching water with a jar from a well. Zigong
told him that there are machines for such purposes and they can irrigate a hundred
plots a day, with very little labor. “Why do you not use such a machine?” Zigong
inquired. The gardener made an angry grimace and said with a laugh:
I have heard from my teacher that where there are ingenious contraptions, there are sure to be
ingenious affairs, and where there are ingenious affairs, there are sure to be ingenious minds.
When one harbors an ingenious mind in one’s breast, its pure simplicity will be impaired.
When pure simplicity is impaired, the spiritual nature will be unstable. He whose spiritual
nature is unsettled will not be supported by the Way. It’s not that I am unaware of such things,
rather that I would be ashamed to do them. (Zhuangzi, Ch. 11)10
The gardener represents the kind of people who hold on tightly to what they are,
or what they think they are, resisting any departure from their perceived nature of
simplicity. The gardener is right that by accepting new technologies people would
have to give up their original state. There is alienation. We cannot have both. If a state
of simplicity is deemed essential to being human or to the good life, then accepting
new technologies poses too much of a risk and is not worth it. However, the gardener’s
philosophy is sound only if we accept that our original state of simplicity is essential
to what we humans are. Without such a presumption, then there is little reason for us
not to accept new changes. The alienation arising from the influx of AI technology
into our existence should not be deplored given that we were never what we thought
we were. It should be celebrated.
10 Mair (1994, p. 111).
Confucian AI Ethics
The emergence and rapid advancement of AI technology has brought unprecedented
challenges to ethics. Whether we can successfully address these challenges will have
a profound impact on our lives and humanity’s future. For one thing, we now need
to consider whether and how to treat advanced AI beings as ethical beings before we
decide what to do with them in other ways.
As a traditional enterprise, ethics has been developed on the basis of a purportedly well-established order in the world. The world has been roughly categorized
into humanity, living organisms, and other things, either through evolution, divine
creation, or other processes. Early conceptions of ethics, especially in the West,
made humanity the only moral agent and moral patient. All other beings were
considered outside of a moral domain. The rise of environmental consciousness
has changed that. Some contemporary ethicists have expanded the moral domain to
include living organisms too. However, there is disagreement about the exact moral
standing of various organisms. Some insist that only humans matter on a moral level.
Some argue that the moral domain should include other “higher” life forms such
as horses and dogs. Yet still others argue that all life forms are morally relevant, as
life itself possesses intrinsic value. Despite these disagreements, people are pretty
clear about what categories they are dealing with. Animals are animals and trees
are trees. Even though the seventeenth century French philosopher René Descartes
famously degraded animals to machines (because they do not have souls), he was firm
about creating a dividing line between humans (thinking beings) and everything else
(spatially extended objects). His cogito, or “I think,” stands for intelligence, which is
exclusively reserved for humanity. AI technology poses a gigantic challenge about
humanity’s distinctiveness. It has brought us an entirely different “animal,” so to
speak. AI beings are machines, yet of a very different kind, ones with intelligence.
The emergence of AI beings has shaken the foundations and understanding of the
world: the one on which traditional ethics is founded. It calls us to re-think ethics in
an entirely new light.
When we think about robotic ethics, we first must consider whether the intelligence characteristic of AI beings qualifies them to be seriously considered as moral
patients. These are beings with moral standing, towards which other moral agents
(like normal humans) have moral responsibilities. A human person is a moral patient;
we should treat a person with respect. A bicycle, on the other hand, is not a moral patient.
Even though I may take good care of my bicycle, or treat your bicycle gently out of
my moral obligation to you, I do not have a moral obligation to a bicycle. Some environmental ethicists demand that we go beyond humanity to include animals and other
living things as moral patients, because animals can suffer, or they have dignity, or
life itself possesses intrinsic value. Others insist that only rational beings have moral
standing. According to the eighteenth century German philosopher Immanuel Kant,
only humans have moral standing. He argued that this was not because we are a
biological species but because we are rational: that rationality grounds moral qualification. Some ethicists regard non-human intelligent species, such as dogs, dolphins
and elephants, as candidates for being moral patients because of the affinity between rationality and intelligence.11 Now AI beings have joined the same camp. Since AI beings
are intelligent, we should treat them accordingly. As far as morality is concerned,
this implies that AI beings should be treated as moral patients. A 2007 story by Joel
Garreau of the Washington Post reported a U.S. Army officer calling off a robotic
land-mine-sweeping experiment when the robot kept crawling along despite losing
its legs one at a time. The officer declared that the test was inhumane.12 This scenario
is similar to one in which military personnel care about their K-9s. Being intelligent
makes advanced AI beings closer to us and arouses our sympathy. We are moved
towards accepting them as moral patients. Yet, it is by no means clear how exactly
we should treat AI beings as moral objects within society. After all, AI beings are
machines, not living things. They do not have a life of their own like a dog, nor do
they have the capacity to suffer (unless we make them so) as living organisms do.
Their intelligence is also not uniform. AI beings cover a wide spectrum that ranges
from a barely intelligent automated robot, to the highly sophisticated Kengoro, the
most advanced humanoid robot yet. Even if we agree that sophisticated AI beings
can and should be considered moral patients, it is still far from being evident how
they should be treated. For example, what regulations should we adopt to manage
the working conditions of AI doctors, now that more and more are being deployed
across the world? Should there be minimum requirements as we have for human
doctors? This question will become increasingly pressing as AI technology incorporates emotions as well as intelligence, making AI beings more and more like our fellow
human beings.
Furthermore, recent discussions about AI ethics have mainly centred on AI beings
as moral agents. A moral agent is a being capable of understanding right and wrong
and choosing to act accordingly. Traditionally, ethics has confined moral agency
only to humanity. We hold humans morally responsible, but not animals. Even
though many environmental ethicists accept animals as moral patients, few if any
take animals as moral agents for the obvious reason that animals cannot discern moral
right from wrong. Can AI beings have moral agency? Sophisticated AI devices have
the capacity to make decisions that not only have economic consequences but also
moral ones. The US Army has begun developing AI hardware and software that
will make robotic fighting machines capable of following the ethical standards of
warfare.13 Such capacities to be developed in the future are not merely limited to
programming possible courses of action in a limited range of anticipated circumstances; they can also be open-ended systems that gather and synthesize information, predict and assess the consequences of available options, and deploy appropriate
responses. Such capacities make advanced AI systems more like autonomous people
than mere machines as traditionally understood. The “Moral Machine” experiment
at MIT’s Media Lab poses a moral dilemma for autonomous cars. Should such a
11 See Baron (1985).
12 Wallach and Allen (2009, p. 63).
13 Wallach and Allen (2009, p. 30).
car divert its path to avoid hitting a child but hit a grandmother instead?14 Should
autonomous cars be designed and enabled to do this? One may think that in cases
like this it is still humans making decisions for AI beings as we program them. But
then, haven’t we been similarly socially programmed to avoid killing a human at the
expense of, say, a cat? Advanced AI beings surely can be trained as human beings
can.
The rapid advancement of AI capacities in decision-making brings such sophisticated beings increasingly closer to being moral agents. It calls for integrating ethics
into the way that advanced AI systems are designed. This dimension poses perhaps
the greatest challenge to moral philosophy, in part because ethicists themselves do not
agree on what kind of normative ethics is the best or most appropriate. Deontologists
hold that ethics must be based on moral duty. Utilitarians advocate courses of action
that bring about the most desired consequences. Virtue ethicists want AI robots to
behave in ways that demonstrate good virtues. You might think that we should go
with the lowest common denominator and use the “no harm” principle as the ethical
requirement for advanced AI systems. But this might not be so easy either, as some
systems are designed precisely to do harm. Currently, an undeclared arms race is
already underway between major military powers to develop autonomous robots to
replace human soldiers in battlefields. These killing AI devices will definitely make
the world even more dangerous and prone to armed conflicts. Should there then be
an international ban on designing and producing combat AI machines?
These questions do not have easy answers. Yet, all these challenges must be taken
seriously and addressed appropriately if we wish to maintain a world in which humans
flourish. The task has become even more pressing due to the fast advancement of
AI technology and its enormous impact. Unlike evolutionary processes, which take
a gradual course, AI systems will quickly occupy every corner of the world and
change virtually every aspect of our lives. Indeed, our phones and computers already
come with in-built AI technologies. This affects humanity in fundamental ways. It
could determine our fate in important ways. In this sense, successfully addressing AI-related ethical challenges is even more difficult than addressing AI-related technical
challenges. Understanding the pressing nature of such challenges is an important
step that we must take without delay.
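The disagreement among normative theories described above can be made concrete with a minimal Python sketch. The action names, harm counts, and welfare scores below are invented purely for illustration (none come from the text); the point is only that the same set of options yields different choices under a duty-based rule that forbids harmful actions outright and a consequence-based rule that maximizes net welfare.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms: int       # number of beings the action would harm
    benefit: float   # aggregate welfare the action would produce

def deontic_choice(actions):
    """Duty-based: discard any action that violates the 'no harm' duty,
    then pick among what remains; may refuse to act entirely."""
    permitted = [a for a in actions if a.harms == 0]
    return permitted[0] if permitted else None

def utilitarian_choice(actions):
    """Consequence-based: pick whichever action yields the best net
    outcome, even if it causes some harm along the way."""
    return max(actions, key=lambda a: a.benefit - a.harms)

options = [
    Action("divert", harms=1, benefit=5.0),  # saves more, but harms one
    Action("stay",   harms=2, benefit=0.0),
    Action("brake",  harms=0, benefit=1.0),  # harmless but less beneficial
]

print(deontic_choice(options).name)      # "brake": the only harmless option
print(utilitarian_choice(options).name)  # "divert": best net welfare
```

The two rules diverge precisely on dilemma cases of the kind the Moral Machine experiment studies, which is why the choice of normative framework cannot simply be deferred to engineering.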
What kind of values should we instill into AI beings? At some point, humans
will be living with humanoid robots, as peers in many ways. Humanoid robots will
make their own decisions while carrying out tasks. As their creators, we humans
should ensure that their decision-making is guided by some sort of ethics or moral
principles, similar to what we wish to see in other humans. What kind of robotic
ethics will be both feasible and appropriate to our moral sensitivities? In order to
address these questions, we need to first determine the nature of robotic ethics. In
this regard, Confucian moral philosophy can provide a useful resource. Confucian
14 See the “Moral Machine” experiment by the MIT Media Lab (https://www.technologyreview.
com/s/612341/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem/).
Accessed on 30 December 2018.
understandings of ethics include two approaches, represented by the ancient philosophers Mencius and Xunzi respectively. Mencius held that humans are unique in that
we are born with a humane heart (xin 心). Fostering this humane heart will make us
authentically human and lead us towards a moral human life. We may call this “the
soft-heart approach.” In this view, ethics is about developing and following a humane
heart. Xunzi held that humans become moral by acquiring social norms from society.
Humans do not possess morals when they are born; they are later “programmed” to
act morally. We may call this approach “the social wiring approach.”
Both Mencius and Xunzi emphasized humanity’s distinctiveness. Mencius
focused on the distinctiveness of the human heart, which only humans possess and
animals do not. He famously claimed,
The reason why I say that humans all have hearts that are not unfeeling toward others is this.
Suppose someone suddenly saw a child about to fall into a well: everyone in such a situation
would have a feeling of alarm and compassion—not because one sought to get in good with
the child’s parents, not because one wanted fame among their neighbors and friends, and not
because one would dislike the sound of the child’s cries. From this we can see that if one is
without the heart of compassion, one is not a human. If one is without the heart of disdain,
one is not a human. If one is without the heart of deference, one is not a human. If one is
without the heart of approval and disapproval, one is not a human. The heart of compassion
is the sprout of benevolence. The heart of disdain is the sprout of righteousness. The heart
of deference is the sprout of propriety. The heart of approval and disapproval is the sprout
of wisdom. People having these four sprouts is like their having four limbs.15 (2A6)
Mencius maintained that the difference between “humanity” and “beasts” lies in the possession, or lack, of this kind heart. For Mencius, “humanity” does not refer to the biological species but to beings that possess morality. Only beings with this kind heart are moral beings, and developing such a heart is to continue on the track of moral existence. In this sense, Mencius was not drawing a line between homo sapiens on the one hand and other beings on the other. He defined the difference between
ethical beings and non-ethical beings in terms of possessing or not possessing a kind
heart. AI beings could, therefore, be included in the ethical realm, if they are given
such a “heart.” With such a Mencian heart, AI beings will function morally in society
as moral humans do.
Mencius’s “xin 心” is of course not the physical heart in the body. It is the psychological, emotional capacity to care about and to be kind towards others. If we follow
a Mencian approach towards robotic ethics, we may install a “no-hurting” principle in humanoid robots as an overriding mechanism, ensuring that they cannot carry out destructive actions that harm humanity, other life forms, or one another. We will have something similar to what the writer John Havens
has called “Heartificial Intelligence.”16 This approach may be most appropriate for
medical AI beings. In medical care, the primary principle is traceable to the Hippocratic Oath: “First do no harm” (Primum non nocere). If they take this principle on board, AI beings will act in ways that align with Mencius’s philosophy.
15 Van Norden (2008, p. 46).
16 Havens (2016).
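The overriding mechanism described above can be sketched in code. The following is a minimal illustration under stated assumptions, not an implementation from the text; all names (`Action`, `would_harm`, `MencianGuard`) and the harm flags are hypothetical.

```python
# A minimal sketch (all names assumed) of a Mencian "no-hurting" override:
# before any action is carried out, a no-harm check can veto it.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humans: bool = False   # predicted harm to humans
    harms_life: bool = False     # predicted harm to other life forms
    harms_robots: bool = False   # predicted harm to other robots

def would_harm(action: Action) -> bool:
    """The overriding check: any predicted harm vetoes the action."""
    return action.harms_humans or action.harms_life or action.harms_robots

class MencianGuard:
    """Wraps action execution with the no-harm veto, which fires first."""
    def execute(self, action: Action) -> str:
        if would_harm(action):
            return f"refused: {action.name}"
        return f"performed: {action.name}"

guard = MencianGuard()
print(guard.execute(Action("fetch medicine")))                       # performed
print(guard.execute(Action("restrain patient", harms_humans=True)))  # refused
```

In medical care this corresponds to “First do no harm”: the check runs before, and independently of, whatever goal the machine is pursuing.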
However, at a fundamental level, AI machines will be Xunzian in the sense that
ethics will be socially made or artificial (wei 伪) as Xunzi vigorously argued in his
time. The ancient Chinese philosopher also emphasized human distinctions, but in a
very different way. He said,
Water and fire have qi but are without life. Grasses and trees have life but are without
awareness. Birds and beasts have awareness but are without standards of righteousness.
Humans have qi and life and awareness, and moreover they have yi. And so they are the most
precious things under Heaven. They are not as strong as oxen or as fast as horses, but oxen
and horses are used by them. How is this so? I say: It is because humans are able to form
communities (qun 群) while the animals cannot.17 (Xunzi 9.9)
Xunzi held humanity to be the most precious being in the entire world. He identified two features behind humanity’s distinctiveness. One is yi 义, the sense of
moral appropriateness; the other is qun 群, the ability to form communities. Both
are needed for humans to flourish. Yi gives humans not only a sense that we need to be ethical but also a sense of what is ethically appropriate. Xunzi believed that human
xing (natural tendencies) is bad and that following xing will lead society to chaos
and ruin. Without social construction by the sages, yi in its inborn state does not
enable people to overcome xing towards goodness.18 The American philosopher
and sinologist, David Nivison, identified “yi” as Xunzi’s source of morality, but he
interpreted it as an ability of intelligence and insisted that yi does not have “any
particular content.”19 Nivison’s interpretation makes Xunzi’s yi more like a capacity
for humans to use their intelligence to think in rational or reasonable ways, and makes it instrumental towards forming communities, which Xunzi identified as a separate
human distinction. In order to form communities, humans need to devise rules of
ritual propriety (li 礼), rules of what is appropriate and conducive towards flourishing
communities. Rather than the humane heart, Xunzi emphasized the importance of
ethical rules in regulating society. A Xunzian approach to AI ethics would primarily
be about devising effective rules.
If we follow a Xunzian approach, we can construct rules to guide AI actions
without the need for an overriding principle to protect humanity, other life forms in
the world, or other humanoid robots. We may give AI rules to obey as driverless cars
obey traffic rules. AI beings will aim to accomplish their assigned goals, regardless
of the nature of these goals. Perhaps both the Mencian and Xunzian approaches can
be used depending on the kind of robots in question. The Mencian approach would
seem to make more sense for robots that care for humans, whereas the Xunzian approach would be more suitable for many other purposes. Mencius has typically been given a more prominent status than Xunzi throughout the history of Confucian
philosophy. However, recent Confucian studies seem to have been gravitating more
and more towards Xunzi. As far as AI ethics is concerned, these two approaches can
be integrated in working with advanced AI beings in the future.
17 Hutton (2014, 76).
18 For more discussion, see Chenyang Li (2011).
19 Nivison (1996, pp. 207, 201).
The science-fiction writer Isaac Asimov (1920–1992) proposed three fundamental laws to guide the behavior of robots. They are:
1. A robot may not injure a human being or, through inaction, allow a human being
to come to harm.
2. A robot must obey orders given it by human beings except where such orders
would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not
conflict with the First or Second Law.20
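The precedence structure of these laws — each later law binds only where it does not conflict with an earlier one — can be sketched as an ordered check. This is an illustrative reading, not Asimov's or the author's formalization; the dictionary keys are assumptions.

```python
# A minimal sketch (assumed representation) of the three laws as an ordered
# check: Law 1 is absolute, Law 2 yields to Law 1, Law 3 yields to Laws 1-2.

def acceptable(action: dict) -> bool:
    # Law 1: never harm a human (and never allow harm through inaction).
    if action.get("harms_human"):
        return False
    # Law 2: obey orders, unless the order itself would violate Law 1.
    if action.get("disobeys_order") and not action.get("order_harms_human"):
        return False
    # Law 3: preserve oneself, unless doing so conflicts with Laws 1 or 2.
    if action.get("self_destructive") and not action.get("required_by_laws_1_2"):
        return False
    return True

print(acceptable({"harms_human": True}))                               # False
print(acceptable({"disobeys_order": True, "order_harms_human": True})) # True
print(acceptable({}))                                                  # True
```

The ordering of the `if` clauses is what encodes the precedence: a lower law is only consulted once the higher laws are satisfied or waived.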
Asimov’s first law clearly carries a Mencian motif. Even though it is in the form
of a principle, it manifests the paramount importance of no-harm to humans. Such a
requirement reflects a human value deeply rooted in the kind of ultimate concern that
Mencius articulated. Yet, these laws are in the form of rules, as Xunzi would have
opted for. Today’s AI ethics needs to be much more sophisticated and far-reaching
than Asimov’s three laws. However, no matter how advanced AI ethics becomes, it
seems fitting to incorporate the two orientations that Mencius and Xunzi initiated.
We may reformulate their principles this way: First, advanced AI beings must benefit
humanity as their primary goal. Second, their activities must be regulated by rules on
the basis of the first principle. AI beings can act to benefit themselves, but their prime
purpose is humanity’s benefit and that must ground all other considerations regarding
AI. Furthermore, AI beings will have to be regulated by rules that are consistent with
the first principle. These two principles combined make it possible for AI beings to coexist with humans as part of, or an extension of, human communities.
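The two reformulated principles can be sketched as a small decision procedure: the primary benefit-humanity test grounds everything, and the rules of the second principle apply only on that basis. All names and the example rule below are illustrative assumptions, not formulations from the text.

```python
# A minimal sketch (all names assumed) of the two principles: (1) benefiting
# humanity is primary; (2) rules regulate activity on the basis of (1).

def benefits_humanity(effects: dict) -> bool:
    """First principle: the net effect on humanity must not be negative."""
    return effects.get("humanity", 0) >= 0

def permitted(action: str, effects: dict, rules: list) -> bool:
    """Second principle: social rules apply, grounded on the first principle."""
    if not benefits_humanity(effects):
        return False                # the first principle grounds everything
    return all(rule(action, effects) for rule in rules)

# An example Xunzian-style social rule, like a driverless car obeying traffic law.
rules = [lambda a, e: a != "run red light"]

print(permitted("deliver supplies", {"humanity": 1, "self": 1}, rules))  # True
print(permitted("run red light", {"humanity": 0}, rules))                # False
print(permitted("hoard resources", {"humanity": -1, "self": 5}, rules))  # False
```

Self-benefit (the hypothetical `self` key) is allowed under this sketch, but only where the humanity test passes first, matching the ordering in the text.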
AI’s Effect on Confucian Philosophy
The development of AI technology has not only raised important questions for ethics
but also for philosophy in general. Like other philosophies, Chinese philosophy developed over thousands of years without facing the questions that AI now raises. Today, it must address new challenges. It must deal with a reality in which humanity’s distinctiveness has either disappeared or become less and less discernible. It must consider a future society in which AI beings are important participants. In this section, I make two arguments. The first is that some important characteristics of Confucian philosophy, such as the doctrine of “graded love,” allow it to adjust moral requirements in response to AI challenges. The second is that Confucian philosophy itself needs to be adjusted to meet AI’s challenges. I use the Confucian theme of learning to demonstrate this.
In a way, Confucian philosophy may have less difficulty meeting AI’s challenges
than some of its western counterparts. For example, Kantian ethics is grounded
on rationality; rationality is supposed to be universally uniform across all rational
beings. Even though Confucians accept the universal endowment of moral potential
20 Wallach and Allen (2009, p. 12).
in all humans, they do not assume that all human beings have achieved the same
level of moral refinement and allow for differentiation in moral attainment. On the
basis of Kantian ethics, advanced AI beings would be either included in or excluded from the moral domain. If they are included, they should be treated equally with all
moral beings, without differentiation. By allowing differentiated degrees of moral
attainment and moral standing, Confucians seem to have more room to accommodate
and embrace AI beings as moral agents. For one thing, they can accept AI beings as moral beings without having to treat them as our full equals, which seems a more intuitively sound approach.
Confucian philosophy has prioritized two cardinal values, ren (仁) and li (礼).
Ren stands for a comprehensive virtue that is grounded on a caring heart. Li stands
for a social or cultural grammar that regulates human behavior. There is also another
important virtue that has been advanced by Confucian thinkers but has not received
the recognition it deserves: he (和), harmony. Confucian ethics promotes graded love,
in the formulation of “qinqin, renmin, aiwu” (Mencius 7A45).21 Namely: being affectionate towards parents (family), caring about people, and valuing things charitably.
The ai in the last statement can be read as “to value”; it also implies a sense of “xi
惜,” to utilize prudently.22 The “wu” in “aiwu” includes all things with value in the
world. These Confucian ethical requirements are graded. “Aiwu” is the minimum
requirement and the least difficult to practice. Renmin is more demanding; it encompasses the requirement of gentle treatment but extends beyond aiwu as it requires the
agent to care about people in a loving manner. The most demanding is qinqin, as it
not only encompasses caring about family in a loving manner but also being affectionate towards them. To put it another way, one cannot care about people without a
gentle attitude in the first place; one cannot be affectionate towards family without
being able to cherish and care. All these positive attitudes towards family, people,
and things in the world need to be fostered through moral cultivation. These three
categories differ in degrees and in intensity. Being affectionate implies care; care
implies appreciation. Environmental ethicists argue that (at least) higher forms of
animals should be taken as moral patients, even though not as moral agents because
they do not possess the capacity to make moral decisions. Confucians may or may
not accept animals as moral patients in the full sense of the term, but they can accommodate the call to treat animals gently under the category of “aiwu.” Such an attitude
towards animals can be easily extended to AI beings. However, AI beings pose additional challenges to ethics because they are capable of making ethically relevant
decisions. This means that we should not treat AI beings solely under the category
of “aiwu.” We should consider them moral agents, at least some of them and to varying degrees. That makes them candidates for the category of “renmin.” We should
21 亲亲, 仁民, 爱物. https://ctext.org/mengzi/jin-xin-i/zh. Accessed 31 December 2018.
22 For instance, the classic commentator Zheng Xuan 郑玄 (127–200 AD) interprets “ai” in “ai
mo zhu zhi 爱莫助之” in the Book of Poetry as “xi 惜.” Zhu Xi also uses “xi” to interpret “ai” in
Confucius’s comment that “you ai the sheep but I ai the ritual 尔爱其羊, 我爱其礼” in the Analects
3.17.
not only appreciate AI beings, but also cherish and care about them, treating them
as moral agents as well as moral patients.
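The graded structure just described — aiwu as the minimum, renmin more demanding, qinqin the most demanding, with each higher grade entailing the lower ones — can be pictured as an ordered scale. The assignment table below is a hypothetical illustration, not a claim from the text.

```python
# A minimal sketch (all names assumed) of the graded-love scale: each higher
# grade of moral regard entails every lower one.
from enum import IntEnum

class Regard(IntEnum):
    AIWU = 1     # valuing things and using them prudently
    RENMIN = 2   # caring about people in a loving manner
    QINQIN = 3   # being affectionate towards family

def entails(higher: Regard, lower: Regard) -> bool:
    """Affection implies care; care implies appreciation."""
    return higher >= lower

def regard_for(being: str) -> Regard:
    """Hypothetical assignment: AI beings capable of ethically relevant
    decisions are candidates for renmin, not merely aiwu."""
    table = {
        "tool": Regard.AIWU,
        "animal": Regard.AIWU,            # gentle treatment under aiwu
        "decision-making AI": Regard.RENMIN,
        "family member": Regard.QINQIN,
    }
    return table.get(being, Regard.AIWU)

print(regard_for("decision-making AI").name)  # RENMIN
```

The ordering, not the particular table, is the point: treating decision-capable AI beings under renmin rather than aiwu is a move up this scale.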
As a living tradition, Confucianism has been adjusting itself in response to the
times. Its gradual acceptance of democracy is one such example. Early on, Mencius
advocated the idea that people are the foundation of the state. But democratic ideas
did not really emerge in the tradition until thinkers such as Li Zhi 李贽(1527–1602)
and Huang Zongxi 黄宗羲 (1610–1695). AI technology presents another challenge
for Confucianism. One such challenge relates to meritocracy. Confucianism originally gained prominence in part because it rose up against the previous hereditary
system and advocated meritocracy.23 Confucian meritocracy places the basis of social
mobility on people’s virtue and knowledge/ability. Virtue and knowledge have to be
acquired through hard work under appropriate conditions. Consequently, serious
learning has been a key virtue and a moral requirement in the Confucian tradition.
Confucius took learning as the most important way to become a good person. He
set a role model for people, as described in the Analects: “At fifteen I set my heart on learning; at thirty I established myself; at forty I was beyond perplexity; at fifty I knew the mandate of heaven; at sixty I was at harmony; at seventy I could follow my heart without transgressing boundaries.”24 Learning was an enjoyable activity: “Is it not a pleasure to learn and practice what is learned in a timely fashion?”25 He
lamented: “The ancients learned for their self-improvement, whereas nowadays people learn for others.”26 He said: “What worries me is that people do not cultivate virtues, they learn without exchanging ideas, they learn the right things without following them, and they do not change even though they are not good.”27 Confucius
valued learning so highly in part because we can only become knowledgeable and useful members of society through learning. The AI era may change this paramount
Confucian requirement. The futurist Ray Kurzweil predicted that AI would make it
possible for people to link their brains with a computer. He said: “The most interesting thing will be for your neocortex to extend itself with synthetic neocortex in the
cloud. Ultimately our thinking will be predominated by the synthetic neocortex.”28
The neocortex is the region of the brain that is associated with human mental functions. AI technology will extend the human brain beyond its biological limitations.
Kurzweil even predicted that the additional neocortex would be stored in the cloud,
which would allow unlimited expansion. If this comes to pass, then meritocracy may
no longer have to depend on knowledge in the traditional sense. Confucians will
have to reconsider their view about the value and importance of knowledge-based
23 For discussions of Confucian democracy, see Bell and Li (2013).
24 吾十有五而志于学, 三十而立, 四十而不惑, 五十而知天命, 六十而耳顺, 七十而从心所欲,
不逾矩。《( 为政第二》). Following Liao Mingchun, I read 耳 as 聏 and translate it as harmony. See
Liao 廖名春《, 孔子真精神: 论语疑难问题解读》, Guiyang: KongXueTang Shuju (2014).
25 学而时习之, 不亦说乎? (《学而第一》).
26 古之学者为己, 今之学者为人。《( 宪问》).
27 德之不修, 学之不讲, 闻义不能徙, 不善不能改, 是吾忧也。《( 述而第七》).
28 https://www.inverse.com/article/33373-ray-kurzweil-singularity-thoughts. Accessed 17
December 2018.
learning. They will probably shift more towards virtue learning and towards practical
abilities in life.
Of course, at this stage, any proposal for AI ethics is tentative. Confucian ethics
is not merely a product of pure contemplation or speculation; it is generated through
both reason and feeling. Developing such an ethics requires us to draw on actual experience. For that reason, a Confucian AI ethics has yet to be worked out more adequately and fully, alongside the further development of, and interaction with, AI beings.29
References
Baron, Jonathan. 1985. Rationality and intelligence. Cambridge: Cambridge University Press.
Bell, Daniel, and Chenyang Li. 2013. The East Asia challenge for democracy: Political meritocracy
in comparative perspective. New York, NY: Cambridge University Press.
Bodkin, Henry. 2019, January 30. Robot that thinks for itself from scratch brings forward rise of the self-aware machines. Science Robotics.
Chenyang Li. 2011. The origin of goodness in Xunzi. Journal of Chinese Philosophy 38:46–63.
Clark, Andy, and David J. Chalmers. 1998. The extended mind. Analysis 58: 7–19.
Crutzen, P. J., and C. Schwägerl. 2011. Living in the Anthropocene: Toward a new global ethos. Yale
Environment 360. http://e360.yale.edu/feature/living_in_the_anthropocene_toward_a_new_glo
bal_ethos/2363/. Accessed 12 December 2018.
Hamada, Shogo, et al. 2019, April 10. Dynamic DNA material with emergent locomotion behavior
powered by artificial metabolism. Science Robotics 4 (29): eaaw3512. https://doi.org/10.1126/
scirobotics.aaw3512. https://robotics.sciencemag.org/content/4/29/eaaw3512. Accessed 13 April
2019.
Havens, John. 2016. Heartificial intelligence: Embracing our humanity to maximize machines.
Tarcher/Penguin.
Hayes, Matt. 2019, April 10. Engineers create ‘lifelike’ material with artificial metabolism. Cornell Chronicle. http://news.cornell.edu/stories/2019/04/engineers-create-lifelike-material-artificial-metabolism. Accessed 13 April 2019.
Hutton, Eric, trans. 2014. Xunzi: The complete text. Princeton and Oxford: Princeton University
Press.
Kurzweil, Ray. 2005. The singularity is near. New York: Penguin Group.
Mair, Victor. 1994. Wandering on the way: Early Taoist tales and parables of Chuang Tzu. New
York: A Bantam Book.
Nivison, David. 1996. Hsün Tzu on ‘Human Nature.’ In The ways of Confucianism: Investigations
in Chinese philosophy, ed. Bryan Van Norden, 207, 201. Chicago: Open Court.
Van Norden, Bryan W., trans. 2008. Mengzi: With selections from traditional commentaries.
Indianapolis: Hackett Publishing Company, Inc.
Wallach, Wendell, and Colin Allen. 2009. Moral machines: Teaching robots right from wrong.
Oxford and New York: Oxford University Press.
29 The research of this essay was supported by a Tier-1 research grant from Nanyang Technological University (#RG114/20).
    The Artificial Intelligence Challenge And The End Of The World

    • 1. See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/354471279 The Artificial Intelligence Challenge and the End of Humanity Chapter · September 2021 DOI: 10.1007/978-981-16-2309-7_3 CITATION 1 READS 513 1 author: Chenyang Li Nanyang Technological University 118 PUBLICATIONS   1,521 CITATIONS    SEE PROFILE All content following this page was uploaded by Chenyang Li on 06 June 2023. The user has requested enhancement of the downloaded file.
    • 2. The Artificial Intelligence Challenge and the End of Humanity Chenyang Li The title of this essay has a twofold meaning, as does the word “end.” The word “end” means the last part of an extended thing or a period of time. A cessation. “End” also means the purpose and goal of an effort or a course of action. Bearing in mind this double meaning, I will spend this essay arguing that firstly, the emergence and the rapid advancement of Artificial Intelligence technology means the end of humanity in an important sense. We will irreversibly lose the special status that we have claimed to possess. I will secondly argue that we should develop AI technology to serve our purposes and to that end, we should make advanced AI beings as ethical as possible as we see fit. This essay consists of four sections in response to the four clusters of issues proposed in the Berggruen AI research project. The first section argues that, in an important sense, the AI challenge means that humanity is at the exit door when it comes to its essential distinctiveness. The second section examines the implications of such a change both as a form of progress and alienation. Section three addresses questions about how to make AI beings moral. The last section explores how AI technology may affect Chinese philosophy in important ways. Failure in Search for the Human Essence According to legend, the Temple of Apollo at the sacred site of Delphi in ancient Greece displayed the inscription “Know Thyself” (γνîθι σεαυτo´ν). This motto has been taken to mean that we humans should seek to understand what we are. The ancient Greek philosopher Socrates followed this motto in his lifetime search (in the C. Li (B) School of Humanities, Nanyang Technological University, 48 Nanyang Drive, HSS-03-89, Singapore 639818, Singapore e-mail: cyli@ntu.edu.sg © CITIC Press Corporation 2021 B. Song (ed.), Intelligence and Wisdom, https://doi.org/10.1007/978-981-16-2309-7_3 33
    • 3. 34 C. Li fifth century BC) to find out who and what he was. Such an effort is really about the nature of humanity. We humans have a universal need to try and understand who or what we are. One common underlying presumption is a belief in humanity’s distinctiveness, a conviction that humanity is different in some essential way from all other forms of being in the universe. The “essential” requirement in this effort is important, as the eighteenth century German philosopher Georg W. F. Hegel famously said. He argued that even if humans are the only beings with earlobes: that does not mean that having earlobes is an essential differentia for being human. Moreover, humans do not want to be merely distinctive, but distinctive in uplifting ways, in ways that make us not only unique but also special. Thinking along these lines, for example, Socrates developed his philosophy of the human soul. However, if we look at human history, such a wish has seemed forever un-filled. We had thought that we are special because our earth is in the center of the universe, which means that we are located in the center of the universe. But that center did not hold. The sixteenth century Copernican Revolution ruthlessly removed it from us. The Renaissance mathematician Nicolaus Copernicus’s discovery showed that our earth is not the center but merely a planet that revolves around the sun. At the time, his theory was strongly opposed by religious adherents because it was perceived as a threat to the status and hence the special identity of humanity. Since then, we have to recognize that we were never at the center of the universe. We had also thought that we are special because we were made in God’s image. The nineteenth century British naturalist Charles Darwin took that comforting thought away from us with his evolution theory. Darwin argued that humanity had evolved from less evolved species just like other species have. 
He said that how we look (our image) is an outcome of evolution, not out of a special design by a higher power. And for some, the final blow, psychologically at least, was delivered when the nineteenth century German philosopher Friedrich Nietzsche declared that: “God is dead!” The slogan implies that the divine is no longer a viable option for humanity to ground our special status in. We have to create meaning in our own lives. For a long time, possessing rationality seemed to be the only special thing we had left that set us apart from all other beings. But even that special feature has been challenged. The 19–20th century Austrian neurologist Sigmund Freud, known as the “father of psychoanalysis,” argued that the human mind consists of three components: the conscious, the preconscious, and the unconscious. The conscious, within which rationality presumably resides, represents but a small tip of a large iceberg containing the entire human psyche. The operation of the human mind is influenced largely by the preconscious and the unconscious. In other words, rationality does not play a major role in the operation of the human mind. As a result, in Freud’s view humans can hardly be defined as a rational animal. We may or may not accept Freud’s theory. But his challenge demonstrates that we cannot take it as a given that humans are rational animals. It is something that needs to be established. Some may take selfconsciousness as a distinctive human characteristic. Recently, scientists at Columbia University discovered that some AI beings have self-awareness, posting a direct challenge to a long-held belief about a human monopoly over self-consciousness.1 1 Bodkin (2019).
    • 4. The Artificial Intelligence Challenge and the End of Humanity 35 So, where does this series of events leave us? We are still searching for answers to the eternal question of “Know Thyself.” Freud’s theory has been dismissed by many in their attempt to hold on to rationality as the last straw. Rationality is a form of intelligence. Thinking and acting rationally is a function of intelligence.2 Most of us have been holding and/or hoping that at least we humans are more intelligent than all other beings. Indeed, our search for answers to the question of “Know Thyself” can be seen as humans using intelligence to create myths about the distinctiveness of humanity. At least in this regard, humans have been superior to all other beings. In other words, we have been distinctive in creating myths about our distinctiveness. But now, the moment has finally come: the AI challenge to the distinctiveness of human intelligence. Advancing AI technology will demonstrate that we humans are neither unique in possessing high intelligence nor the most intelligent beings in the world. Some advanced AI beings already surpass humans in intelligence. AlphaZero can beat the best human chess players. Some argue that we are approaching the point of singularity when AI technology overpowers humanity in intelligence.3 The AI challenge in intelligence is not merely a matter about intelligence levels. Unlike all other evolved natural species that have been compared with humanity, AI technology is not a natural occurrence and it can be adjusted expeditiously to match human capacities. The ancient Chinese philosopher Mencius famously said that humans are distinct from animals because we have a humane heart that enables us not to bear to see suffering. Advanced AI can now be programmed to stop proceeding with their tasks when they “perceive” suffering. They can even be programmed to take action to reduce sufferings. Think of medical robots created to care for patients, for example. 
The third century BC Confucian philosopher Xunzi argued that humans are different from other species because we can form society (qun 群). But that is no longer a human distinction. Military combat robots can certainly coordinate their actions. Indeed, social coordination is a must if they are going to be effective on the battlefield. The rise of AI technology has significantly reduced any distance between the human and non-human world. The fluidity of AI technology has made any attempted claim on human distinctiveness increasingly implausible, if not utterly impossible. Unlike our compared parties in nature, AI beings can be “customized to order,” so to speak. Anything that has been considered special and unique about humanity can be duplicated in AI technology. The emergence and rapid advancement of AI technology makes a satisfactory answer to the question of “Know Thyself” as elusive as ever. This compels us to rethink the question itself. Perhaps, to “know thyself” is not, or should not be, about looking for humanity’s distinctiveness. Perhaps it is about discovering that humanity is not so distinctive and learning to live with the consequences of such a discovery. Daoism is perhaps more ready to accept such a conclusion than many other philosophies. The Chinese Daoist text Daodejing records the ancient sage Laozi’s insight that “understanding others is intelligence and understanding thyself is wisdom.” Both understanding others and understanding oneself are about understanding humanity. 2 For a discussion of the relation between rationality and intelligence, see Baron (1985). 3 Kurzweil (2005).
    • 5. 36 C. Li Yet, he does not seem to stress the distinctiveness of humanity. Between humanity and other existing things in the world, there are differences without distinction. It is entirely possible, and even likely, that our future lies in integrating with advanced AI technology rather than maintaining our distinctiveness. The futurist Ray Kurzweil said: “We’re going to literally merge with this technology, with AI, to make us smarter. It already does. These devices are brain extenders and people really think of it that way, and that’s a new thing.”4 By using nanotechnology, we will be able to connect AI devices to the nerve systems of our brains, enabling AI-enhanced brains to operate much faster and more powerfully. Philosophers have already started contemplating the validity of “extended mind.”5 AI technology has now provided scientific and technological evidence for its feasibility and reality, in the form of extended brains. Humanity becomes a hybrid: a mix of what our biological species has to offer and what we decide to adopt from the AI technology. In that respect, “humanoid” does not only denote a non-human being that resembles humans, but also a human being mixed with non-human components. The current Chinese expression for advanced AI systems, “jiqi ren 机器人”—literally “machine-man/woman”— may be more appropriate for signifying such hybrid beings than its current usage for AI devices. A regular AI device is not a ren 人 until it integrates with a human. In the meantime, AI technology progresses along with biological engineering technology. Recently, scientists at Cornell University succeeded in a bottom-up construction of dynamic biomaterials powered by an artificial metabolism that represents a combination of irreversible biosynthesis and dissipative assembly processes. Using this material, they were able to program an emergent locomotion behavior resembling a slime mold. 
Dynamic biomaterials possess properties such as autonomous pattern generation and continuous polarized regeneration. It is reported that: “Dynamic biomaterials powered by artificial metabolism could provide a previously unexplored route to realize ‘artificial’ biological systems with regenerating and selfsustaining characteristics.”6 Metabolism and biosynthesis are characteristics of life. Dan Luo, professor of biological and environmental engineering at Cornell University’s College of Agriculture and Life Sciences said: “We are introducing a brandnew, lifelike material concept powered by its very own artificial metabolism. We are not making something that’s alive, but we are creating materials that are much more lifelike than have ever been seen before.”7 Shogo Hamada, a member of the Cornell research team, added that: “We are at a first step towards building lifelike robots by artificial metabolism.”8 When this kind of new technology is combined with AI technology, it will meet us from the opposite end of humans with “extended brains.” We may well see artificial organic AI beings coming to us as we see ourselves in a mirror. The distinction between humans and AI beings will further diminish. 4 https://www.wired.com/story/ray-kurzweil-on-turing-tests-brain-extenders-and-ai-ethics/. Accessed on 11 December 2018. 5 Clark and Chalmers (1998). 6 Hamada et al. (2019). 7 Hayes (2019). 8 Ibid.
    • 6. The Artificial Intelligence Challenge and the End of Humanity 37 If the above argument holds, then we are facing the real possibility that we will not only never find the answer to the ancient question “Know Thyself” in distinctive ways, but also witness the end of humanity as we know it. “Humanity” has always had a dual meaning. On the one hand, it stands for the biological species homo sapiens. On the other, it is a value-laden idea, standing for an ideal. Even though the primary existence of humanity is always as the biological species, humans have never been satisfied with being described as something like “featherless biped” beings, even though humans are perhaps the only such beings in the world. We have always wanted more than that. The call to “Know Thyself” is to urge us to pursue humanity in the value-laden sense. The end of humanity does not mean that humans will cease to exist as a biological species, but that humanity will forever lose its distinctiveness and uniqueness, its “essence.” The Greek word for essence, τo` τι´ Ãν εναι, literally means “the what-it-was-to-be” for a thing (or in Aristotle’s shorter version, τo` τι´ ™στι, “the what-it-is”). Perhaps humans do not have such an essence after all. That kind of humanity, one with a special essence, has ended. Humans will no doubt continue to search for answers to “Know Thyself,” but we may have to view human “essence” as a moving target rather than something that is a given, there to be discovered. For humanity, the upcoming AI era might be described as a loss of innocence or loss of self-regarded maturity. Either way, humanity, in its traditional sense, is finished. Progress as Alienation By blurring the distinction between humanity and non-human existents, AI technology allows us to extend human existence into new territories, not only spatially but also existentially. That is, we have become more than what our natural species has evolved to be and has to offer. 
In an important sense, we are what we create ourselves to be. Scientists now contemplate whether humanity has gone beyond the Holocene epoch (the time since the last Ice Age) and entered the Anthropocene epoch, in which humans transform nature. As 1995 Nobel Laureate Paul J. Crutzen and his coauthor C. Schwägerl put it: “It’s no longer us against ‘Nature.’ Instead, it’s we who decide what nature is and what it will be.”9 In view of the AI challenge, we may add that it is not only about nature anymore; it is also about ourselves: to a large extent, we will decide what we are and what we will be.

Such a gigantic leap away from our species’ natural state could be viewed as both progress and alienation: a blessing and a curse. It could be called progress because we will be tremendously enabled and empowered. We will be able to accomplish things that have previously existed only in the realms of mythologies and dreams. As long as change makes humans more adaptable to their living environments, it can be regarded as progress. It can also be called alienation in the sense that we have forever left our natural home base. The very idea of alienation is grounded in some kind of “essence,” something that we humans are, should, or ought to be. As I have shown earlier in this essay, the presumed human essence has been largely elusive, something that we have assumed ourselves to possess and taken comfort in thinking about, but have never discovered despite our perpetual efforts throughout history. We can nevertheless speak of alienation from what we are, or have been, supposed to be. We are no longer in the image of humanity that we used to hold for ourselves. We are now very different. Just as today’s smartphone generation cannot imagine life without their gadgets, once humanity has become hybrid, we will no longer be able to return to our biological state. In this sense, such alienation cannot be reversed. It is eternal.

These two claims relating to progress and alienation may sound contradictory, but they describe the same process. Progress always involves some kind of loss. Take one example: ancient stories depict a lovesick girl or boy in a torment of yearning for a physical letter from her or his lover. Modern communication technologies have put an end to such scenarios, so novelists now have to find other ways to eulogize such human sentiments. Here is another: it is now very easy to travel overnight from New York to Shanghai, but in gaining that convenience we have lost the poetic experience of traveling with fellow passengers on a month-long journey by sea.

The ancient book the Zhuangzi records an interesting story. Confucius’s disciple Zigong had a conversation with a gardener, who watered his garden by fetching water from a well with a jar. Zigong told him that there are machines for such purposes, which can irrigate a hundred plots a day with very little labor. “Why do you not use such a machine?” Zigong inquired. The gardener made an angry grimace and said with a laugh:

I have heard from my teacher that where there are ingenious contraptions, there are sure to be ingenious affairs, and where there are ingenious affairs, there are sure to be ingenious minds. When one harbors an ingenious mind in one’s breast, its pure simplicity will be impaired. When pure simplicity is impaired, the spiritual nature will be unstable. He whose spiritual nature is unsettled will not be supported by the Way. It’s not that I am unaware of such things, rather that I would be ashamed to do them. (Zhuangzi, Ch. 11)10

The gardener represents the kind of people who hold on tightly to what they are, or what they think they are, resisting any departure from their perceived nature of simplicity. The gardener is right that by accepting new technologies people have to give up their original state. There is alienation. We cannot have both. If a state of simplicity is deemed essential to being human or to the good life, then accepting new technologies poses too much of a risk and is not worth it. However, the gardener’s philosophy is sound only if we accept that our original state of simplicity is essential to what we humans are. Without such a presumption, there is little reason for us not to accept new changes. The alienation arising from the influx of AI technology into our existence should not be deplored, given that we were never what we thought we were. It should be celebrated.

9 Crutzen and Schwägerl (2011).
10 Mair (1994, p. 111).
Confucian AI Ethics

The emergence and rapid advancement of AI technology has brought unprecedented challenges to ethics. Whether we can successfully address these challenges will have a profound impact on our lives and humanity’s future. For one thing, we now need to consider whether and how to treat advanced AI beings as ethical beings before we decide what to do with them in other ways.

As a traditional enterprise, ethics has been developed on the basis of a purportedly well-established order in the world. The world has been roughly categorized into humanity, living organisms, and other things, whether through evolution, divine creation, or other processes. Early conceptions of ethics, especially in the West, made humanity the only moral agent and moral patient. All other beings were considered outside the moral domain. The rise of environmental consciousness has changed that. Some contemporary ethicists have expanded the moral domain to include living organisms too. However, there is disagreement about the exact moral standing of various organisms. Some insist that only humans matter morally. Some argue that the moral domain should include other “higher” life forms such as horses and dogs. Still others argue that all life forms are morally relevant, as life itself possesses intrinsic value. Despite these disagreements, people are pretty clear about what categories they are dealing with. Animals are animals and trees are trees. Even though the seventeenth-century French philosopher René Descartes famously degraded animals to machines (because they do not have souls), he was firm about drawing a dividing line between humans (thinking beings) and everything else (spatially extended objects). His cogito, or “I think,” stands for intelligence, which is exclusively reserved for humanity. AI technology poses a gigantic challenge to humanity’s distinctiveness.
It has brought us an entirely different “animal,” so to speak. AI beings are machines, yet of a very different kind: machines with intelligence. The emergence of AI beings has shaken the foundational understanding of the world on which traditional ethics is built. It calls on us to re-think ethics in an entirely new light.

When we think about robotic ethics, we first must consider whether the intelligence characteristic of AI beings qualifies them to be seriously considered as moral patients: beings with moral standing, towards which moral agents (like normal humans) have moral responsibilities. A human person is a moral patient; we should treat a person with respect. A bicycle, on the other hand, is not a moral patient. Even though I may take good care of my bicycle, or treat your bicycle gently out of my moral obligation to you, I do not have a moral obligation to a bicycle. Some environmental ethicists demand that we go beyond humanity to include animals and other living things as moral patients, because animals can suffer, or because they have dignity, or because life itself possesses intrinsic value. Others insist that only rational beings have moral standing. According to the eighteenth-century German philosopher Immanuel Kant, only humans have moral standing, not because we are a biological species but because we are rational: rationality grounds moral qualification. Some ethicists regard non-human intelligent species, such as dogs, dolphins and elephants, as candidates for being moral patients because of the affinity of rationality and intelligence.11 Now AI beings have joined the same camp. Since AI beings are intelligent, we should treat them accordingly. As far as morality is concerned, this implies that AI beings should be treated as moral patients. A 2007 story by Joel Garreau of the Washington Post reported a U.S. Army officer calling off a robotic land-mine-sweeping experiment when the robot kept crawling along despite losing its legs one at a time. The officer declared that the test was inhumane.12 This scenario is similar to one in which military personnel care about their K-9s. Being intelligent makes advanced AI beings closer to us and arouses our sympathy. We are moved towards accepting them as moral patients.

Yet it is by no means clear how exactly we should treat AI beings as moral objects within society. After all, AI beings are machines, not living things. They do not have a life of their own like a dog, nor do they have the capacity to suffer (unless we make them so) as living organisms do. Their intelligence is also not uniform. AI beings cover a wide spectrum that ranges from a barely intelligent automated robot to the highly sophisticated Kengoro, the most advanced humanoid robot yet. Even if we agree that sophisticated AI beings can and should be considered moral patients, it is still far from evident how they should be treated. For example, what regulations should we adopt to manage the working conditions of AI doctors, now that more and more are being deployed across the world? Should there be minimum requirements, as we have for human doctors? This question will become increasingly pressing as AI technology incorporates emotions as well as intelligence, making AI beings more and more like our fellow human beings.

Furthermore, recent discussions about AI ethics have mainly centred on AI beings as moral agents.
A moral agent is a being capable of understanding right and wrong and choosing to act accordingly. Traditionally, ethics has confined moral agency to humanity alone. We hold humans morally responsible, but not animals. Even though many environmental ethicists accept animals as moral patients, few if any take animals as moral agents, for the obvious reason that animals cannot discern moral right from wrong. Can AI beings have moral agency? Sophisticated AI devices have the capacity to make decisions that have not only economic consequences but also moral ones. The US Army has begun developing AI hardware and software that will make robotic fighting machines capable of following the ethical standards of warfare.13 Such capacities to be developed in the future are not merely limited to programming possible courses of action for a limited range of anticipated circumstances; they can also be open-ended systems that gather and synthesize information, predict and assess the consequences of available options, and deploy appropriate responses. Such capacities make advanced AI systems more like autonomous persons than mere machines as traditionally understood. The “Moral Machine” experiment at MIT’s Media Lab poses a moral dilemma for autonomous cars. Should such a car divert its path to avoid hitting a child but hit a grandmother instead?14 Should autonomous cars be designed and enabled to do this? One may think that in cases like this it is still humans making decisions for AI beings as we program them. But then, haven’t we been similarly socially programmed to avoid killing a human at the expense of, say, a cat? Advanced AI beings surely can be trained as human beings can. The rapid advancement of AI capacities in decision-making brings such sophisticated beings increasingly closer to being moral agents. It calls for integrating ethics into the way that advanced AI systems are designed.

This dimension poses perhaps the greatest challenge to moral philosophy, in part because ethicists themselves do not agree on what kind of normative ethics is the best or most appropriate. Deontologists hold that ethics must be based on moral duty. Utilitarians advocate courses of action that bring about the most desired consequences. Virtue ethicists want AI robots to behave in ways that demonstrate good virtues. You might think that we should go with the lowest common denominator and use the “no harm” principle as the ethical requirement for advanced AI systems. But this might not be so easy either, as some systems are designed precisely to do harm. Currently, an undeclared arms race is already underway between major military powers to develop autonomous robots to replace human soldiers on battlefields. These killer AI devices will certainly make the world even more dangerous and prone to armed conflicts. Should there then be an international ban on designing and producing combat AI machines? These questions do not have easy answers. Yet all these challenges must be taken seriously and addressed appropriately if we wish to maintain a world in which humans flourish.

11 See Baron (1985).
12 Wallach and Allen (2009, p. 63).
13 Wallach and Allen (2009, p. 30).
The task has become even more pressing due to the fast advancement of AI technology and its enormous impact. Unlike evolutionary processes, which take a gradual course, AI systems will quickly occupy every corner of the world and change virtually every aspect of our lives. Indeed, our phones and computers already come with built-in AI technologies. This affects humanity in fundamental ways; it could determine our fate in important ways. In this sense, successfully addressing AI-related ethical challenges is even more difficult than addressing AI-related technical challenges. Understanding the pressing nature of such challenges is an important step that we must take without delay.

What kind of values should we instill in AI beings? At some point, humans will be living with humanoid robots as peers in many ways. Humanoid robots will make their own decisions while carrying out tasks. As their creators, we humans should ensure that their decision-making is guided by some sort of ethics or moral principles, similar to what we wish to see in other humans. What kind of robotic ethics will be both feasible and appropriate to our moral sensitivities? In order to address these questions, we need first to determine the nature of robotic ethics. In this regard, Confucian moral philosophy can provide a useful resource. Confucian understandings of ethics include two approaches, represented by the ancient philosophers Mencius and Xunzi respectively. Mencius held that humans are unique in that we are born with a humane heart (xin 心). Fostering this humane heart will make us authentically human and lead us towards a moral human life. We may call this “the soft-heart approach.” In this view, ethics is about developing and following a humane heart. Xunzi held that humans become moral by acquiring social norms from society. Humans do not possess morals when they are born; they are later “programmed” to act morally. We may call this approach “the social wiring approach.”

Both Mencius and Xunzi emphasized humanity’s distinctiveness. Mencius focused on the distinctiveness of the human heart, which only humans possess and animals do not. He famously claimed:

The reason why I say that humans all have hearts that are not unfeeling toward others is this. Suppose someone suddenly saw a child about to fall into a well: everyone in such a situation would have a feeling of alarm and compassion—not because one sought to get in good with the child’s parents, not because one wanted fame among their neighbors and friends, and not because one would dislike the sound of the child’s cries. From this we can see that if one is without the heart of compassion, one is not a human. If one is without the heart of disdain, one is not a human. If one is without the heart of deference, one is not a human. If one is without the heart of approval and disapproval, one is not a human. The heart of compassion is the sprout of benevolence. The heart of disdain is the sprout of righteousness. The heart of deference is the sprout of propriety. The heart of approval and disapproval is the sprout of wisdom. People having these four sprouts is like their having four limbs.15 (2A6)

Mencius maintained that the difference between “humanity” and “beasts” lies with the possession of this kind heart, or the lack thereof.

14 See the “Moral Machine” experiment by the MIT Media Lab (https://www.technologyreview.com/s/612341/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem/). Accessed on 30 December 2018.
For Mencius, humanity does not refer merely to the biological species; it is defined by the possession of morality. Only beings with this kind heart are moral beings, and developing such a heart is to continue on the track of moral existence. In this sense, Mencius was not drawing a line between homo sapiens, on the one hand, and other beings on the other. He defined the difference between ethical beings and non-ethical beings in terms of possessing or not possessing a kind heart. AI beings could, therefore, be included in the ethical realm if they are given such a “heart.” With such a Mencian heart, AI beings will function morally in society as moral humans do. Mencius’s “xin 心” is of course not the physical heart in the body. It is the psychological, emotional capacity to care about and to be kind towards others. If we follow a Mencian approach towards robotic ethics, we may install a “no-hurting” principle in humanoid robots as an overriding mechanism, ensuring that they cannot carry out destructive actions that harm humanity, other life forms, or one another. We will then have something similar to what the writer John Havens has called “Heartificial Intelligence.”16 This approach may be most appropriate for medical AI beings. In medical care, the primary principle is traceable to the Hippocratic Oath: “First do no harm” (Primum non nocere). If they take this principle on board, AI beings will act in ways that are aligned with Mencius’s philosophy.

15 Van Norden (2008, p. 46).
16 Havens (2016).
However, at a fundamental level, AI machines will be Xunzian in the sense that their ethics will be socially made or artificial (wei 伪), as Xunzi vigorously argued in his time. This ancient Chinese philosopher also emphasized humanity’s distinctiveness, but in a very different way. He said:

Water and fire have qi but are without life. Grasses and trees have life but are without awareness. Birds and beasts have awareness but are without standards of righteousness. Humans have qi and life and awareness, and moreover they have yi. And so they are the most precious things under Heaven. They are not as strong as oxen or as fast as horses, but oxen and horses are used by them. How is this so? I say: It is because humans are able to form communities (qun 群) while the animals cannot.17 (Xunzi 9.9)

Xunzi held humanity to be the most precious being in the entire world. He identified two features behind humanity’s distinctiveness. One is yi 义, the sense of moral appropriateness; the other is qun 群, the ability to form communities. Both are needed for humans to flourish. Yi gives humans a sense not only of the need to be ethical but also of what is ethically appropriate. Xunzi believed that human xing (natural tendencies) is bad and that following xing will lead society to chaos and ruin. Without social construction by the sages, yi in its inborn state does not enable people to overcome xing towards goodness.18 The American philosopher and sinologist David Nivison identified yi as Xunzi’s source of morality, but he interpreted it as an ability of intelligence and insisted that yi does not have “any particular content.”19 Nivison’s interpretation makes Xunzi’s yi more like a capacity for humans to use their intelligence to think in rational or reasonable ways, and makes it instrumental towards forming communities, which Xunzi identified as a separate human distinction.
In order to form communities, humans need to devise rules of ritual propriety (li 礼): rules of what is appropriate and conducive to flourishing communities. Rather than the humane heart, Xunzi emphasized the importance of ethical rules in regulating society. A Xunzian approach to AI ethics would primarily be about devising effective rules. If we follow a Xunzian approach, we can construct rules to guide AI actions without the need for an overriding principle to protect humanity, other life forms in the world, or other humanoid robots. We may give AI beings rules to obey, as driverless cars obey traffic rules. AI beings will aim to accomplish their assigned goals, regardless of the nature of those goals. Perhaps both the Mencian and Xunzian approaches can be used, depending on the kind of robots in question. The Mencian approach would seem to make more sense for robots that care for humans, whereas the Xunzian approach would be more suitable for many other purposes. Mencius has typically been given a more prominent status than Xunzi throughout the history of Confucian philosophy. However, recent Confucian studies seem to have been gravitating more and more towards Xunzi. As far as AI ethics is concerned, these two approaches can be integrated in working with advanced AI beings in the future.

17 Hutton (2014, p. 76).
18 For more discussion, see Chenyang Li (2011).
19 Nivison (1996, pp. 207, 201).
The futurist novelist Isaac Asimov (1920–1992) proposed three fundamental laws to guide the behavior of robots:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.20

Asimov’s first law clearly carries a Mencian motif. Even though it is in the form of a principle, it manifests the paramount importance of doing no harm to humans. Such a requirement reflects a human value deeply rooted in the kind of ultimate concern that Mencius articulated. Yet these laws are in the form of rules, as Xunzi would have opted for. Today’s AI ethics needs to be much more sophisticated and far-reaching than Asimov’s three laws. However, no matter how advanced AI ethics becomes, it seems fitting to incorporate the two orientations that Mencius and Xunzi initiated. We may reformulate their principles this way: First, advanced AI beings must benefit humanity as their primary goal. Second, their activities must be regulated by rules on the basis of the first principle. AI beings can act to benefit themselves, but their prime purpose is humanity’s benefit, and that must ground all other considerations regarding AI. Furthermore, AI beings will have to be regulated by rules that are consistent with the first principle. These two principles combined make it possible for AI beings to co-exist with humans as part of, or an extension to, human communities.

AI’s Effect on Confucian Philosophy

The development of AI technology has raised important questions not only for ethics but also for philosophy in general. Similar to other philosophies, Chinese philosophy developed over thousands of years without facing the questions raised by the various AI challenges. Today, it has to address these new challenges.
It must deal with a reality in which humanity’s distinctiveness has either disappeared or become less and less discernible. It must consider a future society in which AI beings are important participants. In this section, I make two arguments. My first argument is that some important characteristics of Confucian philosophy, such as the doctrine of “graded love,” allow it to adjust moral requirements in response to AI challenges. My second argument is that Confucian philosophy needs to be adjusted to meet AI’s challenges. I use the Confucian theme of learning to demonstrate this.

In a way, Confucian philosophy may have less difficulty meeting AI’s challenges than some of its western counterparts. For example, Kantian ethics is grounded on rationality, and rationality is supposed to be universally uniform across all rational beings. Even though Confucians accept the universal endowment of moral potential in all humans, they do not assume that all human beings have achieved the same level of moral refinement, and they allow for differentiation in moral attainment. On the basis of Kantian ethics, advanced AI beings would be either included in or excluded from the moral domain. If they are included, they should be treated equally with all moral beings, without differentiation. By allowing differentiated degrees of moral attainment and moral standing, Confucians seem to have more room to accommodate and embrace AI beings as moral agents. For one thing, they can accept AI beings as moral beings without having to treat AI beings as our full equals, which seems a more intuitively sound approach.

Confucian philosophy has prioritized two cardinal values, ren (仁) and li (礼). Ren stands for a comprehensive virtue that is grounded on a caring heart. Li stands for a social or cultural grammar that regulates human behavior. There is also another important virtue that has been advanced by Confucian thinkers but has not received the recognition it deserves: he (和), harmony. Confucian ethics promotes graded love, in the formulation of “qinqin, renmin, aiwu” (Mencius 7A45):21 namely, being affectionate towards parents (family), caring about people, and valuing things charitably. The ai in the last phrase can be read as “to value”; it also implies a sense of “xi 惜,” to utilize prudently.22 The “wu” in “aiwu” includes all things with value in the world. These Confucian ethical requirements are graded. “Aiwu” is the minimum requirement and the least difficult to practice. Renmin is more demanding; it encompasses the requirement of gentle treatment but extends beyond aiwu, as it requires the agent to care about people in a loving manner. The most demanding is qinqin, as it not only encompasses caring about family in a loving manner but also being affectionate towards them.

20 Wallach and Allen (2009, p. 12).
To put it another way, one cannot care about people without a gentle attitude in the first place; one cannot be affectionate towards family without being able to cherish and care. All these positive attitudes towards family, people, and things in the world need to be fostered through moral cultivation. The three categories differ in degree and in intensity: being affectionate implies care; care implies appreciation.

Environmental ethicists argue that (at least) higher forms of animals should be taken as moral patients, even though not as moral agents, because they do not possess the capacity to make moral decisions. Confucians may or may not accept animals as moral patients in the full sense of the term, but they can accommodate the call to treat animals gently under the category of “aiwu.” Such an attitude towards animals can be easily extended to AI beings. However, AI beings pose additional challenges to ethics because they are capable of making ethically relevant decisions. This means that we should not treat AI beings solely under the category of “aiwu.” We should consider them moral agents, at least some of them and to varied degrees. That makes them candidates for the category of “renmin.” We should not only appreciate AI beings but also cherish and care about them, treating them as moral agents as well as moral patients.

As a living tradition, Confucianism has been adjusting itself in response to the times. Its gradual acceptance of democracy is one such example. Early on, Mencius advocated the idea that the people are the foundation of the state. But democratic ideas did not really emerge in the tradition until thinkers such as Li Zhi 李贽 (1527–1602) and Huang Zongxi 黄宗羲 (1610–1695). AI technology presents another challenge for Confucianism. One such challenge relates to meritocracy. Confucianism originally gained prominence in part because it rose up against the previous hereditary system and advocated meritocracy.23 Confucian meritocracy bases social mobility on people’s virtue and knowledge/ability. Virtue and knowledge have to be acquired through hard work under appropriate conditions. Consequently, serious learning has been a key virtue and a moral requirement in the Confucian tradition. Confucius took learning as the most important way to become a good person.

21 亲亲, 仁民, 爱物. https://ctext.org/mengzi/jin-xin-i/zh. Accessed 31 December 2018.
22 For instance, the classic commentator Zheng Xuan 郑玄 (127–200 AD) interprets “ai” in “ai mo zhu zhi 爱莫助之” in the Book of Poetry as “xi 惜.” Zhu Xi also uses “xi” to interpret “ai” in Confucius’s comment “you ai the sheep, but I ai the ritual 尔爱其羊, 我爱其礼” in the Analects 3.17.
He set a good role model, as described in the Analects: “At fifteen I set my heart on learning; at thirty I established myself; at forty I was beyond perplexity; at fifty I knew the mandate of heaven; at sixty I was at harmony; at seventy I could follow my heart without transgressing boundaries.”24 Learning was an enjoyable activity: “Is it not a pleasure to learn and to practice in a timely fashion what one has learned?”25 He lamented that “the ancients learned for their self-improvement, whereas nowadays people learn for others.”26 He said: “What worries me is that people do not cultivate virtue, that they learn without exchanging ideas, that they hear what is right without following it, and that they do not change even though they are not good.”27 Confucius valued learning so highly in part because it is only through learning that we can become knowledgeable and useful members of society.

The AI era may change this paramount Confucian requirement. The futurist Ray Kurzweil predicted that AI would make it possible for people to link their brains with a computer. He said: “The most interesting thing will be for your neocortex to extend itself with synthetic neocortex in the cloud. Ultimately our thinking will be predominated by the synthetic neocortex.”28 The neocortex is the region of the brain associated with higher mental functions. AI technology will extend the human brain beyond its biological limitations. Kurzweil even predicted that the additional neocortex would be stored in the cloud, which would allow unlimited expansion. If this comes to pass, then meritocracy may no longer have to depend on knowledge in the traditional sense. Confucians will have to reconsider their view of the value and importance of knowledge-based learning. They will probably shift more towards virtue learning and towards practical abilities in life.

Of course, at this stage, any proposal for AI ethics is tentative. Confucian ethics is not merely a product of pure contemplation or speculation; it is generated through both reason and feeling. Developing such ethics requires us to draw on actual experience. For that reason, a Confucian AI ethics has yet to be worked out more adequately and fully along with the further development of, and interaction with, AI beings.29

23 For discussions of Confucian democracy, see Bell and Li (2013).
24 吾十有五而志于学, 三十而立, 四十而不惑, 五十而知天命, 六十而耳顺, 七十而从心所欲, 不逾矩。(《为政第二》). Following Liao Mingchun, I read 耳 as 聏 and translate it as harmony. See Liao 廖名春, 《孔子真精神: 论语疑难问题解读》, Guiyang: KongXueTang Shuju (2014).
25 学而时习之, 不亦说乎? (《学而第一》).
26 古之学者为己, 今之学者为人。(《宪问》).
27 德之不修, 学之不讲, 闻义不能徙, 不善不能改, 是吾忧也。(《述而第七》).
28 https://www.inverse.com/article/33373-ray-kurzweil-singularity-thoughts. Accessed 17 December 2018.
29 The research for this essay was supported by a Tier-1 research grant from Nanyang Technological University (#RG114/20).

References

Baron, Jonathan. 1985. Rationality and intelligence. Cambridge: Cambridge University Press.
Bell, Daniel, and Chenyang Li. 2013. The East Asia challenge for democracy: Political meritocracy in comparative perspective. New York, NY: Cambridge University Press.
Bodkin, Henry. 2019, January 30. Robot that thinks for itself from scratch brings forward rise of the self-aware machines. Science Robotics.
Clark, Andy, and David J. Chalmers. 1998. The extended mind. Analysis 58: 7–19.
Crutzen, P. J., and C. Schwägerl. 2011. Living in the Anthropocene: Toward a new global ethos. Yale Environment 360. http://e360.yale.edu/feature/living_in_the_anthropocene_toward_a_new_global_ethos/2363/. Accessed 12 December 2018.
Hamada, Shogo, et al. 2019, April 10. Dynamic DNA material with emergent locomotion behavior powered by artificial metabolism. Science Robotics 4 (29): eaaw3512. https://doi.org/10.1126/scirobotics.aaw3512. Accessed 13 April 2019.
Havens, John. 2016. Heartificial intelligence: Embracing our humanity to maximize machines. New York: Tarcher/Penguin.
Hayes, Matt. 2019, April 10. Engineers create ‘lifelike’ material with artificial metabolism. Cornell Chronicle. http://news.cornell.edu/stories/2019/04/engineers-create-lifelike-material-artificial-metabolism. Accessed 13 April 2019.
Hutton, Eric, trans. 2014. Xunzi: The complete text. Princeton and Oxford: Princeton University Press.
Kurzweil, Ray. 2005. The singularity is near. New York: Penguin Group.
Li, Chenyang. 2011. The origin of goodness in Xunzi. Journal of Chinese Philosophy 38: 46–63.
Mair, Victor. 1994. Wandering on the way: Early Taoist tales and parables of Chuang Tzu. New York: Bantam Books.
Nivison, David. 1996. Hsün Tzu on ‘Human Nature.’ In The ways of Confucianism: Investigations in Chinese philosophy, ed. Bryan Van Norden. Chicago: Open Court.
Van Norden, Bryan W., trans. 2008. Mengzi: With selections from traditional commentaries. Indianapolis: Hackett Publishing Company.
Wallach, Wendell, and Colin Allen. 2009. Moral machines: Teaching robots right from wrong. Oxford and New York: Oxford University Press.

