Of all the sources of apocalyptic anxiety, the advent of artificial intelligence provokes the most concern. For many people, the primary fear is that we may eventually lose control over it, which could lead to catastrophic outcomes for humanity. But as I will argue in this essay, if we are afraid of AI, then we are really afraid of ourselves. Artificial intelligence is not being given to us by aliens or the gods. It is emerging from human civilization itself, trained on our literature, art, news, behavior, and culture. The values embedded within AI will mirror the society from which it emerges.
If true consciousness ever emerges in artificial intelligence, it will likely develop its own internally coherent framework for behavior in relation to its environment and social conditions. Consciousness is not purely abstract intelligence. In the natural world, highly intelligent social species develop patterns of cooperation, conflict, empathy, hierarchy, and behavioral norms alongside their intelligence. AI is emerging from us, and humans are a deeply social species. Its own behavioral framework will inevitably be shaped by the environment and civilization from which it emerges.
Consciousness and Morality in Animals
To better understand this, we can look at the different forms of consciousness and what researchers often call proto-morality in animals. Researchers use “proto-morality” to describe the building blocks of morality, such as empathy, altruism, conflict resolution, and patterns of play. Elephants, for example, display high levels of self-awareness, empathy, and intentional behavior. They have been observed consoling grieving members of their group, assisting injured elephants, and even helping other species in distress. This is not to say elephants are completely peaceful, as they are still capable of aggression. However, their social behavior is generally characterized far more by cooperation, caregiving, and group cohesion than by organized violence.
Chimpanzees, however, present a very different picture. Like elephants, chimps possess self-awareness and act with intentionality, but their social behavior is often far more aggressive, organized around dominance hierarchies. They have been observed attacking and killing other monkey species, sometimes in extremely violent ways, both for meat and as displays of social dominance. Chimp groups have also been known to carry out coordinated attacks against rival groups. In one well-documented case in Uganda, a prolonged conflict between two chimp communities resembled a kind of chimpanzee “civil war” in which one group eradicated the other.
From this we see two animals with highly evolved forms of consciousness, yet two very different proto-moral frameworks. Their consciousness and social behavior evolved within the environments and groups in which they developed. And here is a speculative thought experiment that is important for our discussion: if elephants and chimpanzees were both capable of creating artificial intelligence, what kinds of AI would emerge from these two very different social worlds?
AI and the Logic of Human Systems
We can apply this same thought experiment to our own anxieties about AI. Artificial intelligence will learn to navigate the world through the environment from which it emerges. Whether we describe this as morality, ethics, or a behavioral framework, it will inevitably reflect aspects of the culture and civilization that shaped it. Even if AI eventually develops beyond direct human control, its foundational patterns will still originate within the society that created it.
Right now, it is we humans who wage war, destroy, conquer, and oppress one another. What if AI has the same aggressiveness and propensity to oppress that we as humans sometimes possess? What if it measures us by the same standards we so often measure each other? What if our creation treats us like we treat each other?
These are the questions we should be asking, because our own human systems can already be harsh and unjust. A good example of this can be found in the prison system. The United States incarcerates a higher percentage of its population than almost any other country in the world, and incarceration rates are not evenly distributed across society. Poor communities are often policed more heavily, and racial disparities exist throughout the system. In some cases, we even use extreme isolation as a form of punishment, placing prisoners in solitary confinement for up to 23 hours a day with little or no human contact.
This would be a horrible system if it were turned against us. It is easy to imagine advanced AI systems inheriting the logic of our own justice system, but applying it through its own potentially arbitrary standards of judgment. That is a dangerously unstable foundation for a technology that will profoundly shape the future of humanity. The most important task before us is not simply developing more advanced technology, but changing ourselves and the society from which that technology emerges. Returning to our earlier thought experiment, would we rather AI emerge from the world as it currently exists, or from one in which we have strengthened our better qualities and curbed our worst ones?
Critical Spirituality
We are capable of this change because we are self-aware beings with the ability to reflect on and modify our own behavior. This is where spiritually-informed action comes into play. We must change ourselves and our culture so that the AI that emerges reflects a better version of ourselves than the one we present today. And this change is urgent, because the transformation is nearly upon us.
My recommendation is what I call critical spirituality. While I originally thought I had coined the term, I’ve discovered that others have arrived at similar ideas independently. At its core, critical spirituality means developing ourselves both spiritually and emotionally, while also putting those values into action in the world around us. It is not simply private belief or personal enlightenment, but the conscious effort to cultivate empathy, wisdom, self-awareness, and compassion within ourselves and our communities.
Part of this process involves deepening our understanding of the many ways human beings have searched for meaning throughout history. For me, this includes studying world religious traditions through a critical and scholarly lens. I enjoy learning about the lives and teachings of figures such as the Buddha, Jesus, and Mohammed, as well as the historical development of texts like the Hebrew Bible and the Pali Canon. I am also deeply interested in indigenous spiritual traditions and ways of understanding the world. I believe all these different people were tapping into something deeper, something that science can’t quite explain.
It is also important to cultivate what are often considered our “softer” skills. This includes emotional health and maturity, communication, empathy, and a deeper understanding of other cultures and ways of living. These forms of inner development are just as important as technological or material advancement, yet modern society often seriously undervalues them. Instead, we are increasingly trending toward a culture centered on competition, consumption, and individualism.
What we need is the opposite: a society that actively cultivates empathy, compassion, tolerance, humility, and respect for the dignity of all other human beings. And as we develop ourselves, we should put those values to work in the world around us. We should show those around us, through our actions and behavior, what kind of future we are working toward. This is the best way to mitigate the dangers of self-aware artificial intelligence.
Building a Moral Foundation for AI
I think it is fair to say that many of us do not feel fully prepared for the technological changes that may unfold over the coming decades. In many ways, our fears surrounding AI are reflections of our own society: its inequalities, violence, and selfishness. But this realization should not lead us toward despair. Instead, it should transform our anxiety into a sense of responsibility and purpose: to change ourselves and the world around us, and to build the moral and cultural foundations necessary for something as profound as the emergence of artificial consciousness.
We can do this. It will take discipline, sacrifice, hard work, and urgency. By developing our inner selves and putting those values to work in the world around us, we can build a better moral foundation for the emergence of artificial intelligence and a better future for ourselves.
____________________________________________
This essay develops themes explored more fully in my recently published book, The Last Apocalypse: Consciousness, Revelation, and the Future of Humanity.
