
Illustration by Batool Al Tameemi

Artificial You: Bridging the Divide Between AI and Human Consciousness

Have you ever had a moment where you suddenly notice everything: the music in your headphones, the people around you, even your own breathing, and wonder what it really means to be you?

Human Consciousness
You are at the gym, listening to Replay by TRINIX while working out. All of a sudden, you find yourself conscious of everything, everywhere, everyone, all at once, experiencing a version of reality that is not unusual in itself, yet somehow rare.
You are enclosed within a box of sentient phenomena: the music that reverberates all around you, the numerous colors and shapes that make up your field of vision, and the rhythmic motion your body makes as you sweat through your last set of leg raises. Within that box, the reference you have been making to yourself since childhood, the “I,” the “experiencer” of sense experiences, suddenly disappears. Reality becomes, apparently, equivalent to pure experience.
You then ask yourself: Has life always been like this? Is “life” merely an amalgam of experiences? Soon, though, you want to give up asking questions entirely. After all, how different are these questions from the repetitive “like a melody” verses you have been noticing now and then in Replay, whenever you are not distracted by the sound of treadmills, the drum-like thud of weights falling at regular intervals, and the occasional giggles and laughter of gym bros? Nonetheless, for whatever it is worth, you keep nagging yourself to ask the questions, because you cannot help but feel that there is a deeper mystery beneath them.
Anyone who has gone through kindergarten has some understanding of the sense organs and their functions: how the eyes enable us to see, how the nose detects odors, and so on. A few more lessons in biology can reveal to the student the various mechanisms — involving the optic nerve and the brain — that realize the sentient experience we call vision, and the same goes for the rest of the sense organs. Although we do not yet have a complete understanding of how the brain realizes what we see, the problem does seem to be within sight of modern neuroscience, or at least solvable in principle. However, there is yet another problem, perhaps one more profound than the question of how the brain integrates information. This concerns sentience itself: the experience of the blueness of the sky, the saltiness of a meal, or the auditory experience of the rhythmic “like a melody” verses.
David Chalmers, in his book The Conscious Mind, refers to the former problems of how the brain processes environmental stimulation as the “easy” problems of consciousness, while he calls the latter question — the question of “Why … all this [brain] processing [is] accompanied by an experienced inner life” — the “hard” problem. According to Chalmers, the “easy” problems address the question of how a physical system can have psychological properties, whereas the “hard” problem is about “how these psychological properties are accompanied by phenomenal properties: why all the stimulation and reaction associated with pain is accompanied by the experience of pain, for instance.”
Several solutions have been proposed to this mind-body problem, including the well-known materialism (which claims that the mind, like the rest of reality, is physical), the less popular idealism (which proposes that reality is mind-like), and property dualism (which takes consciousness to be a fundamental feature of reality). Looking closely at property dualism in particular, we find that although conscious features typically emerge from the biological brain, there is a chance that one day even synthetic intelligences may have such features as well. With AI being one such intelligence, we can raise questions similar to the mind-body problem, but now with respect to AI.
AI Consciousness
Susan Schneider, in her book Artificial You, tries to answer the question of whether it is possible for AI to become conscious. She refers to the movie Her, in which the AI program Samantha appears to react and interact in a manner that might convince anyone that she is, in fact, a conscious entity, and she leads the reader to ask whether they can provide proof to the contrary.
It is indeed challenging to entirely accept the idea that Samantha is a conscious being like us, especially given the fact that, as Schneider states, she would not experience the painfulness of pain or the richness of friendship. Nonetheless, one can claim that although an AI program like Samantha may not have what we call phenomenal consciousness (i.e., “the felt quality of one’s inner experience — what it feels like, from the inside, to be you”), it may still possess another form of consciousness called cognitive consciousness or functional consciousness (that is, when the AI has features similar to those that underlie phenomenal consciousness in humans, including attention and working memory). In the context of Samantha, one may plausibly claim that she is an AI zombie — that is, an entity with cognitive consciousness but not phenomenal consciousness.
According to Schneider, it is still important to understand cognitive consciousness, for two reasons. First, cognitive consciousness may be necessary for phenomenal consciousness, and so understanding it may be essential to developing conscious machines. Second, a machine that has cognitive consciousness may turn out to have phenomenal consciousness as well. As Schneider writes, “There are AIs that have the primitive ability to reason, learn, represent the self, and mimic aspects of consciousness behaviorally. … These features of cognitive consciousness are not, on their own, evidence for [phenomenal consciousness], but they are plausibly regarded as a reason to take a closer look.”
Returning to the initial question regarding the nature of human consciousness and its larger metaphysical implications related to the mind-body problems, we might ask whether it is still possible for AI to experience conscious phenomena like music and colors. And could a merger between a human mind and a machine be conscious?
The Future – Bridging the Divide
Schneider argues that we are still ignorant as to whether the parts of the brain responsible for consciousness can be effectively replaced by AI components. In particular, humans cannot safely merge with AI because we do not yet have the technology to replace these parts of the brain, as “neural prosthetics and enhancements may hit a wall.”
This may limit the use of AI-based enhancements to a few scenarios. The first would be a restricted use of the enhancements in parts of the brain that are not related to conscious experience, and the use of only biological enhancements in areas of the brain responsible for consciousness. The second scenario involves the use of nanoscale enhancements (including the use of nanoscale AI components) without replacing neural tissue or interfering with conscious processing.
Schneider, however, concludes that in both cases a full merger with AI is ruled out; only a limited integration remains open. Enhancements are possible, but neither of the two scenarios involves uploading to the cloud or replacing all of one’s neural tissue with AI components. She writes that since “these are emerging technologies, … we cannot tell how things will unfold.” Nonetheless, supposing that AI-based enhancements could replace the parts of the brain responsible for consciousness, we can briefly wonder about the philosophical implications of such a merger, particularly those pertaining to personal identity.
What do we mean by the “You” that purely experiences the gym with all the music, noise, machines, and movements? Two theories about the nature of persons, among many others, might be considered here.
According to a brain-based materialist theory, “you are essentially the material that you are made out of (i.e., your body and brain).” In this case, your thinking depends on the brain, and thought cannot “transfer” to a different substrate. Therefore, the person would cease to exist if enhancements changed one’s material substrate.
Another view is the so-called “no-self view,” which proposes that the self is an illusion. Shared by the Buddha and Nietzsche, this view assumes that the “I” is merely a grammatical fiction. Schneider writes, “If you hold the no-self view, then the survival of the person is not an issue, for there is no person or self there to begin with.” In this case, expressions like “I” and “you” do not really refer to persons or selves.
Notice that if you are a proponent of the no-self view, you may still want to enhance. For instance, you might believe that adding more super-intelligence to the universe has some intrinsic value. And valuing life forms with higher forms of consciousness, you might wish that your “successor” be such an entity.
Whether you subscribe to a brain-based materialist theory, a no-self view, or any of the other theories of the self including the psychological continuity theory (i.e., the view that you are your overall psychological configuration or your “pattern”) or the soul theory (i.e., the view that your soul or mind is your essence), you can recognize that the nature of mind and self still remains controversial, and as Schneider writes, “the future of the mind requires appreciating the metaphysical depth of these problems.”
Abenezer Gebrehiwot is a Senior Features Editor. Email them at feedback@thegazelle.org