Monday, October 19, 2009

Toward a Meaningful Definition of Posthuman Sentience

I must apologize in advance for getting all philosophical on you guys today.

As we get closer and closer to developing artificial general intelligence, I feel it is necessary to highlight an important limitation of our anthropocentric perspective. While we sometimes have the capacity to treat other species of life in humane ways, we often stumble when it comes to categorizing non-human intelligence. We cannot help but ascribe anthropomorphic qualities to that which we view to be intelligent, and we are virtually unable to imagine intelligence that lacks such qualities. In the relatively near future, however, we're going to live in a world with intelligent robots, uploaded minds, and other transhumans; it will be necessary to alter our perceptions of what those entities represent.

Although much criticism has been leveled against research into creating human-like artificial intelligence, the goalposts for "human-like" keep being moved. Faculties such as understanding language, playing chess, and making music were once thought to be uniquely human but are now merely mechanical. Soon the only viable objection AI critics will have is that computers lack that ultimate of all subjective concepts: sentience. I would argue, however, that the general definition of sentience we humans use is a poor one, unsuited to an age of intelligent non-human entities. To understand why our conception of sentience is so useless, we must first define it.

The simplest description of sentience is the awareness of qualia - that is, perceiving the blueness of the sky, the warmth of the sun, or the pain of fire. But it's not the photon hitting our retina that we're aware of; it's a composite image generated from our eyes, memory, emotional state, and other senses that somehow becomes a clear picture in front of us. The inability to effectively articulate the "it" that we see is sometimes expressed thusly: do we experience the same colors as everybody else? Is my green your purple? Is your red my blue?

But this definition of sentience fails to take into account the reality of what we perceive. We can only indirectly infer another's subjective experiences. I know that I am sentient, but all that I can confidently say about others is that, by their reactions to the world around them, they appear to be sentient. So the consensus description of sentience must be modified to this: sentience is the ability to respond to subjective experience.

It's worse than that, though. People who are unconscious, in a coma, or unable to feel pain are still described as sentient. The act of responding to qualia is not required - only the expectation that one can respond. Sentience, then, is reduced to an individual's ability to convince us (whether by direct example or mere association) that we should expect it to be able to respond to subjective experience.

Unfortunately, even this flimsy description is insufficient. We believe some animals also experience qualia because they respond in much the same way that we do. Dolphins, pigs, and primates can see themselves in mirrors; dogs and cats can yelp in pain. Other animals respond to external stimuli but do so in inhuman or primitive ways, leading us to believe that there is no sentience involved. Therefore, the common conception of sentience is the ability of an individual to convince us that we should expect it to be able to respond to subjective experience in a fashion that is reasonably similar to the way that most other agreed-upon sentient beings do.

And yet this definition, while laughably broad, is arbitrarily limited. Despite there being no consensus amongst scientists or philosophers as to the physical mechanism of sentience, we exclude more primitive animals for not having the brains we do, plants for not having the nervous system we do, and inanimate objects for not being alive. We do this because many believe that sentience is an emergent phenomenon arising from the complicated neural networks possessed by the human (or animal) brain. This is a reasonable point of view, but it rests on the assumption that we are sentient and others are not. There is no reason to make this claim except for the fact that we believe sentience is a unique quality of intelligent, living organisms. Our perception of the universe is radically different from its objective reality, which does lend credence to the idea that the sentience we experience requires the brains we possess, but it does not indicate that the property of sentience itself requires a brain.

The only reasonable claim we can make about the ability to perceive qualia is that, for us to know about it, there must be a measurable response initiated by an internal reaction to external stimuli. When applied to artificial intelligence, this should remind us of the ideas behind the Turing test. Without an objective measure of sentience, Occam's Razor would suggest that we have no cause to differentiate between one kind of sentience and another. If both entities appear sentient, we ought to respond to each of them as if that were the case and not add extra variables into the equation.

But by this logic we can grant sentience to nearly everything on the Earth, and maybe even the entire universe. How can we say that one thing is sentient and another is not if all things react to the world around them? Many regard this position as absurdly counterintuitive or practically useless. If "sentient" can be used as a synonym for "existent," why bother having the word at all?

It's important now to recognize that nowhere in the concept of sentience is there a mention of mind, intelligence, or self-awareness. I can suggest that a rock is sentient without believing that the rock is sad when its other rock friends roll away or that it contemplates the mysteries of the universe on a starry night. A rock's sentience might only be a recognition of cold or not cold, falling or not falling. This brand of sentience would not resemble our complicated sentience, but it might still be a kind of awareness.

Perhaps sentience, then, is best defined as a grouping of types of perception. Inanimate objects may only be aware of the atoms bumping up against them, but more complicated lifeforms react to other qualia: colors, sound, memory, etc. Having a more complex level of sentience does not grant us universal awareness, however. In fact, much of what we perceive is irreducibly complex, as our emotions cannot be broken up into their constituent chemicals nor our music into its disparate vibrations while still maintaining meaning.

When comparing another entity's sentience to ours, then, we must take into account similarities and differences. We cannot sense the Earth's magnetic fields the way birds or cockroaches can, nor do we have any idea what the cells of our stomach are experiencing most of the time. As we develop non-human intelligence, there will be even more categories of perception that do not fall within our level of sentience. Intelligent robots will be able to perceive more of the EM spectrum, sense electrical conductivity, or detect exotic particles.

Going into a posthuman future, we should not base sentience on our anthropocentric limitations but on the concept of mutual sentience. That which we can perceive we can usually impart. Through description, song, or art we are able to transmit to others of our kind the meaning behind a particular perception. Thus if two entities have overlapping categories of perception, they may be able to communicate with each other. Without overlap, two entities may still be sentient but not mutually sentient, and as such unable to communicate.
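To make this concrete, here is a minimal sketch (in Python, using hypothetical entities and category names of my own choosing, not anything specified above) that models each entity as a set of perceptual categories and treats mutual sentience as a non-empty overlap between those sets:

```python
# A toy model of "mutual sentience" as overlapping categories of perception.
# The entities and category names below are illustrative assumptions only.

HUMAN = {"visible_light", "sound", "touch", "emotion", "memory"}
ROBOT = {"visible_light", "infrared", "conductivity", "memory"}
ROCK = {"pressure", "temperature"}

def shared_perception(a: set[str], b: set[str]) -> set[str]:
    """Return the categories of perception two entities share.

    A non-empty result suggests a channel for communication;
    an empty result means both entities may be sentient,
    but they are not mutually sentient.
    """
    return a & b

print(shared_perception(HUMAN, ROBOT))  # {'visible_light', 'memory'}
print(shared_perception(HUMAN, ROCK))   # set() -> no shared channel
```

On this toy model, the human and the robot overlap in a few categories and could in principle communicate through them, while the human and the rock share none: both may be sentient, but they are not mutually sentient.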

Eventually, this takes us to the question of whether a robot's qualia are its programming or the percepts spawned from that programming. In other words, if a robot says it is sad, how do we know that emotion is real? The answer is not to define experiences in terms of real or not real, but similar or dissimilar. If we want to know whether or not a human and a robot are mutually sentient, we must compare their perceptions.

Human emotions, for example, are defined by their irreducible complexity and their communicability. We know that someone has experienced an emotion if that individual is able to convince us, through words, art, or body language, that they did. In order for a robot's emotions to meet the same criteria as ours, that robot must express emotions in a convincing manner, and the robot must be able to experience the information that makes up its robotic emotions without also feeling or expressing those emotions. Just as we are not aware of our emotions through our chemical reactions, a robot must not be aware of its emotions through its programming.

Such a creation may or may not be possible, but if it is, then we will have created artificial intelligence with whom we share a degree of mutual sentience and are able to communicate. As we enter a posthuman world, we should not attempt to define sentience as a binary yes or no. Rather, the goal of defining a particular entity's level of sentience should be overcoming barriers to communication, whether that entity be an artificial intelligence, a cybernetically enhanced human, or an uplifted animal.

2 comments:

  1. While the human insistence on anthropomorphism is an excellent point, I don't think that the ability to perceive emotions is the key point to their reality. More likely, it's a matter of those emotions being a driving force of questionable controllability. If an entity desires to do something, does it matter if the desire comes from an endocrine system or core directives?

  2. Hm. I did not mean to imply that emotions, or their perception, are what make humans what they are. After all, we've observed emotion-like responses in other organisms.

    In fact, I don't believe that there is any unique, irreproducible quality that humans possess which makes us who we are. I think that, better than any other species we've encountered, we have evolved the traits of self-awareness (which does involve the perception of emotions but also of other internal states), data synthesis, and symbol creation.

    The extra internal landscape we're aware of, along with the broad array of sensory input we're able to mash together, and topped off with the quality of being able to assign metaphors to that which we observe has allowed us to imagine a great many creative solutions to our problems. (Creative, by the way, does not necessarily mean good.)
