Friday, October 9, 2009

Summarizing Some Sentiments of the Singularity Summit

The recently held Singularity Summit - with 832 geeks in attendance - received a surprisingly substantial amount of mainstream media coverage this week. But, as is to be expected of a conference populated by those who want to live forever and those devising plans not to be annihilated by hypothetical future robots, the reports did lean slightly toward the absurd. Many of the articles pondered what, exactly, the world will look like after the technological singularity.

Now, because the singularity's aftermath is inherently unpredictable, transhumanists and singularitarians tend not to speculate much about what is to come. Nevertheless, I think it is useful to examine closely the possible paths to the singularity and where each of them leads.

As I mentioned above, there are two modes of thought that run parallel to each other within the singularity community. There are those who seek human enhancement, longevity, and an end to the process of aging, and Aubrey de Grey is their champion. The other camp, pioneered by Kurzweil, believes in accelerating technological change and an eventual intelligence explosion.

Kurzweil has been folding immortality into his singularity predictions for years, but de Grey tends to keep the two concepts separate. Some proponents of immortality believe we will get there by exceeding the longevity escape velocity; that is, if we reach a point where remaining life expectancy increases by more than one year per calendar year, we can live indefinitely. Others tout the prospect of whole brain emulation, or mind uploading, relying on computers to store their consciousness forever.
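
To make the escape-velocity arithmetic concrete, here is a toy sketch in Python. The starting expectancy, the annual gain, and the horizon are all hypothetical numbers chosen only to show how the math behaves, not anyone's actual projection.

```python
# Toy illustration of longevity escape velocity. Each simulated year you
# spend one year of remaining life expectancy, but medical progress hands
# `annual_gain` years back. Below a gain of 1.0 the clock eventually runs
# out; at or above 1.0 it never does.

def years_until_exhausted(initial_expectancy=40.0, annual_gain=0.5, horizon=1000):
    remaining = initial_expectancy
    for year in range(horizon):
        remaining += annual_gain - 1.0
        if remaining <= 0:
            return year + 1  # projected years ran out after this many years
    return None  # never exhausted within the horizon: escape velocity

print(years_until_exhausted(annual_gain=0.5))  # 80 -- steady decline, finite lifespan
print(years_until_exhausted(annual_gain=1.0))  # None -- gains keep pace forever
```

The precise numbers matter less than the threshold: at a gain of exactly one year per year, remaining expectancy simply stops shrinking.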

But neither of these approaches can ward off the threat posed by a superhuman artificial intelligence wielding the fundamental forces of the universe. Thus, we have the friendly AI movement. A modest number of singularitarians believe, however, that the intelligence explosion will occur within our own minds, either through nootropics and neural implants or through uploading and improving our own code.

If the singularity is to come by way of whole brain emulation, the question arises as to the ethical status of digital minds. At the Singularity Summit, Randal Koene took the categorical stance that all emulated brains would have the right to free themselves from experimentation. But if this right is to be granted to all sentient beings of our own creation, a problem surfaces: any friendly AI would possess at least as much sentience as we do, if not more, and would have to be afforded the same rights.

There are two general ideas about how to program a superintelligence so that it will not try to destroy us: making it incapable of hurting us, or making it want to help us. Restraining an AI - whether by coding it to be unable to hurt us or by confining it to a limited environment where it cannot do harm - is clearly a violation of its rights as a consciousness. Doing this would be treating the AI as a tool. I won't say that such a policy is necessarily objectionable, but it is inconsistent for those who believe sentience deserves humane treatment.
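
To show where the restriction lives in each approach, here is a toy sketch in Python. Nothing here is a real AI-safety proposal; the action names and the scoring scheme are invented purely for illustration.

```python
# A toy contrast of the two approaches above. The constrained agent has a
# hard wall around forbidden actions; the aligned agent weighs human
# welfare inside its own goal, so harmful actions lose on their merits.

FORBIDDEN = {"harm_humans"}

def constrained_choice(actions):
    # Approach 1: restraint. Forbidden actions are simply not selectable,
    # however highly the agent would otherwise score them.
    allowed = [a for a in actions if a["name"] not in FORBIDDEN]
    return max(allowed, key=lambda a: a["agent_value"])

def aligned_choice(actions):
    # Approach 2: a shaped goal. Every action is available, but the score
    # the agent maximizes already includes human welfare.
    return max(actions, key=lambda a: a["agent_value"] + a["human_welfare"])

actions = [
    {"name": "harm_humans",  "agent_value": 9, "human_welfare": -100},
    {"name": "help_humans",  "agent_value": 5, "human_welfare": 10},
    {"name": "self_improve", "agent_value": 7, "human_welfare": 0},
]

print(constrained_choice(actions)["name"])  # self_improve: harm is walled off
print(aligned_choice(actions)["name"])      # help_humans: harm scores poorly
```

Note that the first agent's own preferences are overridden from outside, which is the sense in which restraint treats the AI as a tool, while the second agent is, in effect, raised to want different things.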

The alternative of merely giving it the goal of wanting to help us is more akin to how we raise our children. We want our children to grow up, contribute to the world, and take care of us when we're old. Such a path is less ethically challenging, but it eventually runs into the problem of the AI's right to its own life. Just as parents must let their children go, there would be a point when denying a superintelligent AI the right to form its own goals would be unethical. (Friendly AI is a funny term to me. My friends would never intentionally hurt me, but it would be pretty difficult for me to convince them to supervise and better my life.)

So aspiring to immortality won't get us through the singularity, and attempting to control a superintelligent AI is fraught with ethical quandaries. What option does that leave us with, then? We must be the singularity. Not through brain emulation, because that raises the issue of whether you own your mind, but through careful and progressive human enhancement. Expanding our memory with digital media, accelerating our mental processes through drugs or implants, and networking minds together - all of these are possibilities for a "soft takeoff" singularity.

And this is the only path through the singularity that lets us choose our destiny. If we advance as fast as or faster than whatever artificial intelligences are being created, we can hope that they will not destroy us through malevolence or indifference. We will still have our own human failings and frailties to deal with at that point, but solving those weaknesses should be part of human enhancement.
