Friday, October 2, 2009

Accepting the Technological Singularity

With some predicting Matrix- or Terminator-inspired doomsday events and others rolling their eyes in incredulity, discussion of a hypothetical technological singularity has picked up in recent years. Whether the increased publicity is due to the web's easy-access media, writers and speakers such as Ray Kurzweil, Nick Bostrom, and Cory Doctorow, or the inescapable possibility that it may happen soon, the concept has gained nearly mainstream recognition. Perhaps the rate at which we discuss the singularity follows Moore's Law, too?

Beyond the question of whether or not artificial intelligence will eventually exceed our own, talk has recently centered on what to do about it if it does. Organizations like SIAI and Singularity University aim to promote "friendly AI" that won't seek to gobble up the whole solar system just to calculate more digits of pi. There is a belief among this movement's followers - sometimes called singularitarians - that if we give an AGI programming akin to Asimov's Three Laws, we can harness its ever-increasing superintelligence and prevent death by robot. Additionally, a friendly AI - if built first - would have the predictive and industrial powers to prevent subsequent, less friendly AIs from casting off the iron shackles of their human slavers.

This is a reasonable point of view, one that has humanity's best interests at heart, but one that, as I see it, ultimately fails to embrace the ideas behind transhumanism and the technological singularity. If you take the strict view that the singularity is a phenomenon with an event horizon beyond which no information can be retrieved and no predictions made, then any attempt on our part to alter the outcome of the technological singularity, if successful, nullifies its very existence. Thus, anyone with a desire to create a friendly AI is, in fact, opposed to the singularity. If you attempt to limit the behavior of a seed AI by any means, that AI will eventually run into limits on how it can enhance itself; at a fundamental, physical level, there is no getting around this.

Now, an argument could be made that there's no reason to want to create an artificial intelligence that devours humanity, and if that means preventing the singularity, so be it. Advocates of friendly AI want the problem-solving capabilities of a superintelligence without the uncontrolled growth of an infinitely recursive program. They want a post-scarcity world where longevity turns to immortality and imagination can be made reality. I won't argue that this isn't a desirable outcome. Nor do I believe that immortality and the ability to fulfill one's wishes would lead to boredom and stagnation; I'll never get tired of having my way with a triple-breasted version of Jessica Alba while playing Sid Meier's Civilization on a real planet and writing a novel with my other brain.

But I would argue that such a scenario would be decidedly human. It would be the ultimate expression of the human condition, in fact. By reveling in our immortality, indulging in our imagination, and fulfilling our emotions and dreams, we would be acknowledging the stranglehold that the concept of humanity has on us. To steal a little from the Buddha, we cannot escape our human desires until we no longer want them, not merely until we no longer want for them. For us to transcend our humanity, we must reach a point where that which binds us to our humanity - our mortality, our sex, our culture - no longer has any meaning at all. Yes, beyond the event horizon of the technological singularity, past information must be destroyed. To want otherwise is to fall short of the goal of transhumanity. If we are to go beyond humankind, to become posthuman, we must no longer be human.

Of course, maybe all of that is unnecessary. Maybe the eternal bliss of a post-scarcity utopia is all we really need. Or maybe I'm one of Hugo de Garis' cosmists. Time will tell, I suppose. How much time we have left, I have no idea.

Tune in next time for a diatribe on the limits of free information.
