Thursday, October 29, 2009

Singularitology 101

First lesson: Look, but don't touch.

If studying the past to inform the future can tell us anything about human technological progress, it's that we are very good at creating new technologies but not very good at creating only the specific technologies we intend. And even when we do manage to build exactly what we are trying to build, we almost always also build every variety of that new invention. But these generalities sound ominous without support, so let's explore a few examples.

Twelve hundred years ago, in the course of searching for an elixir of life, Chinese alchemists stumbled upon the formula for gunpowder. That is to say, while engaged in putting a stop to death, they instead managed to take the first step in an arms race that has produced countless deaths for more than a thousand years. There is nothing reprehensible in their actions, but it is worth pointing out that science can have unintended consequences.

Roughly a thousand years later, Alfred Nobel experimented with nitroglycerin in an attempt to create a stable explosive. He succeeded, but in doing so he also breathed new life into the science of explosives, leading one newspaper to famously label him "The Merchant of Death" in a premature obituary. Nobel set out to invent a specific kind of explosive, and managed to, but he could not prevent the later development and use of explosives for war.

Fewer than a hundred years passed before Nobel's invention was dwarfed by a discovery so devastating that none before or since has equaled it. This, of course, was the realization that we could harness the force that binds atomic nuclei together to unleash terrific amounts of energy. After the success of the Manhattan Project, we developed nuclear power, thermonuclear weapons, neutron bombs, dirty bombs, and a whole host of incomparably powerful nuclear devices. All of this came with a newfound understanding of physics that few would contend has been undesirable.

But do not think that I am a Luddite decrying the advance of science. There have been many chance discoveries in history that did not lead to new ways of killing ourselves. Television and the rapid spread of media technology grew out of scientists experimenting with early cathode ray tubes, which most people in the late 19th and early 20th centuries regarded as novelties. Penicillin, the first antibiotic, was discovered in a culture dish that had been left unattended while Alexander Fleming was away on holiday. And beer, thousands of years old, could only have come about after someone observed natural fermentation. None then could have known that it would make their water safe to drink or spawn an uncountable variety of alcoholic beverages with which to induce inebriation.

The important lesson here is not that science is dangerous or evil, but that the advancement of science cannot be stopped so long as human nature is what it is. Thus, while I believe that the creation of a friendly artificial general intelligence could prevent a bad singularity, I also believe that it won't. Moreover, because humans are not terribly skilled at creating only what they intend to and nothing else, I believe that research into AGI of any kind - friendly or not - will hasten the arrival of a singularity we might not want. My reasons are several.

One of the unavoidable implications of discovery is that it involves peering just beyond the extent of what you know to be true. From the most obvious example of Columbus sailing farther west than any before him to subtler observations such as the radioactivity of elements, discovery means exploring the unknown and facing danger. It is taken as a given that with investigation comes risk, but as a whole the human species always opts to take the bad with the good. This is likely because we believe the risk associated with any possible scientific endeavor is finite, even if we cannot calculate it exactly.

But for two important reasons, it is impossible to regard the risk inherent in artificial general intelligence as finite. Firstly, the very nature of the recursive self-improvement of AGI implies that, over the long run, there is no method of successfully calculating its risk. With other human inventions, we know that we will eventually come to comprehend that which we have discovered before moving on to the next step, but with an AGI that is not the case. The underlying assumption of an artificial superintelligence is that we lose the capacity to understand whatever comes next, because the AI will be the one to make the next discovery.

With today's more mundane discoveries, most humans do not comprehend their meaning, but the scientists doing the work do. While it's the collective decision of the human species to move on to the next stage, whether it knows what's going on or not, individual experts are expected to understand the risks. Once an AGI develops, however, the human species is no longer coordinating advancement and scientists are no longer assessing risk; it is completely out of our control.

Not being able to keep up with a superintelligence is not the most worrying scenario, however. If a friendly AI can be created, then it won't matter that we're no longer in control because its decisions will be better than ours would have been otherwise. After all, we're notoriously bad at assessing the risk of an experiment.

But, and this is the second reason, we will be unable to rely on the conclusions of a friendly AI, because the claim to have created a friendly AI is non-falsifiable until it is far too late. It is a standard position that a superhuman artificial intelligence would be able to deceive, outsmart, and trick us at will. It would know if we had put it in a simulation, it would be able to influence our decisions even if we prevented it from accessing the real world, and, if we did let it out, it would behave in an apparently friendly fashion until it was capable of realizing its own goals.

This is not to say that it is impossible to create a friendly AI, merely that it is impossible to know that you have created a friendly AI. Our categorical inability to comprehend an intelligence greater than our own guarantees that any risk posed by that intelligence is beyond our capacity to calculate, which means that we could never positively justify attempting to create a friendly AI.

Beyond the risk of failing to produce a friendly AI, there are further reasons why those who would call themselves singularitarians should not pursue its creation. As I said before, it is impossible to halt the advance of science so long as humans are still humans. This has two implications.

The first is that, regardless of the actions of any specific group of scientists, the human species will eventually produce a superhuman artificial intelligence. In all likelihood this will come about because a government or a corporation builds one with the express purpose of more effectively achieving its goals. There may be a profit motive, a militaristic motive, or even a humanitarian motive, but the end result is still the same: a creation which we are incapable of controlling or predicting.

And while we may not be able to prevent this discovery - especially because it may happen in secret or may have already happened - we can most certainly hasten it. Science is built on shared data and mutual understanding. Any productive step toward artificial intelligence will eventually be assimilated into the collective body of knowledge on the subject and used by those who seek to build an AGI. It follows, then, that progress toward an AGI can be slowed, even if it cannot be stopped completely. If those scientists who are most committed to creating a friendly AI stop and decide to work toward another goal instead, humanity will get there nonetheless, but more slowly.

There is another approach to holding off the arrival of superhuman intelligence, however. Decelerating the rate at which we advance toward that goal is one method, but scientists can also help by making the goal harder to reach. Increasing human intelligence through brain-enhancing technologies raises the bar for a superintelligent AI. If we become smarter, we are more capable of comprehending the actions of an AGI and better equipped to evaluate its risk. In the long run this is an arms race, and however hard we try to stay ahead of our artificial counterparts, there is always the chance that we fail to do so.

This brings us to the second implication of the fact that humans cannot stop the advancement of science. We are only bound to that fate so long as we remain human. Neuroscience does not merely entail the explanation of human intelligence. It, along with psychology and genetics, can tell us everything we need to know about the human condition. The technologies we're developing now could be used to enhance our intelligence and memory, but could also be used to redefine our emotions, our perceptions, and our identities.

Currently, the aim of psychiatry is to make healthier and happier humans based on a collective, subjective view of what is acceptable and normal; but that doesn't have to be the goal. What we need to ensure our survival is a field of inquiry that engages in a psychological reassessment of the human species. Rather than relying on a brain crafted through millions of years of survival of the fittest in a world of predation, hunger, and natural disasters, we should design a brain - whether biological, electronic, or both - that seeks to acquire knowledge for its own sake. We should dispense with the adage that knowledge is power and build ourselves into a species that simply wants to know about the world because it's fascinating.

There are those of us who can express this idea now - obviously - but it is never the true origin of our actions. The accumulation of knowledge is too tied up with the reality that we must continually adapt to a world of finite resources. So it makes sense that evolution and our society would craft scientists who value knowledge above all else, but beyond any individual scientist that knowledge can still be exploited by the human species, or by the company or country for which that scientist works, or by the genes within that scientist's body. It is not until humanity's need to exploit knowledge is universally eliminated that we will be able to appreciate knowledge for itself. Then, maybe, we can be safe in creating something more intelligent than ourselves.
