First lesson: Look, but don't touch.
If studying the past to inform the future can tell us anything about human technological progress, it's that we are very good at creating new technologies but not very good at creating only the specific technologies we intend. And even when we do manage to build exactly what we set out to build, we almost always also build every variety of that new invention. But these generalities sound speciously ominous, so let's explore a few examples.
Twelve hundred years ago, in the course of searching for an elixir of life, Chinese alchemists stumbled upon the formula for gunpowder. That is to say, while engaged in putting a stop to death, they instead managed to take the first step in an arms race that has produced countless deaths for more than a thousand years. There is nothing reprehensible in their actions, but it is worth pointing out that science can have unintended consequences.
Eleven hundred years later, Alfred Nobel experimented with nitroglycerin in an attempt to create a stable explosive. He did so, but also managed to breathe life into the science of explosives, leading one newspaper to famously label him "The Merchant of Death" in a premature obituary. Nobel tried to invent a specific kind of explosive, and managed to, but failed to prevent the later invention and use of explosives for war.
Fewer than a hundred years were required for Nobel's invention to be dwarfed by a discovery so devastating that none before or since has equaled it. This, of course, was the realization that we could harness the force that binds atomic nuclei together to unleash terrific amounts of energy. After the success of the Manhattan Project, we developed nuclear power, thermonuclear weapons, neutron bombs, dirty bombs, and a whole host of incomparably powerful nuclear devices. All of this came with a newfound understanding of physics that few would contend has been undesirable.
But do not think that I am a Luddite decrying the advance of science. There have been many chance discoveries in history that did not lead to new ways of killing ourselves. Television and the rapid spread of media technology resulted from scientists experimenting with early cathode ray tubes, which most in the late 19th and early 20th centuries regarded as novelties. Penicillin, the first antibiotic, came from an experiment left unattended overnight. And beer, thousands of years old, could only have come about after someone observed natural fermentation. None then could have known that it would make their water safe to drink or spawn an uncountable variety of alcoholic beverages with which to induce inebriation.
The important lesson here is not that science is dangerous or evil, but that the advancement of science cannot be stopped so long as human nature is what it is. Thus, while I believe that the creation of a friendly artificial general intelligence could prevent a bad singularity, I also believe that it won't. Moreover, because humans are not terribly skilled at creating only what they intend to and nothing else, I believe that research into AGI of any kind - friendly or not - will hasten the arrival of a singularity we might not want. My reasons are several.
One of the unavoidable implications of discovery is that it involves peering just beyond the extent of that which you know to be true. From the most obvious example of Columbus sailing farther west than any before him to subtle observation such as the radioactivity of elements, discovery involves exploring the unknown and facing danger. It is taken as a given that with investigation comes risk, but as a whole the human species always opts to take the bad with the good. This is likely because we believe the risk associated with any possible scientific endeavor is finite, even if we cannot calculate it exactly.
But for two important reasons, it is impossible to regard the risk inherent in artificial general intelligence as finite. Firstly, the very nature of the recursive self-improvement of AGI implies that, over the long run, there is no method of successfully calculating its risk. With other human inventions, we know that we will eventually come to comprehend that which we have discovered before moving on to the next step, but with an AGI that is not the case. The underlying assumption of an artificial superintelligence is that we lose the capacity to understand whatever comes next, because the AI will be the one to make the next discovery.
With current mundane discoveries, most humans do not comprehend their meaning but the scientists engaged in the science do. While it's the collective decision of the human species to move on to the next stage, whether it knows what's going on or not, individual experts are expected to understand the risks. Once an AGI develops, however, the human species is no longer coordinating advancement and scientists are no longer assessing risk; it is completely out of our control.
Not being able to keep up with a superintelligence is not the most worrying scenario, however. If a friendly AI can be created, then it won't matter that we're no longer in control because its decisions will be better than ours would have been otherwise. After all, we're notoriously bad at assessing the risk of an experiment.
But we will be unable to rely on the conclusions made by a friendly AI, because the claim to have created a friendly AI is unfalsifiable until far too late. It is a standard position that a superhuman artificial intelligence would be able to deceive, outsmart, and trick us at will. It would know if we had put it in a simulation, be able to influence our decisions even if we prevented it from accessing the real world, and, if we did let it out, behave in a seemingly friendly fashion until it was capable of realizing its own goals.
This is not to say that it is impossible to create a friendly AI, merely that it is impossible to know that you have created a friendly AI. Our categorical inability to comprehend an intelligence greater than our own guarantees that any risk posed by that intelligence is beyond our capacity to calculate, which means that we could never positively justify attempting to create a friendly AI.
Beyond the risk of failing to produce a friendly AI, there are more reasons why those that would call themselves singularitarians should not engage in the endeavor. As I said before, it is impossible to halt the advance of science so long as humans are still humans. This has two implications.
The first is that, regardless of the actions of any specific group of scientists, the human species will eventually produce a superhuman artificial intelligence. In all likelihood this will come about because a government or a corporation builds one with the express purpose of more effectively achieving its goals. There may be a profit motive, a militaristic motive, or even a humanitarian motive, but the end result is still the same: a creation which we are incapable of controlling or predicting.
And while we may not be able to prevent this discovery - especially because it may happen in secret or may have already happened - we can most certainly help it along. Science is built on shared data and mutual understanding. Any productive step toward artificial intelligence will eventually be assimilated into the collective body of knowledge on the subject and used by those that would seek to build an AGI. It follows, then, that progress toward an AGI can be slowed, even if it cannot be stopped completely. If those scientists who are most committed to creating a friendly AI stop and work toward another goal instead, humanity will get there nonetheless, but more slowly.
There is another approach to holding off the arrival of superhuman intelligence, however. Decelerating the rate at which we advance toward that goal may be one method, but scientists can also help by making the goal harder to reach. Increasing human intelligence through brain enhancing technologies raises the bar for a superintelligent AI. If we become smarter, we are more capable of comprehending the actions of an AGI and more equipped to evaluate the risk of an AGI. In the long run this is an arms race, and however hard we try to stay above our artificial counterparts, there is always the chance that we fail to do so.
This brings us to the second implication of the fact that humans cannot stop the advancement of science. We are only bound to that fate so long as we remain human. Neuroscience does not merely entail the explanation of human intelligence. It, along with psychology and genetics, can tell us everything we need to know about the human condition. The technologies we're developing now could be used to enhance our intelligence and memory but could also be used to redefine our emotions, perceptions, and our identities.
Currently, the aim of psychiatry is to make healthier and happier humans based on a collective, subjective view of what is acceptable and normal; but that doesn't have to be the goal. What we need to ensure our survival is a field of inquiry that engages in a psychological reassessment of the human species. Rather than relying on a brain crafted through a million years of survival of the fittest in a world of predation, hunger, and natural disasters, we should design a brain - whether biological, electronic, or both - that seeks to acquire knowledge for its own sake. We should dispense with the adage that knowledge is power and build ourselves into a species that simply wants to know about the world because it's fascinating.
There are those of us that can express this idea now - obviously - but it is never the true origin of one's actions. The accumulation of knowledge is too tied up with the reality that we must continually adapt to a world of finite resources. So it makes sense that evolution and our society would craft scientists who value knowledge above all else, but beyond that scientist the knowledge can still be exploited by the human species, or the company or country for which that scientist works, or the genes within that scientist's body. It is not until humanity's need to exploit knowledge is universally eliminated that we will be able to appreciate knowledge by itself. Then, maybe, we can be safe in creating something more intelligent than ourselves.
Thursday, October 29, 2009
Friday, October 9, 2009
Summarizing Some Sentiments of the Singularity Summit
The recently held Singularity Summit - with 832 geeks in attendance - received a surprisingly substantial amount of mainstream media coverage this week. But, as can be expected of a conference populated by those who want to live forever and those devising a plan not to be annihilated by hypothetical future robots, the reports did lean slightly toward the absurd. A lot of the articles pondered the question of what, exactly, the world will look like after the technological singularity.
Now, because of the inherent unpredictability of the singularity's aftermath, transhumanists and singularitarians tend not to speculate a great deal about what is to come. Nevertheless, I feel it is useful to give a close examination of the possible paths to the singularity and their subsequent outcomes.
As I mentioned above, there are two modes of thought which run parallel to each other within the singularity community. There are those that seek human enhancement, longevity, and an end to the process of aging, and Aubrey de Grey is their champion. And the other half, pioneered by Kurzweil, believes in accelerating technological change and an eventual intelligence explosion.
Kurzweil has been making proclamations and predictions regarding immortality for years, but de Grey tends to separate the two concepts. Some proponents of immortality believe we will get there by exceeding the longevity escape velocity; that is, if we reach a point when life expectancy increases by more than one year per year, we can live indefinitely. Others tout the prospect of whole brain emulation or mind uploading, relying on computers to store their consciousness forever.
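The arithmetic behind longevity escape velocity can be sketched in a toy simulation. Everything here is illustrative: the starting life expectancy and the yearly gains are hypothetical numbers chosen to show the threshold effect, not actuarial data.

```python
# Toy model of "longevity escape velocity" (illustrative only; the
# parameters are hypothetical, not drawn from real mortality tables).
def year_of_death(start_remaining, gain_per_year, horizon=200):
    """Simulate remaining life expectancy year by year.

    start_remaining: remaining life expectancy at year 0, in years
    gain_per_year:   years of remaining life expectancy that medical
                     progress adds each calendar year
    Returns the year death arrives, or None if the person is still
    alive at the simulation horizon.
    """
    remaining = start_remaining
    for year in range(horizon):
        remaining -= 1              # one year of life is spent
        remaining += gain_per_year  # one year of medical progress
        if remaining <= 0:
            return year + 1
    return None  # never ran out within the horizon

# Below escape velocity (gain < 1 year/year): death still arrives,
# just later than the starting expectancy would suggest.
print(year_of_death(30, 0.5))   # 60

# At or above escape velocity (gain >= 1 year/year): remaining life
# expectancy never declines, so death never arrives.
print(year_of_death(30, 1.0))   # None
```

The threshold is exactly one year of gained expectancy per calendar year: below it, the net loss per year is merely smaller; at or above it, remaining expectancy stops shrinking entirely, which is the entire content of the "escape velocity" claim.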
But neither of these approaches will be able to ward off the threat posed by a superhuman artificial intelligence wielding the fundamental forces of the universe. Thus, we have the friendly AI movement. A modest number of singularitarians believe, however, that the intelligence explosion will occur within our own minds, either through nootropics and neural implants or through uploading and improving our own code.
If it is to come by way of whole brain emulation, the question arises as to the ethical status of digital minds. At the Singularity Summit, Randal Koene took the categorical stance of saying that all emulated brains would have the right to free themselves from experimentation. But if this is a right to be granted to all sentient beings of our own creation, a problem surfaces. Any friendly AI would have the same, if not a greater amount of sentience than we do, and would have to be afforded the same rights.
There are two general ideas about how a superintelligence could be programmed that would not try to destroy us: programming it in such a way that makes it impossible to hurt us, or giving it programming that makes it want to help us. Physically restraining an AI - either by coding it to be unable to hurt us or installing it in a limited environment in which it cannot harm us - is clearly a violation of its rights as a consciousness. Doing this would be treating the AI as a tool. I won't say that such a policy is necessarily objectionable, but it is inconsistent for those that believe sentience deserves humane treatment.
The alternative of merely giving it the goal of wanting to help us is more akin to how we raise our children. We want our children to grow up, contribute to the world, and take care of us when we're old. Such a path is less ethically challenging, but it eventually runs into the problem of the AI's right to its own life. Just as parents must let their children go, there would be a point when denying a superintelligent AI the right to form its own goals would be unethical. (Friendly AI is a funny term to me. My friends would never intentionally hurt me, but it would be pretty difficult for me to convince them to supervise and better my life.)
So aspiring to immortality won't get us through the singularity, and attempting to control a superintelligent AI is fraught with ethical quandaries. What option does that leave us with, then? We must be the singularity. Not through brain emulation, because that raises the issue of whether or not you own your mind, but through careful and progressive human enhancement. Expanding our memory with digital media, accelerating our mental processes through drugs or implants, and networking minds together - all of these are possibilities for a "soft takeoff" singularity.
And this is the only solution to the singularity that lets us choose our destiny. If we ensure that we advance as fast or faster than whatever artificial intelligences are being created, we can hope to ensure that they do not destroy us through malevolence or indifference. We still have our own human failings and frailties to deal with at that point, but solving our weaknesses should be a part of human enhancement.
Tuesday, October 6, 2009
The Freedom of Information Post
As much as I used to love The Simpsons, Sunday's episode - in which they gave social media a send-up several years too late - may have been the clearest signal yet that they are woefully behind the times (and no longer funny). Nevertheless, they briefly touched upon the very topic I planned to discuss this time around. When the technophile teacher who replaced Edna ridicules Martin's rote memorization in favor of looking the answer up online, the show managed to highlight a cogent question: is it more important to know something or to be able to know something?
Ubiquitous computing is quickly becoming a reality, and accessing new information - an encyclopedia entry, a news article, a restaurant menu - is becoming easier, faster, and more reliable. We will reach a point when there will not be a clear distinction between what information you have stored in your head and what information you have stored in a readily accessible database. When this moment arrives, what will the value of knowledge be?
Some argue that the widely scattered bits of information we gather through our various electronic mediums are inferior to the more deeply textured knowledge we can acquire from books and other more conventional sources. If we don't actually know the details of a thing, but only know an excerpt or a summary of it, then perhaps we don't know it at all. But I would argue that the way we're storing information in our brains now is not much different than the way we've always been doing it.
When we cram for a test the night before we take it, we're storing information for easy access and retrieval in our short term, fleshy memory, but we're not retaining it over the long run. The only effective difference between Martin's rote memorization and the teacher's search engine is where we access the information. But whether we pull it out of our brain or we pull it out of a cell phone, the knowledge will be gone the next day.
This happens in areas outside of education as well, beyond "teaching to the test" and cramming the night before. Most of what we commit to memory is in the form of snippets of information that lack any real depth. The route we take to drive to work, our impressions of a politician, what our favorite food is - all of this knowledge is based on incomplete observations thrown together and regurgitated when necessary.
And this has been true since well before the internet age. While most of us know that the Earth has gravity and holds us to the ground, most of us understand nothing about how gravity actually functions. We know nothing about how Einstein's general relativity upended Newton's classical view, or how gravity's unusual interaction with the universe is at odds with quantum mechanics, or the fact that the gravity around the Sun alters the light of distant stars behind it. Most people's understanding of a fundamental force that rules our lives could be summed up in a newspaper or blog headline: "New Study Indicates Big Things Attract Small Things."
Then there are those that believe the technological revolution we're experiencing will lead to a new enlightenment brought about by the profusion of information available to us. Proponents of this belief point to rising IQs throughout the world, or the unstoppable democratization that social media can give us, or the complexity of today's technological culture. The freedom with which information can cross national, personal, and physical boundaries gives every internet user the power to learn about, discuss, and influence the world at large, from restaurant reviews to global diplomacy.
But what our cell phones, Twitter accounts, and cloud computing don't give us is a better way to synthesize and assimilate the data we're reading. And we're no longer just pulling all-nighters for our Calc test tomorrow. Now we're keeping track of the updates and activities of all our friends, companies, and favorite celebrities; reading opinions and reviews for every item we can spend our money on; and scanning through statistics, blogs, and reports to win comment and forum wars. But are we closer to our friends, or more frugal with our money, or more reasoned in our debates? Or are we overwhelmed by the deluge of free information?
So far, very little scientific and technological development has been geared toward giving us the ability to critically think, learn, and teach any better. Without an increased capacity for synthesis and assimilation, the knowledge available to us, whether in the form of memorized math formulas or wikipedia entries, is not any more useful to us. The issue is not how we acquire the information but what we're able to do with it once we have it.
Consequently, as the information age engulfs us, just as important as inventing new ways to spread knowledge is inventing new ways to understand knowledge. Technology that rejuvenates our neurons or restores our memories must keep pace with our information technology. We must become smarter before we can become even smarter still.
Tune in next time when I discuss the Singularity Summit, because what kind of futurism blog would this be if I didn't?