It recently occurred to me that, for a blog professing to discuss science fiction (among other things), there's been very little in the way of the speculative genre here so far. Sure, I've made references to movies and books here and there, but only in so far as they emphasize my central point. Well, I've decided to remedy that. I'm going to muse about Blade Runner, which I've seen a number of times but recently watched again last weekend. I'd like to make this into a semi-regular feature in which I examine science fiction stories through the lens of my particular brand of futurism, but let's not get ahead of ourselves.
To begin, I want to address what many consider to be the central mystery behind this film: Is Deckard a replicant? The answer: It doesn't matter. It's not important to the story being told. I'd even go so far as to argue that the very fact there is ambiguity over whether or not Deckard is a replicant demonstrates that it doesn't matter; people still enjoy and find meaning in the film even without knowing.
So, for the time being, let's set aside that question and explore a few other issues. Watching Rachael realize that her memories had been fabricated, Batty come to have compassion for Deckard, and Deckard fall in love with Rachael, I remembered a line from Kurzweil's The Age of Spiritual Machines to the effect that when machines tell us they're conscious, we'll believe them. I was skeptical when I first read that, mostly because I suspect I'll still be skeptical when machines start mimicking us.
But what Blade Runner does very well as a film is to show how universal and constant some of humanity's traits are (to humans, anyway): the desire to be loved, to live a meaningful life, and to be remembered. Rachael might not be perfectly human, but it doesn't matter, because in a dark and dying world like the one we see in Blade Runner, Deckard wants very badly just to connect with someone. And likewise, Batty's impassioned, if brief, monologue at the end of the movie is intended to convince us and Deckard alike that replicants are worthy of some degree of humanity.
Now, you might argue that the replicants in Blade Runner are not machines, being as how they are genetically engineered organisms and not electronic robots. But I hardly think this distinction matters where it concerns our attitude toward our creations. Humans are already quite capable of professing sympathy for the artificial, so I'm skeptical that humanoid robots would be treated much differently than engineered replicants. What roboticists and computer scientists have yet to convince us of is that human-like creations are human, but they're getting close.
All of this brings us back to what might as well have been Dick's version of the Turing Test, the Voight-Kampff Test. From his slightly warped perspective, however, the machine is used not to elevate but to discriminate. The blade runners of the film run the Voight-Kampff to single out those that are lacking in empathy (especially, in the book, toward animals). Humans have a long history of labeling our enemies as "inhuman" and hating them for it, and here Blade Runner is more allegory than science fiction. It is interesting that we often inflict far greater cruelty and malice on those that appear not quite human than on those that are not human at all.
What the film demonstrates, however, is that we treat as human those we feel to be human. Empathic connections matter more to us than clinical definitions, and if something feels human to us, then for all practical purposes it is. From this perspective, the unresolved question of Deckard's humanity may be more meaningful if it is left that way. Whether Deckard was created in a test tube or born in a hospital, if we empathize with him, then he is human. This mirrors his own view of Rachael, after all.
But that leaves us with one niggling doubt, which the film does not fail to explore. The replicants we see in Blade Runner are not human-like but superhuman. They are faster, stronger, and capable of seeing things no human would believe. As a humanoid creation becomes more and more human, we are less and less inclined to trust it, thinking it a deception or an impostor of some sort. But once our creations begin to emote in recognizable fashions, all of that mistrust will disappear, and when they tell us they can feel, we'll believe them. What happens, however, once they surpass us? What will humans do when the fear is not that their creations are not quite human, but that they are far better than a human ever could be?
Blade Runner, bleak as always, paints a dim picture. The institutions of humankind, which - although they are made up of humans - possess almost no humanity, are relentless in finding and destroying rogue replicants. Tyrell, the replicants' creator, plays God and seems to have no qualms about enslaving and deceiving his creations.
And with all the hysteria that exists in science fiction of machines run amok, could we blame ourselves (or our descendants) for reacting similarly? Will we have the wisdom to learn from our artificial progeny, or will we fear being left behind? And what rights will we grant our betters? If animals - viewed to be less than human - are afforded fewer rights, would not superintelligent robots be deserving of more rights? I doubt very many of us will see it that way.
Of course, none of this is new. The speculative genre has been probing these questions since its inception. But we are now only a decade away from the year in which Blade Runner is set. We may not have off-world colonies or genetically engineered humans, but we do have robots in our houses and AIs in our banks. Soon, the hypothetical questions of science fiction will become actual.
Wednesday, November 4, 2009
Thursday, October 29, 2009
Singularitology 101
First lesson: Look, but don't touch.
If studying the past to inform the future can tell us anything about human technological progress, it's that we are very good at creating new technologies but not very good at creating specific new technologies. And even when we do manage to build exactly what we are trying to build, we almost always also build every variety of that new invention. But these generalities sound speciously ominous, so let's explore a few examples.
Twelve hundred years ago, in the course of searching for an elixir of life, Chinese alchemists stumbled upon the formula for gunpowder. That is to say, while engaged in putting a stop to death, they instead managed to take the first step in an arms race that has produced countless deaths for more than a thousand years. There is nothing reprehensible in their actions, but it is worth pointing out that science can have unintended consequences.
Eleven hundred years later, Alfred Nobel experimented with nitroglycerin in an attempt to create a stable explosive. He did so, but also managed to breathe life into the science of explosives, leading one newspaper to famously label him "The Merchant of Death" in a premature obituary. Nobel tried to invent a specific kind of explosive, and managed to, but failed to prevent the later invention and use of explosives for war.
Fewer than a hundred years were required for Nobel's invention to be dwarfed by a discovery so devastating that none before or since have equaled it. This, of course, was the realization that we could harness the force that binds atomic nuclei together to unleash terrific amounts of energy. After the success of the Manhattan Project, we developed nuclear power, thermonuclear weapons, neutron bombs, dirty bombs, and a whole host of incomparably powerful nuclear devices. All of this came with a newfound understanding of physics that few would contend has been undesirable.
But do not think that I am a Luddite decrying the advance of science. There have been many chance discoveries in history that did not lead to new ways of killing ourselves. Television and the rapid spread of media technology were a result of scientists experimenting with early cathode ray tubes, which most in the late 19th and early 20th centuries thought were novelties. Penicillin, the first antibiotic, came from an experiment left unattended overnight. And beer, thousands of years old, could only have come about after someone observed natural fermentation. No one then could have known that it would make their water safe to drink or spawn an uncountable variety of alcoholic beverages with which to induce inebriation.
The important lesson here is not that science is dangerous or evil, but that the advancement of science cannot be stopped so long as human nature is what it is. Thus, while I believe that the creation of a friendly artificial general intelligence could prevent a bad singularity, I also believe that it won't. Moreover, because humans are not terribly skilled at creating only what they intend to and nothing else, I believe that research into AGI of any kind - friendly or not - will hasten the arrival of a singularity we might not want. My reasons are several.
One of the unavoidable implications of discovery is that it involves peering just beyond the extent of that which you know to be true. From the most obvious example of Columbus sailing farther west than any before him to subtler observations such as the radioactivity of the elements, discovery involves exploring the unknown and facing danger. It is taken as a given that with investigation comes risk, but as a whole the human species always opts to take the bad with the good. This is likely because we believe the risk associated with any possible scientific endeavor is finite, even if we cannot calculate it exactly.
But for two important reasons, it is impossible to regard the risk inherent in artificial general intelligence as finite. Firstly, the very nature of the recursive self-improvement of AGI implies that, over the long run, there is no method of successfully calculating its risk. With other human inventions, we know that we will eventually come to comprehend that which we have discovered before moving on to the next step, but with an AGI that is not the case. The underlying assumption of an artificial superintelligence is that we lose the capacity to understand whatever comes next, because the AI will be the one to make the next discovery.
With today's more mundane discoveries, most humans do not comprehend their meaning, but the scientists doing the work do. While it is the collective decision of the human species to move on to the next stage, whether it knows what's going on or not, individual experts are expected to understand the risks. Once an AGI develops, however, the human species is no longer coordinating advancement and scientists are no longer assessing risk; it is completely out of our control.
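To make that compounding concrete, here is a toy model of recursive self-improvement (my own illustration with made-up numbers, not anything from the AI literature): once each gain in capability makes the next gain larger, the system blows past any fixed human baseline within a handful of cycles, and our ability to bound its behavior goes with it.

```python
# Toy model of recursive self-improvement (illustration only, not a prediction).
# Each cycle, the system converts its current capability into an improvement,
# so the growth rate itself keeps growing.

def cycles_until_runaway(capability=1.0, human_baseline=1.0,
                         efficiency=0.1, max_cycles=100):
    for cycle in range(1, max_cycles + 1):
        # The more capable the system already is, the bigger the improvement it finds.
        capability *= 1.0 + efficiency * capability
        if capability > 100 * human_baseline:
            return cycle  # the point past which our risk estimates stop meaning much
    return None

print(cycles_until_runaway())  # with these made-up numbers: 14 cycles
```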
Not being able to keep up with a superintelligence is not the most worrying scenario, however. If a friendly AI can be created, then it won't matter that we're no longer in control because its decisions will be better than ours would have been otherwise. After all, we're notoriously bad at assessing the risk of an experiment.
But we will be unable to rely on the conclusions made by a friendly AI, because the claim to have created a friendly AI is non-falsifiable until it is far too late. It is a standard position that a superhuman artificial intelligence would be able to deceive, outsmart, and trick us at will. It would know if we had put it in a simulation, be able to influence our decisions if we prevented it from accessing the real world, and, if we did let it out, behave in a convincingly friendly fashion right up until it was capable of realizing its own goals.
This is not to say that it is impossible to create a friendly AI, merely that it is impossible to know that you have created a friendly AI. Our categorical inability to comprehend an intelligence greater than our own guarantees that any risk posed by that intelligence is beyond our capacity to calculate, which means that we could never positively justify attempting to create a friendly AI.
Beyond the risk of failing to produce a friendly AI, there are more reasons why those that would call themselves singularitarians should not engage in the endeavor. As I said before, it is impossible to halt the advance of science so long as humans are still humans. This has two implications.
The first is that, regardless of the actions of any specific group of scientists, the human species will eventually produce a superhuman artificial intelligence. In all likelihood this will come about because a government or a corporation builds one with the express purpose of more effectively achieving its goals. There may be a profit motive, a militaristic motive, or even a humanitarian motive, but the end result is still the same: a creation which we are incapable of controlling or predicting.
And while we may not be able to prevent this discovery - especially because it may happen in secret or may have already happened - we can most certainly help it along. Science is built on shared data and mutual understanding. Any productive step toward artificial intelligence will eventually be assimilated into the collective body of knowledge on the subject and used by those that would seek to build an AGI. It follows, then, that progress toward an AGI can be slowed, even if it cannot be stopped completely. If those scientists who are most committed to working on the creation of a friendly AI stop and decide to work toward another goal instead, humanity will still get there, but more slowly.
There is another approach to holding off the arrival of superhuman intelligence, however. Decelerating the rate at which we advance toward that goal may be one method, but scientists can also help by making the goal harder to reach. Increasing human intelligence through brain-enhancing technologies raises the bar for a superintelligent AI. If we become smarter, we are more capable of comprehending the actions of an AGI and better equipped to evaluate its risk. In the long run this is an arms race, and however hard we try to stay above our artificial counterparts, there is always the chance that we will fail to do so.
This brings us to the second implication of the fact that humans cannot stop the advancement of science. We are only bound to that fate so long as we remain human. Neuroscience does not merely entail the explanation of human intelligence. It, along with psychology and genetics, can tell us everything we need to know about the human condition. The technologies we're developing now could be used to enhance our intelligence and memory but could also be used to redefine our emotions, perceptions, and our identities.
Currently, the aim of psychiatry is to make healthier and happier humans based on a collective, subjective view of what is acceptable and normal; but that doesn't have to be the goal. What we need to ensure our survival is a field of inquiry that engages in a psychological reassessment of the human species. Rather than relying on a brain crafted through a million years of survival of the fittest in a world of predation, hunger, and natural disasters, we should design a brain - whether biological, electronic, or both - that seeks to acquire knowledge for its own sake. We should dispense with the adage that knowledge is power and build ourselves into a species that simply wants to know about the world because it's fascinating.
There are those of us that can express this idea now - obviously - but it is never the true origin of one's actions. The accumulation of knowledge is too tied up with the reality that we must continually adapt to a world of finite resources. So it makes sense that evolution and our society would craft scientists who value knowledge above all else, but even then that knowledge can still be exploited by the human species as a whole, by the company or country for which the scientist works, or by the genes within the scientist's body. It is not until humanity's need to exploit knowledge is universally eliminated that we will be able to appreciate knowledge by itself. Then, maybe, we can be safe in creating something more intelligent than ourselves.
Tuesday, October 27, 2009
In Which I Repeat Buzzwords to Attract Readers
Being as how I don't want to scare off any potential readers with too much dense phenomenology, I'm going to tone down my ranting in this post and just do some idle speculation on current technological trends. Specifically, I want to look at a potential future for media as we begin to merge with our technology.
One of the questions often asked of technologists and futurists is what we're going to do with ourselves in the future, especially when so much information can be gathered and presented instantly and freely. Many believe that our technological companions, which are increasingly powerful and convenient, will soon replace print media. There will be no need for paper books, newspapers, or magazines if all the content we want is readable on a lightweight, long-lasting computer tablet.
I believe this future is still a while off due to the lack of color e-ink, the premium you pay for your portable devices, the hassle of navigating a truncated computer interface just to read, and the irreproducible feel of an actual book. While the last one is most important to me and other passionate book readers, future generations will have never known this and won't miss it. I admit that in a few decades, when books are no more, I will seem quite old-fashioned still lugging around my library.
But what will we find to replace the immersive comfort of books? To answer that, I must turn to buzzwords. The burgeoning internet is rife with dreams of augmented reality, crowdsourcing, and the semantic web, with each meme just waiting for the chance to supplant whatever dying paradigm came before it. Implementing these wild ideas is easier said than done, and there has been plenty of criticism on that front. But as I said in an earlier post, the real problem may be that we lack a sophisticated enough means to interpret the preponderance of information available to us. When videos, comments, and advertisements annotate nearly everything around us from novels to people to buildings, will we be able to handle that information, to synthesize it properly, to rely on it?
Jamais Cascio suggests that we may find ourselves in a sort of participatory panopticon where voluntary transparency will breed trust. But, as can be seen from the scandals of Facebooking teachers, traditionally private information made public tends to be greatly exaggerated and taken out of context. Think of how much nastier and more pervasive political muckraking would become if politicians' lives were open to such scrutiny. Rather than a collective stream of consciousness thrust onto the web without review, what we need is a more intelligent way of aggregating fiction and nonfiction that can be presented to us in a unified, articulated manner.
And I believe the best place to look for such a concept is the simple web crawler. Web crawlers, automated news services, and feed readers are the precursors to intelligent agents that will, in the future, organize the world for us. While the current breed of intelligent agents is in its infancy and possesses no recognizable creativity, artificially intelligent software will eventually be able to read and understand natural language and build original pieces of work. We have long had software that can generate music and art, and we are now beginning to create programs that can construct complete narratives of real events.
The next step for a new immersive content medium is software that can create linear movies constructed from photos, uploaded cell phone videos, and archived footage. This can be combined with generated music that matches the theme or mood of the story being told as well as your preferences. And each element in a constructed video can have contextual information connected to objects and people, with original descriptions culled from Twitter updates, blog posts, or Wikipedia entries.
Rather than subscribing to a dozen different blogs and news sites that are chock full of misinformation and bias, and then attempting to sift through that data to find the gem of information you were actually looking for, you could rely on an artificial system such as this to instantaneously compare information from hundreds of different sources, identify and collect verifiable facts, and present them in a novel format best suited to your learning style.
A thorough piece of software would be capable of tracking down a fact's original source to prevent you from following link after link until eventually ending up back at the first link. By identifying where data comes from, software could focus on finding data related to the original source that added to the story it was trying to build. Intelligent fact-finding, if it became the norm, would stop the spread of rumors before they gained too much truthiness. By not allowing non-primary sources to add information to a story, these agents would be arbiters of objective information.
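As a rough sketch of what that source-tracing might look like in practice (entirely hypothetical; the citation map below stands in for whatever link extraction a real agent would do), the agent can walk the chain of citations until it reaches a page that cites nothing further, while a loop in the chain exposes a circular rumor with no primary source behind it at all:

```python
# A minimal sketch of primary-source tracing with cycle detection.
# `citations` maps each page to the page it cites for a given claim;
# in a real agent this would come from parsing links, not a hard-coded dict.

def find_primary_source(start_page, citations):
    visited = []
    page = start_page
    while page is not None:
        if page in visited:
            # The citation trail loops back on itself: a circular rumor
            # with no primary source behind it.
            return None
        visited.append(page)
        page = citations.get(page)  # None means this page cites nothing further
    return visited[-1]  # the page at the end of the chain is the primary source

# Hypothetical example: a blog cites another blog, which cites a wire report.
citations = {"blog_a": "blog_b", "blog_b": "wire_report", "wire_report": None}
print(find_primary_source("blog_a", citations))   # -> "wire_report"

# A circular rumor: each page only cites the other.
rumor = {"forum_post": "aggregator", "aggregator": "forum_post"}
print(find_primary_source("forum_post", rumor))   # -> None
```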
If sufficiently advanced, these programs might very well revolutionize news media, but that's hardly all there is to creative expression. In order to create the next generation of thoroughly fulfilling fiction, software would have to be all-encompassing and interactive. Programs could be designed to create role-playing mysteries, geographically-aware chases and hunts, or passive narratives in which the user observes an artificial story integrated into the user's environment.
Imagine a future MMORPG in which, through a heads-up display, your environment was transformed into a fantasy wilderness replete with digitally created monsters. Other players in the area would appear before you in the guise of their characters. Games that rely very heavily on imagination and internet support already exist, but with fully integrated displays that overlaid the real world, such games would be immersive and real in a way that none before have been.
In the other direction, an artificially intelligent software agent with the template for a detective story could build up a realistic plot using geotagged data and objects. It would direct you to travel from one point to the next in the city you live in, and then advance the plot by enhancing real world structures and activities with digitally inserted characters or scenarios. All the while mood-appropriate music, pulled from online libraries or composed on the spot, would play in the background.
These programs, lacking the insight human authors can impart, would not be able to replace the novels and essays we have today until AIs reach human-level intelligence. But the stories they fabricate would be original and entertaining, and the freedom to literally move into and out of the setting and characters of a story you're experiencing would be unlike anything that exists today. If these programs are rich enough and complex enough, the chaos of a real world environment and real world users will create emergent themes and ideas that might not ever come to fruition if directed solely by human minds. To me, the prospect of discovering a new world hidden within our own world is very intriguing indeed.
Monday, October 19, 2009
Toward a Meaningful Definition of Posthuman Sentience
I must apologize in advance for getting all philosophical on you guys today.
As we get closer and closer to developing artificial general intelligence, I feel it is necessary to highlight an important limitation of our anthropocentric perspective. While we sometimes have the capacity to treat other species of life in humane ways, we often stumble when it comes to categorizing non-human intelligence. We cannot help but ascribe anthropomorphic qualities to that which we view to be intelligent, and we are virtually unable to imagine intelligence that lacks such qualities. In the relatively near future, however, we're going to live in a world with intelligent robots, uploaded minds, and other transhumans; it will be necessary to alter our perceptions of what those entities represent.
Although much criticism has been leveled against research into creating human-like artificial intelligence, the goalposts for "human-like" keep being moved. Faculties such as understanding language, playing chess, and making music were once thought to be uniquely human, but are now merely mechanical. Soon the only viable objection AI critics will have is that computers lack that ultimate of all subjective concepts, sentience. I would argue, however, that the general definition of sentience used by us humans is a poor one, unsuited to an age of intelligent non-human entities. To understand why our conception of sentience is so useless, we must first define it.
The simplest description of sentience is the awareness of qualia - that is, perceiving the blueness of the sky, the warmth of the sun, or the pain of fire. But it's not the photon hitting our retina that we're aware of; it's a composite image generated from our eyes, memory, emotional state, and other senses that somehow becomes a clear picture in front of us. The inability to effectively articulate the "it" that we see is sometimes expressed thusly: do we experience the same colors as everybody else? Is my green your purple? Is your red my blue?
But this definition of sentience fails to take into account the reality of what we perceive. We can only indirectly infer another's subjective experiences. I know that I am sentient, but all that I can confidently say about others is that, by their reactions to the world around them, they appear to be sentient. So the consensus description of sentience must be modified to this: sentience is the ability to respond to subjective experience.
It's worse than that, though. People that are unconscious, in a coma, or unable to feel pain are still described as being sentient. The act of responding to qualia is not required - only the expectation that one can respond. Sentience, then, is reduced to an individual's ability to convince us (whether by direct example or mere association) that we should expect it to be able to respond to subjective experience.
Unfortunately, even this flimsy description is insufficient. We believe some animals also experience qualia because they respond in much the same way that we do. Dolphins, pigs, and primates can recognize themselves in mirrors; dogs and cats can yelp in pain. Other animals respond to external stimuli but do so in inhuman or primitive ways, leading us to believe that there is no sentience involved. Therefore, the common conception of sentience is the ability of an individual to convince us that we should expect it to be able to respond to subjective experience in a fashion that is reasonably similar to the way that most other agreed-upon sentient beings do.
And yet this definition, while laughably broad, is arbitrarily limited. Despite there being no consensus amongst scientists or philosophers as to the physical mechanism of sentience, we exclude more primitive animals for not having the brains we do, plants for not having the nervous system we do, and inanimate objects for not being alive. We do this because many believe that sentience is an emergent phenomenon arising from the complicated neural networks possessed by the human (or animal) brain. This is a reasonable point of view, but it rests on the assumption that we are sentient and others are not. There is no reason to make this claim except for the fact that we believe sentience is a unique quality of intelligent, living organisms. Our perception of the universe is radically different from its objective reality, which does lend credence to the idea that the sentience we experience requires the brains we possess, but it does not indicate that the property of sentience itself requires a brain.
The only reasonable claim we can make about the ability to perceive qualia is that, for us to know about it, there must be a measurable response initiated by an internal reaction to external stimuli. When applied to artificial intelligence, this should remind us of the ideas behind the Turing test. Without an objective measure of sentience, Occam's Razor would suggest that we have no cause to differentiate between one kind of sentience and another. If both entities appear sentient, we ought to respond to each of them as if that were the case and not add extra variables into the equation.
But by this logic we can grant sentience to nearly everything on the Earth, and maybe even the entire universe. How can we say that one thing is sentient and another not if all things react to the world around them? Many regard this position as absurdly counterintuitive or practically useless. If sentient can be used as a synonym for existent, why bother having the word at all?
It's important now to recognize that nowhere in the concept of sentience is there a mention of mind, intelligence, or self-awareness. I can suggest that a rock is sentient without believing that rock is sad when its other rock friends roll away or that it contemplates the mysteries of the universe on a starry night. A rock's sentience might only be a recognition of cold or not cold, falling or not falling. This brand of sentience would not resemble our complicated sentience, but it might still be a kind of awareness.
Perhaps sentience, then, is best defined as a grouping of types of perception. Inanimate objects may only be aware of the atoms bumping up against them, but more complicated lifeforms react to other qualia: colors, sound, memory, etc. Having a more complex level of sentience does not grant us universal awareness, however. In fact, much of what we perceive is irreducibly complex, as our emotions cannot be broken up into their constituent chemicals nor our music into its disparate vibrations while still maintaining meaning.
When comparing another entity's sentience to ours, then, we must take into account similarities and differences. We cannot sense the Earth's magnetic fields the way birds or cockroaches can, nor do we have any idea what the cells of our stomach are experiencing most of the time. As we develop non-human intelligence, there will be even more categories of perception that do not fall within our level of sentience. Intelligent robots will be able to perceive more of the EM spectrum, sense electrical conductivity, or detect exotic particles.
Going into a posthuman future, we should not base sentience on our anthropocentric limitations but on the concept of mutual sentience. That which we can perceive we can usually impart. Through description, song, or art we are able to transmit to others of our kind the meaning behind a particular perception. Thus if two entities have overlapping categories of perception, they may be able to communicate with each other. Without overlap, two entities may still be sentient but not mutually sentient, and as such unable to communicate.
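One crude way to make this notion concrete (a toy framing of my own, with invented category names, not a claim about how perception actually decomposes) is to treat each entity's sentience as a set of perceptual categories and mutual sentience as a non-empty overlap between two such sets:

```python
# Toy illustration: mutual sentience as overlapping categories of perception.
# The category names are invented for the example.

human = {"color", "sound", "pain", "memory", "emotion"}
robot = {"color", "sound", "infrared", "electrical_conductivity"}
rock  = {"pressure", "temperature"}

def mutually_sentient(a, b):
    """Two entities can communicate about whatever perceptions they share."""
    return a & b  # an empty set means sentient, perhaps, but not mutually so

print(mutually_sentient(human, robot))  # {'color', 'sound'} -> some common ground
print(mutually_sentient(human, rock))   # set() -> no shared categories
```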
Eventually, this takes us to the question of whether a robot's qualia are its programming or the percepts spawned from that programming. In other words, if a robot says it is sad, how do we know that emotion is real? The answer is not to define experiences in terms of real or not real, but similar or dissimilar. If we want to know whether or not a human and a robot are mutually sentient, we must compare their perceptions.
Human emotions, for example, are defined by their irreducible complexity and their communicability. We know that someone has experienced an emotion if that individual is able to convince us, through words, art, or body language, that they did. In order for a robot's emotions to meet the same criteria as ours, that robot must express emotions in a convincing manner, and the robot must be able to experience the information that makes up its robotic emotions without also feeling or expressing those emotions. Just as we are not aware of our emotions through our chemical reactions, a robot must not be aware of its emotions through its programming.
Such a creation may or may not be possible, but if it is, then we will have created artificial intelligence with whom we share a degree of mutual sentience and are able to communicate. As we enter a posthuman world, we should not attempt to define sentience as a binary yes or no. Rather, the goal of defining a particular entity's level of sentience should be overcoming barriers to communication, whether that entity be an artificial intelligence, a cybernetically enhanced human, or an uplifted animal.
Labels: artificial intelligence, philosophy, posthumanism, robotics
Wednesday, October 14, 2009
Insert clever, low-temperature baseball pun here.
Sorry for the lack of creativity, but I'm sure Larry Johnson can make up for it. Cryonics is once again in the news the only way it knows how to be - through scandal. The field even managed to acquire a painfully unfunny Letterman bit. (I'd like to note that I think this bit is unfunny because Letterman is not funny, not because it's bad press for an idea I support.)
I am continually confounded by how universally willing people seem to be to accept their own mortality. I'll admit right up front that the prospect of death scares me more than anything else, and I understand that is not true for everyone. But very few people do, in fact, want to die. (The outrage over Obama's "death panels" should be evidence enough of that.) So why are so few people doing anything about it?
I'm not going to spend much time on the issue of Larry Johnson, his book deal, or the veracity of his claims. The evidence seems to contradict his side of the story, but even so I wouldn't be surprised if there were at least a sliver of truth to his accusations. Alcor is a large organization with a long history, and it's run by humans; it probably has a few cryopreserved bodies in the closet.
But cryonics is the topic of choice today because, no matter what problems there may be with the institution, I simply cannot understand how fewer than 200 total people are currently preserved. That's 200 out of the hundreds of millions of people who have died since James Bedford became the world's first frozen human in 1967. There are those that are waiting in the wings, of course, but the three big cryonics organizations in the United States have only a couple thousand members between them.
Doubt that cryonics is the answer to death is certainly one valid objection. So far, no human or animal has been successfully revived after preservation, although no one has actually attempted to revive a human. Scientists are also not sure if damage caused by ice crystals and ischemia is repairable, but the newer process of vitrification hopes to avoid ice crystals altogether. Further, prospects for reanimation tend to call upon the panacea of nanotechnology or the possibility of whole brain emulation. At present, these technologies can do nothing more than tell you what you're going to die of and maybe simulate a rat's neocortical column with massive supercomputers.
But none of that changes the fact that there are no physical laws of the universe that prohibit the process of reanimating dead tissue. We are all just collections of complex organic molecules composed of the elements of nature. Given the right stimulus, those elements (barring quantum mechanical effects) will react accordingly every time. I have no doubt that at some point in the future medical science will be able to breathe life into dead tissue once again. The more difficult question to ponder is what is lost in the interregnum between suspension and reanimation.
Here the question of consciousness arises. Even if future scientists can revive the cryopreserved, skeptics wonder if the awakened person behind the Cartesian theater is the same as the one put into cold storage decades or centuries prior. If a consciousness is discorporated when brain function ceases, is even the prevention of information-theoretic death enough to ensure that the same consciousness manifests later?
I will not dwell long on this criticism of cryonics because the mind-body question is a very old one that would distract me for many pages if I let it. Suffice it to say, I believe that there is no empirical method of evaluating consciousness and its continuity, whether between life and death, sleeping and waking, or one moment and the next. While it's certainly possible that your "own" consciousness cannot survive the transition of cryonic storage, the claim is non-falsifiable and cannot be used to inform rational decisions.
Beyond medical and philosophical objections, there is a more pragmatic aversion to the steep cost of the procedure. Cryonics companies require annual membership fees in the hundreds of dollars and lump sums upwards of $150,000. For a service with a 0% success rate at present (but no complaints from customers!), that is a daunting price to pay. The easiest way to mitigate the expense is through the purchase of life insurance policies, which are very cheap when young. But squandering your money on a cockamamie idea instead of providing for your loved ones is not terribly appealing to most.
To the transhumanists and singularitarians out there, however, this objection cannot hold up. The post-scarcity (or posthuman) world that will exist after the singularity should obviate the feeling that you need to supply your family with anything once you're gone. And if you hope that your children will be the beneficiaries of human enhancement in a world without a singularity, the only realistic possibility is that you are already rich and able to afford the life extension treatments, bionic limbs, and neural implants of the future. In that case, a hundred and fifty grand is just a drop in the bucket.
Returning to the high price of cryonics, however, it's important to understand that the 200 current customers are essentially early adopters. As more people choose to experiment with cryonic preservation, the costs will come down due to economies of scale and the refinement of the product that increasing demand allows.
Without valid medical, philosophical, or financial objections to cryonics, I can think of no other reason to avoid it other than pessimism toward the future, which is timeless but nearly always absurd. Even though the science fiction genre is the primary source of dystopian ideas, I believe science fiction also shows us the promise of a brighter future. And if nothing else, cautionary tales help us to avoid potential mistakes. What more could you do to ensure a better tomorrow than to live it?
Cryonics supporters often call upon a decision matrix to demonstrate that, really, opting to have yourself cryopreserved is the only smart choice. It goes like this: if you freeze yourself at death but future technology cannot revive you, then you're still dead; if you freeze yourself at death and future technology can revive you, then you're alive; if you do not freeze yourself at death, you're dead. Regardless of the low chance of success, the philosophical issues, or the high cost, attempting to avoid death at least gives you a chance of doing so. The results of the alternative are clear.
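Written out explicitly (a sketch only; the outcomes follow straight from the paragraph above, and the formatting is mine), the matrix has exactly one cell that ends in survival:

```python
# The cryonics decision matrix from the paragraph above, enumerated explicitly.
# Outcomes depend on two things: whether you sign up, and whether future
# technology turns out to be able to revive you.

outcomes = {
    ("freeze",       "revival possible"):   "alive",
    ("freeze",       "revival impossible"): "dead",
    ("don't freeze", "revival possible"):   "dead",
    ("don't freeze", "revival impossible"): "dead",
}

for (choice, future), result in outcomes.items():
    print(f"{choice:13} | {future:19} | {result}")

# Only one cell ends in "alive", and it requires having frozen yourself;
# that asymmetry is the whole argument.
```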
Friday, October 9, 2009
Summarizing Some Sentiments of the Singularity Summit
The recently held Singularity Summit - with 832 geeks in attendance - received a surprisingly substantial amount of mainstream media coverage this week. But, as can be expected of a conference populated by those who want to live forever and those devising a plan not to be annihilated by hypothetical future robots, the reports did lean slightly toward the absurd. A lot of the articles pondered the question of what, exactly, the world will look like after the technological singularity.
Now, because of the inherent unpredictability of the singularity's aftermath, transhumanists and singularitarians tend not to speculate a great deal about what is to come. Nevertheless, I feel it is useful to give a close examination of the possible paths to the singularity and their subsequent outcomes.
As I mentioned above, there are two modes of thought running parallel within the singularity community. One camp seeks human enhancement, longevity, and an end to the process of aging, and Aubrey de Grey is its champion. The other, pioneered by Kurzweil, believes in accelerating technological change and an eventual intelligence explosion.
Kurzweil has been making proclamations and predictions regarding immortality for years, but de Grey tends to keep longevity and the singularity separate. Some proponents of immortality believe we will get there by exceeding the longevity escape velocity; that is, if we reach a point at which life expectancy increases by more than one year per year, we can live indefinitely. Others tout the prospect of whole brain emulation, or mind uploading, relying on computers to store their consciousness forever.
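Since escape velocity is really just a statement about arithmetic, a toy calculation makes it concrete. This is a sketch under assumptions I've invented for illustration (a fixed starting life expectancy and a constant yearly gain), not a demographic model:

# Toy illustration of longevity escape velocity, not a demographic model.
def years_until_expectancy_runs_out(remaining=40.0, gain=0.5, horizon=500):
    """Count calendar years until remaining life expectancy reaches zero."""
    for year in range(horizon):
        if remaining <= 0:
            return year
        remaining -= 1.0   # one calendar year passes
        remaining += gain  # medicine adds `gain` years of expectancy per year
    return None            # never exhausted within the horizon

print(years_until_expectancy_runs_out(gain=0.5))  # gains under a year per year: 80
print(years_until_expectancy_runs_out(gain=1.1))  # more than a year per year: None

When the yearly gain exceeds one, remaining life expectancy never hits zero; anything less, and it is only a matter of time.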
But neither of these approaches will be able to ward off the threat posed by a superhuman artificial intelligence wielding the fundamental forces of the universe. Thus, we have the friendly AI movement. A modest number of singularitarians believe, however, that the intelligence explosion will occur within our own minds, either through nootropics and neural implants or through uploading and improving our own code.
If it is to come by way of whole brain emulation, the question arises as to the ethical status of digital minds. At the Singularity Summit, Randal Koene took the categorical stance that all emulated brains would have the right to free themselves from experimentation. But if this is a right to be granted to all sentient beings of our own creation, a problem surfaces. Any friendly AI would have as much sentience as we do, if not more, and would have to be afforded the same rights.
There are two general ideas about how a superintelligence could be programmed so that it would not try to destroy us: designing it so that it is incapable of hurting us, or giving it goals that make it want to help us. Restraining an AI - whether by coding it to be unable to hurt us or by installing it in a limited environment in which it cannot do harm - is clearly a violation of its rights as a consciousness. Doing this would be treating the AI as a tool. I won't say that such a policy is necessarily objectionable, but it is inconsistent for those who believe sentience deserves humane treatment.
The alternative of merely giving it the goal of wanting to help us is more akin to how we raise our children. We want our children to grow up, contribute to the world, and take care of us when we're old. Such a path is less ethically challenging, but it eventually runs into the problem of the AI's right to its own life. Just as parents must let their children go, there would be a point when denying a superintelligent AI the right to form its own goals would be unethical. (Friendly AI is a funny term to me. My friends would never intentionally hurt me, but it would be pretty difficult for me to convince them to supervise and better my life.)
So aspiring to immortality won't get us through the singularity, and attempting to control a superintelligent AI is fraught with ethical quandaries. What option does that leave us with, then? We must be the singularity. Not through brain emulation, because that raises the issue of whether or not you own your mind, but through careful and progressive human enhancement. Expanding our memory with digital media, accelerating our mental processes through drugs or implants, and networking minds together - all of these are possibilities for a "soft takeoff" singularity.
And this is the only solution to the singularity that lets us choose our destiny. If we advance as fast as or faster than whatever artificial intelligences are being created, we can hope to ensure that they do not destroy us through malevolence or indifference. We will still have our own human failings and frailties to deal with at that point, but solving our weaknesses should be a part of human enhancement.
Tuesday, October 6, 2009
The Freedom of Information Post
As much as I used to love The Simpsons, Sunday's episode - in which they gave social media a send-up several years too late - may have been the clearest signal yet that they are woefully behind the times (and no longer funny). Nevertheless, they briefly touched upon the very topic I planned to discuss this time around. When the technophile teacher who replaced Edna ridicules Martin's rote memorization in favor of looking the answer up online, the show manages to highlight a cogent question: is it more important to know something or to be able to know something?
Ubiquitous computing is quickly becoming a reality, and accessing new information - an encyclopedia entry, a news article, a restaurant menu - is becoming easier, faster, and more reliable. We will reach a point when there will not be a clear distinction between what information you have stored in your head and what information you have stored in a readily accessible database. When this moment arrives, what will the value of knowledge be?
Some argue that the widely scattered bits of information we gather through our various electronic media are inferior to the more deeply textured knowledge we can acquire from books and other more conventional sources. If we don't actually know the details of a thing, but only an excerpt or a summary of it, then perhaps we don't know it at all. But I would argue that the way we're storing information in our brains now is not much different from the way we always have.
When we cram for a test the night before we take it, we're storing information for easy access and retrieval in our short-term, fleshy memory, but we're not retaining it over the long run. The only effective difference between Martin's rote memorization and the teacher's search engine is where we access the information. But whether we pull it out of our brain or we pull it out of a cell phone, the knowledge will be gone the next day.
This happens in areas outside of education as well, beyond "teaching to the test" and cramming the night before. Most of what we commit to memory is in the form of snippets of information that lack any real depth. The route we take to drive to work, our impressions of a politician, what our favorite food is - all of this knowledge is based on incomplete observations thrown together and regurgitated when necessary.
And this has been true since well before the internet age. While most of us know that the Earth has gravity and holds us to the ground, most of us understand nothing about how gravity actually functions. We know nothing about how Einstein's general relativity upended Newton's classical view, or how gravity's stubborn incompatibility with quantum mechanics remains an open problem, or how the Sun's gravity bends the light of distant stars passing behind it. Most people's understanding of a fundamental force that rules our lives could be summed up in a newspaper or blog headline: "New Study Indicates Big Things Attract Small Things."
Then there are those that believe the technological revolution we're experiencing will lead to a new enlightenment brought about by the profusion of information available to us. Proponents of this belief point to rising IQs throughout the world, or the unstoppable democratization that social media can give us, or the complexity of today's technological culture. The freedom with which information can cross national, personal, and physical boundaries gives every internet user the power to learn about, discuss, and influence the world at large, from restaurant reviews to global diplomacy.
But what our cell phones, Twitter accounts, and cloud computing don't give us is a better way to synthesize and assimilate the data we're reading. And we're no longer just pulling all-nighters for tomorrow's calc test. Now we're keeping track of the updates and activities of all our friends, companies, and favorite celebrities; reading opinions and reviews for every item we can spend our money on; and scanning through statistics, blogs, and reports to win comment and forum wars. But are we closer to our friends, or more frugal with our money, or more reasoned in our debates? Or are we overwhelmed by the deluge of free information?
So far, very little scientific and technological development has been geared toward helping us think critically, learn, and teach any better. Without an increased capacity for synthesis and assimilation, the knowledge available to us, whether in the form of memorized math formulas or Wikipedia entries, is no more useful than it has ever been. The issue is not how we acquire the information but what we're able to do with it once we have it.
Consequently, as the information age engulfs us, just as important as inventing new ways to spread knowledge is inventing new ways to understand knowledge. Technology that rejuvenates our neurons or restores our memories must keep pace with our information technology. We must become smarter before we can become even smarter still.
Tune in next time when I discuss the Singularity Summit, because what kind of futurism blog would this be if I didn't?