Wow, mind-expanding thoughts. Scary too. Read Vernor Vinge's essay -- here's the abstract:
"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? These questions are investigated. Some possible answers (and some further dangers) are presented."
Here's the full essay:
http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html
Fascinating stuff! I hadn't read Vinge's essay before. Also read "Inside the unfathomable superhuman future after the 'singularity,'" by Bruce Sterling, a thought-provoking response to Vinge. Let's hope Sterling's rebuttal helps SF develop as a whole.
Sterling's essay:
http://www.wired.com/wired/archive/12.09/view.html?pg=4
Chris
"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? These questions are investigated. Some possible answers (and some further dangers) are presented."
Here's the full essay:
http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html
Fascinating stuff! I hadn't read Vinge's essay before. Also read "Inside the unfathomable superhuman future after the 'singularity,'" by Bruce Sterling. A thought-provoking response to Vinge. Let's hope this refutation helps develop SF as a whole.
Sterling's essay:
http://www.wired.com/wired/archive/12.09/view.html?pg=4
Chris
From:
no subject
If you look at it this way, singularities happen relatively frequently, on the order of once (or occasionally two or three times) a century. The development and use of atomic energy may or may not be one -- it was well predicted and changed politics, but how much did it change cultures? The personal computer, on the other hand, wasn't well predicted and has had a dramatic impact on culture, at least in the developed world.
But deciding that the singularity hinges on one specific thing, namely superhuman intelligence based largely on AI, is, I think, vastly overstating the case.
Regardless, hopefully the Sterling article will be a plus for SF.
From:
no subject
It was, even for a rabid technophile with few qualms about human augmentation, occasionally unsettling to read a 'reasoned discussion' of the impending extinction of the human species within the next few hundred years. At least the extinction of purely biological humans (zoos, Indian reservations, Amish enclaves, etc. notwithstanding).
Kurzweil's book was highly optimistic, possibly overly so, about the 'survival' of humans, though in his vision they persist as more-or-less virtual personalities originally uploaded from biological humans. It's certainly a rosier picture than some other plausible futures, but by no means guaranteed.
Personally, I anticipate a scenario reminiscent of the movie "AI": intelligent computers and androids becoming more and more common, and more sophisticated, while humanity becomes increasingly irrelevant. At some point the power will shift to the "machine" minds, leaving biological humans to be assimilated, exterminated, made into pets, or left to dwindle away.
From:
no subject
I find both Vinge's and Sterling's essays to be of questionable value. Neither really acknowledges the grim realities of computer science, insofar as we don't know whether AI is even possible, much less destined to happen.
For AI to actually pose the danger of becoming a controlling, anti-human superintelligence, we'd need several factors that just aren't there:
1) Adequate computational power. That one is a sure bet to appear.
2) Sensory apparatus. Intelligence is largely determined by the ability to sense, and electronic sensors just aren't there yet.
3) The ability to interpret stimuli. Natural language, natural vision, etc. are all things that elude some of the brightest minds in the computer world. A computer simply cannot make sense of speech, for example. Even with the right sensory apparatus, a computer can't necessarily interpret the data. We have the computational power to do this; we just can't seem to make it happen.
4) The ability to choose an irrational course of action, or even to choose at all. Computers are, by their very nature, mathematical: 0s and 1s. Even with a quantum computer, you'd still be looking at numerical calculation. One can argue that, with sufficient resources, a computer could generate seemingly irrational actions and thus demonstrate sentience. This seems the likeliest route to AI -- a bug in the system caused by too much software interaction.
However, computer science has firmly demonstrated that the more complex a software system gets, the more likely it is to crash. Even evolutionary (aka genetic) software still has this tendency (see the sketch after this list).
5) Sufficient connectivity and autonomous control to allow the AI to have any real impact. Contrary to "Terminator" and Will Smith's "I, Robot," the world is really not that wired, and shows no sign of becoming so. Even if it were, we'd still have the problem of conflicting protocols: standardization of machine-to-machine interfaces just isn't happening at the same speed as advances in computer technology.
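For anyone unfamiliar with the term, here is a minimal sketch of what "evolutionary (aka genetic) software" means in practice: a population of candidate solutions is mutated, recombined, and selected against a fitness function. The fitness function, rates, and sizes below are toy assumptions chosen purely for illustration, not anyone's real system.

```python
import random

# Toy genetic algorithm: evolve bitstrings toward all-ones.
# Every figure here (fitness, rates, sizes) is an illustrative assumption.

GENOME_LEN = 32
POP_SIZE = 50
MUTATION_RATE = 0.02
GENERATIONS = 200

def fitness(genome):
    # Count of 1-bits: a deliberately simple stand-in for a real objective.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def crossover(a, b):
    # Single-point crossover: a prefix of one parent, a suffix of the other.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: keep the fitter half, breed replacements from it.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best)}/{GENOME_LEN}")
```

Note that, despite the "evolutionary" label, a run like this is still ordinary numerical calculation: call random.seed(0) first and every "random" mutation repeats identically. That's point 4 in miniature.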
In other words, for their singularity to occur, you'd need vast improvements not just in science, but in the ability to deploy said science.
I just don't see it. And the fact that Vinge and Sterling do shows just how little they know about the state of computer science and electronics. It's a sad reflection on the ignorance of modern sci-fi authors.
What strikes me as a far more realistic singularity to worry about is nanotechnology. Genetically adapting chemical robots that can reproduce? One bug, and you can watch humanity go bye-bye. It's germ warfare via tiny machine.
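To put rough numbers on that worry, here's a back-of-the-envelope sketch. Both figures are illustrative assumptions (a made-up replicator mass and an order-of-magnitude guess at Earth's biomass), not measurements of any real nanotechnology.

```python
# Back-of-the-envelope: unchecked doubling of a self-replicator.
# Both constants are illustrative assumptions, not measurements.

replicator_mass_kg = 1e-15   # hypothetical mass of one nanomachine
earth_biomass_kg = 1e15      # rough order of magnitude for Earth's biomass

doublings = 0
mass = replicator_mass_kg
while mass < earth_biomass_kg:
    mass *= 2
    doublings += 1

print(f"doublings to reach ~{earth_biomass_kg:.0e} kg: {doublings}")
```

That works out to about 100 doublings; at a hypothetical one doubling per hour, roughly four days. That's why a single unchecked bug would be the whole ballgame.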
As for the inundation of computers rendering humans irrelevant, I cry bullshit. Increased tools and decreased labor don't create irrelevancy; they offer us the opportunity to explore further. Look at North America: more tools, more sheer output, and still we work longer hours, inventing new things. While I question the usefulness of those things, I don't see humanity becoming irrelevant; I see us becoming super-productive. So long as we can continue to streamline the infrastructure so it won't crush itself under its own weight, we can continue to expand our knowledge.
Unfortunately, that super-productiveness is eating our resources faster than we can replenish them. It's time we headed for space and got some terraforming done before we eat this planet whole and are left sitting in the mire of our collective feces.