The 21st-century technologies - genetics, nanotechnology, and robotics (GNR) - are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them. Thus we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication. I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals. Nothing about the way I got involved with computers suggested to me that I was going to be facing these kinds of issues.
Specifically, robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate. A bomb is blown up only once - but one bot can become many, and quickly get out of control. Much of my work over the past 25 years has been on computer networking, where the sending and receiving of messages creates the opportunity for out-of-control replication. But while replication in a computer or a computer network can be a nuisance, at worst it disables a machine or takes down a network or network service. Uncontrolled self-replication in these newer technologies runs a much greater risk: a risk of substantial damage in the physical world. Each of these technologies also offers untold promise: The vision of near immortality that Kurzweil sees in his robot dreams drives us forward; genetic engineering may soon provide treatments, if not outright cures, for most diseases; and nanotechnology and nanomedicine can address yet more ills. Together they could significantly extend our average life span and improve the quality of our lives. Yet, with each of these technologies, a sequence of small, individually sensible advances leads to an accumulation of great power and, concomitantly, great danger. What was different in the 20th century? Certainly, the technologies underlying the weapons of mass destruction (WMD) - nuclear, biological, and chemical (NBC) - were powerful, and the weapons an enormous threat. But building nuclear weapons required, at least for a time, access to both rare - indeed, effectively unavailable - raw materials and highly protected information; biological and chemical weapons programs also tended to require large-scale activities.
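The point that "one bot can become many, and quickly get out of control" is at bottom a quantitative one: unchecked self-replication is exponential, and exponential growth overwhelms any fixed environment within a few dozen generations. A minimal sketch of the arithmetic (hypothetical numbers, assuming each replicator makes one copy of itself per cycle; this is an illustration, not a model of any real system):

```python
def replicator_count(cycles: int, initial: int = 1) -> int:
    """Number of replicators after `cycles` rounds, assuming each
    existing copy produces exactly one new copy per cycle (doubling)."""
    count = initial
    for _ in range(cycles):
        count *= 2  # every copy spawns one more copy
    return count

# A single replicator passes a billion copies in just 30 cycles.
print(replicator_count(30))  # 2**30 = 1073741824
```

With a replication cycle measured in minutes or hours, as for bacteria or network worms, the step from "one nuisance" to "saturated environment" is a matter of days, which is why self-replication is the amplifying factor the passage singles out.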
While talking and thinking about Kurzweil, Kaczynski, and Moravec, I suddenly remembered a novel I had read almost 20 years ago - The White Plague, by Frank Herbert - in which a molecular biologist is driven insane by the senseless murder of his family. To seek revenge he constructs and disseminates a new and highly contagious plague that kills widely but selectively. (We're lucky Kaczynski was a mathematician, not a molecular biologist.) I was also reminded of the Borg of Star Trek, a hive of partly biological, partly robotic creatures with a strong destructive streak. Borg-like disasters are a staple of science fiction, so why hadn't I been more concerned about such robotic dystopias earlier? Why weren't other people more concerned about these nightmarish scenarios? Part of the answer certainly lies in our attitude toward the new - in our bias toward instant familiarity and unquestioning acceptance. Accustomed to living with almost routine scientific breakthroughs, we have yet to come to terms with the fact that the most compelling 21st-century technologies - robotics, genetic engineering, and nanotechnology - pose a different threat than the technologies that have come before.
Despite my current job title of Chief Scientist at Sun Microsystems, I am more a computer architect than a scientist, and I respect Danny's knowledge of the information and physical sciences more than that of any other single person I know. Danny is also a highly regarded futurist who thinks long-term - four years ago he started the Long Now Foundation, which is building a clock designed to last 10,000 years, in an attempt to draw attention to the pitifully short attention span of our society. (See "Test of Time.") So I flew to Los Angeles for the express purpose of having dinner with Danny and his wife, Pati. I went through my now-familiar routine, trotting out the ideas and passages that I found so disturbing. Danny's answer - directed specifically at Kurzweil's scenario of humans merging with robots - came swiftly, and quite surprised me. He said, simply, that the changes would come gradually, and that we would get used to them. But I guess I wasn't totally surprised. I had seen a quote from Danny in Kurzweil's book in which he said, "I'm as fond of my body as anyone, but if I can be 200 with a body of silicon, I'll take it." It seemed that he was at peace with this.
There is probably some breathing room, because we do not live in a completely free marketplace. Government coerces nonmarket behavior, especially by collecting taxes. Judiciously applied, governmental coercion could support human populations in high style on the fruits of robot labor, perhaps for a long while. A textbook dystopia - and Moravec is just getting wound up. He goes on to discuss how our main job in the 21st century will be "ensuring continued cooperation from the robot industries" by passing laws decreeing that they be "nice," and to describe how seriously dangerous a human can be "once transformed into an unbounded superintelligent robot." I decided it was time to talk to my friend Danny Hillis. Danny became famous as the cofounder of Thinking Machines Corporation, which built a very powerful parallel supercomputer.
I would hand them Kurzweil's book, let them read the quote, and then watch their reaction as they discovered who had written it. At around the same time, I found Hans Moravec's book Robot: Mere Machine to Transcendent Mind. Moravec is one of the leaders in robotics research, and was a founder of the world's largest robotics research program, at Carnegie Mellon University. Robot gave me more material to try out on my friends - material surprisingly supportive of Kaczynski's argument.
For example, from the section titled "The Short Run (Early 2000s)": Biological species almost never survive encounters with superior competitors. Ten million years ago, South and North America were separated by a sunken Panama isthmus. South America, like Australia today, was populated by marsupial mammals, including pouched equivalents of rats, deer, and tigers. When the isthmus connecting North and South America rose, it took only a few thousand years for the northern placental species, with slightly more effective metabolisms and reproductive and nervous systems, to displace and eliminate almost all the southern marsupials. In a completely free marketplace, superior robots would surely affect humans as North American placentals affected South American marsupials (and as humans have affected countless species). Robotic industries would compete vigorously among themselves for matter, energy, and space, incidentally driving their price beyond human reach. Unable to afford the necessities of life, biological humans would be squeezed out of existence.
In the book, you don't discover until you turn the page that the author of this passage is Theodore Kaczynski - the Unabomber. I am no apologist for Kaczynski. His bombs killed three people during a 17-year terror campaign and wounded many others. One of his bombs gravely injured my friend David Gelernter, one of the most brilliant and visionary computer scientists of our time. Like many of my colleagues, I felt that I could easily have been the Unabomber's next target.
Kaczynski's actions were murderous and, in my view, criminally insane. He is clearly a Luddite, but simply saying this does not dismiss his argument; as difficult as it is for me to acknowledge, I saw some merit in the reasoning in this single passage. I felt compelled to confront it. Kaczynski's dystopian vision describes unintended consequences, a well-known problem with the design and use of technology, and one that is clearly related to Murphy's law - "Anything that can go wrong, will." (Actually, this is Finagle's law, which in itself shows that Finagle was right.) Our overuse of antibiotics has led to what may be the biggest such problem so far: the emergence of antibiotic-resistant and much more dangerous bacteria. Similar things happened when attempts to eliminate malarial mosquitoes using DDT caused them to acquire DDT resistance; malarial parasites likewise acquired multi-drug-resistant genes. The cause of many such surprises seems clear: The systems involved are complex, involving interaction among and feedback between many parts. Any changes to such a system will cascade in ways that are difficult to predict; this is especially true when human actions are involved. I started showing friends the Kaczynski quote from The Age of Spiritual Machines.
Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone's physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes "treatment" to cure his "problem." Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or to make them "sublimate" their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals.
It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won't be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide. On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite - just as it is today.
Reading it, my sense of unease only intensified; I felt sure he had to be understating the dangers, understating the probability of a bad outcome along this path. I found myself most troubled by a passage detailing a dystopian scenario he called "The New Luddite Challenge": First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained. If the machines are permitted to make all their own decisions, we can't make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines.
While I had heard such talk before, I had always felt sentient robots were in the realm of science fiction. But now, from someone I respected, I was hearing a strong argument that they were a near-term possibility. I was taken aback, especially given Ray's proven ability to imagine and create the future. I already knew that new technologies like genetic engineering and nanotechnology were giving us the power to remake the world, but a realistic and imminent scenario for intelligent robots surprised me. It's easy to get jaded about such breakthroughs. We hear in the news almost every day of some kind of technological or scientific advance. Yet this was no ordinary prediction.