Superintelligence: Paths, Dangers, Strategies

By Nick Bostrom

Total feedbacks: 27
★★★★★: 11
★★★★☆: 11
★★★☆☆: 5
★★☆☆☆: 0
★☆☆☆☆: 0

Readers' Reviews

★ ★ ★ ★ ★
robert lester
Fast delivery. Product as described. Good reading for everyone interested in technology. It got me thinking a lot about our future, and it increased my awareness of how big companies collect our data and what they use it for.
★ ★ ★ ☆ ☆
cathrine prenot
When I decided to start reading Superintelligence I thought the book would be a page turner. Sadly, the author’s writing style lost my interest shortly after the first few pages. Obviously, I wasn’t part of his intended audience. He wrote the book for the techno-savvy crowd, and because I wasn’t part of that audience, it was not easy to wade through the scholarship that was put into this book. I had to sift through other reviews to understand the gist of the author’s argument.

The theme of the book should have been compelling enough. The author contemplates the risks associated with the rise of sentient computers and robots. The dangers associated with the rise of sentient technology are no longer within the realm of science fiction. By the author’s own reckoning we have at most 100 years left before technology is created that is more intelligent than humans. This poses an existential risk to the fate of humanity. Of course, with the threat of ecological catastrophe or nuclear war, the risk posed by sentient machines is not on many people’s radar screens. However, as the author argues, that doesn’t mean we shouldn’t be starting to contemplate this possible threat.

Mr. Bostrom begins his overview with a summary of current technology and the ways that it could lead to superintelligence. He also develops a tentative timeline for when machines may reach superintelligence, suggesting that it is at least 50 years away and at most 100 years away. He also makes the case that it is only a matter of time before super-intelligent technology is achieved.

He goes on to define what superintelligence is and its qualities. He continues with a lengthy discussion of possible motivations of super-intelligent technology, and possible ways those motivations could be programmed into machines. Then he begins the meat of his argument: the possible ways that super-intelligent technology could subvert the designs of its human creators, leading to catastrophe for humankind.

Of course, there are many positives to the development of super-intelligent technology. One benefit would be the exploitation of other planets’ resources. With super-intelligent technology we could effectively mine or even colonize other planets. Even now there are a lot of things that computer technology is doing that we take for granted. Our finances are now electronic. With the internet, communication is just a touch of a button away. However, just imagine the chaos that could ensue if super-intelligent technology gained control of the things that are already electronic. People wouldn’t be able to make financial transactions. They might not even be able to buy food at the local grocery store. So, do the risks outweigh the benefits? This is the question that Mr. Bostrom is grappling with.

Me? My biases are toward life. Even though at this moment I am benefitting from the applications of technological advances by posting this review online, I feel that there are limits to technology that we just shouldn’t trespass. Mr. Bostrom may not be as biased as I am against technology, but he definitely wants to design an ethics for its control. However, programming ethics into our computers poses numerous challenges. For example, let’s just imagine for a moment that we program the rule, ‘only do things that benefit humans.’ Who gets to decide what benefits humans, or how best to benefit humans? A rogue computer may decide that the best way to benefit humans is just to stick us in life-enhancing pods encased in amniotic fluids, held in a perpetual dream state. Yes, we may be assured of an ultra-healthy eternal life, but what about our freedom, or all the other characteristics associated with being human? So even an ethics guiding superintelligence is fraught with many dilemmas.

I feel that Mr. Bostrom’s main message to his audience is that they need to slow down with their development of super-intelligent technology and start to think about the philosophical challenges associated with it, or at least develop a philosophical orientation guiding its possible applications. Wise development would ensure that we reap the benefits and not the risks. However, there are many things we humans have done that are not that wise so only time will tell whether people will heed Mr. Bostrom’s warning.

Recommendations? If you are a science fiction fan or techno-savvy this book is for you. For the rest of us techno-illiterate people we can wade through it but we will probably get more out of just a book review of its contents. Still, for those of us worried about the future fate of humanity, we should definitely take Mr. Bostrom’s arguments into consideration.
★ ★ ★ ★ ☆
antti
Author sketches out a pretty scary issue that most of us probably have never considered. Sort of like global warming; the timetable is distant but uncertain. The book is pretty dense; definitely not a light read.
★ ★ ★ ☆ ☆
natalie jankowski
This book is interesting, but quite obviously written by and for an academic audience. It covers a lot of theory, has a lot of charts that can't be read on a Kindle, and sums it up by saying, in effect, "who knows?" The subtitle of this book is all covered: the Paths, Dangers, and Strategies of attempting to handle AI superintelligence are presented in a very repetitive format that jumbles things up a lot and says we will discuss this more in whatever chapter gets back to it. The previous sentence is a toned-down attempt to demonstrate the author's writing style. The daisy chain could be three or four links later.

If, at the end of a very long read, the author can reach no conclusions, and not very much new was revealed, where IS the Superintelligence? I am convinced the author is very smart and knowledgeable about the subject, and probably a wonderful guy. But unless you can commit to providing details on how to bake the bread, there is not much worthwhile in providing a long list of ingredients and detailing endlessly what each ingredient may do. We will discuss each ingredient in further chapters. I would recommend "Our Final Invention" for a more informative look at the very near future.
★ ★ ★ ★ ☆
allen
This book blew my mind. It meanders a bit, hence the 4 stars. Same writer was featured in a long essay on the same topic that had highlights of the book. Book, essays and topic are very timely, scary and worth your time.
★ ★ ★ ★ ☆
alison dotson
This book is on an important and intriguing topic; however, please only use this book as a warning of one possible outcome many, many years in the future. Nevertheless, I did find the topic engaging. I have to be honest: the book is a bit indulgent in speculation and lacks a true appreciation of the countermeasures that will inevitably be built as part of the growing intelligence these systems exhibit. We mistakenly think machines have egos, but this is absurd.

My less than humble opinion is that Machine learning will impact the economic sphere more than any SkyNet apocalyptic scenario. This coming age will be devastating to a world where humans try to compete with the computers for jobs.

I find the idea that computers will take us over absurd; it does not show a deep understanding of machine learning. Machines are not built for flexibility, and billions of brains will crush any machine with an oversized ego.
★ ★ ★ ★ ☆
antonella montesanti
A very broad analysis of the subject without getting too technical. Different people will most likely appreciate different parts of the book, while fewer will like all of it. But definitely a good read if you're into hard-core AI.
★ ★ ★ ★ ☆
kevin grimsley
An enlightening read. Nick Bostrom helps bring to the forefront the very real dangers of an uncontrolled machine intelligence explosion. Highly-recommended for anyone looking for a meatier discussion of what the future of AI may hold. A bit dry in parts, but there are humorous and entertaining parts as well.
★ ★ ★ ★ ★
matt longman
Nick Bostrom’s book Superintelligence: Paths, Dangers, Strategies (2014) is the most comprehensive discussion of the issues so far. Bostrom provides a detailed overview of the history of attempts at artificial intelligence, the modes of possibility for its development, the likelihood of success, and the risks and dangers.
★ ★ ★ ★ ★
danielle franco malone
Provides ample explanation and is well thought through. The author takes you through a logical progression of potential considerations and their implications, which gives you a great basis for thought. You'll think along with this book, so it might take longer to read than expected...
★ ★ ★ ★ ☆
rahat huda
A well-considered and heavily researched book. My only criticism is that at times it feels as though chapters are needlessly stretched, which discouraged me as a reader. However, this is a great read for anyone interested in AI and the social, political and control problems that arise with it. Let's hope policy makers and research bodies take notice.
★ ★ ★ ★ ★
merrill mason
This is an extremely well thought-out and written book. It not only describes ideas I hadn't thought of, it also analyzes ideas I had thought of so rigorously that the way they are presented gave me new insight.
★ ★ ★ ★ ★
alejandro
Nick Bostrom has achieved a tour de force with the publication of this book. He clearly has in-depth knowledge of philosophy, physics and computer science; a rare combination indeed. His analysis of the pros and cons of various approaches to solving the problem of the potential emergence of an evil superintelligence is exquisite in its subtlety. I have only one objection to his proposals. I believe the only realistic way of instituting the necessary precautions is through an expert body of IT professionals and philosophers such as Nick Bostrom. Any wider participation would lead to suboptimal outcomes, possibly even gridlock. This potentially existential threat to humanity must be prevented from becoming a reality, and only an expert body can give the assurance necessary.
★ ★ ★ ★ ☆
loolee dharmabum
Today there are thousands of companies building applications that help users deal with the complexities of modern life. Hardly anyone asks where all this is going. Just as adding more carbon dioxide into the atmosphere every day leads to a crisis with the climate, so building ever more intelligence into everything leads to a different kind of crisis: what happens when our machines become more intelligent than we are? Nick Bostrom's book looks at this coming crisis and how best to cope with it before it gets beyond our control. Once machines surpass us in intelligence and progressively become even more intelligent, we will have lost our ability to control what happens next. Before this comes to pass, it is essential that we develop a strategy to influence what happens so that the potential dangers are dealt with before they develop. As the book shows, this is easier said than done. For those who are interested in this topic, Nick's book provides a thorough analysis of the issue as well as possible strategies to deal with it.
★ ★ ★ ★ ★
abdallah
This book discusses issues related to what is likely a major, even possibly the most crucial turning point in human history. This is a point we will likely reach in this century. This is when we build machines that are more intelligent than we are. We need to be prepared to design and implement these machines, what has been called "the last tool we will ever make". There are some dangers involved we need to deal with because we cannot stop this from happening. It is inevitable.
★ ★ ★ ★ ☆
staci flinchbaugh
An exceedingly strange book that I would not highly recommend to anyone. Extremely impressive in its breadth of knowledge and fastidiousness of its argument -- and yet totally unconvincing in its general message.
It seems to me that the estimates of possible dates for AI reaching human intelligence by AI experts are mostly far too near and Bostrom, because he has too much personally invested in transhumanist nonsense and this idea of technological transcendence, isn't clear-eyed enough to see this. If this massive advance happens at all, I think the upper bound of AI-expert estimates is the time frame we should expect it in.
He acknowledges his uncertainty constantly (Bostrom aspires to be a robot himself, constantly striving to avoid the bias that comes with overconfidence, which partly explains why the book is so weird), but I almost think that the book was pointless because of how uncertain this future is. The book is a remarkable effort though, which is why I have given it four stars. Also, because it is strangely compelling to feel like you're reading the reasoning of a pale, Norwegian, shiny-headed cyborg.
★ ★ ★ ★ ☆
shakeel
Great book! It raises several questions about the implications of super-intelligent AI. The book also walks through multiple scenarios of how machine superintelligence could play out. This book will definitely get you thinking about the subject of machine AI and how it will affect the human race.

The writing style could be described as verbose, meaning that it is often unnecessarily wordy and in some cases repetitive. The writing style is a bit hard to follow in some parts. Due to the unnecessary vocabulary and verbose writing I read this book at a slower pace than most books. Despite the shortcomings in writing style, the information in the book is very exciting. If you are interested in AI and machine intelligence, read this book for sure. I'd recommend starting with an easier-to-understand book first, such as How to Create a Mind by Ray Kurzweil.
★ ★ ★ ★ ★
suneeta misra
Nick Bostrom has written a book that gives people food for thought. This book will make you think about if and when we will reach what futurist Ray Kurzweil calls the "Singularity," which is when computers will be as smart as or smarter than we are. I must say that it is an intriguing book. Some philosophers, computer scientists, physicists, mathematicians, theologians, and just plain lay people think this will never happen.

One question that was raised in the book was, "What If ?"
★ ★ ★ ★ ☆
anna talamo
A thought-provoking book, but I feel the author is overly pessimistic, overlooks some key concepts, and seems pretty inhumane in his own thinking. Not once does he discuss the prospect of raising an AI as you would a human child, teaching it right from wrong. I mean, we've been creating intelligent beings throughout human history that grow in power and intelligence and eventually replace us: they're called children. We try to ensure that by the time they have the power to cause real harm we've instilled a sense of ethics and compassion in them. Instead, one of the author's solutions to the "control problem" is to lock a superintelligent AI in a box with a kill switch, and not let it communicate with the outside world except to answer questions posed to it with "yes/no" (so that it could not influence the humans into letting it out, of course). Wow. This book doesn't make me fear A.I., but it makes me fear that the author would even consider doing that to an intelligent being.
★ ★ ★ ☆ ☆
the nike nabokov
This book is incredibly dry and dull, even for those with a high threshold for academic reading pain. I am a PhD student, so I read a lot of academic papers and journals, and still there were times that I couldn't will myself to read another page. I've read engineering textbooks with more spice than this book.

Despite its dryness, it raises some fairly interesting points and gives some good "thought experiments." It does a decent job of presenting different ways by which superintelligence could be realized and what it might mean societally. This was my first dip into the superintelligence field, and I wish it had been with a book that was more approachable.

It can be deemed regrettable, however, that the text was written in such an exacting, onerous manner so as to obfuscate the very information it was attempting to convey about the potential advantages and pitfalls surrounding superintelligent beings. Beings that through different unknown pathways, and for any number of unknowable motivations, may pose an existential threat to humanity in a variety of ways.
★ ★ ★ ☆ ☆
amanda farmer
The ideas in this book are worthwhile to think about, read about, and are very interesting.

I think his presentation was lacking though. I think some of his explanations are a little long winded and somewhat repetitive. I think the book could have been shorter. If it was written in a more approachable way rather than an academic style, it would probably be a lot more popular. I doubt his goal was to create the most popular book though.

You can google information about this subject and reviews of this book and find more approachable ways to broach the subject. If you have a lot of time on your hands and are a fan of academic texts that can be a laborious endeavor, then read this book. If you aren't, stay away.
★ ★ ★ ★ ★
rania
This book is awesome. I've been looking for something like this book for about 15 years, ever since I realized something like the Singularity might happen, and then realized that Vernor Vinge had beaten me to the idea by about 16 years. Turns out actually that people realized this already back in the 50s, as pointed out in this book.

This book is arguably essential reading for both science fiction writers and futurists, i.e. for people writing about what could happen in the future:

- For science-fiction writers, there is a rich wealth of possible future scenarios here. I mean, many pages have enough scenarios, just in one page, to seed around 5 to 10 different books. Amazingly content rich

- For futurists, this book is well worth it just for the presence of so many resources and references in one place. You want to assert that superintelligence is at least plausibly possible? Present the survey of leading experts, and present the 10%, 50%, and 90% estimates, averaged across the experts. No longer do you need to just say "I think that..." and "Hey, look at this graph of MIPS per Intel die, so believe what I say." I mean, one can still say this, but there is a very rich set of data, resources, and references here. And it's written in a very professional, scientific, academic style, without any particularly obvious sensationalism. Understatement if anything. But really, neither sensationalism nor understatement, just a statement of different possibilities, their probabilities, all from a relatively detached viewpoint.

I was initially surprised that this is written by a professor of philosophy, rather than a professor of machine learning, or computer science, but on reflection it makes perfect sense. The professors of machine learning and computer science are busy working on the next small incremental improvement in combining several models together in some novel way, or improving deep neural net learning, and so on. They have neither the funding, nor the time, to take a step back, and look at the bigger picture, as Nick Bostrom has done here.

I'm not going to say this book is perfect, but it's pretty close to it :-)
★ ★ ★ ★ ★
anya kawka
The author has obviously put a huge amount of thought into this topic. The number of angles he considers in terms of implementation timelines, methodologies, pros and cons for each, likelihood of the success of different methodologies over various timeframes, are impressive.

For example, in discussing the various ways in which AI might be implemented, he concludes that AI (and subsequently, super-intelligent AI) via whole brain emulation is essentially guaranteed to happen due to ever-improving scanning techniques such as MRI or electron microscopy, ever-increasing computing power, and the fact that understanding the brain is not necessary to emulate the brain. Rather, once you can scan it in enough detail, and you have enough hardware to simulate it, it can be done even if the overarching design is a black box to you (individual neurons or clusters of neurons can already be simulated, but we lack the computing power to simulate 10 billion neurons, and we lack the knowledge of how they are all connected in a human brain -- something which various scanning projects are already tackling).

However, he also concludes that due to the time it will take to achieve the necessary advances in scanning and hardware, whole brain emulation is unlikely to be how advanced AI is actually, or initially, achieved. Rather, more conventional AI programming techniques, while perhaps posing a greater need for understanding the nature of intelligence, have a much-reduced hardware requirement (and no scanning requirement) and are likely to reach fruition first.

This is just one example. He slices and dices these issues more ways than you can imagine, coming to what is, in the end, a fairly simple conclusion (if I may inelegantly paraphrase): Super-intelligent AI is coming. It might be in 10 years, maybe 20, maybe 50, but it is coming. And, it is potentially quite dangerous because, by definition, it is smarter than you. So, if it wants to do you harm, it will and there will be very little you can do about it. Therefore, by the time super-intelligent AI is possible, we better know not just how to make a super-intelligent AI, but a super-intelligent AI which shares human values and morals (or perhaps embodies human values and morals as we wish they were, since as he points out, we certainly would not want to use some peoples' values and morals as a template for an AI, and it may be hard to even agree on some such philosophical issues across widely-divergent cultures and beliefs).

This is a thought-provoking book. It raises issues that I never even would have thought of had the author not pointed them out. For example, "infrastructure proliferation" is a bizarre, yet presumably possible, way in which a super-intelligent (but in some ways, lacking common sense) AI could end life as we know it without even being malicious -- just indifferent to us while pursuing pedestrian goals in what is, to it, a perfectly logical manner.

I share the author's concerns. Human-level (much less super-intelligent) AI seems far away. So, why worry about the consequences right now? There will be plenty of time to deal with such issues as the ability to program strong AI gets closer. Right?

Maybe, maybe not. As the author also describes in detail, there are many scenarios (perhaps the most likely ones) where one day you don't have AI, and the next you do (e.g., only a single algorithm tweak was keeping the system from being intelligent and with that solved, all of the sudden your program is smarter than you -- and able to recursively improve itself so that days, or maybe hours or minutes later, it is WAY smarter than you). I hope AI researchers take heed of this book. If the ability to program goals, values, morals and common sense into a computer is not developed in parallel with the ability to create programs that dispassionately "think" at a very high level, we could have a very big problem on our hands.
★ ★ ★ ★ ★
jesalyn
A serious, "heavy" text about a subject with monumental consequences. It is now a week after reading this book, and I am still thinking about it. Uncontrolled artificial intelligence (AI) may surpass all of earth's other problems. We should all "take heed."
★ ★ ★ ☆ ☆
christopher stensli
The author shows how an intelligence explosion, from the development of brain emulations or artificial intelligence, could pose an existential risk to humanity. If we continue to develop AI and it surpasses human intelligence, its speed of thought could overwhelm us. Thus the control problem, how we keep the AI from being malignant, is critical. A criticism of the book is that it is long on words and short on concepts, making the read sometimes interesting and sometimes a chore. However, congratulations to Bostrom on bringing this important subject forward in an accessible manner.
★ ★ ★ ★ ★
eric boe
This superb and well-written book is both a solid introduction and a comprehensive overview of the risks of AI. It is difficult to overrate its importance or usefulness to both laypersons and professionals in the field. It is remarkably lucid and requires relatively little effort in comparison to its yield for a thinking person who feels any inclination towards understanding a grave and imminent risk to humankind globally. Read it!
★ ★ ★ ★ ☆
upali
This should be subtitled "Sinners in the Hands of an Angry AI," because that would provide a clue about the type and range of imaginative speculation a reader is to encounter. It seems that the urge to imagine an omniscient and omnipotent god who is by fits beneficent and vindictive has some deep psychological root in the human psyche. Deprived for a century or more of display in learned circles by orthodox agnosticism, it here bursts forth again in full glory. Just as Edwards was taken seriously in his time, those still-living roots are supporting serious concern yet again. The book is interesting on many levels, other than taking it straightforwardly, but I fear that some future history will support a wish that AI were anywhere near the top of existential concerns of humankind. A rational response in Edwards' time might have been, "Free us from this fear." Our AI directive: "Read this book, AI, and don't cause these problems."