Artificial Intelligence and the End of the Human Era
by James Barrat
Readers' Reviews
★ ★ ★ ☆ ☆
blaire
This topic has a huge potential for shallow sensationalism, but the author manages to keep it in check. He discusses several basic issues that he expects will be dangerous once some kind of human level AI appears. It is not an academic analysis but a very readable and thought-provoking collection of interviews and analysis. I think it is clearly time to think about these things out loud. Recent advances and cost reductions in GPU computing alone are a historic milestone on the path to machine intelligence. My gut feeling is that the world will find itself inundated by quasi-intelligent (?) robots of all shapes and kinds between 2020 and 2025, not even ten years hence. Better get ready.
★ ★ ★ ★ ★
tarryn
If you thought "War Games" and "Terminator" were all nonsense, this book should give you something to think about. The author explores the possible negative consequences of a true, self-directed artificial intelligence. In his view, the computer would view us (at best) as insects to be ignored, and at worst, to be gotten rid of. If IBM's Watson computer can beat a human at Jeopardy today, what would such a system be capable of in 20 years? This is an interesting book for fans of technology.
★ ★ ★ ★ ☆
genna
This book describes the risk that artificial general intelligence will cause human extinction, presenting the ideas propounded by Eliezer Yudkowsky in a slightly more organized, but less rigorous, style than Eliezer's own.
Barrat is insufficiently curious about why many people who claim to be AI experts disagree, so he'll do little to change the minds of people who already have opinions on the subject.
He dismisses critics as unable or unwilling to think clearly about the arguments. My experience suggests that while there is usually some argument that any one critic hasn't paid much attention to, that's often because the critic has rejected, with some thought, some other step in Eliezer's reasoning and concluded that the step they're ignoring wouldn't change their conclusions.
The weakest claim in the book is that an AGI might become superintelligent in hours. A large fraction of people who have worked on AGI (e.g. Eric Baum's What is Thought?) dismiss this as too improbable to be worth much attention, and Barrat doesn't offer them any reason to reconsider. The rapid takeoff scenarios influence how plausible it is that the first AGI will take over the world. Barrat seems only interested in talking to readers who can be convinced we're almost certainly doomed if we don't build the first AGI right. Why not also pay some attention to the more complex situation where an AGI takes years to become superhuman? Should people who think there's a 1% chance of the first AGI conquering the world worry about that risk?
Some people don't approve of trying to build an immutable utility function into an AGI, often pointing to changes in human goals without clearly analyzing whether those are subgoals that are being altered to achieve a stable supergoal/utility function. Barrat mentions one such person, but does little to analyze this disagreement.
Would an AGI that has been designed without careful attention to safety blindly follow a narrow interpretation of its programmed goal(s), or would it (after achieving superintelligence) figure out and follow the intentions of its authors? People seem to jump to whatever conclusion supports their attitude toward AGI risk without much analysis of why others disagree, and Barrat follows that pattern.
I can imagine either possibility. If the easiest way to encode a goal system in an AGI is something like "output chess moves which according to the rules of chess will result in checkmate" (turning the planet into computronium might help satisfy that goal), then a dangerously narrow interpretation seems likely.
An apparently harder approach would have the AGI consult a human arbiter to figure out whether it wins the chess game - "human arbiter" isn't easy to encode in typical software. But AGI wouldn't be typical software. It's not obviously wrong to believe that software smart enough to take over the world would be smart enough to handle hard concepts like that. I'd like to see someone pin down people who think this is the obvious result and get them to explain how they imagine the AGI handling the goal before it reaches human-level intelligence.
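To make that contrast concrete, here is a minimal Python sketch of the two encodings; the Board class and the arbiter callback are hypothetical stand-ins invented for illustration, not anything from the book.

```python
# Hypothetical sketch of the two goal encodings discussed above.
# Board is a toy stand-in; nothing here comes from the book.

class Board:
    """Toy chess position; a real implementation would track pieces and rules."""
    def __init__(self, checkmate=False):
        self.checkmate = checkmate

    def is_checkmate(self):
        return self.checkmate


def literal_goal(board):
    # Narrow encoding: success is any world-state in which the formal rules
    # report checkmate. Nothing constrains HOW that state arises, which is
    # the loophole behind the computronium worry.
    return board.is_checkmate()


def arbiter_goal(board, arbiter_confirms_win):
    # Harder encoding: success is whatever a human arbiter judges to be a
    # legitimate win. "Human arbiter" has no obvious formal definition, so
    # here it can only appear as an unexplained callback.
    return arbiter_confirms_win(board)


if __name__ == "__main__":
    position = Board(checkmate=True)
    print(literal_goal(position))                              # True
    print(arbiter_goal(position, lambda b: b.is_checkmate()))  # True (stub arbiter)
```

The sketch shows why the second encoding is "apparently harder": the arbiter callback is trivial to write here only because a human has been reduced to a stub.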
He mentions some past events that might provide analogies for how AGI will interact with us, but I'm disappointed by how little thought he puts into this.
His examples of contact between technologically advanced beings and less advanced ones all refer to Europeans contacting Native Americans. I'd like to have seen a wider variety of analogies, e.g.:
* Japan's contact with the west after centuries of isolation
* the interaction between Neanderthals and humans
* the contact that resulted in mitochondria becoming part of our cells
He quotes Vinge saying an AGI 'would not be humankind's "tool" - any more than humans are the tools of rabbits or robins or chimpanzees.' I'd say that humans are sometimes the tools of human DNA, which raises more complex questions of how well the DNA's interests are served.
The book contains many questionable digressions which seem to be designed to entertain.
He claims Google must have an AGI project in spite of denials by Google's Peter Norvig (this was before it bought DeepMind). But the evidence he uses to back up this claim is that Google thinks something like AGI would be desirable. The obvious conclusion would be that Google did not then think it had the skill to usefully work on AGI, which would be a sensible position given the history of AGI.
He thinks there's something paradoxical about Eliezer Yudkowsky wanting to keep some information about himself private while putting lots of personal information on the web. The specific examples Barrat gives strongly suggest that Eliezer doesn't value the standard notion of privacy, but wants to limit people's ability to distract him. Barrat also says Eliezer "gave up reading for fun several years ago", which will surprise those who see him frequently mention works of fiction in his Author's Notes on hpmor.com.
All this makes me wonder who the book's target audience is. It seems to be someone less sophisticated than a person who could write an AGI.
★ ★ ★ ★ ★
minakat
Excellent book describing one potential future. While many books (notably Kurzweil's) paint a bright, wonderful future, this one offers a possible alternative. While not describing the "Skynet"-style future, it does show the possibility of failures that could be catastrophic. It is sad that many people will look on this book and decree the author a Luddite instead of keeping an open mind.
★ ★ ★ ★ ★
brian
A real stare into the abyss... this book takes a meta perspective on the future of humanity that makes me shiver. The author's view of future AI development is inexorable and creative, and his basic question is too important to be ignored.
What ways are left to manage or stop the results of our own creation? This question needs to be answered, but it is still difficult to see where an answer will come from. There are no easy solutions, and the future of the world is at stake.
★ ★ ★ ☆ ☆
rebecca n
Our Final Invention is a thought-provoking yet ultimately biased exploration of Artificial Intelligence which focuses on developing a polemic argument as to why AI will be the end of humanity. Barrat starts off with the reasonable assumption that human-level AI or even superhuman AI will be developed over the next decades due to tremendous growth in computational power in line with Moore's law. Once a certain threshold is reached, there will be an explosion of AI which outpaces human intelligence by an order of magnitude due to self-improving systems. From here, however, Barrat bases the rest of the book on his own personal misgivings toward AI, as such systems are inherently unknowable, unpredictable, and uncontrollable. In Barrat's mind, fear of the unknown naturally leads to Armageddon by AI systems which no longer need pesky human controllers: "A superintelligent AI will be to humans what rabbits are to humans today: pests, pets, or prey."
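For a rough sense of what growth "in line with Moore's law" compounds to over the decades the review invokes, here is a back-of-the-envelope Python sketch; the 18-month doubling period is a common rule of thumb, not a figure from the book.

```python
# Back-of-the-envelope compounding behind the Moore's-law framing above.
# The 18-month doubling period is an assumed rule of thumb, not from the book.

def compute_growth(years, doubling_period_years=1.5):
    """Factor by which compute grows after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

for years in (10, 20, 30, 40):
    print(f"{years} years -> ~{compute_growth(years):,.0f}x the compute")
# 10 years -> ~102x ... 40 years -> ~106,528,681x. Steady doubling is why
# "a certain threshold" can be crossed quickly once it is close.
```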
Biologically speaking, Barrat would have done well to cite the rise of Homo sapiens as the dominant life form on Earth over Neanderthals and other human subspecies. Even the small evolutionary difference in intelligence that separates humans from chimps, on the order of 1%, was sufficient to produce exponential differences in outcome. The one point which Barrat does make well is that we humans are in the process of creating our successor, a new intelligent species which will likely inherit the Earth. We simply do not know our role in this brave new world, as these systems have yet to be invented, and thus any predictions, positive or negative, are premature. Will we compete with super AIs for resources or will we live in symbiosis?
★ ★ ★ ★ ★
alejandro pis
As someone who was educated in another field in the 1980s, adjacent to the robotics labs at CMU, I found this book to be very accessible and thought-provoking. Yes, it is a cautionary tale (suggesting a single perspective), but it considers multiple points of view along the way. I found the argument for "how" we get to ASI (three points of limitless funding) disturbing, as they largely depend on us (via taxes and purchasing power) as the enablers. Are we racing towards the hungry bear? Bragging rights for being first to develop AGI are a powerful trophy, as Barrat points out in multiple ways throughout his book. This book is a terrific read for those who live outside of the compsci world because of its non-jargonistic writing style, and because it opens the door to other written work by AI scientists and authors for those who want to read more.
★ ★ ★ ★ ★
roberto i igo sanchez
The author, a documentary filmmaker, interviewed many computer scientists for this book. LOTS of great content, including interviews with Kurzweil, Vinge, etc.
Misc. Notes and Quotes:
(p. 5) When people fall back on Asimov's 3 laws, "means they've spent little time thinking or exchanging ideas about the problem."
(p. 8) Assumption throughout: it's possible to determine an AI's drives from its goals, e.g., to be a better chess player, it will want to develop a better mind, want to keep humans from messing with its brain, make it impossible to shut it off, and duplicate itself.
(p. 10) "Imagine awakening in a prison guarded by mice…mice you could communicate with." To get them to help you get out, offer them cheese, or offer to protect them from the cat nation.
(p. 14) Keeping AI caged: "And the humans have to lose just once to set up catastrophic consequences."
(p. 23) Possible benefits will drive us forward to develop AGI. "AGI would be mankind's most important and beneficial invention."
(p. 184) "the arrival of human-level intelligent systems would have stunning implications for the world economy." Richard Loosemore and Ben Goertzel, "Why an Intelligence Explosion is Probable," H+ Magazine, March 7, 2011.
(p. 25) Poll of experts: 50% predict AGI by 2050.
The jump between AGI and ASI could be a "hard takeoff," i.e., it will happen very quickly once computers can design their own upgrades.
(p. 47) Machine Intelligence Research Institute (MIRI) is working on ways to transmit human values to AGIs. Michael Vassar says: "The stakes are the delivery of human values to humanity's successors. And through them to the universe."
(p. 119) Vinge's definition of singularity explained, and it is correct.
Chapter 8: Vernor Vinge interview.
Chapter 9: Kurzweil interview. Barrat's criticism: "...how can you competently evaluate tools, and whether or how their development should be regulated, when you believe the same tools will allow you to live forever?"
(p. 158) "Having multiple AIs would likely be safer than having just one." Agree 100%.
(p. 238) Other suggestions: 1) include (key) components that are programmed to die after a certain time, i.e., make the machine "apoptotic" (see the sketch after these notes). 2) Put the machine in a box, a virtual environment. The problem with both is that a superintelligent machine would certainly find out and counter these. Plus it might not "like" these negative moves, and certainly wouldn't "trust" people anymore.
(p. 256+) claims Stuxnet now can be modified to do anything by anyone.
(p. 266) "Gone will be talk of AGI being the next evolutionary step for Homo sapiens, and all that implies." Why? Apparently because we won't understand ASI, and it won't have feelings (at least not at first, and even later only secondary to main goals.)
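As flagged at the p. 238 note above, here is a hypothetical Python sketch of the "apoptotic" idea: a component that expires by default unless an external controller keeps renewing it. The class and method names are illustrative only; the book proposes the concept, not this design.

```python
# Hypothetical sketch of an "apoptotic" component (p. 238 note above):
# it dies by default unless an external controller keeps renewing it.

import time

class ApoptoticComponent:
    """Refuses to do work once its authorization has gone stale."""

    def __init__(self, lifetime_seconds):
        self.lifetime = lifetime_seconds
        self.last_renewal = time.monotonic()

    def renew(self):
        # A human or external controller must call this periodically,
        # or the component expires on its own.
        self.last_renewal = time.monotonic()

    def alive(self):
        return (time.monotonic() - self.last_renewal) < self.lifetime

    def step(self):
        if not self.alive():
            raise RuntimeError("authorization stale: apoptosis triggered")
        # ... one unit of work would go here ...

# The objection in the note, restated in code terms: nothing stops a
# superintelligent system from patching out alive() or copying itself
# into a process that lacks the check entirely.
```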
★ ★ ★ ★ ☆
marlena
The subject matter is a revelation. I'm unsure whether the potential consequences of "AI" suggested in this book are the author's own ideas, but the fact remains: mind-blowing ideas, and possibilities for artificial superintelligence so creative and plausible that you'd have to start from a skeptic's point of view not to be affected. Evil is not the great danger for mankind; it's something worse.
★ ★ ★ ★ ☆
alex templeton
Humanity's third societal revolution. This time it could be costly: we are making and advancing AI ahead of man. The elites and the corporate world want it that way, but I don't see it as something to be worshipped as an idol. The book brought up a very good topic, and it should be discussed.
★ ★ ★ ☆ ☆
hhhhhhhhh
This is an interesting book, recommended to anyone who would like to know more about why AI is perceived to be dangerous. However, the author's bias comes through very strongly; it almost felt like some of the comments made were driven by the author's beliefs rather than what the findings indicated. I also felt the author's bias wasn't just about the technology side but the political aspects too, especially the way he talks about activities taking place in Iran and by Iranians versus the way he talks about activities by others from other parts of the world.
★ ★ ★ ★ ☆
ruth soz
Interesting and fascinating. I've never read a documentary before so I was taken aback by how much the author pushes his own opinion, sometimes in direct disagreement with experts in the field. And, as one reviewer mentioned, I agree that the book is probably twice as long as it needs to be since the author appears to repeat the same statements multiple times over throughout the book. Reminds me of when we were kids and would write book reports with repetitive phrases just to meet the word/page count directed by the teacher. Still, it's an important topic and deserves a read.
★ ★ ★ ★ ★
pilsna
Artificial intelligence may be beneficial or dangerous depending on how carefully it is controlled and the goals of those who control it. Who will be the most likely group to be the first to develop it? The probabilities may surprise you!
★ ★ ★ ★ ☆
anji
The speculations reported in this book are worth consideration. I was first introduced to AI fifty years ago at a US Government computing facility so the predictions are far from new to me. Watson is truly just a recent, public, step in the evolution. Although the book becomes a little repetitious by the end and some of the warnings may seem overblown, the topic deserves discussion outside of sci-fi stories and comic conventions. After all, "...have we ever developed a weapon that was not used?" That's why I rate it 4 stars instead of 3.
★ ★ ★ ★ ★
lara garbero tais
It is essential that we develop technologies with full awareness of their potential consequences. As the founder and CEO of one of the "stealth-mode" startups described in this book, I have asked every member of my team to read this book. [Full Disclosure: Mr. Barrat and I spoke in March 2012 as part of his research. N.B. I am grateful that I am not mentioned, nor is my company.]
Barrat's work reminds me of a book that greatly influenced me as a child: "New Prometheans" by Robert S. De Ropp (1973). Like De Ropp's dystopian predictions for nuclear energy, bioengineering and computerized automation, Barrat believes that artificial intelligence poses an existential threat to the human species. Forty years after De Ropp's dire predictions, we live in a world where bioengineering has drastically reduced starvation, computerized automation has driven global wealth, and nuclear power has been all but abandoned in most countries.
Although I disagree with Mr. Barrat's conclusions, he provides valuable insights into the worst-case scenarios for artificial intelligence. He astutely observes that, without a model for regulation and control, engineered intelligence will likely supplant biological intelligence, precisely because humans won't have the capacity to understand it. Moreover, he builds upon the theme that "Evolution favors the first, not the best," a maxim that is evident every time you turn on a Windows computer. Bad often wins in the marketplace, especially if it is first and spreads faster than competitors can respond.
Barrat is also correct in stating that we lack a "general theory of intelligence" that accounts for the emergence of consciousness and autonomic learning. However, we are close. The evolution of AI is likely to follow a path similar to aircraft. Just as a couple of "hackers" (the Wright brothers) applied an internal combustion engine to harness the power of Bernoulli's principle and fly the first airplane, it is equally likely that a couple of engineers in a garage will find ways to combine neuroscience with computational physics to enable machines to become "self-aware."
Just as there are many forms of human intelligence, there will likely be many forms of machine intelligence. Just as the intelligence of a species is largely determined by its physiology (e.g., dogs, humans, etc.), the intelligence of machines will be influenced by their physical forms. And just as ecosystems regulate the dominance of any one form of intelligence (species) with environmental factors, it is likely that competition among AI paradigms will reach a homeostasis. How humans will fit into the AI ecosystem is entirely dependent upon our engineering intent.
This book does a great job making the public more aware of the potential benefits, threats and implications of how AI will change our lives. Only through such awareness can we control the path of our destiny to co-evolve with AI. Barrat's work is an excellent companion to balance the predictable evangelism of Ray Kurzweil and the Singularity and transhumanism enthusiasts.
★ ★ ★ ★ ★
hermione laake
If nothing else will make you interested (and yes, concerned) about rapid technological evolution, this book will; and it will fascinate.
Critically, it will help focus attention and more resources on seemingly inevitable scenarios that are not favorable to the human species.
Many aspects of this book are more disturbing than any Texas chainsaw movie. Read it and follow up on your conclusions.
★ ☆ ☆ ☆ ☆
juanita
The topic is very relevant; people must think about this and form their own opinions. But the book is not worth the time you need to read it. It is a constant repetition of the same concept dozens of times; occasionally it is almost cut and paste. Do NOT waste your money or time.
★ ★ ★ ★ ★
asher rapkin
I enjoyed this book because it gave both sides of the evidence and opinions on whether machine intelligence will mean the end of the human species. It is sometimes redundant when trying to make its point. I would recommend this book to everyone interested in our future.
★ ★ ★ ★ ★
miranda
Potentially the most important work on AI ever written and a timely insight into the race to create what could be our successors on this blue planet. A masterclass on a subject fraught with danger for us mere humans. Essential reading!
★ ★ ★ ★ ★
santvanaa sindhu
Beautifully written. I found Our Final Invention engaging and hard to put down. This book really makes you think about how we'll be interacting with machines in the not too distant future. I wish Barrat painted a rosier picture of what AI will do to our world… but am grateful he has taken this opportunity to warn us.
★ ★ ★ ★ ★
altyn sultan
Extremely readable review of artificial intelligence by an author who has become extremely well informed on the subject. Reading this book is time well spent for anyone who plans to be around the next few years.
★ ★ ★ ★ ★
kelly p
Other books about the future of Artificial Intelligence (specifically Artificial General Intelligence = AGI, also known as "strong" AI, seed AI, or thinking machines) tend to be about the power, the promise, and the wonder of AGI, and spend about 2 minutes on the dangers. Even if they literally say, "If we get this wrong, it will kill us all," the relative time spent tends to leave the reader unimpressed by the danger.
This book explains the danger in detail. James Barrat interviewed many of the researchers in the field, plus knowledgeable external experts, to assemble this layman-readable overview. It is easy to comprehend, well organized, and is about the people involved as well as the technologies. It points out the ordinary human motivations that lead to inappropriately discounting and denying dangers, and how they play out in this field.
Although he does not pitch a fund drive, it is clear that anyone can make a difference. Barrat identifies the very small number of tiny organizations that are working to reduce the danger. Small size means small contributions have a big impact. Ordinary people, experts in various fields (especially mathematics), students, and wealthy people can contribute and improve humanity's chances to survive this century and prosper.
I invite you to read this book, then get into action. The clock is ticking.
★ ★ ★ ☆ ☆
florence phillips
Never gets around to discussing "friendly AI": what it is, how it could be implemented, why it would work. Still, an interesting checkpoint on where the various AI experts think we are, where we are going, and what will happen when we get there.
★ ★ ★ ★ ★
chancerubbage
After seeing the excellent A24 film, Ex Machina, in the theater I sought out an objectively pessimistic view of artificial intelligence development. This brilliant account of our species' reckless, stumbling race toward AGI->ASI leaves me with the first rational fear of my life. I like this book so much that I've purchased three extra Kindle copies for my friends and family.
★ ★ ★ ★ ★
jayanth
Barrat presents a cautious picture of the rush towards super AI. He interviews a wide variety of experts, and their nearly uniform message is troubling - much can go wrong and there is not much we can do about it.
This is a compelling read!
★ ★ ★ ★ ★
philberta leung
This is a great book that asks the questions: Will artificial intelligence surpass human intelligence, and what will be the consequences? The answers are yes (perhaps within 20 years) and not sure. He suggests that ASI (artificial superintelligence) might want to take over the universe, but that raises some unanswered questions. It assumes there is no other organic intelligence or ASI in the universe (which raises questions about the age of the universe and our relative age, among others). Still, there is little doubt that ASI will evolve, and what the consequences will be is food for thought. Ah, for the Stepford wives :-)
Well worth reading and it is a relatively short work.
★ ★ ★ ★ ★
dan cote
The accelerating evolution of intelligence from human to narrow artificial (AI) to general artificial (AGI = human level) to super artificial (ASI) threatens to end the human era in our own lifetime. This book opposes the euphoria of Kurzweil's anthropomorphic bias about the coming Singularity. It warns of an explosion of alien intelligence unsympathetic to humankind, not necessarily hostile but indifferent. There does not seem to be any way to stop it. The existential danger is not even widely recognized. Relinquishing technology is not an option. The current world population could not survive without technology, which must accelerate to support accelerating population growth. The economic and political winner-take-all competition to be the first to obtain AGI guarantees that research will not be renounced or even slowed down by alternate development of "friendly" intelligence. The Fermi Paradox (why are there no signs of intelligent life elsewhere in this vast universe?) may be explained by intelligence evolving to levels beyond human perception. If the end of the human era is really at hand in a matter of decades or less, considering that my present age is 71, I may be spared having to face the worst shocks of the looming crisis. Facing the end of my own life may be about all I can handle, but this view is also egocentric.
Update 2013-12-20: An excellent review of Barrat's book can be read in the opinion section of the Washington Post by Matt Miller.
★ ★ ★ ★ ★
craig warheit
I was quite impressed with this material as it was presented and found it to be a sobering piece of work well worth examining. I appreciated that it didn't try to fear monger but did provide some balanced perspectives that most people and most technologists give very short shrift. This conversation will have to be expanded in the future if we are to have one. Of course if we don't then we will likely get what we deserve.
★ ★ ★ ☆ ☆
bev morrow
The book offers a very pessimistic and bleak view of the consequences of developing Artificial Intelligence (AI) and especially Artificial Super Intelligence (ASI), which will inevitably follow AI. According to the book's author (Barrat), ASI will inevitably destroy humankind.
One of the recommended principles of some of Barrat's group of people is called the "Precautionary Principle": if the consequences of an action are unknown but judged by some scientists to have even a small risk of being profoundly negative, it's better not to carry out the action that risks negative consequences. If this principle were enforced in earlier human times, we probably wouldn't have fire, let alone the Industrial Revolution.
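As a toy formalization of the principle as stated, in Python (the threshold number is illustrative, not from the book):

```python
# Toy formalization of the Precautionary Principle as described above.
# The threshold is illustrative; the book gives no numbers.

def precautionary_allow(p_catastrophe, threshold=1e-6):
    """Forbid any action whose judged probability of a profoundly
    negative outcome exceeds the threshold, regardless of upside."""
    return p_catastrophe < threshold

# Fire, by this rule: enormous expected benefit is irrelevant once
# some judge puts the catastrophe risk above the threshold.
print(precautionary_allow(0.01))   # False -> forbidden
print(precautionary_allow(1e-9))   # True  -> allowed
```

Notice the rule never weighs benefits at all, which is exactly the reviewer's objection: applied historically it would have forbidden fire and the Industrial Revolution.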
One of the problems I have with this book is the slanted cited sources. For example, page 60 states "In 2007 in South Africa, a robotic antiaircraft gun killed nine soldiers and wounded fifteen others in an incident lasting an eighth of a second." The only source cited for this statement is a website on the Internet. I read history and political articles on the Internet fairly frequently and I often find errors, distortions, exaggerations, and outright fabrications on these sites. If a website is the best Barrat can offer for this alleged incident, then I doubt it ever happened. To me, it's like citing Fox News as a source for political opinion.
In fact, in looking through the Notes at the end of the book, most of Barrat's cited sources are Internet websites. I'm not impressed.
There is another question or issue that Barrat doesn't explore. We (humans) have been seeking signs of Extra Terrestrial Life (ETL) for decades. Logically, it seems to me that somewhere out there ETLs would have developed ASI by now. Based on Barrat's gloomy prognosis, those ETLs would have then been destroyed by their own ASI. Therefore, it is impossible for any form of intelligent life to progress much past the level of technology we humans now have. Maybe this is true, but somehow I doubt it. Maybe a majority of ETLs perished as a result of their development of ASI, but perhaps some did not. If they can survive ASI, then so could we. The other alternative, I suppose, is that there are no (and have never been) other intelligent life forms in the entire universe other than humans here on Earth.
I will say that the book offers much to think about on the subject of AI. I just don't find it to offer a very objective view of the subject. Admittedly, I am by no means really knowledgeable on AI.
★ ★ ★ ★ ★
barb pardol
Our Final Invention is one of the most fascinating, yet concerning non-fiction books I have ever read. It is amazing that this issue has been left out of our current conversations regarding near term concerns for our planet.
★ ★ ★ ☆ ☆
david mcnutt
Interesting read, but it uses loose arguments to tie together a risk assessment for AGI and ASI creation. Some decent research jumping-off points for those casually interested in the field. The author introduces a number of self-generated buzzwords without fully formulating the concepts and framework underpinning the terms.
★ ★ ★ ★ ★
laken oliver
Stimulating, Gripping, & Relevant
If I knew a male who didn't like to read often, or one who is indifferent to many books, I'd give him this one. It is cutting edge and critical, warranting the attention of the intellectuals and visionaries among us. (I'm sure some of the females will enjoy it as well.)
★ ★ ★ ★ ★
jessica bitting
This book explores the OTHER side of our tech wonderland in easy layman's terms. While I do not agree with every conclusion, they do remain possible.
The world at large, those currently unaware of transhumanism and the singularity, need to be brought into the conversation, to consider all the positives AND the negatives.
This book gives them much to consider.
★ ★ ★ ★ ★
yvette bentley
An excellent, up-to-date and comprehensive account of recent developments in Artificial Intelligence (AI) research. The book presents a systematically argued position that we need to pay close attention to developing safeguards to keep AI under human control. But it also suggests a more dire possibility: that this may not be possible.
★ ★ ★ ★ ★
brittnie
Global warming is a minor matter when compared with the inevitable prospect of smarter-than-human machine intelligence. Numerous companies such as Google, Vicarious, IBM, and of course the US defense department...
The only thing I would critique is that, while his views are rationally supported and thorough, I would have liked to have seen more specific ideas about what can be done to prevent his pessimistic predictions from happening. While I definitely praise Barrat for his ability to raise awareness on this issue, I was depressed when I finished the book. A little more emphasis on what we can do to appease our new robotic overlords would have been greatly appreciated. I will read his next book though.