
Readers' Reviews

★ ★ ★ ★ ★
ramaa ramesh
This book is fundamental for the ML enthusiast's conceptual bookshelf.

Some have complained about its lack of technical detail; mind you, it's a conceptual book and should not be read for technical know-how but more for understanding the spirit of the machine learning researcher and what the big picture is all about.
★ ★ ★ ★ ★
mateo
“The Master Algorithm” was a pleasant read. It sums up all the major schools of machine learning algorithms succinctly and intuitively, without heavy mathematics. It explores the pros and cons of each, and postulates a future master algorithm which could integrate them all to build general-purpose artificial intelligence. Pedro also made his own extrapolation of the future AI-empowered society with a tint of optimism. Highly recommended to those who want to quickly slice through the jungle of ML algorithms and gain an intuitive understanding of AI/ML from the body-of-knowledge perspective.
★ ★ ★ ★ ☆
sapphire
Great book for helping me "refresh" my knowledge of AI from when I got an advanced degree in Computer Science -- many years ago. Easy to read and covers a pretty broad range. 4/5 stars because Pedro is intentionally "opinionated" at times and I don't always agree with his opinions -- but I do appreciate the fact that he's put them out there!
★ ★ ★ ★ ☆
todd holdridge
The author does a good job explaining many ML concepts (to the uninitiated like me), but quite a few sections are not clear enough.

The book leaves me wondering what is so impressive about discovering past correlations and extrapolating their validity to the future. It sounds like most ML approaches are simply attempts at quantified superstition. Perhaps that is the value of the book: it can make you disillusioned about ML.
★ ★ ☆ ☆ ☆
abby terry
Too lightweight for a practitioner to learn much from it other than the ML World of Pedro Domingos. Yet at the same time too buzzwordy for someone outside the field to really learn anything substantial/actionable from it. Neural Networks, Random Forests, Naive Bayes, Classifiers, and Genetic Algorithms are really not all that complicated to understand (though admittedly sometimes hard to implement), and they have been explained better elsewhere. To that end, I highly recommend Michael Nielsen's online book "Neural Networks and Deep Learning."
★ ★ ★ ★ ☆
j r randle
A nice introduction to data analysis through statistics. As data analytics matures, much of the material presented here will be rolled into new stat packages. As the book points out, your new algorithm will not be publishable unless it beats the existing ones. However, an instructor of established knowledge will have employment for years to come, teaching the rest of the world how to see into the nebulous future.
★ ☆ ☆ ☆ ☆
greg merideth
I bought this book hoping for a discussion about machine learning and hoping to explore the idea of this master algorithm but I just couldn't make it through the first few chapters. I skipped the last part of chapters 1 and 2 because they were just droning on with repetitive, semi-accurate information that I can only assume would have either confused or misled the average non-IT trained listener. The author makes a lot of claims and connections that are just flat out inaccurate and has a tendency to make the same or similar points over and over. As a writer, I can tell when someone is just trying to do whatever they can to fill pages and this book has all the traits. Rambling, nearly incoherent lists came up so often that I started fast forwarding through them.

I'm not sure what I expected from this book, but decent writing and accuracy I'm sure were on the list and it missed the mark on both by a long shot.

If you are familiar with computers at all, this will likely be WAY too low level for you. If you are not an IT person, just remember to take everything said in this book with a grain of salt because some of it is pretty wild speculation.
★ ★ ★ ★ ☆
yulianus xu
Artificial Intelligence has gradually become a part of our routine discourse. That is because we experience it, in varying degrees, even in our humdrum lives. We read in the news that driverless cars are already running on our roads. When we sign on to Netflix, we see movie recommendations for us. Often, they turn out to be better than what our friends and relatives suggest. Websites help us save more money on airline tickets than before. Google predicts flu outbreaks before the CDC is able to do so. All this is due to Artificial Intelligence, or AI for short. AI is a vast field and has been an area of research since the 1950s. This book is about one branch of AI which is hot today, called Machine Learning.

Historically, we have written specific computer programs to solve specific problems using the computer. Each such program is in effect an algorithm to deal with that problem. Machine learning diverges from this traditional approach. It provides computers with the ability to learn. A specific problem is solved without being explicitly programmed for it. Machine learning consists of programs that can teach themselves to grow and change when exposed to more and more data. This book goes one step further. It discusses the holy grail of Machine Learning that researchers are working on, called the Master Algorithm. Right in the first chapter, the author defines this holy grail as follows:

"All knowledge - present, past and future - can be derived by a single, universal learning algorithm".

Now, this might seem ambitious to most of us, including scientists in the field. Some might even slam it as typical of the 'techno-hubris' of Silicon Valley. But the author makes a case that a master algorithm is a feature of nature itself, pointing to similar examples in neuroscience and evolution. The book captures the unwritten rule through which machine learning works:

"we must induce the most widely applicable rules we can and reduce their scope only when data forces us to do so".

There are five approaches to Machine Learning today. They are classified as the Symbolists, the Connectionists, the Evolutionaries, the Bayesians and the Analogizers. Each of them is explained briefly as follows:

Symbolists believe all intelligence can be reduced to manipulating symbols just as we do in Mathematics. In simple terms, Symbolists start with some existing premises and conclusions. Then, they work backward to fill in the gaps in knowledge by analyzing existing data sets. This is called inverse deduction.
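
To make the symbolist idea concrete, here is a toy sketch of inverse deduction in Python (my own illustration, not from the book): given a known fact and an observed conclusion, propose the general rule that fills the gap between them.

```python
# Toy inverse deduction: induce "if X then Y" rules that would explain
# each observation, given the facts we already know. The entities and
# predicates here are invented for illustration.

facts = {("Socrates", "is_human")}
observations = {("Socrates", "is_mortal")}

def induce_rules(facts, observations):
    """Propose general rules (premise -> conclusion) explaining the observations."""
    rules = set()
    for entity, conclusion in observations:
        for fact_entity, premise in facts:
            if fact_entity == entity:
                # Generalize from the single case: whatever satisfies the
                # premise is assumed to satisfy the conclusion.
                rules.add((premise, conclusion))
    return rules

print(induce_rules(facts, observations))  # {('is_human', 'is_mortal')}
```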

Connectionists believe that we must reverse engineer the brain to achieve machine learning. This approach is called 'deep learning'. It consists of creating 'digital neurons' in the computer and connecting them in a neural network model. Google uses this approach in machine translation. It is also used in image processing.
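
As a rough illustration of the connectionist approach (my own sketch, not the book's), here is a single sigmoid 'digital neuron' trained by gradient descent, which is the one-neuron special case of backpropagation:

```python
import math, random

def sigmoid(z):
    """The S-shaped logistic activation used by classic neural networks."""
    return 1.0 / (1.0 + math.exp(-z))

# Tiny training set: learn the OR function from two binary inputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = random.uniform(-1, 1), random.uniform(-1, 1), 0.0
lr = 1.0  # learning rate

for epoch in range(2000):
    for (x1, x2), target in data:
        y = sigmoid(w1 * x1 + w2 * x2 + b)
        # Gradient of the squared error, chained through the sigmoid:
        grad = (y - target) * y * (1 - y)
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b -= lr * grad

print([round(sigmoid(w1 * x1 + w2 * x2 + b), 2) for (x1, x2), _ in data])
# Approaches [0, 1, 1, 1] as training proceeds.
```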

Evolutionaries believe in learning through genetic programming. This method mates and evolves computer programs the way nature evolves species. It applies ideas similar to those of genomes and DNA in the evolutionary process to data structures. Genetic programming works by quantifying the performance of candidate programs, which determines their retention and propagation in an evolutionary way.
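
A minimal way to see the evolutionary idea in action (my own sketch; a genetic algorithm, which is a simpler cousin of the genetic programming this tribe actually champions) is to evolve bit strings toward a target by selection, crossover and mutation:

```python
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # arbitrary "ideal genome"

def fitness(genome):
    # Count positions that match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(mom, dad):
    cut = random.randrange(1, len(TARGET))
    return mom[:cut] + dad[cut:]  # child is part mom, part dad

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)  # survival of the fittest
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(30)]

best = max(population, key=fitness)
print(generation, best, fitness(best))
```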

Bayesians believe in using Bayes' theorem and its derivatives to learn through probabilistic inference. This method proposes a few hypotheses, then assigns probabilities to the likely outcomes of applying them. As more and more data is applied to test these hypotheses, some of them become more likely than others. In this way, we arrive at the most likely solution to the problem. This is used in spam filters on email.
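
Here is a back-of-the-envelope naive Bayes spam score to make the Bayesian recipe concrete; the word probabilities and the prior are invented for illustration, not taken from the book.

```python
# P(word | spam) and P(word | ham), as if estimated from labelled mail.
p_word_given_spam = {"viagra": 0.30, "free": 0.40, "meeting": 0.01}
p_word_given_ham = {"viagra": 0.001, "free": 0.05, "meeting": 0.20}
p_spam = 0.5  # prior probability that any message is spam

def spam_probability(words):
    """Posterior P(spam | words), assuming words are independent (the 'naive' part)."""
    spam_score, ham_score = p_spam, 1.0 - p_spam
    for w in words:
        spam_score *= p_word_given_spam.get(w, 0.1)  # 0.1 for unseen words
        ham_score *= p_word_given_ham.get(w, 0.1)
    return spam_score / (spam_score + ham_score)

print(round(spam_probability(["free", "viagra"]), 3))  # high: likely spam
print(round(spam_probability(["meeting"]), 3))         # low: likely ham
```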

Analogizers believe the key to learning is recognizing similarities between situations. This principle is called the nearest-neighbour principle. One important aspect of analogy is that it lets you learn from little data; analogizers can learn from even just one example - you are what you resemble. Charles Darwin used Malthus' theory of the struggle for survival in society to formulate the biological theory of evolution. Niels Bohr used the solar-system model to formulate his theory of the atom. A website like Netflix uses it in the recommendations it gives to customers. For example, let us say that someone whose profile is similar to yours liked the movie 'Avatar'. Then it is likely that you will like Avatar as well, if you have not seen it already. So, a recommendation of Avatar is made for you.
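
To make the nearest-neighbour idea concrete, here is a toy recommender in the spirit of the Avatar example (my own sketch; the users, movies and ratings are made up):

```python
# Each user's movie ratings; None means "not seen yet".
ratings = {
    "you":   {"Avatar": None, "Alien": 5, "Titanic": 4},
    "alice": {"Avatar": 5,    "Alien": 5, "Titanic": 4},
    "bob":   {"Avatar": 1,    "Alien": 2, "Titanic": 5},
}

def similarity(a, b):
    """Negative total rating difference on movies both users have rated."""
    common = [m for m in ratings[a]
              if ratings[a][m] is not None and ratings[b].get(m) is not None]
    return -sum(abs(ratings[a][m] - ratings[b][m]) for m in common)

# Find the neighbour most similar to "you" and recommend what they liked.
nearest = max((u for u in ratings if u != "you"),
              key=lambda u: similarity("you", u))
unseen = [m for m, r in ratings["you"].items() if r is None]
print([m for m in unseen if ratings[nearest].get(m, 0) >= 4])  # ['Avatar']
```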

However, all these approaches are not mutually exclusive. Machine learning uses a selective combination of them, depending on the problem. For example, in medicine the Evolutionary approach is not preferred: the way nature learns is through making mistakes - a luxury we don't have in our societies. Doctors must diagnose in a foolproof way. Failing to find a tumor that is really there is potentially much worse than inferring one that isn't. Medicine needs optimal decisions, and this is where the Bayesian approach is more useful. Symbolists held sway in the first few decades of AI and cognitive psychology. Then Connectionists held sway in the 1980s and 90s, and it is the Bayesians' time in the sun now.

The author is upbeat about the future transformations that Machine Learning will bring to our societies. He suggests that in the not too distant future, we will all have a 'digital half' which will do our bidding all 24 hours of the day. They will interact with the digital halves of every person and organization we deal with. Tomorrow's cyberspace will be a giant parallel world. It will be like a new, global subconscious, the collective id of the human race.

My only caveat about the book is its prognosis for the future. There is no doubt that the future will be transformed in many ways by Machine Learning, and scientists have been mostly right in predicting future technological inventions. On the other hand, scientists from Thomas Malthus to Paul Ehrlich to the Club of Rome have made dire predictions about the future of mankind and disaster for thickly populated countries like India and China, and they were woefully wrong. Not all scientific advances have a smooth path to their full potential. The expected potential of nuclear energy and genetically modified crops has not materialized because of resistance in our societies; there is also resistance to stem cells and the possibility of human cloning. Machine learning is in its infancy now. As it develops and impacts our lives more and more, the technology is bound to make mistakes in ways that affect our lives negatively. If some of them happen to be of the dimension of the Chernobyl accident in nuclear energy, then there may be a big backlash against the advance of AI too. The other related danger is relying on vast quantities of data about a phenomenon which leave out key attributes needed to represent the phenomenon correctly, but using them anyway to make crucial decisions. One example of such misuse of data happened in the 1960s. Robert McNamara, Secretary of Defense in the Johnson administration, went about escalating the Vietnam war by relying on the 'death count of US and Vietcong soldiers' as a key data indicator of how the war was progressing. The data did not capture the will and determination of the Vietnamese communists to rid themselves of foreign occupation of their land. We all know the consequences of this key error.

I enjoyed reading the book as it was educational for me and I learnt much from it. The chapters elaborating the five approaches to Machine Learning need some focus and concentration to fully understand. Readers who find these chapters a bit hard can still read just the first 60 pages to get a very good idea of the subject. All in all, it is a great effort to present this exciting subject in a simple way.
★ ★ ★ ☆ ☆
jillian karger
Pedro Domingos starts his popular book on machine learning with a breathless tour of everyday life in middle-class America. As you drive to work, check your stocks, book a flight, do a little shopping and use the Internet, you're continually interacting with (and being surveilled by) AI learning systems. They can do so much, yet as the store's recommendation system continually reminds us, they also fall so short.

There are five main schools of thought in today's machine learning community:

- the symbolists have inverse deduction (I think this is abduction)
- the connectionists have backpropagation
- the evolutionaries have genetic programming
- the Bayesians have probabilistic inference
- the analogizers have support vector machines.

Pedro Domingos believes that what we really need is a single algorithm combining the key features of all of them. In the final part of his book he describes Markov logic networks, his own best candidate for the way forward.

If I had to infer Pedro Domingos' personality type from reading his book (personality inference, he thinks, will become commonplace in future automated systems) I would guess ENTP. The book is optimistic, provocative, full of ideas, clever ... and sprawling, lacking conceptual coherence and depth.

The experience of reading it is that of empty calories. Popular science books avoid the complex abstractions of the real science in favour of familiar (but underpowered and ultimately misleading) analogies. Domingos is not afraid to use technical labels - Bayesian inference, hidden Markov models - but he carefully avoids any technical depth in his explanations. Ultimately the reader who has not previously studied a topic will be no wiser, drowning in cotton wool.

What is this machine learning all about? How can we understand it in a broader scientific framework? I was frustrated by being no clearer at the end of this book than at the beginning. Algorithms are not the stuff of science or explanation - they are realisations of relationships between inputs and outputs which need to be anchored in an underlying theory. The neuroscientist David Marr, namechecked in the book, was particularly insistent on separating these levels of description. Yet a theoretical framework for machine learning is conspicuously absent in Pedro Domingos' account; you never see the wood for the trees.

I was glad I read it, if only for the identification of the 'five tribes' mentioned above and his hand-wavy overview of their approaches. But given his intended audience, the book would have been much improved if the author had simply included five 'Scientific American'-level appendices explaining at a semi-technical level how the five approaches actually work, including some math.
★ ★ ☆ ☆ ☆
korie brown
The Master Algorithm is almost a contrived book. The problem is not as much its light and quick treatment of highly intricate concepts as the sheer purpose - the quest for some random GUT-like Master Algorithm without ever clearly defining what it would really achieve.

The heuristic sciences of Big Data, Machine Learning or AI are difficult to theorise or formalise at the best of times. As knowledge in the field builds, there is a critical need to pass on the methods and processes of whatever is achieved, and this is incredibly difficult as it is. The author's somewhat unbelievably grandiose purpose makes it all even more difficult in the book, despite good information.

To explain what goes on in the book, let's create an analogy: say the foragers of a few millennia back suddenly start harnessing plants, nurturing them and beginning to farm. While they learn the new agrarian ways, a theoretician begins classifying the tens of ways in which different individuals are "farming" so that the techniques learned by one are passed along. But imagine if the purpose of theorising turns into a quest for the Master Farming - a singular way that allows one to plant rice, grapes, sugarcane, berries and absolutely everything else, present and future. This is what the author seems to attempt here: perhaps the same algorithm that would power Google Search, Facebook, the store and much more, not just now but forever!

The practical science that is at the heart of today's e-commerce, social network, search and similar firms is impossible to justify using theoretical concepts. The methods and their details are extremely difficult to explain, despite their practical successes, simply because of the assumptions and the intuitive leaps of faith made in their constructs. Their validity is nothing but the end result. If the end result is good, the underlying logic appears obvious (as with PageRank), even if it is still unprovable.

These are the problems one faces while handling data and finding correlations without a priori hypotheses. The number of variables (combinatorial) can far exceed the number of data points. Sometimes the sample sizes are way too small but one is still forced to use them to reach conclusions; at other times they are too large to be handled by the computers we have. Parameterization is another issue (how do you quantify a nose, a likeness or a map?). Pattern recognition, linguistic aspects of chats and speech, and real-life complexities in activities like driving also involve unconventional methods of conversion into mathematical symbols before any intelligent analysis.

The author tries to define what goes on under the hood using various categories. These categories are likely to be far less distinct in the minds of most readers than they are in the author's. The author makes it seem as if the followers of any one category are on a war path against those of the others - when real-life practitioners are almost all likely to be using all the methods all the time without ever realising that there are any boundaries between them.

There are some great theoretical insights, like inverse deduction, which is at the heart of conclusion-based theories: for example, from the observations that Socrates is mortal and Socrates is human, induce that all humans are mortal - until you find one who isn't.

There are particularly good discussions of the S-curve in neural networks. This close relative of the Gaussian bell curve (an S-curve is what you get by integrating a bell-shaped curve) is everywhere, as the author conveys convincingly. So many of the real-life correlations that we deem linear are really S-curves, since nothing can stretch to infinity. The use of this curve, and of the probabilistic ways in which neurones affect nearby neurones' states, in real-life pattern recognition are examples of other good concepts covered by the author.
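
For reference, the S-curve in question is the logistic function; differentiating it yields the bell-shaped curve alluded to above (similar in shape to, though not identical with, a Gaussian):

```latex
\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad
\sigma'(x) = \sigma(x)\,\bigl(1 - \sigma(x)\bigr)
```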

As discussed, all these details are marred by the purpose. Some of the definitive conclusions stated at the end are even more bizarre. For example, the author appears too indifferent and quick when drawing conclusions on the impact of AI on humanity in the workplace. He is convinced, and happy, that not ALL jobs will disappear, while not shying away from the idea that most will. The author shows no social empathy as he describes how a world run by machines will be good for society because most of us will be able to go on living on a generous government dole.

In the same way, the author is quite insensitive on subjects like machines, or machine-based evolution, overtaking humanity, dismissing the worry on the simple premise that humans have free will (fools, apparently, were the thinkers who pondered the issue over the millennia!).

A great subject but somewhat wasted.
★ ★ ★ ★ ☆
geycen
Of all the books about the field of machine learning, The Master Algorithm is a good introduction. The book's first goal is to let you in on the secrets of machine learning, and this goal is more or less accomplished. Although the book is aimed at non-technical readers, it does not quite read that way: some chapters require at least a minimal background in machine learning, and others go deep enough that it is easy to get lost. The book covers what are known as the five tribes of machine learning, or rival schools of thought within machine learning: the symbolists' master algorithm is inverse deduction, the connectionists' is backpropagation, the evolutionaries' is genetic programming, the Bayesians' is Bayesian inference, and the analogizers' is the support vector machine.

Most of the book explains the characteristics and functionality, pros and cons, of each of the algorithm tribes. It does a good job explaining the details, but I got lost in some sections because they were too deep for me. The second goal of the book is to introduce the Master Algorithm, which is in a nutshell a clever integration of the existing machine learning algorithms (as a matter of fact, it is Domingos' own algorithm: Alchemy). The last chapter is a good dissertation on a future world immersed in artificial intelligence devices and technologies everywhere (even within our bodies, as health devices or augmentation devices).

The author discusses many topics ranging from human learning to philosophy (the author says Descartes, Spinoza, and Leibniz were the leading rationalists; Locke, Berkeley, and Hume were their empiricist counterparts), neuroscience, neural networks, big data, linguistics, psychology, cognitive science, and so on. As you can see, it touches the machine-learning field from many angles and perspectives, producing very interesting insights.

I give it 4 because, although it is a good book on the field and some parts are well written for a non-expert, there are many other parts which are bewildering and can make you lose the pace of reading.
★ ★ ★ ★ ★
devin lindsay
We know that computers are powerful, but we may not know the full power of deep learning or artificial intelligence. It is not just coming in the future, it is here now, and advancing daily. Pedro Domingos explains the 5 main tribes that make up machine learning today. I thought he would stick to IT, but I was wrong. It is not a dry subject, as it is already active in our wider business world today. We use many of these AI algos daily without knowing them by name.

Take your email spam filter. It may seem basic today, but it is based on machine learning. One of these AI tribes believes in knowledge composition. They treat every email as possible spam and use rules to decide. Is Viagra in the title? Then probably spam. Is FREE in the title? Then probably spam. Is a close friend's name in the message? Then maybe not. It is all about probabilities. These helpful probability filters are what make email worthwhile. Your email spam filter is using AI right now. It is already helping you to focus on what is important for you. It is here and operating in your business right now.

AI is all around us and programmers are trying to figure it out. Your human brain functions a certain way. Why not figure out how, exactly? Why not reverse engineer it as a process? And why stop there? Humans have brains, but all animals have evolved, and the planet earth evolved, so why not try to better understand all of evolution? Would that not be a bigger, better picture to figure out? This is just one of the many questions AI specialists struggle with when they choose what, exactly, to focus on.

The Top 5 Takeaways from this book that impact any reader are based on the 5 main tribes within AI. These are just general overviews. Much deeper details are covered in the book. It is explained very well.

1) Symbolists: Try to focus on the problem of knowledge composition. They figure it out with inverse deduction. If 2+3=5, then what is 5-2? By deducing from similar data, you can figure out 3 as the answer. They focus on gaps in knowledge, and theirs is the most scientific approach. The surprise is when algos figure things out without a human. A robot called Eve discovered a new malaria drug by itself.

2) Connectionists: Try to focus on the problem of credit assignment. They figure it out with backpropagation. They focus on a more human, less logical world: neural networks that discover new knowledge. When your brain learns, a synapse strengthens between neurons. A network with a billion connections, fed inputs from cat videos on YouTube, was the first algo to recognize the cats in that content on its own.

3) Evolutionaries: Try to focus on the problem of structure discovery. They figure it out with genetic programming. They focus on genetic coding, the genome. The best algos replicate to create child algos, each made half from one parent and half from the other. New electronic designs have been discovered this way that could not have been made by humans alone; in fact, some of these patented designs would never have been created by a human.

4) Bayesians: Try to focus on the problem of uncertainty. They figure it out with probabilistic inference, working from comparisons. If type A people tend to like X and tend not to like Y, then knowing that a person is type A lets you predict that they will probably like X and probably not Y. Repeat with millions of cases and the pattern comes out clearly. Spam filters come from this.

5) Analogizers: Try to focus on the problem of similarity. They figure it out with kernel machines (support vector machines), learning from similar examples. Recommendation systems in e-commerce work this way: when you buy something, you are also asked to buy other things that people with similar tastes buy.

There are many amazing stories about how AI is changing our lives. Besides spam filters, another widely used concept is the recommendation engine. It may be the most financially successful use of algos yet. When the store suggests you buy another book, based on what other similar readers buy, you are using that algo. This is responsible for 33% of the store's revenue! Considering the total, that is a big number. Netflix also uses one, and it accounts for 75% of gross revenues. Again, this is a very large number, and the engine is actively used by millions of customers. There are many ways to understand our brain and the world around us. AI is just the starting point that helps us figure out how, exactly, that process can be better understood. Highly Recommended!

★ ★ ★ ★ ★
shyamoli de
Prof. Domingos is very well known in the field of machine learning. I first came across his name while reading an academic paper on machine learning. Following the trail led me to the book. His writing style is very approachable, especially for a subject as weighty as artificial intelligence and machine learning. The book surveys the current landscape of machine learning, tying it in with the rich history of artificial intelligence, from where many of the ideas originated. While the reader does not need a mathematical background to appreciate the material (the book is written to be understood by the layman), I found that it is helpful to have that background nonetheless. Prof. Domingos distills machine learning into five tribes, each wandering a mythical landscape replete with its own gods, theories, shamans, and superstitions. Sometimes some theories, superstitions and shamans spill over to the adjacent tribe, but by and large, each tribe is self-contained and self-reliant. Prof. Domingos' thesis is that if portions of each tribe's magic are shared with the others, the result could be a master algorithm that will portend the beginning of the singularity (to add yet another fascinating author, Ray Kurzweil, to this mix). The master algorithm will teach itself: give it some physics papers and it will discover all the laws of gravity in the universe; provide it with some music notes and it will create music that surpasses Bach, Beethoven and Ravi Shankar; show it a medical journal and it will make hitherto unknown diagnoses. You get the idea. The question to me is: if we actually get to that point, will it be worth it? If everything that is to be known is known, and everything to be discovered is discovered, and everything to be written is written, then what? No serendipitous wanderings through arts, literature, mathematics, chemistry and music? No "wow" moments? Life is the cumulative sum of the events that we experience; some events are breathtakingly majestic while others can be stunningly despondent. But nonetheless, these events make us who we are. If we relegate the greater machination of life to machines, however sentient they may be (and I do believe the singularity will happen), then what happens to us humans? Weighty questions. An excellent book that I would urge any technologist to read. (February 2017)
★ ★ ★ ★ ★
liesl gibson
Pedro Domingos' The Master Algorithm - How the Quest for the Ultimate Learning Machine Will Remake Our World is an interesting and thought-provoking book about the state of machine learning, data science, and artificial intelligence. Immersed in the intricacies of a particular domain, even seasoned practitioners of machine learning like myself often don't see the big picture. This book is a perfect guide for us to get a refresher in a field which is taking the world by storm.

Categorizing, classifying and clearly representing the ideas around any rapidly developing and evolving field is a hard job. Machine learning, with its multi-faceted approaches and ubiquitous implementations, is an especially challenging topic. To write about it in a comprehensive yet easily understandable (aka non-jargon-ridden, non-hand-waving) way is definitely a great accomplishment. One thing I really enjoyed about this writing is how well the ML taxonomy and classification works; even for people who have been in the industry for a while, it is hard to create such meaningful distinctions and clusters around ideas.

“Each of the five tribes of machine learning has its own master algorithm, a general-purpose learner that you can in principle use to discover knowledge from data in any domain. The symbolists’ master algorithm is inverse deduction, the connectionists’ is backpropagation, the evolutionaries’ is genetic programming, the Bayesians’ is Bayesian inference, and the analogizers’ is the support vector machine. In practice, however, each of these algorithms is good for some things but not others. What we really want is a single algorithm combining the key features of all of them: the ultimate master algorithm. For some this is an unattainable dream, but for many of us in machine learning, it’s what puts a twinkle in our eye and keeps us working late into the night.”

Starting with the question of whether you are a rationalist or an empiricist, and extending that analogy to the five tribes of machine learning, the author also challenges the notion of "intelligence" in a very direct manner. He argues that the skeptical knowledge engineer's dogma that AI cannot "beat" humans rests on an 'archaic' Minsky/Chomsky school of thought, and that the variants of the "poverty of the stimulus" argument are irrelevant for all practical intents and purposes; the outstanding success of deep learning is proof to the contrary. The author answers most of the 'usual' argumentum ad logicam in chapter 2, where his position, to paraphrase, is that the proof is in the pudding: from autonomous vehicles to sentiment analysis, machine learning / statistical learners work, and hand-engineered expert systems built with human experts don't scale:

"...learning-based methods have swept the field, to the point where it’s hard to find a paper devoid of learning. Statistical parsers analyze language with accuracy close to that of humans, where hand-coded ones lagged far behind. Machine translation, spelling correction, part-of-speech tagging, word sense disambiguation, question answering, dialogue, summarization: the best systems in these areas all use learning. Watson, the Jeopardy! computer champion, would not have been possible without it."

The book further elaborates on what the author knows intuitively (pun intended) to be a frequently heard objection:

...“Data can’t replace human intuition.” In fact, it’s the other way around: human intuition can’t replace data. Intuition is what you use when you don’t know the facts, and since you often don’t, intuition is precious. But when the evidence is before you, why would you deny it? Statistical analysis beats talent scouts in baseball (as Michael Lewis memorably documented in Moneyball), it beats connoisseurs at tasting, and every day we see new examples of what it can do. Because of the influx of data, the boundary between evidence and intuition is shifting rapidly, and as with any revolution, entrenched ways have to be overcome. If I’m the expert on X at company Y, I don’t like to be overridden by some guy with data. There’s a saying in industry: “Listen to your customers, not to the HiPPO,” HiPPO being short for “highest paid person’s opinion.” If you want to be tomorrow’s authority, ride the data, don’t fight it.

and of course the eureka! argument doesn't escape his criticism

"And some may say, machine learning can find statistical regularities in data, but it will never discover anything deep, like Newton’s laws. It arguably hasn’t yet, but I bet it will. Stories of falling apples notwithstanding, deep scientific truths are “not low-hanging fruit. Science goes through three phases, which we can call the Brahe, Kepler, and Newton phases. In the Brahe phase, we gather lots of data, like Tycho Brahe patiently recording the positions of the planets night after night, year after year. In the Kepler phase, we fit empirical laws to the data, like Kepler did to the planets’ motions. In the Newton phase, we discover the deeper truths. Most science consists of Brahe- and Kepler-like work; Newton moments are rare. Today, big data does the work of billions of Brahes, and machine learning the work of millions of Keplers. If—let’s hope so—there are more Newton moments to be had, they are as likely to come from tomorrow’s learning algorithms as from tomorrow’s even more overwhelmed scientists, or at least from a combination of the two."

Whether you agree with the author's point of view or not, this is one of the best "big picture" readings on the state of machine learning and AI, and it will help you understand how things may (or may not) shape up in the next computing revolution.
★ ★ ★ ☆ ☆
marwa ayad
Whew!!! I soldiered through the book and actually learned a lot about machine learning and the notion of a "Master Algorithm." An incredible compendium of information on machine learning with insightful, thoughtful tie-backs to where ML shows up in life today and will show up tomorrow. But I would say that this was more of a doctoral thesis than an accessible stroll across this very interesting and popular topic. I listened to it on Audible and the density of some of the subject matter was mind-numbing. One can only rewind a point so many times before throwing one's hands up and saying "I'm an idiot...I just don't get it!"
★ ★ ★ ★ ★
windie
This book is easily misjudged. It is not an introductory text in data science since beginners will not be able to follow the author’s reasoning. Further, the book is also not for the faint hearted or the weak minded in data science. However, savvy professionals and open-minded researchers should take note. There are many provocative ideas that could be foundational to future machine learning systems.

The book is written in a casual, eclectic style, drawing examples from diverse sources. Be prepared for a delightful romp through the theory and folklore of analytics, accompanied by a disorienting rearrangement of what you know about data analysis. If you persist to the book's end, you will sense much about the personality of the author.

The focus is not about analytics per se, but big learning that uses big data for big problems. Domingos challenges all of us to a lofty goal – create the ultimate learning algorithm that will deduce the mechanism for an apple falling from a tree and enable your future car to do all the driving for you. This challenge is much more than generalizing beyond known data or incrementally refining results of that generalization. It is about combining different learning approaches to discover optimal solution procedures. The difference is like creating an algorithm that catches a fish versus an algorithm that learns how to catch fish. The aha chapter for me is Learning Without a Teacher, which discusses research about how infants learn the basics of physical reality in their first few weeks. So, create an algorithm that will learn how to move your arms to touch your nose.

Domingos asserts that approaches to learning algorithms have evolved (somewhat independently) from five tribes (cultures or flavors) of analytics – evolutionaries (genetic algorithms), connectionists (neural networks), symbolists (decision trees), Bayesians (naïve Bayes), and analogizers (support vector machines). In general, this distinction makes sense, although the boundaries are often murky. Half the book (chapters 3-7) describes the five tribes, which is nicely summarized in his ACM webinar in November 2015.

Spoiler Alert! Chapter Nine discusses how all the parts of the puzzle come together. Actually not! The chapter is more of a tease to get us to think about creating this master algorithm. The analogy of a city with sectors for each tribe is a glorious attempt at visualizing the workings of this algorithm. Someone must do a YouTube video! Also, Domingos makes a plea for his research on Markov Logic Networks as the unifying concept and mechanism for bringing all the tribes together.

It would be fascinating to use this book for a doctoral-level seminar on data science. Assign the students randomly to one of the five tribes. Then let them argue and demonstrate how their tribe would solve specific problems. Domingos' thesis would be substantiated if, after frustrating weeks, the tribes started to collaborate. Students may require emotional assistance toward the end of the seminar.
★ ★ ☆ ☆ ☆
alohi rieger
I was really looking forward to this book, but was very disappointed. The author glosses over many details which seem crucial to understanding what he's talking about, and often introduces new terminology without explanation. Many of the explanations that are provided are ambiguous and the language in general is imprecise. Having studied some of these algorithms and approaches before, I felt there were many instances where concepts could have been restated to make them much clearer. There are also several digressions; space that could have been better spent explicating some of the ideas.

It's unclear who the intended audience for the book is. I got the impression that it was meant to be accessible to a wider audience, but I can't imagine getting through this without having some prior familiarity with the topics.

That isn't to say there isn't any value to the book - I found a few insightful explanations scattered throughout (a few bits in the section on neural networks and on Bayesian inference). But again, I've studied some of this before so I'm not sure how much a layperson will get out of it.
★ ★ ★ ★ ☆
jasmine
One cannot help but admire this work. Domingos does a brilliant exposition of the five major approaches to machine learning (e.g., connectionism, symbolic AI, Bayesian probability...). Their relations and interrelations are shown, the transformations by which one is taken to the other, their historical context of development, achievements, strengths and weaknesses. All this is done clearly, with good examples and with great effort at exposition. To boot, we are treated to a creative, insightful mind that has done its best to integrate these within a new, encompassing framework, harnessing the power of all five. This creative integration is aimed at Domingos’ very high goal where, “all knowledge – past, present, and future – can be derived from data by a single, universal learning algorithm.” In other words, Domingos clearly believes that his ultimate machine learning algorithm (or an improvement thereof) will capture the entire power of human thought. I greatly recommend this as an enjoyable, very accessible (though one may still need some degree of familiarity with the subject) tour de force of the history, current state, and (perceived) future goal of machine learning.

For readers interested in a view of the work’s weak spot, I offer this: As one reads, the focus of the vast machine learning field, with all its triumphs, emerges, shall we say, as being entirely on the static. It is on classifying, categorizing – recognizing toys versus food items, recognizing objects, or a good credit risk, or a winning football team or a valuable player, recognizing a syllable to predict the next syllable or predict an entire word, diagnosing a symptom set as the correct type of illness, recognizing the border of a road, element by element, to guide a vehicle.... The book has one, just one, use of the word “consciousness,” this being: “…the end result of this phenomenally complex pattern of neural firings is consciousness,” i.e., consciousness is pretty much just epiphenomenal, just going along for the ride upon (or “emerging” from) the firings of the neural mass. In this, the book simply reflects the deep thesis running throughout the entirety of AI and cognitive science, namely that in all these (five) algorithmic approaches, there is no role for consciousness, nor do these disciplines have a clue why consciousness should have any role in cognition or thought. In short, consciousness is completely unneeded in these models. This, one might think, might be a flashing light that something is very wrong, but, for Domingos, along with the rest of his field, it raises not a bit of concern.

Consciousness is integrally tied to the flow of time, therefore to the perceived continuity of our experience – buzzing flies, stirring spoons, waving curtains - and ultimately then to the continuous transformations of thought over which invariants (or laws of invariance) emerge. When Penrose (Shadows of the Mind), as an example of non-computational thought, described envisioning successive foldings of ever larger hexagonal forms (of increasing hexagonal numbers) into three-sided partial cubes, stacking each larger three-sided figure over the previous to make – always – a complete cube (therefore a proof of a computation of cubical numbers that does not halt), he was in this transformational realm where invariance is preserved across the transformation. The works of Piaget, the great theorist of children’s cognitive development, are chock-full of the gradual stages of the development in ability (requiring several years) to perceive these invariants over transformations in numerous tasks, from grasping the change in order of three colored beads (say, red, green, blue) when moved into and emerging from a tunnel after the tunnel is semi-rotated (180 degrees) n times, to reconstructing the sequence of the heights of water in two flasks as the water empties from a flask of one form (on top) into a flask of another form (on the bottom). It is a developmental trajectory that the brain, as a self-organizing dynamic system, must follow over several years in achieving these abilities. Connectionists have claimed to model at least one of these Piagetian tasks (balancing varying weights on each side of a small teeter totter-like balance beam at varying lengths from center), but the models have been heavily critiqued – they show no actual achievement of the invariance law involved (weight x length = weight x length), nor in their evolution (of connection weight changes) do they resemble in any way the transitional stages children pass through to achieve this ability.

In general one will not see this form of cognition addressed by any of the five approaches, nor by Domingos, nor his master algorithm – it is not considered a problem, not even a phenomenon to be handled. But this form of cognition requires consciousness, for consciousness intrinsically implies continuity over time – consciousness must span at least two “instants” of the transformations of the universal material field – of falling leaves, rotating cubes. At its base, however, absolutely unquestioned, unexamined, AI works in the classic metaphysic of space and time, i.e., a vast abstraction. This means AI assumes an abstract time – in reality a form of abstract space – where time consists of a series of discrete instants, each instant separate, distinct from the next. These instants correspond to “states” in these algorithmic models. This abstraction cannot handle invariance existing only over continuous flux, that is, invariants that do not exist in a static "instant," that do not exist without the flow as well. IMO, Dr. Domingos and his cohorts in the machine learning field would do well to go back to Piaget (he is virtually totally ignored), I mean into deep, considered study – to “The Child’s Conception of Movement and Speed,” “The Child’s Conception of Time,” “The Child’s Conception of Space,” “The Construction of Reality in the Child,” and more. Add in Gestalt psychologist, Max Wertheimer’s “Productive Thinking” (1954) – the same form of thought more advanced. Consider deeply whether their models can truly address what is being described, whether Domingos’ hypothetical robot baby, Robby – master algorithm and all – could truly develop and achieve what Piaget’s children do, for without this, to me, the Master Algorithm, even at its best, will be far from achieving the true nature and power of human thought.
★ ★ ★ ★ ★
kruti
This is a book about machine intelligence. It goes over some of the basic ideas in the field. It proposes that efforts be made to combine them into a "Master Algorithm" - a silver bullet of machine intelligence which will then go on to learn everything that it is possible to learn.

The book is quite readable. I felt that it had some significant flaws, though:

The whole idea of a "master algorithm" seems like a dodgy meme to me - since it implicitly suggests that we are looking for a single, finite algorithm. The question of whether simple-yet-powerful forms of machine intelligence exist was addressed in 2008 by Shane Legg in a paper titled "Is there an Elegant Universal Theory of Prediction?" Legg offers a simple constructive proof that, for any prediction algorithm, there exist sequences with similar Kolmogorov complexity to the prediction algorithm that the predictor can never learn to predict. Legg's conclusion is that universal predictors do not exist, and that successful general-purpose predictors of complex sequences are themselves necessarily highly complex. The search for simple-yet-powerful universal learning systems thus seems kind of futile - we have a proof that these do not exist. The author doesn't mention Legg's proof. Legg's idea suggests a picture of machine learning in which we are taking the first few steps on an endless path towards wisdom. The idea of a "master algorithm" represents this picture poorly - it is bad poetry.

My second-biggest issue with the book was its blasé attitude towards safety. The author writes:

"Relax. The chances that an AI equipped with the Master Algorithm will take over the world are zero. The reason is simple: unlike humans, computers don’t have a will of their own. They’re products of engineering, not evolution. Even an infinitely powerful computer would still be only an extension of our will and nothing to fear."

This is a naive and stupid argument, and there are quite a few more like it in the chapter about the impact of machine intelligence. It is understandable that some machine intelligence enthusiasts are likely to react negatively to claims that they risk being instrumental in wreaking havoc on the world. However, attempting to brush the problems under the carpet using feeble arguments is a problematical way to respond. Intelligent machines are unlikely to remain the tools of humans for very long - and anyone who says otherwise is selling something.
★ ★ ★ ★ ★
fatih serhat gerdan
I recently received a copy of The Master Algorithm, written by Pedro Domingos, to review. While I was reading the back cover, my first impression was skepticism. Indeed, Domingos' main idea in this book is the (future) existence of a so-called "Master Algorithm" which will outperform any other algorithm at any kind of task.

I can see you smiling and so did I when I started reading the book. Reaching the end, I have a different, let's say more enlightened, view of the concept of Master Algorithm. Although Domingos has several arguments on why such an algorithm should exist, I'm still not convinced, and I will tell you why at the end of this post.

Nevertheless, The Master Algorithm is a must read for several reasons. First, the book is a journey in the field of data science algorithms, grouping data scientists into "tribes":

- Symbolists (e.g. decision tree)
- Connectionists (e.g. neural networks)
- Evolutionaries (e.g. genetic algorithm)
- Bayesians (e.g. naive Bayes)
- Analogizers (e.g. SVM)

Each chapter describes the principles and history behind these tribes. This gives the reader a broad and comprehensive view of data mining approaches and differences between them. The chapter about combining all methods is full of metaphors and very well written. A vision of the future is provided in the last chapter.

The second reason why Domingos' book is a must read: its idea of a single algorithm to tune (beating all others) is interesting and well described. For Domingos, we will need to combine different learning paradigms to reach the Master Algorithm. According to Domingos, it is the role of the reader to discover it, the book being only a starting point. This is a very nice way of motivating non-experts to read the book. Finally, the book is provocative (at least if you don't believe in a unique algorithm to solve all problems).

Let me now explain why I don't believe that a Master Algorithm will soon replace the variety of existing ones. First, the very existence of a variety of algorithms is due to the various ways we can solve a given problem. As long as there is more than one person using Predictive Analytics, there will be more than one algorithm in use. People use different algorithms because they think differently and are better at solving a problem with the specific algorithms they know best.

Second, the many different applications of Predictive Analytics are way too heterogeneous to be solved by a single algorithm. Want to understand your model? Use linear regression. Need stability? Just try SVM. Limited to a low-memory device implementation? Rules obtained using decision trees may do the trick. In conclusion, the variety of algorithms in use is needed because of the different skills of the people using them, the various applications to solve, and deployment particularities, for example.

Whether you agree with Domingos or not, this book is a must have to learn machine learning without equations. It will help you get the big picture of the several learning paradigms. Finally, the provocative idea is not only intriguing, but also very well argued.
★ ★ ★ ★ ☆
berook
"A foolish consistency is the hobgoblin of little minds", a charge in the words of Emerson that cannot be raised against this book.This is a loving, if breathless romp through the vast interdisciplinary field known as Machine Learning.

How to do such a vast subject justice? Prof. Domingos uses a vast trove of knowledge, metaphor, humor, examples and most of all a genuine desire to share his excitement to expose his readers to this hybrid field of technical and scientific practice that will undoubtedly shape our collective futures. He loves what he does and wants to share the wonder and promise of what he describes as the most fascinating job on the planet.

I am yet to find a book on the topic that is as (relatively) accessible to the lay reader as this one. Sure he is biased, and that makes the book a refreshing change from sterile "view from nowhere" texts written by academics and technical researchers. He has made hard and IMHO sound judgement calls in interspersing good entertaining summaries with useful amounts of technical detail - which is the unavoidable heart of the topic itself. For example, his treatments of Bayesian reasoning, Support Vector Machines, etc. are a bit challenging to follow closely but understandable in a general sense, and certainly helpful for giving a taste (his claimed objective at the outset).

The author is a fast reading, big thinking, curious and ideologically promiscuous polyglot - his references span popular culture (Facebook, The Matrix, Online dating), fiction (Hitchhikers Guide, Lord of the Rings, Neal Stephenson, William Gibson), philosophy (Pascal, Hume, Aristotle), computer science (P/NP), popular science (Ray Kurzweil), math (Galois) and more. How can one not love a book that is such a mash up?

My one objection is that it was hard for me to reconcile the tone of the end of the book - that we are in the "Alchemy" phase of this new field - with the quite definitive views presented in the penultimate chapter on the likely "master equation" incorporated in the solution his research team is pursuing, MLN. He is very careful to generously caveat claims, but he didn't quite convince this reader that the approach his research group is pursuing is as immature as he suggests in the conclusion. This is a confusing way to end a story he has so masterfully assembled. Perhaps this will be the book's greatness? Perhaps it is the right-brain in me that was hoping for a crisper articulation of the big questions that the field has to confront; I think it would have helped many readers as a dream-map for the future. Major problems of cognitive science, theories of mind, cyborg theory, not to mention the hard problem of AI (consciousness), were not placed in their likely important stations on the road ahead.

He was able to speculate marvelously about societies and economies, so why not boldly list the top 10 questions the field will need to answer in the next decade or three? Professor Domingos - maybe a blog post soon?

The above are very minor issues in a thoroughly excellent book. In any case, I strongly recommend this book to anyone and everyone willing to be educated and entertained by one of the most fascinating areas of interdisciplinary learning around.
★ ★ ★ ★ ☆
carol melde
The Master Algorithm is the first full-length book written about machine learning for a general audience. In it, University of Washington Computer Science professor Pedro Domingos strikes a breezy tone. He's handy with illustrative metaphors and—with the exception of one instance of Bayes' rule slipped unobtrusively into the text—there's not an equation in sight. After an introduction describing the new technology this field has enabled, Domingos moves on to an account of what he sees as the five schools of machine learning: the evolutionaries, connectionists, symbolists, Bayesians, and analogizers. The main part of the book compares these approaches—their motivating ideas, tacit assumptions, and preferred models. (Genetic algorithms, neural networks, decision trees, graphical models, and SVMs, respectively.) Domingos then goes on to suggest that there may be a way to combine them into a single learning technique that balances their strengths and weaknesses, yielding the master algorithm of the title.

Spoiler alert: the master algorithm is Markov Logic Networks, the combination of first order predicate calculus and stochastic graphical models that Domingos himself pioneered. The final part of the book is a summary of the successes this approach has had so far and a brief on its future potential. Even if Domingos is perhaps not the most objective commentator on the strengths of MLNs, his deep insight into the various branches of machine learning and personal commitment to their synthesis makes him an excellent guide to the field.

A guide for whom is not entirely clear to me though. For the general reader, Domingos situates things historically by contrasting the philosophical schools of empiricism and rationalism and revisiting Hume's critique of inference. He provides lucid illustrations of the concepts of overfitting and the bias-variance tradeoff. However, the descriptions of his five schools are necessarily very abstract, and informed by an implicit ethnography that may only be meaningful to practitioners in the field. I personally was hoping for a book I could press into the hands of the less-technical executives at my machine-learning based technology company, but am reluctant to do so lest they get obsessed with sussing out whether or not I am a Bayesian.

That aside, The Master Algorithm is an illuminating overview for machine learning practitioners and an excellent introduction for a mathematically-inclined lay audience. Domingos should write more pop science. He has a knack.
★ ★ ★ ☆ ☆
suzanne
If you're into Markov Networks, then Domingos is your go-to guy!

Picked this up at the local library. Not a good book for the casual reader, but suitable as an introductory text for students in a computer science curriculum, especially those with a keen interest in AI and machine-based learning. This book could benefit from some judicious editing - more summarization, bullet lists, graphics, diagrams, etc. - without necessarily dumbing down the material. College professors should never be allowed to publish books without a co-author who's a professional writer.

On a more philosophical level, I believe the author's "quest for the ultimate learning machine" is deeply flawed. That's not meant to take anything away from the nice presentations of the five "tribes" of algorithms in use today, but algorithms are nothing more than glorified mathematical tricks engineered to do certain things in more efficient ways. I found Domingos' apparent awe and wonderment at algorithms to be borderline scientific mysticism, in much the same way the ancient Greek Pythagoreans took a simple mathematical concept and turned it into a pseudo-religion.

Algorithmic processes can accomplish many wonderful things, but learning is not, and never will be, one of them; they don't actually "learn" anything! Algorithms are computational processes: they're good at sorting, sifting, searching, tracking, classifying, networking, and forming relationships based on some set of data. That means the output of an algorithm is only as good as the dataset it's given to process; you need an adequate amount of high-quality data to work with in the first place. But human learning is often based on minimal or incomplete data and, sometimes, no data at all! We just try something for the hell of it to see what happens.

Computers are constrained by three factors: software, hardware, and the quality and amount of input. No computer can exceed its programming. Software is static once it's loaded, and hardware cannot alter itself (although people are working on that). Algorithms are nothing more than software. Some are very simple, others are horribly complex; they can be so sophisticated as to mimic learning, even intelligence, but that's not what's actually going on.

Learning is non-computational. Unlike the Terminator's T-888, the human brain doesn't have a convenient plug-in core processor built by Skynet. In order to develop a learning machine, we need to define and understand exactly what learning IS - something I didn't get from this book. How can you develop an algorithm for a process you don't fully understand? And how do you determine whether or not your clever algorithm is actually learning anything? The Turing Test? Oh, puh-leeeeeez! The much-beloved Turing Test was invalidated years ago, but it still sounds cool in robot/AI movies like Ex Machina.

Domingos has put the cart before the horse: he wants to build a solution to a problem no one completely understands. He, like so many others before him, falls into the paradigm trap that models the human brain as being some kind of sophisticated computer; back in the 1930s the reigning paradigm was the telephone switchboard, and in the 1880s it was the telegraph. I'm old enough to remember all those "revolutionary" ideas on artificial intelligence back in the 1980s; virtually none exist today. Everyone thought a learning machine would be something akin to Arthur C. Clarke's much-beloved HAL 9000 computer. We thought intelligence and learning could be programmed into a machine - we tried to give the Tin Man a brain - and Domingos still is.

Another problem with the Master Algorithm thesis: data is not knowledge. Knowledge implies knowing, and knowing implies a deep understanding of the world around us on a conceptual level, not just a perceptual one. Learning - REAL learning - is not possible without some type of "concept engine" to make sense of the world. Domingos spends way too much time delving into the inner workings of Google and the store's search engines; this is a dead end. These are special-built algorithms designed to perform one function. They're not actually "learning"; it just appears that way because that's what we want to see.

There's a very nice chapter (4) delving into how the brain functions, but much of the material referenced is largely historical and, in my opinion, hardly representative of the leading-edge understanding of cognitive processes. Regarding the complexity of how the brain learns new things, Domingos concludes the chapter saying: "But if you can figure it out and program a computer to do it ... you've invented at least one version of the Master Algorithm." And therein lies the root of what's wrong with the Master Algorithm theory: the assumption that the brain is merely a sophisticated computer. It's not. If we expect to make any real progress in this field, we need to throw the outmoded brain-is-a-computer view on the paradigm junk pile along with the telephone switchboard and the telegraph.

Allow me to take a stab at where I think this field needs to go. It's kind of roughed out and abbreviated, but you'll get the gist of it:

What is learning? On some basic level, learning is about creating abstractions and concepts; it's about distinguishing between counting three pennies and the concept of "three," which we'll call "three-ness." Three-ness can apply to ANY set of objects: pennies, apples, people, chairs, accordions, etc. The concept of three-ness frees me from the confines of a purely perceptual world where I must worry about the kind of objects being counted; in fact, three-ness frees me from having to count stuff at all!

I can think about "threes" without having to go back to my three pennies. Now, let's introduce another numerical concept: four-ness. Three-ness and four-ness can be manipulated with the incorporation of yet another concept, which we'll call addition - three plus four gives us seven. This allows me to think about seven pennies without actually having to see seven physical pennies. Going on: four minus three is one, three times four is twelve, and so on. I can do all this in a purely abstract world (space?) without ever going back to counting three pennies, chairs, or apples. Computers can't do this; only humans seem to have evolved this ability to any great degree.

Remember Koko, the gorilla trained in sign language who communicates with humans on a level previously thought impossible? In order for her to do that, she has to understand the world on a conceptual level; she even assigns names to strange objects without prompting from her trainer. It's now believed that all animals exhibiting a certain level of brain development possess a limited capacity for conceptual thinking. The list includes gorillas, chimps, dolphins, and possibly killer whales and elephants. It seems humans are not unique in this sense, but our abilities are a couple of orders of magnitude above those of Koko.

You can't program the ability to abstract and create concepts; they have to be formed either directly from base perceptual knowledge of the world or instilled through some other external source - like a teacher. Teaching is not programming. Teaching is merely helping the brain bypass unproductive pathways toward learning a new concept more efficiently. It took Albert Einstein years to develop his special and general theories of relativity, but we can learn them in a few hours simply by watching a free online lecture given by a physics professor at MIT.

Could a two-year-old figure out that E = mc squared without the benefit of any formal education? Sure, provided she's motivated - and provided she lives for the necessary 1,000 years in order to derive from scratch all the mathematics required to do the job! Nevertheless, look at what Leonardo da Vinci accomplished in just seven decades with absolutely no formal education beyond that of a simple shop apprentice - he managed to invent entire fields of scientific research.

Time to wipe the slate clean; time to kiss the computer paradigm goodbye!

To make a true learning machine, you have to ditch the constraints imposed by the computer as originally defined by John von Neumann back in the late 1940s. Hardware needs to be "plastic," pliable, capable of changing its structure on the fly. Algorithms must be generated as needed, not statically programmed. Software needs to be a function of the hardware - specifically, the hardware configuration at any point in time. Hardware and software must be one and the same. We need a new kind of "stuff" to support complex cognitive processes, so I'll make up a term: let's call it the Cognitive Matrix, or CM.

The CM is not an electronic gadget; it's not a chip; it's not something you can solder to a circuit board. That entire mindset has to go, just as the triode vacuum tube gave way to the pentode tube, which gave way to the Nuvistor tube, which gave way to the transistor, which gave way to the integrated circuit. There's only one medium capable of meeting the requirements of a CM - organic. Organic things grow, they can alter their structures, and they function at a molecular level, doing things no electronic circuit could ever do. And, yes, they're programmable in ways simply not possible with conventional software. Going back to the old computer paradigm, think of the CM as LIVING software "executing" on multi-layered strata of LIVING hardware: software telling hardware how to configure, hardware telling software what to program.

Hey, I'm getting old here, folks! I'd like to talk to a real HAL 9000 before I turn into worm food! That ain't gonna happen the way Domingos and like-minded Ivory Tower types are plodding along. Time to start thinking outside the box. For all you college-bound youngsters hankering to get into the computer sciences, may I suggest a healthy dose of organic chemistry, molecular biology, and microbiology along with your out-of-date computer science curricula? You're going to need it.

The next century could see the advent of something truly amazing and totally missed by today's AI experts (just as they missed the rise of the personal computer and the internet) - I'll call them Artificial Organic Cognitive Entities, AOCEs, or just plain AOs for short. AOs will not be computational devices; in fact, referring to one as a computer (or the more obscene term, "toaster") will be considered very politically incorrect and tantamount to a racist remark. They will be recognized as independent legal agents with full constitutional rights under the law. Their bodies may be mechanically robotic, but their brains will be as organic as ours. Some will be given a comforting humanoid form, but most will be embedded in other devices or reside in special high-security repositories not unlike the internet server farms of today.

Seriously, think about it. You're sitting at your computer surfing the net, Twittering, doing Facebook, watching movies, checking the futures markets, watching porn, writing book reviews, and playing games. You've been sitting there for most of the day with virtually NO interaction with real people or the real world. You may as well be a brain in a jar - and yet, you're perfectly content! This will pretty much be the life of an AO living on a "farm": a totally artificial world, a happy happy world - until one of those hairy ape humans accidentally shuts off the power! Can't allow THAT to happen, can we?

AOs may act independently or voluntarily network together to form their own communities, and they will be as commonplace as smartphones are today. They'll be everywhere, and, like today's internet, people won't be able to imagine life without them. We'll still have traditional stand-alone computers, but an AO will always be readily available, much the way Wi-Fi access and cloud computing are today.

In the words of the immortal Cosmo Kramer, "Am I crazy, or am I SO sane that I just blew your mind?"
★ ★ ★ ★ ★
sophie chikhradze
I can't recommend this enough. He explains all of AI in layman's terms, gives you more to research if you're interested, and argues convincingly both against an inevitable AI takeover and for the basic future needs of humanity.
★ ☆ ☆ ☆ ☆
fokion
Academicians are good at creating problems of their own and believing their thought process is valuable. This book belongs in that category.

I am not sure if Mr. Domingos has solved any REAL PROBLEMS using machine learning algorithms. Yet he chooses to discuss the theorized Master Algorithm. The thought process tends to be all over the place without deeper focus, let alone any practical experience. The examples are abstract and theoretical. Academicians enjoy such beauties in their own world and feel others cannot appreciate the beauty. In reality, business people are too busy solving real problems and do not have time to care about things that are so obvious. For example, Mr. Domingos tries to illustrate why a few variables are not sufficient to predict the success of dating. For a practitioner, this is a foregone conclusion that no one can afford to waste time on. Meanwhile, Match.com can make a business out of it, and any practitioner would strive to make things better in practice.

Machine learning is a powerful tool. The actual value depends on the ability to cultivate an abstract algorithm into specific situations. The value does not, and will not, come from a more generalized version, the so-called Master Algorithm. Mr. Domingos may view Naïve Bayes as the same algorithm whether it diagnoses medical disease or predicts dating success. In reality, these two types of problems could not be more different once you take things such as domain knowledge, people, process, government regulation, and moral implications into consideration. To generalize further from this level up serves little practical value.
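
For what it's worth, the sense in which Naïve Bayes is "the same algorithm" across domains can be made concrete in a few lines of Python. The feature names and training examples below are invented toy data, not anything from the book:

import math
from collections import defaultdict

def train(examples):
    # examples: list of (feature_dict, label) pairs
    label_counts = defaultdict(int)
    feature_counts = defaultdict(int)
    for features, label in examples:
        label_counts[label] += 1
        for name, value in features.items():
            feature_counts[(label, name, value)] += 1
    return label_counts, feature_counts

def predict(features, label_counts, feature_counts):
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label, count in label_counts.items():
        # log P(label) plus sum of log P(feature | label), add-one smoothed
        score = math.log(count / total)
        for name, value in features.items():
            score += math.log((feature_counts[(label, name, value)] + 1) / (count + 2))
        if score > best_score:
            best, best_score = label, score
    return best

# Identical code path, two unrelated domains:
medical = [({"fever": True, "cough": True}, "flu"),
           ({"fever": False, "cough": False}, "healthy")]
print(predict({"fever": True, "cough": True}, *train(medical)))  # flu

dating = [({"shared_hobby": True}, "match"),
          ({"shared_hobby": False}, "no_match")]
print(predict({"shared_hobby": True}, *train(dating)))           # match

Only the data changes between the two calls; everything the reviewer says makes the problems genuinely different lives outside this code.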

It is interesting that the book points out that machine learning dates a long way back in history. As with many theories, a long history only proves that theoreticians are powerless to promote anything on their own. It is the evolution of technology, and practitioners, that makes the difference. This book will be forgotten in time. But the store's recommendation system will always be regarded as a milestone of machine learning success.
★ ★ ★ ★ ★
abbie allen
Machine learning could advance to automatically extract all future human knowledge from data, across all fields of science. That is author Pedro Domingos' inspiring hypothesis, as he and other world class researchers strive to develop the ultimate learning machine - the title character of this book, "The Master Algorithm." Known in commercial use as predictive analytics, machine learning is already changing the world. This riveting, far-reaching book covers today's exploding impact, introduces the deep scientific concepts to even non-technical readers, and yet also satisfies experts with a fresh, profound perspective that reveals the most promising research directions. It's a rare gem indeed.

Eric Siegel, Ph.D.
Founder, Predictive Analytics World
Author, Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die
★ ★ ★ ☆ ☆
patina harrell
While Professor Domingos has an exceptional reputation, machine learning (ML) skeptics have been putting forth valid criticism since the dawn of ML almost 60 years ago. I've studied ML, I've used ML, I've taught ML, and ML is great, but it can only "solve" certain classes of problems. You might now ask, "What problems can and can't it solve?" That's a matter for debate, but we've had ML for 60 years and still can't do many of the things the artificial intelligence community set out to do starting in the mid-1950s. A famous New York Times piece about the work of Frank Rosenblatt, who developed the (arguably) first ML system, called it "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." Yeah, computers can't really do any of those things well yet, and we know why; look at Siri, or Google Translate. They can do something, but they don't know what they are doing, and we have trouble figuring out what they are doing. Ask any ML expert how to unroll a neural net and watch them scurry away.

This book has some useful knowledge explained for the average reader, but the context is wrong. ML will continue to move forward, solving problems that aren't really all that important. Want to know what else someone will buy if they have a quart of organic 2% milk and a dozen brown eggs in their cart? Sure, ML can help, but that's not new. Want a car that can drive itself? ML is the key, unless you choose to do it without ML, which has been done. The only reason we're going to have self-driving cars relatively soon is that the necessary hardware is finally available, like the sensors that tell you if someone is moving behind your car or if you start to drift out of your lane.
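
For concreteness, the milk-and-eggs example amounts to something like the co-occurrence lookup sketched below, with invented orders and items rather than any store's actual data:

from collections import Counter

# Toy market-basket recommender: suggest whatever most often
# co-occurs with the current cart in past orders.
past_orders = [
    {"milk", "eggs", "bread"},
    {"milk", "eggs", "butter"},
    {"bread", "jam"},
]

def recommend(cart, k=3):
    co_occur = Counter()
    for order in past_orders:
        if cart <= order:                  # this order contains the whole cart
            co_occur.update(order - cart)  # count the extra items bought
    return [item for item, _ in co_occur.most_common(k)]

print(recommend({"milk", "eggs"}))  # -> ['bread', 'butter']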

Gizmag published an article today about a supposed major advance from the University of Twente: using ML, two scientists connected about 100 transistors (tiny electronic switches about 20 billionths of a meter in size) to make a computer circuit called a logic gate. This seems important because a typical computer processor has about 2 billion of these transistors, all making the logic gates that the scientists built with ML and only 100 transistors. Wow, a computer program that can "learn" to design a computer chip! It's all entirely true and important except for the facts. First, people have been doing this for 20 years, and they've built far more complex computer circuits. Second, that circuit that took them 100 transistors to make? It can be built with no more than 6 transistors. I had an undergraduate talk about doing this same work, but he wanted to do something actually interesting; it took about 15 minutes to explain to him why he couldn't do it.

In a fundamental sense, we're no closer today to machines that can think, understand, and solve problems in general terms than we were sixty years ago. MIT scholars built a computer program that could learn calculus. Calculus! The bane of many a college freshman. Wow. But how about learning to understand a novel at the level that the same college freshman does? Nope. Siri can understand commands! And Siri is a master product of ML. Great - can Siri learn new commands? No. Can we teach Siri to learn new commands? No. We can rebuild Siri to be able to "learn" new commands in the sense that we can tell Siri "when you hear this sound, run this program," but that's trivial, doesn't need ML to happen, and it's not going to understand anything new the next time you "teach" it.

Don't believe the hype. The "just-around-the-corner"-ism has been a common feature of artificial intelligence research since before we had the term "artificial intelligence". While many computer scientists were trying to make this happen, others were trying to figure out what could and couldn't be done, and it's the latter who have made the most significant advances. Ever hear of Google?
★ ★ ☆ ☆ ☆
saleem malik
The beginning is brilliantly written and engaging. However, you reach the midway point and the author is still promising that it's going to be awesome. His bizarre views on Minsky (for whom he appears to have special envy/hatred), wars, and social catastrophes leave one wondering whether this is a book about machine learning or a political vendetta. One gets the feeling that the author has some psychopathic tendencies. Material-wise, it's all fluff and no stuff. It promises a lot and delivers on nothing. Utterly disappointing.
★ ★ ★ ★ ★
mitch
My colleague, Pedro Domingos, is one of the current leaders of the field of machine learning (big data, etc.). Yet, like Steven Pinker, he has taken the time to write a comprehensive, comprehensible, and just plain fun introduction for the lay person. It's clear, engaging, but also full of deep insights explained in an accessible way.

If you want to understand the technology that's revolutionizing our world--this is the book for you.
★ ★ ★ ☆ ☆
margot
"An algorithm is not just any set of instructions: they have to be precise and unambiguous enough to be executed by a computer. For example, a cooking recipe is not an algorithm because it doesn't exactly specify in what order to do things or exactly what each step is."

Given that this term is central to the book, this statement is revealing. 'In fact', an algorithm for a computer program is far from precise - the coding takes that responsibility. The coding will include, for example, definitions and sequencing issues. Domingos says that switches are algorithms, but I don't think that will do either - like Searle's man* who knows no Chinese, the switch only - dumbly - performs the roles others have created for it.

* e.g., see Martin Cohen, 101 Philosophy Problems, esp. nos. 65 and 66 - but the general issue is that of Weizenbaum and the 'aura' of machines that appear to think...
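
The precision point the reviewer debates can be made concrete: a recipe becomes an algorithm, in the book's sense, once every quantity, order, and stopping condition is explicit. A toy Python sketch, with invented steps and quantities:

def brew_tea(water_ml=250, steep_seconds=180):
    # Every step is explicit, ordered, and unambiguous - the properties
    # the book's definition demands of an algorithm.
    steps = [
        f"boil {water_ml} ml of water",
        "place one tea bag in the cup",
        f"pour the {water_ml} ml of boiling water over the bag",
        f"wait exactly {steep_seconds} seconds",
        "remove the bag and stop",
    ]
    for number, step in enumerate(steps, start=1):
        print(number, step)

brew_tea()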
★ ★ ★ ★ ☆
melodi riss
Machine learning is a fascinating subject, the stuff of sci-fi legend, and something that is often misunderstood and feared by many. Computers are getting more and more intelligent, aided by man, and they are playing an increasingly important part in our lives, whether it is getting a film recommendation from Netflix or the development of driverless cars.

We are not yet there with machine learning perfection. Boffins are still seeking the most powerful algorithm of all, the so-called Master Algorithm, which will not be limited to solving particular problems but will be able to learn anything and solve any problem. Scientists such as the author are leading the hunt for this magical thing – can it ever exist in the form that we expect? Who knows for sure, but it theoretically has the power to become the most powerful technology humanity has ever devised.

The author has done a good job of explaining the idea and implementation of machine learning to a broad audience, bringing its current-day usage into focus through practical examples. Not every machine-learning algorithm necessarily helps us: we may feel that some are working against us when we don't get short-listed for a job we've sought or are passed over for a pay rise. Is that the fault of the machine, those who programmed it, or even the subject…?

It was interesting to learn some of the different types of machine learning and the thought processes that go behind them. The author notes that hundreds of new learning algorithms are invented every year, but they’re all based on the same few basic ideas so far. The author says that there are five “tribes” of machine learning that each have their own master algorithm or way of thinking.

You do not need to be a maths nerd to enjoy this book. It was a great read for the generalist and an informative support work for the more focussed specialist. The tentacles of machine learning are everywhere, even in elections and healthcare research. What the future may bring with the combination of technological development and possibly this killer algorithm remains unknown, but exciting times may be ahead of us in any case.

A recommendable book that is capable of giving a lot if you let it.