Algorithms. Is there anything they can’t do?
Yes. Tons of stuff, but they remain at the heart of the internet as we know it. Much of what we see online, whether through search engines, Siri, or Facebook, is surfaced by algorithms designed to improve their own performance as they gather information. These are today's most important, if not most effective, learning machines, but Pedro Domingos, a professor of computer science at the University of Washington, is more concerned with what we'll be capable of calculating tomorrow.
In his new book, The Master Algorithm, Domingos makes the case that we may one day create an algorithm so adept at learning and harnessing information that it will forever change the way we think. That hypothetical algorithm would make Google's site crawler look like basic arithmetic.
Inverse asked Domingos about his mathematically messianic prophecy and the future of calculation.
Can you give me a brief history of the research and development of machine learning? What are two or three of the biggest milestones people should know about in how these algorithms have evolved over the last several decades?
Computers got their start around World War II — that's really when computer science began. From the very beginning, there were people who were writing algorithms, explaining line-by-line what the computer should do. But there were also people, including Alan Turing, who were very interested in this idea of computers learning from experience the way people do.
One of the first milestones was the Perceptron Algorithm, the first neural network, developed by Frank Rosenblatt. It was the beginning of simulating how the brain learns. It was extremely popular in the 50s and 60s, but then a book called Perceptrons: An Introduction to Computational Geometry revealed a lot of its limitations, and people lost faith in machine learning for about 20 years.
Machine learning came back in the 80s, when people realized the conventional approach didn't scale: hand-coding all the knowledge you need to solve problems is too expensive, slow, and brittle.
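For readers curious what that first milestone actually looked like, here is a minimal sketch of the classic perceptron learning rule in Python. The toy AND-style dataset and the function name are illustrative choices, not anything from Domingos or from Rosenblatt's original implementation.

import numpy as np

def perceptron_train(X, y, epochs=20, lr=1.0):
    # Rosenblatt-style perceptron rule: whenever an example is misclassified,
    # nudge the weights toward it. Labels are expected to be +1 / -1.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:  # wrong side of the boundary
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Toy usage: a linearly separable AND-style problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w, b = perceptron_train(X, y)
print(np.sign(X @ w + b))  # [-1. -1. -1.  1.]

A single perceptron can only draw a straight line (or hyperplane) between classes, which is exactly the kind of limitation the Perceptrons book highlighted: it cannot learn XOR, for instance.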
What would you say is the central thesis of your book?
My thesis is that there is a learning algorithm that can discover any knowledge from data. All the knowledge that human beings have, acquired by experience and evolution, and all the future knowledge that we have yet to acquire, like curing cancer: all of this can be learned by an algorithm. There are reasons for and against this idea, and I discuss them in the book. But at the end of the day, we're only going to find out if I'm right by trying.
There are different paradigms that different sets of researchers fall under, and each has what it calls its own master algorithm. The connectionists', for example, is backpropagation, which is really what drives deep learning. Often they are convinced that this is the master algorithm, and that they will solve the whole learning problem with it. I myself don't think any of those things by itself is the master algorithm; we need an algorithm that combines them. Again, the analogy is with the unifying theories you find in physics or biology, like the standard model or the central dogma. That's what we need here, too.
The impact on the world would be revolutionary, in all aspects of life.
Source: Inverse