"The advantages of mathematics, however, sometimes come with a cost… in a nutshell… not everything is provable," the researchers, led by first author and computer scientist Shai Ben-David from the University of Waterloo, write in their paper.
"Here we show that machine learning shares this fate."
Awareness of these mathematical limitations goes back to the famous Austrian mathematician Kurt Gödel, who in the 1930s proved what are known as the incompleteness theorems – two results showing that some mathematical statements can be neither proved nor disproved within a given consistent system of axioms.
Now, Ben-David's new research indicates that machine learning is limited by the same kind of undecidability.
In this argument, a machine's ability to learn a given task – called learnability – can be constrained by unprovable mathematics. In other words, the question of whether the machine can learn at all may itself be undecidable: something that no algorithm, and no proof, can settle with a definitive true-or-false answer.
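To get a feel for what an undecidable yes-or-no question looks like, the textbook example is the halting problem. The sketch below is purely illustrative – it is not the problem analysed in the paper, just the classic diagonal argument for why a perfect halting-checker cannot exist:

```python
# Illustrative only: the classic halting-problem argument, the standard
# example of an undecidable question. Not the construction from the paper.

def halts(program, argument):
    """Hypothetical decider: would return True iff program(argument) halts.

    No correct, always-terminating implementation of this function can exist.
    """
    raise NotImplementedError("no total halting decider exists")

def paradox(program):
    # Do the opposite of what the decider predicts about program(program):
    # loop forever if it says "halts", halt immediately if it says "loops".
    if halts(program, program):
        while True:
            pass
    return "halted"

# Asking whether paradox(paradox) halts leads to a contradiction either way,
# which is why 'halts' cannot be implemented.
```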
"For us, it was a surprise," senior researcher and mathematician Amir Yehudayoff, from the Technion-Israel Institute of Technology, explained to Nature...
According to the researchers, the learning problem they study is closely related to a machine learning framework known as probably approximately correct learning (aka PAC learning), and whether it can be learned turns out to hinge on the continuum hypothesis – a long-standing question about the sizes of infinite sets that Gödel himself investigated, and which is known to be neither provable nor refutable from the standard axioms of set theory.
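For reference, here is a rough sketch of the standard (agnostic) PAC learnability definition. The notation follows common learning-theory textbooks and is not taken from the article or the paper itself:

```latex
% Standard textbook-style definition of (agnostic) PAC learnability;
% notation is illustrative, not quoted from the paper.
A hypothesis class $\mathcal{H}$ is PAC learnable if there exist a function
$m_{\mathcal{H}} : (0,1)^2 \to \mathbb{N}$ and a learning algorithm $A$ such
that for every accuracy $\epsilon \in (0,1)$, confidence $\delta \in (0,1)$,
and data distribution $\mathcal{D}$, running $A$ on a sample $S$ of
$m \ge m_{\mathcal{H}}(\epsilon, \delta)$ i.i.d.\ draws from $\mathcal{D}$
returns a hypothesis $A(S)$ satisfying, with probability at least $1-\delta$,
\[
  L_{\mathcal{D}}\bigl(A(S)\bigr) \;\le\; \min_{h \in \mathcal{H}} L_{\mathcal{D}}(h) + \epsilon,
\]
where $L_{\mathcal{D}}(h)$ denotes the expected loss of $h$ under $\mathcal{D}$.
```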
The findings are reported in Nature Machine Intelligence.
Source: ScienceAlert