
Saturday, February 22, 2020

AI Weekly: Why a slow movement for machine learning could be a good thing | AI - VentureBeat

Kyle Wiggers, Staff Writer at VentureBeat, reports that in 2019 the number of published papers related to AI and machine learning was nearly 25,000 in the U.S. alone, up from roughly 10,000 in 2015.

A snapshot of the 2019 NeurIPS conference in Vancouver, Canada.
Photo: Khari Johnson / VentureBeat
And NeurIPS 2019, one of the world’s largest machine learning and computational neuroscience conferences, featured close to 2,000 accepted papers and drew thousands of attendees.
There’s no question that the momentum reflects an uptick in publicity and funding — and correspondingly, competition — within the AI research community. But some academics suggest the relentless push for progress might be causing more harm than good...

In a recent tweet, Zachary Lipton, an assistant professor at Carnegie Mellon University, jointly appointed in the Tepper School of Business and the machine learning department, proposed a one-year moratorium on papers for the entire community, which he said might encourage “thinking” without “sprinting/hustling/spamming” toward deadlines...

There’s preliminary evidence to suggest the crunch has resulted in research that could mislead the public and stymie future work. In a 2018 meta-analysis, Lipton and Jacob Steinhardt, a member of the statistics faculty at the University of California, Berkeley and of the Berkeley Artificial Intelligence Lab, assert that troubling trends have emerged in machine learning scholarship, including:
  • A failure to distinguish between explanation and speculation and to identify the sources of empirical gains
  • The use of mathematics that obfuscates or impresses rather than clarifies
  • The misuse of language, for example by overloading established technical terms
They attribute this in part to the rapid expansion of the community and the consequent thinness of the reviewer pool. The “often-misaligned” incentives between scholarship and short-term measures of success, like inclusion at a leading academic conference, are also likely to blame, they say.
Read more...

Source: VentureBeat