
Saturday, January 20, 2018

Wise up, deep learning may never create a general purpose AI - Technology

AI and deep learning have been subject to a huge amount of hype. In a new paper, Gary Marcus argues there's been an “irrational exuberance” around deep learning. 

Photo: CSA Images/iStock
In August 2015, a number of carefully selected Facebook users in the Bay Area discovered a new feature on Facebook Messenger.

Known as M, the service was designed to rival Google Now and Apple’s Siri. A personal assistant that would answer questions in a natural way, make restaurant reservations and help with Uber bookings, M was meant to be a step forward in natural language understanding, the virtual assistant that – unlike Siri – wasn’t a dismal experience.

Fast forward a couple of years, and the general purpose personal assistant has been demoted within Facebook’s product offering. Poor M. The hope was that it would tell users jokes and act as a guide, life coach and optimisation tool.

The disappointment around M largely derives from the fact that it attempted a new approach: instead of depending solely on AI, the service introduced a human layer – the AI was supervised by human beings. If the machine received a question or task it could not handle, a human would step in. In that way, the human being would act to further train the algorithm.

This, of course, is where we are with virtual assistants; on the cusp of what may well be a transformational technology – in the form of deep learning – we’re at the peak of inflated expectations (right next to the connected home which, despite the best efforts of the electronics industry at this year’s CES, appears not to be fulfilling a single human need – I’m looking at you, Cloi). To misquote Peter Thiel, we were promised general purpose artificial intelligence, and we got a home assistant that looks at the contents of our fridge and tells us to make a sandwich. What a time to be alive.

The aspirational narrative is that AI will be everywhere and in every object, as ubiquitous as oxygen. It can help us read X-rays more accurately, pursue science more effectively, empower us to understand foreign languages without studying and ensure that autonomous vehicles behave the way we would like them to. There will be breakthroughs in agriculture, medicine and science. Governments will discover ways to combat inequality and crime.

But we’re not there yet. And maybe we never will be, says Gary Marcus, the former head of AI at Uber and a professor at New York University. Marcus – who participated in a robust exchange of views with DeepMind’s Demis Hassabis at the Neural Information Processing Systems conference in December last year – is known for tempering the tech community’s excitement about the progress of research into machine learning. In a paper, Deep Learning: A Critical Appraisal, published earlier this month, he outlines “concerns” that must be addressed if the most widely known technique in artificial intelligence research is to lead to general purpose AI. Marcus writes that the field may be subject to “irrational exuberance” and outlines what he feels might be done to move it forward.

Marcus argues that in instances when data sets are large enough and labelled, and computing power is unlimited, then deep learning acts as a powerful tool. However, “systems that rely on deep learning frequently have to generalise beyond the specific data that they have seen, whether to a new pronunciation of a word or to an image that differs from one that the system has seen before, and where data are less than infinite, the ability of formal proofs to guarantee high-quality performance is more limited.”

The paper outlines ten areas in which Marcus argues deep learning has limitations; for instance, its need for vast labelled data sets. In applications such as image recognition, a more restricted volume of data can mean that deep learning has difficulty generalising to novel perspectives (a round sticker on a parking sign could be identified as, say, a ball).
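Marcus’s generalisation worry is easy to demonstrate in miniature. The sketch below (my illustration, not from the paper) fits a crude nearest-centroid classifier on two-dimensional data, then tests it on the same distribution rotated by 90 degrees – a stand-in for a “novel perspective” the training data never covered:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, angle=0.0):
    """Two Gaussian blobs separated along the x-axis, optionally rotated."""
    X = np.vstack([rng.normal([-2.0, 0.0], 1.0, (n, 2)),
                   rng.normal([+2.0, 0.0], 1.0, (n, 2))])
    y = np.array([0] * n + [1] * n)
    c, s = np.cos(angle), np.sin(angle)
    return X @ np.array([[c, -s], [s, c]]).T, y

def fit_centroids(X, y):
    """'Training': remember the mean of each class."""
    return np.stack([X[y == k].mean(axis=0) for k in (0, 1)])

def accuracy(centroids, X, y):
    pred = np.linalg.norm(X[:, None] - centroids, axis=2).argmin(axis=1)
    return (pred == y).mean()

centroids = fit_centroids(*make_data(200))                 # fit on upright data
acc_in = accuracy(centroids, *make_data(200))              # same distribution
acc_out = accuracy(centroids, *make_data(200, np.pi / 2))  # rotated 90 degrees
print(f"in-distribution accuracy: {acc_in:.2f}, rotated: {acc_out:.2f}")
```

The model is near-perfect on data like its training set, but close to chance on the rotated version; nothing in the training data told it that orientation can vary.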


If you enjoyed this post, make sure you subscribe to my Email Updates!

Students as young as pre-K learning computer coding through Dallas ISD STEM | KENS 5 TV

"Students are learning computer coding as young as pre-kindergarten," reports Demond Fernandez, WFAA TV news reporter. 

Photo: KENS 5 TV

It is a trend happening at one elementary school in Pleasant Grove, where children are getting a very early introduction to STEM programs.

In Mrs. Rogers’ kindergarten class at Frederick Douglas Elementary School, five-year-old students are busy learning to code. In fact, computer coding and technology have become part of regular class instruction on the campus since a pilot program was introduced last school year.


Students from pre-K to fifth grade are programming and developing apps.

"The kids love it,” said Allana Felder, the school’s Science Instructional Coach. “They love getting on the computers.”

Teachers said these elementary school students are learning everything from the basics to beyond. Staff calls this early exposure to computer science a game changer for Dallas ISD.

“Statistics show that out of the STEM disciplines, technology and computer science is one of those that doesn’t have a high enrollment in the high schools and colleges," Felder explained. 

Staff believes the skills are setting students up for scholarships and careers early on.

Source: KENS 5 TV

If you enjoyed this post, make sure you subscribe to my Email Updates!

Science tells us when life begins. Why isn't it taught in the classroom? | Washington Examiner - Opinion

"Recent debates about what American students should be learning in their science classrooms have focused on evolution and climate change" says Brooke Stanton, founder and CEO of Contend Projects, a science education organization dedicated to spreading accurate information about the start of a human life and the biological science of human embryology. 

For 75 years the field of human embryology has documented when a human life begins in the Carnegie Stages of Early Human Embryonic Development and the Carnegie Chart.
Photo: iStock

Advocates for America’s latest K-12 science guidelines, the Next Generation Science Standards, claim that anyone who does not adopt or support the new standards, including the controversial content about evolution and climate change, is a “science denier.” Interestingly, this new science education policy excludes a fundamental, far-reaching, and powerful science reality: when the life of a human organism/being begins — thus making the NGSS a “science denier” as well.

The NGSS identifies key concepts or “core ideas” that represent the “most important aspects of science content knowledge” (per the designers). The first core idea in life sciences is that cells are the basic unit of life and that an organism may consist of one cell or many cells. This significant concept is extremely deficient if students are not learning about the structure, function, growth, and development of early human organisms/lives too.

When a human being begins to exist as a single-cell organism is an essential and relevant scientific fact of life that everyone can and should know, because there is a simple, well-established answer. For 75 years the field of human embryology (the branch of biology that specializes in the beginning of human life and early development) has documented when a human life begins in the Carnegie Stages of Early Human Embryonic Development and the Carnegie Chart.

Carnegie Stage 1a marks the beginning of a sexually reproduced human life.

The Carnegie Stages are the global authority of human embryological research. Human embryologists view the Carnegie Stages and Chart as chemists view the Periodic Table — it’s their gold standard. The Carnegie Chart contains the 23 Stages of development of the early human being during the eight-week embryonic period and was formally instituted in 1942 by the National Museum of Health and Medicine’s Human Developmental Anatomy Center (a secular government organization that is a part of the National Institutes of Health). The Carnegie Stages are required to be included in every genuine human embryology textbook worldwide.

In human sexual reproduction, both in vivo (inside the body) and in vitro (outside the body), the biological beginning of a new human being/organism occurs at Carnegie Stage 1a, at first contact of the sperm and the oocyte, the beginning of the biological process of fertilization. Fertilization mainly occurs in vivo in a woman’s fallopian tube, not in her uterus/womb, and the beginning of the fertilization process is when pregnancy normally begins as well.
Read more... 

Source: Washington Examiner

If you enjoyed this post, make sure you subscribe to my Email Updates!

Science of Learning symposium brings together experts from diverse fields | The Hub at Johns Hopkins - Science+Technology

The Science of Learning symposium will take place on Monday, Jan. 22, from 8 a.m.–5:30 p.m. at Hodson Hall on JHU's Homewood campus. The event will also be broadcast live on the Johns Hopkins UStream Channel.

"For its symposium next week, the Science of Learning Institute has borrowed the famous tagline chanted at passengers of the London Underground—"Mind the Gap,"" informs Katie Pearce, writer and editor at Johns Hopkins University.

In this case, "the gap" refers to the gulfs that exist between different disciplines, research methods, and even individual viewpoints when it comes to understanding the fundamental science of learning.

Of course, the Johns Hopkins institute was launched five years ago this month to tackle this very challenge.

"Our key goal has been to bridge this gap by creating opportunities for innovative collaboration in science and practice," said Barbara Landau, who directs the cross-disciplinary effort. "We consider this essential for achieving a truly comprehensive, integrative understanding of learning."

The diverse range of experts who will take part in the institute's third biennial symposium Monday reflects this ongoing mission. The event brings together experts in cognitive science, neuroscience, education, and other fields to explore different perspectives on the cognitive and neural bases for learning and motivation.

One visitor from outside Hopkins is psychologist Daniel Simons of the University of Illinois, whose famous "Invisible Gorilla" study helped reveal just how much the human brain can miss when focused elsewhere...


The institute, one of the most comprehensive of its kind, was forged with the goal of understanding learning from all levels of scientific inquiry, including brain and cognitive development, neurological and psychiatric diseases, and the effects of aging. Among its efforts since, the institute has funded a total of 33 collaborative grants on target research areas including memory and attention, language, and spatial cognition.

Source: The Hub at Johns Hopkins and Daniel Simons Channel (YouTube)

If you enjoyed this post, make sure you subscribe to my Email Updates!

Sunday, January 14, 2018

Maryam Mirzakhani Scholarship for Women | Financial Tribune - Art And Culture

In honor of the late award-winning Iranian mathematician at Stanford University, Maryam Mirzakhani (1977-2017), the Persia Educational Foundation, based in London, has established the ‘Persia Mirzakhani Scholarship for Women.’

Photo: Financial Tribune

The scholarship is designed to support the education of Persian-speaking women of any age or citizenship enrolled in a master of science program, or the final year of a doctorate, in STEM at University College London. STEM is a recently established department at UCL, which focuses on the interface between science, technology, engineering and math.

The inaugural scholarship will be £1,500 ($2,000). Eligible applicants are encouraged to submit their application for the 2018-2019 academic year to the foundation by February 1, according to the foundation’s website.

Scholarship winners will be announced on May 3, marking the first birth anniversary of the accomplished professor.

In 2014, Mirzakhani became a household name after becoming the first woman ever to win the Fields Medal, which is widely referred to as the Nobel Prize of mathematics awarded to honor excellence in the field to mathematicians under the age of 40.
Read more... 

Source: Financial Tribune

If you enjoyed this post, make sure you subscribe to my Email Updates!

The scientific debates of the Vienna circle | The Economist - Books and arts

"Philosophy and science between the wars" appeared in the Books and arts section of the print edition under the headline "Talking heads".

Entrance to the Mathematical Seminar at the University of Vienna, Boltzmanngasse 5, the meeting place of the Vienna Circle.
Photo: Wikipedia, the free encyclopedia

ON OCTOBER 21st 1916 Friedrich Adler, a theoretical physicist turned socialist politician, went to a famous restaurant in Vienna and ate a three-course lunch. Having lingered over coffee, he went up to Karl von Stürgkh, the imperial prime minister, who was sitting at a nearby table, and shot him several times with a pistol, killing him. Adler, the son of the legendary founder of Austro-Hungarian social democracy, calmly waited to be arrested. Something had to be done to change the general way of thinking, he claimed, and he had done it. At first condemned to death, he was pardoned two years later.

When the Nazis came to power in Austria, Adler, by then the secretary of the Socialist Workers’ International, held urgent meetings with other socialist politicians to work out a common strategy. During one of these meetings, an emotional Adler rambled on, seemingly unable to come to the point. “He shoots better than he talks,” one French delegate remarked drily. “Exact Thinking in Demented Times”, Karl Sigmund’s fond and knowledgeable exploration of the ideas and members of the legendary Vienna circle between the two world wars, contains stark warnings not only about demented times, but also about the possible costs of exact thinking.

Exact Thinking in Demented Times:
The Vienna Circle and the Epic Quest for the Foundations of Science

The Vienna circle was made up mainly of physicists, mathematicians and philosophers, whose fortnightly meetings were dedicated to investigating problems of logic, science, language and mathematics. Led by Moritz Schlick, a philosopher, the discussions attracted some brilliant intellectuals, including Kurt Gödel, a mathematician; Otto Neurath, an economist; three philosophers—Rudolf Carnap, Sir Karl Popper and Ludwig Wittgenstein (pictured, whose work became the main focus of the discussions for a while)—as well as Albert Einstein and Bertrand Russell.

Debates about the possibility of a unified science, the dangerous vagaries of everyday language or the structures of mathematics and logic raged on for more than two decades. These arguments, which seemed so abstract, produced insights of vital importance for computing, astrophysics and cosmology, not to mention theory of science and philosophy. Mr Sigmund devotes a considerable part of the book to explaining some of these concepts. Readers unable to grasp them immediately are in good company. “Most scholars agree”, he writes, “that neither Wittgenstein nor Russell ever really understood Gödel’s ideas.”
Read more... 

Source: The Economist 

If you enjoyed this post, make sure you subscribe to my Email Updates!

Alpha Zero Teaches Itself Chess 4 Hours, Then Beats Dad | Science 2.0 - Life sciences

"Peter Heine Nielsen, a Danish chess grandmaster, summarized it quite well: 'I always wondered, if some superior alien race came to Earth, how they would play chess. Now I know,'" according to Tommaso Dorigo, experimental particle physicist. 


The architecture that beat humans at Go, a game notoriously impervious to brute-force computation (AlphaGo, by Google DeepMind), was converted to allow the machine to tackle other "closed-rules" games. Subsequently, the program was given the rules of chess, and a huge battery of Google's GPUs to train itself on the game. Within four hours, the alien emerged. And it is indeed a new class of player.
The AlphaZero neural network uses reinforcement learning to teach itself from scratch. It does not rely on previous knowledge, which in the case of chess is surprising, as the mass of knowledge accumulated over centuries of play is hard to shrug off. Combined with a powerful search algorithm, the neural network is at present unbeatable. This was demonstrated in a 100-game match against the strongest chess program around, Stockfish 8.
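The self-play idea is easier to grasp in miniature. The toy below (a hedged sketch of mine, nothing like AlphaZero's actual neural-network-plus-search architecture) uses tabular Q-learning on the subtraction game "21": players alternately remove 1-3 stones and taking the last stone wins. The agent plays only against itself, and the only signal is who won:

```python
import random

random.seed(0)
N, MOVES, EPS, ALPHA = 21, (1, 2, 3), 0.1, 0.1
# Q[(stones_left, move)]: learned value of `move` for the player to move
Q = {(n, m): 0.0 for n in range(1, N + 1) for m in MOVES if m <= n}

def best(n):
    """Greedy move at a position with n stones left."""
    return max((m for m in MOVES if m <= n), key=lambda m: Q[(n, m)])

for _ in range(30000):                     # self-play episodes
    n, trace = N, []                       # trace: (state, move) per ply
    while n > 0:
        legal = [m for m in MOVES if m <= n]
        m = random.choice(legal) if random.random() < EPS else best(n)
        trace.append((n, m))
        n -= m
    reward = 1.0                           # the player who moved last won
    for state, move in reversed(trace):    # zero-sum: alternate the sign
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward

# optimal play always leaves the opponent a multiple of 4 stones
print([best(n) for n in (5, 6, 7, 21)])
```

After a few tens of thousands of self-play games the greedy policy rediscovers the classic strategy (take n mod 4 stones) with no opening book and no hand-coded heuristics, which is the spirit, if not remotely the scale, of AlphaZero.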

What impressed me when I saw a few games from that match, which concluded with 25 wins and 75 draws and no losses for AlphaZero, is that the machine displays an evolved treatment of the openings, is keen to sacrifice material for positional gains, and has no prejudices. Indeed, while most chess engines have pre-defined weights that discourage certain kinds of positions (say, putting your king in the center of the board with lots of pieces around capable of threatening it, a no-no punished with negative weights that prevent the engine from entertaining the thought), AlphaZero knows no such borders. Look at this position, for example:

It transpires that something has gone wrong for Black: while its position is solid, it is left with a light-squared bishop that has no future, blocked as it is by its own central pawns. White, instead, has gotten rid of its own potentially similarly fated dark-squared bishop, and enjoys more space. So what is White's next move here? 

Source: Science 2.0

If you enjoyed this post, make sure you subscribe to my Email Updates!

Learn To Build Your Own Neural Networks With This Training Bundle | Interesting Engineering - Innovation - AI

This is a promotional article about one of Interesting Engineering’s partner companies. By shopping with us, you not only get the materials you need, but you’re also supporting our website.

The IE Shop, the Interesting Engineering store featuring exclusive offers on the latest gadgets, software, and online courses hand-picked for Interesting Engineering readers, informs: "Use TensorFlow and Theano to get a firm understanding of deep learning and artificial intelligence."

Photo: Pixabay

Science fiction movies seem to have done Artificial Intelligence (AI) a bit of a disservice. Due to decades of popular yet farfetched sci-fi releases, when most people think of AI, they think only of evil robots taking over the planet, or perhaps friendlier (but still evil, maybe?) robots along the lines of the robot-woman in Ex Machina.

In many ways, however, real-life artificial intelligence has become more interesting than in the movies, with self-driving cars redefining transportation, quantum computing reshaping how we work with large sets of data, and medical robots performing some of the most advanced surgeries known to man with astounding precision.

Indeed, the future of technology in many ways belongs to AI. This means that the most exciting and important careers of the future will belong to those who possess a solid understanding of both deep learning and artificial intelligence principles. The Deep Learning and Artificial Intelligence Introductory Bundle breaks down some of the more fascinating topics in these fields into easy-to-understand and entertaining lessons, and it’s on sale for just $39.

The first course in this bundle kicks off your education in this exciting field by giving you a deep understanding of probability theory—one of the most fundamental elements of deep learning and AI, since it allows for complex and accurate predictions to be made. You’ll learn about everything from Moore’s Law and linear regression to large-scale data analysis and beyond.
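As a taste of the kind of topic the bundle covers (this snippet is my own illustration, not course material), here is linear regression solved in closed form on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 3.0 * x + 2.0 + rng.normal(0, 0.5, x.size)  # noisy samples of y = 3x + 2

X = np.column_stack([x, np.ones_like(x)])       # design matrix [x, 1]
slope, intercept = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
```

The fit recovers roughly the line the noise was added to; probability theory enters when you ask how confident those estimates are.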

If you enjoyed this post, make sure you subscribe to my Email Updates!

Saturday, January 13, 2018

How machine learning engineers can detect and debug algorithmic bias | Boing Boing

"Ben Lorica, O'Reilly's chief data scientist, has posted slides and notes from his talk at last December's Strata Data Conference in Singapore, 'We need to build machine learning tools to augment machine learning engineers,'" notes Cory Doctorow, writer, blogger and activist.

Photo: Boing Boing

Lorica describes a new job emerging in IT departments: "machine learning engineers," whose job is to adapt machine learning models for production environments. These new engineers run the risk of embedding algorithmic bias into their systems, which can unfairly discriminate, create liability, and reduce the quality of the recommendations the systems produce. 

He presents a set of technical and procedural steps to minimize these risks, with links to the relevant papers and code. It's really required reading for anyone implementing a machine learning system in a production environment. 
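As a flavour of what such checks look like (my illustration, not Lorica's actual tooling), here is the classic "80% rule" test for disparate impact, which compares positive-outcome rates across demographic groups:

```python
from collections import defaultdict

def disparate_impact(predictions, groups):
    """Ratio of the lowest group's positive-outcome rate to the highest's."""
    pos, tot = defaultdict(int), defaultdict(int)
    for p, g in zip(predictions, groups):
        pos[g] += p          # p is 1 for a positive outcome, 0 otherwise
        tot[g] += 1
    rates = {g: pos[g] / tot[g] for g in tot}
    return min(rates.values()) / max(rates.values()), rates

# hypothetical loan-approval predictions for two demographic groups
preds = [1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["a"] * 6 + ["b"] * 6
ratio, rates = disparate_impact(preds, groups)
print(rates, ratio)   # a ratio below 0.8 is a conventional flag for review
```

Running a check like this on held-out data before deployment is one cheap, procedural way to surface bias early, before it reaches production.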

We need to build machine learning tools to augment machine learning engineers [Ben Lorica/O'Reilly]
(via 4 Short Links)

Source: Boing Boing

If you enjoyed this post, make sure you subscribe to my Email Updates!

Seven Ways Cybercriminals Can Use Machine Learning | Forbes - Technology

"AI has given cybercriminals new ways to steal information, but there are things you can do to prevent it" reports Alexander Polyakov, CTO and Co-Founder at ERPScan. President of EAS-SEC. SAP cybersecurity evangelist. 

Photo: Shutterstock

Ben Gurion, the main international airport in Israel, is one of the most protected airports in the world, known for its multilayered security. On the way from the office to the airport, you are caught in the lens of airport cameras. The road curves for several kilometers to the terminal, and while you are driving, the security system has enough time to analyze your identity. At any sign of danger, you will be intercepted. Behavioral anomaly analysis in computer systems works the same way, and its implementation is effective in defense: while a perpetrator is running certain commands, an AI-based system can identify the intrusion and stave off any damage.
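The anomaly-analysis idea can be sketched in a few lines (a toy illustration with made-up numbers, not a real security product): learn a baseline of, say, commands per minute from normal activity, then flag sessions that deviate by more than three standard deviations:

```python
import statistics

# hypothetical baseline: commands per minute observed in normal sessions
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]
mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)

def is_anomalous(rate, k=3.0):
    """Flag a session whose rate is more than k standard deviations out."""
    return abs(rate - mu) > k * sigma

print(is_anomalous(14), is_anomalous(90))
```

Real systems model many signals at once (commands, timing, network destinations), but the principle is the same: learn what normal looks like, then intercept the outliers.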

AI deployment is not so rosy in the world of cybersecurity. Hackers move forward and adopt it as well. The U.S. intelligence community reports that artificial intelligence actually works in cybercriminals' favor.

Let's go over a few areas where hackers deploy machine learning and find out which cybersecurity measures should be taken.

Data Gathering

Every single breach starts with data gathering. Hackers maximize the chances of success by gaining more information. They classify users and select a potential victim thoroughly using several classification and clustering methods. This task can be automated.
How can you protect yourself from being their victim? It goes without saying that your personal information must not be available in open sources, so you should not publish an awful lot of information about yourself on social networks.

Neural networks can be trained to create spam that resembles real email. For this to work, however, it is better to know the sender's behavior, which can be gleaned through network phishing that gives hackers easy access to personal information. Research presented at Black Hat on automated spearphishing on Twitter proves this idea: the tool increased the success rate of phishing campaigns to as much as 30%, twice that of traditional automation and similar to manual phishing.

How can you protect yourself from phishing? You could simply mail a question back to the sender. Hackers have become savvier, however, and can analyze your message and respond appropriately, so that you are sure the account is not compromised. Today's systems are not that sophisticated, but it will not be long before smart chatbots communicate with you the way your friends do.

The most actionable recommendation is to ask the user through other channels and messengers whether he or she sent the message. There is little chance that several of his or her accounts have been compromised at once...

The ideas above are only some examples of the ways hackers can use machine learning.

Aside from using more secure passwords and being more careful when following links to third-party websites, I can only advise paying attention to AI-based security systems in order to stay ahead of perpetrators. A year or two ago, everyone was skeptical about the use of artificial intelligence. Today's research findings and their implementation in products prove that AI actually works, and it's here to stay.

Source: Forbes 

If you enjoyed this post, make sure you subscribe to my Email Updates!