Saturday, September 01, 2018

Using deep-learning techniques to locate potential human activities in videos | Computer Sciences - Phys.Org

"When a police officer begins to raise a hand in traffic, human drivers realize that the officer is about to signal them to stop. But computers find it harder to work out people's next likely actions based on their current behavior" according to Phys.Org.
 
The 'YoTube' detector helps make AI more human-centered.
Photo: iStock
Now, a team of A*STAR researchers and colleagues has developed a detector that can successfully pick out where human actions will occur in videos, in almost real-time.

Image analysis technology will need to become better at understanding human intentions if it is to be employed in a wide range of applications, says Hongyuan Zhu, a computer scientist at A*STAR's Institute for Infocomm Research, who led the study. Driverless cars must be able to detect police officers and interpret their actions quickly and accurately, for safe driving, he explains. Autonomous systems could also be trained to identify suspicious activities such as fighting, theft, or dropping dangerous items, and alert security officers.

Computers are already extremely good at detecting objects in static images, thanks to deep-learning techniques, which use artificial neural networks to process complex image information. But videos with moving objects are more challenging. "Understanding human actions in videos is a necessary step to build smarter and friendlier machines," says Zhu.
Read more... 
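
To make the gap Zhu describes concrete, here is a minimal, hypothetical sketch (not the YoTube method from the paper cited below) that runs a standard pretrained image object detector frame by frame over a video. It finds objects well in each still frame, but it carries no memory from one frame to the next, which is exactly the temporal reasoning that action-proposal models aim to add. The file name traffic.mp4 and the score threshold are illustrative assumptions; the sketch assumes PyTorch, torchvision, and OpenCV are installed.

```python
import cv2
import torch
import torchvision

# Pretrained Faster R-CNN: strong at detecting objects in static images.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

cap = cv2.VideoCapture("traffic.mp4")  # hypothetical input clip
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # OpenCV gives BGR uint8; convert to an RGB float tensor in [0, 1].
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        detections = model([tensor])[0]
    # Each frame yields independent boxes/labels/scores; nothing links a
    # raised hand in this frame to a stop signal a second later.
    keep = detections["scores"] > 0.8
    print(detections["labels"][keep].tolist())
cap.release()
```

Per-frame detection like this can run close to real time on a GPU, but anticipating where an action will occur requires reasoning across frames, which is what the recurrent component named in the YoTube paper below is designed to provide.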

Additional resources
Hongyuan Zhu et al., "YoTube: Searching Action Proposal via Recurrent and Static Regression Networks," IEEE Transactions on Image Processing (2018). DOI: 10.1109/TIP.2018.2806279

Detecting 'deepfake' videos in the blink of an eye, by Siwei Lyu, Associate Professor of Computer Science and Director of the Computer Vision and Machine Learning Lab, University at Albany, State University of New York
"The new technology behind machine learning-enhanced fake videos has a crucial flaw: Computer-generated faces don't blink as often as real people do."

Source: Phys.Org