
Saturday, June 19, 2021

Game On: Why College Admission Is Rigged and How to Beat the System | Books - Macmillan

Check out Game On: Why College Admission Is Rigged and How to Beat the System, a book by Susan F. Paterno, director of the journalism program at Chapman University.

Game On:
Why College Admission Is Rigged and How to Beat the System

Director of the Chapman journalism program—and mother of four recent college grads—Susan F. Paterno leads you through the admissions process to help you and your family make the best decision possible.

How is it possible that Harvard is more affordable for most American families than their local state university? Or that up to half of eligible students receive no financial aid? Or that public universities are rejecting homegrown middle- and working-class applicants and instead enrolling wealthy out-of-state students? College admission has escalated into a high-stakes game of emotional and financial survival. How is the deck stacked against you? And what can you do about it?

READ THE FULL EXCERPT

Future artificial intelligence will happen at the edge | Opinion - IOL

“Edge computing” has become a buzzword, just as “Internet of Things” (IoT) and “cloud computing” did before it. The COVID-19 pandemic has also tremendously accelerated the adoption of edge computing, writes Professor Louis CH Fourie, technology strategist.

Photo: IOL

According to the 2021 State of the Edge Report by the Linux Foundation, digital health care, manufacturing, and retail in particular will increase their use of edge computing in the coming years, pushing the share of enterprise-generated data created and processed outside the cloud from 10% to 75% by 2022....

Edge computing

Edge computing refers to computation and data storage located close to where they are needed, at or near the source of the data, instead of relying on a centrally located cloud in one of the major data centres to do all the processing work...

Edge artificial intelligence (AI)

Edge computing is a very powerful paradigm shift, but it is even more powerful when combined with AI. Most AI processes are carried out in the cloud and need considerable processing capacity. Edge AI, however, requires very little or even no cloud infrastructure beyond the initial training phase. Edge AI only requires a microprocessor and sensors to process the data and make predictions in real time.
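The on-device inference loop described above can be sketched in a few lines. This is a minimal illustration, not production edge code: the sensor readings, the weights, and the vibration-monitoring scenario are all hypothetical stand-ins for a model trained once (say, in the cloud) and then run entirely on the device.

```python
# Minimal sketch of edge AI inference: a tiny model, trained offline,
# makes predictions locally with no further cloud connection.
# Weights, bias, and sensor values are hypothetical illustrations.

# Assumed output of an offline training phase for a tiny linear
# classifier: is the machine vibration "normal" or "faulty"?
WEIGHTS = [0.8, -0.5, 0.3]
BIAS = -0.2

def read_sensors():
    # Placeholder for real sensor I/O on the microprocessor.
    return [0.9, 0.1, 0.4]

def predict(features):
    # All computation happens on-device: a dot product and a threshold.
    score = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return "faulty" if score > 0.5 else "normal"

print(predict(read_sensors()))  # decision made locally, in real time
```

Because the decision is only a handful of multiplications, it runs comfortably on a microprocessor and needs no round trip to a data centre, which is the whole point of edge AI.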

Read more... 

Source: IOL

Will AI Make Interpreters and Sign Language Obsolete? | Innovation - Interesting Engineering

Artificial intelligence is changing how we view language, and how we're making it more accessible, write Jenn Halweil, Chief Story Engineer, and Skylar Walters, Research Intern, at GoBeyond Lab.

Photo: Markus Spiske/Unsplash
In the age of the internet, people are being drawn closer and closer together: you can Snapchat your friend from Turkey, video call your parents on their fancy vacation, or send a quick text to your old pen pal (now your new keyboard pal) in Japan. 

But as the world is drawn closer together, our attention spans are becoming more and more commodified. We spend hours scrolling through Instagram, while spending less time engaging with each other directly. 

Ironically, artificial intelligence is now changing that...

Technology has incredible potential to bring people together, but when people are left out, whether as a result of disabilities, race, ethnicity, or otherwise, it can be a divisive and isolating force. Thanks to natural language processing, we’re starting to fill in these gaps between people to build a more accessible future. 

Read more... 

Source: Interesting Engineering 

The danger of anthropomorphic language in robotic AI systems | AI - Brookings Institution

Cindy M. Grimm, professor in the School of Mechanical, Industrial, and Manufacturing Engineering at Oregon State University, argues that when describing the behavior of robotic systems, we tend to rely on anthropomorphisms. 

A visitor puts her hand against the glass in front of “Nexi” robot during the “ROBOTS” exhibition at the Hong Kong Science Museum in Hong Kong on May 8, 2021. The exhibition explores the 500-year story of humanoid robots and the artistic and scientific quest to understand what it means to be human.
Photo: Miguel Candela / SOPA Images/Sip via Reuters Connect

Cameras “see,” decision algorithms “think,” and classification systems “recognize.” But the use of such terms can set us up for failure, since they create expectations and assumptions that often do not hold, especially in the minds of people who have no training in the underlying technologies involved. This is particularly problematic because many of the tasks we envision for robotic technologies are typically ones that humans currently do (or could do) some part of. The natural tendency is to describe these tasks as a human would using the “skills” a human has—which may be very different from how a robot performs the task. If the task specification relies only on “human” specifications—without making clear the differences between “robotic” skills and “human” ones—then the chance of a misalignment between the human-based description of the task and what the robot actually does will increase.

Designing, procuring, and evaluating AI and robotic systems that are safe, effective, and behave in predictable ways represents a central challenge in contemporary artificial intelligence, and using a systematic approach in choosing the language that describes these systems is the first step toward mitigating risks associated with unexamined assumptions about AI and robotic capabilities. Specifically, actions we consider simple need to be broken down and their components carefully mapped to their algorithmic and sensor counterparts, while avoiding the pitfalls of anthropomorphic language. This serves two purposes. First, it helps to reveal underlying assumptions and biases by more clearly defining functionality. Second, it helps non-technical experts better understand the limitations and capabilities of the underlying technology, so they can better judge if it meets their application needs...

One could argue that the two statements “The robot sees an apple” and “The robot detects an object that has the appearance of an apple” are pretty much the same, but in their assumptions of cognitive ability, they are very different. “See” carries with it a host of internal models and assumptions: Apples are red or green, fit in the hand, smell like apples, crunch when you bite them, are found on trees and in fruit bowls, etc. We are used to seeing apples in a wide variety of lighting conditions and varying viewpoints—and we have some notion of the context in which they are likely to appear. We can separate out pictures of apples from paintings or cartoons. We can recognize other objects in a scene that tell us if something is likely to be an apple or another red object. In other words, we bring an entire internal representation of what an apple is when looking at an image—we don’t just see the pixels. “Detect,” on the other hand, connotes fewer internal assumptions and evokes, instead, the image of someone pointing a sensor at an apple and having it go “ding.” This is more akin to how a robot “sees” and how it internally represents an apple. A sensor (the camera) is pointed at the apple, and the numeric distribution of pixel values is examined. If the pixel values “match” (numerically) the previously learned examples of pixel distributions for images labeled as “apples,” the algorithm returns the symbol “apple.” How does the algorithm get this set of example pixel distributions? Not by running around and picking up objects and seeing if they smell and taste like apples, but from millions of labeled images (thank you, Flickr). These images are largely taken with good lighting and from standard viewpoints, which means that the algorithm struggles to detect an apple in bad lighting or from odd angles, and it cannot tell when an image that matches its criteria for an apple isn’t actually one. 
Hence, it is more accurate to say that the robot has detected an object that has the appearance of an apple.
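The pixel-distribution matching Grimm describes can be sketched as a toy classifier. Everything here is invented for illustration (the learned color distributions, the labels); the point is that the program only compares numbers against previously learned examples and has no internal model of what an apple is.

```python
# Toy sketch of "detect" vs. "see": the classifier compares pixel
# statistics to previously learned examples and returns the closest
# label. Distributions and labels are invented for illustration.

# Assumed to come from many labeled training images: average
# (red, green, blue) pixel distributions per label.
LEARNED = {
    "apple":  [0.70, 0.20, 0.10],
    "banana": [0.45, 0.45, 0.10],
    "sky":    [0.10, 0.20, 0.70],
}

def detect(pixel_stats):
    # Return the label whose learned distribution is numerically closest.
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(LEARNED, key=lambda label: distance(LEARNED[label], pixel_stats))

# A mostly-red measurement "matches" the apple distribution...
print(detect([0.68, 0.22, 0.10]))  # -> apple
# ...but a red ball photographed in similar lighting would match it too.
```

Nothing in this code knows that apples crunch, grow on trees, or fit in the hand; it only goes “ding” when the numbers line up, which is exactly why “detect” is the more honest verb.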

Read more... 

Source: Brookings Institution

Miami middle school students study STEM, robotics during summer vacation | Miami-Dade County - WPLG Local 10

Middle school students across Miami-Dade County are embarking on a summer learning program to learn about STEM, reports Hatzel Vela, Reporter at WPLG Local 10.

Photo: Screenshot from Local10.com's Video.

It’s called the Verizon Innovative Learning Program — and it’s finally all in-person.

The program is a free, intensive, month-long STEM program for about one hundred young students...

The lack of person-to-person interaction and the shift to online learning are arguably responsible for what many are calling the “COVID gap,” a pandemic-related academic setback affecting even the smartest kids, like those in this group.

Read more... 

Source: WPLG Local 10

Cybersecurity is the next frontier for AI and ML | The Machine - VentureBeat

This story originally appeared on Raffy.ch. Copyright 2021


Raffael Marty, technology executive and entrepreneur, observes that before diving into cybersecurity and how the industry is using AI at this point, we should first define the term AI. 

Photo: Shutterstock

Artificial intelligence (AI), as the term is used today, is the overarching concept covering machine learning (supervised, including deep learning, and unsupervised), as well as other algorithmic approaches that are more than just simple statistics. These other algorithms include the fields of natural language processing (NLP), natural language understanding (NLU), reinforcement learning, and knowledge representation. These are the most relevant approaches in cybersecurity.

Given this definition, how evolved are cybersecurity products when it comes to using AI and ML?...

How AI is used in security

Generally, I see the correct application of AI in the supervised machine learning camp where there is a lot of labeled data available: malware detection (telling benign binaries from malware), malware classification (attributing malware to some malware family), document and website classification, document analysis, and natural language understanding for phishing and BEC detection. There is some early but promising work being done on graph (or social network) analytics for communication analysis. But you need a lot of data and contextual information that is not easy to get your hands on. Then, there are a couple of companies that are using belief networks to model expert knowledge, for example, for event triage or insider threat detection. But unfortunately, these companies are few and far between.
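The supervised pattern Marty describes (labeled examples in, classifier out) can be sketched with a toy nearest-centroid model. The malware "features" and labels below are invented stand-ins (think entropy, import count, packed flag); real systems use far richer features and models, but the shape of the approach is the same.

```python
# Toy sketch of supervised malware detection: labeled feature vectors
# train a nearest-centroid classifier. All features and labels here
# are invented for illustration.

LABELED = [
    ([0.2, 0.1, 0.0], "benign"),
    ([0.3, 0.2, 0.0], "benign"),
    ([0.9, 0.8, 1.0], "malware"),
    ([0.8, 0.9, 1.0], "malware"),
]

def train(examples):
    # Average the feature vectors for each label to get its centroid.
    grouped = {}
    for features, label in examples:
        grouped.setdefault(label, []).append(features)
    return {
        label: [sum(col) / len(col) for col in zip(*vectors)]
        for label, vectors in grouped.items()
    }

def classify(model, features):
    # Assign the label of the nearest centroid.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], features))

model = train(LABELED)
print(classify(model, [0.85, 0.8, 1.0]))  # near the malware centroid
```

The hard part in practice is exactly what the article flags: getting enough correctly labeled data, not the classifier itself.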

Read more... 

Source: VentureBeat 

Art and the Algorithm: Computer Program Predicts Painting Preferences | Informatics - Technology Networks

This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.


A new study by the California Institute of Technology offers insight into how people make aesthetic judgments.

Impressionist painting by Worthington Whittredge.
Photo:  Smithsonian American Art Museum, L.E. Katzenbach Fund

Do you like the thick brush strokes and soft color palettes of an impressionist painting such as those by Claude Monet? 

Or do you prefer the bold colors and abstract shapes of a Rothko? Individual art tastes have a certain mystique to them, but now a new Caltech study shows that a simple computer program can accurately predict which paintings a person will like.

The new study, appearing in the journal Nature Human Behaviour, utilized Amazon's crowdsourcing platform Mechanical Turk to enlist more than 1,500 volunteers to rate paintings in the genres of impressionism, cubism, abstract, and color field. The volunteers' answers were fed into a computer program, and after this training period, the computer could predict the volunteers' art preferences far better than chance...

In this case, the deep-learning approach did not include any of the selected low- or high-level visual features used in the first part of the study, so the computer had to "decide" what features to analyze on its own.

"In deep-neural-network models, we do not actually know exactly how the network is solving a particular task because the models learn by themselves much like real brains do," explains Iigaya. "It can be very mysterious, but when we looked inside the neural network, we were able to tell that it was constructing the same feature categories we selected ourselves." These results hint at the possibility that features used for determining aesthetic preference might emerge naturally in a brain-like architecture.
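The feature-based half of the study can be sketched as a simple linear model: a painting's rating is predicted as a weighted sum of visual features, with the weights fit to one volunteer's past ratings. The features and ratings below are invented for illustration; this is a sketch of the general idea, not the paper's actual model or data.

```python
# Hedged sketch of rating prediction from visual features: fit a
# linear model (weighted sum of features) to one hypothetical
# volunteer's past ratings, then score an unseen painting.

# (features, rating) pairs, e.g. (hue contrast, concreteness) -> rating.
DATA = [
    ([1.0, 0.2], 3.6),
    ([0.8, 0.4], 3.2),
    ([0.1, 0.9], 1.7),
    ([0.2, 0.8], 1.9),
]

def fit(data, lr=0.05, epochs=2000):
    # Plain stochastic gradient descent on squared error.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, rating in data:
            pred = sum(wi * xi for wi, xi in zip(w, features)) + b
            err = pred - rating
            w = [wi - lr * err * xi for wi, xi in zip(w, features)]
            b -= lr * err
    return w, b

def predict(w, b, features):
    return sum(wi * xi for wi, xi in zip(w, features)) + b

w, b = fit(DATA)
# An unseen painting with features like the highly rated ones
# scores close to those high ratings.
print(round(predict(w, b, [0.9, 0.3]), 1))
```

The deep-learning part of the study goes one step further: instead of being handed features like these, the network constructs its own, which is what made it striking that the learned features resembled the hand-picked ones.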

Read more...

Additional resources

Reference: Iigaya K, Yi S, Wahle IA, Tanwisuth K, O’Doherty JP. Aesthetic preference for art can be predicted from a mixture of low- and high-level visual features. Nat Hum Behav. 2021;5(6):743-755. doi: 10.1038/s41562-021-01124-6

Source: Technology Networks  

Tuesday, June 15, 2021

33 New Skills You Can Learn on LinkedIn Learning This Week | New courses - LinkedIn Learning Blog

Each week presents an opportunity to learn new skills to help us navigate this unique moment in our lives and careers, Rachel Parnes reports.

Photo: LinkedIn Learning Blog
At LinkedIn Learning, we want to provide the online learning courses you need to learn those skills. Each week we add to our 16,000+ course library. This past week we added 33 courses. What can you expect from the new additions? 

Whether you’re learning to get started as a full-stack web developer or honing your AWS skills, we’ve got you covered on those topics and more. Check out one of the 33 new courses this week.

Looking for resources to help you get your next job? We can help with that too. Find courses that map to the top in-demand jobs, free until December 31, 2021, at opportunity.Linkedin.com.   

The new courses now available on LinkedIn Learning are:

Read more... 

Source: LinkedIn Learning Blog

Composing with geometry: This Venezuelan guitarist makes music using a compass and a ruler | Music & Nightlife - Miami Herald

Lorena Cespedes, a Colombian student majoring in journalism at Florida International University, says Aureo Puerta Carreño makes geometry his muse. The Venezuelan guitarist, 28, created a method to compose music using geometric patterns. 

Venezuelan guitarist Aureo Puerta Carreño uses geometry to compose music.
Photo: Handout

Musicians and mathematicians have applauded his work. His compositions have been played at guitar festivals in the United States, and in 2020 he was named composer-in-residence by the Miami Symphony Orchestra.

“I compose my music through geometry, and for this, I only use a compass and a ruler,” he said. “It is a system that I invented six years ago, by which a matrix of hexagons becomes my canvas, where I am going to write the notes. In short, I can put music to any object I want.”...

“But the squares on the paper did not help me to place the 12 notes of our musical tonal system, since a square only has four sides,” he said. “One day watching a program about the importance of bees, the hexagon came to mind, and that was the thing that led to the discovery of the geometric pattern on which music is based.”...

“In the academy, we learn with the traditional method,” said Vicky Beltran, a student. “Professor Puerta has taught us with his method a few times, and it has been fun. If he writes all the notes on the hexagon’s left side, we know that we are playing a happy song, and if it is on the right side, we are playing a sad song.” 

Read more... 

Source: Miami Herald

These Are the 10 Hardest Math Problems That Remain Unsolved | Science - Popular Mechanics

The smartest people in the world can't crack them. Maybe you'll have better luck.

Photo: Popular Mechanics

For all of the recent strides we've made in the math world—like a supercomputer finally solving the Sum of Three Cubes problem that puzzled mathematicians for 65 years—we're forever crunching calculations in pursuit of deeper numerical knowledge. Some math problems have been challenging us for centuries, and while brain-busters like these hardest math problems that follow may seem impossible, someone is bound to solve 'em eventually. Maybe.  

For now, you can take a crack at the hardest math problems known to man, woman, and machine.

Read more... 

Source: Popular Mechanics