Hoodwinked

Imagine that you're in your self-driving car, cruising toward a busy intersection. You panic as you realize the car isn't slowing for the stop sign ahead.

We've all misinterpreted our surroundings before. Missing that bottom step or reaching too far for a nearby pen is part of being human. But what happens when a machine is made to miscalculate? Before we dive headlong into the world of autonomous cars and cities, we need to make sure our machines can't be fooled.

Second Glance

Machine learning algorithms rely on input from their surroundings. As AI research has progressed, researchers have discovered that carefully crafted real-world inputs can derail an algorithm's results. AI experts call these inputs "adversarial examples" or "weird events," and they present a major challenge for the future of an increasingly automated world.

A few strategically placed bits of tape can make a stop sign unrecognizable to object detectors, for instance. Juggalo makeup, of all things, can render faces undetectable to facial recognition technology.

“We can think of them as inputs that we expect the network to process in one way, but the machine does something unexpected upon seeing that input,” Anish Athalye, a computer scientist at the Massachusetts Institute of Technology, told the BBC.

By crafting adversarial examples, it's possible to trick algorithms into mistaking sea turtles for rifles, or even cats for guacamole, as Athalye himself demonstrated last year.

Holy 'Mole

Neural networks, which power much of machine learning, learn in a fashion similar to humans. As we learn about the world around us, we pick up patterns and associations that help us classify things: ducks quack; cows live in fields. Algorithms learn in a similar fashion, extracting patterns from thousands of labeled examples before being tested on new items they must evaluate on their own.

Information is fed into the system and passed through many layers of evaluation before the network produces an output. But when that input is subtly manipulated, the output can be wildly wrong.
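To make that concrete, here is a minimal sketch of one classic attack, the fast gradient sign method (FGSM). The BBC piece doesn't describe this particular technique, and the model and image below are illustrative stand-ins, but the recipe is the general one: nudge each input pixel slightly in whichever direction most increases the classifier's error.

```python
# Minimal FGSM sketch. Assumes PyTorch and torchvision; the pretrained
# model and the random "image" are stand-ins for any differentiable
# classifier and any real photo.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")
model.eval()

def fgsm_attack(image, true_label, epsilon=0.01):
    """Return a copy of `image` perturbed to increase the model's loss.

    image: tensor of shape (1, 3, H, W), values in [0, 1]
    true_label: tensor of shape (1,) holding the correct class index
    epsilon: perturbation budget; small enough to look unchanged to a human
    """
    image = image.clone().detach().requires_grad_(True)

    # Forward pass: how badly does the model miss the true label?
    loss = F.cross_entropy(model(image), true_label)

    # Backward pass: which direction in pixel space increases that loss?
    loss.backward()

    # Step every pixel by +/- epsilon along the sign of the gradient,
    # then clamp back to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage sketch: a random tensor stands in for a real photograph.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([0])
x_adv = fgsm_attack(x, y)
print("prediction before:", model(x).argmax(1).item())
print("prediction after: ", model(x_adv).argmax(1).item())
```

With a real photograph and a small enough epsilon, the perturbed image typically looks unchanged to the human eye yet can flip the model's prediction, which is exactly the turtle-to-rifle trick described above.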

We still have a lot to learn about how algorithms evaluate their surroundings. We're only just beginning to understand these processes, and to work out how to keep them from being fooled or biased.

Before the tech goes mainstream, we need to train and fool-proof our machines as thoroughly as possible.

READ MORE: The 'weird events' that make machines hallucinate [The BBC]

More on artificial intelligence: You Have No Idea What Artificial Intelligence Really Does

