Why haven't we made it out of Artificial Narrow Intelligence?

It can feel like the technology of today is incomparable to even moderately recent technology. While what we now see in applications such as Siri or self-driving cars is relatively new, the building blocks of that technology go back to the 1950s. The difference lies in the capabilities of the technologies and the breadth of use cases they are applied to.

While the scope of these technologies is far more vast, all AI up to this point still falls into the same category: Artificial Narrow Intelligence (ANI), or weak AI. This is essentially the assembly-line-worker mentality applied to AI: you will have one job, we will teach you how to do it, and you will do it well. If you ask that worker or machine to perform an utterly different task without training it first, there is a good chance something will catch on fire.

Some of the most famous early examples of ANI are game-related. That makes sense, considering games offer a relatively easy way to measure progress. The first example was an AI that could play checkers in 1952. Not the most advanced of games, but still above the capacity of most five-year-olds. The next big step in computerized gaming came in 1997, when IBM's Deep Blue defeated the reigning world chess champion, Garry Kasparov. Clearly a leap in technological skill, but given 45 years, realistically, anyone could have made the jump from checkers to chess.

Then in 2016, AlphaGo reached another game milestone by beating a world Go champion, Lee Sedol. Like chess, Go is a notoriously complex game that takes deep strategic skill to win. Still, an AI's ability to play a game says nothing about its ability to complete an entirely different task, even a simple one, such as telling time.

This is not to say that there weren't other huge developments in ANI during that time. Apple's Siri launched in 2011, and Amazon's Alexa followed suit in 2014. These technologies might seem incomparable to AIs that play games most of us learned as kids, yet both Siri and Alexa are classified as ANI. They can tell you who the King of Prussia was in 1871 and then turn around and rattle off the titles of the entire Beyoncé discography, but only because they were trained on all of those pieces of information.

Anyone who has ever tried to have a bit of fun with either of these assistants has inevitably found themselves in the situation where it makes a segue comment, hoping to distract you from realizing that it has no idea what you are saying. It's fun and cute because their coy voices are trained to deliver some quip or joke, but that is just to keep you from noticing their lack of understanding. It's basic misdirection applied to AI.

While it might seem that Siri or Alexa know everything there is to know about the universe, everything they know, they know because they have been explicitly taught.

So why haven't we been able to break through to the next level of AI?

Humans have been able to launch rockets and satellites into space, and to figure out how old a fossil is by measuring radioactive decay. We have figured out so much about our universe, but nothing has been as challenging as figuring out the thing closest to us. To jump to the next phase of AI, we need machines to emulate the cognitive learning power of the most advanced machine in the world: our brains.

There is an excellent quote from Aaron Saenz, an AI writer, in which he says that all these narrow AIs "are like the amino acids in the primordial ooze of the Earth." There was an enormous amount of time when the Earth was steeping in small, single-celled organisms. Then, in what almost constitutes a blink of an eye in terms of the age of the universe, the biological technology of life skyrocketed. We went from single-celled organisms to multi-celled life, then up to fish, reptiles, and mammals in almost no time. For the last 70 years we have been steeping in ANI, but we have been missing that spark of life to push us over the edge.

The things that are incredibly difficult for humans, say knowing what 9283 times 20928 is, are incredibly easy for a machine. Trivial, almost. However, the things that are almost effortless for us, like identifying a picture of a sheep versus a picture of a cloud, are absurdly difficult for a machine. It's almost comical that we can easily teach a machine to do things that are horrendously challenging for us, but we can't get over the hurdle of teaching it things we find so simple.
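As a rough illustration, here is a minimal Python sketch of that contrast: the multiplication that stumps most of us is a single built-in operation for a machine, while the sheep-versus-cloud judgment has no one-line formula at all. The classifier names in the comments below are hypothetical placeholders, not any particular library's API.

```python
# The task that is hard for a human: exact and instantaneous for a machine.
print(9283 * 20928)  # -> 194274624

# The task that is trivial for a human: there is no direct formula mapping
# raw pixels to "sheep" or "cloud". A machine only gets there after being
# trained on thousands of labeled photos, roughly along these lines:
#
#   model = train_classifier(labeled_photos)  # hypothetical helper
#   model.predict(new_photo)                  # -> "sheep" or "cloud", with luck
#
# (train_classifier and predict are placeholder names, not a real API.)
```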

(Pretty simple don’t you think?)

Current projections assume it might take decades to get to the next phase of AI: Artificial General Intelligence, or AGI. In order to reach this benchmark, a machine would have to be as smart as a human across the board. Even if we set the bar very low and say the machine only has to have a child's intelligence, it would still be a challenge precisely because the simplest things are somehow the most difficult.

