What would we need to get to Artificial General Intelligence?

In order to reach Artificial General Intelligence, a machine would have to be as intelligent as a human across the board. This means, in the words of Professor Linda Gottfredson, that machines would need to have "the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience."

While a lot of the ANI around us might seem intelligent, getting to that kind of general, human-level intelligence is a huge hurdle to overcome.

So what's stopping us?

The first hurdle in making the leap to AGI is simply computing power. We have machines that are powerful enough, but they aren't exactly accessible. After bouncing between the US and China for the last decade, Japan took back the title of owning the world's fastest supercomputer this year. Fugaku (an alternative Japanese name for Mount Fuji) is a huge step up from the previous fastest computer, the US's Summit, in just about every way: it is more powerful, less expensive to build, and physically smaller, all significant factors. For AGI to become a reality, however, these supercomputers have to be accessible and affordable for all, not just for governments and companies like IBM.

It is promising to see how quickly and sharply these supercomputers outdo each other, though. At this point, it is difficult for any machine to hold the throne for more than two or three years before another comes along and blows the previous contender out of the water.

When could we actually get there?

There is an interesting and surprisingly accurate rule called Moore's Law, which observes that the number of transistors on a chip, and with it computing power, generally doubles every two years. On our current trajectory, it seems we might be on the right path.

Ray Kurzweil, a futurist and writer, estimates that AGI will be fully integrated into our lives once you can buy a machine with computing power equivalent to the human brain for $1,000. Assuming we keep following the same projection, Kurzweil predicts we might have affordable enough machines by 2029.
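To make the doubling rule concrete, here is a minimal back-of-the-envelope sketch of the projection described above. The function name and numbers are illustrative assumptions, not a forecast.

```python
# A rough sketch of "computing power doubles every two years".
# Illustrative only; real hardware progress is not this smooth.
def projected_power(base_power, years, doubling_period=2):
    """Power available after `years` if it doubles every `doubling_period` years."""
    return base_power * 2 ** (years / doubling_period)

# Ten years of doubling every two years is roughly a 32x increase.
print(projected_power(1, 10))  # -> 32.0
```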

Everyone, of course, has vastly different predictions on when this might happen. Some are not quite as optimistic, and many guesstimates fall between 2030 and 2100. This is all to say, it's complicated. Really, really complicated, and even the most knowledgeable people in the industry can have vastly varying hypotheses.

But that's not all…

I know you were excited about having a fully sentient robot best friend in the relatively near future, but unfortunately, there is a little bit more to it than that. Inevitably we will get to a point where we can create machines powerful and affordable enough for the job, but figuring out how to make them intelligent is a whole other problem. And that is before we even get into the moral question of whether we should be doing any of this at all.

Not surprisingly, there is a divide in opinions regarding what we still need to achieve before AGI would be possible. Some say it's bound to happen if we stay on our current trajectory. Others argue we need a handful of fundamental breakthroughs before we could ever get there. Often the dividing lines fall along which subfield of AI a researcher comes from. While there is probably more disagreement than consensus, many agree that two of the most significant hurdles on the step up to AGI are the transfer of knowledge and unsupervised learning.

Transfer of Knowledge

Humans use their acquired knowledge across different fields every day. Baking a birthday cake, you might draw on knowledge from your 7th-grade chemistry class. Next, you might have to dig deep back to elementary-school fractions to figure out how to cut that cake into an awkward nine pieces. Once the human brain learns something, we are able to use that knowledge in another situation without the slightest bit of thought. This is very much not the case for machines, which typically have to be trained again from scratch for each new task.
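As a rough illustration of what machine "transfer of knowledge" currently looks like, here is a minimal transfer-learning sketch, assuming a recent PyTorch/torchvision setup; the model choice and the number of new classes are illustrative assumptions, not something from the article.

```python
# A minimal transfer-learning sketch (assumes torch and torchvision are installed).
import torch.nn as nn
from torchvision import models

# Start from a model that already learned one task (ImageNet classification)...
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# ...freeze everything it has already learned...
for param in model.parameters():
    param.requires_grad = False

# ...and replace only the final layer so the old knowledge can be reused
# for a new task (here, a hypothetical 10-class problem).
model.fc = nn.Linear(model.fc.in_features, 10)
```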

Unsupervised Learning

Currently, we have two main types of machine learning: supervised and unsupervised learning. Supervised learning came first, and it requires a human to label every training example, sitting behind a keyboard for hours on end. As you might imagine, this is neither a quick nor a fun process. When machines depend on humans to do that preliminary work, the result is almost always a bottleneck.
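For contrast with the unsupervised case below, here is a minimal supervised-learning sketch using scikit-learn; the tiny dataset and its features are made up for illustration.

```python
# Supervised learning: a human supplies a label for every training example.
from sklearn.linear_model import LogisticRegression

# Hand-labeled data: each row is (hours studied, hours slept); the label is pass/fail.
X = [[1, 4], [2, 8], [8, 7], [9, 6]]
y = [0, 0, 1, 1]  # labels provided by a person

model = LogisticRegression().fit(X, y)
print(model.predict([[7, 7]]))  # predicts the label for a new, unseen example
```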

Unsupervised learning, on the other hand, is a bit newer and more complicated. Through clustering, representation learning, and density estimation, machines can sometimes learn, or more aptly, identify patterns without being explicitly told what to look for. Right now, common unsupervised learning use cases include exploratory analysis and dimensionality reduction. These are good starts, but in order to reach AGI, unsupervised learning capabilities would have to expand significantly.
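As a simple illustration of the clustering use case mentioned above, here is a minimal unsupervised-learning sketch with scikit-learn; the data points are made up.

```python
# Unsupervised learning: no labels are given; the algorithm groups the data itself.
from sklearn.cluster import KMeans

X = [[1.0, 1.1], [0.9, 1.0], [8.0, 8.2], [8.1, 7.9]]  # no labels this time
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(clusters)  # e.g. [0 0 1 1]: two groups discovered without supervision
```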

So where does that leave us?

The solution in front of you might not always be the one you expected, and that is absolutely the case with current AI. While we have yet to figure out AGI, we have been able to build increasingly complex ANI systems by combining algorithms and processes. We see this in particular with companies like Google or Facebook that have an abundance of ways to collect and utilize user data. Their systems might seem more complex than other forms of AI, but only because many narrow tools are working together, creating a web of information and algorithms that support and enhance one another.

One day we will reach AGI. We just don’t know if it will be in five, fifty, or five hundred years. Until then, ANI and other machine learning tools will grow more complex, giving us more tools and benefits, as well as more cause for concern.

