Are we actually prepared for Artificial Intelligence?

"Philosophically, intellectually—in every way—human society is unprepared for the rise of artificial intelligence"

Artificial Intelligence (AI) is getting a lot of attention these days, particularly in the technology industry and in corporate boardrooms. AI is also becoming prevalent in consumers’ everyday lives. Consumers don’t always recognize it as such, since corporate marketing experts prefer to avoid technical jargon in favor of consumer-friendly names like Siri and Alexa – but for people who are more technically inclined, the ubiquitous presence of AI is hard to miss.

AI is not a new concept. In fact, its roots go back several decades. So why so much buzz now? Is this just another hype cycle that will fade, or does AI truly have the potential to bring about transformations, for good or ill, of epic proportions?

Let’s take a look at how we got here and why AI is suddenly capturing so much attention. We will revisit a little of the history of AI and the convergence of three growth vectors: algorithmic advances, computing power, and the data explosion. Each vector has its own historical landmarks, as outlined below, until they converge around 2007, when the iPhone was first introduced.

The first vector, algorithmic advances, goes back as far as 1805, when French mathematician Adrien-Marie Legendre published the least squares method of regression, which provides the basis for many of today’s machine-learning models. In 1965 the architecture for deep learning using artificial neural networks was first developed. Between 1986 and 1998 we saw a number of algorithmic advances: backpropagation, which lets multi-layer neural networks learn from their own errors without human intervention; image recognition; natural language processing; and Google’s famous PageRank algorithm.
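Legendre’s method is still recognizable in today’s tooling. As a minimal sketch (illustrative only, using NumPy rather than anything period-accurate), here is a least squares fit of a line to noisy data:

```python
# Fit y = m*x + b by minimizing the sum of squared errors (least squares).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.5 * x + 1.0 + rng.normal(scale=1.0, size=x.size)  # a noisy line

# Design matrix: one column for the slope, one column of ones for the intercept.
A = np.column_stack([x, np.ones_like(x)])
(m, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"estimated slope={m:.2f}, intercept={b:.2f}")  # close to 2.5 and 1.0
```

The same objective, minimizing squared error, still sits underneath the regression layers and loss functions of modern machine-learning libraries.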

The second vector, computing power, had a significant historical landmark in 1965, when Intel cofounder Gordon Moore recognized the exponential growth in chip power: the number of transistors on a chip was doubling roughly every year. This projection became known as Moore’s law and, in its popularized form of computing power doubling every 18 months or so, has held remarkably well to the present day. At the time, the state-of-the-art computer was capable of processing on the order of 3 million FLOPS (floating-point operations per second). By 1997, IBM’s Deep Blue achieved 11 gigaFLOPS (11 billion FLOPS), which contributed to its victory over Garry Kasparov, the world chess champion. In 1999 the Graphics Processing Unit (GPU) was unveiled – a computing capability that would prove fundamental for deep learning. In 2002 came the advent of Amazon Web Services (AWS), making computing power easily available and affordable through cloud computing. In 2004 Google published MapReduce, which allows computers to deal with immense amounts of data by using parallel processing, leading to the introduction of Hadoop in 2006, which allowed companies to handle the avalanche of data produced by the web.
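To make the MapReduce idea concrete, here is a toy word count in plain Python – a single-process sketch of the map and reduce phases, not Google’s or Hadoop’s actual distributed implementation:

```python
# Word count in the MapReduce style: map emits (word, 1) pairs, the
# framework groups the pairs by key, and reduce sums the counts per word.
from collections import defaultdict

def map_phase(document):
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(grouped):
    return {word: sum(counts) for word, counts in grouped.items()}

documents = [
    "the web produced an avalanche of data",
    "MapReduce helped tame the avalanche",
]

grouped = defaultdict(list)
for doc in documents:  # on a real cluster, map tasks run in parallel across machines
    for word, count in map_phase(doc):
        grouped[word].append(count)

print(reduce_phase(grouped))  # e.g. {'the': 2, 'avalanche': 2, ...}
```

Because each map task and each per-key reduce is independent, the same pattern scales from this toy example to thousands of machines.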

Finally, the third vector, the data explosion, started in 1991, when the World Wide Web was made available to the public. In the early 2000s we saw wide adoption of broadband, which opened the door to many internet innovations, resulting in the debut of Facebook in 2004 and YouTube in 2005. Around this time, the number of internet users worldwide surpassed one billion.

The year 2007 became a significant landmark. It is at this point that the three vectors began to converge, as the mobile explosion came to life with Steve Jobs’s announcement of the iPhone. From here, several significant advances gave birth to a renewed enthusiasm for Artificial Intelligence. In 2009, Stanford University scientists showed they could train deep belief networks with 100 million parameters using GPUs at a rate 70 times faster than with CPUs. In 2010 alone, some 300 million smartphones were sold, and internet traffic reached 20 exabytes (20 billion gigabytes) per month.

In 2011 a key milestone was achieved: IBM Watson defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings. This achievement was made possible by IBM servers capable of processing 80 teraFLOPS (80 trillion FLOPS). Remember that when Moore’s law was formulated in the mid-1960s, the most powerful computer could process only 3 million FLOPS.

By 2012, significant progress had been made in deep learning for image recognition. Google used 16,000 processors to train a deep artificial neural network to recognize images of cats in YouTube videos without giving the machines any labels describing the images. Convolutional Neural Networks (CNNs) became capable of classifying images with a high degree of accuracy. In the meantime, the data explosion continued: the number of mobile devices on the planet exceeded the number of humans, and by 2017 we were generating 2.5 quintillion bytes of data per day. Computing power reached new heights as Google announced its Tensor Processing Units (TPUs), capable of 180 teraFLOPS.
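For a sense of what a convolutional network actually looks like, here is a deliberately tiny sketch in PyTorch – a hypothetical toy model, nowhere near the scale of the systems described above:

```python
# A minimal convolutional image classifier: stacked convolution + pooling
# layers learn local visual features, then a linear layer maps them to classes.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn 16 local filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One forward pass on a random batch of four 32x32 RGB "images".
logits = TinyCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```

The landmark networks of 2012 followed this same convolution-and-pooling pattern, just with many more layers, millions of parameters, and vastly more data.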

It is at this point in the history of Artificial Intelligence that many people started to realize we might not be too far from achieving, or even exceeding, what is known as Artificial General Intelligence (AGI). To the astonishment of the world, Google’s DeepMind hit another major milestone when its AlphaZero algorithm taught itself to play chess, shogi, and Go (a game far more complex and challenging than chess). Not only did AlphaZero learn entirely through self-play, it defeated the best computer programs, which had been fed knowledge from human experts. And it did this after only eight hours of self-play!

Conclusion

We have some really big and hairy issues in front of us. We don’t know how we will address these difficult questions, but the time to start these conversations is now. As we pointed out earlier, AGI, and shortly after it, Artificial Superintelligence (ASI), are likely to be a major part of our reality within a few short decades. How many members of Congress do you know who have a good grasp of these issues? How many CEOs are thinking about how values guide the decisions of their organizations, and how those values might influence the behavior of a machine superintelligence? If we don’t start addressing these issues now, we may run out of time. The consequences are unfathomable.
