Magicless Artificial Intelligence

Artificial intelligence (AI) is perhaps the most mysterious term in the world of IT. It is a concept that both attracts and raises concerns: artificial intelligence is believed to improve every process that works with data, yet it also creates a sense of threat. Taking a closer look, however, strips artificial intelligence of its mystery, because it turns out to be, to a large extent, a marketing label covering many different approaches that lead to different results. And as we know, catchy marketing terms can sometimes hide more than they reveal.

By the end of the second decade of the 21st century, interest in artificial intelligence had grown so high that AI gives the impression of being an all-powerful technology with the potential to change the entire virtual world. We learn about AI's contributions on a daily basis. It seems like ancient history that artificial intelligence defeated the world chess champion Garry Kasparov in 1997. Nowadays it is common practice for AI to transcribe spoken words into text or to translate automatically between languages.

Artificial intelligence can recognize cancer cells better than top experts and predict stock market movements more efficiently than the most experienced stockbroker. The excitement about the possibilities offered by artificial intelligence is perhaps outweighed only by the fear of possible scenarios of AI dominance. It has, evidently, already begun: AI manipulated voters by analyzing data from their Facebook profiles...

Is artificial intelligence really a breakthrough technology that changes the world? Could it even be, as some warn, the last invention of humanity, one that will end our civilization?

The current enthusiasm about artificial intelligence is actually the third wave. Both previous waves started with great expectations and ended in great disappointment. The first began in 1956 with the very first use of the concept of artificial intelligence in the academic world, at the Dartmouth College workshop. Key representatives of this wave were Allen Newell, Herbert Simon, John McCarthy, and Marvin Minsky. In this period the term artificial intelligence itself was established, and research funding followed. The protagonists were extremely optimistic, assuming that within 20 years machines could handle every job as well as a human being. Even though the results lagged far behind these optimistic predictions, the first chess algorithms able to beat an average chess player came into existence by the mid-1970s. Nevertheless, the rapture about AI died down.

In the early 1980s, a second wave of enthusiasm about AI appeared with the dawn of expert systems, a technology that made it possible to program the way an expert makes decisions. Using this technology a physician, for example, could arrive at a diagnosis through a chain of examinations and logical rules. Several expert systems undoubtedly brought real benefits, such as dam safety monitoring: the moment sensors detected extreme values, the expert system could evaluate the probability of danger.
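The core idea of an expert system can be illustrated with a minimal sketch: domain knowledge is written down as explicit if-then rules that a program evaluates. The sensor names and thresholds below are illustrative assumptions for the dam-monitoring example, not taken from any real system.

```python
# A minimal rule-based sketch of an expert system for dam monitoring.
# All rule thresholds and reading names are hypothetical.

def assess_dam(readings):
    """Apply hand-written expert rules to sensor readings, return warnings."""
    warnings = []
    if readings["water_level_m"] > 120:
        warnings.append("water level critical")
    if readings["seepage_l_per_min"] > 50:
        warnings.append("abnormal seepage")
    if readings["tilt_mm"] > 5 and readings["seepage_l_per_min"] > 20:
        warnings.append("possible structural movement")
    return warnings or ["normal"]

print(assess_dam({"water_level_m": 125, "seepage_l_per_min": 30, "tilt_mm": 6}))
# → ['water level critical', 'possible structural movement']
```

The strength and the weakness of the approach are both visible here: every rule must be anticipated and written down by a human expert in advance.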

Even where such systems are beneficial, reality is so complex and rests on so many logical conditions that it is almost impossible to pre-code. Expert systems failed, for example, at speech recognition. Their inability to solve more complex tasks once again resulted in a decline in interest in artificial intelligence.

Around 2010, artificial intelligence recovered from its ambivalent past and became associated with the advent of improved neural network technology, although the idea itself is quite old and dates back to the 1940s. The aim is to model the way the human brain works. For a long time this idea was no more than a mathematical toy, until the end of the first decade of the 21st century, when a truly effective mathematical tool was developed to "teach" neural networks. This was the beginning of today's craze.
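What "teaching" means here can be shown in its simplest possible form: gradient descent on a single linear neuron. The sketch below fits the relationship y = 2x + 1 from sample data by repeatedly nudging two parameters against the error gradient; real training extends this same idea through many stacked layers (backpropagation). The data and learning rate are illustrative choices.

```python
# A minimal sketch of "teaching" a model: gradient descent on one
# linear neuron, fitting y = 2x + 1 from 21 sample points.
data = [(x / 10.0, 2 * (x / 10.0) + 1) for x in range(-10, 11)]

w, b = 0.0, 0.0   # start from an uninformed model
lr = 0.1          # learning rate (an illustrative choice)
for _ in range(500):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

Nothing here understands what a line is; the parameters simply slide downhill on the error surface until the model reproduces the pattern in the data.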

The improved mathematical model has been translated into increasingly effective applications. New neural networks began to solve previously intractable problems such as image recognition. In less than 5 years of development, neural networks became able to evaluate visual information with fewer errors than human experts make, as the following graph demonstrates:

[Figure: The trend of percentage error in image classification by artificial intelligence. Source: Dean Takahashi, 2016]

In 2011, the artificial intelligence error rate in categorizing images exceeded 25%; by 2015, the error rate of neural networks had fallen below 5%, which equals the human error rate. Thanks to a hundred million years of biological evolution, we are able to evaluate such images with a 95% success rate. Now consider that neural networks needed just 5 years to match that.

This phenomenal success of computer science will undoubtedly bring fundamental change to many areas of society. But is the concern that it could take over humanity realistic? Not really. Behind the catchy marketing term lies a rather mundane mathematical model that does not operate on the same principle as the human mind. Neural networks are not conscious as humans are, nor do they learn new things in the human sense. They cannot look at a problem from a new perspective. Neural networks are "just" applications designed to look for approximate patterns in what may appear to be chaos.

Neural network creators assume that reality can be described by a mathematical function with a relatively high degree of accuracy. The vast majority of neural networks evaluate data spanning tens or hundreds of dimensions. Still, if reality cannot be described by a function, the neural network will not find a solution.
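This "reality as a function" view is concrete: a trained neural network is nothing but a nested mathematical function, weighted sums composed with simple nonlinearities. The sketch below evaluates a tiny two-input network with one hidden layer; the weights are fixed toy values chosen for illustration, not learned from anything.

```python
import math

# A neural network is a nested function: out = f2(W2 · f1(W1 · x + b1) + b2).
# Here: two inputs, a two-unit tanh hidden layer, fixed illustrative weights.

def dense(inputs, weights, biases):
    """One fully connected layer: weighted sum plus bias per output unit."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def forward(x1, x2):
    hidden = [math.tanh(h) for h in dense([x1, x2],
                                          [[1.0, -1.0], [-1.0, 1.0]],
                                          [0.0, 0.0])]
    (out,) = dense(hidden, [[1.5, -1.5]], [0.0])
    return out

print(forward(0.0, 0.0))  # → 0.0  (identical inputs cancel out here)
```

There is no reasoning anywhere in this pipeline, only arithmetic; "intelligence" is whatever pattern the chosen weights happen to encode.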

Current neural network technology may thus seem almost trivial and very far from what we would expect a true "artificial intelligence" to be. Strictly speaking, neural networks solve a relatively simple task, especially when it amounts to fitting a two-dimensional function. Even when the function is multidimensional and contains a number of complex parameters, it still rests on relatively simple foundations. Unlike humans, a computer sees no fundamental difference between a three-dimensional and a six-dimensional reality.

Will artificial intelligence dominate the world? One day, maybe, but not with current technology. The added value of neural network technology is its ability to recognize and process a number of parameters, allowing us to predict future developments instead of relying solely on intuition or expensive expert estimates. The new wave of artificial intelligence will certainly bring changes comparable to the industrial revolution, with significant social impacts. However, we should bear in mind that high expectations often end in disappointment. Today's artificial intelligence is nothing more and nothing less than a good tool for quickly finding patterns in what looks to us like chaos.