AI: Is the super-intelligent takeover coming?

Daniel Guha
In recent years, the introduction of ‘revolutionary’ technologies such as ChatGPT has triggered an influx of articles, many considering the potentially desolate future that lies ahead. However, this bleak outlook is partially caused by a common lack of understanding of the fundamentals of current machine learning. Prominent figures in the field, such as Google’s own Ray Kurzweil, have predicted conscious AI as early as 2029, a claim Kurzweil makes in his book ‘The Age of Spiritual Machines’. But to what extent are these claims realistic? And how close to human intelligence is AI currently?
One of the most common problems with the question of Artificial General Intelligence (AGI) is how we define human intelligence. For example, chess has for many years been seen as a fundamental display of intelligence, with many chess champions being crowned ‘geniuses’. However, as early as 1997, algorithms such as Deep Blue were able to defeat world champions. Does this mean we have already achieved human-level AI? Well, no. Although Deep Blue and similar systems surpass humans at chess, they are extremely specialised and cannot display any semblance of so-called general intelligence; because of this, they could not even play checkers. Several methods have been established to assess general intelligence in an AI, one of the most popular being the Turing test: both a computer and a human converse with a second human enquirer, who does not know which respondent is which. After a set time period, the enquirer must guess which is the computer and which is the human. If the enquirer is fooled, we could conclude that the AI is approaching human general intelligence. However, many problems, such as the lack of detail and instruction in the original description of the test and its limited focus on one type of intelligence, have led many to disqualify it as a judge of general intelligence.
Another core aspect of AGI, one that could help to define general intelligence, is understanding. The notion of understanding is complex, but it generally encompasses the ability to comprehend and learn concepts and to predict how a specific cause leads to a specific effect.
Unlike humans, current AI models lack this understanding. We can use our mental models of the world not only to find correlations between events or data but also to simulate predictions based on prior knowledge. Current machine learning, such as that used to train ChatGPT, discovers patterns in large amounts of data and applies probability-based calculations to output appropriate responses to inputs. However, when an input is sufficiently different from the training set, these models become unreliable and tend to “hallucinate”. This lack of understanding creates a massive issue for AGI, as it severely limits an AI’s ability when facing unknown or unseen data.
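The probability-based pattern matching described above can be sketched with a toy bigram model (a deliberate simplification; ChatGPT’s real architecture is far more sophisticated). The model counts which word follows which in a tiny training text and predicts the most frequent continuation; faced with a word it has never seen, it has nothing to offer, a crude analogue of the unreliability described above.

```python
from collections import Counter, defaultdict

# Toy illustration only (not ChatGPT's actual method): a bigram model
# that predicts the next word purely from co-occurrence counts in its
# training data, with no notion of meaning.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    # Return the most frequent continuation seen in training.
    if word not in counts:
        return None  # unseen input: the model has no basis for an answer
    return counts[word].most_common(1)[0][0]

print(predict("the"))  # "cat": the most frequent continuation
print(predict("dog"))  # None: outside the training distribution
```

A real language model smooths over unseen inputs statistically rather than returning nothing, which is precisely when hallucination becomes likely.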
True AGI is still debated, but it certainly requires the ability to adapt and function in a dynamic environment with infinite possibilities, and to apply what has been learned from previous data and correlations to new and unseen situations. Presently, this is not possible with AI systems. However, there are also plausible arguments that true AGI is arriving soon. One of the most popular rests on the belief that progress and computing power will increase exponentially, as described by Moore’s law, the observation that the number of transistors on a chip doubles roughly every two years, yielding an exponential increase in processing power. On this view, the time needed to reach the distant goal of human-level AI will be much shorter than we humans might predict, precisely because our progress is exponential.
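The exponential-progress argument can be made concrete with a short calculation (a sketch, assuming a clean doubling every two years; real hardware trends are messier and the two-year period is itself a simplification):

```python
# Sketch of the exponential-progress argument: if computing capability
# doubles every two years, how long until a 1000x improvement?
doubling_period_years = 2
factor = 1
years = 0
while factor < 1000:
    factor *= 2          # one doubling
    years += doubling_period_years
print(f"{factor}x after {years} years")  # 1024x after 20 years
```

Ten doublings give roughly a thousandfold improvement in about two decades, which is why exponential forecasts compress timelines that linear intuition would stretch out.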
Overall, the rapid development of AI and its extension into all corners of daily life, from smart assistants and social media feeds to movie recommendations, is a sign of change and an indication of greater reliance on AI in the future. Nevertheless, caution must be taken before placing too much trust in AI. After all, an intelligent computer taking over the world might just be less worrying than an unintelligent one that already has control.