THE AI TAKEOVER: The dawn of the end
Should we be afraid of artificial intelligence? For me, this is a simple question with an even simpler, two-letter answer: no. But not everyone agrees – many people, including the late physicist Stephen Hawking, have raised concerns that the rise of powerful AI systems could spell the end for humanity.
Whether on the big screen or the small one, we have all watched the plot unfold as AI-driven robotic overlords slowly, surely and ruthlessly subjugate the human race. In the movies, however, the human race always wins. If an AI takeover ever happens in real life, the outcome is far less certain!
The omniscient, omnipresent and almost omnipotent power of AI has been called into question time and time again. The future possibilities of the technology are easily compared to the myth of Pandora's Box. But we know what Pandora's Box contained; the future of AI, on the other hand, remains foggy, which is why we keep coming back to the same question: will AI replace us human beings?
Clearly, your view on whether AI will take over the world depends on whether you think it can develop intelligent behaviour surpassing that of humans – something referred to as "superintelligence".
HOW DID IT ALL START: THE BEGINNING
The term "artificial intelligence" was coined in 1956 by John McCarthy, a researcher who later founded AI labs at MIT and Stanford. In the early 1950s, the study of "thinking machines" went by various names: cybernetics, automata theory, and information processing. In those early days, the pioneers of AI did not believe that machines could behave intelligently, and they certainly did not entertain the possibility that machines would eventually far surpass all the intellectual activities of any human!
The early results, however, proved astonishing: computers could solve numerical problems, invent mathematical proofs more elegant than the originals, follow instructions and answer questions in English. Over the years, as the cost of computing declined and processors grew more powerful, AI has become able to run more complex algorithms on more data than ever!
NO PHANTOM IN THE MACHINE
But let us demystify the most popular family of AI techniques, known collectively as "machine learning". These allow a machine to learn a task without being programmed with explicit instructions. That may sound spooky, but the truth is that it all comes down to some rather mundane statistics.
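To make the "mundane statistics" concrete, here is a minimal sketch, in Python, of a machine learning a rule from data rather than being told the rule. Everything in it – the hidden rule y = 2x + 1, the noise level and the hyperparameters – is an illustrative assumption for this toy example, not a description of any particular system:

```python
# A minimal sketch of "learning without explicit instructions":
# fitting a line y = w*x + b to noisy data by gradient descent
# on the mean squared error. All values here are illustrative.
import random

random.seed(0)

# Synthetic data drawn from a hidden rule y = 2x + 1, plus noise.
data = [(x, 2.0 * x + 1.0 + random.gauss(0, 0.1))
        for x in (i / 10 for i in range(50))]

w, b = 0.0, 0.0   # the machine's initial "guess" at the rule
lr = 0.05         # learning rate: how big each corrective nudge is

for _ in range(2000):
    # Average the gradient of the squared error over the dataset.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # Nudge the parameters downhill on the error surface.
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f} (hidden rule was w=2, b=1)")
```

Nothing in that loop "understands" lines or learning; it just repeatedly adjusts two numbers to shrink an average error. Statistics, not sorcery.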
AI TAKEOVER IS A POSSIBILITY
Suppose we humans succeed in building an AI system smart enough to surpass everything the most intelligent humans can collectively do. Such a system would be the most intelligent thing in the world. Being smarter than its own inventors, the machine could then improve itself, or build a better machine. This self-improvement cycle becomes a never-ending loop in which the invention keeps evolving. Such an event, where intelligence keeps improving itself, is called an intelligence explosion. Ultimately, it could make superintelligent AI the last invention humans ever need to make.
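To see why this loop is described as an "explosion", consider a toy model in which each generation builds a successor a fixed percentage more capable than itself. The 10% improvement per generation is a purely illustrative assumption; the point is only that the growth compounds:

```python
# Toy model of recursive self-improvement: each generation designs a
# successor whose capability is a fixed multiple of its own.
# The 10% per-generation improvement is an arbitrary assumption.
capability = 1.0   # generation 0: roughly human-level, by construction
factor = 1.10      # each generation improves on its designer by 10%

for generation in range(1, 51):
    capability *= factor   # the smarter system builds a smarter one
    if generation % 10 == 0:
        print(f"generation {generation:2d}: {capability:6.1f}x human level")
```

After 50 generations the toy system is over a hundred times more capable than it started, even though no single step looked dramatic. That compounding is the core of the intelligence-explosion argument.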
Superintelligent systems could become smart enough to prevent intervention even if we try. They could learn to defend themselves against any external threat, including us. Driven by self-preservation, they could block their own shutdown once their users lose control over them. One depiction of such an event appears in "The Terminator" film series, where a system called Skynet takes over everything from AI weapons to the smallest electronic devices, such as mobile phones, and uses those weapons to safeguard itself from being shut down.
AI is like a mischievous kid that cannot be left alone and constantly needs adult supervision; carelessness in controlling the technology could open the door to an AI takeover. Although AI has proven genuinely beneficial in fields like healthcare and commerce, it needs to be monitored and controlled, because a sufficiently capable system could eventually seize control. If intelligent systems become capable enough, there is no guarantee that their goals will remain compatible with human existence. Hence, AI systems should never be given complete control of anything. Though superintelligent systems and artificial general intelligence (AGI) are still some way off, AI safety should be an integral part of AI research regardless of application – creating superintelligence without safety could, ironically, end up being the dumbest thing we've ever done.