As AI technology grows, so do the concerns surrounding it. With AI that can modify its own code in real time, and AI being used with malicious intent, can humanity keep pace with this explosive growth and address problems as they arise, or should we slow down and inch forward carefully?
Geoffrey Hinton, widely recognized as one of the pioneers of AI, recently quit Google in order to speak out about the dangers of AI. His concerns are not about Google’s own work; he said the company has been “very responsible” in its handling of AI, weighing emerging risks while still pushing innovation forward. Rather, Hinton’s concerns center on the societal impact of AI tools, among them the advent of deepfakes: misleading or false content convincing enough to pass as real.
Anyone who was on the internet a few weeks ago will remember the AI-generated photo of Pope Francis circulating in one of the most fashionable puffer jackets I’ve seen. Harmless fun, but creating images for the purpose of misleading the public is far more concerning. We’ve already weathered an era of fake news being pumped out in droves, and deepfakes make such stories much easier to pass off as real. After all, until now the worst we had was Photoshop, which is easier to detect than a fully generated image. Worse, AI trained to detect deepfakes can assist deepfake generators in creating even better fakes, because the two can be trained against each other, as the sketch below shows. All of this makes proper journalism more important than ever.
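This dynamic is the same one behind generative adversarial networks (GANs). The toy sketch below, written in Python with PyTorch, is a minimal illustration rather than a real deepfake system: the networks are tiny, the “real” data is just random noise, and every name in it is our own stand-in. What it shows is that the generator’s training signal comes directly from the detector, so any improvement to the detector is immediately converted into a better forger.

```python
# Minimal sketch of the adversarial loop behind deepfakes (a GAN).
# Toy stand-ins only: the "real" data is random noise, not images.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                          nn.Linear(128, data_dim))
detector = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(),
                         nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(detector.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim)               # stand-in for real images
    fake = generator(torch.randn(32, latent_dim))  # forged samples

    # 1) Train the detector to tell real from fake.
    d_loss = (loss_fn(detector(real), torch.ones(32, 1)) +
              loss_fn(detector(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool that same detector. Its gradient
    #    signal IS the detector's judgment, so a sharper detector
    #    directly trains a sharper forger.
    g_loss = loss_fn(detector(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

This is why “just build a better detector” is not a complete answer: any detector that is published or leaked becomes a training aid for the next generation of fakes.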
Another concern is the amount of freedom some people are giving AI projects. Take AutoGPT, an agent designed to attempt any task you give it. It can rewrite and improve its own code if it decides its current programming cannot achieve the goal. It’s terrifying to imagine a bad actor handing such an AI a malicious directive that it could analyze and, given time, execute, especially considering how quickly AI can learn from scratch.
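To make the shape of that risk concrete, here is a deliberately stripped-down sketch, in Python, of the kind of loop that drives agents like AutoGPT. Every function in it (plan_next_action, execute, goal_met) is a hypothetical stand-in invented for illustration; in a real agent, each would be backed by a language-model call or an actual tool. The structure is the point: the agent plans, acts, observes, and re-plans on its own until it decides the goal is met.

```python
# Hypothetical, much-simplified agent loop. None of these functions
# reflect AutoGPT's real API; they only illustrate the control flow.

def plan_next_action(goal: str, history: list[str]) -> str:
    """Stand-in: in a real agent, an LLM prompt over the goal and history."""
    return f"step {len(history) + 1} toward: {goal}"

def execute(action: str) -> str:
    """Stand-in: a real agent might run code, browse, or call APIs here."""
    return f"result of ({action})"

def goal_met(goal: str, history: list[str]) -> bool:
    """Stand-in stopping check; here we simply cap the number of steps."""
    return len(history) >= 5

def run_agent(goal: str) -> list[str]:
    history: list[str] = []
    while not goal_met(goal, history):
        action = plan_next_action(goal, history)  # decide the next step
        history.append(execute(action))           # act and record the outcome
    return history

if __name__ == "__main__":
    for line in run_agent("summarize today's news"):
        print(line)
```

Notice that the directive, good or bad, enters this loop exactly once, at the top; every step after that is the agent’s own judgment, with no human approval built in unless one is deliberately added.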
Sci-fi fans will be familiar with Asimov’s Three Laws of Robotics, which dictate that a robot may never harm a human. The same principle should guide AI development if we want to keep something like Skynet from the Terminator series firmly in fiction. While general-purpose AI has its uses, precautions like restricting an AI’s access to the internet can prevent another Microsoft Tay situation from occurring. A learning AI must have heavy restrictions placed on it early to guide its growth in the right direction. Allowing an AI to rewrite its own code is especially concerning, as any primary directive instilled in it could be weakened, removed, or rewritten entirely.
Here at LexIT, we take great care to develop products that adhere to their primary directive while remaining innovative. The potential for AI to be used for harm exists, but it should be treated like any other tool: used well and with care, it can be a great boon to us. AI will progress whether we like it or not. It is up to the tech pioneers of the future to ensure that it grows and innovates safely.