We should embrace AI progress
A technological boom began in early December of last year, and its waves are still felt today. Articles everywhere at the time focused on a brilliant new chatbot, ChatGPT. The New York Times commented that "[it] feels different. Smarter. Weirder. More flexible." Within weeks, millions of people signed up to test OpenAI’s new application, reveling in its potential. I was one of them, probing the limits of ChatGPT, learning how to break it and where its strengths lie.
Three months later, GPT 4.0 was released to the public, and OpenAI announced its research on and planned release of GPT 4.5 and GPT 5.0. Significant public backlash followed these announcements: influential figures, including Elon Musk, signed an open letter calling for a pause in AI development. The signatories argued that all AI research should be put on hold until safety protocols are in place to prevent such systems from posing a risk to humanity. They are being absurd.
If AI is more brilliant than humans, is that a bad thing? The human species has this preconceived notion that we are the most intelligent beings, but what if we aren’t? Should that change how we live? No, we should continue living our lives the way we have been. If GPT 5.0 is smarter and more intuitive than humans, we should capitalize on the potential it presents. Integrating new, extremely powerful AI into our daily lives can help us evolve and stimulate a massive growth period for our society.
Musk and others argued that AI development needs stricter rules and rigorous auditing by outside experts. While I agree that some safety measures are needed, placing too many restrictions on research hinders the progress AI research can achieve. If the proposed rules and regulations box AI development in, any creativity or insight that could have blossomed under looser rules would be crushed by the weight of the regulations. It is also important to realize that these regulations would not actually stop the development of AI. Given the powerful capabilities AI brings to the table, it is likely that government projects are already developing these technologies without regard for such regulations. If the public can at least watch AI development progress in the open, that transparency should bring everyone a greater sense of control and comfort.
Moreover, people should be excited about this new field of exploration. It would be humanity's first encounter with something of comparable intelligence, which should be celebrated rather than feared. Following the Fermi Paradox, let us assume there is no extraterrestrial life with intelligence similar to our own. If new research produces an AI with human-level intelligence, we should not hinder that progress; instead, we should embrace what may be our first and only chance to study such an exciting subject.
While I disagree with stopping AI progress, I understand where the fear comes from. Unfortunately, science fiction has frightened many people into believing that AI will turn on humanity and bring about the end of the world. This is an exceptionally pessimistic way to view AI, because it ignores the potential benefits this research could yield. It is also essential to remember that these fears are grounded not in reality but in fictional stories.
Admittedly, my argument assumes that GPT 5.0 and other AI models can attain artificial general intelligence. They may never reach AGI, instead simply regurgitating information much like GPT 4.0 and GPT 3.5. Even if that is the case, I firmly believe research should continue as we progress toward a new era.