
AI Armageddon: contrivance of mass hysteria or a very real possibility?

By Aleeza Siddiq

In the wake of the climate crisis, public trust in businesses has dropped. What should this teach us about our trust in the ever-growing Artificial Intelligence (AI) industry?

Whether it’s horror stories of nuclear destruction, asteroids, or biblical reckonings, humanity has long flirted with its own destruction. Most of these stories involve human hubris. Fiction for years has centred around us flying too close to the sun. Have we become too desensitised to seriously care about yet another threat to humanity, a threat we have all too willingly already allowed into our homes? 

In November 2022, ChatGPT was released. Its launch incited a huge surge of excitement and wonder surrounding the advancement, and potential threat, of AI. That initial blaze of wonder, curiosity, and fear has burnt down to a flicker over the years. This great feat of human engineering is now nothing more than a tool for schoolchildren's assignments and an email-writing aid. I suppose the great destroyer of human civilisation has to start somewhere.

Discussions of the drawbacks of AI often centre around the potential loss of jobs and the demise of human creativity. However, co-founder of the Machine Intelligence Research Institute (MIRI), Eliezer Yudkowsky, is one of many who believe the true danger of AI is its eventual sentience. Yudkowsky, who in 2023 featured in Time magazine's 100 Most Influential People in AI, warns in Time that "progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment". As Yudkowsky suggests, the AI industry is advancing faster than anyone expected, and the truth is that the more technology is released on the market, the less wonder it incites. ChatGPT is now a familiar friend, one we have allowed onto our screens and into our homes. A friend who, considering the advancement and upcoming release of GPT-5, seems all too likely to stay.

In the past, Yudkowsky has advocated for a moratorium on large training runs and for multinational agreements to track all AI systems. More recently, however, the AI doomsayer has called for an all-out shutdown of AI advancement, which he believes is happening too fast for policymakers to keep up with. Yudkowsky believes that if AI progression isn't halted, or seriously policed, sentient AI will eventually kill us all. It is quite easy to see how Yudkowsky can be viewed as radical and unrealistic in his approach to AI safety. However, I do believe he raises important points about how cautious we must be toward technology we do not fully understand.

This concern about how fast AI is developing echoes the sentiments of Sir Oliver Dowden. Last year, as Deputy Prime Minister, he delivered a speech at the UN General Assembly declaring that "global regulation is falling behind current advancements" in AI. Dowden also noted that the creators of AI themselves aren't fully aware of how it works. If our experts cannot explain AI, how can they predict its future? Yes, an uncaring, all-powerful alien AI takeover may seem far-fetched to the everyday person. However, we are all too familiar with the uncaring, all-powerful CEOs running our economy, our industries, and our governments.

We know that just 100 companies have been responsible for 71% of global industrial greenhouse gas emissions since 1988 – the leading driver of climate change. In 2022, a survey by Edelman found that 64% of the public believe businesses are doing 'mediocre or worse' at keeping their promises to tackle the climate crisis. A 2023 Edelman survey showed that businesses are trusted less than governments and non-governmental organisations (NGOs) on climate issues. This is not surprising considering that, despite aiming for net zero by 2050, BP's oil and gas emissions increased by 8 million metric tons in 2023. Big business has repeatedly proved that it prioritises profit above all else, including public wellbeing. Bearing this in mind, the global AI market was estimated to reach USD 279.22 billion this year – an immensely lucrative industry. Additionally, a further analysis of Edelman data found that trust in companies building AI tools has dropped from 61% to 53% over the last five years. The question then shifts from one of AI sentience to one of trust. Can we trust those in power to prioritise human safety over technological advancement and capital gain?

I do not claim to be an expert in AI, nor do I claim to have a confident theory of our doomsday. I do, however, believe that the public has a right to transparency: we should be able to trust the big monopolies and industry policymakers who have the power to cause our destruction. Alien invasions and terminators, although great fun in fiction, should not mean AI sentience is written off as an unserious topic. The truth is that if we cannot predict the inevitable, we should at the very least prepare for the possible.
