Credit: Jonathan Kemper via Unsplash

Augmented humanity: AI and society

By Radoslav Serafimov

AI models have quickly settled into a comfortable role as our personal translators, problem solvers, and even friends. How did we get here?

ChatGPT was launched just over a year ago, in November 2022. It, and OpenAI's other public AI model, DALL·E (launched a year before ChatGPT), have become a figurative black hole in the tech discourse landscape, with endless articles written about them by hacks, aficionados, and experts alike, alternately singing their praises and professing doom. AI applications are now rapidly insinuating themselves into conversations where no one would have thought they belong, as AI enthusiasts and countless startups try to convince us that all of society's woes have simply been waiting for the advent of AI to be resolved.

Since its launch, ChatGPT has been integrated into Microsoft Bing, blocked outright in Italy over fears of General Data Protection Regulation (GDPR) breaches (the Italian regulator has since issued a list of conditions for the ban to be lifted), and adopted by Yokosuka, Japan, the first city to use the Large Language Model (LLM) for administrative work. It has been a wildly successful, if contentious, year for OpenAI, which is now valued at $29bn, and things are only set to become more convoluted as its AI finds its way into more and more applications, from governmental work to healthcare.

The current threats AI poses, as recognised by ChatGPT itself when prompted, are mainly those of bias ingrained in its responses and misinformation spread through its confident presentation of false information. Researchers have shown that ChatGPT can be coaxed into toxic output with the right prompts (although such attempts are now less successful), and others have demonstrated that it can perpetuate gender stereotypes and medical racism, which is concerning given the model's proposed use in healthcare systems. It is also sobering that 53.1% of average Americans were unable to tell GPT-3.5-written copy from real human writing, a figure which rose to 63.5% for GPT-4, sparking serious concern about the potential for LLMs to be used intentionally to mislead.

The problem of bias in AI stems from the training data, which is drawn from three sources: data publicly available on the internet, information licensed from third parties, and information provided by users and human trainers. While OpenAI deliberately filters what it defines as undesirable information out of the training data, several problems can still persist, and seem to have done so, judging by the numerous examples of discrimination the model has produced over the past year. Firstly, there is the sheer scale of the task: carefully examining every single piece of data fed into the model is unfeasible, given the size of the datasets required to train an LLM as broadly capable as ChatGPT. Filtering out specific "unsuitable" words or phrases is a good first step, but it misses the more subtle biases the data may contain. The issue is exacerbated further when we ask the exhausted question: who sits at the table? The unsurprising answer for the computing sciences is still mostly white men, with only 31% of master's degree holders in the US in 2022 belonging to a gender minority, and only 7% being Black or Hispanic. Without the lived experience of race, gender, and other potential axes of discriminatory power relations being present and centred in the development of these models, not to mention concepts such as intersectionality, there was no way anything but the current outcomes could have occurred.

For what it's worth, both Microsoft (which has invested $11bn in OpenAI) and OpenAI itself have policies setting standards for the development of AI, with Microsoft's being the more concrete and actionable, though both avoid mentioning discrimination specifically. OpenAI's charter pledges "…to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power", which, while noble, is more a sentiment than a policy, and one already subtly subverted by the $20 monthly price tag on access to its most advanced LLM, GPT-4, which excludes the financially vulnerable. It is also worth noting that OpenAI does seem to have done its best to put issues right as they have been raised, but this nevertheless feels like too little, too late, in the face of ChatGPT's 10 billion lifetime visits and ever-increasing ability to mislead.

On the political front, regulation is only now arriving to tackle the issues AI presents, with varying degrees of severity being adopted by different governments. The first international summit on AI regulation was held in the UK on November 1, where 28 countries, notably including the US and China, signed the Bletchley Declaration, a non-binding, and not terribly detailed, commitment to build internationally shared scientific understanding of AI and to adopt respective risk-based policies to manage the rising threats it presents. While the UK's current approach to regulation is arguably lax, US President Joe Biden issued a much more detailed and actionable executive order on October 30, and the EU is in the process of finalising its own detailed set of regulations.

So what does the future of AI look like? It is not slowing down, that much is certain. With such broad adoption over the course of a single year and investment in the field growing rapidly, the technology is set to improve at pace, but increased public scrutiny and political intervention will hopefully mitigate the worst of the harms to come. While ChatGPT seems to hold a slight liberal bias overall, it is important to remember that this is the ideology most beneficial to its creators and most prevalent in the country of its creation. As AI finds future applications in warfare, now is the time to become cognisant of the threats this carries, and to think very carefully about which parts of our lives we wish to outsource to machines built by people who may have very different desires and worldviews from ours. The potential to improve our lives is great, but I fear that on many fronts, ChatGPT's response might just be: "I'm sorry, but I can't assist with that request."
