
View from the street: sound the AI-larm


Phoebe O'Carroll-Moran




You would be forgiven for feeling anxious about artificial intelligence (AI). Amid the continued furore around ChatGPT, the so-called “godfather of AI”, Geoffrey Hinton, quit Google last month, warning of the threat posed by AI, calling for regulation and – happy us – invoking the end of the human race. And as if it couldn’t get worse, we’re all about to lose our jobs.

Concerns were further stoked last week when a group of more than 350 industry leaders penned a one-sentence open letter, warning that: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

The brevity of the letter may reflect the difficulty of getting more than 350 academics and industry leaders to agree on this issue. For that very reason, we should assume that the words and frames of reference they invoked were chosen carefully. But is there enough discourse and information available for policymakers to implement regulation, and at pace?

There is no question that the rapid development of AI – in a largely unfettered and unregulated market – is cause for profound concern, but focusing on the end of humanity might obscure some of the important steps we can take to safeguard information and intelligence.

Despite appearances, chatbots such as ChatGPT or Google Bard are not engaged in dialogue with us. Instead, their output is formed through a series of iterative, probabilistic guesses – what computational linguists call a “stochastic parrot” model of communication. Their “choices” essentially result from the roll of a loaded die, and there are risks in over-interpreting the results.
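The “loaded dice” idea can be sketched in a few lines of Python. The probabilities below are made-up illustrative numbers, not weights from any real model; a genuine chatbot makes the same kind of weighted random draw, but over a vocabulary of tens of thousands of tokens, with probabilities learned from its training data.

```python
import random

# Toy next-word probabilities (illustrative numbers only, not real model weights).
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "law": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "law": {"sat": 0.1, "ran": 0.9},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start, steps, seed=0):
    """Roll the 'loaded dice' repeatedly: each word is a weighted random draw,
    conditioned only on the previous word. No meaning is consulted at any point."""
    random.seed(seed)
    words = [start]
    for _ in range(steps):
        probs = next_word_probs.get(words[-1])
        if probs is None:  # no continuations known for this word
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 3))
```

The sketch makes the column’s point concrete: the program produces fluent-looking word sequences, yet at no step does it know whether what it says is true – it only follows the weighted dice.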

The media is already full of stories about the perils of over-relying on ChatGPT. Lawyer Steven Schwartz used it to draft a court brief and was sanctioned after the document was found to cite fictional cases.

As Schwartz discovered, information pollution is just one place where mishaps can occur, especially when an AI is trained on data of questionable provenance. In ChatGPT’s case, this is believed to include the contents of Wikipedia and Reddit, two sites known to be unreliable at best and to contain misleading information at worst.

Controlling the veracity of information that is fed into a chatbot is part of the challenge we have already been facing around “fake news” and misinformation.

Make no mistake, existential threats cannot be ignored. This week will see Rishi Sunak announce the next steps in the government’s plans to regulate the sector, and AI regulation is expected to feature in his discussions with President Biden in Washington.

But these choices will be informed by the quality of the debate and the discourse. Data biases, misinformation and deepfakes all risk causing mayhem of a more familiar sort. These threats, though pedestrian when compared to those foreshadowed by the likes of Geoffrey Hinton, cannot be eclipsed by more alarmist rhetoric.


Our weekly View From the Street, alongside a useful look-ahead to the coming week, is sent to our subscribers every Monday morning. To get it first, sign up on our website and scroll down to “Subscribe to our briefings”.


Want to find out how we can help your business? View our services here, and get in touch if you want to find out more about Charlotte Street Partners.