It would be too embarrassing to disclose the number of times I’ve found myself pondering the end of Sophie’s Choice. I won’t give anything away, but the Alan J Pakula film starring Meryl Streep poses a genuine moral dilemma.
Although we may not encounter as poignant or difficult a situation as that portrayed in the award-winning film, ethical quandaries are a feature of our daily lives.
However, as unlikely as it may seem that moral decisions could be outsourced to machines in the same way that logical problems can, artificial intelligence (AI) may in fact be capable of coming to our rescue.
A new AI bot called Ask Delphi has been developed to provide ethical advice based on more than 1.7m moral judgements on everyday questions and scenarios. You pose an ethical question and the machine-learning algorithm built into the prototype will give you an answer.
As is often the case with machine-learning projects, however, most of Ask Delphi’s judgements are directly influenced by the framing of its researchers, and the bot is highly sensitive to the nuances of language. So much so that small changes to a question’s phrasing can flip the oracle’s judgement from condemnation to approval and vice versa.
Proving that point is the main purpose of the research programme. As the disclaimer on the demo’s website reads, the project is “intended to study the promises and limitations of machine ethics” – something with which some big corporations and regulators are still grappling.
On Tuesday, Meta (aka Facebook) announced that it would be shutting down its face recognition system on the Facebook app, “as part of a company-wide move to limit the use of facial recognition in our products.” The social media group cited “growing concerns about the use of this technology as a whole” at a time of widespread criticism over Facebook’s attitude towards user privacy and safety.
Yet Meta’s move has done little to dissuade the many voices expressing concern about its “metaverse”, which relies on AI to run most of its platforms’ algorithms. Business leaders such as former Google chief executive Eric Schmidt and Tesla boss Elon Musk have criticised Meta’s aim to create alternative worlds, warning that they engender unhealthy and parasocial relationships.
The race to develop AI is happening at breakneck speed, in most cases raising more questions than answers about its potential risks to human rights. Until safeguards are agreed at a global level, wariness of the most dangerous possibilities posed by AI systems will prove key to preventing them.