Earlier this month, the latest piece of lobotomised, trend-following AI ‘journalism’ began making the rounds. The story concerns New Zealand supermarket chain Pak ‘n’ Save, which recently added an AI recipe-making tool to its app. The language model, dubbed “Savey Meal-Bot”, was meant to create recipes from ingredients submitted by users and was advertised as a way to find uses for leftovers.
You might be tempted to dismiss the idea as a harmless corporate gimmick. But don’t worry, the benevolent news outlets are working overtime to correct your naive thinking. From Forbes to The Guardian to Business Insider, headlines blazed with reports of an AI telling people to make “mustard gas”, “ant poison sandwiches” and “turpentine-flavoured French toast”.
This is, of course, unacceptable. Think of poor little Timmy, who asked the Savey Meal-Bot to help him make a yummy lunch and is now dead after being instructed to raid the cleaning products kept under the sink. Except, of course, that this is a ridiculous and misleading way to frame the issue.
The Meal-Bot was not spitting out deadly recipes unprompted because it was malfunctioning or badly calibrated. It was meant to create recipes from the ingredients given by a user. In the case of the mustard gas recipe, which the bot hilariously described as “the perfect non-alcoholic beverage to quench your thirst and refresh your senses”, New Zealand political reporter Liam Hehir had asked the Meal-Bot what he could make if he “only had water, bleach and ammonia”.
What a surprise, then, that when given only those three ingredients, the Meal-Bot suggested combining them. This is akin to mixing red and blue paint, and then scolding the brush for making purple. The same is true of the other deadly examples, where users intentionally fed the Meal-Bot poisonous ingredients and predictably got poisonous recipes in return.
Of course, the Meal-Bot is no saint, and some of its other recipes (such as Oreo and vegetable stir-fry), while safe to eat, certainly sound unpleasant. But to suggest that the bot presents a serious risk of harm and needs more stringent safety censorship is to miss the point. If one of these journalists took an axe to their own leg, would they be outraged that the axe let them do it? The comparison is exaggerated, and most people would support holding new technology to the highest standard of safety. Still, presenting cases like these as evidence of the risk posed by AI borders on journalistic negligence.
These are cases where human users deliberately gave an AI language model bad input in an attempt to get a bad output. This often works, because language models do not “think” in any sense people would recognise as real thinking. They take input text and use statistical patterns learned from their training data to transform it into output text. Rubbish in, rubbish out, as they say.
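To make the point concrete, here is a deliberately crude sketch in Python. It is not a real language model, and the function name is purely hypothetical; it is a toy stand-in that, like the Meal-Bot, assembles a recipe out of whatever ingredients it is handed, with no notion of whether those ingredients belong in food:

```python
# A toy stand-in for a recipe bot. It has no judgement: it simply
# recombines whatever "ingredients" the user supplies into a cheery
# recipe, much as a language model transforms input text into
# plausible-sounding output text.

def savey_meal_bot(ingredients: list[str]) -> str:
    """Return an upbeat 'recipe' built only from the given ingredients."""
    combined = ", ".join(ingredients)
    return (
        f"Aromatic Surprise! Simply mix your {combined} together "
        "and serve chilled. The perfect way to use up leftovers!"
    )

# Sensible input produces a sensible (if dull) output...
print(savey_meal_bot(["rice", "leftover chicken", "soy sauce"]))

# ...while bad-faith input produces exactly the bad output it asked for.
print(savey_meal_bot(["water", "bleach", "ammonia"]))
```

The “bot” here will happily recommend a bleach-and-ammonia drink, not because it is malicious or broken, but because it does exactly what it was asked to do with the input it was given.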
Filling headlines with examples like these does nothing but bait people into reading worthless “news” stories. Click-hungry online publications are forgoing any commitment to responsible, informative reporting and leaving readers with a warped understanding of AI tools. This vacuous reporting ignores the tools’ real potential harms and encourages excessive censoring of AI systems in the pursuit of harmlessness.
To be clear, the option of stronger safeguards on AI bots is always welcome. And as with every other product on the market, AI tools aimed at children should be held to much higher safety standards.
But we should be spending more time discussing how AI automation has the potential to seriously disrupt white-collar labour, and the economy in general. Or how mass-produced AI content threatens to destroy the shared concept of “truth” or “fact”, which has already been profoundly weakened since the advent of the internet.
But perhaps those stories are not as clickable as someone asking a chatbot to make a poisonous snack, and then being astonished that they got exactly what they asked for.
James Browning is a freelance tech writer and music journalist.