An apocryphal joke says that when a computer translated “The spirit is willing but the flesh is weak” into Russian and back, it spat out “The vodka is strong, but the meat is rotten”.

We don’t know if this really happened, but the story is funny, and it points to a truth. Humans are very good at saying one thing but meaning something completely different, and just as good at deciphering these idiomatic cues. Humans are also really good at perverting everything that touches the internet. Computers are not.

Lately, tech giants like Facebook, Google, and Microsoft have started releasing chatbots and algorithms into the wild that need to make human judgements. Predictably, things have gone wrong. Very, very wrong.

In March this year, Microsoft tested a chatbot called Tay that was connected to Twitter. The chatbot was developed by Microsoft’s Research and Technology teams together with its Bing division, and it was supposed to connect with 18-to-24-year-olds to “experiment with and conduct research on conversational understanding”.

Within hours, the internet it was trying to reach a “conversational understanding” with had transformed it into a racist and a Nazi activist. After sixteen hours of extolling the virtues of Donald Trump, Adolf Hitler, and Trump’s Wall, Microsoft shut the chatbot down and apologised to everyone.

This time it was Facebook that got its fingers smacked. Earlier this year, after criticism that its human editors and curators were biased against conservatives, the company fired all of the humans who could be biased and built an algorithm to do their job: selecting which news items show up in the site’s Trending Topics feature.

Very soon after, the algorithm selected fake news about a Fox News anchor, conspiracy theories, and a strange story about a man masturbating with a chicken sandwich. It would seem that if a news item gets lots of interactions, it could just be that people are shouting at something really dumb. Or that idiots are sharing a dumb story. Or that it’s the internet perverting some cool tech into something ridiculous.

Humans have the capacity to decide instantly whether a news item is worthy or not. Computers do not. Humans can tell that an Alex Jones video about 9/11 being an inside job conducted by Neptunians is dumb. A computer cannot; it just sees the numbers attached to the metrics and the shares. High numbers must mean it’s important, right? Wrong. So, so wrong.
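To see why that goes wrong, here is a minimal sketch of a purely engagement-driven trending ranker. Everything in it is hypothetical, the headlines, the numbers, and the scoring rule alike; it illustrates ranking by volume alone, not Facebook’s actual system.

```python
# A hypothetical, purely engagement-driven trending ranker.
# The headlines, numbers, and scoring rule are all made up for
# illustration; this is not Facebook's actual algorithm.

from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    shares: int
    comments: int

def trending_score(story: Story) -> int:
    # Volume is the only signal. The ranker cannot tell whether people
    # share a story because it matters or because it is absurd.
    return story.shares + story.comments

stories = [
    Story("Central bank adjusts interest rates", shares=1_200, comments=300),
    Story("Anchor fired for secret conspiracy (fake)", shares=90_000, comments=40_000),
]

# Rank stories by raw engagement, highest first.
for story in sorted(stories, key=trending_score, reverse=True):
    print(f"{trending_score(story):>7}  {story.headline}")
```

The fake story tops the list for no reason other than that more people reacted to it. A human glances at the headline and discounts it; the ranker only sees the bigger number.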

With all the fears about automation, that’s a bit of a relief, to be honest. Computers are still too dumb to decide anything, and writers and editors won’t be replaced just yet. The future is bright.