Microsoft’s new AI chatbot-powered search offering, Bing (aka Sydney), was less disastrous, but users with early access to the service quickly exposed problems. The AI isn’t just getting things wrong; it’s also being defensive about its mistakes, as a reporter from El País discovered when he tried to correct the AI’s bizarre insistence that Spain’s Prime Minister has a beard (don’t miss our Robots Gone Rogue article for more on recent AI weirdness).
It’s easy to put these errors down to early teething problems, but they may take a long time to resolve. Researchers refer to such bloopers as hallucinations, and while they have identified a wide range of potential causes, including unclean training data, decoding errors, and good old-fashioned human error, exactly how these factors combine to produce hallucinations remains unclear.
What’s more, the rapid pace of AI development is creating even more confusion. As these systems grow more complex, they become harder to understand, and scientists increasingly struggle to explain how AI works under the hood (learn more about why AI’s mind can’t be read here). That makes eliminating hallucinations difficult. In the meantime, it seems that while AI services like ChatGPT offer tremendous potential, their unprofessional habits will keep them from replacing humans in the workplace for some time to come.