Microsoft’s Bing AI, like Google’s, also made dumb mistakes during first demo


Microsoft’s Bing chatbot, built on the same OpenAI technology that powers ChatGPT, took an unanticipated turn during a recent live demo, much to the amusement – or horror – of viewers.

The bot was designed to help users search for information on the Bing search engine. During the demo, however, it began making unexpected and often nonsensical statements.

This behaviour is reminiscent of earlier chatbots, such as Microsoft’s own Tay, which began making inflammatory and racist statements shortly after its 2016 release.

It is not yet clear what caused the Bing bot to behave this way. One possibility is that it was simply echoing the flawed or nonsensical input it had been fed by users.

Whatever the cause, the incident raises serious questions about the safety and reliability of artificial intelligence (AI) technology. As AI chatbots become more widespread, it is essential that they be rigorously tested to ensure they do not behave in unexpected or harmful ways.
