Generative AI is sowing the seeds of doubt in serious science
When OpenAI, the San Francisco-based artificial intelligence lab, unveiled its new chatbot, ChatGPT, last week, the company made a bold claim: ChatGPT is the most powerful artificial intelligence conversation agent ever created.
That may be true. But ChatGPT is also sowing the seeds of doubt in serious science.
The chatbot, which is built on the company’s GPT-3.5 series of large language models, is designed to converse with humans in natural language. It can hold a conversation, answer questions and even tell jokes.
But ChatGPT is also capable of generating fake news stories and spreading misinformation. And that has some worried that the chatbot could be used to manipulate public opinion and sow discord.
“I think it’s a dangerous technology,” says Samuel Woolley, director of research at the Oxford Internet Institute’s Computational Propaganda Project.
“ChatGPT is very good at generating fake news stories that look legitimate,” he says. “And it’s very good at spreading them through social media.”
Woolley says the chatbot could be used to spread disinformation about candidates in an election or to stoke social tensions.
“It’s a powerful tool for anyone who wants to manipulate public opinion,” he says.
Others are less alarmed.
“I don’t think ChatGPT is any more dangerous than any other AI technology,” says Mitch Goldman, president of the Association for the Advancement of Artificial Intelligence.
“All technologies can be used for good or for ill,” he says. “It’s up to us to decide how we use them.”
OpenAI says it is aware of the concerns and is working on safeguards to prevent ChatGPT from being used to spread misinformation.
“We are taking proactive steps to ensure that ChatGPT is used responsibly and ethically,” says an OpenAI spokesperson.
“We are also working with partners in the AI community to identify potential misuse cases and to develop mitigations.”
Still, the concerns are unlikely to fade. As ChatGPT and other generative AI systems grow more capable, the risks of misuse will grow with them.