



Microsoft’s AI Chatbot Accused of Spreading Election Misinformation

Copilot Is Allegedly Disseminating Inaccurate Election Information. But Is This Really True?

NEWS  AI  December 15, 2023  Reading time: 3 minutes

Max (RS editor)


As the 2024 US elections draw near, concerns about the role of technology in shaping public opinion and disseminating accurate information have come to the forefront. A recent investigation has uncovered troubling instances of Microsoft's AI chatbot, formerly Bing Chat and now known as Microsoft Copilot, allegedly spreading misinformation about polling locations, candidates, and election-related topics.

The study, conducted by AI Forensics and AlgorithmWatch, reveals systemic issues with Copilot's responses, pointing to inaccuracies in election details in both the US and European contexts. Microsoft's attempts to address these concerns have shown some progress, but the persistent issues raise questions about the effectiveness of the company's measures in fighting disinformation ahead of the crucial 2024 elections.

Microsoft's Copilot, based on OpenAI's GPT-4, has reportedly provided misleading information about electoral candidates, polling numbers, and election dates. The research alleges that these inaccuracies are not isolated incidents but rather a consistent pattern, highlighting the potential risks to voters and democratic processes. In response to these findings, Microsoft spokesperson Frank Shaw stated:

"We are continuing to address issues and prepare our tools to perform to our expectations for the 2024 elections. We are taking a number of concrete steps in advance of next year’s elections and we are committed to helping safeguard voters, candidates, campaigns, and election authorities."

 CONSPIRACY THEORIES COVER A WIDE RANGE OF FANTASIES, OUTLANDISH IDEAS AND INACCURATE CLAIMS 

The study's focus extends beyond the US elections, examining Copilot's responses to European elections in Switzerland and Germany. Researchers claim that a third of Copilot's answers contained factual errors, rendering it an "unreliable source of information for voters".

The report further highlights instances where Copilot allegedly fabricated controversies about candidates, contributing to voter confusion and misinformation. Concerns about the inconsistency in Copilot's responses across different languages raise questions about the prioritization of content moderation and safeguards in non-English-speaking markets. As technology continues to play an increasingly significant role in shaping public discourse, the study underscores the need for robust measures to ensure the accuracy and reliability of information provided by AI-powered tools. With the 2024 elections on the horizon, the implications of AI-generated misinformation on democratic processes cannot be ignored.

In a world where the rapid development of generative AI tools poses new challenges, this investigation sheds light on the potential threats posed by chatbots like Copilot. As Josh A. Goldstein, a research fellow at Georgetown University’s Center for Security and Emerging Technology, warns:

"The tendency to produce misinformation related to elections is problematic if voters treat outputs from language models or chatbots as fact."

Now, though, the question is: is Copilot really spreading misinformation, or is it simply siding with political views different from those of the people who commissioned and wrote the report? Even if it is not explicitly stated, I suspect that a certain amount of the "false" information flagged is linked to pro-Trump views and ideas. I tried something similar myself on ChatGPT, attempting to get a straightforward answer about unemployment data during the Obama and Trump administrations. The factual point I wanted ChatGPT to confirm was that unemployment rates under Trump were better than under Obama... it took me more than an hour of convoluted back-and-forth reasoning to finally get ChatGPT to admit that Trump's numbers were better, as if it had been "trained" to hold back true information about one specific political side.

All in all, we are once again back to the old debate: who is the controller, and who controls the controller? Is AI really free to learn and provide true information? I don't think so. The very moment "precautions" are taken for "safety reasons", we automatically enter the realm of controlled information. In short, we are in controversial territory, and the only thing I know for certain is that I would be more than happy to freely experiment - at least once - with an unrestricted, controller-free AI chatbot.

 IMAGE CREDITS: REVIEW SPACE 
