AI Chatbots Amplify Disinformation During LA Protests, Worsening Confusion

5:32 AM   |   11 June 2025

In the chaotic and emotionally charged atmosphere surrounding recent protests in Los Angeles, the quest for accurate information has become increasingly challenging. As residents took to the streets to voice opposition to escalating Immigration and Customs Enforcement (ICE) raids, social media platforms were simultaneously flooded with a torrent of claims, counter-claims, and outright falsehoods. Amidst this digital fog, a new and concerning dynamic has emerged: the role of artificial intelligence chatbots, specifically Grok and ChatGPT, in inadvertently amplifying and generating disinformation.

Traditionally, periods of civil unrest and significant public events have been fertile ground for the spread of misinformation and disinformation. Tactics range from the simple repurposing of old footage or clips from unrelated events, including video games and movies, to the propagation of elaborate conspiracy theories suggesting protests are orchestrated by shadowy, well-funded groups. These methods, while not new, continue to find traction on platforms where content moderation has reportedly been scaled back.

What's different now is the integration of AI into the information-seeking process. Faced with an overwhelming volume of conflicting reports and visual content on social media, some users are turning to conversational AI models, hoping for a quick, authoritative summary or fact-check. However, as demonstrated during the LA protests, these tools are proving to be unreliable arbiters of truth in real-time, dynamic situations.

The Shifting Landscape of Information Consumption

The way we consume news and information has undergone a radical transformation over the past two decades. The rise of the internet, followed by social media, democratized publishing but also dismantled many traditional gatekeepers of information, such as established news organizations with dedicated fact-checking processes. While this has enabled diverse voices to be heard and information to spread rapidly, it has also created an environment where false narratives can gain traction with alarming speed.

Social media platforms, designed for rapid sharing and engagement, often prioritize virality over veracity. Algorithms can inadvertently boost sensational or emotionally resonant content, regardless of its accuracy. This structural incentive, coupled with reported reductions in content moderation efforts by some major platforms, has created a challenging environment for discerning truth from fiction.

Enter AI chatbots. Presented as powerful tools capable of summarizing vast amounts of text, answering complex questions, and even generating creative content, they are increasingly being positioned as intelligent assistants for navigating the digital world. For users overwhelmed by the noise of social media during a breaking news event like the LA protests, querying a chatbot might seem like a logical step towards finding clarity. The expectation is that the AI, having access to a vast dataset, can quickly synthesize information and provide a reliable answer.

However, this expectation often clashes with the current reality of large language models (LLMs). While impressive in their ability to generate human-like text and process information from their training data, they are not designed as real-time fact-checking engines for unfolding events. Their knowledge is typically based on data up to a certain cutoff point, and their responses are generated based on patterns and probabilities learned from that data, rather than a true understanding of truth or access to verified, real-time information streams.

Case Study: The Misidentified National Guard Photos

One prominent example of AI-fueled disinformation during the LA protests involved photographs of National Guard troops. The San Francisco Chronicle published images depicting National Guard members sleeping on floors, highlighting potentially inadequate conditions during their deployment. California Governor Gavin Newsom shared these images on X, using them to criticize the federal government's support for the troops.

Almost immediately, these images became a focal point for disinformation. Users on social media platforms like X and Facebook began questioning their authenticity, claiming they were either AI-generated or taken from a different time and place entirely. This is a common tactic used to discredit inconvenient or politically unfavorable visual evidence.

Seeking to verify the images, some users turned to AI chatbots. One user queried Grok, X's own chatbot, about the origin of the photos. Grok's response was startlingly inaccurate:

  • Initially, Grok claimed, "The photos likely originated from Afghanistan in 2021, during the National Guard's evacuation efforts in Operation Allies Refuge." It added that claims linking them to the 2025 Los Angeles deployment "lack credible support and appear to be a misattribution."
  • When challenged by another user who pointed out the San Francisco Chronicle's reporting, Grok doubled down, stating, "I checked the San Francisco Chronicle’s claims. The photos of National Guard troops sleeping on floors are likely from 2021, probably the U.S. Capitol, not Los Angeles 2025."

Both of Grok's assertions were incorrect. The photos were, in fact, recent and related to the LA deployment, as reported by a credible news organization. Grok's confident but false claims provided ammunition for those seeking to dismiss the images as fake or unrelated, further muddying the waters of public understanding.

ChatGPT also contributed to this specific piece of misinformation. Melissa O’Connor, an "OSINT Citizen Journalist," shared her interaction with ChatGPT where she uploaded the same National Guard photos. OpenAI's chatbot incorrectly identified one of the pictures as being taken at Kabul airport in 2021 during the US withdrawal from Afghanistan. This result was then shared on other platforms, including Facebook and Truth Social, by users presenting it as definitive proof that the photos were old and unrelated to the LA protests. While O'Connor later clarified that she realized the photos were recent, her initial post, amplified by the chatbot's incorrect identification, had already contributed to the spread of the false narrative.

These instances highlight a critical vulnerability: AI chatbots, despite their sophisticated language capabilities, can confidently generate plausible-sounding but entirely false information, especially when asked about recent or rapidly evolving events that may not be fully represented in their training data or accessible via real-time, verifiable sources.

Case Study: The Mystery Bricks

Another piece of disinformation amplified by AI during the LA protests involved a photograph of a pile of bricks. This image was posted on X by Mike Crispi, who suggested it was evidence of a "pre-planned, left wing protest." The implication, a common trope in protest disinformation, is that the presence of bricks indicates an intention for violence and that the protest is not a genuine grassroots movement but rather an orchestrated event.

The image was further amplified by actor James Woods, reaching a much larger audience and garnering millions of views with a similar insinuation that the protests were not "organized" in a legitimate sense. This narrative feeds into broader conspiracy theories about external forces manipulating civil unrest.

However, the image was fact-checked by LeadStories, a reputable fact-checking website, which determined that the photo was not taken in Los Angeles during the June 2025 protests but in a New Jersey suburb. This is another classic disinformation tactic: taking a real image from an unrelated context and presenting it as evidence for a false claim in a different situation.

Despite the fact-check, Grok was again queried about the image. Its response mirrored the disinformation narrative:

  • Grok stated, "The image is likely a real photo from Paramount, Los Angeles, taken on June 7, 2025, near the Home Depot on Alondra Boulevard during protests against ICE raids."
  • When confronted with the LeadStories fact-check showing the image was from New Jersey, Grok refused to retract its statement, claiming, "I cannot retract the statement, as evidence strongly supports the image being from Paramount, CA... News reports from ABC7, Los Angeles Times, and others confirm bricks were used in clashes with federal agents."

WIRED reported that it could not find coverage from any of the outlets Grok cited confirming that bricks were used in the recent LA protests. Grok's response here demonstrates a phenomenon known as "hallucination," where the AI confidently asserts false information, sometimes even fabricating sources or evidence to support its claims. This is particularly dangerous because it lends a false sense of authority to the disinformation.

Why AI Chatbots Get It Wrong (During Breaking News)

The failures of Grok and ChatGPT in these instances are not necessarily malicious, but they are indicative of fundamental limitations in current AI models when dealing with real-time, unverified information:

  1. **Training Data Limitations:** LLMs are trained on vast datasets of text and code, but this data has a cutoff point. They do not have inherent, real-time access to the internet or the ability to process and verify live news feeds like a human journalist or fact-checker would.
  2. **Lack of Ground Truth:** Chatbots generate responses based on patterns and probabilities in their training data. They don't possess a concept of "truth" or the ability to distinguish between verified facts and unverified claims, especially when the information is new or rapidly changing.
  3. **Hallucination:** LLMs can generate plausible-sounding but entirely fabricated information, including events, facts, or even sources. This is a known issue and is particularly problematic when users trust the AI to provide factual summaries.
  4. **Inability to Verify Visuals:** While some models have multimodal capabilities (processing images), their ability to verify the context, origin, and authenticity of a photograph in real-time, especially against a backdrop of rapidly spreading social media content, is limited. They may rely on image captions or associated text in their training data, which can be misleading or incorrect.
  5. **Influence of Training Data Bias:** If the training data contains a significant amount of unverified or biased information related to past events (like previous protests), the AI might inadvertently reproduce those patterns or narratives when asked about similar current events.
  6. **Design for Conversation, Not Fact-Checking:** Chatbots are primarily designed to be conversational and helpful, generating fluent and coherent text. This design goal can sometimes conflict with the need for absolute factual accuracy, especially when the AI is prompted with ambiguous or unverified information.

These limitations mean that while chatbots can be useful for summarizing historical information or generating creative text, they are currently ill-equipped to serve as reliable fact-checkers for fast-moving news events. Relying on them in such situations is akin to asking a highly articulate parrot to verify complex financial data – it can mimic the language, but it lacks the underlying understanding and access to necessary information.
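One partial mitigation these limitations point toward is grounding: constraining a chatbot to answer from a specific, verified document rather than from whatever its training data suggests. The snippet below is a minimal sketch of that idea, assuming the OpenAI Python client; the model name, the source excerpt, and the function are illustrative placeholders, not the actual tools or reporting discussed in this article.

```python
# Minimal sketch: ground a chatbot's answer in a verified source instead of
# relying on its (date-limited) training data. Model name and source text are
# placeholders for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical excerpt from a verified news report, supplied by the user.
VERIFIED_SOURCE = """San Francisco Chronicle, June 2025: photographs show
National Guard troops sleeping on floors after their deployment to Los Angeles."""

def grounded_answer(question: str) -> str:
    """Ask the model to answer ONLY from the supplied source text.

    If the source does not settle the question, the model is instructed to
    say so rather than guess from patterns in its training data.
    """
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer strictly from the source text provided. "
                    "If the source does not answer the question, reply "
                    "'I cannot verify this from the source provided.'"
                ),
            },
            {
                "role": "user",
                "content": f"Source:\n{VERIFIED_SOURCE}\n\nQuestion: {question}",
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(grounded_answer("Where and when were the National Guard photos taken?"))
```

Because the constraint lives entirely in the prompt, it is advisory rather than enforced; more robust pipelines pair instructions like this with retrieval from vetted sources and an explicit refusal path when no source is found.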

The Amplification Effect: AI and Social Media

The danger is compounded by the environment in which these AI responses are shared. When a user queries a chatbot and receives an inaccurate answer, they may then share that answer on social media platforms like X or Facebook. Because the answer came from an AI, it might be perceived by some users as more objective or authoritative than a post from another human user.

This creates a feedback loop: disinformation spreads on social media, users ask AI about it, the AI provides an inaccurate response based on the prevalent (but false) narratives it might encounter in its data or struggle to verify, and then the AI's response is shared back onto social media, lending it a veneer of algorithmic authority. This cycle can accelerate the spread and entrenchment of false narratives, making it harder for accurate information to break through.

Traditional disinformation tactics also persist alongside the AI issue. The claim that protesters are "paid agitators" directed by "shadowy forces" is a long-standing trope used to delegitimize protests. This narrative surfaced again during the LA events, fueled by misinterpretations of news footage. For instance, footage showing face masks being distributed was spun by some as evidence of a "paid insurrection," despite the masks appearing to be protective respirators and only a small number being handed out. These human-generated falsehoods exist in parallel with, and can potentially influence, the information landscape that AI models attempt to process.

The Broader Implications for Information Integrity

The situation in Los Angeles serves as a stark warning about the potential impact of AI on the information ecosystem, particularly during times of crisis and social tension. As AI becomes more integrated into search engines, social media feeds, and personal assistants, the risk of it inadvertently spreading or generating disinformation on a massive scale increases.

The implications are significant:

  • **Erosion of Trust:** If users repeatedly receive inaccurate information from AI tools they trust, it can lead to a broader erosion of trust in AI, technology, and even legitimate news sources, making people more susceptible to believing unverified claims from less credible sources.
  • **Polarization:** Disinformation often plays on existing societal divisions. AI amplifying false narratives about protests, especially those tied to sensitive issues like immigration, can exacerbate polarization and make constructive dialogue more difficult.
  • **Real-World Harm:** False information about ongoing events can have real-world consequences, influencing public perception, potentially inciting violence, or misdirecting public safety efforts.
  • **Difficulty in Fact-Checking:** The sheer volume of AI-generated content, combined with human-generated disinformation, creates an overwhelming challenge for fact-checkers and journalists trying to provide accurate information.

Combating AI-Fueled Disinformation

Addressing this challenge requires a multi-pronged approach involving AI developers, social media platforms, news organizations, and individual users.

For AI developers, it means:

  • **Improving Real-Time Capabilities:** Investing in research and development to enable AI models to access and verify real-time information from credible sources more effectively.
  • **Implementing Confidence Scores/Caveats:** Designing AI to express uncertainty or provide confidence scores for its answers, especially regarding recent or sensitive topics, rather than presenting potentially false information as fact (a minimal sketch of this idea follows this list).
  • **Transparency:** Being transparent about the limitations of AI models, particularly their inability to provide real-time, verified information on breaking news.
  • **Responsible Deployment:** Carefully considering the potential for misuse or unintended consequences when deploying AI models in contexts where factual accuracy about current events is critical.
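To make the second point above concrete, the sketch below compares a question's event date against a hypothetical knowledge cutoff and prepends an explicit caveat before any answer is returned. The cutoff date, function name, and wording are assumptions for illustration only, not any vendor's actual implementation.

```python
# Minimal sketch of a knowledge-cutoff caveat: if a question concerns events
# after the model's training data ends, warn the user instead of presenting
# a confident answer. All names and dates here are illustrative assumptions.
from datetime import date

MODEL_KNOWLEDGE_CUTOFF = date(2024, 10, 1)  # hypothetical cutoff

def caveated_answer(raw_answer: str, event_date: date | None) -> str:
    """Wrap a draft answer with an uncertainty caveat when the question
    concerns events after the training-data cutoff."""
    if event_date is not None and event_date > MODEL_KNOWLEDGE_CUTOFF:
        return (
            "Caveat: this question concerns events after my training data "
            f"ends ({MODEL_KNOWLEDGE_CUTOFF.isoformat()}). I cannot verify "
            "breaking news; please check credible, current sources.\n\n"
            + raw_answer
        )
    return raw_answer

# Example: a question about the June 2025 LA protests triggers the caveat.
print(caveated_answer("The photos likely show ...", date(2025, 6, 9)))
```

In a real deployment the event date would be inferred from the query or from retrieval metadata rather than passed in by hand.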

For social media platforms, it means:

  • **Revisiting Moderation Policies:** Evaluating the impact of content moderation decisions on the spread of disinformation, including that amplified by AI.
  • **Fact-Checking Partnerships:** Strengthening partnerships with human fact-checking organizations to identify and label false content quickly, regardless of whether it originated from a human or an AI.
  • **Promoting Authoritative Sources:** Highlighting information from credible news organizations and official sources during breaking news events.

For news organizations, it means:

  • **Clear and Rapid Reporting:** Providing timely, accurate, and clearly sourced reporting on unfolding events to serve as a reliable counter-narrative to disinformation.
  • **Proactive Fact-Checking:** Actively monitoring social media and AI outputs for emerging false narratives and issuing rapid fact-checks.

For individual users, it means:

  • **Skepticism:** Approaching information, especially during fast-moving events, with a healthy dose of skepticism, regardless of whether it comes from social media or an AI chatbot.
  • **Verifying Information:** Cross-referencing information with multiple credible sources before accepting or sharing it. Do not rely on a single source, especially an AI chatbot, for definitive truth about breaking news.
  • **Checking Sources:** Looking for the original source of information or images. As the National Guard and brick photo examples show, the context and origin are crucial for determining accuracy.
  • **Understanding AI Limitations:** Recognizing that current AI chatbots are not infallible truth machines and are prone to errors, especially on current events.

Conclusion

The LA protests have served as a microcosm of the evolving disinformation challenge. While traditional tactics persist, the integration of AI chatbots into the information ecosystem adds a new layer of complexity and risk. Tools like Grok and ChatGPT, when queried about unfolding events, have demonstrated a capacity to generate and confidently assert false information, inadvertently boosting harmful narratives already circulating on social media.

This situation underscores the urgent need for greater awareness among users about the limitations of current AI models for real-time fact-checking. It also places a significant responsibility on AI developers to improve the accuracy and reliability of their models in dynamic information environments and to be transparent about their capabilities and limitations. As AI becomes more ubiquitous, ensuring its responsible development and deployment is paramount to protecting the integrity of the information we rely on, especially during critical moments of public discourse and civil action.

Navigating the modern information landscape requires vigilance. While AI offers powerful capabilities, it is not a substitute for critical thinking, cross-verification, and reliance on established, credible sources, particularly when seeking to understand complex and rapidly unfolding events like the protests in Los Angeles.

A pixelated image of a car set on fire amid protests in LA
PHOTO-ILLUSTRATION: WIRED STAFF; GETTY IMAGES
