
Don’t fall for AI-powered disinformation attacks online – here’s how to stay sharp


(Image: JuSun/Getty Images)

ZDNET’s key takeaways

  • AI-powered narrative attacks, or misinformation campaigns, are on the rise. 
  • These can create real business, brand, personal, and financial harm. 
  • Here are expert tips on how to spot and protect yourself against them. 

Last month, an old friend forwarded me a video that made my stomach drop. It showed what appeared to be violent protesters streaming down the streets of a major city, holding signs that accused government and business officials of “censoring our voice online!”

The footage looked authentic. The audio was clear. The protest signs appeared realistically amateurish.

But it was completely fabricated.

That didn’t make the video any less effective, though. If anything, its believability made it more dangerous. That single video had the power to shape opinions, inflame tensions, and spread across platforms before the truth caught up. This is the hallmark of a narrative attack: not just a falsehood, but a story carefully crafted to manipulate perception on a large scale.

Why ‘narrative attacks’ matter more than ever

Narrative attacks, as research firm Forrester defines them, are the new frontier of cybersecurity: AI-powered manipulations or distortions of information that exploit biases and emotions, like disinformation campaigns on steroids. 

I use the term “narrative attacks” deliberately. Terms like “disinformation” feel abstract and academic, while “narrative attack” is specific and actionable. Like cyberattacks, narrative attacks demonstrate how bad actors exploit technology to inflict operational, reputational, and financial harm. 

Also: Navigating AI-powered cyber threats in 2025: 4 expert security tips for businesses

Think of it this way: a cyberattack exploits vulnerabilities in your technical infrastructure, while a narrative attack exploits vulnerabilities in your information environment. This article gives you practical tools to identify narrative attacks, verify suspicious information, and safeguard yourself and your organization. We’ll cover detection techniques, verification tools, and defensive strategies that work in the real world.

A perfect storm of technology, tension, and timing

Several factors have created the ideal conditions for narrative attacks to flourish. These dynamics help explain why we’re seeing such a surge right now:

  • AI tools have democratized content creation. Anyone can generate convincing fake images, videos, and audio clips using freely available software. The technical barriers that once limited sophisticated narrative campaigns have largely disappeared.

  • Social media platforms fragment audiences into smaller, more isolated communities. Information that might have been quickly debunked in a more diverse media environment can circulate unopposed within closed groups. Echo chambers amplify false narratives while insulating their members from correction.

  • Content moderation systems struggle to keep pace with the volume and sophistication of synthetic media. Platforms rely heavily on automated detection, which consistently lags behind the latest manipulation techniques. Human reviewers cannot examine every piece of content at scale.

Meanwhile, bad actors are testing new playbooks, combining traditional propaganda techniques with cutting-edge technology and cyber tactics to create faster, more targeted, and more effective manipulation campaigns.

Also: 7 ways to lock down your phone’s security – before it’s too late

“The incentive structures built into social media platforms benefit content that provokes controversy, outrage, and other strong emotions,” said Jared Holt, an experienced extremism researcher who recently worked as an analyst for the Institute for Strategic Dialogue. Tech companies, he argued, reward engagement with inorganic algorithmic amplification to keep users on their services longer, generating more profit.

“Unfortunately, this also created a ripe environment for bad actors who inflame civil issues and promote social disorder in ways that are detrimental to societal health,” he added.

Old tactics, new tech

Today’s narrative attacks blend familiar propaganda methods with emerging technologies. “Censorship” bait is a particularly insidious tactic. Bad actors deliberately post content designed to trigger moderation actions, then use those actions as “proof” of systematic suppression. This approach radicalizes neutral users who might otherwise dismiss extremist content.

Also: GPT-5 bombed my coding tests, but redeemed itself with code analysis

Coordinated bot networks have become increasingly sophisticated at mimicking human behavior. Modern bot armies use varied posting schedules, attempt to sway influencers, post diverse content types, and follow realistic engagement patterns. They’re much harder to detect than the automated accounts we saw in previous years.

Deepfake videos and AI-generated images have become remarkably sophisticated. We’re seeing fake footage of politicians making inflammatory statements, synthetic images of protests that never happened, and artificial celebrity endorsements. The tools used to create this media are becoming increasingly accessible as the generative AI models behind them grow more capable.

Synthetic eyewitness posts combine fake personal accounts with geolocation spoofing. Attackers create seemingly authentic social media profiles, complete with personal histories and local details, and use them to spread false firsthand reports of events. These posts often include manipulated location data to make them appear more credible.
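
If you want to check such a post yourself, one practical step is inspecting a photo’s embedded metadata. Here’s a minimal Python sketch using the open-source Pillow library (9.4 or newer) to pull GPS EXIF tags; the file name is a placeholder. Keep in mind that missing metadata proves nothing on its own (platforms routinely strip it), and metadata that is present can itself be spoofed.

    # pip install Pillow
    from PIL import Image, ExifTags

    def gps_tags(path: str) -> dict:
        """Extract raw GPS-related EXIF tags from an image, if any."""
        exif = Image.open(path).getexif()
        gps_ifd = exif.get_ifd(ExifTags.IFD.GPSInfo)  # empty dict when absent
        return {ExifTags.GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

    # Placeholder file name -- substitute the image you're checking.
    tags = gps_tags("eyewitness_photo.jpg")
    if not tags:
        print("No GPS metadata -- common for screenshots and re-uploaded images.")
    else:
        print("GPS tags found (these can still be spoofed):", tags)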

Agenda-driven amplification often involves fringe influencers and extremist groups deliberately promoting misleading content to mainstream audiences. They frequently present themselves as independent voices or citizen journalists while coordinating their messaging and timing to maximize their impact.

Also: Beware of promptware: How researchers broke into Google Home via Gemini

The list of conspiracy fodder is endless, and recycled conspiracies often get updated with contemporary targets and references. For example, the centuries-old antisemitic trope of secret cabals controlling world events has been repackaged in recent years to target figures like George Soros, the World Economic Forum, or even tech CEOs under the guise of “globalist elites.” Another example is modern influencers transforming climate change denial narratives into “smart city” panic campaigns. Vaccine-related conspiracies adapt to target whatever technology or policy is currently controversial. The underlying frameworks remain consistent, but the surface details are updated to reflect current events. 

During recent Los Angeles protests, conspiracy videos circulated claiming that foreign governments orchestrated the demonstrations. An investigation revealed that many of these videos originated from known narrative manipulation networks with ties to overseas influence operations. Ahead of last year’s Paris Olympics, we saw narratives emerge about “bio-engineered athletes,” potential “false flag” terrorist attacks, and other manipulations. These stories lack credible sources but spread rapidly through sports and conspiracy communities.

Fake local news sites have resurfaced across swing states, publishing content designed to look like legitimate journalism while promoting partisan talking points. These sites often use domain names similar to those of real local newspapers to increase their credibility.

A recent viral video appeared to show a major celebrity endorsing a political candidate. Even after verification teams proved the footage had been manipulated, polls showed that many people continued to believe the endorsement was genuine. The false narrative persisted despite the debunking.

How to spot narrative attacks 

The most important thing you can do is slow down. Our information consumption habits make us vulnerable to manipulation. When you encounter emotionally charged content, especially if it confirms your existing beliefs or triggers strong reactions, pause before sharing.

Also: Syncable vs. non-syncable passkeys: Are roaming authenticators the best of both worlds?

“Always consider the source,” said Andy Carvin, an intelligence analyst who recently worked for the Atlantic Council’s Digital Forensic Research Lab. “While it’s impossible to know the details behind every potential source you come across, you can often learn a lot from what they say and how they say it.”

Do they speak in absolute certainties? Do they proclaim they know the “truth” or “facts” about something and present that information in black-and-white terms? Do they ever acknowledge that they don’t have all the answers? Do they attempt to convey nuance? Do they focus on assigning blame to everything they discuss? What’s potentially motivating them to make these claims? Do they cite their sources?

Media literacy has become one of the most critical skills for navigating our information-saturated world, yet it remains woefully underdeveloped across most demographics. Carvin suggests giving strong consideration to your media consumption habits. When scrolling or watching, ask yourself three critical questions: Who benefits from this narrative? Who is amplifying it? What patterns of repetition do you notice across different sources?

“It may not be possible to answer all of these questions, but if you put yourself in the right mindset and maintain a healthy skepticism, it will help you develop a more discerning media diet,” he said. 

Also: I found 5 AI content detectors that can correctly identify AI text 100% of the time

Before sharing content, try these tips: 

  • Spend 30 seconds checking the source’s credibility and looking for corroborating reports from different outlets. 
  • Use reverse image searches to verify photos (a simple image-matching sketch follows this list), and be aware of when content triggers strong emotional reactions, as manipulation often targets feelings over facts. 
  • Follow journalists and experts who regularly cite sources, correct their own mistakes, and acknowledge uncertainty. 
  • Diversify your information sources beyond social media platforms, and practice reading past headlines to understand the full context. 
  • When evaluating claims, again ask who benefits from the narrative and whether the source provides a transparent methodology for their conclusions.
  • Watch for specific red flag behaviors. Content designed to trigger immediate emotional responses often contains manipulation. Information that spreads unusually fast without transparent sourcing should raise suspicions. Claims that cannot be verified through credible sources require extra scrutiny.
  • Pay attention to the role of images, symbols, and repetition in the content you’re evaluating. Manipulative narratives often rely heavily on visual elements and repeated catchphrases to bypass critical thinking.
  • Be especially wary of “emotional laundering” tactics that frame outrage as civic duty or moral responsibility. Attackers often present their false narratives as urgent calls to action, making audiences feel that sharing unverified information is somehow patriotic or ethical.
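
For the reverse-image-search step above, perceptual hashing offers a complementary check: it can flag when a “breaking” photo is actually a recycled or lightly edited copy of a known image. Below is a minimal sketch in Python using the open-source Pillow and imagehash libraries; the file names are placeholders for images you supply, and the threshold is a judgment call rather than an established standard.

    # pip install Pillow imagehash
    from PIL import Image
    import imagehash

    def likely_same_image(path_a: str, path_b: str, threshold: int = 8) -> bool:
        """Compare two images using a perceptual hash.

        Small Hamming distances suggest the same underlying picture,
        even after resizing, recompression, or minor edits.
        """
        hash_a = imagehash.phash(Image.open(path_a))
        hash_b = imagehash.phash(Image.open(path_b))
        return (hash_a - hash_b) <= threshold  # subtraction gives Hamming distance

    # Placeholder file names -- substitute your own images.
    if likely_same_image("viral_post.jpg", "archived_original.jpg"):
        print("Likely the same underlying image -- possibly recycled content.")
    else:
        print("No close match; keep verifying through other means.")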

Tools that actually help

Here are a few additional apps and websites that can guide you to authentic content. These verification tools should be used to supplement — not replace — human judgment and traditional verification methods. But they can help identify potential red flags, provide additional context, and point you toward reliable information.

  • InVID provides reverse image search capabilities and metadata analysis for photos and videos, making it particularly useful for verifying whether images have been taken out of context or digitally manipulated.

  • Google Lens offers similar reverse image search functionality with a user-friendly interface. It can help you trace the source of suspicious images.

  • Deepware Scanner specifically targets deepfake detection, although it works more effectively with obvious manipulations than with subtle ones.

  • The Bellingcat digital toolkit features various OSINT (Open Source Intelligence) plugins that aid in verifying sources, checking domain registration information, and tracing the dissemination of content across platforms.

  • WHOIS and DNS history tools let you investigate the ownership and history of websites, which is crucial when evaluating the credibility of unfamiliar sources (see the domain-age sketch after this list).

  • Copyleaks uses advanced AI to detect plagiarism and AI-generated content. While aimed primarily at educators and content creators, it can also help you determine whether text was machine-generated or copied from another source; note that it checks a text’s origin, not its factual accuracy.

  • Facticity AI is a relatively new entrant focused on rating the factual integrity of online content. Its real value lies in using AI to detect narrative framing and misinformation patterns, though it is still maturing in terms of consumer accessibility and adoption.

  • AllSides shows news stories from left, center, and right perspectives side by side, with media bias ratings that reflect the average judgment of Americans across the political spectrum, so you can see the whole picture. Available as both a website and a mobile app.

  • Ground News compares how different news publishers frame the same story, showing bias ratings and letting users read multiple perspectives across the political spectrum. Unlike traditional news aggregators, whose crowdsourced signals and algorithms tend to reward clickbait and reinforce pre-existing biases, it organizes coverage by media bias, geographic location, and time. Available as a website, mobile app, and browser extension.

  • Ad Fontes Media created the Media Bias Chart, which uses a team of analysts from across the political spectrum to rate media sources on two scales: political bias (from left to right) on the horizontal axis and reliability on the vertical axis. Offers both free static charts and premium interactive versions.

  • Media Bias Detector, developed by the University of Pennsylvania, tracks and exposes bias in news coverage by analyzing individual articles rather than relying solely on publisher-level ratings. Using AI, machine learning, and human raters, it tracks the topics, events, facts, tone, and political lean of coverage from major news publishers in near real time. The tool reveals important patterns, such as headlines whose political lean differs from the articles they represent.

  • RumorGuard, created by the News Literacy Project, helps identify credible information and debunk viral rumors by teaching users to verify news using five key credibility factors. It goes beyond traditional fact-checking by using debunked hoaxes, memes, and other misinformation as starting points for learning news literacy skills, and it categorizes misinformation by topic with accompanying educational resources about media literacy.

  • Compass Vision and Context: My day job is at Blackbird.AI, where my teammates and I help organizations identify and respond to manipulated narratives. We built Compass Context to help anyone, regardless of expertise and experience, analyze internet content for manipulated narratives. The app goes beyond fact-checking to interpret the intent, spread, and potential harm of narrative attacks. While initially built for enterprise and government, it surfaces critical information about who is behind a campaign, how it’s scaling, and whether it’s likely coordinated, making it powerful for advanced users who want more than a true/false score.
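
To make the WHOIS step above concrete, here’s a short sketch using the open-source python-whois package to surface a domain’s age. A “local newspaper” whose domain was registered a few weeks ago is a classic red flag. The domain below is a placeholder, and the 180-day cutoff is an arbitrary illustration, not a standard.

    # pip install python-whois
    from datetime import datetime, timezone

    import whois  # provided by the python-whois package

    def domain_age_days(domain: str):
        """Return the approximate age of a domain in days, or None if unknown."""
        record = whois.whois(domain)
        created = record.creation_date
        if isinstance(created, list):  # some registrars return several dates
            created = min(created)
        if created is None:
            return None
        if created.tzinfo is None:
            created = created.replace(tzinfo=timezone.utc)
        return (datetime.now(timezone.utc) - created).days

    # Placeholder domain -- substitute the site you're evaluating.
    age = domain_age_days("example-local-news.com")
    if age is not None and age < 180:
        print(f"Domain is only {age} days old -- treat its reporting with caution.")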

How to talk about narrative attacks – without fueling them

The language you use when discussing false information significantly impacts how others perceive and respond to it. Poor communication can accidentally amplify the very narratives you’re trying to counter. Here are a few approaches to try: 

  • Never repeat false claims verbatim, even when debunking them. Research indicates that repetition enhances belief, regardless of the context in which it occurs. Instead of saying “Some people claim that X is true, but Y,” try “Evidence shows that Y is the case.”
  • Focus on describing tactics rather than specific claims. Explain how the content was manipulated to spread outrage rather than detailing what the manipulated content alleged. This approach helps people recognize similar tactics in the future without reinforcing false narratives.
  • Be transparent about uncertainty. If you’re unsure whether something is true or false, say so. Acknowledging the limits of your knowledge builds credibility and models appropriate skepticism.
  • Encourage critical thinking without promoting paranoid conspiracy theories. There’s a crucial difference between healthy skepticism and destructive cynicism. Help people ask better questions rather than teaching them to distrust everything.

What organizations and leaders should do now

Traditional crisis communications strategies are insufficient for narrative attacks. Organizations need proactive defensive measures, not just reactive damage control.

  • Start by auditing your brand’s digital vulnerability. What narratives already exist about your organization? Where are they being discussed? What communities might be susceptible to negative campaigns targeting your industry or values?
  • Train staff on narrative detection, not just cybersecurity hygiene. Employees need to understand how manipulation campaigns work and how to spot them. This training should be ongoing, not a one-time workshop.
  • Monitor fringe sources alongside mainstream media. Narrative attacks often begin in obscure forums and fringe communities before spreading to larger platforms. Early detection requires monitoring these spaces (a minimal monitoring sketch follows this list).
  • Prepare statements and content to anticipate and respond to predictable attacks. Every organization faces recurring criticism. Develop template responses for common narratives about your industry, such as labor practices, environmental impact, AI ethics, or other predictable areas of controversy.
  • Consider partnering with narrative intelligence platforms that can provide early warning systems and professional analysis. The sophistication of modern narrative attacks often requires specialized expertise to counter effectively.
  • Establish clear protocols for responding to suspected narrative attacks. Who makes decisions about public responses? How do you verify the information before responding to it? What’s your escalation process when attacks target individual employees?
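
As one way to start the fringe-source monitoring described above, here’s a minimal keyword-based early-warning sketch in Python using the open-source feedparser library. The feed URLs and watch terms are placeholders; a production system would add deduplication, storage, and alert routing.

    # pip install feedparser
    import feedparser

    # Placeholder feeds and terms -- substitute sources and names relevant to you.
    FEEDS = [
        "https://example-forum.com/rss",
        "https://example-fringe-blog.com/feed",
    ]
    WATCH_TERMS = ["acme corp", "acme ceo", "acme boycott"]

    def scan_feeds(feeds, terms):
        """Return feed entries whose title or summary mentions a watch term."""
        hits = []
        for url in feeds:
            for entry in feedparser.parse(url).entries:
                text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
                if any(term in text for term in terms):
                    hits.append({"source": url,
                                 "title": entry.get("title", ""),
                                 "link": entry.get("link", "")})
        return hits

    for hit in scan_feeds(FEEDS, WATCH_TERMS):
        # In practice, route hits to an analyst queue or alerting channel.
        print(f"[{hit['source']}] {hit['title']} -> {hit['link']}")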

More steps organizations can take 

Cultural media literacy requires systematic changes to how we teach and reward information sharing. Schools should integrate source evaluation and digital verification techniques into their core curricula, not just as separate media literacy classes. News organizations should prominently display correction policies and provide clear attribution for their reporting. 

Also: Why AI-powered security tools are your secret weapon against tomorrow’s attacks

Social media platforms should slow down the spread of viral content by introducing friction for sharing unverified claims. Professional associations across industries should establish standards for how their members communicate with the public about complex topics. Communities can organize local media literacy workshops that teach practical skills, such as identifying coordinated inauthentic behavior and understanding how algorithmic amplification works.

Implementation depends on making verification tools more accessible and building new social norms around information sharing. Browser extensions that flag questionable sources, fact-checking databases that journalists and educators can easily access, and community-driven verification networks can democratize the tools currently available only to specialists. We need to reward careful, nuanced communication over sensational claims and create consequences for repeatedly spreading false information. This requires both individual commitment to slower, more thoughtful information consumption and institutional changes that prioritize accuracy over engagement metrics.

Narrative attacks represent a fundamental shift in how information warfare operates, requiring new defensive skills from individuals and organizations alike. The verification tools, detection techniques, and communication strategies outlined here aren’t theoretical concepts for future consideration but practical necessities for today’s information environment. Success depends on building these capabilities systematically, training teams to recognize manipulation tactics, and creating institutional cultures that reward accuracy over speed. 

Also: Yes, you need a firewall on Linux – here’s why and which to use

The choice isn’t between perfect detection and complete vulnerability but between developing informed skepticism and remaining defenseless against increasingly sophisticated attacks designed to exploit our cognitive biases and social divisions.


