Inside the Future of Information Warfare & AI Before 2030 With Sohan Dsouza
The term “information warfare” has become so elastic that it now encompasses everything from troll farms and covert online operations to American politicians catastrophizing about free speech. For Sohan Dsouza, a visiting researcher at the Max Planck Institute, this ambiguity is part of the problem. “Practitioners may not always aim to dominate the narrative,” he said. “Sometimes it’s just about sowing enough uncertainty that the truth never fully takes root.”
Sohan, who has studied influence operations and artificial intelligence ethics through an OSINT lens, told The Intelligence Spotlight podcast that modern disinformation campaigns are less about persuasion and more about disruption. They thrive on fear, doubt, and polarization—on making people believe that “the other side is more gullible.”
One of the most striking developments in recent years has been the rise of synthetic personas powered by large language models. These accounts do more than push propaganda: they share cat photos, complain about traffic, and behave convincingly like ordinary users. Yet, Sohan noted, there are still “slip-ups.” Bursts of activity in the wrong time zone or repeated AI-generated hallucinations can give them away.
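To make the “wrong time zone” tell concrete, here is a rough sketch of the kind of posting-hour check an analyst might run, assuming nothing more than a list of UTC timestamps and the account’s claimed locale. The function name, threshold, and toy data are illustrative, not a method Sohan described.

```python
# Illustrative sketch: flag accounts whose activity clusters in hours that are
# implausible for their claimed locale. Thresholds and data are made up.
from collections import Counter
from datetime import datetime, timezone, timedelta

def nocturnal_share(timestamps_utc, claimed_utc_offset_hours):
    """Fraction of posts made between 01:00 and 05:59 local time in the claimed
    time zone; a persistently high share may merit a closer look."""
    local_hours = [
        (ts + timedelta(hours=claimed_utc_offset_hours)).hour
        for ts in timestamps_utc
    ]
    counts = Counter(local_hours)
    night = sum(counts[h] for h in range(1, 6))
    return night / max(len(local_hours), 1)

# Toy data: an account claiming to post from the UK (UTC+0) whose activity
# clusters in the small hours of its claimed home time zone.
posts = [datetime(2024, 8, 4, h, 30, tzinfo=timezone.utc)
         for h in (1, 2, 2, 3, 4, 4, 13)]
print(f"night-time share: {nocturnal_share(posts, 0):.0%}")  # ~86% for this toy sample
```

A single odd night proves nothing, of course; the signal comes from persistence across weeks of activity, and from combining it with other tells.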
The riots in Britain earlier this year offered a case study. False claims about a migrant perpetrator spread rapidly online, seeded by accounts that, on closer inspection, were operated from Dubai and India. “You can use OSINT to look under the hood—past handles, past content, what’s been scrubbed,” Sohan explained. These trails often expose the real networks behind supposedly grassroots narratives.
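Looking “under the hood” often starts with checking whether content that has since been scrubbed was captured somewhere. One hedged illustration uses the Wayback Machine’s public availability API; the profile URL below is a placeholder, and recovery obviously depends on a snapshot having been taken before deletion.

```python
# Sketch: ask the Wayback Machine whether an archived copy of a now-deleted
# page exists. The target URL is a made-up placeholder.
import requests

def closest_snapshot(url: str, around: str = "20240801") -> str | None:
    """Return the URL of the archived snapshot nearest to `around` (YYYYMMDD), if any."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url, "timestamp": around},
        timeout=30,
    )
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

print(closest_snapshot("https://example.com/deleted-profile"))
```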
But as investigators become more sophisticated, so do adversaries. Sohan warned of “misinformation about misinformation”—false leads designed to waste analysts’ time. Operations like “Matryoshka,” he said, deliberately plant fabricated clues: fake graffiti, phantom websites, or forged tweets meant to misdirect inquiries.
Technology provides both opportunities and risks. Automation can help detect anomalies, cluster similar content, or scrape vast amounts of data across platforms. Yet Sohan remains cautious. “Humans still need to supervise,” he insisted. AI can be misled by memes, slang, or subtle cultural cues. The risk, he added, is that automated systems “look for coherence where there may only be coincidence.”
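The clustering step he mentions can be sketched in a few lines, with the caveat built in: similarity scores only surface candidate groups, and it remains the analyst’s job to decide whether shared wording reflects coordination or coincidence. The sample posts, threshold, and greedy grouping below are illustrative assumptions, not a production pipeline.

```python
# Sketch: group near-duplicate posts by lexical similarity so a human reviews
# clusters rather than individual messages. Posts and threshold are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "Shocking: authorities are hiding the truth about the incident",
    "SHOCKING!! authorities are hiding the truth about the incident, share this",
    "Traffic on the ring road is terrible again this morning",
    "They are hiding the truth about the incident. Authorities must answer.",
]

vectors = TfidfVectorizer().fit_transform(posts)
sims = cosine_similarity(vectors)

# Greedy grouping: each post joins the first earlier post it strongly resembles.
# The 0.5 cut-off is arbitrary; judging whether any cluster is coordination
# rather than coincidence stays with the human analyst.
threshold = 0.5
cluster_of = {}
for i in range(len(posts)):
    for j in range(i):
        if sims[i, j] >= threshold:
            cluster_of[i] = cluster_of.get(j, j)
            break
    cluster_of.setdefault(i, i)

for i, root in sorted(cluster_of.items()):
    print(f"cluster {root}: {posts[i]}")
```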
For investigative journalists and OSINT units, the challenge now is to set norms for AI-assisted investigations. Sohan argued for robust archiving, transparency, and reproducibility:
“Whatever setup you have should always be able to answer the question—what data did you use to reach this conclusion, and how would it look different if you had used something else?”
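In practice, that discipline can be as simple as fingerprinting the evidence behind every finding. A minimal sketch, assuming archived files on disk and an append-only JSONL manifest; the field names and paths are illustrative, not a prescribed format.

```python
# Sketch: log each conclusion together with content hashes of the exact
# evidence files and the parameters used, so "what data did this rest on?"
# has a checkable answer. Layout and field names are illustrative.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Content hash of an archived evidence file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_finding(manifest: Path, conclusion: str, evidence: list[Path], params: dict) -> None:
    """Append one finding, plus fingerprints of its evidence, to a JSONL manifest."""
    entry = {
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "conclusion": conclusion,
        "evidence": [{"file": str(p), "sha256": sha256_of(p)} for p in evidence],
        "parameters": params,
    }
    with manifest.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")

# Hypothetical usage: re-running the analysis on different inputs yields
# different hashes, making "how would it look with other data?" concrete.
# record_finding(Path("manifest.jsonl"),
#                "accounts X and Y amplified the same seeded claim",
#                [Path("archive/x_timeline.json"), Path("archive/y_timeline.json")],
#                {"similarity_threshold": 0.5, "window_days": 7})
```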
Looking ahead to 2030, he foresees more localized, community-targeted disinformation powered by AI, often aimed at splitting opposition to authoritarian movements. At the same time, he predicts an escalation of attacks on investigators themselves. “There’ll be more attempts to mislead, to poison the data, to waste our time,” he said.
The countermeasures, he believes, must combine better policy, stronger transparency, and smarter tools. But ultimately, the work still relies on human judgment. “Some of the best analysts,” Sohan observed, “aren’t the most technical. They’re the sharpest at asking questions, piecing together contexts, and reading people.”