Information Warfare in the Middle East (Israel, Palestine & Lebanon) with Tal Hagin

In this episode of The Intelligence Spotlight, we delve deep into the murky world of digital deception with Tal Hagin — an OSINT analyst specialising in information warfare, with a sharp focus on how social media platforms are weaponised in conflict zones, particularly around the Israel–Palestine and Lebanon frontlines.

Tal’s journey into OSINT began not in a lab or lecture hall, but from the heart of the conflict itself. “Growing up in Israel, violence was never abstract,” he says. “I started out doing pro-Israel advocacy, pushing emotional talking points online. But at some point, I realised I didn’t even understand half the material I was sharing.” That led to a shift — away from partisan narratives, toward data, methodology, and education.

He now works at Fake Reporter, an Israeli watchdog network that monitors digital threats, malicious campaigns, and psychological operations. Tal’s key aim? Not to tell people what to think — but how to think critically in a world saturated with misinformation.

Misinformation, disinformation, malinformation: know the difference

Tal offers one of the clearest breakdowns of the digital deception triad:

  • Misinformation: False content spread unknowingly — often emotional, impulsive, and viral.

  • Disinformation: Deliberate deceit — crafted and spread with intention to mislead or manipulate.

  • Malinformation: The weaponisation of real data (e.g. doxxing) to cause targeted harm.

Debunking, in this context, is not just about pointing out a fake. “It’s about protecting the information environment,” Tal says. “Even if the truth hurts your own side.”

From ‘Pallywood’ to ghost soldiers: how propaganda adapts

Tal notes a dangerous pattern during periods of escalation — fabricated stories designed to exploit emotional flashpoints. “One of the most common tactics is taking old footage — sometimes from Syria, sometimes from decades ago — and rebranding it as a current event,” he says.

False claims of IDF soldiers committing crimes, AI-generated videos of bombed cities, and doctored casualty lists all fit a wider strategy. “These aren’t just random lies. They’re calculated moves in a psychological game, designed to fuel division and erode trust.”

He warns of the return of “Pallywood” — the accusation that Palestinians fake deaths or injuries — and how certain recycled clips are used to prime audiences into distrust. “These claims rarely hold up under scrutiny, but they’re sticky. They exploit existing beliefs.”

The Telegram effect

Telegram, WhatsApp, and similar channels are fuelling a dangerous information vacuum. “These platforms are anonymous, unregulated, and extremely easy to use,” Tal says. “There’s no accountability. No bylines. And when something’s false, it’s quietly deleted — after it's already done the damage.”

He flags the rise of disinformation from third-party actors — such as Russian Telegram channels targeting Western-aligned countries — with AI-generated images, doctored logos, and fake casualty reports. “These are often not from either of the local sides. They’re international players muddying the waters.”

How to fact-check like an analyst

Tal outlines a simple but rigorous process for aspiring fact-checkers:

  1. Search for the original source. If a post cites a media outlet but doesn’t link it, treat it with suspicion.

  2. Use Google Lens for photos, and take clean screenshots from videos so you can reverse-search key frames (a scripted sketch of this step follows the list).

  3. Cross-check names, dates, and metadata. If there’s no traceable path, it’s unverifiable.

  4. Be cautious of urgency. The desire to “be first” often comes at the cost of accuracy.
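Neither step requires code, but both can be scripted when you are checking material in bulk. The sketch below is a minimal illustration under assumptions of my own, not part of Tal’s workflow: the episode names no tooling beyond Google Lens, so the use of OpenCV and Pillow here, and the file names, are hypothetical placeholders. It pulls evenly spaced stills from a video for reverse searching and lists whatever EXIF metadata a photo still carries.

```python
# Minimal sketch: grab evenly spaced frames from a video so they can be
# reverse-searched (e.g. with Google Lens), and dump any EXIF metadata a
# photo carries. Libraries and file names are illustrative assumptions.
import cv2                      # pip install opencv-python
from PIL import Image           # pip install Pillow
from PIL.ExifTags import TAGS


def extract_key_frames(video_path: str, out_prefix: str, n_frames: int = 5) -> list[str]:
    """Save n_frames evenly spaced stills from video_path as PNG files."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    saved = []
    for i in range(n_frames):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // n_frames)
        ok, frame = cap.read()
        if not ok:
            break
        path = f"{out_prefix}_{i}.png"
        cv2.imwrite(path, frame)
        saved.append(path)
    cap.release()
    return saved


def dump_exif(image_path: str) -> dict:
    """Return whatever EXIF tags (camera model, timestamps, GPS) the image still has."""
    exif = Image.open(image_path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}


if __name__ == "__main__":
    for frame in extract_key_frames("suspect_clip.mp4", "frame"):
        print("saved", frame)         # reverse-search these stills manually
    for tag, value in dump_exif("suspect_photo.jpg").items():
        print(tag, value)             # stripped metadata is itself a signal
```

Missing or stripped metadata is not proof of fabrication on its own; in line with step 3, it simply means there is no traceable path back to an original capture, so the item stays unverifiable.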

He’s adamant about transparency: “Never say ‘trust me’. Say ‘here’s how I found it, here’s where you can check it yourself’.”

What’s next for disinformation?

The battlefield is evolving. Generative AI is producing images and videos that mimic live footage. “Soon, you’ll be watching what looks like a CNN anchor reporting live from Tel Aviv, only it’s all AI,” Tal warns. “And people will believe it — not because they’re foolish, but because the tech is that good.”

He calls for urgent legislation: AI-tagging, accountability for serial disinformers, and digital literacy across populations. “Governments can’t sit this one out,” he says. “The cost isn’t just confusion. It’s real-world harm — panic, violence, failed evacuations.”

Final advice for those entering the field?

“Don’t try to be the first. Try to be right,” Tal says. And don’t underestimate your own influence. “Each post, each like, each share — that’s power. Use it with care.”

He recommends starting with Google Lens and following reputable fact-checking collectives like Fake Reporter. “If you’re passionate about this work, volunteer. We need more eyes, more critical thinkers, and above all, more people who care about truth — even when it’s inconvenient.”
