New AI model trained on intercepted Arabic communications aims to enhance Israeli intelligence operations
- Israel’s Unit 8200 has developed an AI surveillance tool trained on intercepted Palestinian communications.
- The AI model, trained on billions of Arabic words, aims to automate intelligence analysis and tracking.
- Experts warn it could reinforce biases, make errors, and be used to control Palestinian populations.
Israel’s elite military intelligence unit, Unit 8200, has secretly built an advanced AI tool similar to ChatGPT, designed to analyze massive amounts of intercepted Palestinian communications. The AI system, trained on billions of Arabic words extracted from phone calls and text messages, aims to automate intelligence-gathering efforts. According to sources familiar with the project, it can answer questions about monitored individuals and provide insights into surveillance data.
Development of the AI surveillance tool accelerated after the war in Gaza began in October 2023, with Unit 8200 investing heavily in the project. Intelligence officials state that the tool enables the Israeli military to track individuals in real time, including political activists and Palestinian civilians. “AI amplifies power,” said a source involved in the initiative. “It’s not just about preventing attacks—it allows tracking of human rights activists, monitoring Palestinian construction, and gathering intelligence on daily life in the West Bank.”
Three former intelligence officials confirmed that the AI model was built using vast datasets of spoken Arabic, gathered through Israel’s extensive surveillance network. Chaked Roger Joseph Sayedoff, a former Unit 8200 technologist, disclosed in a public talk that the project aimed to collect “all the data the state of Israel has ever had in Arabic.” The initiative was supported by reservists with experience from major U.S. tech firms like Google, Meta, and Microsoft, who returned to the military after the 7 October Hamas-led attacks.
Experts and human rights organizations have raised alarms over the ethical and legal implications of using AI in military surveillance. Zach Campbell, a senior researcher at Human Rights Watch, criticized the project, saying, “This AI is a guessing machine, and its guesses could be used to wrongly incriminate people.” The tool’s reliance on intercepted private communications, critics argue, violates privacy rights and could lead to wrongful detentions.
Unit 8200 has a history of deploying AI surveillance tools. Previous systems like The Gospel and Lavender were used to identify targets for airstrikes in Gaza, assisting in military operations. AI models have also been used to categorize and predict behavior patterns among Palestinians. Intelligence sources indicate that AI has been instrumental in expanding Israeli surveillance, increasing arrests in the West Bank, and enhancing military control over Palestinian populations.
The integration of AI into intelligence operations raises significant concerns. While AI can process vast amounts of data efficiently, it is prone to making errors and reinforcing biases. A former White House national security official warned that “mistakes will be made, and some could have serious consequences.” A recent report suggested AI was involved in an Israeli airstrike in Gaza that mistakenly killed civilians in November 2023. Despite these risks, Israel continues expanding its AI capabilities, with minimal transparency regarding its use in surveillance and military operations.
The use of AI in military intelligence is not unique to Israel. The CIA has developed similar tools to analyze open-source data, and the UK’s intelligence agencies are working on AI-driven analysis systems. However, former Western intelligence officials argue that Israel’s approach carries greater risks due to its mass collection of private communications and lack of oversight.
As Israel pushes forward with AI-driven surveillance, questions remain about the balance between national security and human rights. Critics argue that the widespread use of AI for monitoring and controlling Palestinians sets a dangerous precedent, while intelligence officials defend its necessity in counterterrorism efforts. The long-term impact of AI in military intelligence—and its ethical consequences—remains a subject of heated debate.