In late 2023, Israel sought to eliminate Ibrahim Biari, a senior Hamas leader in northern Gaza who played a role in the October 7 attacks. However, Israeli intelligence struggled to locate him, believing he was concealed in Gaza’s tunnel systems.
Consequently, Israeli officials turned to a novel military technology enhanced by artificial intelligence, according to three Israeli and American officials familiar with the situation. The tool had been developed about a decade earlier but had never been used in combat. The urgency of finding Mr. Biari spurred improvements, and engineers in Israel’s Unit 8200, the country’s equivalent of the National Security Agency, incorporated A.I. into it.
Soon after, Israel intercepted Mr. Biari’s calls and ran them through the A.I. audio tool, which indicated the general area from which he was communicating. Acting on that information, Israel carried out airstrikes on October 31, 2023, killing Mr. Biari and more than 125 civilians, according to Airwars, a London-based conflict monitoring organization.
This audio tool is one instance of how Israel has utilized the conflict in Gaza to swiftly test and implement A.I.-driven military technologies at an unprecedented level, according to conversations with nine defense officials from both the U.S. and Israel, who requested anonymity due to the confidential nature of the information.
Over the last 18 months, Israel has also integrated A.I. with facial recognition to identify partially obscured or injured individuals, used A.I. to generate potential airstrike targets, and developed an Arabic-language A.I. model to power a chatbot that could assess text messages, social media updates, and other Arabic data, as shared by two individuals familiar with the initiatives.
Many of these projects involved collaboration between active-duty personnel in Unit 8200 and reserve soldiers working at tech companies like Google, Microsoft, and Meta, according to three sources knowledgeable about the technologies. Unit 8200 established an innovation hub, known as “The Studio,” to pair experts with A.I. projects.
However, as Israel rushed to expand its A.I. capabilities, the deployment of these technologies occasionally resulted in misidentifications and wrongful arrests, as well as civilian fatalities, noted the Israeli and American officials. Some expressed concerns over the ethical ramifications of A.I. tools, warning of heightened surveillance and potential civilian harm.
No other country has engaged in as extensive real-time testing of A.I. tools in warfare as Israel, according to European and American defense officials, providing a glimpse into how such technologies may be utilized in future conflicts—and the risks they carry.
Hadas Lorber, head of the Institute for Applied Research in Responsible A.I. at Israel’s Holon Institute of Technology and a former senior director at the Israeli National Security Council, remarked, “The pressing need to address the crisis accelerated innovation, much of it powered by A.I. This led to revolutionary technologies on the battlefield and provided crucial advantages in combat.”
Nevertheless, she cautioned that the technologies “also pose significant ethical challenges.” Ms. Lorber emphasized the necessity of implementing checks and balances, advocating that humans should retain ultimate decision-making authority.
An Israeli military spokesperson stated she could not disclose specific technologies due to their confidential status. She confirmed that Israel “is dedicated to the lawful and responsible application of data technology” and noted that the military is investigating the strike on Mr. Biari, without offering further details until the inquiry concludes.
Meta and Microsoft declined to comment. Google said that it has “employees who serve in reserve duty across different countries. The activities carried out by those employees as reservists are not affiliated with Google.”
In earlier conflicts in Gaza and Lebanon, Israel used combat situations to refine military technologies like drones, phone-hacking tools, and the Iron Dome defense system, which helps intercept short-range missiles.
Following Hamas’s cross-border assaults on October 7, 2023, which killed more than 1,200 people and led to the taking of about 250 hostages, A.I. technologies were promptly authorized for deployment, four Israeli officials said. This accelerated collaboration between Unit 8200 and reservists in “The Studio” to develop new A.I. capabilities.
Avi Hasson, chief executive of Startup Nation Central, an Israeli nonprofit that connects investors with emerging companies, mentioned that reservists from Meta, Google, and Microsoft played a pivotal role in fostering innovation in drones and data integration.
“Reservists provided expertise and access to essential technologies that were otherwise unavailable to the military,” he stated.
Israel’s military soon leveraged A.I. to enhance its drone capabilities. Aviv Shapira, founder and CEO of XTEND, a software and drone company collaborating with the Israeli military, reported that A.I.-driven algorithms were employed to create drones capable of locking onto and monitoring targets from afar.
“Previously, targeting relied on zooming in on an image of the target,” he explained. “Now, A.I. can identify and track the object itself—whether it’s a moving vehicle or a person—with lethal accuracy.”
Mr. Shapira indicated that his primary clients, the Israeli military and the U.S. Department of Defense, are cognizant of the ethical considerations surrounding A.I. in warfare and engage in discussions regarding responsible use of the technology.
Among the initiatives developed by “The Studio” was an Arabic-language A.I. model known as a large language model, according to three Israeli officers familiar with the project. (The model was previously reported on by +972 Magazine, an Israeli-Palestinian news outlet.)
Developers had long struggled to build such a model because of a scarcity of Arabic-language data for training. What data did exist was mostly in formal, written Arabic rather than in the many spoken dialects.
The Israeli military, however, did not face this problem: it had decades of intercepted text messages, recorded phone calls, and social media posts in a variety of spoken Arabic dialects at its disposal. Israeli officers therefore built the large language model during the initial months of the conflict and developed a chatbot capable of processing queries in Arabic. They integrated the tool with multimedia databases, enabling analysts to conduct complex searches across images and videos, four Israeli officials said.
Following the assassination of the Hezbollah leader Hassan Nasrallah in September 2024, the chatbot was used to gauge reactions across the Arabic-speaking world, according to three Israeli officers. The tool could distinguish among dialects in Lebanon, which helped Israel assess whether there was public pressure for a counterstrike.
At times, the chatbot struggled with modern slang and words that had been transliterated from English into Arabic, as noted by two officers. This shortcoming necessitated that Israeli intelligence agents, skilled in different dialects, step in to revise and improve the chatbot’s outputs, according to one officer.
The chatbot also occasionally produced incorrect results — for example, it sometimes retrieved images of pipes instead of firearms, as mentioned by two Israeli intelligence agents. Despite these flaws, they acknowledged that the A.I. tool significantly sped up the processes of research and analysis.
In the wake of the October 7 attacks, Israel began outfitting temporary checkpoints between the northern and southern Gaza Strip with cameras capable of capturing and transmitting high-resolution images of Palestinians to an A.I.-supported facial recognition program.
However, this system also faced challenges, especially in identifying individuals whose faces were partially covered. This led to the wrongful identification, arrest, and interrogation of several Palestinians, according to two intelligence officers.
Additionally, A.I. was utilized to filter through data collected by intelligence officials regarding Hamas operatives. Prior to the conflict, Israel developed a machine-learning algorithm known as “Lavender” to rapidly analyze information and identify lower-level militants. This system was trained on a database of verified Hamas members and was designed to predict potential associates. Though its predictions had limitations, Israel employed it at the onset of the war in Gaza to assist in target selection for attacks.
A top priority was the identification and removal of Hamas’s senior leaders. Among the most wanted was Mr. Biari, the Hamas commander believed by Israeli officials to be integral in orchestrating the October 7 assaults.
Israeli military intelligence swiftly intercepted Mr. Biari’s communications with fellow Hamas members but could not determine his exact location. Consequently, they utilized an A.I.-aided audio tool that could analyze various sounds, including sonic bombs and airstrikes.
After estimating the approximate area from which Mr. Biari was making his calls, Israeli military personnel were cautioned that this location, which encompassed several apartment buildings, was densely populated. They concluded that the airstrike would need to hit multiple structures to ensure the elimination of Mr. Biari, and the operation received approval.
Since then, Israeli intelligence has continued to leverage the audio tool, along with maps and images of the intricate network of tunnels in Gaza, to locate hostages. Over time, the tool has been improved to more accurately identify individuals, as reported by two Israeli officers.