Killing by Machine: How AI and Digital Platforms Are Involved in the Gaza Genocide

By Farah Salsabila | Editor: Sajadi | Friday, 11 April 2025, 22:28 WIB

In the 20th century, wars were fought with tanks, bombs, and soldiers. But in the 21st century, a new kind of war has emerged—one that’s quieter but just as deadly. In Gaza, we’re seeing how advanced technology, which was meant to improve human life, is now being used to kill quickly, silently, and with frightening precision.

After the attacks on October 7, 2023, Israel began a massive military operation in Gaza that continues to this day, resulting in the deaths of tens of thousands of civilians. But this war isn’t just about bombs and bullets. Behind the explosions, there’s another kind of weapon at work: artificial intelligence (AI), surveillance systems, and digital technology.

In this war, Israel isn’t relying on traditional weapons alone. It has added high-tech tools: spyware, systems that jam GPS signals, and advanced AI programs such as Lavender and The Gospel. These tools combine the speed of machines with decision-making power that carries no human accountability.

This means the lives of people in Gaza aren’t threatened only by bombs. They’re also at risk from quiet decisions made by computers: clicks inside data centers that determine who lives and who dies.

A recent investigation has revealed this hidden side of the war. It shows how technology that should bring people together and make life better is instead being used to destroy lives in a cold and calculated way, leaving almost no evidence behind. This isn’t just a normal war anymore. It’s a genocide driven by algorithms.

Since October 7, the world has seen extreme levels of destruction in Gaza. But what most people haven’t noticed is the role that AI and digital systems are playing in identifying targets, planning attacks, and even predicting how many civilians might die.

In this latest attack on Gaza, Israel has introduced something new: AI systems like Lavender and The Gospel that can pick human targets automatically, down to specific individuals. These systems allowed the military to increase the number of people it targets from around 50 per year to roughly 100 per day. Many of the targets are private homes believed to belong to low-level Hamas members, often without clear proof, and without concern for whether the people inside are fighters or ordinary civilians.

In April 2024, the world was shocked by reports showing that Lavender, an Israeli military AI tool, was used to target thousands of people in Gaza. What’s even more alarming is that Lavender reportedly pulls information from WhatsApp, the world’s most popular messaging app, owned by Meta.

37,000 Palestinian Civilians Marked by AI, Simply for Their WhatsApp Contacts

An exclusive investigative report by +972 Magazine and Local Call reveals the existence of a secret AI system called Lavender. Developed by the Israeli military, this program automates the identification of bombing targets in Gaza.

According to testimonies from six active intelligence officers, Lavender uses big data and machine learning to generate lists of thousands of names, automatically marking them as targets without sufficient human verification. The AI-generated results were even treated “as if they were human decisions.”

“Lavender doesn’t just help select targets. Lavender is the target itself,” one of the informants shared in the investigative report.

This technology operates under the command of Unit 8200, Israel’s largest cyber-intelligence unit, led by Brigadier General Yossi Sariel, author of The Human-Machine Team. His book promotes human-machine collaboration in military operations; in practice, that collaboration has produced mass killing designed by machines.

Lavender was developed as a quick identification tool for what Israel’s intelligence labels “suspects.” The system processed up to 37,000 Palestinian names in Gaza and determined bombing targets based on digital data. One of the key indicators? If a person was part of a WhatsApp group also joined by someone suspected of being a militant.

This is no longer about accurate intelligence gathering. It’s an algorithm based on social associations, punishing anyone connected to a specific digital network.
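
To see how blunt a filter like this is, consider a minimal, purely illustrative sketch of guilt-by-association flagging in Python. Every name, data structure, and rule below is invented for the example; Lavender’s actual internals have never been made public, and this code does not claim to reproduce them.

```python
# Hypothetical sketch of association-based flagging, for illustration only.
# Nothing here reflects the real Lavender system, whose internals are secret.

# Invented chat-group membership data: group name -> members.
groups = {
    "neighborhood_news": {"suspect_A", "teacher", "nurse", "journalist"},
    "family_chat": {"teacher", "grandmother", "student"},
}

suspects = {"suspect_A"}  # the only person actually under suspicion

# Flag everyone who shares any group with a suspect.
flagged = {
    member
    for members in groups.values()
    if members & suspects             # the group contains at least one suspect
    for member in members - suspects  # so everyone else in it gets flagged
}

print(sorted(flagged))
# ['journalist', 'nurse', 'teacher'] -- three uninvolved people flagged
# solely because one suspect joined the same chat group.
```

Even this toy version makes the flaw the officers described visible: what triggers the flag is membership, not behavior.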

An Israeli military source admitted they bombed homes without hesitation, even when the target was with their children and family. “It’s easier to drop a bomb on a house,” he said. The AI was designed to choose such scenarios: a target with their family, at home, in a “comfortable” situation.

Paul Biggar, a software engineer and founder of Tech for Palestine, described this as a form of pre-crime killing, similar to the dystopian world shown in Minority Report. However, this is no longer just a science fiction plot—it’s a real-world tragedy happening in Gaza.

Meta, Human Rights Violations, and Death Behind Encryption

WhatsApp has long claimed to protect user privacy through its end-to-end encryption system. But recent accusations have shaken public trust in that promise. Meta has denied the claims, stating that it has “no information that the report is accurate.” The company did not, however, fully deny involvement, asserting only that it does not provide “bulk data to any government.”

But if there truly were no violations, why has Meta been widely criticized for suppressing pro-Palestinian content, deleting accounts of Palestinian journalists, and blocking hashtags like #FreePalestine?

Reports also revealed that Meta has hired former agents from Israel’s Unit 8200—a cyber-intelligence division notorious for spying on Palestinians. These practices have been described as “systematic digital surveillance” against people living in one of the world’s most dangerous conflict zones.

So, is Meta just a tech company? Or has it become part of a digital colonial system that enables genocide?

One of the most harrowing recent incidents went viral: a video showing a Gaza journalist being burned alive. Outside Al-Nassar Hospital, journalists had pitched makeshift tents, using them as a base to send vital news updates to the world. Among those killed were two contributors to SMART 171, an Indonesian journalist network in which the writer of this piece is also involved. These Palestinian journalists were more than reporters—they were the eyes and voices of Gaza to the outside world.

But who could have imagined that while speaking live to an Indonesian media editor, while their voices were still on air, while they sent locations on WhatsApp for safety reasons—those very actions might have sealed their fate?

We now know that Meta, the parent company of WhatsApp, has been accused of feeding data to Israel’s military AI system known as Lavender. This system targeted thousands of individuals based solely on digital connections: being in the same WhatsApp group, sharing a location, or maintaining contact with international media networks.

This raises the chilling possibility that the two contributors were tracked through those communications. Conversations meant to save lives—requests for aid, evacuation, or fact-based reporting—may have instead triggered AI systems to lock onto them as targets.

When the journalists’ tents were bombed, when Al-Nassar Hospital was attacked, when their last connection went silent, we were left to wonder: Was it our communication that led to their deaths? And if so, what does that mean for journalists around the world—for anyone who chooses to stand on the side of truth?

Meta can issue denials. But the fact remains that these killing machines crave data, and they get it from the platforms we trust every day: messaging apps, social media, communication tools. Data becomes ammunition. Conversations become tracking signals. And anyone—even a journalist—can become a target.

This is not just about Meta.

Recently, Microsoft and other global tech giants have come under heavy scrutiny following reports that their artificial intelligence and cloud services were used at massive scale by the Israeli military after October 7, 2023.

According to a January report by The Guardian, Microsoft signed multimillion-dollar contracts to provide IT infrastructure, data storage, and thousands of hours of technical support to Israel’s armed forces. The military’s growing reliance on cloud platforms like Microsoft Azure, Google Cloud, and Amazon Web Services has raised serious concerns about the role of global tech companies in enabling controversial military operations.

Pegasus Spyware and Total Surveillance

Beyond AI and digital platforms, Israel has also relied on the notorious Pegasus spyware to infiltrate the devices of civilians and activists. According to the Israeli Internet Association, once Pegasus is installed on a device, operators can access everything: deleted data, files created by third-party apps, even the camera and microphone, which can be activated remotely without the user’s knowledge. The spyware can also track the user’s location, record conversations, and map the target’s entire social network.

All of this data is then processed by AI systems to generate highly detailed behavioral profiles, ranging from daily routines to political leanings.
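
As a rough illustration of what turning raw device data into a “behavioral profile” can mean, here is a minimal sketch. The data and fields are invented for the example; they are not drawn from any real spyware or from the reports cited above.

```python
from collections import Counter
from datetime import datetime

# Invented location pings exfiltrated from a device: (timestamp, place).
pings = [
    ("2024-03-01 08:05", "clinic"),
    ("2024-03-01 20:10", "home"),
    ("2024-03-02 08:12", "clinic"),
    ("2024-03-02 20:02", "home"),
]

# Collapse raw pings into an hour-of-day routine: where is the target, when?
routine = Counter(
    (datetime.strptime(ts, "%Y-%m-%d %H:%M").hour, place)
    for ts, place in pings
)

print(routine.most_common())
# [((8, 'clinic'), 2), ((20, 'home'), 2)]
# A handful of pings already yields a predictable daily pattern.
```

The point is not the code but the asymmetry it hints at: a few days of passively collected data are enough to predict where a person will be, and when.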

In addition to Pegasus, Israel deploys advanced drones equipped with AI sensors that can detect human presence from body heat and sound. These drones can analyze building structures to identify “strategic” targets for airstrikes and are claimed to distinguish between combatants and civilians, though in practice this often results in deadly misidentification.

Even more concerning, these drones are capable of GPS jamming and of mimicking cell tower signals to intercept all communications from a victim’s device without detection. Victims believe their phones are operating normally, while in reality every activity is being monitored in real time.

We Are the Next Target

This war is not just about Gaza. The whole world is next.

Over the past two decades, Israel has grown into a global cyber power. In 2020 alone, Israeli cyber firms attracted 31% of all global investment in the sector. Military exports hit $8.8 billion, while cyber exports reached $10 billion, according to Visualizing Palestine.

But behind this technological rise lies a colonial agenda. These tools are not only built for profit—they are built to maintain control over Palestinians and dominate narratives in the region.

Israel has pioneered what’s known as “spyware diplomacy,” selling surveillance tech to countries in exchange for political normalization. Meanwhile, Palestinians have become the world’s primary test subjects for cutting-edge surveillance and digital warfare systems.

Through AI-driven military infrastructure and mass-monitoring networks, Israel has created a machine of death—faceless, operator-less, and utterly remorseless.

But the use of AI and spyware cannot justify the killing of over 50,000 civilians in Gaza. On the contrary, it exposes how reliance on unaccountable technology only makes war more brutal, faster, and harder to trace.

The threat posed by this AI-military nexus goes far beyond Palestine. The world now stands on the edge of a new era—where machines decide who lives and who dies. In the hands of a colonial regime, such technology becomes a tool of engineered genocide.

The international community must act now. Global regulations on the use of AI and spyware in armed conflict are urgently needed. International courts must hold the perpetrators of these crimes accountable—openly and transparently.

Because if we don’t act today, tomorrow it won’t just be Gaza in flames. Tomorrow, the algorithm may choose any of us.

This is what the world must understand: AI in warfare is not just a military upgrade. It is the beginning of data-driven killing: cold, calculated, and dehumanized.

If today Palestinians are targeted for being in a WhatsApp group, who’s next? An activist in London, a journalist in Jakarta, a student in Ankara, or a Palestinian supporter in New York?

This system sets a terrifying precedent, where anyone can be targeted, not for what they do, but for what they believe or who they’re connected to online. If we allow this to continue, no one who stands for justice will ever be truly safe.

This is not just a genocide against Gaza. It’s an attack on all of humanity. On all of us.

Those who stand against injustice are now being watched and flagged by machines. Israel is testing the weapons of the future, and Gaza is the lab.

Do not stay silent. Do not believe this is only happening “over there.” Because “there” will become “here” faster than we think. []

Mi’raj News Agency (MINA)
