Speeding Up the “Kill Chain”: Pentagon Bombs Thousands of Targets in Iran Using Palantir AI


This is a rush transcript. Copy may not be in its final form.

AMY GOODMAN: As the U.S.-Israeli war extends into its 19th day, we turn now to look at how the U.S. is using artificial intelligence to identify and prioritize targets. The system, known as Project Maven, was created by Palantir and incorporates the AI model Claude, built by Anthropic. The Pentagon is investigating whether the AI system played a role in the U.S. strike on the Iranian girls’ school that killed over 170 people, mostly girls.

This is CENTCOM Commander Admiral Brad Cooper talking about the use of AI in Iran.

ADM. BRAD COOPER: Our war fighters are leveraging a variety of advanced AI tools. These systems help us sift through vast amounts of data in seconds so our leaders can cut through the noise and make smarter decisions faster than the enemy can react. Humans will always make final decisions on what to shoot and what not to shoot and when to shoot, but advanced AI tools can turn processes that used to take hours, and sometimes even days, into seconds.

AMY GOODMAN: Israel has used similar AI targeting programs in Iran, as well as in Gaza and Lebanon. The Pentagon also reportedly used the AI tools during the recent military attack on Venezuela when U.S. Special Forces abducted the Venezuelan President Nicolás Maduro and his wife, Cilia Flores.

This comes as a major rift has emerged between Anthropic and the Pentagon after Anthropic moved to restrict the use of its technology for mass surveillance of Americans and for fully autonomous weapons. In late February, President Trump ordered federal agencies to stop using Anthropic products. Defense Secretary Pete Hegseth declared the firm a supply chain risk, effectively cutting it off from government contracts and related work. It marked the first time the Pentagon has designated a U.S. company as a supply chain risk, prompting Anthropic to sue. On Tuesday, CNN reported that nearly 150 retired federal and state judges have filed an amicus brief supporting Anthropic in its lawsuit against the Trump administration.

We’re joined now by Craig Jones, senior lecturer in political geography at Newcastle University, author of The War Lawyers: The United States, Israel, and Juridical Warfare. He’s the co-author of a new article in The Conversation headlined “Iran war shows how AI speeds up military ‘kill chains.’”

Why don’t we start there, Professor Jones?

CRAIG JONES: Thank you.

Yeah, I mean, the U.S. military and the Israeli military, as your headlines have said, are using AI. The kill chain is a bureaucratic mechanism whereby militaries go from designating targets — identifying enemies and military objectives — to the process of actually killing them. Across the 20th and early 21st centuries, militaries have been speeding that process up. Military drones helped greatly with that, and the latest front is AI. As Admiral Cooper said, you’re reducing a massive human workload of tens of thousands of hours into seconds and minutes. You’re compressing workflows, and you’re automating human targeting decisions in ways that, I think, open up all kinds of problematic legal, ethical and political questions.

AMY GOODMAN: The U.S.-Israel war in Iran is being described as the first AI war. Explain what that means, Craig.

CRAIG JONES: Yeah, I would say it’s not quite the first AI war. As you mentioned, Israel has used AI in Gaza — I think that was the first major use of AI in warfare. Actually, the history goes back a little longer, with computer programs partially enabled by AI having been used in the background of military systems for several years now. It was used in a major way in Gaza in the first few months of that war, where we saw tens of thousands of targets put into a target bank compiled by military intelligence. Up to 35,000 suspected Hamas combatants found themselves on that list as Israel worked through it to assassinate them, along with tens of thousands of targets that are ultimately part of civilian infrastructure. As you’ve said, the U.S. has used it with Maduro, and now Israel and the U.S. are also using these systems in Iran.

The key innovation here is twofold. First is the use of AI for intelligence analysis. Military intelligence is multi-format, and there is so much of it. It hoovers up what they call signals intelligence — mobile phones, internet traffic, SMS, mobile phone tracking, all kinds of things. The AI systems are being used to spot what militaries call patterns of life: who meets with whom, who talks with whom, what is the nature of the messages, how are people interacting in ways which are deemed suspicious. The AI systems look for those patterns and make recommendations — which is the second innovation — for targets. They nominate targets to this bank of targets, which then has some technical human oversight, which we can talk about. And that’s problematic, I think, because this is a really persuasive technology. It’s nominating potentially hundreds or thousands of targets a day, and it’s working at speeds which are just beyond human cognition — again, in ways that are problematic.

AMY GOODMAN: Can you explain — I mean, this is being investigated by everyone, including the U.S. government and the Pentagon — how Palantir is believed to have been used in the first strikes, on the first day of the U.S.-Israeli war on Iran, and how it may have been involved in the targeting of a girls’ school in southern Iran using the tools of Palantir and Claude, which is owned by Anthropic?

CRAIG JONES: Yeah, so, this strike on the girls’ school is at the moment the leading civilian casualty incident, in which around 170 people, as you’ve said, mainly girls, were killed — innocent civilians. At the start, we should remember some of the history of this. It was denied by the U.S. military. Trump insinuated at one point that it was an Iranian missile. It was later verified that it was indeed a series of U.S. Tomahawk missiles that struck this area. And a U.S. preliminary investigation has now confirmed what many people thought, which is that the U.S. is responsible.

We’re not yet clear on the role of AI in that particular strike. Whether that becomes clear in the coming days and weeks, we’ll have to see. What we do know is that Anthropic’s Claude model, as deployed by Palantir, has been extensively used to do several things, including the intelligence analysis. So we can deduce that the AI system is at least open to making systemwide errors. It did not identify the school as a school, which is extremely problematic, given that within a couple of days organizations such as The New York Times were able to verify via satellite imagery that a wall was put up around 13 years ago between the school and an IRGC compound that was nearby. If you had been watching drone footage from above, as militaries have the capability to do, for even half an hour or a few hours before the strike, you would have seen, that morning, 170 girls dropped off by their parents, and the site would have been identified as a nonmilitary target with clearly civilian usage.

AMY GOODMAN: But let’s get — 

CRAIG JONES: So, we don’t yet —

AMY GOODMAN: Let’s drill down into this, because, yes, there was this military facility right next to it. As you described, years ago, a wall was built between the two, so you’ve got the school very clearly identified. But how does AI work, where you have this old, what, 10-year-old perhaps, information about it being a military base that’s fed in, and then it is never updated? Where do human beings come into this?

CRAIG JONES: Yeah, this is a really important question, where it, you know, gets tricky. But we know a lot already. It looks like an intelligence failure: the entire area had been marked on a map as a military compound. There are obligations — legal obligations, ethical obligations and just political obligations — within defense intelligence agencies to check this.

And what happens is, some of these targets are nominated from U.S. military bases back in the United States. Some of those people I’ve worked with over the last several years on what they call target nomination, what it looks like. They hand that over to CENTCOM, who I know you cover, and they have bases in the Middle East — there’s a central one in Qatar — where these targeting decisions are executed. There is an obligation for CENTCOM to check and double-check that intelligence, that it’s up to date, that everything’s kosher on the target. It’s clear that that was not done. There should be human oversight of that, whether the target is AI-recommended or human-recommended — there should be some human intelligence checking. It looks like, for whatever reason, that didn’t happen — and we don’t yet know why.

What happens also — and this is a really interesting technicality — is that everything in a society the U.S. military is targeting is de facto placed on a no-strike list, because everything is assumed to be civilian. In order to strike something, you need to take it off the no-strike list. So the question here is: Why was this school taken off a no-strike list and deemed a legitimate military target? It looks like a combination of AI and human intelligence failures, producing something, you know, truly catastrophic.

AMY GOODMAN: And talk about how Palantir interacts with Claude, which is owned by Anthropic, especially for the Luddites who are listening all over, for people who don’t quite understand how this all works.

CRAIG JONES: So, yeah, from what we know, Palantir provides a deep software system — you know, a bit like a video game — that has all kinds of inputs, where you can look at targets. You have all kinds of variables: What size missile should we drop? What is the compound we’re looking at? What’s it made out of? All these variables with intelligence overlays. And in the same way that software works on a computer, Claude is the thing in the background doing the processing of that data and making the recommendations. It then provides the human some parameters that the operator or targeteer can play with.

Obviously, it’s highly sensitive and secretive, and beyond the very few people using it, even the designers at Anthropic would be a very small number of people who have the intelligence clearance and who’ve seen this stuff working with sensitive military data. From some of the demos that they’ve released, we can see some of what it looks like. And one of the most worrying developments that I’ve seen, from what’s publicly available, is the lack of attention to, and ability to track, civilian casualties within those programs. The civilian harm mitigation apparatus that successive administrations built over several years in the U.S. Department of Defense has been eroded by the Trump administration, and you can actually see that now programmed into the software.

AMY GOODMAN: This is Palantir CEO Alex Karp, interviewed on CNBC last week.

ALEX KARP: These technologies are dangerous societally. The only justification you could possibly have would be that if we don’t do it, our adversaries and — will do it, and we will be subject to their rule of law. So, if you decouple this from the support of the military, you’re going to have an enormous problem explaining to the American people why is it that we’re absorbing the risk of disrupting the very fabric of our society, including the most powerful parts of our society, if it’s not because it’s about maintaining our ability to be American in the near term and long term.

AMY GOODMAN: Craig Jones, if you can respond to the CEO of Palantir?

CRAIG JONES: Palantir has a long history of making serious money — tens of millions, billions in profit — from what I ultimately see as killing people in faraway lands that are too easy not to care about. And with this latest endeavor, we’ve kind of started an AI arms race. It’s been good to see at least Anthropic throw their hands up and say, “We want some ethical parameters put on this.” But even that has its limits. Meanwhile, as that whole controversy with the Trump administration has been playing out, as you covered, we see Sam Altman of OpenAI rush in and take the contract that Anthropic ultimately dropped.

Huge profits. The DOD — the Department of War — is a huge customer for many Silicon Valley firms. We’ve seen Microsoft’s platforms used for Israeli targeting; apparently, Microsoft is looking into that. We’ve seen Google AI analytics also used for Palantir and for U.S. DOD contracts. This is huge money. And I think the Silicon Valley community should wake up to, ultimately, the consequences of the technologies they’re working on, and see their effects on the ground — which is where I work, with people who have lost entire families, who’ve had their homes destroyed, who’ve been displaced, who have had their legs blown off. There’s a real disconnect between those tens of billions in war profits and the people who suffer war’s consequences.

AMY GOODMAN: This is OpenAI CEO Sam Altman, who you mentioned, speaking at the India AI Impact Summit in New Delhi in February.

SAM ALTMAN: We don’t yet know how to think about some superintelligence being aligned with dictators in totalitarian countries. We don’t know how to think about countries using AI to fight new kinds of war with each other. We don’t know how to think about when and whether countries are going to have to think about new forms of social contracts. But we think it’s important to have more understanding and societywide debate, before we’re all surprised.

AMY GOODMAN: So, that’s Sam Altman of OpenAI. And just quoting the Pentagon secretary — Trump calls him the war secretary — the defense secretary, Pete Hegseth, at a briefing in the last days, “Unlike so many of our traditional allies who wring their hands and clutch their pearls, hemming and hawing about the use of force, America, regardless of what … international institutions say, is unleashing the most lethal and precise air power campaign in history — B-2s, fighters, drones, missiles and, of course, classified effects — all on our terms with maximum authorities. No stupid rules of engagement, no nation-building quagmire, no democracy-building exercise, no politically correct wars.” Craig Jones?

CRAIG JONES: Those are two jarring statements. On Sam Altman’s ideas — in what he said, he’s right. We don’t know about this; we don’t know what the future holds. My view would be that because we don’t know the potential dangers, risks and damages that these technologies bring, we should pause — as societies, as companies, as nations, as leaders — to have a serious conversation about what kind of AI future we want, and whether this is a world that we want to build.

Meanwhile, Hegseth and the Department of War in January released a statement — a whole program, actually, called the AI warfighter strategy — from which some of the quote you’ve just read comes. And it talks about maximum lethality, as you say, doing away with the rules of engagement. This is a deliberate sidelining of the checks and balances and accountabilities for war — the firing of military lawyers, the community I’ve worked with, who give legal advice to militaries — and just going ahead with it, Hegseth saying explicitly that even though we don’t know how these technologies work, we need this first-mover advantage. It’s that classic move fast and break things — and, you know, we don’t care about the consequences. These are really worrying times and developments.

AMY GOODMAN: You’ve referred to the war lawyers several times, and it’s the title of your book. Explain what you mean and how they’ve been fired and sidelined.

CRAIG JONES: So, these military lawyers have been, you know, fighting alongside militaries for centuries. In fact, the U.S. Army’s Judge Advocate General’s Corps is the oldest law firm in America. They do all kinds of things, but what I’ve been interested in is the advice they give to military commanders and decision-makers during operations. So, any time a single target has been struck in the last couple of decades, you would have had a military lawyer present, looking at things, doing what’s called a proportionality calculation: OK, here’s the military target. What’s the risk to civilians? Should we go ahead? Should we pause? Are there measures we can take to avoid civilian casualties? And a host of other considerations — one would be military necessity: Is this a legitimate target, or is this indeed a girls’ school? They’ve had a long history, and, you know, I work with them. These are professional, serious people, educated at the best law schools in America. They’re also soldiers. Israel has its own version of them. And they’ve done credible, credible work with militaries.

And one of the first acts Trump takes after he’s sworn in for his second term is to fire the heads of those legal units — the Navy, the Army and the Air Force each have their own heads, and he fired them. Then, further down the ranks, he fired people and replaced them with yes men. And beyond the firing and replacing, we are hearing from reporting and from some of my own contacts that the military lawyers are either just not being listened to when they raise objections, or they’re going silent in the war rooms where these decisions are made — because, much as with the civilians and advisers around Trump himself, unless you say yes and go along with it, you’re simply not welcome there, and you’ll either be fired or not listened to.

And so, again, this is seriously worrying, especially when you put it alongside the simultaneous war on all these civilian casualty initiatives. There was something called the Center of Excellence, which dealt with civilian protection. It was a decade in the making — lots of senior people in U.S. administrations, from Obama to Biden through Trump term one, were involved in it. And Trump presses control-alt-delete on day one and gets rid of that civilian center, because they’re not interested in avoiding civilian casualties — which feels like we’re harkening back to Vietnam or something.

AMY GOODMAN: Finally, Craig Jones, we just have a minute, but if you can explain this rift between Anthropic and the Pentagon — Anthropic saying its technology could not be used for mass surveillance of Americans or for fully autonomous weapons, and then the Trump administration retaliating, President Trump ordering federal agencies to stop using Anthropic products, Pete Hegseth declaring the firm a supply chain risk, prompting Anthropic to sue? But then we hear that Claude, owned by Anthropic, was possibly used by Palantir in targeting this girls’ school, killing well over a hundred girls.

CRAIG JONES: Yeah, there’s lots to say here. One is that it seems like a disproportionate act, when a company is just, you know, exercising its right to disagree with what the government is doing. And I think the CEO at the time said, you know, “Disagreeing with the government is as American as apple pie.” The other thing is that this is infrastructure. I think some people think AI is just a tool — something on your desk, or something in the background, that you can just press delete on. It’s infrastructure that’s embedded in the entire intelligence apparatus, and so you can’t just delete it — hence why it’s still in use, and hence why it might take up to six months to try to get some of the Claude products out of the software.

The other thing is, you know, it was good to see that ethical objection. It seems like the only moral stance taken in these conversations about the AI war, certainly in Silicon Valley. But I would object to their objection on two grounds. First, they’re against mass surveillance of U.S. citizens only; they say nothing about citizens around the world. And second, their objection to lethal use is partly a technical, rather than moral, objection. It’s to say the algorithms right now are not quite good enough, because they have this error rate — but they’re not necessarily saying they wouldn’t go along with that use later on. So it’s not that they’re against lethality and killing per se, but that technically the algorithms are not quite ready, and so they wanted to press pause. There’s lots to say about that, but it is a disproportionate act and response by the Trump administration, I think.

AMY GOODMAN: Craig Jones, we want to thank you for being with us — senior lecturer in political geography at Newcastle University, joining us from the U.K., author of The War Lawyers: The United States, Israel, and Juridical Warfare, expert on modern warfare and aerial targeting, currently leading a research project on civilian casualties and war-related injury in Gaza and Iraq. We’ll link to your piece in The Conversation, “Iran war shows how AI speeds up military ‘kill chains.’”

Coming up, the director of the National Counterterrorism Center has resigned over the war in Iran, magnifying a rift within the MAGA movement over the war. Stay with us.

[break]

AMY GOODMAN: “Welcome to the New World.”


