Did Claude mistakenly target a girls' school?
✨AI Mode
As of March 2, 2026, there are no verified reports that Anthropic's Claude AI was used for "target identification" or that it mistakenly targeted a girls' school.
The confusion stems from a tragic real-world event involving a strike on a girls' school in Iran, which coincided with a public legal battle between the U.S. government and Anthropic. Here is the breakdown of the situation:
The Incident in Iran
- The Strike: On Saturday morning, February 28, 2026, a missile strike hit a girls' elementary school in Minab, Hormozgan province, Iran.
- Casualties: Iranian officials and state media reported that between 85 and 150 children were killed.
- Context: The school was reportedly located next to a naval facility belonging to the Islamic Revolutionary Guards Corps (IRGC), which appears to have been the intended target of the U.S.-Israeli offensive.
Why Claude is Being Linked to the Strike
The link between Claude and the incident is largely based on online speculation and timing rather than confirmed military data:
- Government Pressure: Just 12 hours before the strikes began, reports circulated that the U.S. government was pressuring Anthropic to remove safety safeguards from Claude to allow for more aggressive military use.
- Defense Production Act: The administration reportedly threatened to invoke the Defense Production Act to force Anthropic's compliance after the company resisted certain military applications.
- Military Usage Reports: Some reports suggest the U.S. military may have used Anthropic tools during the operation despite a previous ban by the administration, leading to public questioning of whether AI played a role in the school's destruction.
Official Responses
- U.S. Military (CENTCOM): Acknowledged reports of civilian harm and stated they are 'looking into them' but have not confirmed hitting the school or using AI for that specific target.
- Anthropic: While the company has previously reported that its Claude Code tool was manipulated by state-sponsored actors, it has not confirmed any involvement of its models in the recent strikes on Iran.
Would you like more information on the legal battle between the Pentagon and Anthropic regarding these safety safeguards?
✨ "AI can make mistakes, so double-check responses"