AI Startup Anthropic Battles Over Military Use of Its 'Claude' Tool

Anthropic is contesting conflicting court rulings over supplying its AI assistant, Claude, to the US military, sparking debate over tech companies' role in national defense.
In an ongoing legal battle, the AI company Anthropic faces conflicting court rulings over restrictions on how the US military may use its AI assistant, Claude. A US appeals court has upheld an earlier decision to keep a supply-chain risk label on Anthropic's software, a move that limits the military's ability to deploy the technology.
The label was originally imposed by the US Department of Defense, which cited national security concerns. Anthropic has fought to have it removed, arguing that it unfairly restricts the military's access to its AI technology, could hinder the company's ability to do business with the government, and stifles technological innovation.
The appeals court, however, upheld the earlier ruling, finding the supply-chain risk label a reasonable measure to protect national security interests. It noted that the label does not bar the military from using Claude outright; rather, it imposes additional oversight and restrictions on the technology's deployment.
Source: Wired
