Anthropic Faces Pressure From the US Military Over AI Safeguards

Anthropic, the AI company known for its focus on safety, is locked in a dispute with the US military over the use of its powerful language model, Claude. The Pentagon is pushing for unrestricted access, but Anthropic is resisting.
Anthropic, the AI company known for its commitment to safety, finds itself in a tense standoff with the US military over the use of its powerful language model, Claude. The Pentagon, led by Secretary of Defense Pete Hegseth, has been pressing the company for unrestricted use of Claude's capabilities, but Anthropic has reportedly pushed back, unwilling to allow its product to be used for mass surveillance or for autonomous weapons systems that can kill without human input.
The dispute has escalated in recent weeks, with Hegseth giving Anthropic CEO Dario Amodei until the end of this Friday to agree to the Department of Defense's terms or face potential penalties, according to Axios. This clash highlights the growing tension between the technology industry's efforts to develop responsible AI and the military's desire for unhindered access to powerful technologies.
Anthropic, which has positioned itself as the most safety-forward of the leading AI companies, has been vocal about the potential risks of advanced AI systems. The company's stance on limiting the military's use of Claude has reportedly led to a heated dispute with the Pentagon.
A meeting on Tuesday between US military leaders and Anthropic executives sought to resolve the ongoing conflict. The Pentagon has argued that it needs unfettered access to Claude's capabilities to support national security and defense operations, while Anthropic has insisted on maintaining safeguards to prevent the misuse of its technology.
The debate over the use of advanced AI in military applications is not a new one, but the standoff between Anthropic and the Pentagon underscores the growing complexity of navigating the ethical and legal boundaries of these emerging technologies. As the military seeks to leverage the power of AI for its operations, companies like Anthropic are grappling with the responsibility of ensuring their creations are not used in ways that could harm human life or violate fundamental rights.
The outcome of this dispute could have far-reaching implications for the future of AI development and its integration into national security frameworks. Both sides will need to find a balanced approach that addresses the military's needs while upholding the principles of responsible AI and safeguarding against potential abuses.
Source: The Guardian