Hegseth and Anthropic CEO set to meet as debate intensifies
WASHINGTON -- Defense Secretary Pete Hegseth plans to meet Tuesday with the CEO of Anthropic, with the artificial intelligence company the only one of its peers to not supply its technology to a new U.S. military internal network.
Anthropic, maker of the chatbot Claude, declined to comment on the meeting, but CEO Dario Amodei has made clear his ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and of AI-assisted mass surveillance that could track dissent.
The meeting between Hegseth and Amodei was confirmed by a defense official who was not authorized to comment publicly and spoke on condition of anonymity.
It underscores the debate over AI's role in national security and concerns about how the technology could be used in high-stakes situations involving lethal force, sensitive information or government surveillance. It also comes as Hegseth has vowed to root out what he calls a “woke culture” in the armed forces.
“A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow,” Amodei wrote in an essay last month.
The Pentagon announced last summer that it was awarding defense contracts to four AI companies — Anthropic, Google, OpenAI and Elon Musk’s xAI. Each contract is worth up to $200 million.
Anthropic was the first AI company to get approved for classified military networks, where it works with partners like Palantir. The other three companies, for now, are only operating in unclassified environments.
By early this year, Hegseth was highlighting only two of them: xAI and Google.
The defense secretary said in a January speech at Musk’s space flight company, SpaceX, in South Texas that he was shrugging off any AI models “that won’t allow you to fight wars.”
Hegseth said his vision for military AI systems means that they operate “without ideological constraints that limit lawful military applications,” before adding that the Pentagon’s “AI will not be woke.”
In January, Hegseth said Musk’s artificial intelligence chatbot Grok would join the Pentagon network, called GenAI.mil. The announcement came days after Grok — which is embedded into X, the social media network owned by Musk — drew global scrutiny for generating highly sexualized deepfake images of people without their consent.
OpenAI announced in early February that it, too, would join the military's secure AI platform, enabling service members to use a custom version of ChatGPT for unclassified tasks.
Anthropic has long pitched itself as the more responsible and safety-minded of the leading AI companies, ever since its founders quit OpenAI to form the startup in 2021.
The uncertainty with the Pentagon is putting those intentions to the test, according to Owen Daniels, associate director of analysis and fellow at Georgetown University’s Center for Security and Emerging Technology.
“Anthropic’s peers, including Meta, Google and xAI, have been willing to comply with the department’s policy on using models for all lawful applications,” Daniels said. “So the company’s bargaining power here is limited, and it risks losing influence in the department’s push to adopt AI.”
In the AI craze that followed the release of ChatGPT, Anthropic closely aligned with President Joe Biden’s administration in volunteering to subject its AI systems to third-party scrutiny to guard against national security risks.
Amodei, the CEO, has warned of AI’s potentially catastrophic dangers while rejecting the label that he’s an AI “doomer.” He argued in the January essay that “we are considerably closer to real danger in 2026 than we were in 2023” but that those risks should be managed in a “realistic, pragmatic manner.”
This would not be the first time Anthropic’s advocacy for stricter AI safeguards has put it at odds with the Trump administration. Anthropic needled chipmaker Nvidia publicly, criticizing Trump’s proposals to loosen export controls to enable some AI computer chips to be sold in China. The AI company, however, remains a close partner with Nvidia.
The Trump administration and Anthropic also have been on opposite sides of a lobbying push to regulate AI in U.S. states.
Trump’s top AI adviser, David Sacks, accused Anthropic in October of “running a sophisticated regulatory capture strategy based on fear-mongering.”
Sacks made the remarks on X in response to an Anthropic co-founder, Jack Clark, writing about his attempt to balance technological optimism with “appropriate fear” about the steady march toward more capable AI systems.
Anthropic hired a number of ex-Biden officials soon after Trump’s return to the White House, but it’s also tried to signal a bipartisan approach. The company recently added Chris Liddell, a former White House official from Trump’s first term, to its board of directors.
The Pentagon-Anthropic debate is reminiscent of an uproar several years ago when some tech workers objected to their companies’ participation in Project Maven, a Pentagon drone surveillance program. While some workers quit over the project and Google itself dropped out, the Pentagon’s reliance on drone surveillance has only increased.
Similarly, “the use of AI in military contexts is already a reality and it is not going away,” Daniels said.
“Some contexts are lower stakes, including for back-office work, but battlefield deployments of AI entail different, higher-stakes risks,” he said, referring to the use of lethal force or weapons like nuclear arms. “Military users are aware of these risks and have been thinking about mitigation for almost a decade.”





