
US Military Pressures Anthropic Over AI Use Restrictions

The US Pentagon demands changes to Anthropic’s AI safeguards, threatening penalties unless conditions are met by Friday.

Leaders from the United States military are applying pressure on Anthropic, an artificial intelligence firm known for its emphasis on safety, to relax restrictions concerning its AI model, Claude. During a meeting on Tuesday, Defence Secretary Pete Hegseth provided Anthropic CEO Dario Amodei with a deadline of Friday to accept military terms or face potential penalties, according to Axios.

Anthropic has drawn particular scrutiny because it was the first AI firm to have its model integrated into classified military operations. The dispute stems primarily from the Pentagon’s desire for broader access to Claude’s capabilities, while Anthropic aims to uphold its safeguards against the model being used for mass surveillance or for programming autonomous weapons systems.

Since Claude’s deployment, the Pentagon has used the model in various operations, but tensions have arisen over the limitations Anthropic has placed on its use. The Department of Defense (DoD) has warned that failure to comply could result in penalties, including the cancellation of existing contracts and potential designation as a supply chain risk.

In July of last year, the DoD entered into contracts with several AI organisations, including Anthropic, Google, and OpenAI, each valued at up to $200 million. More recently, the Pentagon expanded access to another AI model, from Elon Musk’s xAI, which now also has the green light for classified use.

OpenAI, for its part, has seemingly aligned with the government’s stipulations, permitting its technology to be used for all lawful purposes. OpenAI did not immediately comment on its arrangement with the Pentagon.

The backdrop to these negotiations includes the recent military operation that captured Venezuelan President Nicolás Maduro, which reportedly utilised Anthropic’s Claude software. The integration of AI into military operations has been a central focus of successive administrations, notably since former President Donald Trump reinforced the intention to lead in the AI landscape.

Emil Michael, the chief technology officer at the Pentagon, has publicly urged Anthropic to reconsider its stance. He stated, “If someone wants to profit from the government, especially from the US Department of War, the regulations should be adapted for our purposes as long as they remain lawful.”

Amodei has positioned himself as a proponent of stronger regulation of AI technology, voicing concerns about its unchecked governmental applications. Debates surrounding autonomous weapons have gained prominence, prompting ethical discussions in light of developments in conflicts such as the war in Ukraine, where semi-autonomous drones are deployed.

As negotiations continue, the Pentagon has emphasized the importance of clear access to advanced AI tools for operational readiness and effectiveness, suggesting that current limitations are hindering progress. Nonetheless, Anthropic remains resolute in its commitment to ethical guidelines regarding the use of its AI technologies, ensuring they are not involved in harmful applications that could undermine public trust and safety.
