OpenAI Updates Pentagon Agreement
OpenAI revised its agreement to work with the US Department of Defense following criticism from users and industry experts. The company acknowledged that it had rushed its original announcement and failed to clearly explain the key safeguards.
The company’s chief executive, Sam Altman, said the contract will be amended to include stricter terms. The updated agreement will expressly prohibit the use of OpenAI’s models for surveillance of US citizens or nationals. The rollout, he conceded, had been rushed.
Anthropic Dispute and Its Aftermath
Tensions erupted between Anthropic, the Pentagon, and other parties. Anthropic reportedly declined to allow its AI model Claude to be used for fully autonomous weapons or mass surveillance, a refusal that caused friction with US defense officials.
Pentagon officials have not publicly commented on their discussions with Anthropic.
Users’ Reactions and App Rankings
The uninstall rate for the ChatGPT mobile app rose dramatically over the past weekend, according to reports, while Claude climbed the App Store rankings over the same period.
The episode has rekindled debate over how private AI companies work with governments and the influence they wield in military operations.
AI in Modern Warfare
AI plays an important role in the Pentagon’s defense strategy. Its defense platform, Maven, pulls in vast streams of data, from satellite imagery to field reports.
Nevertheless, large language models can produce false or misleading output, known as hallucinations. Military officials insist that human supervision remains central.
Some experts worry that removing safety-focused firms from defense discussions will weaken oversight. Oxford University professor Mariarosaria Taddeo argued that excluding the more conservative AI firms could carry long-term risks.
AI and War: The Bigger Question
The situation is a symptom of a larger problem: as AI tools grow more powerful, governments increasingly want access to them.
