OpenAI, the company behind ChatGPT, has removed a clause from its usage policy that prohibited the use of its technology for military purposes.
Until Wednesday, January 10, OpenAI’s ‘Usage Policies’ page explicitly forbade ‘activity that has high risk of physical harm,’ including ‘weapons development’ and ‘military and warfare.’
However, as first reported by The Intercept, the policy now posted on the site simply instructs users not to ‘use our service to harm yourself or others,’ offering ‘develop or use weapons’ as an example. The blanket ban on ‘military and warfare’ applications has disappeared.
OpenAI says the change is intended to make the document ‘clearer’ and ‘easier to read,’ as part of a rewrite that includes many other substantial changes to wording and formatting.
Company spokesperson Niko Felix said, ‘We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can also build their own GPTs.’
He added, ‘A principle like “don’t harm others” is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples.’
However, Heidy Khlaaf, engineering director at the cybersecurity firm Trail of Bits, notes a distinct difference between the two policies: the original explicitly prohibited weapons development and military and warfare use, while the new one emphasizes flexibility and compliance with the law.
She pointed out that ‘weapons development and conducting military and warfare activities are lawful to varying degrees,’ and stressed that the implications for AI safety are significant, given the well-documented instances of bias and hallucination in large language models (LLMs) and their overall lack of accuracy.
She warned that their use in warfare could only lead to imprecise and biased operations likely to exacerbate harm and civilian casualties.
Sarah Myers West, managing director of the AI Now Institute, said, ‘Given the use of AI systems in the targeting of civilians in Gaza, the removal of the words “military and warfare” from OpenAI’s permissible use policy is a notable moment.’
She added that ‘the remaining language in the policy is vague and raises questions about how the company intends to enforce it.’
While no current OpenAI service could plausibly be used to kill someone directly, militarily or otherwise, since ChatGPT cannot maneuver a drone or fire a missile, any military is ultimately in the business of killing, or at least of maintaining the capacity to kill.