The US artificial intelligence (AI) firm Anthropic is looking to hire a chemical weapons and high-yield explosives expert to try to prevent 'catastrophic misuse' of its software.

In other words, it fears that its AI tools might tell someone how to make chemical or radiological weapons, and it wants an expert to ensure its guardrails are sufficiently robust. In the LinkedIn recruitment post, the firm states that applicants should have at least five years of experience in chemical weapons and/or explosives defense, as well as knowledge of radiological dispersal devices, also known as dirty bombs.

The company says the role is similar to positions it has already created in other sensitive areas. Anthropic is not alone in this approach: OpenAI is advertising a comparable post for a researcher in biological and chemical risks, at a salary nearly double Anthropic's offer. Some experts, however, have expressed alarm at the recruitment strategy, questioning whether it is safe for AI systems to handle sensitive information about chemicals and explosives at all.

Such precautions have taken on new urgency as the US government pushes AI firms to support military operations. Anthropic is also reportedly in a dispute with the US Department of Defense over how its models may be used, underscoring the broader implications of AI technology in warfare.