The US artificial intelligence (AI) firm Anthropic is looking to hire a chemical weapons and high-yield explosives expert to try to prevent catastrophic misuse of its software. In other words, it fears that its AI tools might tell someone how to make chemical or radiological weapons, and wants an expert to ensure its guardrails are sufficiently robust.
In the LinkedIn recruitment post, the firm says applicants should have a minimum of five years' experience in chemical weapons and/or explosives defence, as well as knowledge of radiological dispersal devices – also known as dirty bombs. The firm told the BBC the role was similar to jobs it has already created in other sensitive areas.
Anthropic is not the only AI firm adopting this strategy. A similar position has been advertised by ChatGPT developer OpenAI, which lists a job vacancy for a researcher in biological and chemical risks, with a salary of up to $455,000, almost double that offered by Anthropic.
However, some experts warn that this approach could itself pose risks by giving AI tools access to sensitive information. Dr Stephanie Hare, a tech researcher and co-presenter of the BBC's AI Decoded TV programme, questioned the safety of using AI systems to handle such material, noting the absence of regulations governing this area.
The urgency of these issues has increased with the US government calling on AI firms amid military operations in conflict regions. As Anthropic navigates its responsibilities, questions remain about the implications of providing AI systems with information about weapons and explosives, highlighting the complex intersection of technology, ethics, and security.





















