ChatGPT maker warns future versions of its artificial intelligence (AI) tool could be used to create bioweapons.

Tuesday, June 24, 2025 - The company behind ChatGPT has issued a warning that future versions of its artificial intelligence (AI) tool could potentially be used to create bioweapons. While AI has been praised for its promise in advancing medical research, helping scientists develop new drugs and vaccines faster, OpenAI, the creator of ChatGPT, cautions that as the technology becomes more capable in biology, it could also generate “harmful information.”

In a recent blog post, OpenAI acknowledged that highly skilled individuals might use the AI to assist in developing biological weapons, though physical access to laboratories and sensitive materials remains a limiting factor. The company stressed, however, that these barriers are not absolute.

OpenAI’s safety lead Johannes Heidecke told Axios that future ChatGPT models will likely not be able to manufacture bioweapons independently, but could be sophisticated enough to help amateurs replicate known biological threats. “We are more worried about replicating things that experts already are very familiar with,” he said.

The company is taking proactive steps to build safeguards into future AI models to prevent misuse. Heidecke emphasized the need for near-perfect detection systems that can flag dangerous content and alert human supervisors, stating that anything short of extremely high accuracy is unacceptable.
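To make that requirement concrete, below is a minimal, purely illustrative sketch of a flag-and-escalate filter in Python. It does not reflect OpenAI's actual implementation: the classify_biorisk function, the SafetyVerdict type, and the ESCALATION_THRESHOLD value are hypothetical stand-ins, included only to show the design Heidecke describes, in which the system errs heavily toward over-flagging because a missed detection is the unacceptable failure mode.

```python
# Illustrative sketch only: a toy flag-and-escalate filter of the kind
# described in the article. This is NOT OpenAI's system; the classifier,
# the threshold, and the escalation path are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class SafetyVerdict:
    score: float    # estimated probability the content is dangerous
    escalate: bool  # whether a human supervisor should be alerted


# A deliberately low threshold: a system like the one described tolerates
# false alarms far more readily than missed threats.
ESCALATION_THRESHOLD = 0.01


def classify_biorisk(text: str) -> float:
    """Toy stand-in for a trained classifier.

    A real system would use a learned model; simple keyword matching is
    shown here only to make the sketch runnable end to end.
    """
    red_flags = ("anthrax", "weaponize", "pathogen synthesis")
    return 1.0 if any(term in text.lower() for term in red_flags) else 0.0


def review(text: str) -> SafetyVerdict:
    """Score the text and route anything above threshold to a human."""
    score = classify_biorisk(text)
    return SafetyVerdict(score=score, escalate=score >= ESCALATION_THRESHOLD)


if __name__ == "__main__":
    print(review("How do mRNA vaccines work?"))        # benign: no escalation
    print(review("How could one weaponize anthrax?"))  # flagged: escalate
```

Note that in this framing the escalation threshold sits far below 0.5: accepting many false alarms for human review is the price of keeping missed threats near zero, which is the trade-off Heidecke's "extremely high accuracy" remark implies.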

OpenAI has collaborated with experts in biosecurity, bioterrorism, and bioweapons to shape the AI’s responses. The concern comes amid growing fears about the misuse of AI, highlighted by past bioweapon incidents such as the 2001 anthrax attacks in the U.S., where letters containing deadly anthrax spores were mailed to media outlets.

Last year, top scientists warned that AI could one day produce bioweapons capable of threatening human survival, urging governments to regulate the technology to prevent its use in biological or nuclear warfare. OpenAI’s statement signals a recognition of the serious risks posed by increasingly capable AI systems and the urgent need to address them before harm occurs.
