OpenAI has announced a radical tightening of its security procedures and its rules for cooperating with law enforcement agencies. The decision came after it was revealed that the perpetrator of the tragic shooting in Tumbler Ridge, Canada, bypassed a previous block by creating a second account on the ChatGPT service. The new protocols are intended to enable immediate reporting of credible threats to the police, which, according to company representatives, could have prevented the tragedy had the detection systems been working properly before the attack.
Account block bypass
The perpetrator of the shooting in Canada created a second ChatGPT account after the first one was blocked for aggressive content.
Cooperation with police
OpenAI announces automatic reporting to authorities of credible threats of violence detected by algorithms.
Security reform
The company is implementing new mechanisms aimed at preventing the planning of crimes using AI.
The American tech giant OpenAI has found itself at the center of the debate over the responsibility of artificial intelligence creators following the tragic events in Tumbler Ridge. The company officially admitted that the perpetrator of the shooting at a Canadian school had a second, hidden account on the ChatGPT service, which she likely used to plan her actions. Although the woman's main account had previously been blocked for violating the terms of service regarding the generation of violent content, OpenAI's systems failed to link the new activity to a person already banned from the platform.

The use of artificial intelligence to plan murders raises questions about the boundaries of privacy and user monitoring. In response to these shortcomings, OpenAI's management has announced the implementation of new, rigorous security protocols. A key change is the commitment to proactively inform law enforcement agencies of every detected case of "credible threats to life or health." Until now, the company had mainly responded to court orders, rarely initiating contact with the police on its own.

Since the debut of public versions of large language models in 2022, the ethics of their use and the risk of their exploitation by criminals have been the subject of intensive regulatory work in the USA and the European Union. The changes will also include better identification of users attempting to circumvent sanctions imposed on their accounts. An internal analysis conducted by OpenAI showed that, under the new rules, the perpetrator's activity would have been flagged as critical within minutes of the relevant queries being entered.

However, experts emphasize that increased surveillance of users may raise concerns among civil rights defenders. The company declares that a balance between privacy and security will be maintained, but the priority remains preventing the use of language models for criminal purposes.
"OpenAI would have alerted police to Canadian shooter if account was discovered today." — OpenAI spokesperson

This initiative is seen as a signal to the entire tech industry not to wait for top-down legal regulations but to independently create safety filters.
Media perspectives: One side emphasizes the need for rigorous control of tech corporations and the protection of life, even at the cost of limiting full online anonymity. The other expresses concerns about preventive censorship and the creation of a dangerous precedent of close cooperation between Big Tech and the police without judicial oversight.