OpenAI did not reveal security breach in 2023 – NYT

07/06/2024 08:00
OpenAI experienced a security breach in 2023 but did not disclose the incident outside the company, The New York Times said on July 4.

The AI firm reportedly did not inform the FBI, law enforcement, or the public.

OpenAI executives allegedly disclosed the incident internally during an April 2023 meeting but did not reveal it publicly because the attacker did not access information about customers or partners.

Furthermore, executives did not consider the incident a national security threat because they believed the attacker was a private individual with no connection to a foreign government. They did not report the incident to the FBI or other law enforcement agencies.

The attacker reportedly accessed OpenAI’s internal messaging systems and stole details about the firm’s AI technology designs from employee conversations in an online forum. They did not access the systems where OpenAI “houses and builds its artificial intelligence,” nor did they access code.

The New York Times cited two individuals familiar with the matter as sources.

Ex-employee expressed concern

The New York Times also referred to Leopold Aschenbrenner, a former OpenAI researcher who sent a memo to OpenAI directors after the incident and called for measures to prevent China and other foreign governments from stealing company secrets.

The New York Times said Aschenbrenner alluded to the incident on a recent podcast.

OpenAI representative Liz Bourgeois said the firm appreciated Aschenbrenner’s concerns and expressed support for safe AGI development but contested specifics. She said:

“We disagree with many of [Aschenbrenner’s claims] … This includes his characterizations of our security, notably this incident, which we addressed and shared with our board before he joined the company.”

Aschenbrenner said that OpenAI fired him for leaking other information and for political reasons. Bourgeois said Aschenbrenner’s concerns did not lead to his separation.

OpenAI head of security Matt Knight emphasized the company's security commitments. He told The New York Times that the company "started investing in security years before ChatGPT." He acknowledged that AI development "comes with some risks, and we need to figure those out."

The New York Times disclosed an apparent conflict of interest by noting that it sued OpenAI and Microsoft over alleged copyright infringement of its content. OpenAI believes the case is without merit.
