OpenAI's Design Details Stolen In 2023 By Hacker Who Infiltrated Company's Messaging System: Report

Benzinga

2024-07-05 16:41


  In a shocking revelation, OpenAI was found to have suffered a security breach last year. The intruder infiltrated the company's internal messaging systems and stole details about the design of OpenAI's AI technologies.

  What Happened: The hacker gained access to an online forum where OpenAI employees were discussing the company's latest technologies, The New York Times reported on Friday. The intruder lifted details from these discussions but did not gain access to the systems where the company builds its artificial intelligence, according to two individuals familiar with the incident.

  The incident was disclosed to OpenAI employees and its board of directors during an all-hands meeting at the company's San Francisco offices in April 2023. The executives decided not to share the news publicly because no customer or partner information was stolen, and they did not consider the incident a threat to national security.

  However, the breach did raise concerns among some OpenAI employees about the potential for foreign adversaries, such as China, to steal AI technology that could eventually pose a threat to U.S. national security.


  Following the breach, Leopold Aschenbrenner, an OpenAI technical program manager, sent a memo to OpenAI's board of directors, arguing that the company was not doing enough to prevent foreign adversaries from stealing its secrets.

  Aschenbrenner was later fired for leaking information outside the company. He alluded to the breach on a recent podcast, stating that OpenAI's security wasn't strong enough to protect against the theft of key secrets if foreign actors were to infiltrate the company.

  OpenAI spokeswoman Liz Bourgeois disagreed with Aschenbrenner's characterizations of the company's security, stating that the incident had been addressed and shared with the board before Aschenbrenner joined the company.

  Despite the concerns, Matt Knight, OpenAI's head of security, emphasized the importance of having the best minds working on AI technology, acknowledging that it comes with some risks.

  Why It Matters: This incident comes on the heels of OpenAI's decision to disband a team dedicated to ensuring the safety of potentially ultra-capable AI systems. The company also restricted access to its tools and software in China amid rising tensions.

  These actions were seen as an attempt to exclude users in countries where OpenAI's services are not officially available. The breach also coincides with a report that China has surged ahead in the global race for generative AI patents, despite U.S. sanctions.

  The report comes on the heels of another reported security flaw in OpenAI's macOS app, where conversations were being stored in plain text, making them vulnerable to unauthorized access. The issue has since been resolved.
