OpenAI has formally confirmed that the compromise of ChatGPT users’ login credentials resulted in unauthorized access and account misuse. The statement follows recent claims by ArsTechnica, based on screenshots submitted by a reader, that ChatGPT was disclosing private conversations containing sensitive information such as usernames and passwords, a claim the company strongly denied.

OpenAI disputed the original ArsTechnica story as false and made it clear that its fraud and security teams were investigating the situation. According to OpenAI, a malicious actor obtained the leaked login credentials and used them to access the affected accounts. The disclosed chat history and files were the result of this unauthorized access, not of ChatGPT displaying another user’s history.

The affected user, however, did not believe their account had been compromised. OpenAI emphasized that further information about the scope of the breach, and the steps required to resolve the security issue, would become available as the investigation continued.

ArsTechnica’s initial report that ChatGPT was exposing private conversations raised concerns that sensitive information could be disclosed. After using ChatGPT for an unrelated query, the affected user found additional conversations in their chat history that did not belong to them.

The leaked conversations contained troubleshooting details, the name of an app, store numbers, and additional login credentials, apparently from a support system used by staff of a pharmacy prescription drug portal. Another leaked chat exposed details of a presentation and an unpublished research proposal.

This incident adds to earlier security issues with ChatGPT. In March 2023, a glitch reportedly leaked chat titles, and in November 2023, researchers were able to craft queries that retrieved confidential data used to train the language model.

As a result, users are encouraged to exercise caution when using AI chatbots such as ChatGPT, particularly bots developed by third parties. The absence of common security features on the ChatGPT website, such as two-factor authentication (2FA) and the ability to review recent logins, has also raised concerns about the platform’s security measures.

Topics #AI #Artificial Intelligence #ChatGPT #Login Credentials #news #OpenAI #Sam Altman