On Monday, OpenAI temporarily disabled the ChatGPT AI bot after a bug briefly allowed some users to see the titles of other users’ conversation histories, though not the conversations themselves. On Friday, the company published its first findings on the incident.
Image Source: Jonathan Kemper/unsplash.com
To investigate the circumstances of the incident, the company took ChatGPT offline for almost ten hours. The security problem turned out to run deeper than chat titles: the same bug could also have exposed the personal data of 1.2% of paid ChatGPT Plus subscribers.
“In the hours before we disabled ChatGPT on Monday, some users could see another active user’s first and last name, email address, billing address, the last four digits of a credit card number, and its expiration date. Full credit card numbers were not exposed at any time,” the OpenAI team said on Friday.
“The data could have appeared in a subscription confirmation email sent on Monday, March 20, between 1:00 AM and 10:00 AM PT. Due to the bug, some subscription confirmation emails generated during this window were sent to the wrong users. These emails contained the last four digits of another user’s credit card number, but full credit card numbers did not appear. It is possible that a small number of subscription confirmation emails were sent by mistake prior to March 20, although we have not confirmed any such cases,” OpenAI warned users.
OpenAI also reports that it has fixed the underlying flaw, which it traced to an open-source Redis client library, redis-py. To prevent similar incidents from happening again, the company has added stricter checks around library calls and has “programmatically checked the logs to make sure that all messages are available only to the users they were intended for.”
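According to OpenAI’s technical write-up, the redis-py problem involved requests that were cancelled mid-exchange on a shared connection, leaving behind a response that could then be delivered to an unrelated request. The sketch below is a hypothetical, heavily simplified model of that class of failure, not redis-py’s or OpenAI’s actual code: on a shared pipelined connection, replies are matched to requests purely by order, so a cancellation between sending a command and reading its reply leaves the orphaned reply for the next caller.

```python
import asyncio
from collections import deque


class SharedConnection:
    """Toy model of one pipelined connection shared by many requests.

    Replies are matched to requests purely by arrival order, the way a
    pipelined client connection works: the Nth reply read belongs to the
    Nth command sent.
    """

    def __init__(self) -> None:
        self._pending_replies = deque()

    async def _send(self, command: str) -> None:
        # Pretend the command reaches the server instantly and the server
        # queues its reply on the connection.
        self._pending_replies.append(f"cached data for {command}")

    async def _read_reply(self) -> str:
        await asyncio.sleep(0.01)  # simulated network latency
        return self._pending_replies.popleft()

    async def execute(self, command: str) -> str:
        await self._send(command)
        # If the caller is cancelled between sending the command and reading
        # the reply, that reply stays queued on the shared connection and is
        # handed to whoever reads next.
        return await self._read_reply()


async def main() -> None:
    conn = SharedConnection()

    # User A's request is cancelled after the command was sent but before
    # its reply was consumed.
    task_a = asyncio.create_task(conn.execute("GET history:user_a"))
    await asyncio.sleep(0)      # let task_a send its command
    task_a.cancel()
    try:
        await task_a            # cancellation takes effect here
    except asyncio.CancelledError:
        pass

    # User B now uses the same connection and receives the reply that was
    # generated for user A.
    reply_for_b = await conn.execute("GET history:user_b")
    print(reply_for_b)          # -> cached data for GET history:user_a


asyncio.run(main())
```

Running the script prints the reply that was generated for user A, which is the kind of cross-user mix-up described above; the stricter checks on library calls that OpenAI mentions are aimed at catching exactly this sort of request/response mismatch.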