How Generative AI Can Dupe SaaS Authentication Protocols — And Effective Ways To Prevent Other Key AI Risks in SaaS


Security and IT teams are routinely forced to adopt software before fully understanding the security risks. And AI tools are no exception.

Employees and business leaders alike are flocking to generative AI software and similar programs, often unaware of the major SaaS security vulnerabilities they’re introducing into the enterprise. A February 2023 [generative AI survey of 1,000 executives]() revealed that 49% of respondents use ChatGPT now, and 30% plan to tap into the ubiquitous generative AI tool soon. Ninety-nine percent of those using ChatGPT claimed some form of cost-savings, and 25% attested to reducing expenses by $75,000 or more. As the researchers conducted this survey a mere three months after ChatGPT’s general availability, today’s ChatGPT and AI tool usage is undoubtedly higher.

Security and risk teams are already overwhelmed protecting their SaaS estate (which has now become the operating system of business) from common vulnerabilities such as misconfigurations and over-permissioned users. This leaves little bandwidth to assess the AI tool threat landscape, unsanctioned AI tools currently in use, and the implications for SaaS security.

With threats emerging outside and inside organizations, CISOs and their teams must understand the most relevant AI tool risks to SaaS systems — and how to mitigate them.

### 1 — Threat Actors Can Exploit Generative AI to Dupe SaaS Authentication Protocols

As ambitious employees devise ways for AI tools to help them accomplish more with less, so, too, do cybercriminals. Using generative AI with malicious intent is simply inevitable, and it’s already possible.

AI’s ability to impersonate humans exceedingly well renders weak SaaS authentication protocols especially vulnerable to hacking. According to _Techopedia_, threat actors can misuse generative AI for password-guessing, CAPTCHA-cracking, and building more potent malware. While these methods may sound limited in their attack range, the [**January 2023 CircleCI security breach**]() was attributed to a single engineer’s laptop becoming infected with malware.

Likewise, three noted technology academics recently posed a plausible hypothetical for generative AI running a phishing attack:


> _“A hacker uses ChatGPT to generate a personalized spear-phishing message based on your company’s marketing materials and phishing messages that have been successful in the past. It succeeds in fooling people who have been well trained in email awareness, because it doesn’t look like the messages they’ve been trained to detect.”_

Malicious actors will avoid the most fortified entry point — typically the SaaS platform itself — and instead target more vulnerable side doors. They won’t bother with the deadbolt and guard dog situated by the front door when they can sneak around back to the unlocked patio doors.

Relying on authentication alone to keep SaaS data secure is not a viable option. Beyond implementing multi-factor authentication (MFA) and physical security keys, security and risk teams need visibility and continuous monitoring for the entire SaaS perimeter, along with automated alerts for suspicious login activity.
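
To make the monitoring requirement concrete, here is a minimal sketch of the kind of automated check an alerting pipeline might run against normalized SaaS login audit events. The event fields, thresholds, and alert wording are hypothetical stand-ins for whatever your SaaS platforms and security tooling actually expose.

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical shape of a normalized SaaS login audit event:
# {"user": "jane@example.com", "timestamp": datetime, "country": "US",
#  "mfa_passed": False, "success": False}

FAILED_LOGIN_THRESHOLD = 5          # failed attempts per window (illustrative)
WINDOW = timedelta(minutes=15)

def detect_suspicious_logins(events, known_countries):
    """Return a list of alert strings for login patterns worth investigating."""
    alerts = []
    failures = defaultdict(list)

    for e in sorted(events, key=lambda e: e["timestamp"]):
        # Successful login from a country never seen for this user before.
        if e["success"] and e["country"] not in known_countries.get(e["user"], set()):
            alerts.append(f"New-country login for {e['user']} from {e['country']}")

        # Burst of failed logins, a common signature of automated password guessing.
        if not e["success"]:
            recent = [t for t in failures[e["user"]] if e["timestamp"] - t < WINDOW]
            recent.append(e["timestamp"])
            failures[e["user"]] = recent
            if len(recent) >= FAILED_LOGIN_THRESHOLD:
                alerts.append(f"{len(recent)} failed logins for {e['user']} within {WINDOW}")

        # Successful login that skipped or failed MFA.
        if e["success"] and not e.get("mfa_passed", True):
            alerts.append(f"Login without MFA for {e['user']}")

    return alerts
```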

These insights are necessary not only for cybercriminals’ generative AI activities but also for employees’ AI tool connections to SaaS platforms.

### 2 — Employees Connect Unsanctioned AI Tools to SaaS Platforms Without Considering the Risks

Employees are now relying on unsanctioned AI tools to make their jobs easier. After all, who wants to work harder when AI tools increase effectiveness and efficiency? Like any form of [shadow IT](), employee adoption of AI tools is driven by the best intentions.

For example, an employee is convinced they could manage their time and to-dos better, but the effort to monitor and analyze their task management and meeting commitments feels like a large undertaking. AI can perform that monitoring and analysis with ease and provide recommendations almost instantly, giving the employee the productivity boost they crave in a fraction of the time. Signing up for an AI scheduling assistant, from the end-user’s perspective, is as simple and (seemingly) innocuous as the three steps below (a sketch of the underlying consent request follows the list):

* Registering for a free trial or enrolling with a credit card
* Agreeing to the AI tool’s Read/Write permission requests
* Connecting the AI scheduling assistant to their corporate Gmail, Google Drive, and Slack accounts

This process, however, creates invisible conduits to an organization’s most sensitive data. These AI-to-SaaS connections inherit the user’s permission settings, allowing a hacker who successfully compromises the AI tool to move quietly and laterally across the authorized SaaS systems. The hacker can access and exfiltrate data until the suspicious activity is noticed and acted on, a window that can range from weeks to years.

AI tools, like most SaaS apps, use [**OAuth access tokens for ongoing connections to SaaS platforms**](). Once the authorization is complete, the token for the AI scheduling assistant maintains consistent, API-based communication with the Gmail, Google Drive, and Slack accounts, all without requiring the user to log in or re-authenticate at any regular interval. The threat actor who can capitalize on this OAuth token has stumbled on the SaaS equivalent of spare keys “hidden” under the doormat.


Figure 1: An illustration of an AI tool establishing an OAuth token connection with a major SaaS platform. Credit: AppOmni
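
To illustrate why these grants are so attractive to attackers, below is a minimal sketch of how OAuth access tokens, once issued to the scheduling assistant, keep authorizing API calls on the user’s behalf. The token values are placeholders; the endpoints shown are the standard Gmail REST API and Slack Web API routes, and anyone holding the tokens, legitimate tool or threat actor, can make the same calls without the user ever logging in again.

```python
import requests

# Placeholders for the long-lived credentials issued during the OAuth consent flow.
# Whoever holds these tokens can act with the user's granted Read/Write scopes.
GOOGLE_TOKEN = "ya29.example-google-access-token"   # hypothetical value
SLACK_TOKEN = "xoxp-example-slack-user-token"       # hypothetical value

# Read the user's Gmail messages via the Gmail REST API: no password, no MFA prompt.
gmail_resp = requests.get(
    "https://gmail.googleapis.com/gmail/v1/users/me/messages",
    headers={"Authorization": f"Bearer {GOOGLE_TOKEN}"},
    params={"maxResults": 10},
    timeout=10,
)

# List the Slack channels the user can see via the Slack Web API, same pattern.
slack_resp = requests.get(
    "https://slack.com/api/conversations.list",
    headers={"Authorization": f"Bearer {SLACK_TOKEN}"},
    timeout=10,
)

print(gmail_resp.status_code, slack_resp.status_code)
```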

Security and risk teams often lack the SaaS security tooling to monitor or control such an attack surface risk. Legacy tools like cloud access security brokers (CASBs) and secure web gateways (SWGs) won’t detect or alert on AI-to-SaaS connectivity.

Yet these AI-to-SaaS connections aren’t the only means by which employees can unintentionally expose sensitive data to the outside world.

### 3 — Sensitive Information Shared with Generative AI Tools Is Susceptible to Leaks

The data employees submit to generative AI tools — often with the goal of expediting work and improving its quality — can end up in the hands of the AI provider itself, an organization’s competitors, or the general public.

Because most generative AI tools are free and exist outside the organization’s tech stack, security and risk professionals have no oversight or security controls for these tools. This is a growing concern among enterprises, and generative AI data leaks have already happened.

A March 2023 incident inadvertently enabled [ChatGPT users to see other users’ chat titles and histories]() in the website’s sidebar. Concern arose not just over leaked organizational information but also over user identities being revealed and compromised. OpenAI, the developer of ChatGPT, responded by giving users the ability to turn off chat history. In theory, this option stops ChatGPT from sending data back to OpenAI for product improvement, but it requires employees to manage their own data retention settings. Even with this setting enabled, OpenAI retains conversations for 30 days and exercises the right to review them “for abuse” prior to their expiration.

This bug and the data retention fine print haven’t gone unnoticed. In May, [Apple restricted employees from using ChatGPT]() over concerns of confidential data leaks. While the tech giant took this stance as it builds its own generative AI tools, it joined enterprises such as Amazon, Verizon, and JPMorgan Chase in the ban. Apple also directed its developers to avoid GitHub Copilot, owned by top competitor Microsoft, for automating code.

Common generative AI use cases are replete with data leak risks. Consider a product manager who prompts ChatGPT to make the message in a product roadmap document more compelling. That product roadmap almost certainly contains product information and plans never intended for public consumption, let alone a competitor’s prying eyes. A similar ChatGPT bug — which an organization’s IT team has no ability to escalate or remediate — could result in serious data exposure.

Stand-alone generative AI does not create SaaS security risk. But what’s isolated today is connected tomorrow. Ambitious employees will naturally seek to extend the usefulness of unsanctioned generative AI tools by integrating them into SaaS applications. Currently, [ChatGPT’s Slack integration]() demands more work than the average Slack connection, but it’s not an exceedingly high bar for a savvy, motivated employee. The integration uses OAuth tokens exactly like the AI scheduling assistant example described above, exposing an organization to the same risks.

## How Organizations Can Safeguard Their SaaS Environments from Significant AI Tool Risks

Organizations need guardrails in place for AI tool data governance, specifically for their SaaS environments. This requires comprehensive SaaS security tooling and proactive cross-functional diplomacy.

Employees use unsanctioned AI tools largely due to limitations of the approved tech stack. The desire to boost productivity and increase quality is a virtue, not a vice. There’s an unmet need, and CISOs and their teams should approach employees with an attitude of collaboration versus condemnation.

Good-faith conversations with leaders and end-users regarding their AI tool requests are vital to building trust and goodwill. At the same time, CISOs must convey legitimate security concerns and the potential ramifications of risky AI behavior. Security leaders should consider themselves the accountants who explain the best ways to work within the tax code rather than the IRS auditors perceived as enforcers unconcerned with anything beyond compliance. Whether it’s putting proper security settings in place for the desired AI tools or sourcing viable alternatives, the most successful CISOs strive to help employees maximize their productivity.

Fully understanding and addressing the risks of AI tools requires a comprehensive and robust SaaS security posture management ([**SSPM**]()) solution. SSPM provides security and risk practitioners the insights and visibility they need to navigate the ever-changing state of SaaS risk.

To improve authentication strength, security teams can use SSPM to enforce MFA throughout all SaaS apps in the estate and monitor for configuration drift. SSPM enables security teams and SaaS app owners to enforce best practices without studying the intricacies of each SaaS app and AI tool setting.
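
As a rough illustration of what a configuration-drift check does under the hood, the sketch below compares each app’s current settings against a desired baseline and reports deviations such as MFA being disabled. The `fetch_current_settings` helper and the setting names are hypothetical stand-ins for whatever an SSPM platform or the individual SaaS admin APIs actually expose.

```python
# Desired security baseline for every SaaS app in the estate (illustrative values).
BASELINE = {
    "mfa_required": True,
    "session_timeout_minutes": 30,
    "third_party_oauth_apps_allowed": False,
}

def fetch_current_settings(app_name):
    """Hypothetical helper: in practice this would call the SaaS admin API
    or the SSPM platform's inventory endpoint."""
    raise NotImplementedError

def check_drift(app_names):
    """Compare each app's live settings with the baseline and report drift."""
    findings = []
    for app in app_names:
        current = fetch_current_settings(app)
        for setting, expected in BASELINE.items():
            actual = current.get(setting)
            if actual != expected:
                findings.append(
                    f"{app}: '{setting}' is {actual!r}, expected {expected!r}"
                )
    return findings

# Example: check_drift(["salesforce", "google-workspace", "slack"])
```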

The ability to inventory unsanctioned and approved AI tools connected to the SaaS ecosystem will reveal the most urgent risks to investigate. Continuous monitoring automatically alerts security and risk teams when new AI connections are established. This visibility plays a substantial role in reducing the attack surface and taking action when an unsanctioned, insecure, and/or over-permissioned AI tool surfaces in the SaaS ecosystem.
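
A simplified sketch of that inventory-and-alert loop is shown below: it diffs the current list of third-party OAuth grants against the previously known set and flags anything new for review. The `list_oauth_grants` and `notify_security_team` helpers are hypothetical; in practice an SSPM product or each platform’s admin API would supply this data and alerting channel.

```python
import json
from pathlib import Path

STATE_FILE = Path("known_ai_connections.json")  # illustrative local state

def list_oauth_grants():
    """Hypothetical helper returning third-party app grants across the SaaS estate,
    e.g. [{"app": "ai-scheduler", "user": "jane@example.com", "scopes": [...]}]."""
    raise NotImplementedError

def notify_security_team(message):
    """Hypothetical alert hook (email, chat webhook, ticket, etc.)."""
    print(f"ALERT: {message}")

def detect_new_connections():
    known = set(json.loads(STATE_FILE.read_text())) if STATE_FILE.exists() else set()
    current = {f"{g['app']}::{g['user']}" for g in list_oauth_grants()}

    for new in sorted(current - known):
        app, user = new.split("::")
        notify_security_team(f"New third-party/AI connection: {app} authorized by {user}")

    STATE_FILE.write_text(json.dumps(sorted(current)))
```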

AI tool reliance will almost certainly continue to spread rapidly, and outright bans are never foolproof. Instead, a pragmatic mix of security leaders who share their peers’ goal of boosting productivity and reducing repetitive tasks, coupled with [**the right SSPM solution**](), is the best approach to drastically cutting down the risk of SaaS data exposure or breach.
