As we entered 2023, Artificial Intelligence (AI) tools and technologies began to take hold in almost every aspect of the business. The benefits are many: these tools present great potential for progress and increased productivity in the workplace. The risk? As with any tool that promises to bring speed and agility to the business, it is security.
We already live in a data-laden society that is re-evaluating what constitutes an appropriate standard of personal privacy. Mishandled customer data can easily lead to privacy breaches and a loss of trust, which may in turn result in financial harm and damage to the company's reputation. The movement to return control of personal data to individuals has produced a wave of regulations and standards designed to give people more say in how their information is processed, handled, and protected.
There is a material risk of end users inadvertently sharing sensitive or personal information while using AI tools, thus compromising privacy – especially in content collaboration applications, where data is created, accessed, shared, and manipulated in high volume. The bigger problem is that introducing an unsanctioned application into the environment significantly increases the risk of a supply chain-based attack.
Why? Application-to-application connectivity expands the attack surface by introducing machine identities that are often over-privileged, unsanctioned, and typically outside the visibility of the security team. When a machine identity is compromised (for example, via a stolen OAuth token), it can provide unauthorized access to sensitive data within the applications it is connected to.
These “non-human” identities can gain permissions to read, write, and delete sensitive data – which can significantly impact an organization's security, business, and compliance risk. Supply chain-based attacks involving machine identities and their associated credentials are more common than ever before, and the increased use of AI-powered applications will only make matters worse.
The DoControl Data Research team recently reported a surge in 3rd party AI-powered application installations on Google Workspace and Microsoft 365 environments since the beginning of this year – essentially as soon as these tools hit the marketplace. Many of these applications are considered to have risky OAuth permissions attached to them (i.e. they have far more permissions than required).
The team highlights that 59% of companies had 5 or more AI shadow apps, 24% of companies with AI shadow apps had extensive data permissions, and 12% of companies had extremely high-risk shadow apps within their environment. What sort of applications are we talking about here? Here’s a short list of some of the more compelling categories that were found:
- Data scraping tools that extract data from websites and turn it into a spreadsheet or an API;
- Neural language model chatbots that generate human-like text responses and participate in contextual conversations;
- Code testing tools that analyze code and generate meaningful tests to catch bugs before the code ships;
- Chrome extensions that autocomplete sentences to write emails, create documents, and chat at a faster pace;
- Sales and marketing lead correlation to automatically associate email messages, meetings and attachments to the right accounts;
- An AI-powered search engine that helps connect directly with ideal customers to build pipeline, shorten sales cycles, and close more deals;
- Photo and video editing workflow generators, as well as professionally generated slide decks/presentations.
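Each of the app categories above connects through OAuth scopes, and the breadth of those scopes is what determines how risky the grant is. As a minimal, hypothetical sketch of triaging granted scopes by breadth of access – the scope URLs are real Google OAuth scopes, but the risk tiers are illustrative assumptions, not DoControl's actual model:

```python
# Broad scopes grant account- or tenant-wide read/write/delete access.
HIGH_RISK_SCOPES = {
    "https://www.googleapis.com/auth/drive",       # full Drive read/write/delete
    "https://mail.google.com/",                    # full Gmail access
    "https://www.googleapis.com/auth/admin.directory.user",  # manage users
}

# Narrow scopes limit the app to files it created or basic profile info.
LOW_RISK_SCOPES = {
    "https://www.googleapis.com/auth/drive.file",      # only app-created files
    "https://www.googleapis.com/auth/userinfo.email",  # basic profile email
}

def risk_of(granted_scopes):
    """Return 'high', 'low', or 'review' for a set of granted scopes."""
    scopes = set(granted_scopes)
    if scopes & HIGH_RISK_SCOPES:
        return "high"   # any broad scope makes the whole grant high-risk
    if scopes <= LOW_RISK_SCOPES:
        return "low"    # every scope is a known narrow one
    return "review"     # unrecognized scopes deserve a manual look
```

For example, a data scraping tool that requests only `drive.file` would score "low", while a chatbot holding the full `auth/drive` scope scores "high" – the same far-more-permissions-than-required pattern described above.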
There are some really strong business cases for the applications highlighted in this list. However, if these connections are misconfigured or excessively permissive, they can pose material risk to the business – and they do not go through the same security checks and controls as sanctioned apps. Attackers can impersonate apps and engage in OAuth phishing attacks through malicious third-party connections. Outdated tokens, keys, and unused applications are redundant and impose unnecessary risk. Unverified marketplace apps can request authorizations that allow them to read, edit, download, and permanently delete files, as well as connect to external services. These are just a few examples of the risks imposed by 3rd party OAuth apps.
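The "outdated tokens and unused applications" problem lends itself to simple automation. Below is a minimal sketch of flagging stale OAuth grants for revocation, assuming you have already exported each app's last-used timestamp from your admin console – the field names and the 90-day policy window are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

# Assumed policy: a grant unused for 90 days is a revocation candidate.
STALE_AFTER = timedelta(days=90)

def stale_grants(grants, now=None):
    """Return names of apps whose tokens were last used outside the policy window.

    `grants` is a list of dicts with 'app' (str) and 'last_used'
    (timezone-aware datetime) keys -- an assumed export format.
    """
    now = now or datetime.now(timezone.utc)
    return [g["app"] for g in grants if now - g["last_used"] > STALE_AFTER]
```

A security team could run a sweep like this on a schedule and feed the resulting list into a token-revocation workflow, shrinking the pool of abandoned machine identities an attacker could pick up.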
As AI tools and applications continue to gain traction, it is critical for organizations to consider the downstream effects of bringing new technologies into the fold, as they may inadvertently increase the organization's risk exposure. Some of the applications mentioned above enable business users to be more effective in their roles, but that convenience can never come at the cost of security. Even the introduction of sanctioned AI-powered applications inherently widens the attack surface.
Modern businesses need to prevent the compromise of risky interconnected applications (i.e. unsanctioned, abandoned, vulnerable, malicious, and over-privileged apps) in order to protect business-critical data within the SaaS estate. This goes for AI-powered apps and otherwise. Organizations need to establish full visibility into all sanctioned and unsanctioned SaaS applications, and enforce strong governance controls that automatically close compliance gaps and remediate the risk of supply chain-based attacks.
Watch this video to see how DoControl’s SaaS Security Platform provides shadow application governance through SaaS application discovery, control, and remediation.
Learn more by visiting our Solutions Page, and when you’re ready request a demo or take a risk assessment today.