
From Shadow IT to Shadow AI: A New Layer of Risk

The rise of generative AI (GenAI) has led to the emergence of Shadow AI, layering a new set of risks on top of those of Shadow IT. Employees are increasingly using these tools without proper training or guidance.


The widespread adoption of Generative AI (GenAI) has sparked enthusiasm among employees across all sectors by enabling the creation of text, images and other content with remarkable speed and quality.

These highly publicised tools are now being rapidly adopted in the workplace, without any training or guidelines for proper use. And just as organisations have found it difficult to avoid Shadow IT (the use of any software, hardware or IT resource without the IT department’s approval, knowledge or oversight), they will find it difficult to prevent Shadow AI.

A recent Fishbowl study of more than 11,000 corporate employees revealed that 68% of those who use ChatGPT and other AI tools do so without informing their immediate superiors or the IT department. The study also shows that employee adoption of GenAI has risen significantly, with nearly one in two employees using such tools to assist with their professional tasks.

This unchecked adoption introduces major legal and security risks, which may exceed those of Shadow IT.

  • Confidential Data Risks: Unauthorised use of AI tools can result in data breaches, as employees may inadvertently input sensitive information (personal or business-sensitive data). This data is fed into AI models and can be unintentionally shared, exposing confidential information to the public. The Samsung example strikingly highlights the need to train employees in these tools: Samsung engineers pasted some of the company’s confidential source code into ChatGPT in the hope of fixing bugs, inadvertently disclosing it to an external service. In response, the company quickly banned staff from using such AI tools. A minimal sketch of how this kind of leak can be caught before it happens follows this list.

  • Compliance Risks: The disclosure of personal information through this type of tool also violates current regulations such as the GDPR. When employees input sensitive data into unauthorised AI tools, they not only expose the company to liability and potential penalties but also put its reputation at risk.

  • Bugs and Vulnerabilities: A number of recent incidents have highlighted vulnerabilities in the tools themselves. For example, a bug in an open-source library used by ChatGPT allowed some users to see other users’ conversation histories, potentially exposing sensitive internal discussions or confidential client information. Similarly, Microsoft’s AI research team accidentally exposed 38 terabytes of private data, including passwords and more than 30,000 internal Microsoft Teams messages.
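To make the data-leakage scenario above concrete, here is a minimal, illustrative Python sketch of a pre-submission filter that scans a prompt for obviously sensitive patterns before anything leaves the company network. The patterns and the `send_to_genai` function are hypothetical placeholders for this example, not a reference to any particular product or API.

```python
import re

# Illustrative patterns only; a real data-loss-prevention policy would be far broader.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "API-key-like token": re.compile(r"\b[A-Za-z0-9]{32,}\b"),
}

def check_prompt(prompt: str) -> list:
    """Return the names of the sensitive patterns found in the prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

def send_to_genai(prompt: str) -> str:
    # Hypothetical call to a vetted, contract-covered GenAI endpoint.
    return "<model response>"

def submit(prompt: str) -> str:
    findings = check_prompt(prompt)
    if findings:
        # Block (or redact) before the prompt leaves the company network.
        raise ValueError("Prompt blocked, contains: " + ", ".join(findings))
    return send_to_genai(prompt)

# Example: this prompt would be blocked before reaching any external service.
# submit("Debug this: key=9f8a7b6c5d4e3f2a1b0c9d8e7f6a5b4c, contact jane@corp.com")
```

A filter like this is no substitute for employee training, but it turns a silent leak into a visible, loggable event that IT teams can act on.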

Like SaaS applications, AI tools are now an integral part of employee work habits. Obstructing their use would be counter-productive, as they are reshaping industries and offering unprecedented opportunities for enterprises.

IT leaders must prioritise managing the hundreds of tools that keep creeping into their information systems. Yet striking a balance between solid governance and the business’s need for autonomy is proving to be a daunting challenge, especially when a variety of sources and tools must be pieced together to make sense of this ecosystem.

Instead, IT teams need to establish a framework that harnesses the full potential of this technology while ensuring compliance with the company’s confidentiality and data-protection policies, in order to minimise the risks.
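As one hedged illustration of what such a framework might look like in practice, the sketch below routes GenAI requests through an internal allow-list: tools vetted by the IT department are permitted under stated conditions, and anything else is logged as Shadow AI for review rather than silently rejected. The tool names and the policy format are invented for this example.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-governance")

# Hypothetical policy: tools reviewed and approved by IT, with usage conditions.
APPROVED_TOOLS = {
    "internal-llm": {"allow_confidential": True},
    "public-chatbot": {"allow_confidential": False},
}

def authorise(tool: str, contains_confidential: bool) -> bool:
    policy = APPROVED_TOOLS.get(tool)
    if policy is None:
        # Unknown tool: this is Shadow AI; log it so IT can review it rather than ban blindly.
        log.warning("Unapproved GenAI tool requested: %s", tool)
        return False
    if contains_confidential and not policy["allow_confidential"]:
        log.warning("Confidential data blocked for tool: %s", tool)
        return False
    return True

# Example: an employee tries to paste client data into a public chatbot.
print(authorise("public-chatbot", contains_confidential=True))  # False: blocked and logged
```

Logging unapproved requests, rather than only blocking them, gives IT teams the visibility they need to bring Shadow AI usage back into the governed perimeter.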
