As artificial intelligence becomes integral to business, hidden use of unsanctioned AI tools—known as shadow AI—poses mounting security and compliance challenges, prompting a shift from bans to nuanced governance strategies.
Advanced artificial intelligence (AI) technologies, especially large language models (LLMs), are rapidly becoming fundamental tools in modern business operations. According to a 2025 McKinsey survey, over three-quarters of companies deploy AI in at least one business function, with 71% regularly utilising generative AI for tasks ranging from data analysis to report generation. This widespread adoption underscores that AI has become a competitive necessity rather than a luxury for businesses today. However, the integration of AI into workplaces has also brought to the fore a significant security and privacy concern: shadow AI.
Shadow AI refers to the use of AI tools by employees without the knowledge or permission of their organisation’s IT departments, including popular public LLMs like Google’s Gemini and OpenAI’s ChatGPT, alongside various independent AI-powered SaaS applications. IBM’s 2025 Cost of a Data Breach Report revealed that one in five organisations has employees using unsanctioned and unprotected AI tools. Furthermore, a 2024 analysis by RiverSafe found that 20% of UK companies had experienced exposure of sensitive corporate data due to such unsupervised generative AI use. This covert utilisation of AI poses a critical challenge to securing business data and maintaining compliance with regulatory requirements.
The core of the shadow AI threat lies in the loss of control and visibility over how organisational data is accessed and used. Cybersecurity research from Google indicates that 77% of UK cyber leaders believe generative AI has contributed to increased security incidents, mainly through inadvertent data leakage and AI hallucinations—where AI generates convincing yet false information. Anton Chuvakin, a security advisor in Google Cloud’s Office of the CISO, emphasised that employees entering confidential notes into unvetted chatbots risk handing over proprietary data to systems that may retain and reuse it, complicating efforts to protect critical assets. Dan Lohrmann, field CISO at Presidio, further highlighted that inadequate management of shadow AI impairs an organisation’s ability to demonstrate compliance to stakeholders and regulators, potentially resulting in data breaches, legal challenges, and poor business outcomes.
Shadow AI also introduces challenges distinct from those posed by traditional shadow IT, the hidden use of unauthorised cloud applications. While cloud access security broker (CASB) solutions were developed to monitor and control shadow IT, they fall short against shadow AI because AI is increasingly embedded within existing IT systems and operates seamlessly through web browsers or personal devices, often leaving no discernible traces on corporate networks. Chuvakin noted that, unlike unauthorised cloud apps, shadow AI communications blend in with regular online activity and rarely register on corporate defence systems, making them especially difficult for IT teams to detect or control.
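One common first step, where proxy or DNS logs are available, is simply to count outbound requests to known generative AI endpoints. The sketch below illustrates the idea only; it assumes logs can be exported as CSV with "user" and "dest_host" columns, and the domain list is a small illustrative sample that would need continuous maintenance in practice.

```python
# Minimal sketch: first-pass detection of generative AI traffic in proxy logs.
# Assumptions (illustrative, not any vendor's API): logs exported as CSV with
# 'user' and 'dest_host' columns; the domain list is a hand-maintained sample.
import csv
from collections import Counter

AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def flag_ai_traffic(log_path: str) -> Counter:
    """Count requests per user that hit known generative AI endpoints."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("dest_host", "").lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[row.get("user", "unknown")] += 1
    return hits

if __name__ == "__main__":
    for user, count in flag_ai_traffic("proxy_log.csv").most_common(10):
        print(f"{user}: {count} requests to known AI services")
```

A domain allowlist of this kind only catches traffic that traverses corporate infrastructure, which is precisely why experts caution that usage on personal devices and home networks will still slip past it.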
In response to these risks, some organisations have resorted to banning AI tools internally. Samsung, for example, reportedly prohibited generative AI usage in a key division after incidents involving sensitive data being shared with ChatGPT. However, experts warn that outright bans may be ineffective and counterproductive. Diana Kelley, CISO at AI security firm Noma Security, pointed out that avoiding AI use altogether could result in competitive disadvantages. Chuvakin described bans as “security theatre” in today’s distributed working environments, suggesting they may prompt employees to circumvent controls further, using AI tools on less secure personal networks or mobile devices, thereby exacerbating the very problem they intend to solve.
Rather than imposing prohibitions, organisations are advised to develop nuanced governance strategies that balance risk management with enabling innovation. This includes implementing robust AI access controls, enhancing employee awareness of data protection in AI interactions, and carefully vetting AI tools for security compliance. IBM’s 2025 report found that 97% of AI-related breaches involved organisations lacking proper AI access controls, highlighting the critical importance of this approach. Moreover, shadow AI breaches were found to significantly increase incident costs, adding an average of $670,000 to the total cost of data breaches, underscoring the financial stakes involved.
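To make the idea of AI access controls concrete, the following sketch combines two of the measures described above: an allowlist of vetted AI services and a crude redaction pass before a prompt leaves the organisation. The tool name and regex patterns are hypothetical placeholders, not a complete data-loss-prevention policy.

```python
# Minimal sketch of two controls: an allowlist of sanctioned AI tools and a
# simple redaction pass applied to prompts before they are forwarded.
# The tool name and patterns below are illustrative assumptions only.
import re

APPROVED_TOOLS = {"internal-llm-gateway"}  # hypothetical sanctioned endpoint

REDACTION_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_and_redact(tool: str, prompt: str) -> str:
    """Refuse unsanctioned tools; mask obvious sensitive tokens otherwise."""
    if tool not in APPROVED_TOOLS:
        raise PermissionError(f"{tool} is not an approved AI service")
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} redacted]", prompt)
    return prompt

# Example: the email address is masked before the prompt is forwarded.
print(check_and_redact("internal-llm-gateway",
                       "Summarise the complaint from jane.doe@example.com"))
```

Controls of this sort sit alongside, rather than replace, the employee-awareness and tool-vetting work the report emphasises.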
As AI continues to embed itself deeply into business processes, organisations must evolve from reactive bans to proactive governance frameworks. These should address licensing, security and privacy policies, workforce training, and continuous monitoring. In doing so, businesses can mitigate the risks of shadow AI while reaping the substantial rewards that AI technologies promise in operational efficiency and innovation.
Source: Noah Wire Services