Shadow AI — IT definition
The use of generative AI tools by employees without the approval or oversight of IT, security, legal or compliance functions.
Shadow AI refers to the use of artificial intelligence tools — primarily generative AI such as ChatGPT, Claude, Gemini, Copilot, Midjourney, along with browser extensions, plugins, agents and code copilots — by employees, without the validation, supervision or guardrails of the IT department, the CISO, or compliance functions. It is the generative-AI-era extension of Shadow IT, with materially higher stakes.
The phenomenon is massive and widely under-estimated. According to the 2024 Work Trend Index by Microsoft and LinkedIn, 75 % of global knowledge workers already use generative AI at work, and 78 % of them bring their own tools (BYOAI) — typically through personal accounts invisible to IT (Microsoft & LinkedIn, May 2024). The Cost of a Data Breach Report 2025 by IBM finds that 20 % of organizations have already suffered a data breach linked to Shadow AI, with an average added cost of USD 670,000 per incident (IBM Security & Ponemon, 2025).
of knowledge workers already use generative AI at work
Microsoft & LinkedIn 2024
of AI users bring their own tools to work (BYOAI)
Microsoft & LinkedIn 2024
of organizations have suffered a Shadow-AI-related breach
IBM Cost of a Data Breach 2025
average added cost per Shadow AI incident
IBM 2025
What is Shadow AI, concretely?
Shadow AI covers any business interaction with an AI model that is not governed by IT: a salesperson pasting a deal memo into ChatGPT to summarize it, a developer asking Claude to fix a bug while sending 200 lines of proprietary code, an HR specialist running an employment contract through Gemini to rephrase it, a marketing team generating visuals on Midjourney from a confidential product mockup.
The Promises and Pitfalls of AI at Work survey by Salesforce and YouGov (14,000 employees across 14 countries) measured that 40 % of generative-AI users have shared sensitive work information with these tools without their employer's knowledge, and that more than half use generative AI at work without formal approval (Salesforce/YouGov, 2023). KPMG arrives at comparable figures in the US: 50 % of the workforce uses AI tools at work without knowing whether it is allowed, and 57 % of employees hide their AI use and present AI-generated output as their own (KPMG, August 2024).
What defines the phenomenon is the absence of governance, not AI usage in itself:
- •Governed AI: enterprise licenses (ChatGPT Enterprise, Copilot for Microsoft 365, Claude for Work), models deployed under DPA, anti-reuse guarantees on prompts, SSO and logging enabled.
- •Shadow AI: free or personally paid accounts, browser extensions installed without review, locally downloaded open-source models, third-party plugins wired into Slack/Notion/Teams, wrapper websites that look like the official tool but aren't.
Shadow IT vs Shadow AI: same phenomenon, different risk logic
Shadow AI inherits the mechanics of Shadow IT — bottom-up adoption, procurement bypass, freemium access — but multiplies its consequences. Data is no longer simply stored at an unlisted SaaS vendor; it may train a model, reappear in another user's response, and resist the GDPR right to erasure.
Why Shadow AI is exploding
Five structural dynamics feed the ungoverned adoption of AI at work:
- •Immediate productivity payoff: BCG's AI at Work 2025 measures that regular users save at least 5 hours per week thanks to GenAI (BCG, June 2025). Asking an employee to wait for IT approval is asking them to give up that gain.
- •Frictionless freemium access: ChatGPT, Claude, Gemini, Copilot, Mistral Le Chat and Perplexity are one tab away. Cyberhaven observed a 485 % increase in corporate data flowing into AI tools between March 2023 and March 2024, with 73.8 % of ChatGPT usage at work going through non-corporate accounts (Cyberhaven, 2024).
- •Policy and training gap: ISACA finds that only 15 % of organizations have a formal AI policy and that 40 % provide no AI training (ISACA, 2024). McKinsey adds that only 18 % of organizations have an enterprise-wide council or board with authority over responsible AI governance (McKinsey State of AI 2024).
- •Cultural pressure and job anxiety: Microsoft measures that 52 % of AI users are reluctant to admit using it for important tasks, and 53 % fear being seen as replaceable. Hiding AI use becomes rational for the individual.
- •Perceived slowness of IT: Software AG, surveying 6,000 knowledge workers, finds that 33 % of employees justify Shadow AI by saying IT does not offer the tools they need, and 46 % refuse to give up their AI tools even if formally banned (Software AG, October 2024).
Which tools, which uses?
Shadow AI is not just ChatGPT. The footprint observed across Netskope, Harmonic Security and Cyberhaven deployments spans at least six categories:
- •General-purpose LLM chatbots: ChatGPT, Claude, Gemini, Copilot, Le Chat (Mistral), Perplexity, You.com.
- •Code copilots: GitHub Copilot personal, Cursor, Codeium, Tabnine, Replit Agent, Aider.
- •Image and video generators: Midjourney, DALL·E, Stable Diffusion, Runway, Sora, Pika, Kling.
- •Transcription and notes: Otter.ai, Fireflies, Granola, tl;dv, Read.ai, local Whisper.
- •Browser extensions and plugins: Monica, Merlin, Harpa, ChatGPT Sidebar — plus dozens of wrappers indistinguishable from the official tools.
- •AI agents and frameworks: AutoGPT, AgentGPT, Devin, Manus, Lindy, Make/n8n scenarios calling LLMs.
According to Netskope Threat Labs, 94 % of organizations now use GenAI applications, and 47 % of enterprise AI users rely on personal AI apps (Netskope, 2025). The median number of distinct GenAI applications per organization tripled in a year, rising from 3 to 9.6 between June 2023 and June 2024 (Netskope, 2024).
The eight major risks of Shadow AI
### 1. Confidential data leakage
The most documented risk. Cyberhaven found that 27.4 % of the data an employee submits to an AI tool is sensitive, up from 10.7 % a year earlier. Source code, customer data, M&A deal memos, contracts and HR data dominate. Harmonic Security, after analyzing 22.4 million enterprise prompts, observed that 87 % of sensitive-data leakage incidents go through ChatGPT Free (personal free accounts), with the top three exposed data categories being source code (~30 %), legal documents (22.3 %) and M&A data (12.6 %) (Harmonic Security, 2025).
### 2. Hallucinations and binding errors
Generative AI can produce plausible-sounding but factually wrong output. Two recent rulings now form precedent:
- •Mata v. Avianca (SDNY, June 2023): lawyers Schwartz and LoDuca were sanctioned USD 5,000 for citing six fictitious court decisions generated by ChatGPT in a brief (Justia decision).
- •Moffatt v. Air Canada (BCCRT, February 2024): Air Canada was ordered to pay CAD 650.88 because its chatbot communicated a bereavement fare policy that did not exist. The tribunal rejected the argument that the chatbot was "a separate legal entity" (CanLII decision). The company is liable for AI outputs it exposes, governed or not.
### 3. GDPR and AI Act compliance
The Italian Data Protection Authority (Garante) was the first European regulator to act: temporary ban on ChatGPT in March 2023 for lack of legal basis, lack of transparency and missing age verification — then a EUR 15 million fine to OpenAI in December 2024 (Garante, 2024). Across the EU, Regulation (EU) 2024/1689 (EU AI Act) entered into force on August 1, 2024 and, since February 2, 2025, imposes AI literacy obligations and bans certain practices (manipulation, social scoring, etc.) — which apply even to clandestine usage.
### 4. Copyright and trade-secret leakage
The New York Times Company v. Microsoft & OpenAI (SDNY, 1:23-cv-11195), filed in December 2023, illustrates the gray zones: the NYT demonstrates that ChatGPT reproduces verbatim excerpts from its articles and asks for the destruction of training datasets (CourtListener docket). The symmetric risk for enterprises: a trade secret submitted to a public model can lose its trade-secret protection under EU and French law.
### 5. LLM technical vulnerabilities (OWASP LLM Top 10)
The OWASP GenAI Security project maintains a reference list of LLM-specific vulnerabilities (OWASP Top 10 LLM Applications 2025):
- •LLM01 Prompt Injection: an attacker alters model behavior via crafted text.
- •LLM02 Sensitive Information Disclosure: the model reveals confidential data in its context or memory.
- •LLM03 Supply Chain: risks from third-party models, datasets or plugins.
- •LLM05 Improper Output Handling: model output executed without validation (XSS, SSRF, RCE).
- •LLM06 Excessive Agency: AI agents granted overly broad system permissions.
- •LLM07 System Prompt Leakage: (new 2025): leakage of confidential system instructions.
- •LLM10 Unbounded Consumption: runaway cost and latency via agent loops.
### 6. Amplified cyber threats
ENISA reports in its Threat Landscape 2025 that more than 80 % of social engineering activity observed in early 2025 involves AI-generated content, and that state actors exploit ChatGPT, Gemini and malicious tools such as WormGPT and FraudGPT (ENISA, 2025). IBM 2025 confirms: 16 % of data breaches involved attackers using AI, with 37 % via AI-generated phishing and 35 % via deepfakes.
### 7. Hidden costs and tool redundancy
Like SaaS, AI subscriptions sprawl in parallel: ChatGPT Plus licenses paid individually, Midjourney charged on personal corporate cards, Notion AI activated on top of an existing Copilot license. Without automated discovery, the inventory gap stays open.
### 8. Loss of operational visibility
The 2025 Cisco Cybersecurity Readiness Index finds that 60 % of organizations lack confidence in their ability to identify the use of unapproved AI tools, and 60 % have no visibility into the prompts or requests submitted by their employees through GenAI tools (Cisco, 2025). The Verizon DBIR 2025 goes further: of the 15 % of employees who regularly access GenAI tools from their work device, 72 % use a non-corporate email account (Verizon, 2025).
A timeline of Shadow AI
Nov 2022
Public launch of ChatGPT
Massive adoption in weeks, with no enterprise framework in place.
Jan 2023
Amazon warns its staff
An internal lawyer asks employees on Slack not to paste "any Amazon confidential information" into ChatGPT.
Feb 2023
JPMorgan restricts ChatGPT
Global ban for the bank, followed by Goldman Sachs, Citi, BoA, Wells Fargo, Deutsche Bank.
Mar 2023
Italian Garante bans ChatGPT
First decision of a European data-protection authority against an LLM.
Apr 2023
Three Samsung leaks in 20 days
Source code, test sequences and meeting transcripts pasted into ChatGPT by Samsung Semiconductor engineers.
May 2023
Samsung & Apple ban public AI
May 2: company-wide ban at Samsung. May 18: Apple restricts ChatGPT & Copilot.
Jun 2023
Mata v. Avianca
First sanction of lawyers for citing fictitious case law generated by ChatGPT.
Jan 2024
ISO/IEC 42001 published
First global AI management system standard.
Feb 2024
Air Canada held liable
The BCCRT establishes that the company is responsible for hallucinations of its chatbot.
Aug 2024
EU AI Act enters into force
Regulation (EU) 2024/1689 published in the OJEU on July 12, applicable progressively until 2028.
Dec 2024
OpenAI fined EUR 15 M in Italy
European confirmation of regulatory risk on training practices.
Feb 2025
First EU AI Act obligations
Article 5 bans and AI literacy obligations now applicable.
Aug 2025
GPAI rules applicable
Obligations on general-purpose AI models enter into application.
Governance frameworks applicable to Shadow AI
Four reference frameworks shape enterprise AI governance today. None of them are exclusive — most CIOs combine them.
- •[NIST AI Risk Management Framework (AI RMF 1.0)](https://www.nist.gov/itl/ai-risk-management-framework): (January 2023, United States) — Voluntary, sector-agnostic, structured around four functions.
- •[ISO/IEC 42001:2023](https://www.iso.org/standard/42001): (December 2023, international) — First global AI management system (AIMS) standard, certifiable, aligned with the Plan-Do-Check-Act logic of ISO 27001 and ISO 9001.
- •[EU AI Act — Regulation (EU) 2024/1689](https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai): (August 2024, European Union) — Risk-tiered approach (unacceptable / high / limited / minimal). Fines up to EUR 35 M or 7 % of global revenue for prohibited practices.
- •[OECD AI Principles](https://oecd.ai/en/ai-principles): (2019, updated 2024) — First intergovernmental standard, reused in most national frameworks.
To structure a defensible approach, the NIST AI RMF is probably the most operational:
- 1
Govern
Set AI policies, roles, accountability. Map all AI use cases.
- 2
Map
Inventory AI tools in use (sanctioned and shadow), identify business context.
- 3
Measure
Quantify risks (security, compliance, bias), evaluate controls.
- 4
Manage
Prioritize, mitigate, monitor continuously and evolve the posture.
How to detect Shadow AI
Six complementary methods bring the real AI inventory back into view:
- •SSO and identity logs: list every AI domain accessed through Okta, Azure AD, Google Workspace. Limitation: personal accounts bypass this by construction.
- •Automated SaaS discovery: a SaaS Management platform like Kabeen cross-references SSO, network logs, browser signals and expense data to continuously identify every AI tool in use — including those paid personally and reimbursed.
- •Browser extension / endpoint agent: capture AI-domain requests directly from the user device — the only way to cover personal-account usage.
- •AI DLP / CASB: inspect outbound requests for sensitive patterns (API keys, source code, personal data) before they leave the company.
- •Expense analysis: corporate card statements, expense reports, Stripe invoices from AI vendors — useful to surface personally paid subscriptions.
- •Anonymous surveys: pair behavioral telemetry with anonymous surveys to reveal the gap between declared and actual usage.
Building an enterprise AI policy in 6 steps
- 1
Discover
Continuous inventory of every AI tool actually used, sanctioned and shadow.
- 2
Classify
Categorize by risk tier (NIST AI RMF) and type of data processed.
- 3
Approve
Build a catalog of validated AI tools, with DPA contracts and anti-retraining guarantees.
- 4
Govern
Acceptable-use policy, training, awareness, AI literacy (EU AI Act mandatory).
- 5
Tool up
Deploy SSO, AI DLP, prompt monitoring, CASB.
- 6
Measure
Continuous KPIs: governed-usage rate, incidents avoided, productivity ROI.
A defensible AI policy includes, at minimum: an approved-tools catalog (and the process to request additions), an explicit list of prohibited data (PII, trade secrets, proprietary code if client commitments apply), an output review regime for legal- or client-facing use cases, and an AI incident reporting process.
KPIs for a Shadow AI program
A handful of indicators is enough to steer the program:
- •Governed-usage rate: share of AI users going through approved tools and SSO.
- •Volume of sensitive prompts blocked: from an AI DLP, weekly trend.
- •AI literacy training coverage: % of staff trained (EU AI Act requirement).
- •Declared vs detected AI incidents: .
- •AI cost per active user: comparable to SaaS cost per user (TCO).
Gartner estimates that more than 40 % of organizations will suffer a security or compliance incident linked to Shadow AI by 2030, and that 75 % of employees will acquire, modify or create technology outside IT's visibility by 2027, up from 41 % in 2022 (Gartner, November 2025). Shadow AI is not a transient spike — it becomes the new normal state of the information system.
How Kabeen helps regain control over Shadow AI
Kabeen extends its SaaS Management platform to the Shadow AI use case. The platform combines SSO discovery, browser signals, expense data and AI account inventory to continuously deliver:
- •a live inventory of every AI tool in use (sanctioned and shadow),
- •a view by user, team, AI application, and risk tier,
- •personal-account detection: for work usage,
- •a factual basis for security, procurement and EU AI Act reviews.
The goal is never to ban AI: it is to make usage visible, to steer each employee toward the right governed alternative, and to align the AI portfolio with the rest of the application portfolio already managed by IT.
Frequently asked questions
What is Shadow AI in simple terms?
+
Shadow AI is the use of generative AI tools — ChatGPT, Claude, Gemini, Copilot, Midjourney, code copilots, agents — by employees without IT validation or oversight. Concrete examples: a sales rep pasting a deal memo into ChatGPT, a developer sending proprietary code to Claude, a marketing team running a confidential mockup through Midjourney. It is the generative-AI version of Shadow IT.
What is the difference between Shadow IT and Shadow AI?
+
Shadow IT covers SaaS applications, scripts and databases deployed without IT. Shadow AI adds a specific dimension: data submitted to a public model may be used to train it, resurface in another user's response, and resist the GDPR right to erasure. The dominant risk shifts from hidden cost to data leakage, binding hallucinations, and copyright issues.
How widespread is Shadow AI in the enterprise?
+
According to Microsoft & LinkedIn (2024), 75 % of knowledge workers already use generative AI at work and 78 % bring their own tools. Cyberhaven measures that 73.8 % of ChatGPT usage at work goes through non-corporate accounts, and Harmonic Security observes that 87 % of sensitive-data leakage incidents go through ChatGPT Free. On the incident side, IBM finds that 20 % of organizations have already suffered a Shadow-AI-related breach, with an average added cost of USD 670,000 per incident.
What are the main risks of Shadow AI?
+
Eight risks stack up: (1) confidential data leakage (code, PII, M&A), (2) binding hallucinations (cf. Mata v. Avianca, Air Canada), (3) GDPR and EU AI Act compliance, (4) copyright and trade secrets, (5) LLM-specific technical vulnerabilities (OWASP LLM Top 10) such as prompt injection, output handling and excessive agency, (6) amplified cyber threats (AI phishing, deepfakes), (7) hidden costs and tool redundancy, (8) loss of operational visibility.
How do I detect Shadow AI in my organization?
+
Six complementary methods: SSO and identity logs (Okta, Azure AD), automated SaaS discovery through a SaaS Management platform that cross-references SSO + browser signals + expense data, a browser extension or endpoint agent to capture requests to AI domains, AI DLP / CASB to inspect sensitive prompts, expense analysis (expense reports, corporate cards), and anonymous surveys to measure the gap between declared and actual usage.
What does the EU AI Act say about Shadow AI?
+
Regulation (EU) 2024/1689 entered into force on August 1, 2024. Since February 2, 2025, the practices listed in Article 5 (manipulation, social scoring, etc.) are prohibited and all organizations must ensure a sufficient level of AI literacy among employees. General-purpose AI model (GPAI) obligations apply since August 2, 2025. The regulation targets AI usage whatever its form, governed or not — Shadow AI does not exempt the company from compliance.
Should we ban ChatGPT and similar tools?
+
In the vast majority of cases, no. A ban without an alternative drives concealment: Software AG measures that 46 % of users refuse to give up their AI tools even when banned. The right posture is to offer a governed alternative (ChatGPT Enterprise, Copilot for Microsoft 365, Claude for Work, Gemini Enterprise, Mistral, etc.) covering legitimate use cases, train employees on prohibited data, and deploy detection of personal accounts.
Which governance frameworks should we use to structure the AI program?
+
Four references shape enterprise AI governance: (1) the NIST AI RMF 1.0 (Govern, Map, Measure, Manage) — operational and sector-agnostic; (2) ISO/IEC 42001:2023, first global AIMS standard, certifiable; (3) the EU AI Act, mandatory in the EU; (4) the OECD AI Principles, an intergovernmental standard. In France, the CNIL recommendations complement the GDPR framework.
All terms
5R Method
A strategy used during application rationalization to determine the best approach for managing applications.
8R Method
An extended version of the 5R method used in application portfolio management and migration strategies.
Application
A computer program or set of programs designed to automate a business process or deliver value to end users.
Architecture
Refers to the structure and behavior of IT systems, processes, and infrastructure within an organization.
Need help mapping your IT landscape?
Kabeen helps you inventory, analyze and optimize your application portfolio.