According to a new study, nearly 90% of the AI tools used in organizations are completely invisible to IT teams. This means that AI-assisted decision-making, meeting summaries, and data analysis often take place outside the scope of security and compliance controls. What looks like a technical oversight has already turned into a management- and board-level crisis, further intensified by new regulations.
The latest compliance phase of the EU’s AI Act requires companies to document how general-purpose AI systems process data, with penalties of up to €35 million or 7% of global annual turnover, whichever is higher. Yet many companies still have no idea which AI features are active within their environments.
The problem is that AI is not limited to headline tools like ChatGPT. It is already embedded in everyday platforms: Zoom transcribes and summarizes meetings, Salesforce generates reports automatically, Slack analyzes conversations. These features often arrive silently through updates, unnoticed by IT, even as they process sensitive data.
This creates “AI sprawl”: dozens or even hundreds of AI tools operating in parallel across an organization, while IT controls only a small fraction of them. Other reports indicate that 80% of AI tools remain completely unmanaged. As a result, management lacks a complete picture of which data is being processed, how it is stored, and whether certain features are even active.
The greatest danger of AI lies in its “silent” arrival: it doesn’t appear as a new program for IT to review; instead, it activates automatically inside already trusted applications. Today Slack may just be a messaging platform; tomorrow it will summarize conversations and suggest actions. The same is true for Salesforce, Zoom, and Microsoft 365.
Invisible AI can become one of the biggest risks for an organization unless it is identified, regulated, and monitored in time.