AI Adoption Research from Nudge Security Reveals How Widespread AI Use Is Transforming Security Governance

PR Newswire

New report finds that AI agents, integrations and AI-native development platforms are taking hold, raising new and critical security governance challenges

AUSTIN, Texas, Feb. 11, 2026 /PRNewswire/ -- Nudge Security, the leading innovator in SaaS and AI security governance, today announced a new research report, AI Adoption in Practice: What Enterprise Usage Data Reveals About Risk and Governance, which provides insights into workforce AI adoption and usage patterns. The report found that AI use has moved beyond experimentation and general-purpose chat tools, and is now embedded into workflows, integrated with core business platforms, and increasingly capable of taking autonomous action.

"AI adoption is no longer experimental—it's operational," said Russell Spitler, CEO and co-founder of Nudge Security. "This shift means AI governance can't be reactive or policy-only anymore. It requires real-time visibility into what AI tools are in use, how they're integrated with critical systems, and where sensitive data is flowing. The teams that succeed will be the ones who treat AI governance as a continuous, adaptive process, not a one-time audit."

Key findings include:

  • Usage of core LLM providers is nearly ubiquitous. OpenAI is present in 96.0% of organizations, with Anthropic at 77.8%.
  • The most-used AI tools are diversifying beyond chat. Meeting intelligence (Otter.ai at 74.2%, Read.ai at 62.5%), presentations (Gamma at 52.8%), coding (Cursor at 48.4%), and voice (ElevenLabs at 45.2%) are now widely present.
  • Agentic tooling is emerging. Tools like Manus (22%), Lindy (11%), and Agent.ai (8%) are establishing an early footprint.
  • Integrations are prevalent and varied. OpenAI and Anthropic are most commonly integrated with the organization's productivity suite, as well as knowledge management systems, code repositories, and other tools.
  • Usage is concentrated. Among the most active chat tools observed, OpenAI accounts for 67% of prompt volume.
  • Data egress via prompts is non-trivial. 17% of prompts include copy/paste and/or file upload activity.
  • Sensitive data risks skew toward secrets. Detected sensitive-data events are led by secrets and credentials (47.9%), followed by financial information (36.3%) and health-related data (15.8%).

The research report is based on anonymized and aggregated telemetry across Nudge Security customer environments. Rather than relying on surveys or self-reported usage, this analysis is grounded in direct observation of AI activity within enterprise environments. Unless otherwise noted, the percentages referenced reflect the percentage of organizations where each tool or behavior was observed.

AI governance in practice lags behind this reality

AI governance has emerged as a top priority for security and risk leaders, but many programs remain narrowly focused on vendor approvals, acceptable use policies, or model-level risk. While necessary, these controls alone are insufficient. As this research illustrates, the most consequential AI risks now stem from how employees actually use AI tools day to day—what data they share, which systems AI is connected to, and how deeply AI is embedded into other tools and operational workflows. Understanding these intersections between people, permissions, and platforms is the foundation of effective AI governance.

Download the full report: https://www.nudgesecurity.com/content/ai-adoption-in-practice

About Nudge Security

Nudge Security delivers SaaS and AI security governance at the Workforce Edge—where employees make thousands of technology decisions daily. Our automated, policy-driven guardrails reach employees when and where they work, enabling rapid technology adoption while minimizing risk and sprawl. Through unrivaled discovery capabilities, AI-driven risk insights, and behavioral science-based engagement, we make security a natural part of how modern work gets done rather than an obstacle to innovation. Nudge Security was founded in 2021 by Russell Spitler and Jaime Blasco and is backed by Cerberus Ventures, Ballistic Ventures, Forgepoint Capital, and Squadra Ventures.

Learn more at www.nudgesecurity.com and follow Nudge Security on LinkedIn, Reddit, X, BlueSky, and Instagram.

Media Contact
Danielle Ostrovsky
Hi-Touch PR
Ostrovsky@Hi-TouchPR.com

View original content to download multimedia: https://www.prnewswire.com/news-releases/ai-adoption-research-from-nudge-security-reveals-how-widespread-ai-use-is-transforming-security-governance-302684127.html

SOURCE Nudge Security