Anthropic Won't Let You Run Mythos. But Claude Is Already in Your Salesforce.

Tal Shapira
April 11, 2026 · Updated April 17, 2026 · 4 min read

When the New York Times' Kevin Roose described Project Glasswing as a frontier AI model “so powerful that Anthropic is not releasing it to the public,” he wasn’t being sensational. That’s the accurate read. Anthropic built something capable enough that they decided the responsible move was to gate it behind a coalition of 50 organizations and $100 million in controlled access credits before anyone else could touch it.

The results justify the caution. Claude Mythos located a 27-year-old vulnerability in OpenBSD and a 16-year-old flaw in FFmpeg that automated testing had run past five million times without flagging. The window between discovery and exploitation has collapsed: what once took months now happens in minutes.

Alex Albert, Anthropic’s Head of Developer Relations, called it “possibly the most consequential event in the AI industry I’ve seen up close since joining Anthropic almost 3 years ago.” That conviction is warranted.

It is also pointing at half the problem.

The AI your employees are actually using

Glasswing is built around a specific threat: attackers using AI to find and exploit vulnerabilities in software infrastructure before defenders can patch them. That is a real and serious problem worth the investment.

It is not, however, where most enterprise security teams are encountering AI risk day to day.

The AI most employees interact with isn’t a foundation model their company deployed or controls. It’s a feature inside a SaaS subscription. Copilot inside Microsoft 365. Einstein inside Salesforce. Gemini inside Google Workspace. These didn’t arrive through a separate procurement process or a security review. They came embedded in tools employees already used, with permissions already granted, at the pace of a software update.

That’s AI delivered as a layer on top of SaaS — and it represents the majority of enterprise AI activity. Cyera’s team described the visibility problem well: AI visibility without identity context is just a list. Knowing an AI agent exists tells you almost nothing. Knowing what it can access, what it’s doing, and whether that behavior makes sense given who authorized it — that’s the actual question security teams need to answer.

Most can’t.
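
To make that concrete, here is a minimal sketch of the difference between a list and an answer: joining a bare agent inventory against identity context. Everything below is illustrative; the field names and the `enrich` helper are hypothetical, not any vendor's schema.

```python
from dataclasses import dataclass

# Hypothetical records for illustration only; not any vendor's schema.

@dataclass
class AgentRecord:
    """What bare AI discovery gives you: an ID and a platform. A list."""
    agent_id: str
    platform: str            # e.g. "microsoft365", "salesforce"

@dataclass
class AgentContext:
    """What a security team actually needs to reason about an agent."""
    agent_id: str
    authorized_by: str       # the human identity that provisioned it
    scopes: list[str]        # what it can access
    actions_last_24h: int    # what it is actually doing

def enrich(inventory: list[AgentRecord],
           contexts: dict[str, AgentContext]) -> list[AgentContext]:
    # Agents with no identity context at all are the first finding:
    # nobody can say who authorized them or what they can reach.
    for agent in inventory:
        if agent.agent_id not in contexts:
            print(f"no identity context for {agent.agent_id} "
                  f"on {agent.platform}")
    return [contexts[a.agent_id] for a in inventory
            if a.agent_id in contexts]
```

The point of the sketch is the join itself: an agent ID alone tells you nothing, while the enriched record carries who authorized the agent, what it can access, and what it has been doing.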

The threat that doesn’t need to find a bug

Glasswing targets the attack path that requires an adversary to identify a vulnerability and exploit it from outside the system. There’s a gap to cross. Time, skill, and opportunity all constrain how quickly that can happen.

An AI agent operating inside your SaaS environment with a valid OAuth token doesn’t have that gap. It’s already in. It was provisioned, connected, and put to work. In many organizations, that happened without a formal security review, without a defined scope of access, and without any monitoring of what it does after the fact.

One security team recently discovered 150 distinct Copilot agents running in their environment. All deployed in a single week. None reviewed by security.

An attacker who compromises one of those agents — through prompt injection, a supply chain attack on the underlying model, or a misconfigured permission scope — doesn’t need to find a decade-old vulnerability. They inherit whatever the agent was authorized to do: read access to sensitive files, write access to shared drives, the ability to query CRM records or trigger downstream automations.
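
Sizing that inherited blast radius can start with something as simple as reading back what each connected app has been consented to do. Below is a rough sketch against Microsoft Graph's oauth2PermissionGrants endpoint; the choice of "broad" scopes and the bare token placeholder are assumptions for illustration, not a complete audit.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0/oauth2PermissionGrants"
TOKEN = "..."  # placeholder: acquire a real token (e.g. via MSAL) with a
               # directory read permission such as Directory.Read.All

# Illustrative choice of delegated scopes we treat as a large blast radius.
BROAD = {"Files.ReadWrite.All", "Sites.ReadWrite.All",
         "Mail.ReadWrite", "Directory.ReadWrite.All"}

def flag_broad_grants() -> None:
    """Walk every delegated OAuth2 grant in the tenant and print the
    apps holding scopes from the BROAD set."""
    url = GRAPH
    while url:
        resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
        resp.raise_for_status()
        page = resp.json()
        for grant in page.get("value", []):
            # "scope" is a space-separated string of delegated permissions
            scopes = set((grant.get("scope") or "").split())
            risky = sorted(scopes & BROAD)
            if risky:
                print(f"client {grant['clientId']} "
                      f"(consent: {grant['consentType']}): {risky}")
        url = page.get("@odata.nextLink")  # follow pagination to the end

if __name__ == "__main__":
    flag_broad_grants()
```

The same idea applies to any SaaS platform that exposes its grants: enumerate what agents and integrations are authorized to do before reasoning about what an attacker would inherit.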

The model most security tools are missing

Most security tools were built to watch humans. They track logins, file access, configuration changes — all tied to human accounts. When an AI agent accesses 400 files in 15 minutes, those tools either attribute the action to the person who authorized it, or miss it entirely.

That’s the wrong model. An AI agent acting on behalf of a user is not the same as the user acting. The behavioral baseline is different. The risk profile is different. The question you actually need to answer is whether this agent’s behavior makes sense given what it’s authorized to do, and given what the authorizing human normally does. Answering that requires holding identity, behavior, and SaaS context together in the same view.
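
A toy version of that check, with made-up event records and an arbitrary threshold, keeps a baseline per acting identity instead of folding the agent's activity into the authorizing user's account:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class AccessEvent:
    actor: str          # the identity that acted, e.g. "copilot-agent-7"
    on_behalf_of: str   # the human who authorized the agent
    files_touched: int
    window_minutes: int

# Per-identity baselines in files/minute. Defaulting to 1.0 is arbitrary;
# in practice these would be learned from history.
baselines: dict[str, float] = defaultdict(lambda: 1.0)

def is_anomalous(event: AccessEvent, multiplier: float = 10.0) -> bool:
    # Score against the acting identity's own baseline. Attributing the
    # agent's activity to the human would smear 400 files in 15 minutes
    # across the user's normal behavior and hide the spike.
    rate = event.files_touched / event.window_minutes
    return rate > multiplier * baselines[event.actor]

event = AccessEvent("copilot-agent-7", "alice@corp.example",
                    files_touched=400, window_minutes=15)
print(is_anomalous(event))  # True: ~26.7 files/min vs a 1.0 baseline
```

A real system would learn baselines from history and weigh far more signal than file counts, but the structural point stands: the agent and the human are separate identities with separate baselines.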

Most organizations don’t have that view. Most tools weren’t built to provide it. The security community is already asking what Glasswing doesn’t cover. They’re right to ask.

A note to Anthropic

Project Glasswing is a genuine contribution. Using frontier AI to find vulnerabilities before attackers do is exactly the kind of asymmetric defense the industry needs, and the commitment from the launch partners reflects real organizational will.

But here’s what’s worth sitting with: Claude is one of the AI agents already operating inside enterprise SaaS environments today. So is GPT. So is Gemini. The same class of models being pointed at software infrastructure to find vulnerabilities are also the agents that enterprise security teams need governance over — their access, their behavior, their blast radius if something goes wrong.

Mythos is too powerful to release to the public. That’s a responsible call. The Claude versions already running inside enterprise SaaS are another matter entirely. They’re there. They have access. And in most organizations, nobody is watching them.

Glasswing secures the infrastructure those models run on. That’s necessary. The other half — governing the agents already operating inside the application layer — is just as urgent. And it’s mostly still undone.


Dr. Tal Shapira

ABOUT THE AUTHOR

Tal is the Cofounder & CTO of Reco. Tal has a Ph.D. from the school of Electrical Engineering at Tel Aviv University, where his research focused on deep learning, computer networks, and cybersecurity. Tal is a graduate of the Talpiot Excellence Program, and a former head of a cybersecurity R&D group within the Israeli Prime Minister's Office. In addition to serving as the CTO, Tal is a member of the AI Controls Security Working Group with the Cloud Security Alliance.

Technical Review by:
Gal Nakash


