I’ve spent the last decade building AI systems designed to detect fraud before it happens. So when I saw what Anthropic’s Mythos model is capable of, my first reaction wasn’t alarm; it was recognition. The quiet balancing act that fraud and AML teams have managed for years, the narrowing gap between attackers and defenders, just became significantly harder to sustain.
Mythos isn’t just another large language model. It represents a step-change in how AI systems can reason, adapt, and simulate complex behaviors at scale. And while that’s a breakthrough for innovation, it also introduces a new class of risk, particularly for financial institutions already navigating increasingly sophisticated, AI-driven fraud. What once required coordination, expertise, and time can now be executed faster, more convincingly, and at a far greater scale.
At DataVisor, we’ve long operated at the intersection of machine learning and financial crime prevention, where subtle behavioral signals often matter more than static rules. From that vantage point, Mythos doesn’t feel like a distant innovation; it feels like an acceleration of trends we’re already seeing in the wild.
This piece argues that Mythos is not an isolated technological leap; it’s a signal. A signal that the fraud landscape is entering a new phase where traditional detection strategies will struggle to keep pace. And for those responsible for protecting digital ecosystems, the implication is clear: the rules of engagement are changing, whether we acknowledge it or not.
From Models to Threats: What Claude Mythos Signals for Financial Crime
Anthropic’s Claude Mythos is being framed through a cybersecurity narrative, but for financial services, the more important point is structural. This is not a purpose-built fraud or security system—it’s a general-purpose frontier model that sits beyond Opus in how it reasons, plans, and adapts. That matters because banks and fintechs don’t face “tools”; they face adversaries who use tools. When the baseline capability of general AI improves, so does the baseline capability of attackers. Mythos signals a shift in that baseline—one that fraud and AML systems will have to contend with.
Autonomous Exploits, Not Assisted Attacks
What Mythos reportedly demonstrated, autonomous vulnerability discovery and exploit chaining, has direct parallels in financial crime. This is not just about software exploits; it’s about systems that can probe complex environments, identify weaknesses, and iteratively refine attack paths. In a financial context, that could translate into faster discovery of onboarding loopholes, payment flow weaknesses, or identity verification gaps—executed at scale and with minimal human oversight. The shift is from scripted fraud operations to adaptive, self-improving attack systems.
Why Mythos Wasn’t Released
Anthropic’s decision to withhold Mythos under its Responsible Scaling Policy is particularly relevant for regulated industries like finance. Instead of open release, capabilities are being tested within controlled initiatives like Project Glasswing. This suggests a future where the most advanced AI systems, those with the highest potential for misuse, are developed and evaluated behind closed doors. For financial institutions, this creates an asymmetry problem: defenders may not have visibility into the capabilities shaping the next generation of attacks until they begin to surface in production environments.
Separating Signal from Hype
From a financial industry perspective, it’s important to separate signals from hype. Mythos’ capabilities have been demonstrated in controlled settings, and questions remain around consistency, scalability, and real-world constraints. Fraud teams have seen this before—what works in a lab doesn’t always translate directly into production. But the direction is difficult to ignore. Even if current performance is uneven, the trajectory points toward more autonomous, adaptive systems that can operate within financial ecosystems. The question isn’t whether these capabilities will materialize in fraud—it’s how quickly institutions will be forced to respond.
Why Financial Services Is the Most Exposed Sector
Financial services isn’t uniquely careless; it’s uniquely complex. No other industry operates on such a deeply layered, heterogeneous technology stack: decades-old core banking systems, stitched together with middleware, extended through APIs, and now overlaid with cloud and real-time infrastructure. That accumulated complexity is the true attack surface. A model like Anthropic’s Mythos doesn’t need to uncover a novel, sophisticated flaw. It only needs to find any weakness in systems that, in many cases, have never been fully audited end-to-end. And in banking, those systems don’t exist in isolation. They are interconnected, interdependent, and systemically important. That’s what elevates Mythos from a technical milestone to a sector-wide risk signal.
Old Infrastructure, New Vulnerabilities
Banks don’t run on clean, modern stacks—they run on history. Core systems built decades ago continue to power critical functions, often wrapped in layers of incremental updates, patches, and integrations. Over time, this creates sprawling codebases that no single team fully understands or has comprehensively audited. For traditional attackers, navigating this complexity is slow and resource-intensive. For a Mythos-class system, it’s an opportunity. The ability to systematically probe, map, and identify weak points across large, unstructured environments changes the economics of discovery. In this context, the vulnerability isn’t a single flaw—it’s the inevitability that flaws exist somewhere in the stack.
How Interconnected Systems Amplify Impact
Financial institutions are not isolated entities. They are nodes in a tightly coupled system. Payment networks, clearing houses, correspondent banking relationships, and shared infrastructure mean that a compromise in one institution can propagate quickly. Unlike other industries, where breaches are often contained within organizational boundaries, failures in financial services can have second- and third-order effects across the ecosystem. This is what makes the risk systemic. A Mythos-enabled attack doesn’t just target a bank; it potentially targets the connective tissue of the financial system itself.
When Only A Few Institutions Are Prepared
If access to frontier capabilities like Mythos is limited to a small subset of institutions through initiatives like Project Glasswing, it creates an uneven playing field—but not in the way people assume. This isn’t about competitive advantage; it’s about systemic resilience. If only a handful of institutions have early visibility into how these models behave, the broader ecosystem remains underprepared. Financial stability depends on collective defense, not isolated readiness. Concentrated access to critical insights risks creating blind spots across the rest of the sector—blind spots that sophisticated attackers can exploit.
How Regulators Are Viewing This Risk
When senior policymakers like Jerome Powell and Scott Bessent engage at an emergency level, it reflects a shift in framing—from technological curiosity to systemic risk. Regulators are not evaluating Mythos as a standalone innovation; they are assessing its implications for financial stability, market confidence, and operational resilience. This signals that AI-driven threats are moving out of the experimental category and into the domain of macroprudential concern. For institutions, that means the question is no longer if this matters—but how quickly they need to adapt.
How Mythos Affects Every Layer of the Money Movement Ecosystem
The impact of Mythos-class models is not uniform. It maps directly to how each segment of the financial ecosystem is built, operated, and regulated. Each layer carries a different mix of legacy complexity, speed constraints, and dependency risk. The result is not a single point of failure, but a distributed set of exposures, each shaped by how money actually moves through the system.
- Retail & commercial banks: Banks carry the widest legacy surface in the ecosystem. Core banking systems, in some cases dating back to the 1970s, have never been subjected to AI-speed vulnerability discovery—creating an environment where weaknesses exist by default. The challenge is not just exposure, but inertia: remediation in these environments is slow, expensive, and operationally constrained.
- Card networks & schemes: Card infrastructure is built for high-volume, low-latency processing, where even microseconds of disruption matter. Tokenization logic, 3DS authentication flows, and settlement systems are highly structured and well understood—making them ideal targets for systematic mapping and probing by autonomous models.
- Real-time & instant payment rails: This is the speed paradox. Systems like instant payment rails are designed to move money in milliseconds, leaving no window for human review or intervention. In that environment, an AI-generated exploit doesn’t need persistence—it only needs to work once, at speed.
- Cross-border & correspondent banking: Cross-border flows rely on multi-jurisdiction chains with uneven security maturity across participants. Messaging infrastructure like SWIFT and the counterparty relationships it depends on create a complex, distributed attack surface—one that benefits from coordinated, automated exploration rather than isolated intrusion attempts.
- Remittances & MSBs: Money service businesses operate with leaner technology stacks and fewer dedicated security resources. At the same time, they sit under intense regulatory scrutiny for illicit flows—creating a dual exposure where technical compromise and compliance failure are tightly coupled.
- Neobanks & fintechs: Modern architecture reduces legacy burden but introduces shared dependency risk. Heavy use of open-source components and third-party services means a vulnerability in a widely used authentication library or API layer can propagate across dozens of institutions simultaneously—the shared stack becomes a shared point of failure.
- Crypto exchanges & custodians: This segment spans both on-chain and off-chain systems, expanding the attack surface. Hot wallet infrastructure, withdrawal logic, and API key management represent concentrated points of value—systems that can be systematically tested and exploited at a fraction of the cost of traditional penetration efforts.
- DeFi protocols: DeFi has the highest exposure and the weakest defensive assumptions. Its security model—multisig approvals, timelocks, and audits—relies on friction and human response time. Mythos-class capabilities fundamentally challenge that model by compressing discovery and execution into machine timescales.
- Stablecoins & tokenized assets: This is fast-growing infrastructure where regulatory expectations—such as evolving frameworks around reserve backing and issuance—are maturing faster than the underlying security architecture. Reserve management systems and issuance logic remain underexplored attack surfaces with systemic implications.
- CBDCs & central bank systems: Central bank digital currencies are still largely in pilot or design phases. That makes them uniquely important: protocol design, key management, and access architecture decisions being made now will define risk for decades. The window to incorporate Mythos-class threat models exists—but it is time-bound.
- Trade finance: Trade finance has historically been under-digitized, which limited visibility but also constrained attack surfaces. As it digitizes, it inherits the vulnerabilities of the software systems it adopts—at the same time regulators are increasing scrutiny on trade-based money laundering typologies.
- Embedded finance & BaaS: Embedded finance pushes payment capabilities into consumer platforms where control and visibility are fragmented. Security controls are often defined by the platform rather than the regulated institution, creating unclear accountability, limited oversight, and an attack surface that is expanding faster than governance frameworks can keep up.
What Does This Mean for Fraud?
Mythos doesn’t commit fraud. But it changes who can. The gap between a sophisticated attacker and an average one, something the industry has quietly relied on as a filter, is narrowing fast. After a decade of building AI-native fraud detection at DataVisor, what’s clear is this: the challenge is no longer just identifying complex attacks. It’s operating in an environment where capability is democratized, and attack patterns evolve faster than traditional defenses were designed to handle.
Fraud Is No Longer Constrained By Human Limits
Fraud used to scale with effort. That constraint is gone. What changes with Mythos-class capabilities isn’t just sophistication; it’s the ability to generate, test, and refine attacks continuously. Detection systems built for periodic updates are now facing adversaries that iterate in real time.
Synthetic Identity Moves From Tactic To System
Synthetic identity fraud is no longer a manual craft. It becomes industrialized. AI can generate consistent, multi-layered identities that pass fragmented KYC checks across channels. Most onboarding systems were never designed to detect identities that are coherent by design but entirely artificial.
Account Takeover Becomes A Persistence Game The Attacker Always Wins
Social engineering has always relied on patience. AI removes that cost. Attackers can run thousands of adaptive interactions simultaneously, learning and refining as they go. The imbalance is simple: attackers can iterate endlessly, while defenses are still threshold-based.
Where The Industry Is Ahead: Detection Is Becoming Behavior-Driven
There are areas where the industry has made meaningful progress. Behavioral biometrics and device intelligence shift detection away from static credentials toward how users actually behave. Unsupervised machine learning has also proven effective in identifying emerging fraud patterns without relying on predefined rules.
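To make the unsupervised, behavior-driven idea concrete, here is a minimal sketch: score each feature of a new session against the user’s own historical baseline, and flag features that deviate sharply. The feature names and thresholds are hypothetical illustrations; production systems use far richer signals and models than a z-score.

```python
from statistics import mean, stdev

def anomaly_scores(baseline, session):
    """Score how far a session's behavioral features deviate from a
    user's own history (higher = more anomalous). No predefined fraud
    rules: the user's past behavior IS the reference."""
    scores = {}
    for feature, history in baseline.items():
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            sigma = 1e-9  # avoid division by zero for constant features
        scores[feature] = abs(session[feature] - mu) / sigma
    return scores

# Hypothetical user history: inter-keystroke timing and session length
baseline = {
    "keystroke_ms": [110, 105, 115, 108, 112],
    "session_secs": [300, 280, 320, 310, 290],
}
# New session: typing is suddenly bot-like fast, session length is normal
session = {"keystroke_ms": 45, "session_secs": 305}

scores = anomaly_scores(baseline, session)
flagged = [f for f, z in scores.items() if z > 3.0]
print(flagged)  # only the typing cadence deviates strongly
```

The point of the sketch is the shift in reference frame: the credential can be correct and every rule can pass, yet the behavior still betrays the session.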
Where The Industry Is Behind
The biggest gaps are structural. Financial institutions still lack real-time data sharing across ecosystems, authentication standards remain inconsistent, and liability frameworks, especially around APP scams, are misaligned with how fraud actually occurs. These are not technology problems alone; they are coordination failures.
This Is Now An Arms Race And Static Models Won’t Keep Up
Fraud detection has entered a phase where static models degrade quickly. Systems trained on historical data are inherently reactive, while AI-driven attacks adapt continuously. Effective defense now requires the same principle: continuous learning, real-time adaptation, and systems designed to evolve as fast as the threats they face.
What This Means For AML — Where We Are Ahead, And Where We Are Not
AML had a structural problem long before Mythos. Most programs are still built around rules designed to catch the last generation of financial crime, not the current one. Mythos does not create that gap. It accelerates the consequences of it. The issue is no longer whether AML needs to evolve. It is whether the industry can do so fast enough to keep up with how financial crime is changing.
False Positives Are Already Unsustainable
Most AML programs operate with false positive rates in the 90 to 95 percent range. That inefficiency was manageable when suspicious activity scaled with human effort. It breaks down when detection systems are flooded with AI-generated activity. Rules-based systems cannot absorb that volume without overwhelming investigators and degrading signal quality.
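The arithmetic behind that claim is simple enough to sketch. The numbers below are illustrative assumptions (alert volumes, 30 minutes per review, 8-hour shifts), not figures from any institution; the structural point is that review cost scales with total alerts, not with true positives.

```python
def investigator_load(alerts_per_day, false_positive_rate,
                      minutes_per_alert=30, minutes_per_shift=480):
    """Back-of-envelope AML triage math. Every alert must be reviewed,
    whether or not it turns out to be a false positive."""
    true_positives = alerts_per_day * (1 - false_positive_rate)
    review_minutes = alerts_per_day * minutes_per_alert
    investigator_shifts = review_minutes / minutes_per_shift
    return true_positives, investigator_shifts

# Hypothetical today: 2,000 alerts/day at a 95% false positive rate
tp, staff = investigator_load(2_000, 0.95)
print(round(tp), staff)    # ~100 real hits cost 125 investigator-shifts

# AI-generated activity triples alert volume; the false positive rate
# doesn't improve, so review cost triples with it
tp2, staff2 = investigator_load(6_000, 0.95)
print(round(tp2), staff2)  # ~300 real hits now cost 375 shifts
```

At a 95 percent false positive rate, 19 of every 20 investigator-hours are spent clearing noise; any attacker who can cheaply inflate alert volume is effectively attacking the triage queue itself.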
Modern Transaction Monitoring Is Still The Exception
Truly modern transaction monitoring is dynamic, context aware, and driven by machine learning rather than static thresholds. It connects behavior across accounts, channels, and time. The reality is that many institutions are still operating far from this model, relying on legacy rules that struggle to detect adaptive and coordinated activity.
Fraud And AML Can No Longer Operate Separately
The line between fraud and money laundering is increasingly artificial. Fraud generates illicit funds. AML is expected to track and report their movement. Running these as separate programs creates blind spots at exactly the points where signals need to connect. What was once an organizational choice is now a structural liability.
Trade-Based Money Laundering Becomes Harder To Detect
Trade-based money laundering has always relied on complexity and opacity. AI changes that dynamic. Attackers can now analyze documents, invoices, and trade flows at scale, identifying inconsistencies and exploiting gaps more efficiently. As trade finance digitizes, the attack surface expands at the same time regulators are increasing scrutiny.
Where The Industry Is Ahead
There are areas of real progress. Graph-based entity resolution has improved the ability to uncover hidden relationships across networks. AI-assisted SAR drafting is reducing investigator burden. In crypto, on-chain analytics has set a higher standard for transparency and traceability than traditional systems have achieved.
Where We Are Behind
The biggest gaps remain structural. Data is still siloed across institutions, limiting the ability to detect cross network activity. Smaller financial institutions remain under-resourced relative to the risks they face. And the explainability gap persists. As regulators move toward stricter expectations on model transparency, many current systems will struggle to meet them without significant redesign.
What Regulators Are Doing — And The Gap That Still Needs Closing
Regulators are moving faster than many expected. What was initially framed as a technology discussion is now being treated as a financial stability issue. That shift matters. But it does not change the underlying constraint: regulatory timelines operate in months and years, while Mythos-class capabilities are already here. The response is necessary. It is also, by design, incomplete.
This is now being treated as systemic risk
Recent actions signal a clear reframing. Conversations that once sat within innovation or policy teams are now being elevated to central banks and treasury-level discussions. This is no longer about regulating AI as a tool. It is about understanding its potential to disrupt financial systems at scale.
The frameworks that matter right now
In Europe, the Anti-Money Laundering Authority is pushing toward a centralized AML rulebook. The EU AI Act introduces stricter expectations around explainability and model governance. The Digital Operational Resilience Act (DORA) raises the bar for resilience requirements across financial institutions. At the same time, gaps remain in areas like decentralized finance, where existing frameworks such as the Bank Secrecy Act do not fully apply.
Coordination remains the limiting factor
The regulatory response is fragmented by jurisdiction, while financial systems and attack surfaces are not. That mismatch creates a structural advantage for attackers, who can exploit inconsistencies across regions and frameworks. No single regulator can solve for this alone, and coordination at the pace required remains the hardest problem to address.
What The Industry Needs To Do — And What I Think Comes Next
The response to Mythos cannot be incremental. Treating this as another item in the security backlog is a category error. The institutions that move slowly will not just fall behind—they will operate against adversaries that are learning and adapting faster than their defenses can respond. From what we’ve seen building AI-native detection systems at DataVisor, the difference is not intent. It is execution, and how quickly that execution compounds.
Now: Pressure-Test Your Legacy Stack
Most institutions test for known vulnerabilities using known tools. That approach assumes you already understand where your weaknesses are. You don’t. AI changes discovery from targeted to exhaustive. If you can access AI-driven scanning, use it to simulate how an adaptive system would move through your environment—across APIs, third-party integrations, and edge cases that traditional testing rarely reaches.
Now: Brief Your Board With Specificity
“AI is a risk” is not a strategy. Boards need to understand exposure in concrete terms: which systems are most vulnerable, what an AI-driven exploit path could look like, and how quickly it could translate into financial loss or regulatory impact. This is about shifting the conversation from abstract risk to operational reality—and aligning investment with that reality.
Near-Term: Unify Fraud, AML, And Cyber
Today’s attack chains do not stop at functional boundaries. A single incident can begin as social engineering, move through account takeover, and end in layered money movement that triggers AML obligations. Treating these as separate domains slows detection and fragments response. A unified intelligence layer—shared signals, shared models, shared visibility—is the only way to match how attacks actually unfold.
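One way to picture a unified intelligence layer is as a single store where signals from all three teams land keyed by the same entity, so a cross-domain attack chain shows up as one timeline. This is a deliberately minimal sketch with invented field and event names, not a reference architecture.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Signal:
    """One observation from any domain: fraud, AML, or cyber."""
    entity: str   # shared key, e.g. a customer or account ID
    domain: str   # "fraud", "aml", or "cyber"
    kind: str     # e.g. "account_takeover", "rapid_layering"

class SharedIntelligence:
    """Signals from separate teams land in one store keyed by entity,
    making the full attack chain visible to every team at once."""
    def __init__(self):
        self.by_entity = defaultdict(list)

    def ingest(self, signal):
        self.by_entity[signal.entity].append(signal)

    def cross_domain_entities(self):
        """Entities with signals from more than one domain: exactly the
        cases siloed programs would each see only a slice of."""
        return [e for e, sigs in self.by_entity.items()
                if len({s.domain for s in sigs}) > 1]

intel = SharedIntelligence()
# A single incident unfolding across three organizational silos
intel.ingest(Signal("cust_42", "cyber", "phishing_email"))
intel.ingest(Signal("cust_42", "fraud", "account_takeover"))
intel.ingest(Signal("cust_42", "aml", "rapid_layering"))
intel.ingest(Signal("cust_77", "fraud", "card_testing"))
print(intel.cross_domain_entities())  # only cust_42 spans domains
```

Each team on its own sees one unremarkable event for cust_42; only the shared view reveals the phishing-to-takeover-to-layering chain the paragraph above describes.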
Near-Term: Outsource Where You Cannot Build
The capability gap between large institutions and smaller players is widening. Not every bank or fintech can build advanced AI infrastructure, maintain real-time detection systems, or continuously retrain models. That makes external platforms not just a convenience, but a necessity. The shift toward compliance-as-a-service reflects a deeper reality: resilience will increasingly depend on access to shared intelligence and scale.
Near-Term: Move From Rules To Adaptive Systems
Rules still have a role, but they cannot be the foundation. Static thresholds and predefined scenarios degrade quickly in an environment where attacks are constantly evolving. What works in practice are systems that learn from new data, identify deviations without prior labeling, and update continuously. This is not a model upgrade. It is a shift in how detection systems are designed and maintained.
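The difference between a static threshold and an adaptive one can be shown in miniature: track an exponentially weighted mean and variance of the stream and flag values far outside the recent distribution. The parameters and amounts are illustrative assumptions, not a production fraud model.

```python
class AdaptiveThreshold:
    """A detection threshold that learns from the data stream instead
    of staying fixed: flags values far outside the recent distribution
    while absorbing legitimate drift. Illustrative sketch only."""

    def __init__(self, alpha=0.1, k=4.0, warmup=5):
        self.alpha = alpha    # how quickly the baseline adapts
        self.k = k            # sensitivity, in standard deviations
        self.warmup = warmup  # observations before flagging begins
        self.mean = None
        self.var = 0.0
        self.n = 0

    def observe(self, value):
        """Return True if value looks anomalous, then update baseline."""
        self.n += 1
        if self.mean is None:
            self.mean = float(value)
            return False
        deviation = value - self.mean
        std = self.var ** 0.5
        anomalous = (self.n > self.warmup
                     and abs(deviation) > self.k * max(std, 1e-9))
        # Update even on anomalies, so gradual drift is absorbed
        self.mean += self.alpha * deviation
        self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
        return anomalous

monitor = AdaptiveThreshold()
normal = [100, 102, 98, 101, 99, 103, 97, 100, 102, 99]
flags = [monitor.observe(v) for v in normal]  # baseline settles near 100
alert = monitor.observe(100_000)              # out-of-pattern transfer
print(flags.count(True), alert)
```

A hardcoded rule (“flag transfers over X”) needs a human to choose and maintain X; here the reference point is recomputed on every observation, which is the design shift the paragraph describes.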
Strategic: Secure Defensive Access To Advanced Ai
There is a growing asymmetry in who can access frontier models. Institutions that can use Mythos-class capabilities defensively—to identify vulnerabilities, simulate adversarial behavior, and stress-test systems—will move faster and with more confidence. Those without access will be forced into a reactive posture. Over time, that gap compounds.
Strategic: Build For Real-Time Collaboration Across Institutions
Fraud and financial crime are network problems, but defenses remain largely institution-centric. Real-time data sharing—signals, indicators, behavioral patterns—will become critical to closing detection gaps. This requires not just technology, but trust frameworks, incentives, and regulatory alignment. Without it, attackers will continue to exploit the seams between institutions.
What I Think Comes Next
Mythos is not the ceiling. It is a signal of a broader shift toward autonomous, adaptive systems that operate at machine speed. The next phase will not be defined by single breakthroughs, but by continuous capability gains that compound over time.
The institutions that hold their ground will not be the ones that aim for perfect prevention. They will be the ones that design for constant change—systems that assume failure points exist, detect them early, and adapt continuously. In this environment, resilience is not a fixed state. It is an operating model.