Mythos: The inside story
If you’re not Microsoft or Google, security just got complicated.
What Claude Mythos and Project Glasswing mean for the rest of us.
Over the next few months, a small coalition of tech giants and major banks will use a new AI tool called Claude Mythos to find and fix major security flaws in their systems before hackers can exploit them. The organised hacking groups that target the rest of the world (that means us) will acquire comparable AI tools from less controlled sources. If you run a digital platform or website that matters to your business, be aware: this will affect you.
This is not scaremongering. On 8 April 2026, Anthropic announced Claude Mythos Preview, an AI model that in pre-release testing discovered thousands of previously unknown vulnerabilities across every major operating system and web browser. It produced working exploits on its first attempt in more than 80% of cases. Anthropic's own estimate is that comparable capability will appear from other AI labs within six to eighteen months.
To their credit, Anthropic decided not to release it publicly. Instead, they set up “Project Glasswing”, a coalition of around 40 to 50 organisations getting early access to harden their systems. Named partners include AWS, Apple, Google, Microsoft, Nvidia, Cisco, CrowdStrike, Palo Alto Networks and JPMorgan Chase.
In the UK, the Bank of England, the Financial Conduct Authority, HM Treasury and the National Cyber Security Centre are running urgent briefings with major banks, insurers and exchanges through the Cross Market Operational Resilience Group. In an open letter to business, Technology Secretary Liz Kendall and Security Minister Dan Jarvis described Mythos as substantially more capable of cyber offence than any model previously tested by the UK AI Security Institute. Anthropic has said it will extend Glasswing access to UK financial institutions within days.
So, what does that mean for everyone else?
What actually changed
Mythos is a general purpose AI model that happens to be exceptionally good at reading code, reasoning about it, and finding ways to break it. It is not a security product. It is an AI system with a striking side effect, possibly one of many unintended consequences of advances in AI.
Anthropic’s approach is the responsible one. They have gated access, launched Glasswing to help critical infrastructure get ahead of disclosure, donated to open source security initiatives, and released a less cyber-capable public model, Claude Opus 4.7, with safeguards to test how the market adapts. They deserve credit for this.
But not every lab will take the same staged release approach. And capability doesn’t stay in labs. Well organised hacking and extortion groups, some state sponsored, already operate AI tools at scale. The question is not whether similar hacking ability reaches them, but when.
The issue no one is talking about
Glasswing partners are, broadly, the companies whose infrastructure most of the internet runs on. Hardening their systems benefits everyone downstream. That is genuinely good news.
The gap is everyone else. The mid-market commercial operator, the public sector organisation, the cultural institution, the charity, the challenger brand. These organisations are not getting early access. They are running the same systems they had last month, facing a hacker baseline that is rising.
Meanwhile, the window between vulnerability discovery and exploitation has collapsed from months to minutes. Defenders now need to move at the speed of AI. Most organisations do not.
The systems you don't own but still depend on
Your website runs on a CMS you didn’t write. That CMS runs on a stack of software libraries maintained by people you’ve never met. Your cloud hosting is a service. Your plugins are other people’s code. Your analytics, your tag manager, your payment gateway, your email automation. Every layer has potential vulnerabilities that have sat undiscovered for years simply because finding them required more effort than most attackers were willing or able to invest.
That is no longer true.
Mythos class capability, used by defenders, will surface these flaws fast. This is good. But some have described what follows as a “Patch Tsunami”: your systems will need patching more often, sometimes urgently, sometimes with downtime. The “we built this three years ago and it’s been fine” argument no longer holds. The next security review will surface more issues than the last one. Clients will receive invoices they didn’t budget for, and have conversations they didn’t expect to have.
This isn’t a reason to panic. It is a reason to treat digital systems as live operational infrastructure rather than as a finished project from a few years ago.
What a mature response looks like
A few things, none of them revolutionary.
- Know your software dependency tree. If you can’t list what is in your stack, you can’t defend it. The basics of supply chain security now matter more than ever.
- Treat AI driven security testing as part of build cost, not an optional extra. The economics of adding it now are far better than the economics of responding to an incident later.
- Patch software often and regularly. Boring, unglamorous, essential. The organisations that patch weekly will fare better than the ones that patch quarterly.
- Pair AI driven scanning with human judgement. AI will surface more than humans can, but humans still need to decide what matters, in what order, and how to fix it without breaking something else.
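The first of those points is concrete enough to sketch. For a Python stack, the standard library alone can enumerate installed packages and their declared dependencies. This is a minimal illustration, not a substitute for a proper software bill of materials or an audit tool, and the helper name `dependency_tree` is ours:

```python
import re
from importlib.metadata import distributions

def dependency_tree():
    """Map each installed package to the bare names of its declared dependencies.

    A rough first pass at "know what is in your stack": it reads package
    metadata only, so it will not see vendored code or system libraries.
    """
    tree = {}
    for dist in distributions():
        name = dist.metadata["Name"]
        if not name:
            continue  # skip broken or metadata-less installs
        deps = []
        for req in dist.requires or []:
            # Strip version specifiers and environment markers,
            # e.g. "requests (>=2.0); extra == 'dev'" -> "requests"
            deps.append(re.split(r"[\s;<>=!~\[\(]", req, maxsplit=1)[0])
        tree[name] = deps
    return tree

if __name__ == "__main__":
    tree = dependency_tree()
    print(f"{len(tree)} packages installed")
    for pkg, deps in sorted(tree.items()):
        print(f"  {pkg}: {', '.join(deps) if deps else '(no declared dependencies)'}")
```

The point of an exercise like this is not the listing itself but the surprise it usually produces: most teams cannot name everything the output shows.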
Small agencies with a handful of developers and no dedicated security practice will struggle with this. Not because they lack intent, but because the attack surface is too large for a small team to credibly cover. This is uncomfortable to say, but it is true, and clients are entitled to ask the question.
What we don't know yet
We don’t know what the next Mythos looks like. We don’t know which other AI labs are close, or how close. We don’t know who has access to what right now. We don’t know how quickly defensive tools will reach the mid-market.
This is not a reason to freeze. It is a reason to move deliberately, to tighten what you can tighten, and to make sure your partners are doing the same.
Where Bolser stands
Bolser has been building digital systems for mature organisations for over twenty years. We hold Cyber Essentials Plus. We pass an externally validated Microsoft security audit every year. We take the infrastructure we run seriously because the people we run it for rely on it.
We are evaluating how AI driven security testing fits into our client delivery, and we are talking with clients about the implications of all this rather than pretending we have all the answers. If you run a digital platform that matters (to your customers, your revenue, your reputation) and you want a conversation about what Mythos class AI means for the systems we’ve built, or the ones you built elsewhere, get in touch.
This is not a time to panic, nor a time for agencies that dabble. It is a moment for digital partners who take this seriously.
Ash
CEO