
Mythos, AI Cyber Capability, and the Real Risk to Critical Infrastructure

Why AI doesn’t need to create new vulnerabilities in industrial environments to become dangerous; it only needs to get better at finding the ones already there


What This Article Covers

What Mythos Appears to Represent

Separating the signal from the hype around Anthropic’s restricted AI model

Why This Matters for ICS and OT

How industrial environment characteristics make accelerated cyber capability more dangerous

AI Amplifies Existing Fragility

The danger isn’t new weaknesses; it’s faster discovery and exploitation of old ones

SBOM and Hidden Software Risk

Why software transparency becomes even more critical in an AI-accelerated threat landscape

What Resilience Actually Looks Like

Practical steps that reduce fragility regardless of which AI capability headlines come next

Frequently Asked Questions

Honest answers to the questions I’m hearing from plant engineers and OT teams

Introduction

There’s a lot of noise right now about Anthropic’s Claude Mythos Preview. Then, within days, OpenAI announced GPT-5.4-Cyber under similar restricted access terms. OpenAI has since released GPT-5.5 with expanded advanced cybersecurity capability and stronger cyber safeguards, while still naming GPT-5.4-Cyber as the cyber-permissive model available through Trusted Access for Cyber. Two of the most capable AI labs on the planet have now both concluded that their most sensitive cybersecurity capabilities shouldn’t be released without trust-based controls, and that’s the signal we need to pay attention to.

Some of the online conversation is stuck debating whether the claims are hype. I think that’s the wrong debate. The bigger question for anyone working in industrial controls, SCADA, or OT security is this: what happens if advanced AI gets significantly better at finding and exploiting the weaknesses that already exist inside critical infrastructure environments?

After 36 years in the field, I can tell you that the environments I’ve worked in (water utilities, wastewater plants, power distribution) carry characteristics that make accelerated cyber capability especially dangerous. Long asset lifecycles. Patching constraints. Fragile integrations. Weak segmentation. And a lot of hidden software dependencies that nobody ever inventoried. This article is about what the Mythos moment signals for our industry and what practical resilience actually looks like on the ground.

What Mythos Appears to Represent And Why OpenAI’s Parallel Move Matters

Let me be direct about this: I’m not here to either validate or dismiss Anthropic’s claims wholesale. What I can do is lay out what’s being reported, what’s independently confirmable, and what the implications might be for industrial environments.

Anthropic says its Claude Mythos Preview can autonomously identify and exploit serious software flaws. The company claims the model identified thousands of zero-day vulnerabilities across major operating systems, browsers, and other critical software, with many related exploits developed autonomously.

Field Insight: The Pattern, Not Just the Product

Here’s what I think is actually important: rather than releasing Mythos broadly, Anthropic created Project Glasswing, a restricted program. Then OpenAI followed within days, releasing GPT-5.4-Cyber under their Trusted Access for Cyber (TAC) program, similarly limiting access to vetted defenders and critical infrastructure organizations. OpenAI’s GPT-5.5 release continued that direction by expanding advanced cyber capability for verified defenders while tightening controls around higher-risk workflows. When two competing AI labs both choose trust-based access over broad release for their most sensitive cyber capabilities, that suggests genuine concern about misuse, not just one company’s cautious approach.

Both models are reportedly trained to be “cyber-permissive”, meaning they’ll perform vulnerability research and exploit simulation that standard models would refuse. OpenAI’s TAC program requires automated identity verification and tiered vetting. Anthropic has been in ongoing discussions with U.S. government officials. It’s also worth noting that Anthropic’s own risk report is more measured than many public reactions: they assess significantly harmful outcomes as very low, though higher than prior models.

U.S. Treasury Secretary Scott Bessent, Federal Reserve Chair Jerome Powell, and the European Central Bank have all warned institutions about AI-linked cyber risks. When financial regulators start issuing warnings, the conversation has moved past theoretical.

Key Distinction

Whether every claim about Mythos, GPT-5.4-Cyber, or GPT-5.5 proves accurate is not the issue. The issue is that advanced AI appears to be getting better at finding and exploiting software weaknesses, and the companies building these capabilities have both concluded that the riskiest cyber workflows need stronger access controls. For industrial environments, that industry-wide trend alone changes the risk calculus.

Why This Matters for ICS and OT

I’ve spent most of my career inside environments where uptime isn’t a business preference; it’s a public safety requirement. Water treatment. Wastewater processing. Power distribution. These systems don’t get the luxury of rebooting on a Tuesday night and hoping for the best.

Industrial environments carry a set of characteristics that make accelerated AI-driven cyber capability especially concerning. Not because the technology is fundamentally different from what IT faces, but because the constraints and consequences are.

| Characteristic | IT Environments | OT / ICS Environments |
| --- | --- | --- |
| Asset Lifecycle | 3–5 years typical refresh | 15–25+ years is common |
| Patching Cadence | Monthly or faster | Quarterly, annually, or never |
| Downtime Tolerance | Planned maintenance windows | Unplanned downtime may endanger public safety |
| Network Segmentation | Mature, well-funded | Often incomplete, with legacy flat networks |
| Asset Visibility | Generally well-inventoried | Gaps are wider than most people admit |
| Remote Access | Centrally managed VPN/SSO | Often grew organically instead of by design |
| Recovery Plans | Regularly tested, cloud-backed | Often documented but never actually tested |

This isn’t about AI magically breaking systems that were otherwise secure. It’s about AI becoming better at finding weak paths, chaining weaknesses together, working through large attack surfaces, and compressing attacker timelines. In environments where the response cycle is already slow and the architecture is already fragile, that compression is the real threat.

Real-World Example: The Compounding Problem

In utilities I’ve worked with, it’s rarely one dramatic vulnerability in isolation that keeps me up at night. It’s the combination: a legacy HMI running an unsupported OS, connected to a historian with default credentials, on a network segment that was supposed to be air-gapped but had a cellular modem added three years ago for vendor access. Each one of those is documented somewhere. Nobody has connected the dots. Now imagine an AI that can connect those dots faster than any human red team.
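That dot-connecting is, at bottom, a graph reachability problem. Here is a minimal sketch in Python; the assets, links, and weakness annotations are hypothetical illustrations matching the scenario above, not a model of any real utility:

```python
# Sketch: how individually documented weaknesses chain into an attack path.
# The asset names and links below are hypothetical illustrations.
from collections import deque

# Directed reachability between assets (who can talk to whom).
network = {
    "internet": ["cellular_modem"],       # vendor access added years ago
    "cellular_modem": ["historian"],
    "historian": ["legacy_hmi"],          # default credentials on the historian
    "legacy_hmi": [],                     # unsupported OS at the end of the chain
}

def find_path(graph, src, dst):
    """Breadth-first search for a reachable path from src to dst."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path exists

print(find_path(network, "internet", "legacy_hmi"))
# Each hop is documented somewhere; the risk is the chain.
```

Nothing here is sophisticated, and that is the point: chaining documented weaknesses is cheap automation, and a capable AI would simply do it faster and over a far larger asset graph.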

AI Amplifies Existing Fragility

Here’s the line I keep coming back to: AI doesn’t need to create entirely new weaknesses in industrial environments to become dangerous. It only needs to get much better at finding and exploiting the weaknesses that already exist.

That’s the core insight. And it shifts the conversation from “Is this AI hype?” to “Are our environments resilient enough if vulnerability discovery and exploitation become faster and more scalable?”

How AI Accelerates the Existing Threat Model

Speed

AI compresses the time between vulnerability discovery and exploit development

Scale

AI can analyze vast codebases and firmware simultaneously, not one at a time

Persistence

AI doesn’t get tired, distracted, or stop looking after the first finding

Adaptability

AI can chain vulnerabilities together in combinations humans might not consider

In industrial environments, the issue is rarely one dramatic vulnerability in isolation. It’s the combination of technical debt, weak visibility, aging software, permissive trust relationships, and operational constraints. If advanced AI improves the speed and scale at which those conditions can be analyzed and exploited, then already-fragile environments become even harder to defend.

What I’ve Seen Over the Years

Technical debt isn’t just an accounting metaphor in ICS. It’s physical. It’s the PLC running firmware from 2009. It’s the SCADA server that nobody will touch because the integrator who configured it retired five years ago. It’s the undocumented serial-to-Ethernet converter sitting in a cabinet that nobody remembers installing. All of that becomes more dangerous when the tools to find and exploit it get better.

The supporting factors that make this worse in OT are well known to anyone who’s been in the field:

  • Technical debt: systems running well past their intended lifecycle with no upgrade path
  • Undocumented pathways: network connections, remote access points, and integrations that were never formally documented
  • Poor lifecycle discipline: firmware, OS patches, and software updates that are years or decades behind
  • Hidden software components: third-party libraries, embedded stacks, and dependencies that operators don’t know are there
  • Slow response cycles: operational constraints that make rapid patching or network changes extremely difficult

SBOM and Hidden Software Risk

This is where I think the conversation gets most practical and where most OT organizations are least prepared.

A Software Bill of Materials (SBOM) is essentially a detailed inventory of every software component inside a product. Think of it like a nutritional label for software. And in an environment where AI may accelerate how fast vulnerabilities are found in common software libraries, knowing what’s actually running inside your HMIs, historians, PLCs, and gateways isn’t optional anymore; it’s foundational.
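To make the “nutritional label” idea concrete, here is a minimal sketch of reading component names and versions out of a CycloneDX-style SBOM document. The `sbom_json` contents, component names, and versions are invented for illustration; real vendor SBOMs (CycloneDX or SPDX) carry far more fields:

```python
import json

# A minimal, hypothetical SBOM in CycloneDX-style JSON.
# Component names and versions are illustrative only.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "openssl",   "version": "1.0.2k"},
    {"name": "libmodbus", "version": "3.0.6"},
    {"name": "sqlite",    "version": "3.8.2"}
  ]
}
"""

def list_components(raw):
    """Return (name, version) pairs from a CycloneDX-style SBOM document."""
    doc = json.loads(raw)
    return [(c["name"], c["version"]) for c in doc.get("components", [])]

for name, version in list_components(sbom_json):
    print(name, version)
```

Even this trivial listing answers a question most OT asset owners currently cannot: which third-party libraries, at which versions, are inside the box on the plant floor.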

The Hidden Risk Most OT Teams Miss

Many of the underlying risks in an AI-accelerated threat landscape may sit in software components that asset owners don’t even realize are inside their platforms. A vulnerability in a widely used open-source library could affect your SCADA system, your historian, and your remote access gateway simultaneously, and you’d never know unless you had SBOMs from your vendors.

Here’s what I’ve seen in practice: most OT asset owners have never asked their vendors for an SBOM. Many vendors have never been asked to provide one. That gap is going to become increasingly untenable as the speed of vulnerability discovery accelerates.

Executive Order 14028 and subsequent guidance from CISA and NIST have been pushing SBOM adoption for years. But in OT environments, adoption remains low. The Mythos moment, whether you believe every claim or not, should accelerate that conversation inside your organization.

SBOM Readiness Checklist for OT Teams

Vendor Engagement

Start asking your ICS/SCADA vendors for SBOMs as part of procurement. Include SBOM requirements in new contracts and RFPs.

Asset Inventory Baseline

You can’t evaluate SBOM data without knowing what assets you have. Close your asset visibility gaps first.

Vulnerability Correlation

Establish a process to cross-reference SBOM contents against published CVEs. Automation helps here.

Internal Awareness

Educate your engineering and operations teams on why software transparency matters. This isn’t just an IT concern.
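The vulnerability-correlation step in the checklist can be sketched as a simple lookup. The component list, advisory feed, and CVE identifiers below are hypothetical placeholders; in practice the feed would come from NVD, vendor advisories, or CISA ICS advisories, and automation would keep it current:

```python
# Sketch of SBOM-to-CVE correlation with placeholder data.
sbom_components = [
    ("openssl", "1.0.2k"),
    ("libmodbus", "3.0.6"),
]

# Hypothetical advisory feed: (component, version) -> advisory IDs.
# The CVE identifiers here are placeholders, not real advisories.
advisories = {
    ("openssl", "1.0.2k"): ["CVE-XXXX-0001"],
}

def correlate(components, feed):
    """Return the components from an SBOM that appear in the advisory feed."""
    return {c: feed[c] for c in components if c in feed}

print(correlate(sbom_components, advisories))
```

The hard part is not this lookup; it is having trustworthy SBOM data on one side and a current advisory feed on the other, which is why vendor engagement and the asset inventory baseline come first in the checklist.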

What Resilience Actually Looks Like

Let me be direct: the right response to the Mythos conversation isn’t panic. It isn’t dismissal, either. It’s resilience: practical, operational resilience built into how you architect, maintain, and defend industrial environments.

The good news is that the things that make you more resilient against AI-accelerated threats are the same things that make you more resilient against every other threat. There’s nothing on this list that a competent OT security program shouldn’t already be working toward.

Five Pillars of OT Resilience in an AI-Accelerated World

Architecture

Proper network segmentation, defined trust zones, and conduit controls based on ISA/IEC 62443 principles. If an attacker gets into one zone, they shouldn’t be able to reach the rest.

Visibility

Complete, accurate asset inventories including software versions, firmware levels, network connections, and communication flows. You can’t defend what you can’t see.

Transparency

SBOM adoption, vendor accountability, and understanding what software components are running inside your critical systems, not just the product labels on the outside.

Recovery

Tested backup and recovery plans. Not documented plans. Not plans that worked once three years ago. Plans that have been validated against realistic scenarios within the last year.

Lifecycle Discipline

Firmware updates, OS patching where feasible, software lifecycle management, and retirement planning for assets that are past end of support. Technical debt is a security liability.
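The zones-and-conduits idea under the Architecture pillar can be illustrated as an explicit allowlist: a flow between zones is permitted only if a defined conduit exists. The zone names and permitted conduits here are illustrative assumptions in the spirit of ISA/IEC 62443, not a reference design:

```python
# Sketch of a zone/conduit allowlist in the spirit of ISA/IEC 62443.
# Zone names and permitted conduits are illustrative assumptions.
ALLOWED_CONDUITS = {
    ("enterprise", "dmz"),   # e.g. historian replication into a DMZ
    ("dmz", "control"),      # e.g. a managed jump host into the control zone
    # Deliberately absent: ("enterprise", "control") - no direct path.
}

def flow_permitted(src_zone, dst_zone):
    """A flow is permitted only if an explicit conduit is defined for it."""
    return (src_zone, dst_zone) in ALLOWED_CONDUITS

print(flow_permitted("enterprise", "control"))  # direct path: denied
print(flow_permitted("dmz", "control"))         # via defined conduit: allowed
```

The design choice worth noticing is default-deny: anything not explicitly listed is blocked, which is exactly the property that keeps a compromise in one zone from reaching the rest.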

The Real-World Lesson Here

I’ve seen organizations spend six figures on monitoring tools while leaving default credentials on their historians and running Windows XP on their HMIs. Resilience isn’t about buying another product. It’s about disciplined fundamentals, the boring work that doesn’t make for good conference talks but actually reduces your attack surface.

Conclusion: Less Hype, More Resilience

The Mythos conversation will continue to evolve. More details will come out. Some claims will be validated; others may be walked back. That’s how these things go.

But here’s what I know from 36 years in industrial environments: the weaknesses that would make AI-accelerated threats dangerous are already there. They’ve been there for years. Technical debt, poor visibility, hidden software dependencies, untested recovery plans, and architectures that were designed for connectivity without sufficient thought about consequence: none of that is new.

What may be new is the speed at which those weaknesses can be found and exploited. And that’s a reason to invest in resilience: not out of fear, but in the disciplined fundamentals that reduce fragility no matter what the next headline says.

The Bottom Line

You don’t need to believe every claim about Mythos to take action. You just need to look honestly at your own environment and ask: if vulnerability discovery and exploitation get significantly faster, are we ready? For most OT environments, the honest answer is not yet. And that’s the conversation worth having, with your team, your leadership, and your vendors, starting now.

Frequently Asked Questions

Is Mythos actually a threat to SCADA and ICS systems right now?

Field Perspective

Mythos itself is currently restricted under Anthropic’s Project Glasswing program. The direct, immediate threat isn’t Mythos specifically; it’s the broader trend that AI-assisted vulnerability discovery and exploit development are improving. That trend affects the software running inside ICS environments whether or not Mythos is ever publicly released.

Is this just another IT problem being projected onto OT?

Important Clarification

No. The vulnerabilities in question affect operating systems, browsers, communication stacks, and software libraries that are present inside OT platforms. Many SCADA systems, historians, and HMIs run on Windows. Many use embedded web servers. Many contain third-party libraries that share vulnerabilities with the broader software ecosystem. This isn’t IT projecting; it’s shared infrastructure carrying shared risk.

Our systems are air-gapped. Does this still apply?

A Common Misconception

In 36 years, I’ve seen very few truly air-gapped environments. Most have some form of connectivity: vendor remote access, cellular modems, USB transfers, data diodes with exceptions, or historian links to the corporate network. If there’s any pathway in or out, the vulnerability surface matters.

What should I prioritize first if my budget is limited?

Start Here

Asset inventory and network visibility. You can’t protect what you can’t see, and every other security investment builds on knowing what you have, where it connects, and what software it runs. This doesn’t require expensive tools; it requires discipline and dedicated time.

How do I start asking vendors for SBOMs?

Practical Approach

Include SBOM requirements in your next procurement or contract renewal. Reference the NTIA minimum elements as a baseline. Start with your most critical systems: your SCADA platform, your historian, your remote access solution. Most vendors are further along than you think; they just haven’t been asked.

Does ISA/IEC 62443 address AI-driven threats?

Framework Context

Not specifically. But the ISA/IEC 62443 series provides the architectural foundation (zones, conduits, security levels, and lifecycle requirements) that makes environments more resilient regardless of whether the threat actor is human, automated, or AI-assisted. The framework is still the right starting point.

Should we be using AI defensively in OT?

Emerging Area

There are promising applications for AI in anomaly detection, network monitoring, and threat analysis for OT environments. But AI-powered defense tools are only as good as the data they’re trained on and the architecture they’re deployed into. Fix the fundamentals first. Add AI-powered detection as a layer, not a substitute for proper segmentation, visibility, and lifecycle management.

How do I explain this risk to leadership without sounding alarmist?

Communication Strategy

Frame it this way: the tools available to adversaries are improving faster than most industrial environments are hardening. That gap isn’t new, but it may be widening. The investment case isn’t about one specific AI model; it’s about reducing the technical debt and visibility gaps that every credible threat assessment already flags. Use the financial regulators’ warnings as external validation that this isn’t just an engineering concern.


Professional Disclaimer

The information provided in this article represents general engineering principles and field experiences accumulated over 36 years in industrial automation. This content is intended for educational and informational purposes only and should not be considered as specific engineering recommendations for your particular application.

Every industrial facility presents unique safety, environmental, regulatory, and operational requirements that must be thoroughly evaluated by qualified professional engineers familiar with your specific systems and local codes. Always consult with qualified engineers, follow applicable safety standards, and conduct proper testing and validation before implementing any solutions in production environments.

The author and publisher disclaim any liability for damages, losses, or injuries that may result from the use or misuse of information contained in this article.

Alana Murray