Forrester Research Note - Mythos 10 Consequences - Finance Translation


A few weeks ago Forrester Research, which wouldn't normally be on my radar, posted a very concise note, “Project Glasswing: The 10 Consequences Nobody’s Writing About Yet.” Short version: the infosec industry is about to be turned upside down over the next 2–5 years, and in many ways the transition is already well underway.

This is one of the most important things I’ve read this year, and it exists at the intersection of infosec, artificial intelligence, and business & economics. I feel like people in our industry are snoozing on this.

Moreover, I’ve sent it around to a number of non-technical friends because I believe this is relevant to the world we live in from a variety of perspectives, not just infosec. I think there will be significant fallout from business, finance, and economic perspectives.

Feedback I’ve gotten from my non-technical friends is that this article is a bit of a steep climb if you don’t have a computer security background. Upon further reflection, I agree with this assessment.

I used one of my preferred AI tools to generate a translation of the article specifically tailored to my finance friend. The end result, aside from a few edits of my own, is below. I think it’s a very good read, and in many ways better than the original. I wish more people would read it.

Original article: Forrester Research

My prompt to the AI:

Please translate the following article for my friend who does not have a background in technical computer security work. He is very comfortable reasoning about complex issues. He just doesn’t have technical computer experience. He does, however, have lots of technical experience in finance and the economy. How AI, technology, and computer security interact with and affect/are affected by finance, economics, & business, are very interesting to him.

Please highlight the following as an additional insight (do not make it sound like it’s from the article): Mythos and Glasswing are really just hyper distilled demonstrations of the changes that are afoot. These changes are happening with or without Anthropic & Mythos.

AI-generated translation follows.

Project Glasswing: A Translation for the Finance/Economics-Minded

The Big Picture

Your friend should think of this as a supply shock in the vulnerability discovery market. Project Glasswing is Anthropic’s demonstration that AI can now autonomously find software security flaws at massive scale—flaws that previously required scarce, expensive human expertise to discover.

In economic terms: discovery has been commoditized. The equilibrium is shifting from “finding problems” (now cheap and abundant) to “fixing problems” (still expensive and human-constrained).


An Important Framing Insight

Mythos and Glasswing are best understood as hyper-distilled demonstrations of changes already in motion—not the cause of those changes.

The underlying capability shift is happening across the industry, with or without Anthropic’s public demonstration. Multiple actors (other AI labs, nation-states, well-resourced cybercriminal organizations) are independently converging on this same capability frontier. Anthropic simply chose to surface it openly and responsibly.

This is analogous to how the first firm to publicly demonstrate a new financial modeling technique isn’t causing the technique to exist—they’re just the first to acknowledge what the market is already capable of. The genie doesn’t go back in the bottle because one actor steps forward. The appropriate mental model is “demonstration of inevitability,” not “introduction of novelty.”

This framing matters because it shifts where one directs attention: away from Anthropic specifically and toward the structural transformation now underway regardless of any single player’s choices.


Background: Key Concepts Translated

| Security Term | Finance/Economics Analogy |
| --- | --- |
| Zero-day vulnerability | A previously unknown arbitrage opportunity—knowledge that others don’t have, creating asymmetric value. “Zero-day” means zero days of awareness, so no one has had time to prepare. |
| Penetration testing | Like hiring an auditor or stress-tester to probe your systems for weaknesses before a crisis hits. |
| CVE system | A centralized disclosure repository—think of it as a credit rating agency for software flaws, tracking known risks so the market can price them. |
| Open-source maintainers | Unpaid or underpaid contributors maintaining critical infrastructure. Like volunteers keeping the interstate highway system running. |
| Remediation | The work of actually fixing the problem. Discovery is identifying a bad loan; remediation is restructuring it. |

Two Critical Distinctions

Vulnerability vs. Software Bug

These terms are often conflated, but the distinction matters economically:

  • A bug is any flaw in software—a button that doesn’t work, a calculation that rounds incorrectly, a feature that crashes under unusual input. Most bugs are quality issues, not security issues.
  • A vulnerability is a bug that creates an exploitable security risk—a flaw that an attacker can leverage to gain unauthorized access, escalate privileges, extract data, or cause other harm.

Analogy: A bug is like a defect in a building’s design—a squeaky door, poor lighting, an inconvenient layout. A vulnerability is like a defect that creates a security problem: a lock that can be picked, a window that doesn’t latch, an alarm system with a known bypass. Both are flaws, but only one creates asymmetric risk.

Finance framing: A bug is like an operational error in a trading system (wrong rounding, slow execution). A vulnerability is like a flaw that allows unauthorized trading, data exfiltration, or manipulation. Both are defects, but only one attracts adversarial actors.
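For readers who want to see the distinction concretely, here is a minimal Python sketch (the functions and data are hypothetical, invented purely for illustration). The first function is merely wrong; the second hands control to whoever supplies the input:

```python
import sqlite3

# A *bug*: a quality defect. Summing fees with binary floats accumulates
# rounding error. The answer is wrong, but an attacker can't leverage it
# to gain access to anything.
def total_fees_buggy(fees):
    total = 0.0
    for fee in fees:
        total += fee
    return total

print(total_fees_buggy([0.1, 0.1, 0.1]))  # 0.30000000000000004, not 0.3

# A *vulnerability*: a defect an adversary can exploit. Building SQL by
# string interpolation lets a crafted input rewrite the query (classic
# SQL injection) -- unauthorized data access, not just a wrong answer.
def find_user_vulnerable(db, username):
    query = f"SELECT id FROM users WHERE name = '{username}'"  # unsafe
    return db.execute(query).fetchall()

# The fix: parameterize the query so input is data, never query structure.
def find_user_safe(db, username):
    return db.execute("SELECT id FROM users WHERE name = ?",
                      (username,)).fetchall()
```

Feeding `find_user_vulnerable` the string `' OR '1'='1` returns every row in the table; the parameterized version treats the same string as a literal (and non-existent) name.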


Vulnerability vs. Exploit

These are frequently treated as synonyms in casual discussion, but they represent distinct stages in the risk chain:

  • A vulnerability is the condition—the structural weakness that exists in the software, whether or not anyone knows about it. Think of it as the underlying flaw in the system.
  • An exploit is the method—the specific technique, code, or procedure that takes advantage of a vulnerability. It’s the weaponization of the flaw.

A vulnerability can exist for years before anyone discovers an exploit for it. Conversely, once a vulnerability is known, multiple different exploits can be developed for it (different attack paths, different tools, different levels of sophistication).

Analogy: A vulnerability is a flaw in a lock mechanism—a manufacturing defect that makes the lock pickable. An exploit is the specific picking technique and tool designed to exploit that particular defect. The lock was always vulnerable; the exploit is the method.

Finance framing: A vulnerability is like a structural arbitrage opportunity in a market (mispricing due to information asymmetry). An exploit is the specific trading strategy that captures that arbitrage. The mispricing existed whether or not anyone traded on it; the strategy is how value is extracted.

Why this matters for the AI discussion: Mythos discovers vulnerabilities at scale. It (currently) does not automatically produce exploits. However, in the hands of an adversarial actor, discovered vulnerabilities can become exploits very quickly. The economic question is: who holds the vulnerabilities, and what are their incentives?
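The condition/method split above can be sketched in a few lines of Python (a hypothetical toy file server, not code from any real system). Note that the vulnerability exists in the function itself; the exploit is just a string:

```python
import os

# The *vulnerability* is a condition in the code: this file server joins
# user input onto a base directory with no validation. The flaw exists
# whether or not anyone has noticed it yet.
def read_document_vulnerable(base_dir, name):
    return open(os.path.join(base_dir, name)).read()  # no path check

# The *exploit* is the method that weaponizes the condition: a crafted
# input that walks out of base_dir via "../" segments (path traversal).
exploit_input = "../../secret.txt"  # hypothetical attack string

# Fixing the vulnerability makes every exploit for it inert at once:
def read_document_safe(base_dir, name):
    path = os.path.realpath(os.path.join(base_dir, name))
    if not path.startswith(os.path.realpath(base_dir) + os.sep):
        raise PermissionError("path escapes base directory")
    return open(path).read()
```

This is why the ordering matters economically: many different exploit strings can target one vulnerability, but one remediation retires all of them.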


The 10 Consequences, Translated

First-Order Effects (Immediate)

1. Critical infrastructure maintenance becomes the bottleneck

Open-source software powers most of the digital economy (like public roads power commerce). AI can now find vulnerabilities in this infrastructure at exponential scale. But the “road crews”—volunteer maintainers—are still human, underpaid, and finite. Supply of fixes cannot meet demand for fixes. This risks systemic instability in the software supply chain.
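A back-of-envelope model makes the bottleneck vivid. The numbers below are made-up assumptions (not figures from the note); the point is only that a constant gap between discovery rate and remediation capacity compounds linearly into an ever-growing backlog:

```python
# Toy model: AI-driven discovery adds findings each week; human
# maintainers can triage-and-fix only a fixed number per week.
# Both rates are illustrative assumptions.
discovered_per_week = 500   # assumed AI discovery rate
fixed_per_week = 50         # assumed human remediation capacity

backlog = 0
for _ in range(52):         # one year
    backlog += discovered_per_week - fixed_per_week

print(f"Unfixed findings after one year: {backlog}")  # 52 * 450 = 23400
```

Under these assumptions the backlog grows by 450 findings every week, regardless of how hard the maintainers work—capacity, not effort, is the constraint.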

2. The market value of “finding” collapses

Penetration testing previously commanded premiums because discovery was the scarce skill. When AI makes discovery essentially free, the market value shifts to what remains scarce: interpretation, prioritization, and remediation guidance. This is analogous to how algorithmic trading compressed margins in execution, pushing value toward strategy and risk management.

3. Anthropic becomes a systemic institution

If Anthropic controls the most capable discovery model, cybersecurity vendors face a dependency risk—similar to how financial institutions became dependent on credit rating agencies or cloud providers. They must formalize partnerships or lose control over their own value proposition.


Second-Order Effects (6–18 Months)

4. Remediation services become the new high-margin category

A new service category will emerge: firms that interpret AI-discovered vulnerabilities and coordinate mass patching. The value has migrated downstream from discovery to resolution. Think of this as the difference between identifying non-performing loans and actually restructuring them—the latter requires judgment, relationships, and operational capacity.

5. The risk disclosure system gets overwhelmed

The CVE system (the “rating agency” for software flaws) will face volume it wasn’t designed to handle. When thousands of zero-days pour in faster than humans can triage, risk prioritization breaks down. Markets can’t price what they can’t classify.

6. Nation-state strategy pivots from “hoarding” to “racing”

Previously, zero-days were like rare strategic assets—gold reserves to be held and deployed sparingly. But if AI can discover them at scale, the scarcity premium collapses. Nation-states will likely shift to immediate deployment (exfiltrating data, establishing footholds) rather than stockpiling. This accelerates the arms race.

7. Cyber insurance reprices abruptly

Insurers will incorporate AI-driven verification into underwriting models. Organizations that haven’t patched known AI-discovered vulnerabilities will face premium spikes and new exclusions. This is comparable to how flood insurance repriced once climate modeling improved—the risk became measurable and the market adjusted.

8. Regulatory standards reset around AI capability

AI-level discovery becomes the new benchmark for “reasonable care.” CISOs will face compliance gaps—their traditional vulnerability management cycles are too slow for AI-speed discovery. Regulators will point to the capability and ask: “Why didn’t you know about this flaw?”


Third-Order Effects (2–5 Years)

9. AI-assisted security governance becomes its own compliance industry

Just as Sarbanes-Oxley created an audit industry around financial controls, new audit markets will emerge around AI-driven security actions. Organizations will need documented “human-in-the-loop” oversight—proving that a qualified person reviewed and approved each AI-driven decision.

10. Labor markets pivot from discovery to judgment

Demand for traditional vulnerability discovery skills will decline (like demand for manual calculators declined after spreadsheets). The new premium skills will be judgment under uncertainty: validating AI findings, making accountable decisions, and navigating the gap between what AI finds and what businesses can actually fix.


The Economic Summary

| What’s Happening | Economic Parallel |
| --- | --- |
| Discovery becomes commoditized | Like algorithmic trading democratizing execution |
| Value migrates to remediation & governance | Like value migrating from execution to alpha/risk management |
| Bottleneck shifts to human capacity | Like discovering oil reserves faster than refineries can process |
| Insurance reprices on new data | Like flood insurance after better climate models |
| Regulatory standards ratchet upward | Like post-crisis compliance requirements |

The core insight: This isn’t primarily a technical story—it’s a market structure story. AI has broken the old equilibrium where discovery was the scarce resource. The new equilibrium will reward whoever can best manage the downstream consequences: fixing, governing, and insuring against vulnerabilities that are now too numerous to ignore.