As my coworker succinctly put it, "nobody uses Firefox anymore."
I don't know if hundreds of millions of people is exactly "nobody," but I personally agree that open source software is going to crush closed source for exactly the reasons we're seeing unfold in front of us: anyone can audit and correct incorrect behavior for the benefit of all.
For closed-source, I'd expect defenders to have a greater advantage because they can run Mythos on the source code, while attackers only get an opaque API/protocol to try messing with.
There is definitely a closed-source defender advantage where an attacker doesn't have access to the code, the binary, or an environment that can be instrumented (so, basically, software running in the cloud). But there have been several very effective technical demonstrations of LLM-guided or agentic approaches to assessing the security of closed-source tools, and I have personally had some success using LLMs with tool use to drive binary analysis tools for reverse engineering closed-source packages.
For many attack scenarios the real boundary is whether you can establish an effective canary or oracle for determining whether a change in input results in a change in output. Once you have that, it's simply a matter of scaling your testing or attack (fuzzing, blind injection, or any number of other attacks that depend on getting signal from a service).
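To make the oracle idea concrete, here's a minimal sketch (all names hypothetical; `fake_service` is a toy stand-in for whatever opaque API or protocol you're probing): you record a baseline response for a benign input, then treat any observable deviation from that baseline as signal.

```python
import hashlib

def make_oracle(query):
    """Build a change-detection oracle around an opaque service.

    `query` is any callable that sends an input to the target and
    returns the raw response bytes (hypothetical; stands in for an
    HTTP client, IPC stub, etc.).
    """
    baseline = hashlib.sha256(query(b"benign-input")).digest()

    def changed(payload: bytes) -> bool:
        # Signal = any observable difference from the baseline response.
        return hashlib.sha256(query(payload)).digest() != baseline

    return changed

# Toy stand-in target: behaves differently when a NUL byte sneaks in.
def fake_service(data: bytes) -> bytes:
    return b"error" if b"\x00" in data else b"ok"

oracle = make_oracle(fake_service)
print(oracle(b"another-benign-input"))  # False: output unchanged, no signal
print(oracle(b"inj\x00ected"))          # True: output changed -> signal
```

In practice "response" could be status codes, timing, or error pages rather than raw bytes, but the scaling step is the same: wrap the oracle in a loop over generated inputs.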
To some extent, yes, but models are good enough at reverse engineering that it isn't as great an advantage as you might think.
Idk, Mozilla has its issues, but I still primarily use Firefox and LibreWolf on my Linux desktop. I refuse to use Chrome except where necessary.
I use Firefox + uBlock Origin because it gives me complete control over what I see.
Same.
I wonder how many false positives there were. Typically these kinds of static analysis tools come up with a ton of potential bugs, but only a few of them are actual bugs.
The basic technique (as has been publicly described by Anthropic) is you ask one agent to come up with a test case that triggers, say, an ASan use-after-free. Then you have a second agent that validates the test case. This eliminates a lot of false positives. It gets a little tricky when you allow the first agent to modify the code, which is necessary for things like sandbox escapes where you want to demonstrate that sending bad IPC causes problems.
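The validation step described above can be sketched as a small check over the sanitizer's output (a simplified illustration, not Anthropic's actual harness): the second agent runs the candidate test case under ASan and only accepts it if the report names the claimed bug class, which is what filters out false positives.

```python
import re

# Hypothetical second-agent validation step: given the raw stderr from
# running a candidate test case under AddressSanitizer, decide whether it
# actually triggered the bug class the first agent claimed (here,
# heap-use-after-free), discarding anything else as a false positive.
ASAN_HEADER = re.compile(r"ERROR: AddressSanitizer: (?P<kind>[\w-]+)")

def validates_claim(asan_stderr: str, claimed_kind: str) -> bool:
    m = ASAN_HEADER.search(asan_stderr)
    return bool(m) and m.group("kind") == claimed_kind

# Trimmed shape of a real ASan report header (address invented).
report = "==4242==ERROR: AddressSanitizer: heap-use-after-free on address 0x602000000010"

print(validates_claim(report, "heap-use-after-free"))            # True
print(validates_claim("process exited normally", "heap-use-after-free"))  # False
```

A real harness would also rebuild the target, run the reproducer, and capture stderr; the point here is only the accept/reject decision at the end.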
And the part that follows is more important: the agent that attempts the fix in code, the agent that tests the fix and reports on performance and functional impacts, and the agent that triggers the build and release to production.
Everything up to finding and validating the bug is a huge win for vuln/exploit development; everything after validating the bug is a huge win for defensive security, and a massive gap until the tools are generally available :S
So where are they, then? Am I misunderstanding the process and this stuff is kept under wraps even after release?
There are three CVEs in today's security advisory that mention Anthropic.
https://www.mozilla.org/en-US/security/advisories/mfsa2026-3...
There's also no write-up I can see that distinguishes to what extent this is the work of the seven people credited alongside Mythos.
Last three CVEs are collections of bugs. CVE-2026-6784 is a collection of 55 bugs. CVE-2026-6785 is a collection of 154 bugs. CVE-2026-6786 is a collection of 107 bugs.
As for credits, I think bugs are ultimately credited to people, and this time Mozilla people used Mythos, as opposed to Anthropic people using Opus or Mythos.
Apart from anything else, I do like that, if one of the big bullet points of Mythos is security, Anthropic chose orgs like Firefox for its list of "preview users": the ones with the largest blast radii, and the most tempting targets.
Big news here, I think, is that they agree with Anthropic's prediction that it's a transitory issue, and expect to come out the other end more secure after fixing a finite number of bugs. Not looking forward to my turn at the firehose, but it could have been a lot worse.
What they did not say is how many of these vulnerabilities were addressed by LLM-created fixes, if any.
I can only speak for SpiderMonkey, as that’s the team I’m on, but we humans are definitely writing and reviewing the patches for these bugs. Sometimes the AI suggestions are good, often they’re not, and we never send off a fix for a security bug unless we thoroughly understand the problem and have assessed its severity ourselves.
Source: https://blog.mozilla.org/en/firefox/ai-security-zero-day-vul...