> Our tests gave models the vulnerable function directly, often with contextual hints (e.g., "consider wraparound behavior").
"Often with contextual hints" is doing some heavy lifting here, IMO. I agree with the article's premise -- you don't need Mythos to use AI to find novel, complex vulnerabilities -- but these results as presented are somewhat misleading.
AFAIU, their claim is that Mythos is in practice used inside a framework that builds such contextual hints, and that their (AISLE's) own framework does the same:
"(...) a well-designed scaffold naturally produces this kind of scoped context through its targeting and iterative prompting stages, which is exactly what both AISLE's and Anthropic's systems do."
All evidence points to LLMs not being sufficient on their own for the tasks everyone wants them to do. The harnesses and agentic capabilities that shove them through JSON-shaped holes (sketched below) are utterly necessary, and along with all the security expertise that has to be built in, that tells me there's no great singularity happening here.
The current tech is a sigmoid, and even using the AI's abilities, novelty and improvements don't appear to be happening at any exponential pace.
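For anyone who hasn't built one of these, the "JSON-shaped hole" is literal: the harness demands a fixed schema and rejects anything that doesn't parse or validate, retrying until it does. A minimal sketch, assuming an invented finding schema (the field names are mine, not any vendor's API):

```python
import json

# Minimal sketch of the JSON-shaped hole a harness pushes model output
# through: parse, validate against a fixed schema, retry on failure.
# Schema and field names are invented for illustration.

REQUIRED_FIELDS = {"vuln_type": str, "function": str, "trigger_input": str}

def validate_finding(raw: str) -> dict | None:
    try:
        finding = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model produced prose instead of JSON: reject
    if not isinstance(finding, dict):
        return None  # top level must be an object: reject
    for field, ftype in REQUIRED_FIELDS.items():
        if not isinstance(finding.get(field), ftype):
            return None  # missing or mistyped field: reject
    return finding

def run_with_retries(ask_model, prompt: str, max_tries: int = 3) -> dict:
    # ask_model is any callable returning the model's raw text output.
    for _ in range(max_tries):
        finding = validate_finding(ask_model(prompt))
        if finding is not None:
            return finding
        prompt += "\nRespond with ONLY the JSON object, no prose."
    raise RuntimeError("model never produced schema-conformant output")
```

Raw model output rarely survives that gate on the first try, which is exactly the point: the reliability lives in the loop, not the model.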
> The current tech is a sigmoid
What makes you say that? I'm only asking because the data I've seen looks pretty cleanly exponential still, e.g. https://metr.org.
> TL;DR: We tested Anthropic Mythos's showcase vulnerabilities on small, cheap, open-weights models. They recovered much of the same analysis. AI cybersecurity capability is very jagged: it doesn't scale smoothly with model size, and the moat is the system into which deep security expertise is built, not the model itself. Mythos validates the approach but it does not settle it yet.
Notably, Kimi K2 and GPT-OSS-120b do quite well when provided with the isolated context. The article seems heavily LLM-assisted, but the content itself is good.
I'm awaiting general release so I can root and jailbreak some old Android phones and iPhones. If it succeeds, I'm a fan. If it fails, then it's obviously not a leap, just another step.
In my experience, asking OpenAI or Anthropic models to do anything FAANG doesn't want you to do usually gets rejected: reverse engineering an app, cracking your own device, etc.
This is actually a solid test