The idea of using large language models (LLMs) to discover security problems is not new. Google's Project Zero investigated the feasibility of using LLMs for security research in 2024. At the time, it found that models could identify real problems, but required a good deal of structure and hand-holding to do so, even on small benchmark problems. In February 2026, Anthropic published a report claiming that the company's most recent LLM at the time, Claude Opus 4.6, had discovered real-world vulnerabilities in critical open-source software, including the Linux kernel, with far less scaffolding. On April 7, Anthropic announced a new experimental model that is supposedly even better; the company has partnered with the Linux Foundation to provide some open-source developers with access to the tool for security reviews.
LLMs seem to have progressed significantly in the last few months, a change that the open-source community has begun to notice.
https://lwn.net/Articles/1066581/