I don’t think even the most perceptive forecaster would have picked a 90s LucasArts video format as the flashpoint for a discussion of the state of open source security. We live in an age of generative AI agents rampaging through OSS, though, and that seems to be what has happened.
Open source is one of the great triumphs in loose, global coordination. In most meaningful ways, proprietary software… lost. The scale and effectiveness of open source projects consistently outstripped their closed source counterparts across the stack, leaving proprietary software existing mainly at the application level.
This also shifted open source from standing in contrast to corporate, top-down development of proprietary software to being deeply intertwined with it. That intertwining mixed the expectations and requirements of volunteer-ish communities with those of profit-seeking businesses, leading to tension in several areas, including security.
Luckily, the loving grace of the megacorps gave us things like Google’s Project Zero, providing the kind of security investment that needs corporate-scale backing.
The flow for things like Project Zero looks like:
- Investigate popular projects and find real security risks before the bad guys do
- Share a report with the project, and give them time to fix it before disclosing it
- If the project doesn’t fix it within the disclosure deadline (90 days, in Project Zero’s case), disclose it publicly so that folks can work around the issue rather than sit unknowingly vulnerable to it.
That’s their mission: “make the discovery and exploitation of security vulnerabilities more difficult, and to significantly improve the safety and security of the Internet for everyone.”
Inherently, that’s a pretty good idea, because the corresponding flow for the various bad actors is:
- Investigate popular projects and find a real security risk
- Tell no one
- Use it (or sell it to the national intelligence agency of choice)
That seems worse!
Something, however, was rotten in the state of Stallman. The folks who maintain some of the most popular package repositories recently published an open letter, Open Infrastructure is Not Free: A Joint Statement on Sustainable Stewardship, which starts:
“Not long ago, maintaining an open source project meant uploading a tarball from your local machine to a website. Today, expectations are very different.”
Today’s expectations include complex distribution infra, signed packages, deterministic builds, CI coverage across many types of hardware, and resilience against security threats. These expectations aren’t unfounded: the PyPitfall paper (PyPitfall: Dependency Chaos and Software Supply Chain Vulnerabilities in Python, arXiv:2507.18075), released earlier this year, took an extensive look into one particular community:
“By analyzing the dependency metadata of 378,573 PyPI packages, we quantified the extent to which packages rely on versions with known vulnerabilities. Our study reveals that 4,655 packages have guaranteed dependencies on known vulnerabilities, and 141,044 packages allow for the use of vulnerable versions. Our findings underscore the need for enhanced security awareness in the Python software supply chain.”
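To make the paper’s two buckets concrete, here’s a minimal sketch of the failure mode it measures, using the real `packaging` library (the same one pip builds on). The package versions and the vulnerability list below are hypothetical; a real check would pull advisories from a source like the OSV database:

```python
# Sketch of how a loose version specifier "allows" vulnerable versions.
# The specifier and version data are hypothetical examples.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# What a package declares in its metadata, e.g. "somepkg>=2.0".
requirement = SpecifierSet(">=2.0")

# Releases of the dependency known to be vulnerable (hypothetical).
vulnerable = [Version("2.0.1"), Version("2.1.0")]
available = [Version("2.0.1"), Version("2.1.0"), Version("2.2.3")]

# The resolver may pick any allowed version, so the package "allows for
# the use of vulnerable versions" if any overlap exists.
allowed_vulnerable = [v for v in vulnerable if v in requirement]
print("allows vulnerable versions:", bool(allowed_vulnerable))  # True

# If *every* allowed version were vulnerable, the dependency would be
# "guaranteed" vulnerable in the paper's terms. Here a safe 2.2.3 exists.
allowed = [v for v in available if v in requirement]
print("guaranteed vulnerable:", all(v in vulnerable for v in allowed))  # False
```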
As the world centralized around open source, some aspects of the infrastructure scaled up, but the support and investment model really didn’t.
It’s very easy for the corporations building on OSS to treat it like an infinitely available good, especially when they don’t have to deal with the impact of their usage. Again, from the letter:
“Automated CI systems, large-scale dependency scanners, and ephemeral container builds, which are often operated by companies, place enormous strain on infrastructure. These commercial-scale workloads often run without caching, throttling, or even awareness of the strain they impose. The rise of Generative and Agentic AI is driving a further explosion of machine-driven, often wasteful automated usage, compounding the existing challenges.”
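For a sense of what that “awareness” could look like in practice, here’s a minimal sketch of a CI-side client that caches PyPI metadata and revalidates it with conditional requests, so an unchanged package costs a tiny 304 response instead of a full re-download on every ephemeral build. The JSON endpoint is real, and PyPI’s API docs recommend ETag-based caching; the on-disk cache layout here is illustrative:

```python
# Sketch: cache PyPI metadata and revalidate with ETags instead of
# re-fetching the same data on every ephemeral CI run.
import json
from pathlib import Path

import requests

CACHE_DIR = Path(".pypi-cache")  # hypothetical cache location

def fetch_metadata(package: str) -> dict:
    CACHE_DIR.mkdir(exist_ok=True)
    body_file = CACHE_DIR / f"{package}.json"
    etag_file = CACHE_DIR / f"{package}.etag"

    headers = {}
    if body_file.exists() and etag_file.exists():
        # Ask the server "has this changed?" rather than "send it all again".
        headers["If-None-Match"] = etag_file.read_text()

    resp = requests.get(f"https://pypi.org/pypi/{package}/json",
                        headers=headers, timeout=30)
    if resp.status_code == 304:
        # Unchanged: a few bytes on the wire instead of a full download.
        return json.loads(body_file.read_text())

    resp.raise_for_status()
    if "ETag" in resp.headers:
        etag_file.write_text(resp.headers["ETag"])
    body_file.write_text(resp.text)
    return resp.json()

print(fetch_metadata("requests")["info"]["version"])
```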
Because this code ends up in production for some very large products, maintainers end up as unpaid on-call. Well-intentioned folks want to keep a library in healthy shape and feel the pressure of knowing that perhaps millions of people (indirectly) depend on it. Then we mixed in AI.
The Big Sleep
The FFmpeg project is at the center of a storm right now over the demands from security research teams:
Google have spent billions of dollars training Gemini, and a hefty chunk more on a project called Big Sleep: an agent that does security research work at scale. That tool is exactly what the FFmpeg developers are reacting to, with issues like this use-after-free write in SANM process_ftch [440183164].
The vulnerability is in a codec for the LucasArts SMUSH format, which was used in games like Grim Fandango: a security risk targeting a very narrow group of people in their 40s. In a world of human researchers, I suspect that neither attacker nor researcher would have spent much time on that codec.
For an AI agent, it’s feasible to scale up the search if you have the compute and model resources, which Google do. So now that (very real!) vulnerability is documented¹. That also scales up the demands on maintainers, who don’t have equivalent billions to pour into generative AI security patching systems.
Security has always been asymmetric: it’s easier to break than to build. Scaling up discovery tips that scale right off the table. The bulls are in the bazaar, finding vulnerabilities in code for rendering 1995 Rebel Assault II cutscenes, and the maintainers just want someone to help clean up after them. Global-scale coordination on global-scale problems remains hard.
¹ and, to be clear, fixed! ↩︎