Last year, Andres Freund, a Microsoft engineer, spotted a backdoor in xz Utils, an open source data compression utility that is found on nearly all versions of GNU/Linux and Unix-like operating sys…
The xz attack required years of patient work to build Jia Tan’s credibility through hundreds of legitimate patches. These [LLM] tools can now generate those patches automatically, creating convincing contribution histories across multiple projects at once.
I don’t know, but maybe the current hype could have the opposite effect: if you try to flood many projects with AI-generated patches, you’ll be flagged as an AI-slopper and blocked from the projects, rather than becoming a trusted contributor. (OK, assuming a nation-state-level adversary: it can probably do the real work of checking the AI output diligently so as not to be detected as AI spamming, while still gaining some advantage from it — unlike those hordes of I-need-a-famous-open-source-contribution-on-my-CV types who just copy-paste the first nonsense the AI produced into a PR.)
The only solution is to refuse AI-generated commits — but then you need a way to find out that an LLM was used…