• mormegil@programming.dev
    2 days ago

    The xz attack required years of patient work to build Jia Tan’s credibility through hundreds of legitimate patches. These [LLM] tools can now generate those patches automatically, creating convincing contribution histories across multiple projects at once.

    I don’t know, but maybe the current hype could have the opposite effect: if you flood many projects with AI-generated patches, you’ll be flagged as an AI-slopper and blocked from those projects rather than become a trusted contributor. (OK, assuming a powerful nation-state-like adversary, it can probably check the AI output diligently enough to avoid being detected as AI spam, while still gaining some efficiency from it — unlike the hordes of people who need a famous open-source contribution for their CV and just copy-paste the first nonsense the AI produced into a PR.)

    • Int32@lemmy.dbzer0.com
      2 days ago

      The only solution is to refuse AI-generated commits, but first you need to find out that an LLM was used…