If so, are these programs that claim to ‘poison’ training datasets effective?

  • stravanasu@lemmy.ca · 19 points · edited 4 days ago

    It is actually not so difficult to see this for yourself in a much simplified setting. One can easily build a “Small Language Model” (SLM) that only extracts correlations between three consecutive words. There are plenty of short scripts on the web that do this; here and here are two examples, and a minimal sketch is given below. The output of such an SLM can contain surprisingly long, grammatically well-formed sentences (see the examples in the links above), which is remarkable given that all it learned was correlations between word triplets.
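    A minimal sketch of such a script in Python (the file name corpus.txt and all other names here are placeholders, not taken from the linked scripts): it counts which third word follows each pair of consecutive words, then samples text from those counts.

    ```python
    # Trigram "small language model": learns only (w1, w2) -> w3 correlations.
    import random
    from collections import defaultdict

    def train(words):
        """Count which third word follows each pair of consecutive words."""
        table = defaultdict(list)
        for w1, w2, w3 in zip(words, words[1:], words[2:]):
            table[(w1, w2)].append(w3)
        return table

    def generate(table, n_words=50):
        """Start from a random word pair and repeatedly sample a next word."""
        w1, w2 = random.choice(list(table))
        out = [w1, w2]
        for _ in range(n_words):
            candidates = table.get((w1, w2))
            if not candidates:               # dead end: restart from a random pair
                w1, w2 = random.choice(list(table))
                out.extend([w1, w2])
                continue
            w3 = random.choice(candidates)   # sample proportionally to counts
            out.append(w3)
            w1, w2 = w2, w3
        return " ".join(out)

    corpus = open("corpus.txt").read().split()   # any plain-text training file
    model = train(corpus)
    print(generate(model))
    ```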

    Now take a large amount of output from such an SLM and use it to train a second, identical or even better, SLM, then check the output generated by this second one. You’ll see that it is less coherent than the output of the first SLM. Train a third SLM on the output of the second, and the text that comes out will be even less coherent. And so on; a loop like the sketch below makes the degradation easy to watch.
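    A rough illustration of that loop, reusing train() and generate() from the sketch above (generation counts and corpus sizes are arbitrary choices): each SLM is trained purely on text produced by the previous one, and printing a snippet of each generation shows the coherence decaying.

    ```python
    # Each generation of SLM is trained only on the previous generation's output.
    text = open("corpus.txt").read()
    for gen in range(5):
        model = train(text.split())
        # Produce a synthetic corpus to serve as the next generation's training data.
        text = " ".join(generate(model, n_words=2000) for _ in range(10))
        print(f"generation {gen + 1}: {text[:120]}...")
    ```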

    • partofthevoice@lemmy.zip · 5 points · 4 days ago

      Yeah, but there are also some interesting nuances. I’ve seen smaller models on HuggingFace that, if I interpret them correctly, were tuned unsupervised on the output of larger models. So there seems to be some validity to training on generated text, as long as the model producing it is larger than the one learning from it — roughly the pattern sketched below.
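      This pattern is often called sequence-level distillation. A hypothetical sketch of the general idea, using gpt2-large and gpt2 purely as placeholder teacher/student pairs (not the actual HuggingFace models referred to above): sample text from the larger model, then fine-tune the smaller one on it with the ordinary language-modelling loss.

      ```python
      # Hypothetical sketch: fine-tune a smaller "student" model on text
      # generated by a larger "teacher". Model names are placeholders.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tok = AutoTokenizer.from_pretrained("gpt2")  # GPT-2 variants share a tokenizer
      teacher = AutoModelForCausalLM.from_pretrained("gpt2-large")
      student = AutoModelForCausalLM.from_pretrained("gpt2")

      # 1. Sample a small synthetic corpus from the teacher.
      prompt = tok("The", return_tensors="pt")
      samples = teacher.generate(**prompt, max_new_tokens=128, do_sample=True,
                                 num_return_sequences=8,
                                 pad_token_id=tok.eos_token_id)
      corpus = [tok.decode(s, skip_special_tokens=True) for s in samples]

      # 2. Fine-tune the student on the teacher's output with the usual LM loss.
      opt = torch.optim.AdamW(student.parameters(), lr=5e-5)
      student.train()
      for text in corpus:
          batch = tok(text, return_tensors="pt", truncation=True, max_length=256)
          loss = student(**batch, labels=batch["input_ids"]).loss
          loss.backward()
          opt.step()
          opt.zero_grad()
      ```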