He generally shows most of the signs of the misinformation accounts:

  • Wants to repeatedly tell basically the same narrative and nothing else
  • Narrative is fundamentally false
  • Not interested in any kind of conversation or in learning that what he’s posting is backwards from the values he claims to profess

I also suspect that it’s not a coincidence that this is happening just as the Elon Musks of the world are ramping up attacks on Wikipedia, especially because it is a force for truth in the world that’s less corruptible than a lot of the others, and tends to fight back legally if someone tries to interfere with the free speech or safety of its editors.

Anyway, YSK. I reported him as misinformation, but who knows if that will lead to any result.

Edit: Number of people real salty that I’m talking about this: Lots

• douglasg14b@lemmy.world · 23 days ago

It’s likely this is a bot if it’s widespread. And Lemmy is INCREDIBLY ill-suited to handle even the dumbest of bots from 10+ years ago, never mind social media bots today.

  • kava@lemmy.world · 23 days ago

      To be fair, it’s virtually impossible to tell whether a text was written by an AI or not. If some motivated actor is willing to spend money to generate quality LLM output, they can post as much as they want on virtually all social media sites.

      The internet is in the process of eating itself as we speak.

    • douglasg14b@lemmy.world · 23 days ago

You don’t necessarily analyze the text; you analyze the heuristics, behavioral patterns, sentiment, etc. It’s data analysis and signal processing.

You, as a user, probably can’t, because you lack the data that the platform itself is in a position to gather and aggregate.

        There’s a science to it, and it’s not perfect. Some companies keep their solutions guarded because of the time and money required to mature their systems & ML models to identify artificial behavior.

        But it requires mature tooling at the very least, and Lemmy has essentially none of that.
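To make the "behavioral patterns, not text" point concrete, here is a minimal sketch of two such heuristics: suspiciously regular posting cadence (low variance in inter-post intervals) and a high share of repeated content. The function name, thresholds, and inputs are hypothetical illustrations, not any platform's actual detection logic; real systems combine many more signals and mature ML models, as the comment notes.

```python
from statistics import mean, pstdev

def bot_signals(post_times, post_texts):
    """Toy behavioral heuristics for flagging bot-like accounts.

    post_times: posting timestamps in seconds, ascending.
    post_texts: the text of each post.
    Returns (regular_timing, repetitive_content) flags.
    """
    # Inter-post intervals; a human's gaps vary a lot, a cron job's don't.
    intervals = [b - a for a, b in zip(post_times, post_times[1:])]
    avg = mean(intervals)
    # Coefficient of variation: low value => machine-like regularity.
    cv = pstdev(intervals) / avg if avg else 0.0
    regular_timing = cv < 0.2  # hypothetical threshold

    # Fraction of posts that are exact duplicates of an earlier post.
    dup_ratio = 1 - len(set(post_texts)) / len(post_texts)
    repetitive_content = dup_ratio > 0.5  # hypothetical threshold

    return regular_timing, repetitive_content
```

An account posting the same message exactly once an hour trips both flags, while irregular timing and varied text trips neither; the point is that these signals come from aggregate behavior only the platform can see, not from analyzing any single post's prose.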