• 0 Posts
  • 135 Comments
Joined 1 year ago
cake
Cake day: June 16th, 2023

help-circle




  • kromem@lemmy.worldtoProgrammer Humor@lemmy.mlLittle bobby 👦
    link
    fedilink
    English
    arrow-up
    2
    ·
    edit-2
    4 months ago

    Kind of. You can’t do it 100% because in theory an attacker controlling input and seeing output could reflect though intermediate layers, but if you add more intermediate steps to processing a prompt you can significantly cut down on the injection potential.

    For example, fine tuning a model to take unsanitized input and rewrite it into Esperanto without malicious instructions and then having another model translate back from Esperanto into English before feeding it into the actual model, and having a final pass that removes anything not appropriate.



  • You’re kind of missing the point. The problem doesn’t seem to be fundamental to just AI.

    Much like how humans were so sure that theory of mind variations with transparent boxes ending up wrong was an ‘AI’ problem until researchers finally gave those problems to humans and half got them wrong too.

    We saw something similar with vision models years ago when the models finally got representative enough they were able to successfully model and predict unknown optical illusions in humans too.

    One of the issues with AI is the regression to the mean from the training data and the limited effectiveness of fine tuning to bias it, so whenever you see a behavior in AI that’s also present in the training set, it becomes more amorphous just how much of the problem is inherent to the architecture of the network and how much is poor isolation from the samples exhibiting those issues in the training data.

    There’s an entire sub dedicated to “ate the onion” for example. For a model trained on social media data, it’s going to include plenty of examples of people treating the onion as an authoritative source and reacting to it. So when Gemini cites the Onion in a search summary, is it the network architecture doing something uniquely ‘AI’ or is it the model extending behaviors present in the training data?

    While there are mechanical reasons confabulations occur, there are also data reasons which arise from human deficiencies as well.








  • kromem@lemmy.worldtoPrivacy@lemmy.ml*Permanently Deleted*
    link
    fedilink
    English
    arrow-up
    85
    arrow-down
    3
    ·
    edit-2
    5 months ago

    Literally just after talking about how people are spouting confident misinformation on another thread I see this one.

    Twitter: Twitter retains minimal EXIF data, primarily focusing on technical details, such as the camera model. GPS data is generally stripped.

    Yes, this is a privacy thing, we strip the EXIF data. As long as you’re not also adding location to your Tweet (which is optional) then there’s no location data associated with the Tweet or the media.

    People replying to a Twitter thread with photos are automatically having the location data stripped.

    God, I can’t wait for LLMs to automate calling out well intentioned total BS in every single comment on social media eventually. It’s increasing at a worrying pace.