• 1 Post
  • 69 Comments
Joined 1 year ago
Cake day: August 5th, 2023

  • This may be, but the probability is unarguably higher than with Trump. Voting exclusively for candidates you morally agree with only works if enough people share the same morals (in this case, e.g. are educated on Israel and so on) and are also not willing to make compromises.

    Even if unfortunate, this is currently not the case, and your voting independent has a smaller chance of changing that than voting Democratic. So you will probably have to accept this situation for the moment and choose the “best actually feasible” strategy, where feasible means having the highest probability of winning in real life, not merely trying.

    Personally, I’d even argue that it’s unethical not to vote for a candidate like Harris, simply because the chances of getting stuff like ranked-choice voting or voter education done (which would then let you realistically vote for others) are significantly higher when voting for the Democrats than when… letting Trump win?

    Note that I’m not saying you have to agree with anything else she stands for; you’re trying to achieve certain goals / get out of the very unfortunate current situation, and even a low chance of getting there is infinitely better than none.

  • Quik@infosec.pub to Memes@lemmy.ml · Me but ublock origin · 3 months ago

    Billy really shouldn’t support them: Adblock Plus lets advertisers pay to have their ads whitelisted as “acceptable ads”, i.e. it is selling out the core functionality of its product. Billy should use uBlock Origin instead, which AFAIK does not accept donations; he could, however, support something like Pi-hole.
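    For what it’s worth, the mechanism being criticised is just exception rules in the Adblock-style filter syntax that both Adblock Plus and uBlock Origin understand; a minimal sketch with a made-up domain:

    ```
    ! Blocking rule: block requests to this ad server and its subdomains
    ||ads.example.com^

    ! Exception rule: the @@ prefix re-allows what would otherwise be blocked;
    ! a paid "acceptable ads" whitelist is built from rules like this
    @@||ads.example.com^
    ```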

  • Interesting take on LLMs; how are you so sure about that?

    I mean, I get it: current image-gen models seem clearly uncreative, but to me, at least, the unrestricted versions of Bing Chat/ChatGPT leave some room for the possibility of creativity/general intelligence in future, sufficiently large LLMs.

    So the question (again: to me) is not only “will LLMs scale to (human-level) general intelligence?”, but also “will we find something better than RLHF/LLMs/etc. before then?”.

    I’m not sure about either, but I’d assign roughly a 2/3 probability to the first and, given the first event (AGI in reach within the next 8 years), a comparatively small chance to the second.
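
    To spell out how those two estimates combine, here’s a quick back-of-the-envelope sketch in Python; the 0.15 stands in for the “comparatively small chance”, which isn’t quantified above, so treat it purely as a placeholder:

    ```python
    # Rough combination of the two estimates from the comment above.
    p_llms_scale_to_agi = 2 / 3        # stated estimate for the first question
    p_superseded_given_scaling = 0.15  # placeholder for the "comparatively small chance"

    # Probability that LLM scaling reaches AGI and nothing better arrives first
    p_agi_via_llm_scaling = p_llms_scale_to_agi * (1 - p_superseded_given_scaling)
    print(f"{p_agi_via_llm_scaling:.2f}")  # ~0.57 with these placeholder numbers
    ```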