Cheers! Got a bit clearer now.
I'd appreciate it if someone could explain what the problem is, and its context, in simple terms 🙏
I understand the GNU "framework" is built on free, open-source software. So I don't understand how one can "discover" that there were pieces of non-free software in there… Were they put there by mistake?
😂
The current security philosophy almost seems to be: “In order to make it secure, make it difficult to use”. This is why I propose to go a step further: “In order to make it secure, just don’t make it”. The safest account is the one that doesn’t exist or that can’t be accessed by anyone, including its owner.
We aren't supposed to accept that. We can simply not use their software. And as users, that's the only power we have over devs. But it's a power that only works on devs who are interested in having many users.
Which can be further summarized: academics (🙋🏻) are basically a bunch of idiotic sheep, despite being in academia.
See also https://pluralistic.net/2024/08/16/the-public-sphere/#not-the-elsevier
Fantastic, this is extremely helpful, thank you! 🥇 I wanted to test a couple of distros for my Thinkpad, and I’ll make sure to check and save this kind of information from live USBs.
Thank you, that's useful info, I didn't know about this. Could you be so kind as to share a link, or say something more, about lspci and lsmod and how to proceed from them to identify which drivers one should install? Cheers!
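Just to check I'm not completely off before you reply (the output filename below is only a placeholder I made up): from the live session, is the rough idea something like this?

```
lspci -nnk > thinkpad-hw.txt   # PCI devices with vendor:device IDs, plus the "Kernel driver in use" / "Kernel modules" lines
lsmod >> thinkpad-hw.txt       # kernel modules currently loaded by the live system
```

And then a device with no "Kernel driver in use" line would be the one needing an extra driver or firmware package on the installed system?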
Yeah to me too. I’m not clicking on that “Download client” link for sure.
Thanks for the recommendations!
You brought back memories and I got interested. Interesting reading about privacy:
https://www.irchelp.org/security/privacy.html
How much of it is true?
Travelors = travellers + sailors. I like that!
Agree (you made me think of the famous face on Mars). I mean that more as a joke. Also, there's no clear threshold or divide on one side of which we can speak of "human intelligence". There's a whole range from impairing disabilities to Einstein and Euler – if it even makes sense to use a linear 1D scale, which it very probably doesn't.
Title:
ChatGPT broke the Turing test
Content:
Other researchers agree that GPT-4 and other LLMs would probably now pass the popular conception of the Turing test. […]
researchers […] reported that more than 1.5 million people had played their online game based on the Turing test. Players were assigned to chat for two minutes, either to another player or to an LLM-powered bot that the researchers had prompted to behave like a person. The players correctly identified bots just 60% of the time
A complete contradiction (if players can identify the bots better than chance, the bots haven't really passed). Trash Nature: it has become nothing more than an extremely expensive gossip science magazine.
PS: The Turing test involves comparing a bot with a human (not knowing which is which). So if more and more bots pass the test, this can be the result either of an increase in the bots’ Artificial Intelligence, or of an increase in humans’ Natural Stupidity.
This is so cool! Not just the font but the whole process and study. Please feel free to cross-post to Typography & fonts.
You’re simplifying the situation and dynamics of science too much.
If you submit or share a work that contains a logical or experimental error – it says “2+2=5” somewhere – then yes, your work is not accepted, it’s wrong, and you should discard it too.
But many works have no (visible) logical flaws and present hypotheses within current experimental errors. They explore, propose, or start from alternative theses. They may be pursued and considered by a minority, even a very small one, while the majority pursues something else. But this doesn't make them "rejected". In fact, theories followed by minorities periodically have breakthroughs and suddenly win over the majority. This is a vital part of scientific progress. Except in the "2+2=5" case, it's a matter of majority/minority, and that emphatically does not mean acceptance/rejection.
On top of that, the relationship between “truth” and “majority” is even more fascinatingly complex. Let me give you an example.
Probably (this is just statistics from personal experience) the vast majority of physicists would tell you that "energy is conserved". A physicist specialized in general relativity, however, would point out that there's a difference between a conserved quantity (somewhat like a fluid) and a balanced quantity – and energy, strictly speaking, is balanced, not conserved. This fact, however, creates no tension: if you have a simple conversation – 30 min or a couple of hours – with a physicist who stated that "energy is conserved", and you explain the precise difference, show the equations, examine references together etc., that physicist will understand the clarification and simply agree; no biggie. In the situations where that physicist works, it makes little practical difference (but obviously there are situations where the difference is important).
A guided tour through general relativity (see this discussion by Baez as a starting point, for example) will also convince a physicist who still insists that energy is conserved even after the balance vs conservation difference has been clarified. With energy, either "conservation" makes no sense, or, if we want to force a sense on it, it's false. (I myself have been on both sides of this dialogue.)
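To make the distinction a bit more concrete, here's a minimal sketch in standard notation (ρ an energy density, j its flux, T^{μν} the stress-energy-momentum tensor – just the usual symbols, nothing specific to a particular model):

```latex
% Conservation (flat spacetime): the total \int_V \rho \, dV changes only
% through the flux of j across the boundary of V.
\partial_t \rho + \nabla \cdot \mathbf{j} = 0

% Balance (general relativity): a covariant statement; the connection terms
% hidden in \nabla_\mu prevent it, in general, from integrating to a
% constant global "total energy".
\nabla_\mu T^{\mu\nu} = 0
```

Roughly speaking, a globally conserved energy only reappears when the spacetime has a suitable symmetry (a timelike Killing vector field), which is exactly what the Baez discussion explains far better than I can here.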
This shows a paradoxical situation: the majority may state something that's actually not true – but the majority itself would simply agree with this, if given the chance! This paradoxical discrepancy arises especially today owing to specialization and too little or too slow osmosis among the different specialities, plus excessive simplification in postgraduate education (approximate facts presented as exact). Large groups maintain some statements as facts simply because the more correct point of view is too slow to spread through their community. The energy claim is one example; there are others (thermodynamics and quantum theory have plenty). I think every physicist working in a specialized field is aware of a couple of such majority-vs-truth discrepancies. And this teaches humbleness, openness to reviewing one's beliefs, and reliance on logic, not "majorities".
Edit: a beautiful book by O'Connor & Weatherall, The Misinformation Age: How False Beliefs Spread, discusses this phenomenon and models of it.
Peer review, as the name says, is review, not “acceptance”. At least in principle, its goal is to help you check whether the logic behind your analysis is sound and your experiments have no flaws. That’s why one can find articles with completely antithetical results or theses, both peer-reviewed (and I’m not speaking of purchased pseudo peer-review). Unfortunately it has also become a misused political or business tool, that’s for sure – see “impact factors”, “h-indexes”, and similar bulls**t.
That's how I interpret it. My question is whether it's generally interpreted that way, or misinterpreted.
I really want to see what happens. It seems to me these "agents" are still useless at handling tasks like customer inquiries. Hopefully customers will get tired and switch to companies that employ competent humans instead…