This means absolutely nothing. It scanned a large amount of text and found something. Great, that’s exactly what it’s supposed to do. Doesn’t mean it’s smart or getting smarter.
People often dismiss AI capabilities because “it’s not really smart”. Does that really matter? If it automates everything in the future and most people lose their jobs (just an example), who cares if it is “smart” or not? If it steals art and GPL code and turns a profit on it, who cares if it is not actually intelligent? It’s about the impact AI has on the world, not semantics about what counts as intelligence.
It matters, because it’s a tool. That means it can be used correctly or incorrectly . . . and most people who don’t understand a given tool end up using it incorrectly, and in doing so, damage themselves, the tool, and/or innocent bystanders.
True AI (“general artificial intelligence”, if you prefer) would qualify as a person in its own right, rather than a tool, and therefore be able to take responsibility for its own actions. LLMs can’t do that, so the responsibility for anything done by these types of models lies with either the person using it (or requiring its use) or whoever advertised the LLM as fit for some purpose. And that’s VERY important, from a legal, cultural, and societal point of view.
I don’t know if you read the article, but in there it says AI is becoming smarter. My comment was a response to that.
Irrespective of that, you raise an interesting point: “it’s about the impact AI has on the world”. I’d argue its real impact is quite limited (mind you, I’m referring to generative AI and specifically LLMs rather than AI generally); it has a few useful applications, but the emphasis here is on few. However, it’s being pushed by all the big tech companies and those lobbying for them as the next big thing. That’s what’s really leading to the “impact” you’re perceiving.
Ok, good point. It also matters whether AI is true intelligence or not. What I meant was that the comment I replied to seemed to say that if it’s not true AI, nothing it does matters. The effects of the tool, even if it’s not true AI, matter a lot.
i feel like people are misunderstanding your point. yes, generative ai is bullshit, but it doesn’t need to be good in order to replace workers
How hilariously reductionist.
AI did what it’s supposed to do. And it found a difficult-to-spot security bug.
“No big deal” though.