

Yeah, this is walking one thing back to get away with their still-bad behaviors.


“Distances” isn’t the same as “severing ties from”.
Cancel the contract, you fucks!


open-weights aren’t open-source.
This has always been a dumb argument, and it lacks any modicum of practicality. This is rejecting 95% of the need because it is not 100% to your liking.
As we’ve seen in the text-to-image/video world, you can train on top of base models just fine. Or create LoRAs for specialization. Or change them into various styles of quantized GGUFs.
Also, you don’t need a Brazilian LLM because all of the LLMs are very multilingual.
Spending $3000 on training is still really cheap, and depending on the size of the model, you can get away with training on 24GB or 32GB cards, which costs you the price of the card and the electricity. LoRAs take almost nothing to train. Any university worth anything is going to have the resources to train a model like that. None of these arguments hold water.
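For a sense of why LoRAs take almost nothing to train: you only learn two small low-rank matrices per weight matrix instead of the whole thing. A rough illustrative sketch (the function name and the 4096×4096 layer size are just example numbers I picked, not any specific model):

```python
def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters a LoRA adapter adds for one weight matrix:
    two low-rank factors, A (d_in x rank) and B (rank x d_out)."""
    return d_in * rank + rank * d_out

full = 4096 * 4096                    # one full projection matrix: ~16.8M params
lora = lora_params(4096, 4096, 16)    # rank-16 adapter: 131,072 params
print(f"full: {full:,}  lora: {lora:,}  ratio: {lora / full:.2%}")
```

Less than 1% of the weights are trainable per layer, which is the whole reason this fits on consumer cards.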


DeepSeek API isn’t free, and to use Qwen you’d have to sign up for Ollama Cloud or something like that
To use Qwen, all you need is a decent video card and a local LLM server like LM Studio.
Local deploying is prohibitive
There’s a shitton of LLMs in various sizes to fit the limits of your video card. Don’t have the 256GB of VRAM for the full 8-bit quantized 235B Qwen3 model? Fine, get the 4-bit quantized 30B model that fits into a 24GB card. Or a DeepSeek-R1 post-trained Qwen3 8B base, quantized to 6-bit, that fits on an 8GB card.
There are literally hundreds of variations that people have made to fit whatever size you need… because it’s fucking open-source!
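The napkin math behind those sizes is dead simple: weight memory is just parameter count times bits per weight. This sketch is illustrative only (the function name is mine), and it counts weights alone — real deployments add KV cache and runtime overhead on top, which is why the full 8-bit 235B model wants ~256GB rather than 235GB:

```python
def weight_gb(params: float, bits: int) -> float:
    """Approximate memory needed for just the quantized weights, in GB.

    params: parameter count; bits: bits per weight after quantization.
    Ignores KV cache and activation overhead, which add more on top.
    """
    return params * bits / 8 / 1e9

print(f"235B @ 8-bit: {weight_gb(235e9, 8):.0f} GB")  # weights alone: 235 GB
print(f"30B  @ 4-bit: {weight_gb(30e9, 4):.0f} GB")   # 15 GB, fits a 24GB card
print(f"8B   @ 6-bit: {weight_gb(8e9, 6):.0f} GB")    # 6 GB, fits an 8GB card
```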


For a company named “Open” AI their reluctance to just opening the weights to this model and washing their hands of it seem bizarre to me.
It’s not bizarre when you understand the history. When StabilityAI released Stable Diffusion as an open-weights model and kickstarted the whole text-to-image craze, there was a bit of a reckoning. At the time, Meta’s LLaMA weights were also out there in the open. Then an internal Google memo leaked that basically said “oh shit, open-source is going to kick our ass”. Since then, they have been closing everything up, as the rest of the companies realized that giving away their models for free isn’t profitable.
Meanwhile, the Chinese have realized that their strategy has to be different to compete. So almost every major model they’ve released has been open-source: DeepSeek, Qwen, GLM, Moonshot AI’s Kimi, Wan video, Hunyuan image, Higgs Audio. Black Forest Labs in Germany, with their FLUX image model, is the only other major non-Chinese company that has adopted this strategy to stay relevant. And the models are actually good, going toe-to-toe with the American closed-source models.
The US companies have committed to their own self-fulfilling prophecy in record time. Open source is actively kicking their ass. Yet they will spend trillions trying to make profitable models and wreck the global economy in the process, while the Chinese wait patiently to stand on their corpses when the AI bubble grenade explodes in their faces. All in the course of five years.
Linux would be so lucky to have OS market share dominance in such an accelerated timeline, rather than the 30+ years it’s actually going to take. This is a self-fail speedrun.


delete my account on one site
Lemme guess: Reddit?


OpenAI should have been fucking open in the first place. The Chinese are the only ones bothering to open-source their models, and the US corpos’ decision to immediately close-source everything is going to fuck them over in the end.


And still way way way too many redactions. With no explanations, as required by law.
Wut? I see no logo on their website.


Nevermind that he’s never gotten this angry or political on any of his YT videos.


If you’re a regular on YouTube, then you already know Technology Connections is one of those must-watch YouTubers.


I save NINETY FUCKING MINUTES of my life.
I save 2 hours of my life by not watching a movie and not being entertained by the experience. But then I’m just filling that time with less impactful, more mundane bullshit.
Some things are worth putting the time into. Complaining about how long it is doesn’t change the fact that it’s WORTH NINETY FUCKING MINUTES OF YOUR LIFE!


It’s amazing that this is now a downvoted opinion.
The overall concept seemed fine, but it’s mired in some truly dogshit design decisions.


It’s kind of funny. Nowadays, I find the AI search assistants (I use the one in Kagi) work better than search results clogged with all of these shitty AI sites.
We’re back to the pre-StackOverflow age, when Expert Sex Change was always plaguing my search results with its fucking pay-to-view bullshit. Except now it’s free-but-useless websites.


Meh, it won’t be long before Reddit admins crack down on posts and start their usual censorship campaigns.


Who the fuck calls Instagram “IG”?


Copyright as it is now is an injustice.
At best, copyright with a 25-year limit, like we had before Mark Twain fucked all of us over, would suck a lot less.
At worst, corporations would still exploit it to totality, because they have money, and you don’t.
Copyright was created with an agreement that the public would receive their public domain dues in a timely manner. The corpos broke that contract with the public. Therefore, piracy is not only justified, but a moral duty to preserve what corporations casually throw away, or exploit with mindless memberberries.
I would not be sad at all to see the entirety of copyright completely abolished. Open source is already doing a damn good job, and AI might end up hammering the final nail.


whether the victim was 18 years old or 17.”
I kind of get what he’s saying here, especially when draconian California laws can put 18-year-olds in prison for daring to have sex with a 17-year-old, when they are both in high school. (I think they finally fixed that legal gap, but it existed for a long time.)
But, completely outside the whole age and human brain development “debate”, there are also power dynamics at play here that aren’t even being considered. Epstein was a powerful man who used his influence to coerce girls into having sex with other powerful men. Even if she was 18 or 25, a woman in that position is still being exploited, with human trafficking in the mix.


It’s not easy, but it’s done all the time. New models, new LoRAs, and in some cases, the training data doesn’t even need to be very large for a specific task.
You don’t need the entire training dataset that the model was built from.
This is a total lie. This has nothing to do with AI. They’ve hated archive sites because forums like this one hate their paywalls, and we prefer to be able to actually read their articles and discuss them instead of getting blackballed every time.
NYT is one of the worst offenders, and as a company it has turned for the worse in the last 5-10 years, maybe even more than the Amazon Post. None of the old media companies really understand how to adapt to the Internet age, so they are slowly dying. It’s like they are perpetually in an economic bubble that hasn’t figured out how to pop itself. There’s so much damn news, and so many outlets copying and regurgitating each other’s stories a hundred times over, that we’re forced to aggregate it all and have YouTubers hawk shit like Ground News just to process it.