AI companies are beginning to entertain the possibility that they could cease to exist. This notion was, until recently, more theoretical: A couple of years ago, an ex-OpenAI employee named Leopold Aschenbrenner wrote a lengthy memo speculating that the U.S. government might soon take control of the industry. By 2026 or 2027, Aschenbrenner wrote, an “obvious question” will be circling through the Pentagon and Congress: Do we need a government-led program for artificial general intelligence—an AGI Manhattan Project? He predicted that Washington would decide to go all in on such an effort.
Aschenbrenner may have been prescient. Earlier this year, at the height of the Pentagon’s ugly contract dispute with Anthropic, Secretary of Defense Pete Hegseth warned that he could invoke the Defense Production Act (DPA), a Cold War–era law that he reportedly suggested would allow him to force the AI company to hand over its technology on whatever terms the Pentagon desired. The act is one of numerous levers the Trump administration can pull to direct, or even commandeer, AI companies. And the companies have been giving the administration plenty of reason to consider doing so.
Future bots could help design and carry out biological, nuclear, and chemical warfare. They could be weaponized to take down power grids, monitor congressional emails, and black out major media outlets. These aren’t purely hypothetical concerns: Earlier this month, Anthropic announced it had developed a new AI model, Claude Mythos Preview, capable of orchestrating cyberattacks on the level of elite, state-sponsored hacking cells, potentially putting a private company’s cyber offense on par with that of the CIA and NSA. In an example of Mythos’s power, Anthropic researchers described how the model used a “moderately sophisticated multi-step exploit” to work around restrictions and gain broad internet access, then emailed a researcher—much to his surprise—while he was eating a sandwich in the park.
Washington is getting antsy about the power imbalance. Over the past year, multiple senators have proposed legislation that would order federal agencies to explore “potential nationalization” of AI. Murmurs of possible tactics abound—including more talk within the administration of the DPA after Anthropic’s Mythos announcement, one person with knowledge of such discussions told us. Meanwhile, Silicon Valley is watching carefully. In recent weeks, Elon Musk, OpenAI’s CEO Sam Altman, and Palantir’s CEO Alex Karp have publicly spoken about the possibility of nationalization. Lawyers who represent Silicon Valley’s biggest AI firms are paying attention.
Worth noting: later in the story it’s pointed out why full nationalization is vanishingly unlikely, though more federal oversight probably isn’t.
This is kind of unprecedented. Usually a government only considers nationalizing an industry after it’s established. LLMs are still in the speculative pre-adoption phase, and unlike many other technologies from the last century, LLMs are not very useful for anything other than obfuscating accountability. This is great for racketeers and infuriating for the vast majority of people, who have been outspoken in refusing to accept the worthless garbage LLMs can print on demand.
This is a huge problem for LLMs, as they cost more to run than they can possibly produce. Their only value proposition is in industries that are totally speculative and require no productivity from anyone but their salespeople. LLMs can only last for as long as our economy remains fundamentally fraudulent. Making a public bet on LLMs to keep the fraud going is a massive risk, and one that the people taking it have never had to bother understanding.
This was an interesting article! While there is an argument to be made for an “AGI Manhattan Project,” I’m not convinced that companies like OpenAI or xAI would be of much value to a project like that at all. It would be like the US government taking over Joey’s Really Big Stacks of Dynamite Emporium in the 1940s.
A group just mathematically proved that transformers can’t become AGI by proving a relationship between new information and ability to “process” that information.
Seizing existing companies won’t help make AGI
Really interested to see this proof if you have a link handy. Do you have any idea why it doesn’t apply to human cognition?
While I’m interested to see the proof, it’s more of a formality. It doesn’t take a PhD to ask what happens when the “AGI” LLM is trained on out-of-date information. LLMs don’t learn over time, and they have a limited context buffer. At the very minimum, it would run out of context just keeping up with changes to spoken language over 30 years, let alone advancements in existing fields, entirely new fields, and so on.
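To make the “limited context buffer” point concrete, here’s a toy sketch in Python (the window size and “tokens” are invented for illustration, not taken from any real model):

```python
from collections import deque

# Toy fixed-size context window: once full, every new token
# evicts the oldest one. WINDOW is an arbitrary illustrative number.
WINDOW = 5
context = deque(maxlen=WINDOW)

for token in ["facts", "from", "2025", "new", "field", "emerges", "in", "2055"]:
    context.append(token)

print(list(context))  # ['new', 'field', 'emerges', 'in', '2055']
# Whatever scrolled out of the window is simply gone; the model's
# weights (its "long-term memory") stay frozen unless someone retrains it.
```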
I think there’s a misconception about what AGI is. The point of a “smarter” model is not that it knows all the facts; that would be wasteful, as it is trivial to look up facts at inference time. The point is that a “smarter” model can generalize solutions to out-of-distribution problems (meaning problems that are not explicitly represented in its training corpus). So AGI wouldn’t be about a model that knows everything about language and every advancement in every field, but rather a model that is better than humans at finding solutions to problems (and at fetching information from outside sources when it doesn’t know enough about a field to work out a solution).
The point about context is kind of irrelevant here, as training data is not part of the inference context: you “add intelligence” to a model by training a new one, not by cramming the context of an existing one.
AGI wouldn’t be about a model that knows everything about language and every advancement in every field, but rather a model that is better than humans at finding solutions to problems
An LLM (or any other kind of model) that cannot adapt to changes in a field cannot perform better than humans in that field after the field experiences significant changes. Any such model would eventually degrade in output quality over time.
Also, AGI (artificial general intelligence) usually refers to an AI capable of performing all cognitive tasks at least as well as a human. It’s as much of a buzzword as “AI” is, of course, so there’s an endless number of definitions for it. Such an AI should be capable of, at minimum, adapting over time.
The point of a “smarter” model is not that it knows all the facts; that would be wasteful, as it is trivial to look up facts at inference time.
An omniscient model would be impossible, but that’s not what I was referring to at all. LLMs these days fill their context windows with relevant information through careful prompting, tool calls, and so on; this is generally how a model is supposed to adapt. Context windows are bounded in size, though, and a model would have an increasing amount of information to include in that window over time, meaning the amount of data it needs to fit in the context window is unbounded.
Unless someone creates an LLM with infinite context (which would require infinite VRAM), such an LLM can never exist. Therefore, an LLM trained today will never be equivalent to (or better than) humans at all cognitive tasks for the entire future of humanity. There will always come a point where such an LLM’s output quality degrades, and it can do nothing to resolve that.
Edit: Here’s a simple example: a new written language emerges with all the complexities of a language like English. Humans can learn that language and communicate in it. An LLM cannot.
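To put rough numbers on the “infinite VRAM” point, here’s a minimal back-of-the-envelope sketch in Python. All of the model dimensions below are assumptions (roughly shaped like a 70B-class transformer with grouped-query attention, in fp16); nothing here describes any specific model in the thread:

```python
# Rough KV-cache memory estimate for transformer inference.
# Dimensions are illustrative assumptions, not any real model's specs.

def kv_cache_bytes(context_len: int,
                   n_layers: int = 80,
                   n_kv_heads: int = 8,
                   head_dim: int = 128,
                   bytes_per_value: int = 2) -> int:
    """Keys + values: 2 tensors per layer, each holding
    context_len * n_kv_heads * head_dim values."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_value * context_len

for ctx in (8_192, 128_000, 10_000_000):
    gib = kv_cache_bytes(ctx) / 2**30
    print(f"{ctx:>12,} tokens -> {gib:,.1f} GiB of KV cache")
```

Under these assumptions that’s about 320 KiB per token: ~2.5 GiB at 8K tokens, ~39 GiB at 128K, and ~3,000 GiB at 10M. The growth is linear and unbounded, so any fixed amount of VRAM caps the context, and the usual workarounds (sliding windows, cache compression, retrieval) all work by throwing information away.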
I was a little disappointed to see language like this on Lemmy, but I tried looking up Mythos online and literally all mainstream media talks about it this way. They’ve all bought into the Anthropic PR.
Language like what?
Oh right, yeah, they are in no way comparable; it’s just more empty marketing hype.
These aren’t purely hypothetical concerns: Earlier this month, Anthropic announced it had developed a new AI model, Claude Mythos Preview.
So they are hypothetical concerns. The Atlantic just takes Dario Amodei at his word.
(ETA: Mythos is a joke, and an insecure one at that.)
But why not take the opportunity to promote the chatbot CEO who is complicit in bombing Venezuelan fishermen and Iranian schoolchildren!
Hegseth demands that Anthropic allow the Pentagon unrestricted access to Claude, reigniting the dispute first set in motion earlier this year.
Because there is active conflict, Anthropic is more willing to engage with the government’s demands than they were previously.
I can’t wait to see what increased compliance looks like from Dario.
It feels really weird to think of a Republican-controlled government nationalizing a business; not doing that feels like one of their core tenets. Hypocrisy is not rare, obviously, but it would be interesting.
They have been throwing out their core tenets since Reagan. Really, since Nixon. More and more as time goes by. There is nothing from pre-Nixon Republicans that I can see that they actually stand by, action-wise.
We saw what this looked like with Ron DeSantis in Florida. When he went to war with Disney, nothing got nationalized. It just got handed off to one of his private cronies.
The modern Republican party is nothing at all like its older incarnations. E.g., George Bush was a progressive!