I feel like Nix is a good deployment platform, whereas Arch is a good development platform.
A release candidate is released. Wow!
The distribution is fine, maybe even good.
The politicking and project management around the distro has annoyed a lot of people.
Yeah. My theory is that the AMD GPU driver has swapped too much out of VRAM into main memory, because anything with high VRAM usage seems to trigger it.
I’m on an RX7600.
Flickering how?
Does the contents of the image change? Does it move? Do the monitors lose signal? Is there a rhythm / pattern to it? Does it flicker when you do something (e.g. move the mouse)?
Mine sometimes flicker when Chrome is loaded. When it happens part of the screen gets corrupted, and updates to the screen (e.g. moving windows) tend to make it happen. The monitors stay locked the whole time, but it will only affect one of my two screens.
The standard library is where projects go to die.
Computer programming, regardless of language, is hard. The computer does exactly what you tell it to.
Is that an armadillo? Forgetting how my own code works is my forte.
Screen 0 deleted because of no matching config section.
Your Xorg config has hard definitions of your devices which aren’t matching what is being detected. Probably best to let auto-detection do its thing.
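If that’s the case, a quick way to fall back to autodetection is to move the static config aside (the exact path varies; snippets may also live in `/etc/X11/xorg.conf.d/`):

```sh
# Let Xorg autodetect the hardware by moving the hand-written config out of the way
sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf.bak
```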
Arch people tell you “I use arch BTW”
Rust people make PRs rewriting your code in rust.
Rust people are worse.
Memory ownership isn’t the only source of vulnerabilities. It’s a big issue, sure, but don’t think rust code is invulnerable.
That’s disingenuous though.
We’re not forcing you to learn rust. We’ll just place code in your security critical project in a language you don’t know.
Rust is a second class citizen, but we feel rust is the superior language and all code should eventually benefit from its memory safety.
We’re not suggesting that code needs to be rewritten in rust, but the Linux kernel development must internalise the need for memory safe languages.
No other language community does what the rust community does. Haskellers don’t go to the Emacs project and say “We’d like to write Emacs modules, but we think Haskell is a much nicer and safer functional language than Lisp, so how about we add the capability of using both Haskell and Lisp?”. Pythonistas didn’t add Python support to Rails alongside Ruby.
Rusties seem to want to convert everyone by Trojan horsing their way into communities. It’s extremely damaging, both to those communities and to rust itself.
The question was “How do you define GPL compatible?”. The answer to that question has nothing to do with code being split between files. Two licenses are incompatible if they can’t both apply at the same time to the same thing.
…because they are incompatible licenses.
Not under a license which prohibits also licensing under the GPL, i.e. one that imposes no conditions beyond what the GPL specifies.
My experience is that AMD’s virtual memory system for VRAM is buggy, and those bugs cause kernel crashes. A few tips (a rough sketch of the commands follows the list):

- If running both cards is overstressing your PSU, you might be suffering from voltage drops when your GPU draws maximum power. I was able to run games absolutely fine on my previous PSU, but running diffusion models caused it to collapse. Try just a single card to see if it helps stability.
- Make sure your kernel is as recent as possible. There have been a number of fixes in the 6.x series, and I have seen stability go up. Remember: Docker images still use your host OS kernel.
- If you can, disable the desktop (e.g. `systemctl isolate multi-user.target`) and serve the web GUI over the network to another machine. If you’re running ComfyUI, that means adding `--listen` to the command line options. It’s normally the desktop environment that causes the crashes, when it tries to access something in VRAM that has been swapped to normal RAM to make room for your models. Giving the whole GPU to the one task boosts stability massively. It’s not the desktop environment’s fault; the GPU driver should handle the situation.
- When you get a crash, often it’s just the GPU that has crashed and not the machine (this won’t be true of a power supply issue). `ssh`ing in and shutting down cleanly can save your filesystems the trauma of a hard reboot. If you don’t have another machine, grab an SSH client for your phone, like JuiceSSH on Android. (Not affiliated. It just works for me.)
- Using `rocm-smi` to reset the card after a crash might bring things back, but not always. Obviously you have to do this over the network, as your display has gone.
- Be aware of your VRAM usage (`amdgpu_top`) and try to avoid overcommitting it. It sucks, but if you can avoid swapping VRAM everything goes better. Low-memory modes on the tools can help: ComfyUI has `--lowvram`, for example, which more aggressively removes things from VRAM when it’s finished using them. It slows generations down a bit, but that’s better than crashing.
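Putting those together, a rough sketch of the headless setup (the `~/ComfyUI` path, the `gpu-box` hostname, and device 0 are placeholders for your own setup):

```sh
# On the GPU machine: drop to a text console so the desktop isn't touching VRAM
sudo systemctl isolate multi-user.target

# Start ComfyUI listening on the network (serves on port 8188 by default)
cd ~/ComfyUI && python main.py --listen --lowvram

# From another machine (or a phone SSH client) after a GPU crash:
ssh user@gpu-box
amdgpu_top                       # check VRAM usage / whether the card responds
sudo rocm-smi --gpureset -d 0    # try to reset the card; doesn't always work
sudo systemctl poweroff          # failing that, at least shut down cleanly
```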
With this I’ve been running SDXL on an 8GB RX7600 pretty successfully (~1s per iteration). I’ve been thinking about upgrading, but I think I’ll wait for the RX8000 series now. It’s possible the underlying problem is something in the GPU hardware, as AMD are definitely improving things with software changes but not solving it once and for all. I’m also hopeful that they will upgrade the VRAM across the range. The 16GB 7600XT says to me that they know <16GB isn’t practical anymore, so the high end also has to go up, right?
With batteries that would have a multi-day cycle like these ones, you’re going to be trying to flatten out the demand curve (and supply, but the two are related).
The US generates 4.2 PWh a year, which averages out to a consumption rate of about 480GW. So in an ideal system we’d only need this level of generation capacity: when demand was higher at some times and lower at others, the batteries would smooth it all out.
I’m going to take your 560GW figure as representative of normal demand above the 480GW average. I’ll say half of every day is 80GW above average (when we’d be draining batteries) and half is 80GW below (when we’d be charging). The real curves are much more nuanced, but we’re establishing context. 80GW for 12 hours is 960GWh, so let’s call it 1TWh of battery capacity needed for the whole USA to smooth out a day.
That’s 117 of these installations, which I frankly find amazingly low.
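As a quick sanity check of that arithmetic (the ~8.5 GWh per-installation capacity is just what the 117 figure implies, not a number from the article):

```sh
# Back-of-envelope check with bc; all values are the rough ones above
echo "4200000/8760" | bc   # 4.2 PWh/year = 4,200,000 GWh -> ~479 GW average
echo "80*12" | bc          # 80 GW surplus/deficit for 12 h -> 960 GWh
echo "1000/8.5" | bc       # ~1 TWh at ~8.5 GWh per site -> ~117 installations
```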
It’s meant to be that `malloc` fails and the application handles it. Trouble is, applications are written expecting it never to fail.
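One way to see that for yourself: cap a process’s address space with `ulimit` so allocations can actually fail, and watch how it copes. `some_program` is a stand-in for whatever you want to test:

```sh
# Run a program with virtual memory capped at 200 MiB (ulimit -v takes KiB).
# Most programs abort or segfault here instead of handling the failed malloc.
( ulimit -v 204800; some_program )
```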
Very different solutions.
WireGuard all the way. Exposing just a VPN endpoint that can’t be connected to without the right cryptographic keys is a much smaller, more maintainable attack surface.
BTW I assume that’s what you meant by “DuckDNS”. Using that service is orthogonal to making HA visible externally, but is (I think) the common pairing.
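For reference, a minimal sketch of what the WireGuard server side looks like (the addresses, port, and file names here are examples, not from the thread):

```sh
# Generate a key pair (repeat on each peer)
wg genkey | tee server.key | wg pubkey > server.pub

# Minimal /etc/wireguard/wg0.conf:
# [Interface]
# Address    = 10.0.0.1/24
# ListenPort = 51820
# PrivateKey = <contents of server.key>
#
# [Peer]
# PublicKey  = <the client's public key>
# AllowedIPs = 10.0.0.2/32

sudo wg-quick up wg0   # bring the tunnel up; only UDP 51820 is exposed
```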