I’m guessing it’s because the developers either have a different speciality that they focus on, are employed to support specific hardware, or both.
Duh, just read it back from /dev/random
You will recover the data, you just need to wait long enough.
Just stick to elements lighter than iron and you’ll be fine.
That’s how I started using Linux — big book with CD, I think it was “RedHat Linux Secrets 5.4” or something. 2.0 or 2.2 kernel.
Honestly, it was fantastic. And almost all of it is still relevant today. (Some of the stuff on XFree86 and CHAP/PAP, not so much.)
But it gave a really solid (IMHO) intro to a Linux/*NIX system, a good overview of coreutils, etc. And while LILO has long since been replaced, and AFAIK /sys
didn’t exist at the time, it formed a good foundation.
I’ll refrain from commenting on any init system changes that have taken place since then.
Handy back-of-the-envelope figure: a year is about pi*10^7 seconds.
Also…hate to be the guy to mention leap years but…
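For the curious, using 365.25 days bakes the leap years in, and the rule of thumb lands within about half a percent:

    365.25 days x 86,400 s/day = 31,557,600 s ≈ 3.156 x 10^7 s
    pi x 10^7                              ≈ 3.142 x 10^7 s (about 0.5% low)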
It’s mostly so that I can have SSL handled by nginx (and not per-service), and also for ease of hosting multiple services accessible via subdomains. So every service is its own subdomain.
Additionally, my internal network (as in, my physical LAN) does not have any port forwarding enabled — everything is over WireGuard to my VPS.
My method:
- VPS with a reverse proxy for my public-facing services. This holds the SSL certs and communicates with the home network through a WireGuard link configured on my router.
- Local computer with a reverse proxy for all services. This also has SSL certs and handles the same services as the VPS, so I can have local/LAN speeds. Additionally, it serves as a reverse proxy for all my private services, such as my router/switches/access point config pages, Jellyfin, etc.
No complaints, it mostly just works. I also have my router override DNS entries for my FQDN to resolve locally, so I use the same URL for accessing public services on my LAN.
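Here's a minimal nginx sketch of what one such subdomain entry might look like on the VPS side; the service name, backend port, and WireGuard peer address 10.0.0.2 are hypothetical, and the cert paths assume Let's Encrypt defaults:

    # TLS terminates here; traffic continues to the home network over WireGuard
    server {
        listen 443 ssl;
        server_name service.mydomain.com;

        ssl_certificate     /etc/letsencrypt/live/mydomain.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;

        location / {
            proxy_pass http://10.0.0.2:8080;   # service reachable via the WireGuard tunnel
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }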
The one I’ve heard replaces “brains” with “money.”
As a long-time Debian user, I’d have to throw my vote behind Slackware for the title of most UNIX-y, which is I guess a bit different from most Linux-y.
Debian got me through grad school, but Slack got me through undergrad on a hopelessly underpowered old ThinkPad — Volkerding is a legend, and Slack will always be dear to my heart.
Nah, no hard feelings towards the retail folks, they’re doing what they’re supposed to. It’s just that I wish the corporate incentives were different so it felt more like the staff were trying to help.
My only complaint with Micro Center is that the commission incentives come off as extreme. Like, I’ll be walking around with something in my hand and a rando will come up to me, say “hey there boss, lemme just slap this on that for you,” and proceed to put a sticker on it with their ID. Not a big deal, but palpable, and it makes it harder to just browse.
Yeah, I get that people feel like they have so little control over their lives that they feel the need to generally be passive aggressive assholes to people they deem unworthy, but this is just an overall dick move. Having working public/municipal plumbing is a good thing.
This happened to me when Debian switched from SysV to systemd. I am not the only person who experienced this (e.g., https://bbs.archlinux.org/viewtopic.php?id=147478 ).
This is not to say the systemd behavior is wrong, but it essentially changed the behavior of fstab. Whether this is Debian’s fault, Arch’s fault (per the above link), systemd’s fault, or my fault is a fair question. But this committed that most egregious of sins per our Lord and Savior Torvalds — it broke my userspace.
My favorite was when the behavior of a USB drive in /etc/fstab
went from “hmm it’s not plugged in at boot, I’ll let the user know” to “not plugged in? Abort! Abort! We can’t boot!”
This change over previous init behavior was especially fun on headless machines…
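If you hit this today, the fix is the nofail mount option (optionally with a shorter device timeout), which restores the old carry-on-without-it behavior. A sketch with a hypothetical drive and mount point:

    # /etc/fstab: USB backup drive; nofail lets the boot continue if it's absent
    /dev/sdb1  /mnt/usb  ext4  defaults,nofail,x-systemd.device-timeout=5s  0  2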
> Getting TLS certs will be complicated
I just use Let’s Encrypt with a wildcard cert — same cert covering both the public- and private-facing subdomains. I’m sure this isn’t best practice, but it’s mostly just for me, so I’m not too worried :)
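For anyone setting this up: Let’s Encrypt only issues wildcards via the DNS-01 challenge, so the certbot invocation looks roughly like this (manual flow shown; a DNS plugin for your provider automates the TXT record):

    certbot certonly --manual --preferred-challenges dns \
        -d mydomain.com -d '*.mydomain.com'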
Yeah I don’t expose Jellyfin over the Internet, so it doesn’t matter for me, and wouldn’t work at all over WAN (unless VPN’d to home network).
Also, it’s all reverse proxied, and nothing stops you from having two Jellyfin hostnames, e.g., jf-local.mydomain.com and jf-public.mydomain.com.
Another fun trick you can play is to use a private IP in your public DNS records. This is useful for Jellyfin on Chromecast, for instance: the Chromecast uses 8.8.8.8 for DNS lookups (ignoring your router’s settings), so it wants a fully qualified domain name. But it has no problem reaching local hosts, as long as the address comes from 8.8.8.8’s record.
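Concretely, the public zone just carries an ordinary A record pointing at a LAN address (hypothetical name and IP):

    ; public DNS zone for mydomain.com
    jellyfin.mydomain.com.  300  IN  A  192.168.1.50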
> I have set up local DNS entries (with Pi-hole) to point to my server, but I don’t know if it is possible to get certs for that, since it is not a real domain.
So long as your certs are for your fully qualified domain, there’s no problem. I do this, as do many people: mydomain.com is fully qualified, but on my own network I override the DNS to the local address. Not a problem at all, since the cert is tied to the hostname, not the IP.
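Since Pi-hole is dnsmasq-based, the override can be a one-liner in a custom dnsmasq config (or the equivalent entry in Pi-hole’s Local DNS Records UI); name and address here are hypothetical:

    # /etc/dnsmasq.d/99-local-overrides.conf: answer with the LAN address locally
    address=/jellyfin.mydomain.com/192.168.1.50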
Remote backup server would be my suggestion.
Configure it with a VPN to talk to your home network and set it up at a trusted friend’s or family’s place.
I do this with a Raspberry Pi and an external HDD that takes daily/weekly/monthly snapshots, with a daily rsync. Works nicely for me.
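For the snapshot part, a minimal sketch using rsync’s --link-dest (host and paths hypothetical): unchanged files are hard-linked against the previous snapshot, so each snapshot only costs the delta.

    #!/bin/sh
    # Daily snapshot: pull from the home server over the VPN,
    # hard-linking unchanged files against the most recent snapshot
    today=$(date +%F)
    rsync -a --delete \
        --link-dest=/mnt/backup/latest \
        backup@home-server:/srv/data/ \
        "/mnt/backup/$today/"
    ln -sfn "/mnt/backup/$today" /mnt/backup/latest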