
  • No.

    I pirate everything, but am very very reluctant to do so with software or games.

    I only pirate in cases where the company involved is just too gross to support (looking at you, Adobe), or if there’s absolutely no other option.

    But I consider pirated software and games absolutely suspect 100% of the time, because I’m old enough to remember when every keygen was also a keylogger, every crack was also a rootkit, and touching any pirated software was going to give you computer herpes without fail.

    So maybe it’s not that bad anymore, but I mean, do you fully trust in the morals of someone who would spend the time helping you steal someone else’s shit to not add just one more little thing to it for themselves?


  • I don’t disagree, but if it’s a case where the janky file problem ONLY appears in Jellyfin but not Plex, then, well, jank or not, that’s still Jellyfin doing something weird.

    No reason why Jellyfin would decide the French audio track should be played every 3rd episode, or that it should just pick a random subtitle track when Plex isn’t doing it on exactly the same files.



  • One thing I ran into, though it was a while ago, was that disk caching being on would trash performance for writes on removable media for me.

    The issue ended up being that the kernel would keep flushing the cache to disk, and while it was doing that, none of your transfer was actually progressing. So it’d end up doubling (or more) the copy time, because the write cache wasn’t actually helping removable drives.

    It might be worth remounting without any caching, if it’s on, and seeing if that fixes the mess (rough sketch below).

    But, as I said, this was a few years ago, so it may no longer be the case.
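    Something like this is what I mean, as a rough sketch; /mnt/usb is just a placeholder for wherever your removable drive is mounted, and the sysctl values are assumptions to tune, not magic numbers:

        # remount the drive synchronously so writes go straight out instead of piling up in the page cache
        sudo mount -o remount,sync /mnt/usb

        # or, keep caching on but cap how much dirty data the kernel will buffer before it flushes
        sudo sysctl vm.dirty_bytes=67108864
        sudo sysctl vm.dirty_background_bytes=16777216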




  • If you share access to your media with anyone you’d consider even remotely non-technical, do not drop Jellyfin in their laps.

    The clients aren’t nearly as good as Plex’s, they’re not as universally supported as Plex, and the whole thing just has the needs-another-year-or-two-of-polish vibes.

    And before the pitchfork crowd shows up: I’m using Jellyfin exclusively, but I also don’t have people using it who can’t figure out why half the episodes in a TV season pick a different language, or why the subtitles are sometimes English and sometimes German, or why some videos occasionally don’t have proper audio (L and R are swapped), and how to take care of all of those things.

    I’d also agree with your thought that Docker is the right approach: you don’t need Docker Swarm, or Kubernetes, or whatever other nonsense for your personal Plex install, unless you want to learn those technologies.

    Install a base Debian via netinstall, install Docker, install Plex, done.
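    For the Plex step, a minimal docker run along these lines is enough; the paths, timezone, and claim token are placeholders for your own setup:

        # official Plex image; host networking keeps local discovery simple
        docker run -d \
          --name plex \
          --network host \
          -e TZ="America/New_York" \
          -e PLEX_CLAIM="claim-XXXXXXXX" \
          -v /srv/plex/config:/config \
          -v /srv/plex/transcode:/transcode \
          -v /srv/media:/data \
          --restart unless-stopped \
          plexinc/pms-docker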



  • Timely post.

    I was about to make one, because iDrive has decided to jack up their prices, probably because they could.

    $30/TB/year to $50/TB/year is a pretty big jump, but they were also way under the market price, so capitalism gonna capital and they’re “optimizing” or some shit.

    I’d love to be able to push my stuff to some other provider for closer to that $30, but uh, yeah, no freaking clue who, since $60/TB/year seems to be the more average price.

    Alternatively, a storage option that’s not S3-based would also probably be acceptable. Backups are ~300GB, give or take, and the stuff that does need S3-style storage I can stuff in Cloudflare’s free tier.




  • You can find reasonably stable and easy to manage software for everything you listed.

    I know this is horribly unpopular around here, but if you want to go this route, you should look at Nextcloud. It’s a monolithic mess of PHP, but it’s also stable, tested, used and trusted in production, and doesn’t have a history of lighting user data on fire.

    It also doesn’t really change dramatically, because again, it’s used by actual businesses in actual production, so changes are slow (maybe too slow) and methodical.

    The common complaints around performance and the mobile clients are all valid, but if neither of those really causes you issues, then it’s a really easy way to handle cloud document storage, organization, photos, notes, calendars, contacts, etc. It’s essentially (with a little tweaking) the entire gSuite, but self-hosted.

    That said, you still need to babysit it, and babysit your data. Backups are a must, and you’re responsible for doing them and testing them. That last part is actually important: a backup you don’t regularly test restoring from isn’t a backup, it’s just thoughts and prayers sitting somewhere.
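    As a concrete example of what testing a backup looks like (restic here is just one tool picked for illustration, not something Nextcloud requires, and the repo/paths are placeholders):

        # take the backup of the Nextcloud data directory
        # (assumes the repo was already created with `restic init`; the database needs its own dump alongside this)
        restic -r /mnt/backup/nextcloud backup /srv/nextcloud/data

        # verify repository integrity, re-reading a sample of the stored data
        restic -r /mnt/backup/nextcloud check --read-data-subset=10%

        # actually restore the latest snapshot somewhere and spot-check the files
        restic -r /mnt/backup/nextcloud restore latest --target /tmp/restore-test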


  • Then the correct answer is ‘the one you won’t screw up’, honestly.

    I’m a KISS proponent with security for most things, and uh, the more complicated it gets the more likely you are to either screw up unintentionally, or get annoyed at it, and do something dumb on purpose, even though you totally were going to fix it later.

    Pick the one that makes sense, is easy for you to deploy and maintain, and won’t end up being so much of a hindrance that you start making edge-case exceptions, because those are the things that will 100% bite you in the ass later.

    I’ve seen so many people with otherwise sane, if maybe over-complicated, security designs, who do actually know what they’re doing, turn off a firewall, enable port forwarding, set a weak password, or loosen permissions and just end up getting owned, all because they wandered off from their own standards when what they originally implemented turned out to be a pain to deal with in day-to-day use.

    So yeah, figure out your concerns, figure out what you’re willing to tolerate in terms of inconvenience and maintenance, and then make sure you don’t ever deviate from there without stopping and taking a good look at what you’re doing, what could happen if you do it, and coming up with a worst-case scenario first.


  • What’s your concern here?

    Like who are you envisioning trying to hack you, and why?

    Because frankly, properly configured and permissioned (that is, stop running everything as root) container isolation is probably good enough for anything that’s not a nation state (barring some sort of escape bug in your container platform), and if it is a nation state, you’re fucked anyways.
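    By “properly permissioned” I mean roughly this kind of thing; the UID/GID, paths, and image are placeholders, and the exact flags depend on what the service actually needs:

        # run as an unprivileged user and drop capabilities the service doesn't need
        docker run -d \
          --name some-service \
          --user 1000:1000 \
          --cap-drop ALL \
          --security-opt no-new-privileges:true \
          -v /srv/some-service/data:/data \
          some-image:latest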

    But more to your direct question: I actually use DNS scopes and nginx ACLs to separate public from private. I have a *.public and a *.private CNAME, which point to either my external or internal IP, and ACLs in the nginx site configuration to scope where access is allowed.

    You can’t access a *.private host from outside the network, but you can access either from inside it, so (again, barring nginx having an oopsie somewhere) it’s reasonably secure and not accessible, and it leaves a very clear set of logs (which I’m pulling in and parsing for anything suspicious, with automated alerting if I find anything I wouldn’t otherwise expect). Paired with the services’ built-in authentication options, I’m happy enough with that level of security.
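    The nginx side of the ACLs looks roughly like this; the hostname, subnet, and backend port are placeholders for my setup (8096 happens to be Jellyfin’s default), so adjust to whatever you’re actually fronting:

        # something.private.example.com - only reachable from the LAN
        server {
            listen 80;  # TLS config omitted here for brevity
            server_name something.private.example.com;

            # the ACL: allow the internal subnet, deny everyone else
            allow 192.168.1.0/24;
            deny  all;

            location / {
                proxy_pass http://127.0.0.1:8096;
                proxy_set_header Host $host;
            }
        }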


  • Quicksync

    Yeah, it doesn’t sound like you’re transcoding in a way that’ll show any particular benefit from Quicksync over AMF or anything else. My ‘it’s better’ use case would be something like streaming to a cell phone at 3-5mbps, and not something local or just making a file to save on your device.

    DDR4 and no ECC

    That’s what my build is: 128gb of Corsair whatever on a 10850k. I’m sure there’s been some silent corruption somewhere in some video file or whatever, but, honestly, I don’t care about the data enough to even bother with RAID, let alone ECC.

    I will say, though, if you’re going to delve into something like ZFS, you should probably consider ECC since there are a lot more ‘well shits’ that can happen than what I’m doing (mergerfs + snapraid).

    power consumption

    A kill-a-watt (they’re $30 or whatever), plus something like s-tui running on the NAS itself to watch what the CPU is doing in terms of power states and usage. I’ve got an 8-drive i9-10850k under 60W at “idle”, which is not super low power, but it’s low enough that the cost of hardware to improve on it even a little bit (and it’d be a very little bit) has an ROI period longer than I’d expect the hardware to last.


  • If you’re going to be doing transcoding for remote users at lower bitrates, quicksync is still better than AMF, so I’d vote Team Intel.

    If you’re not, then buy whatever meets your power envelope desires and price point.

    For Intel, anything 8th gen or newer should be able to natively do anything you need in Quicksync, so you don’t need to head to Amazon and buy something new, unless you really want to.
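    If you want to sanity-check that Quicksync actually works on whatever you end up with, a quick ffmpeg run along these lines (filenames and bitrates are just placeholders) exercises the same QSV decode and encode hardware that Plex and Jellyfin lean on:

        # hardware-decode an H.264 file with QSV and re-encode it at a remote-streaming-ish bitrate
        ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mkv \
               -c:v h264_qsv -b:v 4M -maxrate 5M -bufsize 8M \
               -c:a copy output.mkv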

    Also, I’d look for hardware that has enough SATA ports for the number of drives you want, so that you can avoid dealing with an HBA card: they inflate the power envelope of the system (if power usage is something you’re concerned with), and even in IT mode I’ve found them to be annoyingly goofy at times, and am MUCH happier just using the integrated SATA.




  • New (7000- and 9000-series) Ryzen CPUs have an iGPU that can transcode via AMF, so the ‘equivalent’ would just be to buy a modern AMD CPU.

    AMF isn’t quite as good as Quicksync, but it’s probably fine for most use cases for most people, though I can notice the image quality loss when doing something like transcoding to 1080p at low(ish) bitrate for remote streaming, so I have a very big bias in favor of NVENC or Quicksync.

    Also, I’m in the more-ram-is-better camp, so buy as much as you want and/or the platform supports.