

It’s the new hyped up version of “no-code” or low-code solutions, but with AI, so you have more flexibility to footgun.
Not any lazier. Script kiddies didn’t write the code themselves, either.
The one I grabbed to test was the ROG Azoth.
I also checked my Iris and Moonlander - both cap out at 6, but I believe I can raise that with QMK, or add a config key via Oryx on the Moonlander to toggle NKRO on.
Per this thread from 2009, the limit was conditional upon using a particular keyboard descriptor documented elsewhere in the spec, but keyboards are not required to use that descriptor.
I tested just now on one of my mechanical keyboards, on macOS, connected via USB-C, using the Online Key Rollover Test, and was able to get 44 keys registered at the same time.
If you want to generate audiobooks using your own / a hosted TTS server, check out one of these options:
If you don’t have a decent GPU, Kokoro is a great option as it’s fast enough to run on CPU and still sounds very good.
If you’re going to use Kokoro, Audiblez (posted by another commenter) looks like it makes that more of an all-in-one option.
If you want something you can use without building the audiobook up front, then of the above options only OpenReader-WebUI supports that. RealtimeTTS is a library that handles it, but I don’t know whether any apps out there integrate it yet.
If you have the audiobook generation handled and just want to be able to follow along with text / switch between text and audio, check out https://storyteller-platform.gitlab.io/storyteller/
You can run a NAS with any Linux distro - your limiting factor is having enough drive storage. You might want to consider something that’s great at using virtual machines (e.g., Proxmox) if you don’t like Docker, but I have almost everything I want running in Docker and haven’t needed to spin up a single virtual machine.
Copied from the post:
You may have seen reports of leaks of older text messages that had previously been sent to Steam customers. We have examined the leak sample and have determined this was NOT a breach of Steam systems.
We’re still digging into the source of the leak, which is compounded by the fact that any SMS messages are unencrypted in transit, and routed through multiple providers on the way to your phone.
The leak consisted of older text messages that included one-time codes that were only valid for 15-minute time frames and the phone numbers they were sent to. The leaked data did not associate the phone numbers with a Steam account, password information, payment information or other personal data. Old text messages cannot be used to breach the security of your Steam account, and whenever a code is used to change your Steam email or password using SMS, you will receive a confirmation via email and/or Steam secure messages.
You do not need to change your passwords or phone numbers as a result of this event. It is a good reminder to treat any account security messages that you have not explicitly requested as suspicious. We recommend regularly checking your Steam account security at any time at
We also recommend setting up the Steam Mobile Authenticator if you haven’t already, as it gives us the best way to send secure messages about your account and your account’s safety.
You don’t have to finish the file to share it, though - that’s a core part of BitTorrent. Each peer shares the pieces of the files it has already downloaded. So Meta didn’t need to finish downloading and share the whole file to have technically shared some parts of copyrighted works - unless they just had uploading completely disabled.
The argument was not that it didn’t matter if a user didn’t download the entirety of a work from Meta, but that it didn’t matter whether a user downloaded anything from Meta, regardless of whether Meta was a peer or seed at the time.
Theoretically, Meta could have disabled uploading but not blocked their client from signaling that it could upload. According to that argument, this would still count as reproducing the works, under the logic that signaling availability is the same as “making it available.”
but they still “reproduced” those works by vectorizing them into an LLM. If Gemini can reproduce a copyrighted work “from memory” then that still counts.
That’s irrelevant to the plaintiff’s argument. And beyond that, it would need to be proven on its own merits. This argument about torrenting wouldn’t be relevant if Llama were obviously a derivative creation that wasn’t subject to fair use protections.
It’s also irrelevant if Gemini can reproduce a work, as Meta did not create Gemini.
Does any Llama model reproduce the entirety of The Bedwetter by Sarah Silverman if you provide the first paragraph? Does it even get the first chapter? I highly doubt it.
By the same logic, almost any computer on the internet is guilty of copyright infringement. Proxy servers, VPNs, basically any compute that routed those packets temporarily had (or still has for caches, logs, etc) copies of that protected data.
There have been lawsuits against both ISPs and VPNs in recent years for being complicit in copyright infringement, but that’s a bit different. Generally speaking, there are laws, like the DMCA, that specifically limit the liability of network providers and network services, so long as they respect things like takedown notices.
I’d just like to interject for a moment. What you’re referring to as Alpine Linux is in fact Pine’s fork, Pine Linux, or as I’ve taken to calling it, Pine’s Alpine plus Pine Linux. Pine Linux is an operating system unto itself, and Pine’s Alpine fork is another free component of a fully functioning Pine Linux system.
Wow, there isn’t a single solution in here with the obvious answer?
You’ll need a domain name. It doesn’t need to be paid - you can use DuckDNS. Note that whoever hosts your DNS needs to support dynamic DNS. I use Cloudflare for this for free (not their other services) even though I bought my domains from Namecheap.
Then, you can either set up Let’s Encrypt on the device and have it generate certs in a location Jellyfin knows about (not sure what this entails exactly, as I don’t use this approach) or you can do what I do:
On your router, forward port 443 to the port your Pi serves HTTPS on (which, for simplicity’s sake, should also be port 443). You’ll likely also need to forward port 80 for Let’s Encrypt’s HTTP verification.
If you want to use Jellyfin while on your network and your router doesn’t support NAT loopback, then you can use the server’s IP address and expose Jellyfin’s HTTP port (8096 by default) - just make sure not to forward that port from the router. You’ll have unencrypted transfers on your LAN if you do this, though.
Make sure you have secure passwords in Jellyfin. Note that you’re exposed if a vulnerability is found in Jellyfin or Traefik, so keep your software updated.
If you use Docker, I can share some config info with you on how to set this all up with Traefik, Jellyfin, and a dynamic DNS service via docker-compose.
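The rough shape of it looks something like this - a sketch, not my exact config; the domain, email, and media path are placeholders, and the dynamic DNS updater container is left out:

```bash
# Rough sketch (untested as written) - domain, email, and media path are placeholders.
cat > docker-compose.yml <<'EOF'
services:
  traefik:
    image: traefik:v2.11
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=you@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.httpchallenge=true
      - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt

  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - ./jellyfin-config:/config
      - /path/to/media:/media
    labels:
      - traefik.enable=true
      - traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)
      - traefik.http.routers.jellyfin.entrypoints=websecure
      - traefik.http.routers.jellyfin.tls.certresolver=le
      - traefik.http.services.jellyfin.loadbalancer.server.port=8096
EOF

docker compose up -d
```

Traefik handles the Let’s Encrypt certs and routes the hostname to Jellyfin’s internal port, so only 80/443 ever get forwarded on the router.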
Why should we know this?
Not watching that video, for a few reasons: ten seconds in they hadn’t said anything of substance; their first claim was incorrect (Amazon does not prohibit the use of gen AI in books, nor does it require that its use be disclosed to the public, no matter how much you might wish it did); and there was nothing of substance in the description, which in cases like this generally means the video will be largely devoid of substance, too.
What books is the Math Sorcerer selling? Are they the ones on Amazon linked from their page? Are they selling all of those or just promoting most of them?
Why do we think they were generated with AI?
When you say “generated with AI,” what do you mean?
And what’s the result? Are the books misleading in some way? That’s the most legitimate concern I can think of (I’m sure the people screaming that AI isn’t fair use would disagree, but if that’s the concern, settle it in court).
Look up “LLM quantization.” The idea is that each parameter is a number; by default they use 16 bits of precision, but if you store them at smaller sizes, you use less space and have less precision while keeping the same parameters. There’s not much quality loss going from 16 bits to 8, but it gets more noticeable as you go lower. (That said, there are ternary models being trained from scratch that use 1.58 bits per parameter and are allegedly just as good as fp16 models of the same parameter count.)
If you’re using a 4-bit quantization, then you need roughly half the parameter count (in billions) as gigabytes of VRAM. Q4_K_M is better than plain Q4, but also a bit larger. Ollama generally defaults to Q4_K_M. If you can handle a higher quantization, Q6_K is generally best; if you can’t quite fit that, Q5_K_M is generally better than any other option, followed by Q5_K_S.
For example, Llama 3.3 70B, which has 70.6 billion parameters, comes out to roughly 42-43 GB at Q4_K_M (and roughly 75 GB at Q8_0).
This is why I run a lot of Q4_K_M 70B models on two 3090s.
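The back-of-the-envelope math for those sizes (the bits-per-weight figures are approximate, and this ignores the extra VRAM needed for context):

```bash
# GB of weights ≈ parameters (billions) × bits per weight ÷ 8
echo '70.6 * 16  / 8' | bc -l   # fp16:   ~141 GB
echo '70.6 * 8.5 / 8' | bc -l   # Q8_0:   ~75 GB
echo '70.6 * 6.6 / 8' | bc -l   # Q6_K:   ~58 GB
echo '70.6 * 4.8 / 8' | bc -l   # Q4_K_M: ~42 GB - fits across two 24 GB 3090s with room for context
```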
Generally speaking, there’s no perceptible quality drop going from 8-bit quantization to Q6_K (though I’ve heard this is less true with MoE models). Below Q6 there’s a bit of a drop going to Q5 and then to Q4, but the model is still decent. Below 4-bit quantizations, you can generally get better results from a smaller-parameter model at a higher quantization.
TheBloke on Huggingface has a lot of GGUF quantization repos, and most, if not all of them, have a blurb about the different quantization types and which are recommended. When Ollama.com doesn’t have a model I want, I’m generally able to find one there.
I recommend a used 3090, as that has 24 GB of VRAM and generally can be found for $800ish or less (at least when I last checked, in February). It’s much cheaper than a 4090 and while admittedly more expensive than the inexpensive 24GB Nvidia Tesla card (the P40?) it also has much better performance and CUDA support.
I have dual 3090s so my performance won’t translate directly to what a single GPU would get, but it’s pretty easy to find stats on 3090 performance.
The above post says it has support for Ollama, so I don’t think this is the case… but the instructions in the Readme do make it seem like it’s dependent on OpenAI.
16 GB of RAM, though? Is it even optimized for the Ryzen 9950X3D?
And a 4 TB SSD - not even necessarily NVMe?
Doesn’t seem high powered to me.
Are you saying that NAT isn’t effectively a firewall or that a NAT firewall isn’t effectively a firewall?
Is there a way to use symlinks instead? I’d think it would be possible, even with Docker - it would just require the torrent directory to be mounted read-only, at the same path, in every Docker container that has symlinks pointing into it.
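Something like this is what I’m picturing (the paths and the Jellyfin container are just examples):

```bash
# Example layout: the seeding dir stays untouched; the library is symlinks into it.
ln -s /data/torrents/Some.Album.FLAC /data/library/Music/Some.Album

# Mount the torrent dir read-only at the same path in any container that follows those
# symlinks, so the link targets resolve the same inside and outside Docker.
docker run -d --name jellyfin \
  -v /data/torrents:/data/torrents:ro \
  -v /data/library:/data/library \
  jellyfin/jellyfin
```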
If they do the form correctly, then it’s just an extra step for you to confirm. One flow I’ve seen that would accomplish this is:
1. You enter your address as you normally would.
2. The form shows the closest match from the address verification service.
3. You either accept the suggested version or keep the address exactly as you entered it.
That said, if you’re regularly seeing the wrong address pop up it may be worth submitting a request to get your address added to the database they use. That process will differ depending on your location and the address verification service(s) used by the sites that are causing issues. If you’re in the US, a first step is to confirm that the USPS database has your address listed correctly, as their database is used by some downstream address verification services like “Melissa.” I believe that requires a visit to your local post office, but you may be able to fix it by calling your region’s USPS Address Management System office.
I think the best way to handle this would be to just encode everything and upload all files. If I wanted some amount of history, I’d use some file system with automatic snapshots, like ZFS.
If I wanted to do what you’ve outlined, I would probably use rclone with filtering for the extension types or something along those lines.
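For example, something like this (a rough sketch - the paths and remote name are placeholders):

```bash
# Dry run first; drop --dry-run once the file list looks right.
rclone sync ~/music remote:music \
  --include "*.mp3" \
  --include "*.ogg" \
  --dry-run
```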
If I wanted to do this with Git specifically, though, this is what I would try first:
First, add lossless extensions (*.flac, *.wav) to my repo’s .gitignore.
Second, schedule a job on my local machine that:
- Finds any lossless file that doesn’t already have a lossy counterpart (.mp3, .ogg - possibly also with a confirmation that the codec is up to my standards, via a call to ffprobe, avprobe, mediainfo, exiftool, or something similar) and encodes it to your preferred lossy format. (There’s a rough sketch of this job at the end of this comment.)
- Uses git status --porcelain to check whether there have been any changes.
- If so, runs git add --all && git commit --message "Automatic commit" && git push. With a bit more work, the commit message could describe the change instead, e.g., Added album: "Satin Panthers - EP" by Hudson Mohawke or Removed album: "Brat" by Charli XCX; Added album "Brat and it's the same but there's three more songs so it's not" by Charli XCX.
Third, schedule a job on my server that runs git pull at regular intervals.
One issue with this approach is that if you delete a file (as opposed to moving it), the space is not recovered on your local machine or your server. If space on your server is a concern, you could work around that by running something like the answer here (adjusting the depth to an appropriate amount for your use case).
Another potential issue is that what I described above involves having an intermediary Git repo to push to and pull from, e.g., one hosted on a Git forge like GitHub, Codeberg, etc. This could result in copyright complaints or something along those lines, though.
Alternatively, you could use your server as the git server (or check out forgejo if you want a Git forge as well), but then you can’t use the above trick to prune file history and save space from deleted files (on the server, at least - you could on your local, I think). If you then check out your working copy in a way such that Git can use hard links, you should at least be able to avoid needing to store two copies on your server.
The other thing to check out, if you take this approach, is Git LFS.
EDIT: Actually, I take that back - you probably don’t want to use Git LFS.
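And since I referenced it above, here’s a rough sketch of what that local encode-and-push job could look like (paths, codec, and quality settings are just examples; it assumes ffmpeg is installed and the library is a normal Git checkout):

```bash
#!/usr/bin/env bash
# Rough sketch of the "Second" job above - adjust paths and codec to taste.
set -euo pipefail

LIBRARY="$HOME/music"   # wherever your repo lives
cd "$LIBRARY"

# Encode any lossless file that doesn't have a lossy counterpart yet.
find . -type f \( -name '*.flac' -o -name '*.wav' \) -print0 |
  while IFS= read -r -d '' src; do
    dst="${src%.*}.ogg"
    if [ ! -f "$dst" ]; then
      ffmpeg -loglevel error -i "$src" -c:a libvorbis -q:a 6 "$dst"
    fi
  done

# Commit and push only if something actually changed.
if [ -n "$(git status --porcelain)" ]; then
  git add --all
  git commit --message "Automatic commit"
  git push
fi
```

Run it from cron or a systemd timer at whatever interval makes sense; the server-side half is just a git pull on its own schedule.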