https://wiki.archlinux.org/title/Firefox#Hardware_video_acceleration
Firefox is a bit annoying on Wayland; you often have to force-enable hardware acceleration. If you can't figure it out, send me a DM and I'll be glad to help you through Signal/Matrix/Discord.
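Roughly what I mean by force-enabling, as a sketch: the env var and pref names below are real Firefox knobs, but which ones you actually need depends on your Firefox version and GPU driver, so double-check against the wiki page above.

```shell
# Make sure Firefox runs on native Wayland rather than XWayland
# (recent builds default to Wayland, older ones need this):
export MOZ_ENABLE_WAYLAND=1

# First verify VA-API works outside Firefox (vainfo ships in libva-utils);
# if this fails, fix the driver before touching Firefox prefs:
vainfo

# Then in about:config set:
#   media.ffmpeg.vaapi.enabled = true
#   media.hardware-video-decoding.force-enabled = true
# and check about:support under "Media" to confirm decode is hardware-backed.
```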
OciContainers just added rootless mode for podman. I was planning on playing a bit more with it, but I'm quite busy and haven't found the time recently. For the time being I run everything as rootful, since I don't expose stuff directly to the internet.
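For context, my rootful setup looks roughly like this (a minimal sketch; the container name, image, and ports are placeholders, not my actual services):

```nix
# Minimal NixOS oci-containers sketch, rootful with the podman backend.
virtualisation.oci-containers = {
  backend = "podman";
  containers.nginx = {
    image = "docker.io/library/nginx:latest";
    # Bind to localhost only, since nothing is exposed to the internet directly.
    ports = [ "127.0.0.1:8080:80" ];
  };
};
```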
I might respond here, if I don't forget, once I've experimented a bit more.
If you are willing to pay $10 a month, you should get GitHub Copilot; it provides near-unlimited Claude 3.5 usage. RooCode can hook into the GitHub Copilot API and use it for its generations.
I use Qwen Coder and Mistral Small locally too. It works OK, but it's nowhere near GPT/Claude in terms of response quality.
As much as I'd like to praise the open-weight models, nothing comes close to Claude Sonnet in my experience either. I use local models when the info is sensitive, and Claude when the problem requires being somewhat competent.
What setup do you use for coding? I might have a tip for minimizing Claude costs depending on what your setup is.
Their software is pretty nice. That's what I'd recommend to someone who doesn't want to tinker. It's just a shame they don't want to open-source their software, so we have to reinvent the wheel 10 times. If you are willing to tinker a bit, koboldcpp + Open WebUI/LibreChat is a pretty nice combo.
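A sketch of that combo: the model file, ports, and env var below are placeholders/assumptions on my part, so check them against the koboldcpp and Open WebUI docs for your versions.

```shell
# Start koboldcpp with a local GGUF model; it exposes an OpenAI-compatible
# API on the chosen port. Model path and port are placeholders.
python koboldcpp.py --model ./qwen2.5-coder-q4_k_m.gguf --port 5001

# In another terminal, point Open WebUI at that endpoint
# (OPENAI_API_BASE_URL is Open WebUI's env var for OpenAI-compatible backends):
docker run -d -p 3000:8080 \
  -e OPENAI_API_BASE_URL=http://host.docker.internal:5001/v1 \
  ghcr.io/open-webui/open-webui:main
```

Then browse to http://localhost:3000 and pick the model in the UI.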
Qwen Coder or the new Gemma 3.
But at this size, using a privacy-respecting API might be both cheaper and lead to better results.
Can't you tag the NSFW content to filter it out?
Most LLM projects support Vulkan if you have enough VRAM
Well, they are fully closed source, except for the open-source project they are a wrapper around. The open-source part is llama.cpp.
Try Podman Desktop if you want a GUI to manage your containers and Docker Desktop is the source of the crashes. You can run Docker images/containers/Kubernetes workloads through it as well as Podman ones.
NixOS doesn't play well with rootless containers in my experience.
Logseq also gets really slow once you have a lot of notes, unfortunately.
You can add Authentik/Authelia with keys for login and it should be fine.