If you were running an LLM locally on Android through llama.cpp as a private personal assistant, what model would you use?
Thanks in advance for any recommendations.
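For context, here is a rough sketch of how a small GGUF model might be run with llama.cpp under Termux. The model filename and download URL are placeholders, not recommendations; flags like `-m`, `-p`, and `-t` are standard llama.cpp CLI options, but build steps and binary names can vary between releases, so treat this as an illustration rather than a verified recipe.

```shell
# Inside Termux: fetch and build llama.cpp (CMake build; adjust to taste)
pkg install git cmake clang
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release

# Place a quantized GGUF model in ./models (placeholder filename below)
# e.g. a small 1B-3B parameter model quantized to Q4_K_M for phone RAM limits

# Run an interactive session; -t sets CPU threads (tune to your SoC's cores)
./build/bin/llama-cli \
  -m models/your-model-Q4_K_M.gguf \
  -t 4 \
  -p "You are a helpful personal assistant."
```

On most phones, RAM is the binding constraint: a 4-bit quantized model needs roughly half its parameter count in GB of memory, so smaller models (1B-3B) are usually the practical starting point.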
Just because things are natural doesn't mean you can't apply reason. Anything that will end up killing us in the bigger picture is pretty stupid, especially if it only provides short-term comfort we can easily do without.