If you were running an LLM locally on Android through llama.cpp for use as a private personal assistant, what model would you use?
Thanks in advance for any recommendations.
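For context, here's roughly how I'd wire it up from Python via the llama-cpp-python bindings once the model is picked. This is just a sketch: the GGUF file name, thread count, and prompts are placeholders, not a specific recommendation.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The model path below is a placeholder; swap in whatever GGUF you recommend.
from llama_cpp import Llama

llm = Llama(
    model_path="./model.Q4_K_M.gguf",  # placeholder quantized model file
    n_ctx=4096,    # context window; phones usually need smaller values than desktops
    n_threads=4,   # tune to the phone's performance cores
)

reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a private personal assistant."},
        {"role": "user", "content": "Summarize my day in one sentence."},
    ],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
```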
I was honestly impressed with the speed and accuracy I was getting with DeepSeek, Llama, and Gemma on my GTX 1660 Ti. It cost $100 used, and responses came back in seconds.