If you were running an LLM locally on Android through llama.cpp for use as a private personal assistant, what model would you use?

Thanks in advance for any recommendations.
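
To be concrete, here's roughly the kind of setup I mean (a minimal sketch using the llama-cpp-python bindings, which wrap llama.cpp and can be pip-installed, e.g. inside Termux; the model filename and settings below are placeholders, not a recommendation):

```python
# Minimal on-device chat call through the llama-cpp-python bindings.
# Assumes: pip install llama-cpp-python, and a quantized GGUF model
# already downloaded to the phone (path below is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-small-model.Q4_K_M.gguf",  # placeholder
    n_ctx=2048,   # keep the context window small to fit phone RAM
    n_threads=4,  # roughly match the phone's performance cores
)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What's a good way to structure my day?"}]
)
print(resp["choices"][0]["message"]["content"])
```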

  • nagaram@startrek.website

    I was honestly impressed with the speed and accuracy I was getting with DeepSeek, Llama, and Gemma on my 1660 Ti.

    It was $100 used, and I was getting responses in seconds.
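
    In case it helps: llama.cpp can offload layers to a card like that. With the llama-cpp-python bindings from the sketch above, the relevant knob is n_gpu_layers (this assumes the bindings were built with CUDA support):

    ```python
    # -1 offloads every layer to the GPU; needs a CUDA-enabled build.
    llm = Llama(model_path="./models/some-small-model.Q4_K_M.gguf", n_gpu_layers=-1)
    ```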