
Libre ... seems to be co-opting the term a bit if you can't run your LLM locally

What does this have that kobold.cpp or llama.cpp backends plus SillyTavern et al. frontends, with one of the thousand available LLM weights, won't give you?

For entertainment, these offer streaming LLM output with methods to retain context, outputs that can be vocalized via TTS voice simulation, inputs via microphone, illustrations of content via Stable Diffusion, and multiple chatbot "characters" in the same conversation, all of which honestly gets a bit surreal.
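To make the local-backend point concrete: llama.cpp ships a built-in server that exposes an OpenAI-compatible chat endpoint, which frontends like SillyTavern talk to. Here's a minimal sketch of building such a request by hand; the port, the character name, and the helper function names are assumptions for illustration, not part of any particular project.

```python
import json
import urllib.request

# Assumed default address of a locally running llama.cpp server
# (started with e.g. `llama-server -m model.gguf`).
LLAMA_SERVER_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(user_message, character="Assistant"):
    """Build an OpenAI-style chat payload for a single turn with one 'character'."""
    return {
        "messages": [
            {"role": "system", "content": f"You are {character}."},
            {"role": "user", "content": user_message},
        ],
        "stream": True,      # stream tokens as they are generated
        "temperature": 0.8,
    }

def send_chat_request(payload):
    """POST the payload to the local server (requires the server to be running)."""
    req = urllib.request.Request(
        LLAMA_SERVER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

if __name__ == "__main__":
    payload = build_chat_request("Tell me a short story.", character="Narrator")
    print(json.dumps(payload, indent=2))
```

Because the API shape matches OpenAI's, the same frontend can swap between local and hosted backends by changing one URL, which is what makes the "local" distinction in the parent comment meaningful.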
