
> Most production AI applications aren't running 405B models. They're running 7B-70B models that need low latency and high throughput.

Really? At least for LLMs, most actual usage is concentrated on the huge SOTA models, with 1 trillion parameters or more, and LLMs seem to account for the lion's share of AI compute demand.

OpenAI is trying to move as many requests as it can to a "smaller" model (still suspected to be ~200B).

I suspect it to be >1T, just without reasoning.
