What do you mean by “work problems”? Writing regex? SQL? Exclusively software development?
Outside of this, even using it for “summarizing” documents, you are lucky if it doesn’t distort or twist meanings to the point of uselessness — except now you have spent as much or more time checking its work than just doing it yourself. Checking others’ work is much harder than writing it.
Every time I’ve attempted to ask it something I can’t answer myself or through immediate googling it has been completely useless.
I’m unconvinced that the people raving about it aren’t just developers with a poor eye for nuance, who don’t realise how much information they are giving away in their questions. Horses can count, if you give them enough context.
It seems to be generally good at novelty style transfers.
> It's like when you win the lottery and get the support person who's been at the company 15 years and knows everything in and out. That's what these LLM's can be.
This is pure fantasy, extrapolating what you want to see into an arbitrary future where it’s true. More likely it gaslights the customer into thinking problems are their fault until they give up — but that scenario is mildly cheaper for companies, who no longer need to pay humans to do the runaround.
I recently gave a code review to a colleague where the regex they had was obviously unfit for its purpose and I politely informed them of such. They responded "Then why would ChatGPT have told me to use it?"
I trust exactly 0 output from any LLM. The problem with any of this sort of generative AI is that there's nothing that stops it from hallucinating facts and spewing those with a confident tone. Until we can figure out the trust and validation step, none of it is truly helpful.
I'm not a luddite, I just find these tools to be woefully lacking. Anything they can do takes me more time to validate than just doing it myself.