It occurs to me that LLMs have the exact opposite strengths and weaknesses of traditional computing: where traditional computing is good at hard calculation and storage but bad at soft interpretation, LLMs are good at soft interpretation but bad at anything that requires runtime state storage.
It seems like we’re going to plateau and see only incremental improvements in AI, and it will take a novel technique to bridge that gap.
This isn’t a new idea, of course, but as someone with only light AI knowledge, this article helped me understand why that is.
A similar idea has occurred to me. In a lot of ways, 'traditional' rules-based AI has more 'understanding' than an LLM can have, because those rules are fixed by humans to the machine's inputs, and they create a relationship between the software's model and reality, usually in human-understandable terms (a database entry on a specific person with their facts listed, for example). That doesn't really exist in an LLM. It's entirely self-referential. Which is why it's not possible for it to explain its reasoning, or give a (real) source for a fact. There is no fixed anchor, no reference point, to validate the LLM's output against.
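To make the "fixed anchor" point concrete, here's a minimal sketch of what that looks like in a rules-based system: an explicit, human-readable fact store where every answer can point back to the exact record that justified it. (The names, data, and structure here are purely illustrative, not from any real system.)

```python
# Illustrative fact store: each entry is a human-inspectable anchor
# tying the software's model to a claim about reality.
FACTS = {
    "ada_lovelace": {"born": 1815, "field": "mathematics"},
}

def answer(person: str, attribute: str):
    """Look up an attribute and return (value, source_reference)."""
    record = FACTS.get(person)
    if record is None or attribute not in record:
        # The system can explicitly report that it doesn't know,
        # rather than producing a fluent guess.
        return None, None
    # The 'source' names the concrete record used, so the answer
    # can be validated against a fixed reference point.
    return record[attribute], f"FACTS[{person!r}]"

value, source = answer("ada_lovelace", "born")
# value is the stored fact; source identifies the exact entry consulted
```

An LLM has no analogue of that `source` return value: its output is a function of weights, with no discrete record it can hand back for validation, which is the gap the comment above describes.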