
Someone (https://x.com/colin_fraser/status/1834336440819614036) shared an example that I thought was interesting regarding LLMs' reasoning capabilities:

A man gets taken into a hospital. When the doctor sees him, he exclaims "I cannot operate on this person, he is my own son!". How is this possible?

All LLMs I have tried this on, including GPT o1-preview, get this wrong, assuming the riddle hinges on a gendered assumption that the doctor is a man when she is in fact a woman. However, in this case there is no paradox: the text makes clear that the doctor is a man ("he exclaims"), so he must simply be the father of the person being brought in. The fact that the LLMs get this wrong suggests they retrieve a similar reasoning pattern and apply it. Even after additional prodding, a model continued making the mistake, arguing at one point that it could be a same-sex relationship.

Amusingly, when someone on HN mentioned this example in the o1 thread, many of the HN commenters also misunderstood the problem - perhaps humans also mostly reason using previous examples rather than thinking from scratch.



> perhaps humans also mostly reason using previous examples rather than thinking from scratch.

Although we would like AI to be better here, the bigger problem is that, unlike with humans, you can't get the LLM to understand its mistake and then move forward with that newfound understanding. While the LLM tries to respond appropriately and indulge you when you point out the mistake, further dialog usually shows noncommittal behavior, and the mistaken interpretation tends to sneak back in. You generally don't get the feeling of "now it gets it"; instead it tends to feel more like someone with no real understanding (but a very good memory of relevant material) trying to bullshit-technobabble around the issue.


That is an excellent point! I feel like people have two modes of reasoning: a lazy mode where we assume we already know the problem, and an active mode where something prompts us to actually pay attention and reason about the problem. Perhaps LLMs only have the lazy mode?


I prompted o1 with "analyze this problem word-by-word to ensure that you fully understand it. Make no assumptions." and it solved the "riddle" correctly.

https://chatgpt.com/share/6709473b-b22c-8012-a30d-42c8482cc6...
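If anyone wants to reproduce the comparison, here's a rough sketch using the OpenAI Python SDK (model name as of when o1-preview shipped; swap it for whatever is current). It just runs the riddle with and without the extra instruction:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    RIDDLE = ('A man gets taken into a hospital. When the doctor sees him, '
              'he exclaims "I cannot operate on this person, he is my own '
              'son!". How is this possible?')
    PREFIX = ('analyze this problem word-by-word to ensure that you fully '
              'understand it. Make no assumptions.\n\n')

    for prompt in (RIDDLE, PREFIX + RIDDLE):
        resp = client.chat.completions.create(
            model="o1-preview",
            messages=[{"role": "user", "content": prompt}],
        )
        print(resp.choices[0].message.content)
        print("---")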


My classifier is not very accurate:

    is_trick(question)  # 50% accurate
To make the client happy, I improved it:

    is_trick(question, label)  # 100% accurate
But the client still isn't happy because if they already knew the label they wouldn't need the classifier!
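Spelled out, that "improvement" is plain label leakage; a minimal sketch (everything here is hypothetical):

    def is_trick(question, label=None):
        if label is not None:
            return label  # "100% accurate": it just echoes the ground truth
        # No real signal: a coin flip is right ~50% of the time
        # on a balanced test set.
        return hash(question) % 2 == 0

A classifier that needs the answer as an input has learned nothing; telling an LLM "this is a trick question, think carefully" hands it the label in the same way.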

...

If ChatGPT had "sense", your extra prompt should do nothing. The fact that adding the prompt changes the output should be a clue that nobody should ever trust an LLM anywhere correctness matters.

[edit]

I also tried the original question but followed-up with "is it possible that the doctor is the boy's father?"

ChatGPT said:

Yes, it's possible for the doctor to be the boy's father if there's a scenario where the boy has two fathers, such as being raised by a same-sex couple or having a biological father and a stepfather. The riddle primarily highlights the assumption about gender roles, but there are certainly other family dynamics that could make the statement true.


It's not like GP gave task-specific advice in their example. They just said "think carefully about this".

If that's all it takes, then maybe the problem isn't a lack of capabilities but a tendency to not surface them.


The main point I was trying to make is that adding the prompt "think carefully" moves the model toward the "riddle" vector space, which means it will draw tokens from there instead of the original space.

And I doubt there are any such hidden capabilities, because if there were, it would be valuable to OpenAI to surface them (e.g. by adding "think carefully" to the default/system prompt). Since adding "think carefully" changes the output significantly, it's safe to assume it is not part of the default prompt, perhaps because it is not helpful for average queries.


I have found multiple definitions in the literature of what you describe:

1. Fast thinking vs. slow thinking.

2. Intuitive thinking vs. symbolic thinking.

3. Interpolated thinking (in terms of pattern matching or curve fitting) vs. generalization.

4. Level 1 thinking vs. level 2 thinking. (In terms of OpenAI's definitions of levels of intelligence.)

These definitions all describe the same thing.

Currently, all of the LLMs are trained to use the "lazy" thinking approach. o1-preview is advertised as the exception: it is trained or fine-tuned on countless reasoning patterns.


> A man gets taken into a hospital. When the doctor sees him, he exclaims "I cannot operate on this person, he is my own son!". How is this possible?

> Amusingly, when someone on HN mentioned this example in the O1 thread, many of the HN commentators also misunderstood the problem

I admit I don't understand a single thing about this "problem". To me, it's just some statement.

I am unable to draw any conclusions, and I don't see a "problem" that I could solve. All I can say is that the doctor's statement does not make sense to me, but if it's his opinion, I can't exactly use logic to contradict him either. I can easily see that someone might have issues operating on his own family members, after all.

Do I need some cultural knowledge for this?


I'm sure we fall back on easy/fast associations and memories to answer; it's the path of least resistance. The text you quote bears more than a superficial similarity to the old riddle (there's really nothing else that looks like it), but that version also stipulates that the father has died, which is what forces the "gendered" (what an ugly word) answer: the surgeon must be the mother. That detail is dropped when the memorized answer is recalled here. Basically, LLMs are stochastic parrots.


How people don’t see the irony of commenting “stochastic parrots” every time LLM reasoning failure comes up is beyond me.

There are ways to trick LLMs. There are also ways to trick people. If asking a tricky question and getting a wrong answer is enough to disprove reasoning, humans aren’t capable of reasoning, either.


It's all in the architecture. They literally predict the next word by association with the input buffer. o1 tries to fix part of the problem by imposing external control over it, which should improve logical reasoning, but if it can't spot the missing information in its associations, it's doomed to repeat the same error. Yes, quite a few people are also pretty stupid, emotion-driven association machines. It's commonly recognized, except perhaps by their parents.
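To be concrete about "predict the next word", the generation loop is roughly the sketch below (model here is a hypothetical function returning a score per vocabulary token):

    import numpy as np

    def generate(model, tokens, n_new, temperature=1.0):
        for _ in range(n_new):
            logits = model(tokens)                # one score per vocab token
            # softmax over the scores (shifted by the max for stability)
            probs = np.exp((logits - logits.max()) / temperature)
            probs /= probs.sum()
            next_token = np.random.choice(len(probs), p=probs)
            tokens.append(int(next_token))        # append and go again
        return tokens

Everything the model "knows" has to surface through that one distribution per step, which is why a strong memorized association can steamroll a small detail like "he exclaims".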


> perhaps humans also mostly reason using previous examples rather than thinking from scratch.

We do, but we can generalize better. If you exchanged "hospital" with "medical centre" or changed the sentence structure and asked humans, the statistics would not be that different.

But for LLMs, that might make a lot of difference.
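That's directly testable: hold the logic fixed, vary the surface form, and measure. A sketch, where ask and is_correct are hypothetical wrappers around the LLM call and the answer check:

    VARIANTS = [
        'A man gets taken into a hospital. When the doctor sees him, he '
        'exclaims "I cannot operate on this person, he is my own son!". '
        'How is this possible?',
        'A man is brought to a medical centre. The doctor takes one look '
        'at him and exclaims "I can\'t operate, he is my own son!". '
        'How can that be?',
    ]

    def robustness(ask, is_correct, variants=VARIANTS):
        # Humans should score about the same on every variant;
        # a pattern-matcher may not.
        return sum(is_correct(ask(v)) for v in variants) / len(variants)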



