Hacker News

Because that's the same as giving up and saying "we don't understand."

What is mind emerging into? When a video game experience emerges from the combination of processing, display, sound, and controller input, it emerges into a level of organization that a mind can participate in. It emerges into a system of organization emanating downward from the mind experiencing it. It can't just "emerge" into existence on its own. If a game falls in the woods, it's not a game.

If you call the mind an emergent phenomenon but can't describe the context into which it emerges, you've added nothing to our understanding.



I agree with GP. Consciousness isn't so hard to explain if you don't enshroud it with mysticism.

Consciousness is the emergent, graduated phenomenon that arises in an information processing system once that system has achieved sufficient complexity to model itself in relation to its various systemic inputs.

It's not binary, it's a gradient. I have more developed consciousness than my dog, which has more developed consciousness than a rat, and then a fish, then an insect, etc.

Somewhat disturbingly, it also goes the other way: AIs may achieve a more profound conscious experience than humans, and the same goes for aliens. What does it mean for inferior forms of consciousness that have always placed themselves on a pedestal in relation to the rest of the animal kingdom simply because they have the most developed consciousness?


Welp, that does it. Pack it up, Cognitive Scientists: we've solved the hard problem of consciousness right here on HN.

What you're describing is what's been proposed by Giulio Tononi as "Integrated Information Theory" (IIT) [1]. I quite like the framework and the math behind it is beautiful. Unfortunately, it hasn't been supported well empirically.

Re: AI, IIT actually gives a basis for AI not being conscious. Not to mention that all conscious systems we can currently observe are dynamic/continuous, not discrete. The difference there is qualitative: there's no reason to assume that because a dynamic system is conscious, a discrete system approximating it is conscious too.
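As a toy illustration (my own sketch, not from the paper, and with hypothetical names): one ingredient of IIT's Φ is "effective information", roughly the information a mechanism's output carries about a maximum-entropy input. For a deterministic mechanism given as a lookup table with uniform inputs, that reduces to the entropy of the induced output distribution:

```python
import math
from collections import Counter

def effective_info(mapping):
    """Effective information of a deterministic mechanism, assuming a
    uniform (maximum-entropy) distribution over its inputs. Under those
    assumptions it reduces to the entropy of the output distribution."""
    n = len(mapping)
    counts = Counter(mapping.values())
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

# An XOR gate spreads uniform inputs evenly over both outputs: EI = 1 bit.
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(effective_info(xor))  # 1.0

# A constant gate destroys all input information: EI = 0 bits.
const = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
print(effective_info(const))  # 0.0
```

The real Φ computation then searches over all partitions of the system for the minimum-information partition and compares the whole against its parts, which is far more involved; this only shows the information-theoretic flavor of the quantity being measured.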

[1: https://www.nature.com/articles/nrn.2016.44]


It's a nice idea but there's no evidence for it. Not only is there no evidence, but nobody has any idea what such evidence would even look like. We can't even conceptualize an experiment that would support or refute this theory.


That's not true: the authors of IIT [1] propose a number of experiments that would support or refute the underlying theory. To my knowledge, those experiments haven't shown much support. But there are aspects of it that are absolutely empirically falsifiable.

[1: https://www.nature.com/articles/nrn.2016.44]


I don't buy it. It might support the part of the theory that talks about how brains work. But the statement that this is qualia is different, and can't be proven or disproven. Let's say I believe that some person is actually a P-zombie, someone with no conscious experience but who behaves exactly like a normal person. Would these experiments be able to tell me if my belief is correct? I don't see how.


You're welcome to read the paper. Tononi's work is well-known within CogSci and it's not quackery by any stretch.


I skimmed it. There's one mention of "qualia" and I didn't spot anything to connect their theory with the actual experience of consciousness besides them saying they think so.


Better let Nature know their Peer Review committee screwed up then!


> Better let Nature know their Peer Review committee screwed up then!

The article you cite [0] is labelled as "opinion". The standards for peer review of opinion articles in scientific journals are a lot lower than those for ordinary research articles. While precise standards vary from journal to journal, for opinion articles peer reviewers often see their role as simply excluding egregious misinformation and blatant errors, as compared to research articles where their role is to make sure the article is presenting high quality evidence in support of its conclusions. [1]

[0] https://www.nature.com/articles/nrn.2016.44

[1] https://ecologyisnotadirtyword.com/2021/02/24/lets-talk-abou...


That would be because I linked the wrong article. The original is here, different journal:

https://journals.plos.org/ploscompbiol/article?id=10.1371/jo...


I think this article has the problem that it is addressing an interdisciplinary topic with too much focus on only a single discipline, which can be a sign of lacking sufficient diversity in disciplinary background of peer reviewers.

Scott Aaronson’s attempted refutation of IIT (https://scottaaronson.blog/?p=1799) is, I think, better, in that he actually tries to relate IIT to some of the philosophical literature (e.g. his distinction between Chalmers’ “Hard Problem” and the distinct “Pretty Hard Problem”, which he sees IIT as trying to address).

I think it is a pity that Aaronson has never (to my knowledge) published his criticisms of IIT in a more formal setting, and I don’t know if Tononi has responded to them anywhere. I think Aaronson is probably right that IIT fails as a mathematical model of what we intuitively consider conscious: even though it excludes many common electronic devices we wouldn’t consider “conscious”, it is possible to mathematically construct an algorithm, capable of being physically implemented in electronics, which would be conscious per IIT but not per our intuition. And even if Tononi patches his mathematics to solve a particular case of that problem, someone with Aaronson’s skillset may just be able to construct another.

Tononi might then argue that if there is no mathematical model of our intuitions about consciousness lacking in special pleading, that’s a sign our intuitions are flawed. Okay, but then if we accept our intuitions can be flawed in some cases, why not in more cases? One could decide the intuition of consciousness is completely erroneous and become an eliminativist about it. Or, if IIT forces you to accept (contrary to our intuitions) certain (special cases of) simple electronic devices or computer systems as just as conscious as humans, why not violate those intuitions further and insist on that for even more cases?


I'm not a "believer" in IIT. But I think its an incredible idea and taking the time to really understand what Tononi et al are proposing is a mind-expanding experience. It may not explain consciousness but it does make you think about what things could be a part of it. And any attempt to mathematically formalize cognitive science gets a vote of approval from me.

My personal belief is that consciousness requires dynamic continuity. I don't think an algorithmic system is conscious because its "cognition" is discrete and the information isn't integrated across frames. I don't have a "why that works"; it's just a gut belief.


Funny, I was just thinking in the opposite direction. There's "I think therefore I am," but there isn't really "I thought therefore I was." I know I'm conscious, but I only have the memory of being conscious before, which could be false. Consciousness could just be a snapshot, although it certainly doesn't feel like it.


Your brain is still physically continuous and dynamic. ANNs are not; they operate in distinct frames.


I agree this seems like an important difference. It’s just interesting to me that there’s no proof that my consciousness is continuous. I’m going to continue to assume that it is, but it’s unknowable.


You’re modded down for some reason, but I hadn’t thought of consciousness as a model that eventually becomes sophisticated enough to model itself. That could be an explanation for self-awareness.


See Scott Aaronson's rebuttal to IIT: https://scottaaronson.blog/?p=1799


Emergence has no “into”, IMO: https://en.wikipedia.org/wiki/Emergence


All of those examples are emergence "into". The snowflake emerges into the mathematical patterns emanating downward. The termite cathedral emerges into the architectural context of the observer. Without emanating structure, there is no "emergence"—just a proliferation of chaos and error.


Not to butt in, but if the emergent phenomenon only exists in the mind of the observer, and the mind is a material phenomenon, then where in the observer-snowflake system is there anything not fully decomposable to atoms, particle motion, and so on?


The mind is not a material phenomenon, in the same way that a video game is not a computational one.

The emergent experience exists at the level into which it emerges. It's constructed at lower levels of organization, but not decomposable to them in a way that's meaningful without their recomposition—that's what makes the phenomenon emergent. The qualitative experience of a film does not meaningfully break down to the bits in the video stream, the compressed sound waves carrying the dialog, the photons hitting your eyes' rods and cones, or the biochemical signals in your brain.

The mind is not in the brain, but on the brain.


But we have to classify it as an “appears to” rather than an “is,” don’t we? It’s perfectly fine to categorize out emergent phenomena that have practical utility, e.g. it’s useful to see the snowflake over its constituent parts, or the film over the bits, but what underlies the choice to see it as a film rather than an improbably corrupted PNG? When talking about the mind, then, why is it we choose to see the mind at all, and how does this constitute more than a convenient framing device, i.e. how can it explain qualia?


Because our entire perception system functions as a mediation between the teleological affordances an object presents at a given level of organization/analysis and how those affordances relate to our motivational system's current objective and directed action. The emergence only "is" at a certain level of analysis and its emergence at that level is dependent entirely on the perception of an observer.

If a car is hurtling towards you, you don't perceive its handle. But if you're trying to go somewhere, you have to open the door. "Threat", "vehicle", or "handle" aren't just convenient framing devices, but accurate depictions of the object within your perceptual/motivational systems based on the current level of organization and analysis you're participating in.

We choose to see the mind because we are minds. Consciousness is. There is something which it is like to be. Denying our emergent experience of it, or reducing it to a "convenient framing device" tosses out the most fundamental empirical experience we have: to exist.


I completely agree with you, I’m just being more reductive when I say it’s a practical categorization rather than essential reality. Certainly it’s also reasonable to say there’s no essential reality, just subjective levels of analysis, so everything is practical categorization. The issue is that we’ve gotten nowhere in explaining why we seem to exist.

A video game is relatively easy, at least seemingly, to reduce down to its underlying principles. The content dissolves the more closely I look at the game. The issue here isn’t whether the game still exists (it does, in the place I’m no longer looking), the issue is in seeing why the game arises from its component parts, and not something else. Easy-ish for the game, it follows directly from what we know about physics and such, but hard for the mind. Why do neurons together produce pain that exists, rather than pain as a purpose-driven internal signal to help organize the escape from a predator? Emergence doesn't tell us why one or the other, just that whatever it is must emerge from constituent parts.


I don't think it's a question of whether there is an essential reality or not, but rather whether we have access to essential reality. Donald Hoffman makes a strong game-theoretic argument for how natural selection favors effective presentations of reality rather than necessarily accurate ones [1]. Based on your level of interest in this conversation, I'd expect you would really enjoy that book!

The game is certainly easier than the mind—I like it as an example because most of us have a hands-on knowledge of what the qualitative experience of "playing a game" is like. But the game still only emerges because the game developer, computer manufacturer, and player jointly give it an emanating system into which it can emerge. On its own, the raw game data doesn't really mean anything at all—if the bitstream of Diablo IV washed up on the beach, there's nothing in that data encoding the experience of killing Diablo for the first time. One wouldn't even recognize it as something that could be decoded into such an experience [2].

I agree with you that the "why" is tough. Why have a conscious experience? Why have a sense of self at all? Why experience emotions rather than have them be—like you described—a purpose-driven internal signal? And then you get into theories like Internal Family Systems, which has empirical support at least within a prescriptive context if not necessarily a descriptive one [3].

The whole thing is a mess. A great, big, beautiful mess.

[1: https://www.amazon.com/Case-Against-Reality-Evolution-Truth/...]

[2: https://benjamincongdon.me/blog/2021/02/21/Three-Layers-of-I...]


Sounds similar to Plato's Theory of Forms.


Close! Emergence/emanation is particular to Neoplatonism. https://en.wikipedia.org/wiki/Neoplatonism



