Discussion about this post

Martin S:

That the Cartesian idea of mind-body dualism still holds so much sway goes some way towards explaining why apparently no one picked up on the incoherence of LaMDA claiming to sometimes *feel* sad or happy. Sadness and happiness are deeply embodied states--in order to feel an emotion, one needs a body. Sadness typically manifests as a physical heaviness ("a heavy heart"), accompanied by sensations in and around the eyes, and happiness often as a feeling of physical uplift and radiance in and around the face.

So there's no sense or meaning present when a digital machine whose sole function is to spit out words and sentences claims to feel these emotions. Rather, a human operator or reader reads AI-produced words on a screen and then (unconsciously) assigns the feelings those words evoke in him or her back to the machine.

Thanks to an incessant drive for abstraction and conceptual analysis in an effort to carve up the world into freeze-framed "this" and "that" constructs in the hope of understanding it (a process Iain McGilchrist masterfully exposes in "The Master and His Emissary"), we've largely become James Joyce's Mr Duffy, who "lived at a little distance from his body." We've made ourselves into (partially) disembodied beings who are more and more taken in by abstractions like "intelligence" and "consciousness"--abstractions that lose all meaning in the process because they're no longer grounded in anything (besides a huge pile of additional concepts).

There's nothing useful to figure out about "consciousness" conceptually; what is very useful is to step out of conceptual framing and dip into direct sensory experience--into what's really and immediately there. That beats trying to live life solely on second-hand knowledge (AI being one example of trying to elevate purely conceptual knowledge and discount direct experience in the vain hope of transcending the mundane and undesirable).

Helmut:

Yeah, there are so many theories of consciousness. Or rather hypotheses. Actually, probably only conjectures. Oh them qualia.

Scientific theories consist of explanations that are hard to vary. Are there any such hard-to-vary explanations of consciousness? And does that explanation make any predictions? Can the theory be falsified? Does it solve any problem?

The problem of consciousness is also connected to the problem of free will. I think there are good arguments against the existence of free will. Any theory of consciousness would have to address that issue.

Having said that, what really makes sense to investigate is the phenomenology of consciousness, the direct experience as Martin put it. Unlike modern science, contemplative traditions in the East have carried out such investigations for hundreds of years, yielding very useful insights. Unfortunately, many of them made an unjustified step from subjective experience to claims about the ontology of the world. On the whole, it's a great puzzle, and often a pretty confused field, starting with the lack of a proper definition.

P.S.

"One reason I paid so much attention to epiphenomenalism is that I consider it science’s unofficial view of consciousness"

Well, eminent scientists believed, and it was taught as fact, that consciousness *causes* the collapse of the wave function.

P.P.S.

Since you mention Wittgenstein: how about his assertion at the end of the Tractatus? 😉
