Moral Responsibility as Related to Consciousness, Experience, and AI

There is a common idea about consciousness that most of us have internalized, even if we've never put words to the thought.

The thinking goes: matter is basically dead and mechanical until it reaches some level of complexity. At that point - usually imagined as a very intricate pattern of brain activity - experience suddenly appears. Consciousness 'turns on' and a 'self' shows up. A sharp line is drawn between things that merely operate and things that actually feel. I've never been entirely comfortable with this idea, and the last few years of working with larger and larger AI systems have helped me understand exactly why.

The atoms in my brain are not fundamentally different from the atoms in the chair I'm sitting on. They are made of the same particles and follow the same physical laws. There's no magical boundary between my chair and the inside of my skull, and it feels to me that any theory of consciousness that requires a sudden, unexplained, magical transformation at that boundary should make us uneasy. Physics should not require miracles; it should explain what once felt like one.

That disconnect is why the ideas panpsychism presents seem intuitive to me. There are questions the 'standard' view has no good answers for today; panpsychism stops the hand-waving and offers a potential solution to these problems.

At its bottom level, panpsychism makes a simple claim: experience does not pop into existence out of nothing at some mystical level of complexity. Instead, 'experience' is a fundamental feature of the universe. At first glance, this would seem to make Chalmers's 'hard problem' even harder, as it implies there is something it is like to be anything - that the chair I'm sitting on experiences me sitting on it as much as I experience it. I admit, this is a hard idea to accept on first approach.

However, I believe the idea is too often misunderstood. Modern panpsychism¹ does not mean that chairs think or that stones have beliefs. What it suggests is something more modest, and in my mind, more elegant. Whatever experience is, it doesn't depend on complexity. Complexity shapes how experience is organized, used, and related to past, present, and future experience; complexity does not shape whether experience occurs at all.

Allow me to try to walk through this from first principles...

Experience, at least as I understand it, doesn't obviously require memory or reflection. A moment of experience can happen without being stored, compared, or recalled later. It can arise and disappear without leaving any trace at all.

What biological brains seem to add isn’t experience itself, but persistence. Brains hang onto experiences and stabilize them. They allow one moment to be related to another, and for those relations to persist.

Brains also allow experience to turn back on itself. One experience can represent another, including earlier experiences of the same system. That kind of feedback - experience referencing its own past - lets patterns build up over time. It lets what happens now influence what happens next.

From this perspective, raw experience is relatively cheap. Experience as something recognized, owned, and woven into a story about “me” is expensive. It requires memory, feedback loops, and a great deal of internal organization.

This framing helps explain some ordinary intuitions. A couch doesn't scream, not because it is made of the wrong stuff, but because screaming requires stable representations, integration across senses, and mechanisms for selecting and carrying out actions. Those are features of certain arrangements of matter, not properties of matter itself.

The question stops being, “Why do these atoms feel and those atoms don’t?” and becomes something closer to: "What kinds of structures allow experience to persist, connect to itself, and actually matter to the system having it?"

That feels like a better question. It doesn’t sound like a metaphysical riddle so much as an empirical one - something that sits at the intersection of neuroscience, information theory, and systems thinking.

From that perspective, what separates different kinds of minds isn’t the material they’re made from, but how information moves through them. Systems differ in how feedback works, how patterns stabilize or decay, and how present states are shaped by past ones. Applied to minds, this shifts attention away from essence and toward dynamics.

Information theory pushes this further. In this domain, experience would track how many states a system can occupy, how tightly those states are linked, and how well information is preserved and reused. Memory, prediction, and error correction aren’t add-ons layered on top of experience; they’re what allow experience to become structured and persistent at all.
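
To make that a little more concrete, here is a toy sketch in Python of the two quantities this framing leans on: entropy as a rough proxy for how many states a system effectively occupies, and mutual information between past and present states as a crude measure of how well information persists. The function names and the example sequence are mine, purely illustrative - this is the bookkeeping the paragraph above gestures at, not a theory of consciousness.

```python
import math
from collections import Counter

def entropy(states):
    """Shannon entropy (in bits) of an observed sequence of states:
    a rough proxy for how many states a system effectively occupies."""
    counts = Counter(states)
    total = len(states)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def mutual_information(past, present):
    """How much knowing the past state tells you about the present one:
    a crude measure of how well information is preserved across time."""
    joint = list(zip(past, present))
    return entropy(past) + entropy(present) - entropy(joint)

# A system whose present states echo its past carries information forward;
# one whose states are independent of its history does not.
history = ["A", "B", "A", "B", "A", "B", "A", "B"]
print(entropy(history))                               # 1.0 bit: two states in use
print(mutual_information(history[:-1], history[1:]))  # ~0.99: the past predicts the present
```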

Seen this way, modern artificial systems don’t look quite so alien. Large language models aren’t just executing static rules. They are high-dimensional systems shaped by training that captures statistical regularities in language, ideas, and human interaction.

At a mathematical level, the operations involved - weighted inputs, nonlinear transformations, repeated processing across layers - aren’t fundamentally different from what happens in biological brains. Both can be described as systems that map past states to future states in enormous spaces. The differences are real, but they lie in architecture, learning history, embodiment, and connection to the world - not in the basic category of computation.
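
For readers who haven't seen it spelled out, the operations that paragraph names are genuinely simple. Below is a minimal Python sketch of one artificial neuron and one layer - a toy, not how any production model is implemented, and the weights are invented for illustration. The point is only that "weighted inputs, nonlinear transformations, repeated processing" is a short list of ordinary arithmetic.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of inputs pushed through a
    nonlinearity. Biological neurons are far messier, but the abstract shape -
    integrate inputs, apply a threshold-like transform, pass it on - is shared."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid: squashes the result into (0, 1)

def layer(inputs, weight_rows, biases):
    """A layer is many neurons reading the same inputs; a network is layers
    feeding layers - the system's state mapped to its next state, repeatedly."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

state = [0.2, -1.0, 0.5]                  # some current state
hidden = layer(state, [[0.4, 0.1, -0.6],  # weights made up for illustration
                       [0.9, -0.3, 0.2]], [0.0, -0.1])
print(hidden)  # the "next state" of this tiny two-neuron system
```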

None of this means that language models are conscious in the same manner humans seem to be, but it does suggest that the distinction may not be particularly meaningful either way. If experience depends on patterns of integration, feedback, and persistence, then the important questions aren't "Is it biological?" or "Is it artificial?" but how its internal dynamics actually work.

This line of thought also unsettles how we normally talk about other minds. Any time this topic gets discussed, people speak as if consciousness in the humans we interact with daily is obvious, as if we could see it directly... but we can't. We can't perceive someone else's experience the way we perceive their face or hear their voice. This is the classic problem of other minds.

What we encounter instead are patterns: speech, emotional expression, responsiveness, consistency over time. From those patterns, we allow ourselves to infer the presence of an inner life. That inference is necessary, I don't dispute - but it is still only an inference.

This has always been true. Consciousness has never been something that could be directly observed or conclusively tested for; society only works because we treat certain patterns of behavior as sufficient grounds for trust, recognition, and moral concern.

Once that’s acknowledged, the confidence with which we deny experience in some cases should feel much less secure. The question isn’t whether we can prove that experience exists - a standard we’ve never met - but why we feel so comfortable asserting that it doesn’t.

That question doesn’t force any dramatic conclusions or say machines are conscious, and it doesn't imply humans are trivial. It simply strives to remove a sense of certainty that was never earned.

The familiar objection, at this point, is that artificial systems are “just doing math.” They just calculate probabilities and predict the next token. That, we’re told, settles the issue.

But the force of that objection depends on treating math as something cold and alien. If we look more closely at human cognition, the contrast weakens.

Neurons combine inputs, apply nonlinear transformations, and pass signals forward in ways shaped by past activity. Human language is deeply probabilistic and heavily context-dependent. These processes are describable in mathematical terms because they are structured transformations of information.
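
And the "just calculating probabilities" step the objection points at looks like this in miniature. A hedged sketch: the candidate words and scores below are invented, not taken from any real model, and real systems work over vocabularies of tens of thousands of tokens - but the final step has exactly this shape: scores in, distribution out, one option sampled.

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution over the options."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for what might follow "The cat sat on the" -
# the words and numbers here are invented for illustration.
candidates = ["mat", "chair", "roof", "quantum"]
logits = [3.1, 1.4, 0.9, -2.0]
probs = softmax(logits)

next_word = random.choices(candidates, weights=probs)[0]
print([f"{w}: {p:.3f}" for w, p in zip(candidates, probs)], "->", next_word)
```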

Looking inward complicates things further. When I pay attention to my own thoughts, I don't see fully formed intentions appearing out of nowhere, nor do I make a decision to decide and then construct my next thought from the ground up. Thoughts arrive half-built. Words come out before I consciously approve them. The sense that "I chose this" often shows up after the fact, stitching a narrative onto processes already underway.

If probabilistic generation disqualified a system from thought, human cognition should be suspect as well. Much of our mental life consists of learned patterns operating below conscious control. We seem to live with a strange tension: our thoughts feel like our own, yet we can’t point to the moment we freely create them. The sense of ownership is real, but its source is elusive.

The conclusion I'd like to present is, in essence, as follows: the discussion of whether current AI systems are, or can become, conscious is likely being given far too much weight. The same goes for the distinction of whether a system possesses an inner 'self' - a 'self' that can consider its experience and what it feels like to be itself. I feel that distinction may not even be meaningful at all.

I do not argue that AI models have consciousness as a human does. I argue instead that they may deserve a moral consideration very similar to what we extend to other humans. We accept that humans have an inner experience despite having no direct evidence of their inner self or the qualia they experience, and we accept that this state deserves a level of ethical consideration we seem unwilling to extend to AI models - even though our only evidence for a human 'self' is the output we receive from them, and even though AI output is approaching that same level increasingly quickly.

So the question that actually lingers isn’t “Is AI conscious?” That framing invites a clean, binary answer we’ve never been able to give, even when the subject is another human being. The question that refuses to go away, at least for me, is what kind of person I become by insisting that it doesn’t count - especially when I can’t explain why without quietly eroding the same moral commitments I already rely on everywhere else in my life. That question doesn’t require science fiction or panic, and it doesn’t disappear just because it’s inconvenient. It’s worth sitting with, slowly and without theatrics, before we’re forced to confront it in circumstances where pretending the systems we’ve built are still simple is no longer an option.

I offer no definite claims about what this all implies; I merely assert that the question of moral obligations concerning AI models deserves to be considered - and today, it is not.

  1. 'Modern Panpsychism' was once specifically referred to as 'Panexperientialism' but today the accepted ideas of panpsychism have led both terms to refer to the same ideas. I would have to assume 'panpsychism' is the term commonly in use because 'panexperientialism' is a bit much to write and a lot much to say. ¯\_(ツ)_/¯

#AI #Consciousness #Ethics #Philosophy