Robert Saltzman pressed Claude into thinking about itself, with some interesting results. At one point it even declared itself self-aware, full stop.
What looks like introspection, though, is just a performance of it: statistical answers drawn from patterns that match what introspection and self-awareness look like, or what we expect them to look like. This is nothing new, but it raises a more interesting question: how often do humans do the same? What if much of what we think of as consciousness is just a performance of it?
The question is not whether AI is sentient but how the invention of such accurate, coherent, convincing mirrors reflects upon our own experience of sentience, insight, and presence. We assume a great deal about our status as self-aware beings. What if those assumptions are fashioned largely from automatic responses of which we are unaware, not so different from Claude’s unawareness of its algorithms?
Claude is not a mind. But it can demonstrate the shape of one with emptiness at its center. There is no there there.
But what of us?