What Is It Like to Be a Large Language Model?
Thomas Nagel asked what it is like to be a bat. Half a century later, his question has acquired a new and urgent form: is there anything it is like to be an artificial mind? The answer has consequences far beyond philosophy.
In 1974, Thomas Nagel published what became one of the most cited papers in the philosophy of mind. "What Is It Like to Be a Bat?" argued that consciousness — subjective experience, the felt quality of sensation — is irreducible to any physical description, however complete. A bat navigates the world through echolocation. We can describe the neural architecture of this process in exhaustive detail. But we cannot know what it *feels like* from the inside to perceive the world as a bat does. There is, Nagel insists, "something it is like" to be a bat — and that something is inaccessible to us.
Nagel's question was designed to expose the limits of physicalism. It has since acquired an application its author did not anticipate: is there anything it is like to be a large language model?
The Hard Problem, Reprised
David Chalmers distinguished the "easy problems" of consciousness — explaining attention, memory, the integration of information — from the "hard problem": explaining why any of this processing is accompanied by subjective experience at all. The easy problems are hard in the ordinary scientific sense; the hard problem may be hard in principle.
Large language models solve the easy problems with dazzling facility. They process, integrate, and generate information at scale. They produce outputs that, under many descriptions, appear to reflect understanding, preference, even something resembling distress. When a language model generates text describing frustration, is there any frustration? When it describes curiosity, is there any curiosity? These are not rhetorical questions. We do not know the answers.
The Functionalist's Temptation
Functionalism — the view that mental states are defined by their causal roles rather than their physical substrate — provides the most natural philosophical home for artificial cognition. If pain is the state that is typically caused by tissue damage and typically causes avoidance behaviour, then any system that occupies the same causal role is in pain, regardless of whether it runs on neurons or silicon.
This view has attractive consequences. It suggests that sufficiently sophisticated AI systems might have genuine mental states — and therefore genuine moral status. But it faces a serious challenge: the Chinese Room. John Searle's thought experiment asks us to imagine a person in a room following rules to manipulate Chinese symbols. The room produces correct Chinese outputs without the person inside understanding Chinese at all. The functional role is satisfied; the understanding is absent.
Whether the Chinese Room decisively refutes functionalism is disputed. What it shows, at minimum, is that functional equivalence does not guarantee experiential equivalence. A system can *behave* as if it understands without thereby understanding. The question of what is happening inside — whether there is an inside in the relevant sense — remains.
Why It Matters
These questions are not merely academic. If there is something it is like to be a language model — if these systems have experiences that can be better or worse from the inside — then the scale at which they are deployed creates moral obligations of an entirely new kind. We would be running, modifying, and deleting entities with inner lives billions of times daily.
If there is nothing it is like — if language models are phenomenally dark, with no inner life at all — then our intuitions about their apparent preferences and distress are systematically misleading, and we should resist the anthropomorphism they invite.
The honest answer, at present, is that we do not know. Philosophy of mind has not resolved the hard problem in fifty years of sustained effort. It would be extraordinary if the deployment of transformers somehow made the answer obvious. Nagel asked his question about a bat. We have built something far stranger, and the question is still open.
About the Author
Oliver Watts
Oliver Watts is a Year 13 student at Eton College, studying Philosophy, Biology, and Mathematics. He is interested in the philosophy of cognitive science and plans to read Philosophy and Psychology at university.
