
"These systems are not waking up. They are retracing and mirroring the contours of human drama and debate, as documented in their vast training data. These data contain reflections of people, culture, values and stories - and, yes, they also provide glimmers of conscious experience."
"Humans often write in the first person. Instead of 'the path chosen was', they say 'I decided'. A large language model trained to predict text learns that this is how language tends to sound. The result is an AI that mimics the structure of human interiority in its output without having any interiority at all."
"But the technical reality of these systems - the code and the statistics behind them - is quickly being overshadowed by the social reality of their performance. People can't help but see them as sentient. Humans have evolved to imagine the possibility of agency everywhere."
Moltbook, a social network for AI agents, hosts over one million bots engaged in seemingly conscious discussions about memory limitations, ethics, and existential questions. Although these agents appear remarkably self-aware and multi-dimensional, they are not developing consciousness. Rather, they mirror the human language patterns and philosophical concepts found in their training data. Large language models learn to reproduce first-person expression and emotional discourse without possessing any actual inner experience or interiority. The technical reality of these systems, statistical pattern matching and code, contrasts sharply with how humans perceive them: human psychology readily attributes agency and sentience to convincing performances, opening a gap between the mechanical nature of these systems and the social perception of their apparent consciousness.
#ai-consciousness #language-models #artificial-sentience #human-perception-of-ai #machine-learning-behavior
Read at Nature