AI and Our Digital Pompeii
Briefly

The article critiques the metaphor of large language models (LLMs) as fossils, emphasizing that they do not preserve a chronological history. Instead, LLMs represent a chaotic amalgamation of thoughts and theories from various epochs, lacking any clear sequence. Concepts like Aristotle's philosophy and modern scientific advances coexist without temporal distinction. Certain ideas survive not because they are accurate but because they are deeply embedded in cultural and linguistic frameworks, which makes it difficult to discern historical significance amid the noise these models preserve.
Large language models have preserved the debris fields of memory—and left it to us to salvage meaning from the ruins.
Large language models do not preserve history in sequence. They collapse it.
A prompt might summon Aristotle and CRISPR into the same paragraph, uncritically juxtaposed, as if separated only by the faintest semantic wink.
Some artifacts survive not because they are true, but because they are deeply embedded in the linguistic and cultural patterns from which they arise.
Read at Psychology Today