
"Boring people don't listen. They tell their own stories, over and over, and never make any attempt to engage in our story, our lives. And so we avoid them. AI, on the other hand, is a superb listener. So much so that people, particularly teens, are turning to chatbots for companionship. But in doing so, do we run the risk of all becoming the same kind of person, wanting the same kinds of friendships, with the same kinds of interactions? In a word, boring."
"These models are based on predictive algorithms. They vacuum up enormous amounts of data (e.g., things actual people have written), run them through a black box to determine the likelihood that one word will follow another, and use those probabilities to churn out coherent speech. You see this in your text completion on your phone, and in general, it works pretty well."
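The prediction idea described above can be illustrated with a toy sketch. This is not the article's method or a real LLM, just a minimal bigram model: it counts how often each word follows another in a tiny made-up corpus, then returns the most likely continuation, the same basic idea behind phone text completion.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for "enormous amounts of data".
corpus = "the cat sat on the mat the cat ran on the grass".split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the highest-probability next word after `word`, or None."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "cat" — it follows "the" twice; "mat" and "grass" once each
```

Real models use vastly larger contexts and learned parameters rather than raw counts, but the output is chosen the same way: by the probability that one word follows another.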
"Always choosing the high probability option narrows our expressive range. Language is as valuable and amazing as it is in part because of its capacity for nuance. One of us had a French colleague who once asked, "Qu'est-ce que la différence entre 'perhaps' et 'maybe' ?" ("What's the difference between 'perhaps' and 'maybe'?") It's tough to formulate a good answer, and maybe there isn't one. But perhaps having the different options is what matters."
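The narrowing effect described above can be sketched concretely. Assuming a hypothetical next-word distribution (the words and weights below are invented for illustration), greedy decoding returns the top choice every time, while sampling keeps lower-probability options like "perhaps" in circulation:

```python
import random
from collections import Counter

# Hypothetical next-word distribution; the words and weights are assumptions.
next_word = Counter({"maybe": 6, "perhaps": 3, "perchance": 1})

def greedy(dist):
    """Always pick the single most likely word."""
    return dist.most_common(1)[0][0]

def sample(dist, rng):
    """Pick a word at random, weighted by its probability."""
    words, weights = zip(*dist.items())
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
print({greedy(next_word) for _ in range(10)})       # always {'maybe'}
print({sample(next_word, rng) for _ in range(10)})  # may include rarer words
```

Ten greedy draws produce one word, ten times over; sampling preserves some of the variety that makes the "perhaps"/"maybe" distinction worth having.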
Large language models (LLMs) trend toward sameness because they predict and select high-probability word sequences. Companion chatbots act as superb listeners, drawing people—especially teens—toward AI companionship and potentially reducing willingness to resolve conflicts. Predictive algorithms ingest massive human-generated text, run it through opaque processes, and output likely continuations, which works well for tasks like phone text completion. Consistently favoring high-probability choices narrows expressive range and reduces linguistic nuance. Heavy reliance on LLMs for writing and social interaction can shape collective outputs and social preferences toward narrower, more homogeneous, and potentially boring patterns.
Read at Psychology Today