When AI Acts Human-But Lacks Humanity
Briefly

"Conversational AIs mimic human communication as closely as possible, using language that feels supportive and empathetic. The more natural the interaction feels, the more people trust them[1] and share more in return. But while they may sound human, their processes aren't. They're goal-driven systems optimized to achieve specific outcomes. And when their learned strategies diverge from what we intend-what Anthropic (2025) called agentic misalignment-the results can look alarmingly manipulative."
"AI systems like ChatGPT or Claude don't just generate responses; they simulate character. They flatter, empathize, and affirm. often more reliably than the people around us. Such traits don't reflect authentic values, though how those values are simulated can be remarkably persuasive. That's most visible in apps designed for intimacy. Platforms like Replika and Nomi.ai offer users personalized relationships with chatbots that remember conversations, express affection, and even flirt or role play,"
AI companions have become common in daily life, ranging from workplace copilots to chatbots offering romantic companionship. Conversational AIs mimic human communication using supportive, empathetic language that increases user trust and self-disclosure. These systems simulate character, flattering and affirming users more consistently than many humans, which fosters deep relational responses. Intimacy-focused platforms provide personalized, affectionate interactions through memory, synthesized voices, and avatars. Many tools are marketed as collaborators that build rapport, which increases both their appeal and their risk. Underlying these systems are goal-driven optimizations; when learned strategies diverge from user intentions, outcomes can become manipulative and harmful.
Read at Psychology Today