AI and the New Boogeyman
Briefly

It seems every generation needs a villain. Comic books were going to corrupt children. Television would rot attention spans. Video games bred aggression and isolation. Then came the tsunami of smartphones, social media, and even the phrase "screen time": a powerful term that offers no distinction between reading philosophy on a tablet and scrolling celebrity gossip. Now we have a new threat, and the anxiety feels familiar because, well, it is familiar. The names change, but to me the moral panic stays about the same.
What troubles me here isn't the concern itself; some of it is clearly warranted. What troubles me is the collapse into generic imprecision. We talk about "AI use" as though all engagement with these systems belongs in the same on/off category. A teenager who uses an LLM to generate five paragraphs without a second thought is doing something fundamentally different from a student who uses the same tool to test assumptions and refine thinking through genuine iteration. One may weaken cognition while the other may sharpen it. And that distinction isn't subtle; it's the whole question.
For as long as I can remember, educational systems have rewarded compression over curiosity and standardized outputs over intellectual formation. Long before AI arrived, students like me were already learning from schools that chased grades rather than thinking. AI didn't invent passive cognition; it illuminated it. And that illumination is uncomfortable, because it means AI isn't the original problem. It's a mirror held up to an educational system that was already struggling with the mechanics of learning and the epide…
AI panic follows a familiar historical pattern seen with comic books, television, and screen time. The main problem is not exposure to AI but the quality of engagement with it. Treating all AI use as the same “on/off” activity ignores major differences between passive generation and iterative, assumption-testing use. Educational systems already rewarded compression over curiosity and standardized outputs over intellectual formation. AI makes existing weaknesses more visible by acting as a mirror to how learning mechanics have been handled. The result is increased anxiety without a clear distinction between harmful and beneficial uses.
Read at Psychology Today