AI tools don't always boost productivity. A recent study from Model Evaluation and Threat Research (METR) found that when 16 software developers performed tasks with AI tools, they took longer than when they worked without the technology, despite expecting AI to speed them up. The research challenges the dominant narrative of AI driving a workplace efficiency boom.
Microsoft CEO and head AI peddler Satya Nadella wants you to know that it's time for the next phase of AI acceptance, one where we focus on how humans are empowered by tools and agents, and on how we deploy resources to support that growth. Amid doubts that revenue from Microsoft Copilot subscriptions and cloud AI services will compensate for data center capital expenditures any time soon, Nadella has some incentive to convince customers and investors that AI is a financially intelligent long-term bet.
AI is disrupting more than the software industry, and it's doing so at breakneck speed. Not long ago, designers were deep in Figma variables and pixel-perfect mockups. Now, tools like v0, Lovable, and Cursor enable instant, vibe-based prototyping that makes the old methods feel almost quaint. What's coming into sharper focus isn't fidelity, it's foresight. Part of the work of product design today is conceptual: sensing trends, building future-proof systems, and thinking years ahead.
No surprise, I've been thinking about thinking lately. And it isn't driven by anxiety about superintelligence or the usual debates about the loss of human agency. This change is harder to name. There's something about the presence of AI that nudges our minds into "positions" we rarely adopt with other people. We lean into AI in ways that don't come naturally. And this very act of thinking in the company of a machine starts to feel, at least to me, like learning a new stance.
We collapse uncertainty into a line of meaning. A physician reads symptoms and decides. A parent interprets a child's silence. A writer deletes a hundred sentences to find the one that feels true. The key point: collapse is the work of judgment. It's costly and often painful. It means letting go of what could be and accepting the risk of being wrong.
For a while, I saw artificial intelligence and human cognition as divergent forces, two vectors moving in opposite directions. AI as the mirror, humanity as the reflection. That seemed reasonable, even comforting: we would stay grounded in meaning and empathy while the machines raced ahead in pattern and prediction. But that separation began to feel wrong, or at least incomplete. AI isn't drifting away from us. It's moving closer, shaping how we learn, heal, and even imagine.
AI is not like past technologies, and its humanlike character is already shaping our mental health. Millions now regularly confide in AI companions, and extreme cases of psychosis and self-harm following heavy use keep surfacing. This year, 16-year-old Adam Raine died by suicide after months of chatbot interaction. His parents recently filed the first wrongful death lawsuit against OpenAI, and the company has said it is improving its safeguards.
I talk to my AI assistant every day. Our conversations are long, reflective, and stimulating. I ask big questions about leadership, identity, relationships, and work. I receive thoughtful, clear responses in return. There are no awkward silences, no tension, no shame, no fear of judgment. I don't worry about hurting its feelings or being misunderstood. I never feel like I have to clean up after a messy interaction or wonder, later, if I said too much.