The View From Inside the AI Bubble
Briefly

"The threat of technological superintelligence is the stuff of science fiction, yet it has become a topic of serious discussion in the past few years. Despite the lack of clear definition-even OpenAI CEO Sam Altman has called AGI a "weakly defined term"-the idea that powerful AI contains an inherent threat to humanity has gained acceptance among respected cultural critics. Granted, generative AI is a powerful technology that has already had a massive impact on our work and culture."
"Superintelligence has become one of several questionable narratives promoted by the AI industry, along with the ideas that AI learns like a human, that it has "emergent" capabilities, that "reasoning models" are actually reasoning, and that the technology will eventually improve itself. I traveled to NeurIPS, held at the waterfront fortress that is the San Diego Convention Center, partly to understand how seriously these narratives are taken within the AI industry."
Max Tegmark warned that artificial general intelligence (AGI) could threaten human survival and presented an AI-safety index at NeurIPS on which no company scored above a C+. The concept of technological superintelligence has shifted from science fiction to serious public concern despite weak definitions of AGI. Generative AI already affects work and culture, but several industry narratives are questionable: that AI learns like humans, that its capabilities are emergent, that reasoning models truly reason, and that systems will improve themselves. Tegmark asserted that major AI companies aim to build AGI, citing their founders' public statements. Meanwhile, conference attendance has surged in recent years.
Read at The Atlantic