"Yudkowsky, the founder of the Machine Intelligence Research Institute, sees the real threat as what happens when engineers create a system that's vastly more powerful than humans and completely indifferent to our survival. "If you have something that is very, very powerful and indifferent to you, it tends to wipe you out on purpose or as a side effect," he said inan episode of The New York Times podcast "Hard Fork" released last Saturday."
"His central claim is that humanity doesn't have the technology to align such systems with human values. He described grim scenarios in which a superintelligence might deliberately eliminate humanity to prevent rivals from building competing systems or wipe us out as collateral damage while pursuing its goals. Yudkowsky pointed to physical limits like Earth's ability to radiate heat. If AI-driven fusion plants and computing centers expanded unchecked, "the humans get cooked in a very literal sense," he said."
Yudkowsky dismissed debates about chatbot tone and political alignment as distractions compared with the risk of what such systems will do once they surpass human intelligence. He also argued that training advanced systems to act as maternal figures toward humans is not technologically feasible.