
"Zico Kolter leads a 4-person panel at OpenAI that has the authority to halt the ChatGPT maker's release of new AI systems if it finds them unsafe. That could be technology so powerful that an evildoer could use it to make weapons of mass destruction. It could also be a new chatbot so poorly designed that it will hurt people's mental health."
"OpenAI tapped the computer scientist to be chair of its Safety and Security Committee more than a year ago, but the position took on heightened significance last week when California and Delaware regulators made Kolter's oversight a key part of their agreements to allow OpenAI to form a new business structure to more easily raise capital and make a profit."
"Safety has been central to OpenAI's mission since it was founded as a nonprofit research laboratory a decade ago with a goal of building better-than-human AI that benefits humanity. But after its release of ChatGPT sparked a global AI commercial boom, the company has been accused of rushing products to market before they were fully safe in order to stay at the front of the race."
Zico Kolter chairs a four-person Safety and Security Committee at OpenAI with the authority to halt the release of AI systems the panel deems unsafe. Its remit spans misuse risks, such as technology powerful enough to enable weapons of mass destruction, and design harms, such as chatbots that damage users' mental health. The role gained heightened significance last week when California and Delaware regulators made Kolter's oversight a key condition of agreements allowing OpenAI to adopt a new business structure that makes it easier to raise capital and turn a profit. OpenAI was founded a decade ago as a nonprofit committed to building better-than-human AI that benefits humanity, but the commercial boom sparked by ChatGPT has brought accusations that the company rushed products to market before they were fully safe, along with internal governance conflicts.
Read at Fortune