As the AI world gathers in Seoul, can an accelerating industry balance progress against safety?
This week, artificial intelligence caught up with the future, or at least with Hollywood's idea of it from a decade ago. "It feels like AI from the movies," wrote the OpenAI chief executive, Sam Altman, of his latest system, an impressive virtual assistant.
For some experts, that new AI, GPT-4o, will be an unsettling reminder of their concerns about the technology's rapid advances, with a key OpenAI safety researcher leaving the company this week amid a disagreement over its direction.
The inaugural AI Safety Summit at Bletchley Park in the UK last year announced an international testing framework for AI models, after calls from some concerned experts and industry professionals for a six-month pause in the development of powerful systems.
The Bletchley declaration, signed by the UK, the US, the EU, China and others, hailed the enormous global opportunities from AI but also warned of its potential for causing catastrophic harm. It also secured a commitment from big tech firms, including OpenAI, Google and Mark Zuckerberg's Meta, to cooperate with governments on testing their models before they are released.