"There needs to be an order of magnitude more effort": AI security experts call for focused evaluation of frontier models and agentic systems
Briefly

At RSAC Conference 2025, experts highlighted the pressing need for deeper evaluations of AI security risks. Jade Leung from the UK AI Security Institute pointed out that safety assessments are lagging behind AI advancements. While some companies invest heavily in evaluating dangerous capabilities, the process remains challenging. Daniel Rohrer from Nvidia echoed the sentiment, stating that as AI systems grow in complexity, organizations must continuously reevaluate their assessments to accurately predict AI behavior and ensure security.
Jade Leung, CTO at the UK AI Security Institute, emphasized that while many AI companies are investing substantially in risk evaluation, far more effort is still required.
Daniel Rohrer from Nvidia stated that evaluating increasingly complex AI systems requires organizations to continuously adapt their security assessments.
Read at IT Pro