Anthropic CEO Dario Amodei has a grave warning for fellow AI titans who dismiss the public's concerns about AI. "You can't just go around saying we're going to create all this abundance, a lot of it is going to go to us, and we're going to be trillionaires, and no one's going to complain about that," Amodei told Axios in an interview. "Look, you're going to get a mob coming for you if you don't do this in the right way."
Today, I'm talking with Alex Lintner, who is the CEO of technology and software solutions at Experian, the credit reporting company. Experian is one of those multinationals that's so big and convoluted that it has multiple CEOs all over the world, so Alex and I spent quite a lot of time talking through the Decoder questions just so I could understand how Experian is structured, how it functions, and how the kinds of decisions Alex makes actually work in practice.
Marketing organizations are racing to adopt AI while simultaneously trying to contain it. Some 76.6% of marketers now have AI policies in place, up from 55.3% just a year earlier, per the Association of National Advertisers' January 2026 survey (registration required). Investment is also surging. Nearly 89% plan to increase AI spending, and two-thirds would maintain that investment even during an economic downturn.
Political leaders could soon launch swarms of human-imitating AI agents to reshape public opinion in a way that threatens to undermine democracy, a high-profile group of experts in AI and online misinformation has warned. The Nobel peace prize-winning free-speech activist, Maria Ressa, and leading AI and social science researchers from Berkeley, Harvard, Oxford, Cambridge and Yale are among a global consortium flagging the new disruptive threat posed by hard-to-detect, malicious AI swarms infesting social media and messaging channels.
One year ago this week, Silicon Valley and Wall Street were shocked by the release of China's DeepSeek mobile app, which rivaled US-based large language models like ChatGPT by showing comparable performance on key benchmarks at a fraction of the cost while using less-advanced chips. DeepSeek opened a new chapter in the US-China rivalry, with the world recognizing the competitiveness of Chinese AI models, and Beijing pouring more resources into developing its own AI ecosystem.
Salesforce-owned integration platform provider MuleSoft has added a new feature called Agent Scanners to Agent Fabric - a suite of capabilities and tools that the company launched last year to rein in the growing challenge of agent sprawl across enterprises. Agent sprawl, often a result of enterprises and their technology teams adopting multiple agentic products, can fragment agents, leaving their workflows redundant or siloed across teams and platforms.
The country's top internet regulator, the Cyberspace Administration of China (CAC), requires that any company launching an AI tool with "public opinion properties or social mobilization capabilities" first file it in a public database: the algorithm registry. In a submission, developers must show how their products avoid 31 categories of risk, from age and gender discrimination to psychological harm to "violating core socialist values."
Ministers are scrambling to find a way to combat an explosion of digitally created images of semi-nude women and children on the social media platform X. The Taoiseach has rejected the suggestion that current legislation is not strong enough to deal with the issue, while Women's Aid has removed itself from X, calling the crisis a 'tipping point'. Human rights lawyer Caoilfhionn Gallagher said such sexualised abuse of children online has 'devastating' impacts.
One of the key findings is that 53% of organizations cannot remove personal data from AI models once it has been used, creating long-term exposure under GDPR, CPRA, and emerging AI regulations. All respondents said agentic AI is on their roadmap, but the controls to govern those systems are lagging. Overall, 63% cannot enforce purpose limitations on AI agents, 60% lack kill-switch capabilities, and 72% have no software bill of materials (SBOM) for AI models in their environment.
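The kill-switch gap is concrete: in practice it means agent loops often have no externally controllable stop condition. A minimal sketch of what such a control could look like, in Python, with all names and structure purely illustrative (not any vendor's API):

```python
import threading

class AgentKillSwitch:
    """Illustrative kill switch: a shared flag an operator or
    orchestrator can flip to halt running agents between steps."""

    def __init__(self):
        self._halted = threading.Event()

    def halt(self):
        # Signal all agents observing this switch to stop.
        self._halted.set()

    @property
    def halted(self):
        return self._halted.is_set()

def run_agent(steps, kill_switch):
    """Execute agent steps, checking the kill switch before each one."""
    completed = []
    for step in steps:
        if kill_switch.halted:
            break  # stop cleanly instead of running to completion
        completed.append(step())
    return completed
```

The design choice being sketched: the stop condition lives outside the agent, so governance teams can contain a misbehaving system without depending on the agent's own logic.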
It's not only law firms and legal departments that are adopting GenAI systems without fully understanding what they can and cannot do - court systems may also be tempted to adopt these tools to short-circuit workloads in the face of limited resources. And that poses some risks and concerns for the rule of law, a notion that hinges on accuracy, fairness, and public perception.
Across organizations of every size, I am seeing the same operational pattern take shape. Legal teams are carrying more work, adopting more technology, and fielding increasing demands from the business, yet the underlying infrastructure has not evolved at the same pace. The result is a readiness gap that grows quietly and gradually, often in the background of an otherwise high-functioning department. The encouraging part is that the leaders who recognize the pattern early are already finding practical ways to close it.
In 2025, nearly every security conversation circled back to AI. In 2026, the center of gravity will shift from raw innovation to governance. DevOps teams that rushed to ship AI capabilities are now on the hook for how those systems behave, what they can reach, and how quickly they can be contained when something goes wrong. At the same time, observability, compliance, and risk are converging.
The Well-Architected Framework, long used by architects to benchmark cloud workloads against pillars such as operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability, now incorporates AI-specific guidance across these pillars. The expanded lenses reflect AWS's recognition of the increasing complexity and societal impact of AI workloads, particularly those powered by generative models.
The flap of a butterfly's wings in South America can famously lead to a tornado in the Caribbean. The so-called butterfly effect, or "sensitive dependence on initial conditions," as it is more technically known, is of profound relevance for organizations seeking to deploy AI solutions. As systems become ever more interconnected by AI capabilities that sit across and reach into a growing number of critical functions, the risk of cascade failures (localized glitches that ripple outward into organization-wide disruptions) grows substantially.
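One standard way to stop a localized glitch from rippling outward is the circuit-breaker pattern: after repeated failures, calls to a faulty dependency are suppressed for a cooldown period rather than retried endlessly. A minimal sketch in Python, with thresholds and names chosen purely for illustration:

```python
import time

class CircuitBreaker:
    """Illustrative circuit breaker: after `max_failures` consecutive
    errors, calls are short-circuited for `cooldown` seconds so a local
    fault does not cascade into dependent systems."""

    def __init__(self, max_failures=3, cooldown=30.0):
        self.max_failures = max_failures
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None  # timestamp when the breaker tripped

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                # Breaker is open: fail fast without touching the dependency.
                raise RuntimeError("circuit open: downstream call suppressed")
            # Cooldown elapsed: half-open, allow a trial call.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success resets the failure count
        return result
```

Failing fast at the boundary of one component is exactly what converts a would-be organization-wide outage back into a contained, local incident.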
But Osborne's latest job is the most eye-opening and is an alarming augury of what is to come. OpenAI, the maker of ChatGPT, has become the latest organisation to employ Osborne. He will run OpenAI for Countries, a unit tasked with working directly with governments while expanding the company's Stargate datacentre programme beyond the US. At least it was announced with a tweet, rather than a LinkedIn post.
Enterprise IT execs know well the dangers of relying too heavily on third parties, the need to keep a human in the loop on automated decision systems, and the risks of telling customers too much, or too little, when policy violations require an account shutdown. But a saga that played out Tuesday between Anthropic and the CEO of a Swiss cybersecurity company brings it all into a new and disturbing context.
We are now at the point where automation, machine learning and agentic orchestration can genuinely work together. This is not theory. It is already happening in defense and civilian agencies that have moved past pilots and into production, using agents that bring context, consistency and speed to complex workflows while preserving accountability. These seven principles for an agentic government give leaders a practical framework for adopting automation and AI responsibly.
CNH has scored some wins that Schroeder has been able to track. The company is leaning on AI to assist software engineers who are focused on precision agricultural technology and the FieldOps farm management systems, where AI, machine learning, and sensors are applied to digitally enhance farming. Early data has shown that these engineers are reducing the time needed for documentation by 60%, giving them more time to write new code.
As Nature reported last week (Nature https://doi.org/qhbv; 2025), one country is pushing forwards with plans to change that. China is proposing to set up a global body to coordinate the regulation of AI, to be known as the World Artificial Intelligence Cooperation Organization (WAICO). Establishing such a body is in all countries' interests, and governments around the world should get on board.
The House Democratic Commission on AI and the Innovation Economy - which will convene throughout 2026 - includes Reps. Ted Lieu, D-Calif., Josh Gottheimer, D-N.J., and Valerie Foushee, D-N.C., as co-chairs. Reps. Zoe Lofgren, D-Calif., and Frank Pallone, D-N.J., will serve as ex officio co-chairs, due to their positions as ranking members of the Science, Space and Technology Committee and the Energy and Commerce Committee, respectively.
I began the year with a blunt reality check: leadership today is forged in public, under pressure, and in real time. With Donald Trump already installed as US president for his second term, markets have moved faster than at any point in my career, reacting not to speculation but to executive action, rhetoric, and resolve. The first lesson this year has burned itself into my thinking: certainty beats comfort.
In line with our AI Principles, we're thrilled to announce that New Relic has obtained ISO/IEC 42001:2023 (ISO 42001) certification in the role of an AI developer and AI provider. This achievement reflects our commitment to developing, deploying, and providing AI features both responsibly and ethically. The certification was performed by Schellman Compliance, LLC, the first ANAB-accredited Certification Body based in the United States.
This Is for Everyone reads like a family newsletter: it tells you what happened, recounting the Internet's origin and evolution in great detail, but rarely explaining why the ideal of a decentralized Internet was not realized. Berners-Lee's central argument is that the web has strayed from its founding principles and been corrupted by profit-driven companies that seek to monetize our attention. But it's still possible to "fix the internet", he argues, outlining a utopian vision for how that might be done.