The contract includes a 20% prepayment and marks IREN's first deal with a major hyperscaler. Four new liquid-cooled data centers, Horizon 1 through Horizon 4, will support 200 megawatts of critical IT load, while a separate $5.8 billion agreement with Dell Technologies covers the purchase of GPUs, servers, and associated infrastructure. CEO Daniel Roberts said the partnership could generate roughly $1.94 billion in annualized revenue once fully deployed - the $9.7 billion contract value spread evenly over its five-year term.
This vast pipeline represents the core of the AI economy, channeling demand for GPU shipments, data center construction, and intensive model training cycles. As AI adoption accelerates across industries, these contracts signal sustained investment in infrastructure, with no indication of deceleration. Recent earnings show Microsoft's commercial remaining performance obligations at $392 billion, Amazon's AWS backlog at $200 billion, and Alphabet's cloud backlog at $155 billion - a combined $747 billion as of the end of the third quarter.
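As a trivial sanity check (the arithmetic is ours; the three figures are the ones reported above), the combined total is simply the sum of the three backlogs:

```python
# Summing the cloud backlogs reported above (end of Q3, $ billions).
backlogs_bn = {
    "Microsoft commercial RPO": 392,
    "Amazon AWS backlog": 200,
    "Alphabet cloud backlog": 155,
}
print(f"Combined backlog: ${sum(backlogs_bn.values())}B")  # $747B
```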
Microsoft is leaving no stone unturned in its quest to secure more compute capacity to meet its customers' heavy demand for AI services. On Monday, the Redmond-based tech giant signed a $9.7 billion, five-year contract with Australia's IREN to secure further AI cloud capacity. The deal will give Microsoft access to compute infrastructure built on Nvidia's GB300 GPUs, deployed in phases through 2026 at IREN's facility in Childress, Texas, which is planned to support 750 megawatts of capacity.
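A quick back-of-envelope check (our breakdown; the headline figures are from the announcement) shows how both the ~$1.94 billion annualized-revenue number and the prepayment fall straight out of the deal's terms:

```python
# Back-of-envelope check on the Microsoft-IREN deal's headline numbers:
# $9.7B total contract value over a five-year term, with a 20% prepayment.
total_value_bn = 9.7    # contract value, $ billions
term_years = 5          # contract term, years
prepayment_pct = 0.20   # prepayment share

annualized_bn = total_value_bn / term_years      # 9.7 / 5 = 1.94
prepayment_bn = total_value_bn * prepayment_pct  # 9.7 * 0.20 = 1.94

print(f"Annualized revenue: ~${annualized_bn:.2f}B")  # ~$1.94B, matching Roberts' figure
print(f"20% prepayment:     ~${prepayment_bn:.2f}B")  # also ~$1.94B up front
```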
Huawei has announced the launch of a third Availability Zone in the Huawei Cloud region in Ireland, to be operational in early 2026. Huawei claims the new zone will improve the reliability of its storage and database services by 10x, and that data center capacity can grow five times faster, enabling the platform to better cope with demand fluctuations. Versatile, Huawei Cloud's AI agent platform for businesses, will also be rolled out in the new zone.
Nvidia just announced plans to invest up to $100 billion in OpenAI to build out a new generation of AI data centers, one of the largest AI computing projects in history. These remarkable numbers get even bigger when you consider the countless supporting projects, such as new subsea cables and local corporate and public sector computing builds, that will likely be essential to this global AI network.
The digital divide is evolving beyond household access to broadband. As artificial intelligence (AI) is woven into the fabric of everyday life - from smart homes and virtual assistants to creative and professional tools - a new divide is emerging: those with a fiber connection, and those without.
With this switch, Cisco says it's offering an Nvidia Cloud Partner-compliant reference architecture for neocloud and sovereign cloud deployments. Available to order before the end of the year, the Cisco N9100 series switches offer a choice of Cisco NX-OS or SONiC operating systems, supporting Ethernet for AI networks and giving neocloud and sovereign cloud customers greater flexibility in how they build their AI infrastructure.
Chief financial officer Amy Hood said: "This quarter, roughly half of our spend was on short-lived assets, primarily GPUs [graphics processing units] and CPUs [central processing units], to support increasing Azure platform demand, growing first-party apps and AI solutions, accelerating R&D by our product teams, as well as continued replacement for end-of-life server and networking equipment." There is also longer-term expenditure, which includes $11bn of finance leases that are primarily for large datacentre sites.
A recent nationwide survey of more than 1,400 U.S. households found that two-thirds of Americans believe AI is already driving up their power bills, and most said they can't afford more than a $20 monthly increase. They're right to be worried. As tech companies pour hundreds of billions into new data centers, the surge in electricity demand is rewriting the economics of the grid - and households are footing the bill for an "AI power tax" they never voted for.
Organizations have long adopted cloud and on-premises infrastructure to build the primary data centers - notorious for their massive energy consumption and large physical footprints - that fuel AI's large language models (LLMs). Today, the demands of those data centers are making edge data processing an increasingly attractive resource for fueling LLMs, moving compute and AI inference closer to the raw data that customers, partners, and devices generate.
When it comes to artificial intelligence, a few names dominate the conversation: Nvidia (NASDAQ:NVDA), Taiwan Semiconductor Manufacturing (NYSE:TSM), and, in recent months, even Intel (NASDAQ:INTC). These players rightfully claim the spotlight, driving the AI narrative because they deliver tangible results - record revenues, market share gains, and innovations that fuel everything from chatbots to autonomous systems. Investors flock to them, bidding up shares on every earnings beat or product launch. Yet beneath the hype, AI's foundation relies on more than just processing power and fabrication prowess. Data storage and high-speed memory are the unsung necessities that enable seamless data flow, preventing bottlenecks in the AI pipeline.
The two companies announced the deal on Thursday, with Anthropic pitching it as "expanded capacity" that the company will use to meet surging customer demand and allow it to conduct "more thorough testing, alignment research, and responsible deployment at scale." Google's take on the deal is that it will enable Anthropic to "train and serve the next generations of Claude models," and involves "additional Google Cloud services, which will empower its research and development teams with leading AI-optimized infrastructure for years to come."
In August, the US government announced it was converting about $9 billion in federal grants that Intel had been issued during the Biden administration into a roughly 10 percent equity stake in the company. During its third-quarter earnings on Thursday - its first financial update since Trump's surprise investment - Intel reported that it earned $13.7 billion in revenue over the past three months, a three percent increase year-over-year. It's the fourth consecutive quarter that Intel has beaten revenue guidance.
For storage, Dell's network-attached storage (NAS) platform, PowerScale, now integrates with Nvidia GB200 and GB300 NVL72 systems. Much of this update is about reduction: Dell says the integration uses up to five times less rack space, 88% fewer network switches, and up to 72% lower power consumption compared with rival offerings. According to Dell, the integration will also scale to more than 16,000 GPUs.
With their enhanced processing capabilities, large "hyperscale" complexes are the preferred data centers for the computation-heavy training and use of AI models. They can cover an area of over 1 million square feet, roughly equal to 17 football fields. Water is used to maintain humidity and as a coolant for the heat-generating machines, and as American data centers have grown in size and number, so has their water consumption, from 5.6 billion gallons in 2014 to 17.4 billion in 2023.
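A quick back-of-envelope calculation (ours, using only the article's two data points) puts that growth in context:

```python
# Implied growth of US data center water consumption, using the
# figures cited above: 5.6B gallons (2014) to 17.4B gallons (2023).
gallons_2014 = 5.6   # billions of gallons
gallons_2023 = 17.4  # billions of gallons
years = 2023 - 2014  # nine-year span

multiple = gallons_2023 / gallons_2014     # overall growth multiple
annual_rate = multiple ** (1 / years) - 1  # compound annual growth rate

print(f"Growth: {multiple:.1f}x over {years} years")  # ~3.1x
print(f"Implied CAGR: {annual_rate:.1%} per year")    # ~13.4%
```

In other words, consumption more than tripled in under a decade, compounding at roughly 13% a year.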
Datacenters are set to standardize on the larger, 21-inch rack format by 2030, according to Omdia, as hyperscalers and server makers fully embrace it, leaving enterprises to the existing 19-inch standard. The analyst biz forecasts that the larger rack format, popularized by the Open Compute Project (OCP), will make up over 70 percent of kit shipped by the end of the decade, as it is increasingly adopted by vendors such as Dell and HPE that have been riding the AI infrastructure wave.