from The Register, 6 days ago — Artificial intelligence
Microsoft looks to drive down AI infra costs with Maia 200
Microsoft unveiled the Maia 200 inference accelerator: 144 billion transistors, 10 petaFLOPS FP4, 216GB HBM3e (7TB/s), and 750W power consumption.
from ComputerWeekly.com, 6 days ago — Artificial intelligence
Microsoft introduces AI accelerator for US Azure customers
Azure US Central is the first region to receive the Maia 200 inference accelerator, offering FP8/FP4 tensor cores, HBM3e memory, and improved cost and performance.
from The Register, 2 months ago — Artificial intelligence
TPU v7, Google's answer to Nvidia's Blackwell, is nearly here
Google's TPU v7 Ironwood narrows the performance gap with Nvidia Blackwell, delivering competitive FP8 throughput, high-bandwidth HBM3e memory, and scalable TPU pods.