In the early days, VAST Data's focus was primarily on storing enormous amounts of data. "Even before we talked about AI, data had to be stored somewhere," Pernsteiner notes. The company started out in the world of HPC (high-performance computing), a strategic choice: scale and performance requirements in that sector are enormous, so VAST effectively forced itself to set the bar very high from the start.
Rep. John Moolenaar, R-Mich., sent a letter Thursday to NSF interim director Brian Stone asking the agency to revoke China-linked entities' access to the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, according to a copy of the missive first seen by Nextgov/FCW. ACCESS is a free, nationwide collection of supercomputing systems made available to academics and other researchers. It's frequently used across U.S. institutions and national labs to assist with national security and economic research.
Scientists are showing that neuromorphic computers, designed to mimic the human brain, are useful not only for AI but also for complex computational problems that normally run on supercomputers, The Register reports. Neuromorphic computing differs fundamentally from the classic von Neumann architecture: instead of a strict separation between memory and processing, the two functions are closely intertwined. This reduces data movement, a major source of energy consumption in modern computers. The human brain illustrates how efficient such an approach can be.
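As a rough illustration of the computational unit such chips implement in hardware, here is a toy C sketch (not from the article; all names and constants are illustrative) of a leaky integrate-and-fire neuron. Its state is held and updated in place, event by event, rather than being shuttled between a separate memory and processor.

#include <stdio.h>

/* Toy leaky integrate-and-fire (LIF) neuron: the basic unit that
 * neuromorphic chips implement directly in hardware. The membrane
 * state lives next to the update logic; constants are illustrative. */
typedef struct {
    float v;          /* membrane potential (state kept local) */
    float leak;       /* per-step decay factor */
    float threshold;  /* spike when the potential crosses this */
} lif_neuron;

/* Integrate one input sample; return 1 if the neuron spikes. */
int lif_step(lif_neuron *n, float input) {
    n->v = n->v * n->leak + input;   /* leak, then integrate */
    if (n->v >= n->threshold) {      /* fire and reset */
        n->v = 0.0f;
        return 1;
    }
    return 0;
}

int main(void) {
    lif_neuron n = { .v = 0.0f, .leak = 0.9f, .threshold = 1.0f };
    float inputs[] = { 0.3f, 0.3f, 0.3f, 0.3f, 0.0f, 0.6f, 0.6f };

    for (int t = 0; t < 7; t++) {
        int spiked = lif_step(&n, inputs[t]);
        printf("t=%d  v=%.2f  spike=%d\n", t, n.v, spiked);
    }
    return 0;
}

On a neuromorphic chip, large numbers of such units update in parallel and communicate only through spikes, which is where much of the energy saving comes from.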
Slurm is used to schedule computing tasks and allocate resources within large server clusters in research, industry, and government. SchedMD was founded in 2010 by the original developers of Slurm. The company not only drives the software's continued development but also provides commercial support and consulting to organizations that run Slurm in production. According to SiliconANGLE, SchedMD serves several hundred customers, including government agencies, banks, and healthcare organizations.
IBM Cloud Code Engine, the company's fully managed, strategic serverless platform, has introduced Serverless Fleets with integrated GPU support. With this new capability, the company directly addresses the challenge of running large-scale, compute-intensive workloads such as enterprise AI, generative AI, machine learning, and complex simulations on a simplified, pay-as-you-go serverless model. Historically, as noted in academic research, including a recent Cornell University paper, serverless technology has struggled to support these demanding, parallel workloads efficiently.
Japanese research institution RIKEN has decided it needs GPUs for its next-generation "FugakuNEXT" supercomputer and has signed Nvidia to supply them and design the systems needed to get them working. RIKEN is home to Fugaku, a machine that spent two years atop the TOP500 list of Earth's mightiest supercomputers starting in mid-2020. The machine still sits in seventh place, but RIKEN wants an upgrade and has already awarded Fujitsu a contract to build its successor and the custom Arm-based CPU called "MONAKA-X."
The top option in AMD's new Zen 5-based Ryzen Threadripper Pro 9000 WX-Series CPUs will be priced at $11,699 and features 96 cores and 192 threads. This flagship model, the Threadripper Pro 9995WX, is designed for workstations and will begin shipping on July 23rd.
OpenCL was groundbreaking as it allowed developers to write code that could run across heterogeneous platforms (CPUs, GPUs, DSPs, and FPGAs) from different vendors. This was the first major step toward democratizing GPU programming.
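To make that cross-vendor portability concrete, here is a minimal, illustrative sketch (not from the article) using the standard OpenCL C host API. It simply enumerates every platform and device the installed runtimes expose, so the same source code will list CPUs, GPUs, or accelerators from whichever vendors' drivers happen to be present.

#include <stdio.h>
#include <CL/cl.h>   /* on macOS: #include <OpenCL/opencl.h> */

int main(void) {
    /* Ask how many OpenCL platforms (vendor runtimes) are installed. */
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, NULL, &num_platforms);
    if (num_platforms > 8) num_platforms = 8;

    cl_platform_id platforms[8];
    clGetPlatformIDs(num_platforms, platforms, NULL);

    for (cl_uint i = 0; i < num_platforms; i++) {
        char name[256];
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                          sizeof(name), name, NULL);
        printf("Platform: %s\n", name);

        /* CL_DEVICE_TYPE_ALL picks up CPUs, GPUs, and accelerators alike. */
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL, 0, NULL, &num_devices);
        if (num_devices > 16) num_devices = 16;

        cl_device_id devices[16];
        clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL,
                       num_devices, devices, NULL);

        for (cl_uint j = 0; j < num_devices; j++) {
            char dev_name[256];
            clGetDeviceInfo(devices[j], CL_DEVICE_NAME,
                            sizeof(dev_name), dev_name, NULL);
            printf("  Device: %s\n", dev_name);
        }
    }
    return 0;
}

Error handling is omitted for brevity; a real host program would check the cl_int return value of each call before proceeding.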