WEKA Unveils Industry's First AI Storage Cluster Built On NVIDIA Grace CPU Superchips

2024-11-20 WekaIO HaiPress

Company Previews New Solution With NVIDIA, Arm, and Supermicro at Supercomputing 2024 to Deliver Exceptional Performance Density and Energy Savings for Enterprise AI Deployments

ATLANTA and CAMPBELL, Calif., Nov. 20, 2024 -- From Supercomputing 2024: WEKA, the AI-native data platform company, previewed the industry's first high-performance storage solution for the NVIDIA Grace™ CPU Superchip. The solution will run on a powerful new storage server from Supermicro, powered by WEKA® Data Platform software and Arm® Neoverse™ V2 cores in the NVIDIA Grace CPU Superchip, with NVIDIA ConnectX-7 and NVIDIA BlueField-3 networking, to accelerate enterprise AI workloads with unmatched performance density and power efficiency.


WEKA introduces First AI Storage Cluster Built on NVIDIA Grace CPU Superchip (PRNewsFoto/WekaIO)

Fueling the Next Generation of AI Innovation


Today's AI and high-performance computing (HPC) workloads demand lightning-fast data access, but most data centers face increasing space and power constraints.

NVIDIA Grace integrates the level of performance offered by a flagship two-socket x86-64 workstation or server platform into a single module. Grace CPU Superchips are powered by 144 high-performance Arm Neoverse V2 cores that deliver 2x the energy efficiency of traditional x86 servers. NVIDIA ConnectX-7 NICs and BlueField-3 SuperNICs feature purpose-built RDMA/RoCE acceleration, delivering high-throughput, low-latency network connectivity at speeds of up to 400Gb/s. The WEKA Data Platform's revolutionary zero-copy software architecture, running on the Supermicro Petascale storage server, minimizes I/O bottlenecks and reduces AI pipeline latency, significantly improving GPU utilization and accelerating AI model training and inference. The result is a dramatically shorter time to first token, discoveries, and insights, along with lower power consumption and associated costs.
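To put the quoted link speed in context, the short Python sketch below works through a purely illustrative throughput calculation. Only the 400Gb/s line rate comes from the figures above; the NIC count per node, the achievable efficiency, and the dataset size are hypothetical assumptions, not specifications of the announced solution.

    # Illustrative throughput arithmetic only. The 400 Gb/s line rate is quoted
    # above; every other number here is a hypothetical assumption.
    LINK_GBPS = 400          # per-NIC line rate (ConnectX-7 / BlueField-3)
    NICS_PER_NODE = 2        # assumed dual-NIC storage client
    EFFICIENCY = 0.90        # assumed achievable fraction of line rate
    DATASET_TB = 10          # hypothetical training dataset size, in terabytes

    # Aggregate usable throughput, converted from gigabits to gigabytes per second
    usable_GBps = LINK_GBPS * NICS_PER_NODE * EFFICIENCY / 8
    load_seconds = DATASET_TB * 1000 / usable_GBps

    print(f"Usable client throughput: {usable_GBps:.0f} GB/s")
    print(f"Time to stream {DATASET_TB} TB: {load_seconds:.0f} s (~{load_seconds/60:.1f} min)")

Under these assumed numbers a 10 TB dataset streams in roughly two minutes, which is the sense in which fast networking and low-latency storage aim to keep GPUs fed rather than idle.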

Key benefits of the solution include:

Extreme Speed and Scalability for Enterprise AI: The NVIDIA Grace CPU Superchip, with 144 high-performance Arm® Neoverse™ V2 cores connected by a high-performance, custom-designed NVIDIA Scalable Coherency Fabric, delivers the performance of a dual-socket x86 CPU server at half the power. The NVIDIA ConnectX-7 NICs and NVIDIA BlueField-3 SuperNICs provide high-performance networking, essential for enterprise AI workloads. Paired with the WEKA Data Platform's AI-native architecture, which accelerates time to first token by up to 10x, the solution ensures optimal performance across AI data pipelines at virtually any scale.

Optimal Resource Utilization: The high-performance WEKA Data Platform, combined with the Grace CPU's LPDDR5X memory architecture, ensures up to 1 TB/s of memory bandwidth and seamless data flow, eliminating bottlenecks. By integrating WEKA's distributed architecture and kernel-bypass technology, organizations can achieve faster AI model training, reduced epoch times, and higher inference speeds, making this an ideal solution for scaling AI workloads efficiently.

Exceptional Energy and Space Efficiency: The WEKA Data Platform delivers a 10-50x increase in GPU stack efficiency to seamlessly handle large-scale AI and HPC workloads. Additionally, through data copy reduction and cloud elasticity, the WEKA platform can shrink data infrastructure footprints by 4-7x and reduce carbon output, avoiding up to 260 tons of CO2e per PB stored annually and lowering energy costs by 10x. Paired with the Grace CPU Superchip's 2x energy efficiency compared to leading x86 servers, customers can do more with less, meeting sustainability goals while boosting AI performance (a back-of-envelope illustration of these figures follows this list).
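As referenced above, the following Python sketch is illustration only: the 4-7x footprint reduction and the up-to-260-tons-of-CO2e-per-PB figure are WEKA's stated "up to" claims, while the deployed capacity is a hypothetical assumption, not a WEKA or Supermicro specification.

    # Illustration of the sustainability claims above. The 260 tCO2e/PB/year and
    # 4-7x footprint-reduction figures are WEKA's stated "up to" numbers; the
    # deployed capacity below is a hypothetical assumption.
    CO2E_PER_PB_TONNES = 260        # "up to" avoided CO2e per PB stored, per year
    FOOTPRINT_REDUCTION = (4, 7)    # stated 4-7x infrastructure footprint reduction
    DEPLOYED_PB = 20                # hypothetical usable capacity, in petabytes

    annual_co2e_avoided = CO2E_PER_PB_TONNES * DEPLOYED_PB
    low, high = FOOTPRINT_REDUCTION
    print(f"Up to {annual_co2e_avoided:,} tCO2e avoided per year at {DEPLOYED_PB} PB")
    print(f"Equivalent un-reduced footprint: {DEPLOYED_PB*low}-{DEPLOYED_PB*high} PB")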

"AI is transforming how enterprises around the world innovate,create,and operate,but the sharp increase in its adoption has drastically increased data center energy consumption,which is expected to double by 2026,according to the International Atomic Agency," said Nilesh Patel,chief product officer at WEKA. "WEKA is excited to partner with NVIDIA,and Supermicro to develop high-performance,energy-efficient solutions for next-generation data centers that drive enterprise AI and high-performance workloads while accelerating the processing of large amounts of data and reducing time to actionable insights."

"WEKA has developed a powerful storage solution with Supermicro that integrates seamlessly with the NVIDIA Grace CPU Superchip to improve the efficiency of at-scale,data-intensive AI workloads. The solution will provide fast data access while reducing energy consumption,enabling data-driven organizations to turbocharge their AI infrastructure," said Ivan Goldwasser,director of data center CPUs at NVIDIA.

"Supermicro's upcoming ARS-121L-NE316RPetascale storage server is the first storage optimized server using the NVIDIA Grace Superchip CPU," said Patrick Chiu,Senior Director,Storage Product Management,Supermicro. "The system design features 16 high-performance Gen5 E3.S NVMe SSD bays along with three PCIe Gen 5 networking slots,which support up to two NVIDIA ConnectX 7 or BlueField-3 SuperNIC networking adapters and one OCP 3.0 network adapter. The system is ideal for high-performance storage workloads like AI,data analytics,and hyperscale cloud applications. Our collaboration with NVIDIA and WEKA has resulted in a data platform enabling customers to make their data centers more power efficient while adding new AI processing capabilities."

"AI innovation requires a new approach to silicon and system design that balances performance with power efficiency. Arm is proud to be working with NVIDIA,WEKA and Supermicro to deliver a highly performant enterprise AI solution that delivers exceptional value and uncompromising energy efficiency," said David Lecomber,director for HPC at Arm.

The storage solution from WEKA and Supermicro using NVIDIA Grace CPU Superchips will be commercially available in early 2025. Supercomputing 2024 attendees can visit WEKA in Booth #1931 for more details and a demo of the new solution.

About WEKA


WEKA is architecting a new approach to the enterprise data stack built for the AI era. The WEKA® Data Platform sets the standard for AI infrastructure with a cloud- and AI-native architecture that can be deployed anywhere, providing seamless data portability across on-premises, cloud, and edge environments. It transforms legacy data silos into dynamic data pipelines that accelerate GPUs, AI model training and inference, and other performance-intensive workloads, enabling them to work more efficiently, consume less energy, and reduce associated carbon emissions. WEKA helps the world's most innovative enterprises and research organizations, including 12 of the Fortune 50, overcome complex data challenges to reach discoveries, insights, and outcomes faster and more sustainably. Visit www.weka.io to learn more or connect with WEKA on LinkedIn, X, and Facebook.

WEKA was recognized as a Visionary in the 2024 Gartner® Magic Quadrant™ for File and Object Storage Platforms - read the report.

WEKA and the WEKA logo are registered trademarks of WekaIO, Inc. Other trade names used herein may be trademarks of their respective owners.
