Nvidia's stock price took a hammering recently, but it's still the big beast in the artificial intelligence (AI) hardware market. The latest buzzwords to listen out for were "agentic AI" and "reasoning", as the company announced its AI Data Platform, which has the storage suppliers trailing along after it.
Core to Nvidia's announcements at its recent GTC 2025 event in San Jose, California, was its next-generation Blackwell Ultra graphics processing unit (GPU) for AI datacentre processing, which, it says, is designed for reasoning models such as DeepSeek R1 and boosts memory and inference performance.
And with Blackwell Ultra at the core, Nvidia also looked forward to a slew of rack-scale platform products in its GB/NVL line that incorporate it, plus new DGX family SuperPOD clusters, workstations, network interface cards (NICs), GPUs for laptops, and so on.
This is all a bit of a pushback against the claim that DeepSeek is more efficient and less GPU-hungry than previously seen in, for example, ChatGPT. Nvidia has used such claims to assert that we're going to need even more fast AI processing to make the most of it.
Of course, the big storage suppliers need these kinds of input/output requirements like pharmaceutical companies need disease. The requirement to process vast amounts of data for AI training and inference brings the need for storage, lots of it, and with the ability to deliver very high speeds and volumes of access to data.
So, core to announcements at GTC 2025 for storage was the Nvidia AI Data Platform reference architecture, which allows third-party suppliers, with storage players key among them, to build kit to the GPU giant's specs for the workloads that will run on it, which include agentic and reasoning techniques.
Those namechecked as working with Nvidia include DDN, Dell, HPE, Hitachi Vantara, IBM, NetApp, Pure Storage, Vast Data and Weka.
In slightly more detail, the announcements by these storage players around GTC included the following.
DDN launched its Inferno object appliance, which adds Nvidia's Spectrum-X switch to DDN Infinia storage. Infinia is based on a key-value store with access protocols layered on top, but currently only for S3 object storage.
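To picture what "a key-value store with access protocols layered on top" means in practice, the sketch below shows the general pattern in Python: a flat key-value layer underneath, with an S3-style bucket/object interface mapped onto it. This is a minimal illustration only; the class and method names are hypothetical and say nothing about how Infinia itself is implemented.

```python
# Illustrative sketch only: a flat key-value store with an S3-style object
# protocol layered on top. All names here are hypothetical and are not DDN's API.

class KeyValueStore:
    """Minimal in-memory key-value store standing in for the storage layer."""

    def __init__(self):
        self._data = {}

    def put(self, key: str, value: bytes) -> None:
        self._data[key] = value

    def get(self, key: str) -> bytes:
        return self._data[key]


class S3StyleGateway:
    """Maps S3-style bucket/object calls onto flat key-value operations."""

    def __init__(self, store: KeyValueStore):
        self._store = store

    def put_object(self, bucket: str, obj_key: str, body: bytes) -> None:
        # The S3 namespace (bucket + object key) flattens to one composite key.
        self._store.put(f"{bucket}/{obj_key}", body)

    def get_object(self, bucket: str, obj_key: str) -> bytes:
        return self._store.get(f"{bucket}/{obj_key}")


# Usage: data written and read back through the object-protocol layer.
gateway = S3StyleGateway(KeyValueStore())
gateway.put_object("training-data", "shard-0001.parquet", b"...")
print(gateway.get_object("training-data", "shard-0001.parquet"))
```

The point of the pattern is that other access protocols (file, for instance) could in principle be layered over the same key-value substrate later; for now, the article notes only S3 object access is offered.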
Dell announced a whole range of things, including 20-petaflop-scale PCs aimed at AI use cases. In storage, it focused on its PowerScale scale-out file system, now being validated for Nvidia's cloud partner programme and enterprise AI factory deployments.
HPE made a big deal of its new "unified data layer" that will encompass structured and unstructured data across the enterprise, while it announced some upgrades, namely unified block and file access in its Alletra MP array.
Hitachi Vantara took the opportunity to launch the Hitachi iQ M Series, which combines its Virtual Storage Platform One (VSP One) storage and Nvidia AI Enterprise software, and which will integrate the Nvidia AI Data Platform reference design, aimed at agentic AI.
IBM announced new collaborations with Nvidia that included planned integrations based on the Nvidia AI Data Platform reference design. IBM plans to launch a content-aware storage capability for its hybrid cloud infrastructure offering, IBM Fusion, and will expand its watsonx integrations. Also, it plans new IBM Consulting capabilities for AI customer projects.
NetApp announced Nvidia validation for SuperPOD. In particular, the AFF A90 product gets DGX SuperPOD validation. Meanwhile, NetApp's AIPod has got the new Nvidia-certified storage design to support Nvidia Enterprise Reference Architectures.
Pure Storage, hot on the heels of its FlashBlade
Vast Data launched an enterprise-ready AI stack, which combines Vast's InsightEngine with Nvidia DGX products, BlueField-3 DPUs and Spectrum-X networking.
Weka announced it had achieved data store certification for Nvidia GB200 deployments. WEKApod Nitro data platform appliances have been certified for Nvidia Cloud Partner (NCP) deployments with HGX H200, B200 and GB200 NVL72 products.