NVIDIA Passes on Massive 4TB HBF Memory as Google Steps In for AI Expansion

The Architecture and Capabilities of High-Bandwidth Flash

As artificial intelligence workloads continue to demand unprecedented memory capacities, a new storage technology known as High-Bandwidth Flash (HBF) is emerging to bridge the gap between traditional NAND flash and High-Bandwidth Memory (HBM). Despite HBF's potential to deliver stacks reaching 4 terabytes, NVIDIA has reportedly shown little interest in adopting the technology, preferring to rely on enterprise solid-state drives instead. Meanwhile, Google is positioning itself as a primary beneficiary of the new architecture as sampling begins later this year.

Co-developed by SanDisk and SK Hynix, HBF utilizes a vertical stacking methodology that mirrors HBM’s design principles. By connecting multiple layers of NAND flash through an array of Through-Silicon Vias (TSVs), manufacturers can fuse numerous NAND dies into a single, high-density package. While current HBM configurations typically provide between 32 and 64 gigabytes per stack, HBF is engineered to scale up to 4 terabytes. Although HBM maintains an edge in raw speed, architectural refinements in HBF are expected to deliver sufficient throughput for demanding AI applications, particularly the inference tasks that have grown critical with the rise of Agentic AI. The expanded memory footprint also helps alleviate the key-value cache bottlenecks that typically constrain the main processing chips during inference.
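To put the capacity gap in perspective, the back-of-envelope sketch below estimates the key-value cache footprint of a hypothetical large transformer model serving long-context inference, then compares it against the per-stack capacities cited above. The model dimensions, context length, and byte width are illustrative assumptions, not figures from this report.

```python
# Illustrative estimate of KV-cache memory pressure vs. per-stack capacity.
# All model parameters below are hypothetical and chosen only for illustration.

def kv_cache_bytes(num_layers, num_kv_heads, head_dim, context_tokens, bytes_per_value=2):
    """KV cache size: 2 (keys and values) * layers * kv_heads * head_dim * tokens * bytes."""
    return 2 * num_layers * num_kv_heads * head_dim * context_tokens * bytes_per_value

# Hypothetical large model served with a 128K-token context and FP16 cache values.
per_request = kv_cache_bytes(num_layers=96, num_kv_heads=8, head_dim=128,
                             context_tokens=128_000)

GIB = 1024 ** 3
hbm_stack_gib = 64          # upper end of typical HBM stack capacity cited above
hbf_stack_gib = 4 * 1024    # projected 4 TB HBF stack

print(f"KV cache per request: {per_request / GIB:.1f} GiB")
print(f"Concurrent requests per 64 GB HBM stack: {hbm_stack_gib * GIB // per_request}")
print(f"Concurrent requests per 4 TB HBF stack:  {hbf_stack_gib * GIB // per_request}")
```

Under these assumed numbers a single long-context request consumes roughly 47 GiB of cache, which nearly saturates a 64 GB HBM stack but fits dozens of times over into a 4 TB HBF stack, illustrating why capacity-bound inference workloads are the natural target for the technology.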

NVIDIA’s Strategic Preference for Solid-State Storage

Industry observers note that NVIDIA has not included HBF in its near-term hardware roadmap. According to market commentary, the chipmaker maintains that enterprise SSDs can adequately address the capacity and bandwidth constraints that HBF aims to solve. To support this strategy, NVIDIA is collaborating with Kioxia to develop PCIe Gen7 SSDs capable of operating up to 100 times faster than conventional storage solutions. This approach aligns with the company’s existing infrastructure priorities and avoids reliance on a nascent memory standard.

Google’s Strategic Alignment with HBF Development

In contrast to NVIDIA’s stance, Google is expected to become a major adopter of HBF technology. The tech giant is expanding its Tensor Processing Unit (TPU) ecosystem to support rapid AI growth, with multiple next-generation TPU designs currently in development. As server architectures increasingly rely on LPDDR5 and LPDDR5X memory to offset CPU bottlenecks in AI workloads, HBF’s multi-layered design offers a compelling alternative. By consolidating memory into stacked formats, system designers can reduce printed circuit board real estate, increase total capacity, and maintain low power consumption without sacrificing high-speed data transfer.

Development Timeline and Industry Outlook

SK Hynix currently leads the commercialization effort for HBF, with initial samples scheduled for release in the second half of 2026. Early market indicators suggest growing corporate interest in the technology. On April 28, 2026, industry analyst Jukan noted on social media that “NVIDIA still does not appear interested in HBF. Its view is that HBF’s high bandwidth can be sufficiently addressed with eSSDs.” The same report highlighted that SanDisk has already received purchase orders for the memory, with its lead customer reportedly engaged in active discussions about adoption. While HBF may eventually challenge HBM in certain applications, its immediate impact will likely center on replacing traditional DDR and LPDDR modules in next-generation AI server builds.

MT Labs helps companies across Singapore deploy AI tools they actually own. Whether you need a small assistant for one team or a full agentic AI workflow for the whole company, we size the setup to what you need and what your team can manage. Get in touch and we’ll map it out with you.
