Sandisk assembles advisory board to guide High Bandwidth Flash strategy

Sandisk is setting up a Technical Advisory Board to help guide the development and strategy of its High Bandwidth Flash (HBF) technology.

HBF applies the High Bandwidth Memory (HBM) DRAM concept to flash: a stack of NAND dies is grouped together and connected to the host GPU via an interposer. This aggregates the IO channels from each stacked die so that the collective data transfer bandwidth to the GPU is a multiple of an individual die’s bandwidth. Sandisk uses proprietary stacking with ultra-low die warpage to enable a 16-high configuration, and the architecture has been developed over the past year with input from leading AI industry players.
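To make the aggregation point concrete, here is a minimal back-of-the-envelope sketch. The per-die bandwidth figure is an assumption chosen purely for illustration, not a Sandisk specification; only the 16-high stack count comes from the announcement.

```python
# Hypothetical illustration of HBF-style bandwidth aggregation.
# The per-die bandwidth below is an assumed value, not a Sandisk
# specification; the 16-high stack comes from Sandisk's description.

PER_DIE_BANDWIDTH_GBPS = 50   # assumed IO bandwidth of a single NAND die, GB/s
DIES_PER_STACK = 16           # Sandisk describes a 16-high configuration


def stack_bandwidth(per_die_gbps: float, dies: int) -> float:
    """Aggregate bandwidth when each die's IO channels are exposed to the
    GPU through the interposer rather than funnelled through one interface."""
    return per_die_gbps * dies


if __name__ == "__main__":
    bw = stack_bandwidth(PER_DIE_BANDWIDTH_GBPS, DIES_PER_STACK)
    print(f"Aggregate stack bandwidth: {bw:.0f} GB/s ({DIES_PER_STACK}x one die)")
```

The point is simply that exposing each die’s channels through the interposer scales bandwidth with stack height, which a conventional SSD interface does not.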

Naturally, NAND access latency is longer than DRAM’s, so HBF is seen as augmenting HBM with an additional memory tier while providing greater NAND bandwidth than a traditional SSD. It can therefore be viewed as a memory/storage tier between HBM and external SSDs, providing, for example, a faster checkpoint store during AI training. Having announced HBF technology, Sandisk needs help in bringing it to market.
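To illustrate why an extra tier between HBM and external SSDs could matter for checkpointing, here is a minimal sketch; every size and bandwidth figure in it is an assumption made for the example, not a vendor number.

```python
# Purely illustrative comparison of checkpoint write times across tiers.
# Every figure here is an assumption made for the example, not a
# published Sandisk, HBM, or SSD specification.

CHECKPOINT_SIZE_GB = 2_000  # assumed size of a training checkpoint, GB

ASSUMED_TIER_BANDWIDTH_GBPS = {
    "HBM (on-package DRAM)": 3_000,  # assumed
    "HBF (on-package NAND)": 1_000,  # assumed
    "External NVMe SSD":        14,  # assumed
}

for tier, bandwidth_gbps in ASSUMED_TIER_BANDWIDTH_GBPS.items():
    seconds = CHECKPOINT_SIZE_GB / bandwidth_gbps
    print(f"{tier:<24} ~{seconds:7.1f} s to write {CHECKPOINT_SIZE_GB} GB")
```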

It has appointed Professor David Patterson and Raja Koduri to its Technical Advisory Board and says they will provide “strategic guidance, technical insight, market perspective, and shape open standards as Sandisk prepares to launch HBF.”

CTO Alper Ilkbahar stated: “We’re honored to have two distinguished computer architecture experts join our Technical Advisory Board. Their collective experience and strategic counsel will be instrumental in shaping HBF as the future memory standard for the AI industry, and affirming we not only meet but exceed the expectations of our customers and partners.” 

That’s the tricky bit – making HBF the future memory standard for the AI industry. Because it uses an interposer that has to be bonded to the GPU, Sandisk’s customer is the GPU manufacturer or a skilled semiconductor package-level systems builder. There is an HBM-GPU connection standard effectively shaped around Nvidia’s requirements, to which HBM manufacturers Micron, Samsung, and SK hynix adhere. HBM was primarily developed by SK hynix and AMD, and Nvidia ensured it was not locked into a single supplier.

Unless Nvidia adopts HBF, Sandisk will be left trying to win over the minor GPU suppliers, meaning AMD and Intel, and possibly other AI accelerator suppliers as well, such as the hyperscalers with their own chips.

David Patterson

David Patterson is a foundational technology heavyweight. He is the Pardee Professor of Computer Science, Emeritus, at the University of California, Berkeley, and a Google Distinguished Engineer, and will lead the Technical Advisory Board toward actionable insights and decisions. He is a prominent computer scientist known for co-developing Reduced Instruction Set Computing (RISC), which revolutionized processor design.

Patterson, we’re told, played key roles in the development of Redundant Array of Inexpensive Disks (RAID) and Networks of Workstations (NOW). He co-authored the seminal textbook Computer Architecture: A Quantitative Approach and was awarded the 2017 ACM Turing Award for his contributions to the industry.

He said: “HBF shows the promise of playing an important role in datacenter AI by delivering unprecedented memory capacity at high bandwidth, enabling inference workloads to scale far beyond today’s constraints. It could drive down costs of new AI applications that are currently unaffordable.” 

Raja Koduri

Raja Koduri is a computer engineer and business executive renowned for his leadership in graphics architecture, with previous positions at AMD as Senior Vice President and Chief Architect, and at Intel as Executive Vice President of Accelerated Computing Systems and Graphics. He directed the development of AMD’s Polaris, Vega, and Navi GPU architectures and Intel’s Arc and Ponte Vecchio GPUs, and spearheaded Intel’s foray into discrete graphics.

In early 2023, he founded a startup focused on generative AI for gaming, media, and entertainment, and joined the board of Tenstorrent in the AI and RISC‑V semiconductor space. He currently serves as founder and CEO of Oxmiq Labs and co-founder of Mihira Visual Studios, and continues to shape graphics and AI innovation through advisory and board roles across the semiconductor industry.

He said: “HBF is set to revolutionize edge AI by equipping devices with memory capacity and bandwidth capabilities that will support sophisticated models running locally in real time. This advancement will unlock a new era of intelligent edge applications, fundamentally changing how and where AI inference is performed.” 

The edge here is the small datacenter edge, not the remote office/branch office edge.

A LinkedIn post by Koduri said: “When we began HBM development our focus was improving bandwidth/watt and bandwidth/mm^2 (both important constraints for mobile), while maintaining competitive capacity with the incumbent solutions. With HBF the focus is to increase memory capacity (per-$, per-watt and per-mm2) significantly while delivering competitive bandwidth. Compute(flops) * Memory capacity(bytes) * Bandwidth(bytes/sec) modulates performance of AI models for both training and inference.”
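Read literally, Koduri’s relation treats the product of compute, capacity, and bandwidth as a rough figure of merit. A minimal sketch of that reading follows; both accelerator configurations are hypothetical, invented purely to show the arithmetic.

```python
# Naive reading of Koduri's relation: performance is modulated by
# compute (FLOPS) * memory capacity (bytes) * bandwidth (bytes/s).
# Both configurations below are hypothetical, invented for illustration.

def figure_of_merit(flops: float, capacity_bytes: float, bandwidth_bps: float) -> float:
    """Product of the three terms Koduri names; a rough proxy, not a model."""
    return flops * capacity_bytes * bandwidth_bps


hbm_only = figure_of_merit(flops=1e15, capacity_bytes=192e9, bandwidth_bps=3e12)
hbm_plus_hbf = figure_of_merit(flops=1e15, capacity_bytes=4e12, bandwidth_bps=2e12)

print(f"HBM-only accelerator (hypothetical):  {hbm_only:.2e}")
print(f"HBM + HBF accelerator (hypothetical): {hbm_plus_hbf:.2e}")
```

On those made-up numbers, the capacity gain outweighs the bandwidth give-back, which is the trade Koduri describes HBF making: significantly more capacity per dollar, watt, and square millimeter at competitive bandwidth.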

Sandisk’s Technical Advisory Board will include industry experts and senior technical leaders from both within and outside the company. Koduri is ex-AMD and ex-Intel, AMD and Intel being the alternative GPU manufacturers to Nvidia. It’s possible that Intel will exit the GPU space as it restructures, leaving AMD as the sole competitor to Nvidia. The lack of an Nvidia representative on the Technical Advisory Board could be seen as concerning from an all-embracing “memory standard for the AI industry” point of view. Were Nvidia to agree there is a need for HBF, its future would be more certain. As it is, the TAB could face an uphill struggle. You can check out an HBF fact sheet here.

Bootnote

We asked Sandisk a couple of questions about HBF and the company responded after this article was published.

Blocks & Files: Who will buy HBF from Sandisk? As it’s an HBM-like product with an interposer connection to the GPU, that suggests a GPU manufacturer, or another accelerator manufacturer, would be Sandisk’s customer. Is that right?

Sandisk: Without commenting on specific customers, we are engaged with the broader AI ecosystem.

Blocks & Files: Will the GPU-HBF interface be made public? As I understand it, the GPU-HBM interfaces have been made public. That would avoid the single-supplier lock-in problem.

Sandisk: We don’t have any updates to share at this time, please stay tuned.