
SK hynix Advances HBF Standardization with Sandisk

A joint Open Compute Project workstream aims to define High Bandwidth Flash as a new scalable, power-efficient memory tier for AI inference infrastructure.

Targeting hyperscale data centres and AI inference infrastructure, SK hynix Inc. has partnered with Sandisk Corporation to standardize High Bandwidth Flash (HBF), a new memory layer positioned between High Bandwidth Memory and SSD storage to improve scalability and power efficiency.

Why AI Inference Requires a New Memory Architecture
As artificial intelligence deployments shift from large-scale model training to inference, infrastructure demands are evolving. AI inference involves serving trained models to millions of users in real time, requiring rapid data access, high memory capacity and strict power efficiency.

Conventional memory hierarchies, in which High Bandwidth Memory (HBM) provides performance and solid-state drives (SSDs) provide capacity, face architectural limitations. HBM delivers high bandwidth but is limited in capacity and costly per gigabyte, while SSDs provide density but cannot match HBM-level latency and throughput.

High Bandwidth Flash (HBF) is designed to bridge this gap by introducing an intermediate memory layer. In this architecture, HBM continues to handle bandwidth-intensive operations, while HBF provides higher-capacity expansion closer to the processor than traditional storage.
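To make the tiering idea concrete, the sketch below models a three-level hierarchy in Python. The Tier figures and the place() heuristic are purely illustrative assumptions; no published HBF capacity, bandwidth or latency specifications are implied.

```python
# Illustrative three-tier placement sketch for AI inference data.
# All figures and thresholds are hypothetical assumptions, not
# published HBF specifications.

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    capacity_gb: int     # usable capacity (assumed)
    bandwidth_gbps: int  # sustained bandwidth (assumed)

HBM = Tier("HBM",  192, 4800)   # fast, small, expensive
HBF = Tier("HBF", 1024, 1600)   # the proposed intermediate tier
SSD = Tier("SSD", 8192,   14)   # dense, slow, cheap

def place(access_rate: float) -> Tier:
    """Route data by how often each inference step touches it:
    hot state to HBM, warm model shards to HBF, cold data to SSD."""
    if access_rate > 0.9:
        return HBM
    if access_rate > 0.1:
        return HBF
    return SSD

print(place(0.95).name)  # HBM: active attention/KV-cache state
print(place(0.40).name)  # HBF: warm expert or layer weights
print(place(0.01).name)  # SSD: cold checkpoints and archives
```

Under these assumed numbers, frequently reused data stays in HBM while warm model state lands in the intermediate tier, which is the role the workstream envisions for HBF.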

Launch of an Industry Standardization Workstream
SK hynix and Sandisk announced the launch of a dedicated workstream under the Open Compute Project (OCP), one of the largest open data centre technology initiatives globally. The initiative aims to define technical specifications and promote ecosystem-wide adoption of HBF.

Standardization is intended to ensure interoperability across CPU, GPU and accelerator platforms, enabling system-level optimization rather than isolated component improvements. In AI infrastructure, overall performance increasingly depends on coordinated memory and compute architecture rather than on the peak performance of individual chips.

According to SK hynix, establishing HBF as an industry standard is expected to support long-term ecosystem scalability as demand for advanced memory solutions grows toward 2030.

Technical Positioning of High Bandwidth Flash
HBF is positioned between ultra-fast HBM and high-capacity NAND-based SSDs. By leveraging NAND flash technology in a high-bandwidth configuration, HBF aims to:
  • Expand memory capacity beyond what is economically feasible with HBM alone
  • Improve power efficiency compared with relying solely on high-speed DRAM tiers
  • Support scalable AI inference workloads

In large-scale AI deployments, memory subsystem design directly influences total cost of ownership (TCO). Introducing a dedicated intermediate tier can reduce pressure on expensive HBM resources while maintaining performance levels suitable for inference tasks.
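As a rough illustration of that TCO argument, the back-of-envelope comparison below contrasts an all-HBM working set with an HBM-plus-HBF split. The footprint, cost-per-gigabyte and power-per-gigabyte figures are invented for illustration only; no vendor pricing or power data is implied.

```python
# Back-of-envelope TCO comparison: serving a fixed inference footprint
# from HBM alone vs. an HBM + HBF split. All $/GB and W/GB values are
# hypothetical placeholders; no vendor figures are implied.

FOOTPRINT_GB = 1024                          # total working set (assumed)

COST_PER_GB  = {"HBM": 12.0, "HBF": 3.0}     # hypothetical $/GB
POWER_PER_GB = {"HBM": 0.25, "HBF": 0.08}    # hypothetical W/GB

def tco(hbm_gb: float) -> tuple[float, float]:
    """Return (capex_usd, power_watts) for a given HBM share,
    with the rest of the footprint held in the HBF tier."""
    hbf_gb = FOOTPRINT_GB - hbm_gb
    capex = hbm_gb * COST_PER_GB["HBM"] + hbf_gb * COST_PER_GB["HBF"]
    power = hbm_gb * POWER_PER_GB["HBM"] + hbf_gb * POWER_PER_GB["HBF"]
    return capex, power

for hbm_share in (1024, 256):    # all-HBM vs. 25% HBM + 75% HBF
    capex, power = tco(hbm_share)
    print(f"HBM {hbm_share:4d} GB -> ${capex:,.0f}, {power:.0f} W")

# The smaller HBM share cuts both cost and power, provided the spilled
# data tolerates the intermediate tier's lower bandwidth, which is the
# bet the inference-oriented tiering makes.
```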

This layered approach aligns with broader industry efforts to optimize AI memory architecture and reduce energy consumption in hyperscale environments.

Commercialization and Ecosystem Strategy
SK hynix and Sandisk plan to combine their respective experience in HBM, NAND design, packaging and mass production to accelerate HBF commercialization. The companies emphasized that competitiveness in AI infrastructure will increasingly depend on "total memory solution" providers able to deliver multiple complementary memory technologies.

Ahn Hyun, President and Chief Development Officer of SK hynix, stated that optimizing the full ecosystem—rather than focusing solely on individual component performance—will be central to next-generation AI infrastructure.

By initiating formal standardization through OCP, the companies aim to establish technical specifications that allow system manufacturers to integrate HBF into future AI servers and accelerators with predictable performance and interoperability.

Implications for Data Centre Operators
For hyperscale operators and AI service providers, the introduction of a standardized HBF layer could provide:

  • Greater scalability for inference workloads
  • Improved power efficiency in high-density compute environments
  • More flexible memory tiering strategies
  • Lower total cost of ownership over large deployments

As AI services continue to expand globally, memory architecture innovation will play a decisive role in balancing performance, capacity and energy efficiency. The standardization of High Bandwidth Flash represents an early step toward redefining how next-generation AI systems manage data across compute and storage tiers.

www.skhynix.com
