Samsung Advances AI Memory With HBM4E & LPDDR6 

These new high-bandwidth memory and low-power DRAM solutions target scalable AI infrastructures, edge computing, and data centers.


AI infrastructure, high-performance computing, and edge AI systems are driving demand for faster, more efficient memory architectures. At NVIDIA GTC 2026 (San Jose, California, March 16–19, booth #1207), Samsung Electronics Co., Ltd. presented its latest developments in high-bandwidth memory (HBM), storage, and low-power DRAM designed to support both hyperscale data centers and on-device AI applications.

The company’s showcase focused on its sixth-generation HBM4 and its successor HBM4E, alongside complementary memory and storage technologies aimed at improving performance, scalability, and energy efficiency across AI workloads.

High-bandwidth memory for AI data centers
The transition to large-scale AI models has significantly increased the need for memory bandwidth and throughput in data centers. Samsung’s HBM4, now in mass production, is designed to meet these requirements with data transfer speeds of 11.7 Gbps per pin and potential scaling up to 13 Gbps, well above the 8 Gbps per-pin baseline of the JEDEC HBM4 specification.

Building on this, HBM4E raises performance further, with up to 16 Gbps per pin and a total memory bandwidth of 4.0 TB/s per stack. These specifications are intended to support compute-intensive workloads such as large language models, real-time inference, and high-performance simulations.
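
As a quick sanity check, per-stack bandwidth follows directly from the per-pin rate and the interface width. The minimal sketch below assumes the 2048-bit per-stack I/O width that JEDEC defines for HBM4; the width is an assumption not stated in the article, while the per-pin rates are those quoted above.

```python
# Rough check of the quoted HBM bandwidth figures.
# Assumes the 2048-bit per-stack interface JEDEC defines for HBM4;
# per-pin rates are the ones quoted in the article.

HBM4_IO_PINS = 2048  # data I/Os per HBM4 stack (assumption, per JEDEC HBM4)

def stack_bandwidth_tbps(gbps_per_pin: float, pins: int = HBM4_IO_PINS) -> float:
    """Peak per-stack bandwidth in TB/s for a given per-pin data rate."""
    return gbps_per_pin * pins / 8 / 1000  # bits -> bytes, GB -> TB

for label, rate in [("HBM4  @ 11.7 Gbps", 11.7),
                    ("HBM4  @ 13.0 Gbps", 13.0),
                    ("HBM4E @ 16.0 Gbps", 16.0)]:
    print(f"{label}: {stack_bandwidth_tbps(rate):.2f} TB/s per stack")
# HBM4E at 16 Gbps works out to ~4.1 TB/s, consistent with the 4.0 TB/s figure.
```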

The use of a sixth-generation 10 nm-class DRAM process (1c) contributes to improved yield stability and performance consistency, which are critical for large-scale deployment in AI data centers.



Advanced packaging for higher density and thermal performance
To support further scaling of HBM stacks, Samsung introduced hybrid copper bonding (HCB) technology. Compared to conventional thermal compression bonding (TCB), HCB reduces thermal resistance by more than 20 percent while enabling memory stacks with 16 or more layers.

This development addresses one of the main limitations of high-density memory: heat dissipation. Improved thermal characteristics allow higher stacking without compromising reliability, which is essential for next-generation AI accelerators operating at high power densities.
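
To see why the bonding change matters, a 20 percent cut in thermal resistance translates directly into a lower temperature rise at a given power. The figures in the sketch below are hypothetical placeholders chosen for illustration; only the 20 percent reduction comes from the announcement.

```python
# Illustrative effect of a 20% lower stack thermal resistance.
# POWER_W and R_TH_TCB are assumed values, not Samsung figures.

def temp_rise_k(power_w: float, r_th: float) -> float:
    """Steady-state temperature rise (K) = power (W) x thermal resistance (K/W)."""
    return power_w * r_th

POWER_W = 30.0              # assumed power dissipated in the stack
R_TH_TCB = 0.8              # assumed stack thermal resistance with TCB, K/W
R_TH_HCB = 0.8 * R_TH_TCB   # 20% lower with hybrid copper bonding

print(f"TCB: dT = {temp_rise_k(POWER_W, R_TH_TCB):.1f} K")  # 24.0 K
print(f"HCB: dT = {temp_rise_k(POWER_W, R_TH_HCB):.1f} K")  # 19.2 K
# The recovered thermal headroom can be spent on taller stacks
# or higher power density at the same junction temperature.
```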



Memory and storage integration with AI platforms
Samsung also highlighted its collaboration with NVIDIA, particularly in integrating memory and storage solutions into AI computing platforms. Technologies such as HBM4, SOCAMM2 memory modules, and PCIe 6.0-based SSDs were demonstrated in the context of NVIDIA infrastructure.

SOCAMM2, based on low-power DRAM, is designed as a high-bandwidth server memory module that supports flexible system integration. Now in mass production, it is positioned as a viable option for next-generation AI servers that require both performance and energy efficiency.

On the storage side, the PM1763 SSD uses the PCIe 6.0 interface to deliver higher data transfer rates and support larger storage capacities, which is particularly relevant for AI workloads that depend on rapid access to large datasets. The PM1753 SSD, integrated into NVIDIA’s BlueField-4 STX reference architecture, targets improved energy efficiency and performance in inference scenarios.
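
For context on what the interface change is worth, each PCIe generation doubles the per-lane transfer rate. The sketch below compares raw unidirectional link bandwidth; the lane counts are illustrative, since the article does not state the drives' link widths, and flit/FEC overhead (a few percent on PCIe 6.0) is ignored.

```python
# Approximate raw unidirectional bandwidth per PCIe link.
# Lane counts are illustrative; protocol overhead is ignored.

GT_PER_S = {4: 16, 5: 32, 6: 64}  # per-lane transfer rate by PCIe generation

def link_gb_per_s(gen: int, lanes: int) -> float:
    """Raw unidirectional bandwidth in GB/s (1 GT/s ~ 1 Gbit/s per lane)."""
    return GT_PER_S[gen] * lanes / 8

for lanes in (4, 8, 16):
    print(f"x{lanes:<2}  PCIe 5.0: {link_gb_per_s(5, lanes):5.0f} GB/s   "
          f"PCIe 6.0: {link_gb_per_s(6, lanes):5.0f} GB/s")
# A PCIe 6.0 x4 drive has roughly the raw bandwidth of a PCIe 5.0 x8 device.
```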



Scaling AI manufacturing with digital twins
Beyond hardware, Samsung presented its approach to scaling semiconductor manufacturing using AI-driven processes. The integration of accelerated computing and digital twin technologies, supported by NVIDIA Omniverse libraries, enables simulation and optimization of chip production environments.

This approach is applied across the full semiconductor value chain, including electronic design automation (EDA), computational lithography, and facility operations. By using AI to model and optimize manufacturing processes, the company aims to improve production efficiency and reduce development cycles for advanced chips.



Low-power memory for edge and on-device AI
In addition to data center solutions, Samsung introduced memory technologies designed for local AI workloads on personal devices. LPDDR5X and LPDDR6 DRAM modules target smartphones, tablets, and wearable devices requiring high data throughput with reduced power consumption.

LPDDR5X achieves speeds of up to 25 Gbps per pin while reducing power consumption by up to 15 percent, enabling responsive AI-enhanced applications and high-resolution processing. LPDDR6 extends performance further with bandwidth ranging from 30 to 35 Gbps per pin and incorporates features such as adaptive voltage scaling and dynamic refresh control to optimize energy usage.
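
Translating these per-pin rates into module bandwidth requires a bus width, which the article does not give. The sketch below assumes a 64-bit interface, a common smartphone configuration, purely for illustration.

```python
# Peak module bandwidth from the article's per-pin rates.
# The 64-bit bus width is an assumption; the article states no width.

def module_gb_per_s(gbps_per_pin: float, bus_bits: int = 64) -> float:
    """Peak bandwidth in GB/s for a given per-pin rate and bus width."""
    return gbps_per_pin * bus_bits / 8

print(f"LPDDR5X @ 25 Gbps/pin: {module_gb_per_s(25):.0f} GB/s")  # 200 GB/s
print(f"LPDDR6  @ 35 Gbps/pin: {module_gb_per_s(35):.0f} GB/s")  # 280 GB/s
```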

These characteristics support emerging edge AI use cases, including on-device inference, real-time image processing, and AI-assisted user interfaces.



Positioning within the AI memory ecosystem
Compared to existing HBM3 and HBM3E solutions on the market, HBM4 and HBM4E offer higher per-pin speeds and greater total bandwidth, addressing the increasing demands of next-generation AI accelerators. The introduction of advanced bonding techniques and higher stacking capacity further differentiates these solutions in terms of scalability and thermal management.

At the same time, the integration of memory, storage, and packaging technologies within a unified ecosystem reflects a broader industry shift toward co-optimized AI infrastructure, where performance gains depend on the interaction between compute, memory, and data movement.

By combining high-bandwidth memory, low-power DRAM, and advanced storage solutions, Samsung’s portfolio addresses both centralized AI infrastructure and distributed edge computing requirements.

Edited by Industrial Journalist, Natania Lyngdoh — Adapted by AI.

www.semiconductor.samsung.com
