GUC advances HBM4 IP on TSMC 3nm process

GUC has demonstrated a 12Gbps HBM4 controller and PHY platform on TSMC 3nm, highlighting the next step in high-bandwidth memory integration for AI and advanced compute designs.

In Brief:

  • GUC has shown a 12Gbps HBM4 IP platform built on TSMC 3nm technology.
  • The platform combines controller, PHY, partner HBM4 memory, and CoWoS integration.
  • High-bandwidth memory is becoming a defining system constraint in advanced AI silicon.

GUC has demonstrated a 12Gbps HBM4 controller and PHY platform on TSMC’s 3nm process, marking a further step in the drive to align advanced logic, packaging, and memory subsystems for next-generation AI and high-performance compute devices. The demonstration combines GUC’s HBM4 controller and PHY IP with partner HBM4 memory and TSMC CoWoS packaging technology.

High-bandwidth memory now sits near the centre of advanced compute design. Performance is increasingly limited not by arithmetic capability alone but by the speed, density, and energy cost of moving data. HBM has therefore shifted from a specialist accelerator feature into one of the core design battlegrounds for AI processors, custom ASICs, and large-scale compute engines. GUC cites 2.5 times the bandwidth of HBM3E, along with improvements in power and area efficiency.
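
That 2.5x figure is consistent with the headline interface numbers. As a rough sanity check, assuming the commonly quoted stack interface widths (1024 data pins for HBM3E, 2048 for HBM4) and a 9.6Gbps per-pin HBM3E rate, none of which are stated in the article, the demonstrated 12Gbps rate works out to almost exactly 2.5 times the per-stack bandwidth:

```python
# Rough check of the claimed 2.5x bandwidth gain over HBM3E.
# Assumed figures (not from the article): HBM3E uses a 1024-bit
# interface at up to 9.6 Gbps per pin; HBM4 doubles the interface
# to 2048 bits, taken here at the 12 Gbps rate GUC demonstrates.

def stack_bandwidth_gb_s(data_pins: int, gbps_per_pin: float) -> float:
    """Peak per-stack bandwidth in GB/s: pins x Gbps / 8 bits per byte."""
    return data_pins * gbps_per_pin / 8

hbm3e = stack_bandwidth_gb_s(1024, 9.6)    # ~1228.8 GB/s
hbm4 = stack_bandwidth_gb_s(2048, 12.0)    # 3072.0 GB/s

print(f"HBM3E: {hbm3e:.1f} GB/s")
print(f"HBM4:  {hbm4:.1f} GB/s")
print(f"Ratio: {hbm4 / hbm3e:.2f}x")       # -> 2.50x
```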

The company has also indicated that the platform supports a face-up configuration for TSMC SoIC face-to-face 3D integration, using through-silicon vias for I/O, power, and ground, as well as power feedthrough to an upper die. That places the IP within a broader packaging roadmap rather than presenting it as a simple interface block. Memory integration at this level is tied directly to 2.5D and 3D assembly, die partitioning, thermal planning, and overall package cost.

GUC has previously deployed HBM3E controller and PHY IP in 3nm customer products, and it positions the new HBM4 platform as a direct continuation of that work. The inclusion of interconnect monitoring from proteanTecs also points to the level of observability now expected in advanced memory subsystems: as data rates rise and package structures become more complex, validation and in-field visibility take on a larger role.

Memory architecture is now shaping the semiconductor roadmap as directly as compute architecture. For years, the conversation around AI hardware focused heavily on processing cores and software ecosystems. That is no longer enough. Larger models and heavier data movement have made bandwidth, latency, thermal loading, and memory power consumption central to whether expensive compute blocks can be fully used in practice.

HBM4 arrives as advanced-node economics are also becoming harder to manage. Every square millimetre of die area and every watt of power matters more when devices are already constrained by reticle limits, packaging yield, and cooling overhead. Gains in area efficiency and power efficiency therefore carry strategic value beyond raw bandwidth. A memory interface that does more without consuming disproportionate silicon or package resource can materially improve the viability of the complete device.

That has made memory IP providers more visible in the semiconductor value chain. What used to sit in the background as supporting circuitry now influences overall competitiveness. Designers choosing an AI or HPC implementation path have to think about channel count, packaging route, bandwidth scaling, stack integration, and whether the resulting part can still be produced economically at volume.

GUC’s HBM4 work fits that change. It is not simply an interface milestone. It sits inside the infrastructure needed to build the next generation of large, complex compute devices. In advanced AI silicon, memory has stopped being a secondary constraint and become one of the primary ones.
