In Brief:
- Rambus has launched PCIe 7.0 Switch IP with time-division multiplexing.
- The IP targets AI, cloud, and HPC SoCs with higher bandwidth density and deterministic data movement.
- It supports disaggregated and pooled compute architectures by improving utilisation of shared PCIe links.
Rambus has introduced PCIe 7.0 Switch IP with time-division multiplexing, targeting AI, cloud, and high-performance computing systems where bandwidth density, deterministic data movement, and scalable interconnect architectures are becoming central design constraints.
The new switch IP is built on the PCIe 7.0 specification and is intended for next-generation data centre and AI system-on-chip designs. It supports bandwidth scaling, low latency, and more efficient movement of data across CPUs, GPUs, accelerators, and NVMe storage.
Time-division multiplexing lets traffic from multiple sources be scheduled into defined time slots on shared PCIe links. Rambus is targeting disaggregated and pooled compute architectures, where system designers need to move data between heterogeneous compute and memory resources without simply adding more physical lanes, endpoints, or board-level complexity.
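To illustrate the principle rather than Rambus' implementation, the minimal Python sketch below shows how a fixed, repeating slot table could time-slice a shared upstream link between downstream ports; the port names, slot counts, and queue model are assumptions made purely for the example.

```python
# Minimal sketch of time-division multiplexing on a shared link.
# Illustrative only: the port names, slot counts, and queue model are
# assumptions, not details of the Rambus switch IP.
from collections import deque

def build_slot_table(ports, slots_per_port):
    """Fixed, repeating slot table: each port owns the same slot
    positions in every cycle, which is what keeps latency predictable."""
    table = []
    for port in ports:
        table.extend([port] * slots_per_port.get(port, 1))
    return table

def run_tdm(slot_table, queues, cycles):
    """Serve one packet from the owning port's queue in each slot."""
    served = []
    for _ in range(cycles):
        for port in slot_table:
            if queues[port]:
                served.append((port, queues[port].popleft()))
    return served

queues = {
    "gpu0": deque(f"gpu0-pkt{i}" for i in range(4)),
    "gpu1": deque(f"gpu1-pkt{i}" for i in range(4)),
    "nvme": deque(f"nvme-pkt{i}" for i in range(2)),
}
table = build_slot_table(["gpu0", "gpu1", "nvme"],
                         {"gpu0": 2, "gpu1": 2, "nvme": 1})
for port, pkt in run_tdm(table, queues, cycles=3):
    print(port, pkt)
```

Because each port owns fixed positions in every cycle, its worst-case wait for the shared link is bounded by the frame length, which is the deterministic behaviour the switch IP is aiming at.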
The IP complements Rambus’ wider PCIe 7.0 portfolio, which includes controllers, retimers, and debug solutions. The company’s PCIe 7.0 Switch is a customisable, multiport embedded switch for ASIC and FPGA implementations, enabling one upstream port and multiple downstream ports as a configurable interface subsystem. It is backwards compatible with PCIe 6.3 and PCIe 5.0.
The switch supports up to two upstream ports, up to 31 downstream ports, up to 128 lanes, and x16 link width per port. It supports link rates from 2.5GT/s through to 128GT/s per lane, placing it within the bandwidth class required by emerging AI infrastructure designs.
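For a sense of scale, the back-of-envelope calculation below converts the quoted per-lane rate and link width into raw throughput. It is a raw-rate figure only: it ignores FLIT framing, encoding, and protocol overhead, so delivered bandwidth will be lower.

```python
# Rough arithmetic for the headline figures quoted above.
LANE_RATE_GTPS = 128   # PCIe 7.0 signalling rate per lane
LANES_PER_PORT = 16    # x16 link width per port

raw_gbps_per_direction = LANE_RATE_GTPS * LANES_PER_PORT   # 2048 Gb/s
raw_gbytes_per_s = raw_gbps_per_direction / 8              # ~256 GB/s each way
print(f"~{raw_gbytes_per_s:.0f} GB/s per direction, "
      f"~{2 * raw_gbytes_per_s:.0f} GB/s bidirectional per x16 port")
```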
AI systems are changing interconnect requirements across the data centre. Training and inference platforms depend on coordinated movement of data between processors, accelerators, memory tiers, storage, and network interfaces. In many deployments, system performance is limited not only by raw compute, but by how predictably data can be routed through the platform.
PCIe has become one of the core fabrics in this environment because it spans processors, accelerators, SSDs, expansion subsystems, and peripheral devices. As PCIe speeds rise, signal integrity, latency, power, and board complexity become more difficult to manage. Switch IP that can improve fabric utilisation without multiplying physical resources can influence both SoC architecture and system cost.
Time-division multiplexing is particularly relevant to systems handling varied traffic profiles. Large-scale AI training, latency-sensitive inference, storage traffic, and accelerator-to-accelerator communication can place different demands on the same interconnect fabric. Scheduling and shared-link utilisation help designers balance those demands within the physical and power limits of a platform.
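As a simple way to picture that balancing act, the sketch below derives TDM slot counts from target bandwidth shares for several traffic classes; the class names, shares, and frame length are illustrative assumptions rather than product parameters.

```python
# Sketch: deriving TDM slot counts from target bandwidth shares for
# mixed traffic classes. Class names and shares are illustrative
# assumptions, not figures from the Rambus IP.
def allocate_slots(shares, frame_slots=16):
    """Round each class's share of a fixed-length frame to whole slots.
    Simplified: rounding can over- or under-fill the frame slightly."""
    total = sum(shares.values())
    return {cls: max(1, round(frame_slots * s / total))
            for cls, s in shares.items()}

shares = {"training": 8, "inference": 4, "storage": 2, "p2p": 2}
print(allocate_slots(shares))
# {'training': 8, 'inference': 4, 'storage': 2, 'p2p': 2}
```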
The launch also underlines the growing reliance on licensable interface IP in advanced system design. Few companies can justify building every high-speed block internally as standards move quickly and verification burdens increase. Proven IP for PCIe switching, retiming, control, and debug can reduce development risk for ASIC teams working under aggressive AI infrastructure schedules.
PCIe 7.0 will place heavy demands on PHY design, controller logic, verification, package routing, board design, compliance testing, and system validation. The switch layer is a critical part of that chain because it determines how bandwidth is shared between compute and storage resources. Rambus’ TDM approach gives SoC architects another option for building high-bandwidth systems without relying solely on lane count and board expansion.


