Advantech adds Intel Core Series 3 to edge AI systems

Advantech has brought Intel Core Series 3 processors to its industrial embedded boards and edge AI systems.


In Brief:

  • Advantech is integrating Intel Core Series 3 processors into embedded boards and edge AI systems.
  • The platform combines hybrid CPU cores, Xe3 graphics, and Intel NPU 5.0 for distributed inference.
  • Long-life availability, TSN, and real-time support strengthen the platform’s role in industrial automation designs.

Advantech is integrating Intel Core Series 3 processors into its next generation of industrial embedded boards and edge AI systems.

The initial rollout includes the MIO-5356 embedded single-board computer, the ARK-1252 DIN-rail edge computer, and the ARK-2233 fanless edge computer. The systems are built for industrial robotics, smart retail, healthcare assistance devices, self-checkout systems, and other applications where local inference is required inside compact compute platforms.

Intel Core Series 3 uses a hybrid CPU design with up to six cores, comprising two Performance-cores and four Efficient-cores. The processor family also includes Xe3 graphics and Intel NPU 5.0, enabling AI acceleration across CPU, GPU, and NPU resources. Advantech says the architecture can deliver up to 40 TOPS of total AI performance.

The dedicated NPU gives embedded systems a lower-power route for always-on inference workloads. Object detection, quality inspection, gesture recognition, anomaly detection, and lightweight machine analytics can be assigned to processing blocks better suited to sustained AI tasks, reducing the need to run every workload on the CPU.

The systems also address lifecycle and timing requirements common in industrial computing. Advantech says the platform supports up to 10 years of product availability, while Intel Time Coordinated Computing (TCC) and Time-Sensitive Networking (TSN) enable real-time, synchronised operation in automation and control applications.

Industrial edge AI is moving from pilot projects into standard machine architectures. Factories, logistics hubs, laboratories, and healthcare environments are placing inference closer to sensors and actuators to reduce latency, cut network traffic, and keep operational data inside the facility. Many of those systems need balanced compute rather than the largest available accelerator.

A compact edge node may need to run a vision model, communicate over industrial networks, process local I/O, and maintain deterministic behaviour inside a fanless enclosure. Power, thermal design, software support, and long-term availability carry as much weight as peak AI throughput.

The distributed compute model across CPU, GPU, and NPU gives system designers more flexibility when mapping workloads to hardware. Time-sensitive control can remain on suitable CPU resources, graphics and visual processing can use the GPU, and inference tasks can run on the NPU where power and efficiency are better matched.
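That mapping can be pictured as a simple dispatch table. The sketch below is illustrative only: the workload categories and device names are assumptions for this example, not an Advantech or Intel API.

```python
# Hypothetical workload-to-device mapping in the spirit of the CPU/GPU/NPU
# split described above. Names are assumptions, not a vendor API.
WORKLOAD_DEVICE = {
    "deterministic_control": "CPU",  # time-sensitive logic stays on CPU cores
    "visual_processing": "GPU",      # graphics and visual work on the GPU
    "sustained_inference": "NPU",    # always-on AI on the low-power NPU
}

def assign_device(workload_class: str) -> str:
    """Return the compute block best matched to a workload class,
    falling back to the CPU for anything unclassified."""
    return WORKLOAD_DEVICE.get(workload_class, "CPU")
```

In practice, runtimes such as Intel's OpenVINO expose a similar choice by letting the developer name a target device when compiling a model; the table above simply makes that design decision explicit.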

The launch also shows how embedded computing is absorbing functions once handled by separate systems. Machine builders are adding perception, local analytics, and adaptive control to platforms that previously relied on fixed logic and remote supervision. Industrial PCs and embedded boards are now expected to support AI workloads while retaining the ruggedness, lifecycle stability, and integration discipline expected in automation hardware.
