In Brief:
- SEMIFIVE will lead turnkey development for FotoNation’s TriSilica perceptual AI chip family.
- The platform supports multimodal inputs including audio, mmWave, spectral, infrared, and RGB.
- Edge AI silicon is moving towards always-on perception with tighter control of power, memory, and data movement.
FotoNation and SEMIFIVE have entered a strategic collaboration to develop the TriSilica perceptual AI chip family, with SEMIFIVE leading turnkey silicon development using Samsung Foundry.
The collaboration is focused on ultra-low-power sensor-fusion SoCs for edge AI applications. TriSilica is designed as a compact-footprint perceptual AI silicon platform supporting multimodal sensor inputs including audio, mmWave, spectral, infrared, and RGB. The first product, TS-210, is planned for a multi-project wafer shuttle by the end of the year, targeting Samsung Foundry’s 8nm Low Power Ultimate process.
FotoNation, headquartered in Galway with development operations in Brașov, has built its technology around computational imaging, neural image signal processing, sensor fusion, and in-device perception. Its TriSense IP core combines neural ISP, sensor fusion, and AI for perceptual cognition under tight power constraints, while TriSilica translates that architecture into production silicon.
SEMIFIVE brings platform-based SoC design, pre-validated IP, and end-to-end custom silicon development. The agreement marks SEMIFIVE’s first European deal and gives FotoNation a route to commercial silicon for a product family intended to process complex sensor inputs close to the point of capture.
Edge AI systems increasingly require continuous sensing without the power and data penalties associated with sending raw sensor information to a host processor or cloud platform. Wearables, industrial monitoring systems, robotics, smart sensors, and compact autonomous devices all need to interpret local conditions while operating within tight energy, thermal, and memory constraints.
Multimodal sensing increases the amount of context available to a system, but it also increases design complexity. Audio, radar, spectral, infrared, and RGB data have different sampling characteristics, bandwidth demands, noise profiles, and processing requirements. A sensor-fusion SoC has to combine those data streams efficiently enough to support real-time perception without overwhelming memory bandwidth or battery capacity.
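One way to picture the alignment problem is to resample each modality onto a common perception tick before fusing. The sketch below is illustrative only, not FotoNation's architecture: the rates, feature dimensions, and averaging strategy are all hypothetical, chosen to show how streams with very different frame rates can be reduced to one fused feature vector per tick.

```python
import numpy as np

# Hypothetical rates (not TriSilica specifics):
AUDIO_FPS = 100   # audio feature frames per second (10 ms hop)
RADAR_FPS = 20    # mmWave chirp-frame rate
FUSION_HZ = 10    # common perception tick

def fuse(audio_feats, radar_feats):
    """Align both streams to the fusion tick by averaging each
    modality's frames within a tick, then concatenate features."""
    a = audio_feats.reshape(FUSION_HZ, -1, audio_feats.shape[-1]).mean(axis=1)
    r = radar_feats.reshape(FUSION_HZ, -1, radar_feats.shape[-1]).mean(axis=1)
    return np.concatenate([a, r], axis=-1)

# One second of toy features: 100 audio frames (8-dim), 20 radar frames (4-dim)
audio = np.random.rand(AUDIO_FPS, 8)
radar = np.random.rand(RADAR_FPS, 4)
fused = fuse(audio, radar)
print(fused.shape)  # (10, 12): 10 fusion ticks, 8 audio + 4 radar dims
```

A real SoC would do this alignment in hardware buffers rather than dense arrays, but the bandwidth trade-off is the same: averaging within the tick discards raw samples early, so only the reduced representation crosses the memory hierarchy.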
Always-on perception is now shaped less by a single headline accelerator figure and more by how efficiently the system can move, filter, fuse, and interpret data. The most constrained devices cannot afford to treat every sensor frame as raw information for a larger processor to handle later. Filtering and inference have to happen earlier in the signal chain, with enough intelligence to separate useful events from background data.
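The "filter early, infer late" pattern can be sketched as a two-stage pipeline: a cheap always-on gate that runs on every frame, and an expensive model that runs only on frames the gate passes. Everything below is a hypothetical illustration, assuming a simple energy threshold as the gate and a stand-in function for the heavy model; real always-on silicon would implement the gate in dedicated low-power logic.

```python
import numpy as np

ENERGY_THRESHOLD = 0.5  # hypothetical wake threshold, tuned per deployment

def gate(frame):
    """Cheap always-on stage: mean-square energy check.
    Runs on every frame; most background frames should fail it."""
    return float(np.mean(frame ** 2)) > ENERGY_THRESHOLD

def heavy_inference(frame):
    """Stand-in for the expensive perception model that only
    runs on frames the gate passes."""
    return {"event": True, "score": float(frame.max())}

def process(stream):
    results = []
    for frame in stream:
        if gate(frame):  # filter early in the signal chain
            results.append(heavy_inference(frame))
    return results

rng = np.random.default_rng(0)
quiet = rng.normal(0, 0.1, size=(5, 64))  # background frames
loud = rng.normal(0, 2.0, size=(2, 64))   # event-like frames
out = process(np.concatenate([quiet, loud]))
print(len(out))  # only the high-energy frames reach the model
```

The power argument is in the asymmetry: the gate costs a handful of multiply-accumulates per frame, while the model behind it may cost millions, so keeping the duty cycle of the heavy stage low is what makes persistent sensing viable on a battery.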
The use of an 8nm low-power process reflects that operating envelope. Edge AI products often sit between two requirements: enough performance to run meaningful local perception, and low enough consumption to remain viable in compact, battery-powered, or thermally restricted devices. Process selection, memory architecture, and IP integration all influence whether such products can support persistent sensing without moving into gateway-class power levels.
The collaboration also points to a broader European semiconductor pattern. More activity is moving from general-purpose AI acceleration towards domain-specific silicon for sensing, perception, and embedded intelligence. Industrial automation, robotics, machine vision, health monitoring, and smart infrastructure all require devices that can understand their environment without continuously exporting raw data.
FotoNation’s move from perceptual AI architecture into custom silicon gives the company a route towards production deployment, while SEMIFIVE gains a European customer for its platform-based design model. For edge systems, the direction is clear: more perception at the endpoint, less dependence on external compute, and tighter integration between the sensor front end and AI processing.