A Korean semiconductor startup backed by Samsung and Arm has launched a rack-sized AI inference system, claiming 6x lower power consumption and up to 75% lower acquisition costs than equivalent Nvidia configurations.
TechRadar reports the chip targets enterprise inference workloads — the fastest-growing segment of AI infrastructure spending — where power efficiency is becoming as critical as raw performance.
The startup, FuriosaAI, has been developing its technology for four years. The new DreamWeaver chip uses a novel architecture that processes AI models in a fundamentally different way from Nvidia's GPU-based approach.
Instead of general-purpose GPU cores, FuriosaAI uses dedicated tensor processing units optimised specifically for the calculations needed during AI inference (running trained models), rather than for training.
Early customers include Samsung Electronics, Hyundai Motor Group, and two unnamed US hyperscalers. The company has shipped 500 units and expects to deliver 5,000 by year-end.