AI on Chip
Intelligence at your fingertips
Learners and Classifiers Anywhere
Cheap generic chipsets that can power complex AI.
How small can you go? AI at the edge of the Edge.
Training, learning and inference performed on the chip, and staying there.
The Future of Computing Power
True learning capability on a chip smaller and cheaper than a dime.
With the ever-expanding Internet of Things (IoT), centralized computing is no longer feasible. Transferring all this data to and from servers creates latencies that can render the whole process useless, and it potentially compromises sensitive information along the way.
Delocalizing computing power into connected objects turns them into smarter, more functional and cost-effective agents. At the level of large-scale infrastructure, the same approach leads to what is known as 'smart cities.' Think about avoiding energy waste, or controlling your house from your mobile device: the possibilities are vast. Giving cheap, localized, true learning capability to the Edge is therefore a revolution in its own right.
These trends have led to the emergence of separate, dedicated chipsets known as accelerators. Many people have heard of graphics chips, the backbone of the gaming industry; it turns out that the same type of chip is also very effective at processing neural networks. These are called GPUs, for graphics processing units (by analogy with CPUs, central processing units). More recently, one can also read about NPUs (neural processing units) and DPUs (data-flow processing units).
These separate, dedicated chips remain quite costly per unit.
At the moment, AI chips work alongside CPUs and VPUs (vector processing units).
VPUs were originally designed to run very demanding mathematics, namely linear algebra and vector math, at high speed. Arrayed together, they formed the architecture of early supercomputers before being displaced by newcomers such as GPUs.
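As a minimal sketch of the kind of vector math these units accelerate, consider the textbook "a times x plus y" operation (often called SAXPY), applied to whole vectors at once. The function name and data below are purely illustrative and not tied to any particular VPU's API:

```python
def saxpy(a, x, y):
    """Compute a*x + y element-wise: the classic vector operation
    (SAXPY) that vector processors execute in a single pass."""
    return [a * xi + yi for xi, yi in zip(x, y)]

# A VPU applies the multiply-add to every element in parallel;
# here the same math is simply expressed over plain Python lists.
result = saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 10.0, 10.0])
# result == [12.0, 14.0, 16.0]
```

On a real vector processor, one instruction performs this multiply-add across an entire register of values at once, which is where the speed comes from.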
VPUs are not good at processing images directly, but they excel at mathematics. So if AI input data is turned into vectors, rather than fed in as raw images, the VPU becomes the weapon of choice. Older readers will remember that before becoming grids of pixels, TV images were lines drawn across the screen, and lines are not unlike vectors.
So a VPU can perform very sophisticated AI training at high speed and high performance, at a fraction of the cost; it just needs the data in the correct input format.
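To illustrate what "the correct input format" means here, the sketch below flattens a tiny made-up 2x2 "image" (a grid of pixel intensities) into a single vector, then pushes it through one artificial neuron, which is nothing more than a dot product plus a bias. All names, weights and data are hypothetical, chosen only to show the image-to-vector step:

```python
def flatten(image):
    """Flatten a 2-D grid of pixels into a 1-D vector,
    the input format a vector processor works on natively."""
    return [pixel for row in image for pixel in row]

def neuron(weights, bias, vector):
    """One artificial neuron: a dot product plus a bias,
    i.e. exactly the vector math a VPU is built for."""
    return sum(w * v for w, v in zip(weights, vector)) + bias

# A hypothetical 2x2 grayscale image.
image = [[0.0, 1.0],
         [1.0, 0.0]]

vec = flatten(image)   # [0.0, 1.0, 1.0, 0.0]
out = neuron([0.5, 0.5, 0.5, 0.5], -1.0, vec)
# out == 0.0
```

Once the image is a vector, every layer of a neural network reduces to the same dot-product arithmetic, which is why vectorized input plays to a VPU's strengths.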