AI on Chip
Intelligence at your fingertips
Learners and Classifiers Anywhere
Cheap generic chipsets that can power complex AI.
Edge-Computing
How small can you go? AI at the edge of the Edge.
Totally Cloudless
Training, learning and inference performed on the chip, and remaining there.
The Future of Computing Power
True learning capability on a chip smaller and cheaper than a dime.
What is edge-computing?
Edge computing means having real computing power inside everyday objects, whether they are connected to a network or not.
What is the Edge?
The price of the AI chip defines the Edge. For example, if one needs to invest $30 in a GPU, the Edge is limited to objects worth that price increase: fridges, cars, phone handsets, and other expensive appliances. If, on the other hand, one can offer true AI for 80 cents, the Edge grows to include a doorknob, a key fob, or a remote control. Likewise, if true AI can be embedded at such low prices, it can be used to harden any large-scale infrastructure: the electrical grid, the water distribution system, you name it.
Why are AI chips so disruptive?
Until recently, AI required huge amounts of data, which in turn demanded immense computing power found only in the Cloud. This was the revolution of distributed, or Cloud, computing.
With the ever-expanding IoT, that centralization is no longer possible or feasible. All the data transferred to and from servers creates latency that can render the process useless, and potentially exposes sensitive information.
Delocalizing computing power into connected objects turns them into smarter, more functional and more cost-effective agents. At the level of large-scale infrastructure, it leads to what is known as ‘smart cities.’ Think about avoiding energy waste, or about controlling your house from your mobile device; the possibilities are truly infinite. Giving cheap, localized, true learning capability to the Edge is therefore a revolution in its own right.
Will AI chips replace VPUs and CPUs?
In the world of AI, the trend for the past ten years has been to rely heavily on a technique called Neural Networks (NN). There are many different ways to implement such networks, but they always rely on heavy computation. Another trend has been a growing reliance on imaging, because a data file can be represented as an ‘image’, that is, as a graph or grid of values.
These two trends have led to the emergence of separate, dedicated chipsets known as accelerators. Many people have heard of graphics chips, the backbone of the gaming industry; it turns out that the same type of chip is very effective at processing neural networks. These are called GPUs, for graphics processing units (by analogy with CPUs, central processing units). More recently, one can also read about NPUs (neural processing units) or DPUs (data-flow processing units).
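To see why such accelerators matter, consider what a neural network actually computes. The following is a minimal sketch, in plain C with purely illustrative sizes and weight values, of a single fully-connected layer reduced to a matrix-vector multiply plus an activation; real networks stack many such layers over millions of weights, which is the workload GPUs, NPUs and DPUs are built to churn through.

```c
/* Minimal sketch: one fully-connected neural-network layer as a
 * matrix-vector multiply followed by a ReLU activation.
 * Sizes, weights and inputs are purely illustrative. */
#include <stdio.h>

#define IN  4   /* number of inputs  */
#define OUT 3   /* number of neurons */

int main(void) {
    const float weights[OUT][IN] = {
        { 0.2f, -0.5f,  0.1f,  0.7f},
        { 0.9f,  0.3f, -0.4f,  0.0f},
        {-0.6f,  0.8f,  0.5f, -0.1f},
    };
    const float bias[OUT] = {0.1f, -0.2f, 0.05f};
    const float input[IN] = {1.0f, 0.5f, -1.5f, 2.0f};
    float output[OUT];

    /* output = weights * input + bias, then ReLU */
    for (int o = 0; o < OUT; o++) {
        float sum = bias[o];
        for (int i = 0; i < IN; i++)
            sum += weights[o][i] * input[i];
        output[o] = sum > 0.0f ? sum : 0.0f;  /* ReLU */
    }

    for (int o = 0; o < OUT; o++)
        printf("neuron %d: %f\n", o, output[o]);
    return 0;
}
```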
Those separate dedicated chips are quite costly per unit.
At the moment, AI chips work together with CPUs and VPUs (vector processing units).
What are VPUs and are they useful?
VPUs (vector processing units) are a relatively old design.
These chips were originally made to process very demanding mathematics (linear algebra and vector math) at high speed. They could be arrayed together and formed the architecture of early supercomputers, before being displaced by newcomers such as GPUs.
VPUs are not good at processing images, but they are extremely good at mathematics. So if one turns AI input data into vectors, instead of basing it on images, the VPU becomes the weapon of choice. Older readers will remember that before being pixelated, TV images were drawn as lines on the screen; and lines are, in essence, vectors.
So a VPU can perform very sophisticated AI training at high speed, with high performance, and at a fraction of the cost; it just needs the data delivered in the correct input format.
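To make the idea concrete, here is a minimal sketch, in plain C with purely illustrative numbers, of what ‘turning data into vectors’ can look like: a tiny grid of samples is flattened into a single vector and scored against a stored reference vector with a dot product, the kind of bulk linear-algebra operation a VPU is built to accelerate.

```c
/* Minimal sketch: flatten a tiny 2D grid of samples into a vector and
 * score it against a stored reference vector with a dot product.
 * The 3x3 grid and reference values are purely illustrative. */
#include <stdio.h>

#define ROWS 3
#define COLS 3
#define LEN  (ROWS * COLS)

int main(void) {
    const float grid[ROWS][COLS] = {
        {0.1f, 0.4f, 0.2f},
        {0.7f, 0.9f, 0.6f},
        {0.3f, 0.5f, 0.8f},
    };
    const float reference[LEN] = {
        0.2f, 0.3f, 0.1f, 0.8f, 1.0f, 0.7f, 0.2f, 0.4f, 0.9f
    };

    /* Flatten the grid row by row into a single vector. */
    float vec[LEN];
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            vec[r * COLS + c] = grid[r][c];

    /* Dot product: the bulk vector operation a VPU computes in hardware. */
    float score = 0.0f;
    for (int i = 0; i < LEN; i++)
        score += vec[i] * reference[i];

    printf("similarity score: %f\n", score);
    return 0;
}
```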
How much do the chipsets cost? Are they really as cheap as a dime?
At this moment, Aerendir’s AI can be ported onto ASICs (Application-Specific Integrated Circuits) without the need to design new and costly chips. We have identified chipsets priced between 80 cents and $2-3 on which our AI can function perfectly. With a new chip design (a large investment, of course), we could drive the price down to 10-20 cents per unit. Close enough?
What are the use cases?
Infinite, limited only by the imagination of the developers.