Huawei has introduced its new Huawei Mate 10, but what is truly new is its neural processing unit (NPU), which brings artificial intelligence to the heart of the phone.
Huawei presented its new flagship terminal yesterday, the Huawei Mate 10, which will compete with the new Galaxy Note 8, iPhone 8 Plus and X, LG V30 and Pixel 2. Compared with them, its most striking advantage on paper, and the one most likely to win over the public, is that it arrives for €699, far from the €1,159 that the new Apple terminal will set as the high-end standard.
That is not the only noteworthy thing about the terminal, but in such a saturated market, with proposals so similar to everyone else's (dual camera, vanishing bezels, personal assistants as the centerpiece of the experience), Huawei does not have much more to offer at the moment, hence it keeps the price low to stay attractive.
“How do we differentiate ourselves?” someone at Huawei must have thought. The solution was to invoke the buzzword: artificial intelligence. Yes, Huawei has been the first company on Android to integrate a neural processing unit (NPU) alongside the CPU and GPU. According to the Chinese company, the new Mate 10 is thus 25x faster at artificial intelligence calculations than its own CPU, and that is not all: it is also 50x more efficient while doing them, a crucial aspect in environments with such limited energy.
There may be some doubt as to whether they are really the first, since Apple’s A11 Bionic chip also features hardware dedicated to neural networks. Setting that controversy aside, the reality is that the arrival of artificial intelligence in dedicated mobile hardware is great news. Until now we had the possibilities offered by, for example, Google Photos, but all that processing is carried out in the cloud, which puts privacy and security at risk.
Huawei’s approach is to do everything on the device, without sharing anything externally. Now, what does artificial intelligence mean on a mobile today? What the company is selling, above all, is that it will help detect the user’s usage patterns, and it will both adapt to them and offer advice on how best to use the terminal. Along those lines, the camera application is able to better detect scenes and change modes automatically: choosing night mode when appropriate, switching on portrait mode when it detects a face, and so on.
Beyond that, the NPU is very good at processing natural language and recognizing images and objects, so these functions are also integrated into the camera apps. At the system level, it is able to regulate things like performance or the power flow during charging.
The obvious question at this point is: is that all it can do for me? No, it can do a lot more, but we have to wait for developers to get to work with tools such as Facebook’s Caffe2 or Google’s TensorFlow Lite, libraries similar to Apple’s CoreML designed to run neural-network calculations efficiently on the device. Whether adoption will be quick or not, we don’t know. It will depend on the return it brings developers, and on what manufacturers like Samsung and Qualcomm do with their chips. What is clear is that the era of mobile artificial intelligence has begun.
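To make the idea concrete, here is a minimal sketch, in plain NumPy rather than any of those libraries’ actual APIs, of the core computation they all run on-device: a fully connected neural-network layer, i.e. a matrix multiply plus bias followed by a nonlinearity. The weights and sizes below are hypothetical; this is the kind of arithmetic an NPU is built to accelerate.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the toy weights are reproducible

def dense_layer(x, weights, bias):
    """One fully connected layer with a ReLU activation."""
    return np.maximum(weights @ x + bias, 0.0)

# Toy network (hypothetical shapes): 8 inputs -> 4 hidden units -> 2 outputs.
w1, b1 = rng.standard_normal((4, 8)), rng.standard_normal(4)
w2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)

x = rng.standard_normal(8)        # e.g. features extracted from a camera frame
hidden = dense_layer(x, w1, b1)   # first layer: matmul + bias + ReLU
logits = w2 @ hidden + b2         # output layer: raw scores for two classes

print(logits.shape)  # (2,)
```

A real model stacks many such layers with millions of weights, which is why running inference on a dedicated NPU, instead of the CPU, pays off in both speed and energy.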