Machine Learning Journal
Photos for iOS and macOS has a feature that organizes our photos according to the people who appear in them. To do this, Apple uses an algorithm capable of detecting faces and recognizing different people by their features. How does it achieve this? A new post on Apple's blog, 'Machine Learning Journal', reveals more details.
Entitled “An On-device Deep Neural Network for Face Detection”, the latest article on Apple’s artificial intelligence blog details the process of face recognition in photographs. By harnessing the CPU and GPU power of iOS and macOS devices, the system can distinguish between different people.
Apple’s biggest challenge in this regard is privacy. Because the company always seeks to offer the highest possible security, photos are not processed on powerful cloud servers; they are processed on our devices, where dedicated CPUs and GPUs make the task feasible. In addition, before photos are uploaded to the cloud for storage in the iCloud Photo Library, they are encrypted on the device and can only be decrypted by another device signed in to the same iCloud account. Not even Apple can see your photos.
According to Apple:
Here the challenge is to strike a balance between on-device processing and the device's normal operation. The process must be efficient enough to handle a large number of photographs in a reasonably short period of time, but without significant use of the battery or other resources. To achieve this, Apple also makes use of Metal, taking full advantage of the power of the CPU and especially the GPU.
What Apple is telling us confirms something that was already suspected: facial recognition, like so many other processes that seem simple and secondary to us, involves tremendous effort behind the scenes. This is why any new device is slow at first, and why upgrading to a new version of iOS (which means scanning the whole photo library again) makes everything slower and consumes more battery.
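To get a feel for why re-scanning an entire library is so costly, here is a rough, purely illustrative sketch in Python. It counts how many windows a classical sliding-window detector would evaluate across an image pyramid; Apple's article describes a deep neural network rather than this exact scheme, and the window size, stride, and scale factor below are assumptions chosen only for illustration.

```python
# Back-of-the-envelope estimate of detection workload. This is NOT
# Apple's pipeline; all parameters are illustrative assumptions.

def window_count(width, height, window=32, stride=8, scale=1.25, min_side=32):
    """Count sliding-window positions over a multi-scale image pyramid."""
    total = 0
    w, h = width, height
    while w >= min_side and h >= min_side:
        cols = (w - window) // stride + 1
        rows = (h - window) // stride + 1
        if cols > 0 and rows > 0:
            total += cols * rows
        # Shrink the image and scan again, so faces of any size are found.
        w = int(w / scale)
        h = int(h / scale)
    return total

# A single 12-megapixel photo (4032 x 3024 pixels):
per_photo = window_count(4032, 3024)
# A modest library of 10,000 photos:
library = per_photo * 10_000
print(f"windows per photo:   {per_photo:,}")
print(f"windows per library: {library:,}")
```

Even with these modest assumptions, a single photo yields hundreds of thousands of candidate windows, and a whole library yields billions, which makes clear why running the work on-device without draining the battery is a genuine engineering problem.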