A week ago, Google introduced a small lifelogging camera at its Pixel 2 event. Called Clips, the device can be held in your hand, clipped onto your clothing, or set on a flat surface while it uses artificial intelligence and facial recognition to "capture beautiful, spontaneous images" of your life.
The camera is designed to record soundless video of the faces and pets it knows. As shown during the presentation, Google Clips is aimed primarily at parents and pet owners who do not want to miss a precious moment with their children or animals.
But to be fair, it is no more frightening than several similar gadgets that came before it. Narrative, for example, was a small lifelogging camera that, in a far more understated package, could record for up to two days, compared with the three-hour battery life of Google Clips. Amazon's Echo Look may not take pictures of you constantly, but it was made to live in your bedroom, arguably the most intimate place in your house to put a camera. And that is to say nothing of the cameras and microphones in people's notebooks, tablets, and smartphones.
Google Clips has a 12-megapixel sensor paired with a lens offering a 130-degree field of view. The camera shoots video at 15 fps and can convert clips into photos or GIF files. The manufacturer promises up to three hours of battery life on a charge. The camera weighs 60.5 grams, and its dimensions are a miniature 49×49×20 mm. The device supports Wi-Fi and Bluetooth LE, and there is a USB Type-C port on board as well. Google Clips works in conjunction with a smartphone running Android 7.0 Nougat; the camera is currently compatible with Pixel phones, the iPhone 8 and 8 Plus, and the Samsung Galaxy S7 and S8.
Google released the $249 Clips camera on October 4. And although Google Clips has a shutter button, the camera is built around an automatic usage scenario: the gadget itself recognizes faces and chooses the moments worth capturing. No information leaves the device without the user's knowledge; only the short clips and photos (selected frames from clips) that the user chooses to save are transferred to the smartphone.
A machine-vision chip made by Movidius, a company owned by Intel, powers the new Google Clips smart camera and lets the device analyze what it sees. Google Clips also uses machine learning to understand whom you spend the most time with, and automatically starts shooting those people, preserving your most important and interesting moments.
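Google has not published how this selection actually works, but the general idea of favoring frequently seen faces can be illustrated with a toy sketch. Everything here is invented for illustration: the face labels, the ranking heuristic, and the capture rule stand in for the on-device face embeddings and learned models the real camera uses.

```python
from collections import Counter

def rank_familiar_faces(detections, top_n=3):
    """Count how often each face ID has been seen and return the most frequent."""
    counts = Counter(detections)
    return [face for face, _ in counts.most_common(top_n)]

def should_capture(frame_faces, familiar):
    """Trigger a capture when a frequently seen face appears in the frame."""
    return any(face in familiar for face in frame_faces)

# Hypothetical detection history accumulated over time.
history = ["alice", "bob", "alice", "alice", "cat", "bob", "alice"]
familiar = rank_familiar_faces(history, top_n=2)

print(familiar)                            # → ['alice', 'bob']
print(should_capture(["alice"], familiar)) # → True
print(should_capture(["cat"], familiar))   # → False
```

A real implementation would rank faces by similarity of learned embeddings rather than exact labels, but the principle is the same: the more often the camera sees someone, the more readily it records them.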
Google Clips uses the Myriad 2 chip, which Movidius itself calls a visual processing unit, or VPU. The chip was designed specifically to run computer vision algorithms, including object recognition, and Movidius bills it as the industry's first always-on vision processor. This second-generation Myriad chip also powers the Neural Compute Stick USB dongle and provides the "sight" of the DJI Phantom 4 drone; earlier versions were used in Google's Project Tango devices and in other drones from the Chinese manufacturer DJI.
Google and Movidius, which has belonged to Intel since last year, are long-standing partners. Project Tango, Google's experimental smartphone project, used Myriad processors to build a 3D model of the environment in real time, and in Google Clips the VPU handles tasks such as recognizing images and people's faces.
The Myriad 2 chip processes data directly on the camera itself, without sending it to the cloud. This benefits both privacy and battery life: the risk that private information is intercepted in transit is minimized, and because Google Clips does not need a constant network connection, the battery lasts longer.