Facebook presented two tools at F8, AR Studio and Frame Studio, which will allow developers to create augmented reality applications for the camera.
Mark Zuckerberg, CEO of Facebook, opened the F8 developer conference, focusing from the start on the possibilities of augmented reality and on how this technology fits into the company's roadmap for the coming years. Facebook knows that in the future augmented reality will be experienced through immersive devices such as simple glasses, rather than through a camera as it is today, and that developers need a platform so they can start creating content now. So, to take the first steps, Zuckerberg announced that the camera will be Facebook's first augmented reality platform.
The Camera Effects Platform connects art and technology to build new experiences on top of an augmented reality engine that provides facial recognition, 3D rendering and a reactive JavaScript framework. The platform includes two tools, AR Studio, whose beta has just been released, and Frame Studio. Artists, designers and developers can use these applications to create and control interactions, animations and objects within the scene, from mobile applications or web services.
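To give an idea of what a "reactive framework" means in this context, the sketch below implements a minimal reactive value in plain JavaScript: an effect property is declared once as a function of a tracked input, and the framework keeps it in sync instead of the developer polling every frame. The `Signal` class and the `faceX`/`maskX` names are illustrative assumptions, not AR Studio's actual API.

```javascript
// Minimal reactive-value sketch (illustrative, not AR Studio's real API).
class Signal {
  constructor(value) {
    this.value = value;
    this.subscribers = [];
  }
  // Derive a new signal that recomputes whenever this one changes.
  map(fn) {
    const derived = new Signal(fn(this.value));
    this.subscribers.push(v => derived.set(fn(v)));
    return derived;
  }
  set(value) {
    this.value = value;
    this.subscribers.forEach(fn => fn(value));
  }
}

// Hypothetical face-tracking input: the x position of a detected face.
const faceX = new Signal(0);

// A mask's position is declared once, relative to the face; it updates
// automatically whenever the tracker reports a new face position.
const maskX = faceX.map(x => x + 10);

faceX.set(50);            // simulate the tracker updating
console.log(maskX.value); // 60
```

This declarative style is what lets many effects be built with little or no hand-written per-frame logic.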
AR Studio allows you to create advanced masks, animations and effects that respond to movements, gestures and facial expressions; many of them can be built without writing a single line of code. Interested developers can request access to the beta through this link. Frame Studio makes it easy for artists to create frames for profile photos or for the camera, and it is available now.
Facebook CTO Mike Schroepfer also took the stage to explain in more depth how the Facebook team is working on the technology that will make augmented reality experiences as believable as possible. For that, object recognition and simultaneous localization and mapping (SLAM) are two capabilities that computer vision has to master. Schroepfer showed the current state of the work, in which the team has managed not only to identify objects and people but also to extract their mesh or skeleton. Thanks to this recognition and to SLAM, the camera can position objects, apply occlusion where appropriate and alter the environment.
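The connection between SLAM and positioning objects can be sketched in a few lines: once SLAM has estimated the camera's pose, a fixed world-space anchor can be projected to the correct pixel each frame, so the virtual object appears locked to the real world as the camera moves. The example below uses a deliberately simplified pinhole model (translation-only camera, no rotation, assumed intrinsics) purely for illustration.

```javascript
// Simplified pinhole projection: why a SLAM camera pose lets virtual
// objects stay anchored in the world (illustrative assumptions only).
const FOCAL = 500;                  // focal length in pixels (assumed)
const CENTER = { x: 320, y: 240 };  // principal point for a 640x480 image

// Project a world point into the image, given the camera position
// estimated by SLAM (camera looks down +z; rotation omitted).
function project(worldPoint, cameraPos) {
  const x = worldPoint.x - cameraPos.x;
  const y = worldPoint.y - cameraPos.y;
  const z = worldPoint.z - cameraPos.z;
  if (z <= 0) return null;          // point is behind the camera
  return {
    u: CENTER.x + (FOCAL * x) / z,
    v: CENTER.y + (FOCAL * y) / z,
  };
}

// A virtual object anchored 2 m in front of the starting camera position.
const anchor = { x: 0, y: 0, z: 2 };

// Frame 1: camera at the origin -> the object appears at the image center.
console.log(project(anchor, { x: 0, y: 0, z: 0 }));   // { u: 320, v: 240 }

// Frame 2: SLAM reports the camera moved 0.2 m right; the projection
// shifts left, so the object appears fixed in the real world.
console.log(project(anchor, { x: 0.2, y: 0, z: 0 })); // { u: 270, v: 240 }
```

Without the pose estimate that SLAM provides, there is no way to compute the second projection, which is why it is a prerequisite for believable augmented reality.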
The Facebook CTO also mentioned the standalone Santa Cruz prototype, since it likewise uses SLAM, through four cameras, to achieve inside-out positioning by recognizing the environment, but he showed no new prototype and gave no new data. We only got to see the 10-year roadmap again, in which the Oculus logo no longer appears.
The session concluded with the presentation of the social application Facebook Spaces for Oculus Rift.