Back in June, Apple introduced its first attempt to enter the AR/VR space with ARKit. ARKit stands out because it uses an innovative technology called SLAM (Simultaneous Localization And Mapping). Today, tech giants including Apple, Google, and Facebook are investing heavily in SLAM technology, each trying to put it to the best use and get ahead in the race.
SLAM is a computer-vision technique that captures the physical environment as a cloud of tracked feature points and feeds that data into machines. SLAM provides an optical input for devices and computers, letting them understand what is going on in the physical world while keeping track of their own position within it. This data also helps AR developers create interactive, realistic experiences for their audience. The technology can be used in many scenarios, such as self-driving cars, games, robotics, artificial intelligence, and augmented reality.
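The two halves of the name can be illustrated with a toy sketch. This is not the API of any real SLAM library; the landmark names and coordinates are invented, and a real system would track thousands of noisy feature points rather than two labeled ones. The idea is only that the same feature points serve both purposes: mapping (placing observed points in the world) and localization (recovering the device's own position from points already in the map).

```python
# Toy illustration of the SLAM idea: a device moves through a space,
# observes feature points relative to itself, and simultaneously
# (1) registers those points in a world map and (2) uses previously
# mapped points to correct its own position estimate.
# All names and numbers are illustrative, not from any real SLAM system.

def observe(true_pose, landmarks):
    """Return landmark positions relative to the device (its 'camera view')."""
    x, y = true_pose
    return {name: (lx - x, ly - y) for name, (lx, ly) in landmarks.items()}

# Ground truth the device cannot see directly.
true_landmarks = {"wall_corner": (4.0, 0.0), "table_leg": (1.0, 3.0)}

# --- Mapping: at a known starting pose, add observed points to the map.
pose_estimate = (0.0, 0.0)
slam_map = {}
for name, (rx, ry) in observe(pose_estimate, true_landmarks).items():
    slam_map[name] = (pose_estimate[0] + rx, pose_estimate[1] + ry)

# --- Localization: after moving, re-observe a mapped point and solve
# for the new pose instead of trusting dead reckoning alone.
true_pose = (2.0, 1.0)                      # where the device actually is
rx, ry = observe(true_pose, true_landmarks)["wall_corner"]
mx, my = slam_map["wall_corner"]            # where the map says the point is
pose_estimate = (mx - rx, my - ry)          # pose recovered from the map

print(pose_estimate)  # (2.0, 1.0): the map let the device localize itself
```

In a real system both steps run continuously and in both directions at once, which is what makes the problem "simultaneous" and hard.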
The simplest form of SLAM is understanding floors, barriers, and walls. Currently, most AR SLAM implementations, ARKit included, use floor recognition and position tracking to place AR-friendly objects around us. They are therefore unable to detect what else is happening around us and fail to react accordingly. More advanced SLAM technologies like Google Tango build a mesh of the real-time environment and report the floor, walls, and objects in it, allowing everything around us to act as an interactable element.
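To make "floor recognition" concrete, here is a minimal sketch of one way it can fall out of SLAM feature points. This is not ARKit's actual algorithm (ARKit exposes detected planes through its own API); the function, point cloud, and tolerance below are invented for illustration. The idea: among the tracked 3D points, the densest band of similar heights is taken to be a horizontal plane such as the floor.

```python
# Minimal sketch (not ARKit's real algorithm) of floor recognition from
# SLAM feature points: find the densest horizontal band of point heights.

def detect_floor_height(points, band=0.05):
    """Return the height of the densest band of points, assumed to be the floor.

    points: iterable of (x, y, z) feature points, with y as height in meters.
    band:   vertical tolerance for grouping points into one plane.
    """
    heights = sorted(p[1] for p in points)
    best_height, best_count = None, 0
    for h in heights:
        # Count points whose height lies within `band` of h.
        count = sum(1 for other in heights if abs(other - h) <= band)
        if count > best_count:
            best_height, best_count = h, count
    return best_height

# Feature points: most cluster near y = 0 (the floor); a few sit on furniture.
cloud = [(0.1, 0.00, 0.2), (1.3, 0.02, 0.8), (2.0, -0.01, 1.5),
         (0.5, 0.01, 2.2), (1.1, 0.74, 0.9), (1.2, 0.76, 1.0)]

print(detect_floor_height(cloud))  # -0.01: the dominant low band, i.e. the floor
```

A production system would fit planes with something more robust (e.g. RANSAC) and detect walls the same way along other axes, but the principle is the same: planes emerge as dense, flat clusters in the tracked point cloud.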
German AR company Metaio led the marker-based AR market for years, and Apple's ARKit is an improved version of Metaio's SLAM technology. Before Apple entered the field, companies were already shipping SLAM on iOS and Android alike through software like Wikitude and Kudan.
Marker-based AR experiences required users to point the device's camera at a properly defined image to trigger the AR content; the predefined image told the device where and how to overlay that content. The main problem with marker-based technology was that users needed a physical object (here, the picture) to experience it, so companies had to promote both the physical object and the application. With ARKit, this problem is solved: users don't need anything except their phone and the environment around them.
Marker-based technology has its limitations, but it does not lack context. In simpler terms, it understands the physical environment through a properly defined image and can change the experience depending on that image. For instance, McDonald's AR and Starbucks's AR will provide different experiences based on their content, despite using the same underlying technology and applications. These central applications are known as AR browsers and will play a critical role in the future of AR and SLAM.
Although ARKit is technologically advanced, it lacks context: its applications do not understand where they are being used. App developers can blend in various inputs, such as GPS data or environmental signals like light and sound, to add more context. It is important to note that even with GPS input, it is difficult for applications to recognize a precise location, so they are still not nearly as good as Google's Tango, which can quickly identify areas indoors and outdoors alike.
Researchers claim that the future of AR is SLAM technology, but it will need more context to become truly instrumental. Otherwise, this brilliant technology will just be used for Snapchat geo-filters and other mediocre games and fun elements.
Google’s take on SLAM
Google is incorporating SLAM technology into its Project Tango in collaboration with companies like Lenovo. Tango uses two cameras to detect depth and gains an understanding of the physical environment through SLAM maps. Project Tango has demonstrated context in its algorithm, and it has the potential to support indoor-navigation applications because of that deeper understanding of the environment. These maps are databases of the device's virtual understanding of the physical environment; they allow machines to interact with the physical world and differentiate between locations and spots.
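How a SLAM map can "differentiate between locations" can be sketched in a few lines. This is a toy illustration, not Tango's actual map format: the place names and feature labels are invented, and real systems match thousands of visual descriptors rather than strings. The principle is that each known place stores a set of features, and a new observation is matched against the database to decide where the device is.

```python
# Toy illustration (not Tango's real data model) of SLAM maps as a place
# database: each known location stores a feature set, and an observation
# is matched against the database to tell locations apart.

slam_maps = {
    "lobby":   {"pillar", "front_desk", "glass_door"},
    "kitchen": {"counter", "fridge", "glass_door"},
}

def identify_location(observed, maps):
    """Return the stored place whose features overlap most with `observed`."""
    return max(maps, key=lambda place: len(maps[place] & observed))

print(identify_location({"fridge", "counter"}, slam_maps))  # kitchen
```

Note that an ambiguous feature like `glass_door` appears in both places, which is why matching works on the overlap of whole feature sets rather than on any single landmark.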
Google is well ahead in incorporating SLAM technology. Still, the chances of Project Tango taking off are fairly low, as it requires a two-camera hardware system to sense depth; most devices now ship with dual- or even triple-lens cameras, but the underlying technology is still not polished enough. With Google Lens, the company has already stepped up its game, and that data becomes even more valuable as people switch to wearables like AR glasses.
Facebook’s SLAM Technology
Facebook is making its mark in the AR industry and has the advantage of its two-billion-strong user base. Experts claim that this user base would be a massive asset if Facebook leveraged the community to handle the mapping for its applications. On the other hand, Apple ships the technology inside its in-house applications, giving it a steady advantage over Facebook.
Other applications, like the social networking app Snapchat, are also leveraging their large user bases to make the most of AR technology. Snapchat has combined GPS data with SLAM maps to offer AR features to its users. Lenovo is also trying to create a SLAM database, known as the Augmented Human Cloud, in collaboration with Wikitude.
Database rules technology
Developers claim that a comprehensive database is required for SLAM-based AR to function seamlessly. For instance, Facebook can infer the location of its users' photos by analyzing the images. Google could place ads on virtual billboards around its users by analyzing information gathered by devices like Google Glass. Self-driving cars can navigate using visual data alone.
Tech giants are aware of the importance of a robust database, but it is up to them how they leverage it in this field. A proper visual understanding of the physical environment is something all of them are racing to master, as AR and SLAM technology are predicted to become a billion-dollar industry within the next ten years. Nobody wants to be left behind in the race.