
Augmented reality: how it all began

According to Deloitte's "Technology, Media and Telecommunications Predictions" report, a billion users will use an augmented reality app at least once in 2018. Although AR technology has only now become widely popular, it is worth remembering how and with whom it all began.

The first augmented reality device was Headsight, developed by Philco Corporation in 1961. It was intended for military purposes, built as an auxiliary device for pilots, and it was also the first head-mounted AR display. The first virtual reality device appeared slightly later: in 1962, Morton Heilig developed and patented a virtual simulator called Sensorama (which is where our company name comes from).

Augmented reality attracted a great deal of interest with devices such as Google Glass, introduced in 2013, and Microsoft HoloLens, which appeared in 2015. Speaking of the HoloLens in more detail, it can fairly be called obsolete hardware that cannot cope with serious projects due to poor performance. In addition, the device runs a modified Windows 10 Holographic, and the application development experience leaves much to be desired. The HoloLens is heavy and uncomfortable for extended use, and the narrow field of view of the prismatic lenses onto which the image is projected makes half of the content invisible if the device is fastened loosely on your head or shifts slightly. Another device recently released to developers is the Magic Leap glasses. This device has been in development for a long time and has attracted a large amount of investment, but it still shows disappointing results and does not yet live up to expectations.

As for mobile AR, the first library was ARToolKit, developed in 1999. Released about 20 years ago, ARToolKit could recognize primitive 2D images (better known as markers, similar in spirit to QR codes) and track them in space relative to the camera by analyzing the video feed with computer vision techniques. The first supported mobile platform was Symbian, in 2005, the OS that ran on the Nokia smartphones of the time. Support for iOS followed with the iPhone 3G in 2008, and finally Android in 2010. Later, ARToolKit gained competitors such as Wikitude, Vuforia (originally Qualcomm Augmented Reality) and a dozen others.

Mobile augmented reality seriously changed direction with the release of the first version of ARKit. You might point out that Google also had a project called Tango. True, but that is a slightly different story. Tango was not just an SDK but a niche line of devices on which the technology worked. Their distinguishing feature was an infrared camera, which provided information about the surrounding space and about the device's movement through it. For example, when I first joined Sensorama, one of my first projects was built for this very platform: it let the user place a real-scale car model, walk around it, and change its configuration (paint color, wheels, interior trim, and so on).
Despite the large number of problems with the SDK itself, projects built on it worked and were of good quality: the declared capabilities were there, surfaces were scanned, and the device positioned itself in space and tracked the user's movements. The second application I worked on combined augmented and virtual reality: it integrated the Google Cardboard library (Google VR) so that the user could navigate the real world while watching the camera image combined with AR objects. For the Lenovo Tango device we 3D-printed a special, oversized Cardboard. Hundreds of errors had to be eliminated to combine the two libraries, but in the end the application worked. However, the device's camera had a large initial zoom, which made the real world seem huge, and in virtual reality mode this felt very strange. In the end, the Tango project never gained much popularity, mainly because it required specific devices, and it was closed in favor of ARCore.

In the summer of 2017, Apple announced ARKit and provided the tools to work with it. For the first time, the market had a mechanism that could track the position of a device and its augmented content using only a regular camera and the phone's internal sensors, such as the accelerometer and gyroscope. The quality of this spatial tracking was not ideal: there were errors in position recalculation, drift of placed models, objects "flying away" and other shortcomings. Still, Apple proved the concept's relevance and at the same time demonstrated several more interesting features of the product. One of them was light estimation: by analyzing information from the light sensor and the front and rear cameras, ARKit could simulate the behavior of real-world light on objects placed in augmented reality. Of course, this mechanism did not always work correctly, and shadows were not generated automatically either.

After such a move by Apple, its giant competitor could not stand aside, and Google's augmented reality library was soon announced, with a promise to support a huge number of devices and deliver decent quality. Is that how it really turned out? Not really. While ARCore 1.0 was coming out of beta, ARKit managed to release version 1.5, adding some impressive new features while improving tracking quality and the overall experience. As a result, Google was once again behind. For example, Apple immediately integrated ARKit with its own SceneKit, a framework that handles loading and rendering models, structuring the scene, physics, and more. At the time of the first ARCore release, Google had no library with comparable functionality, so proper work with models and scenes in augmented reality was possible only through the Unity engine; Google caught up only in a later ARCore version, releasing its own framework under the name Sceneform.

The winter update of ARKit to version 1.5 pleased users (and developers in particular) with improved AR object tracking and the ability to recognize vertical surfaces and images, but it also has certain limitations. For example, vertical surfaces are recognized only with a sufficient level of contrast and lighting, while plain white walls are recognized poorly or not at all. A sketch of what such a session setup looks like follows below.
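To make the ARKit features described above concrete, here is a minimal Swift sketch of a session configured the way ARKit 1.5 allows: horizontal and vertical plane detection, per-frame light estimation, and 2D image detection. The view controller structure and the "AR Resources" asset-catalog group name are illustrative assumptions, not code from any project mentioned in this article.

```swift
import UIKit
import ARKit
import SceneKit

// A minimal sketch of an ARKit 1.5-style session (assumptions noted above).
final class ARViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.delegate = self

        let config = ARWorldTrackingConfiguration()
        // ARKit 1.5 (iOS 11.3) added vertical planes alongside horizontal ones.
        config.planeDetection = [.horizontal, .vertical]
        // Per-frame ambient light estimation, as described in the text.
        config.isLightEstimationEnabled = true
        // 2D image detection ("AR Resources" is a hypothetical group name).
        if let images = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                         bundle: nil) {
            config.detectionImages = images
        }
        sceneView.session.run(config)
    }

    // Called when ARKit detects a plane or recognizes a reference image.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        if let plane = anchor as? ARPlaneAnchor {
            print("Detected a \(plane.alignment == .vertical ? "vertical" : "horizontal") plane")
        } else if let image = anchor as? ARImageAnchor {
            print("Recognized image: \(image.referenceImage.name ?? "unnamed")")
        }
    }

    // Feed the light estimate into SceneKit's lighting environment every frame.
    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        guard let estimate = sceneView.session.currentFrame?.lightEstimate else { return }
        // ambientIntensity is about 1000 lumens in a neutrally lit environment.
        sceneView.scene.lightingEnvironment.intensity = estimate.ambientIntensity / 1000
    }
}
```

Note that detectionImages here only recognizes images when they appear; continuously following them is exactly the gap discussed next.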
Image detection in ARKit 1.5 is also problematic, because the mechanism is responsible for recognizing images, not for tracking them continuously, so it has not yet been possible to replace classic marker tracking with this function. At its WWDC 2018 presentation, Apple focused heavily on augmented reality and introduced ARKit 2.0, pulling far ahead of all competitors. Today, ARKit 2.0 is the go-to tool for AR developers.
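One ARKit 2.0 addition relevant to the recognition-versus-tracking complaint above is a dedicated image-tracking configuration that follows reference images frame to frame, much like classic markers. A minimal Swift sketch, assuming a hypothetical "Markers" reference-image group in the asset catalog:

```swift
import ARKit

// A minimal sketch of ARKit 2.0 image tracking. "Markers" is a hypothetical
// asset-catalog group name used only for illustration.
func runImageTracking(on sceneView: ARSCNView) {
    guard let markers = ARReferenceImage.referenceImages(inGroupNamed: "Markers",
                                                         bundle: nil) else { return }

    // Unlike detectionImages in ARKit 1.5, ARImageTrackingConfiguration
    // follows images continuously, frame to frame, like classic markers.
    let config = ARImageTrackingConfiguration()
    config.trackingImages = markers
    config.maximumNumberOfTrackedImages = 2  // track up to two images at once

    sceneView.session.run(config, options: [.resetTracking, .removeExistingAnchors])
}
```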
As you can see, the history of AR spans more than 50 years. Over this time, shortcomings have been corrected, new tools have been announced, and the technology has been brought to a completely new level. It remains to be seen how this path will continue. I am sure of only one thing: it will be very interesting.

