Versoteq Blog

Synopsis

We’re looking at the two Tango devices as well as the two native AR tools, one from Google and one from Apple.

Tango

While Tango, launched in 2014, is technically just a trademark for an AR system that also uses dedicated sensors, it is at the same time an idea and a name for a device concept, so "a Tango device" is just as natural a name as "an iPhone" or "an Android phone".

Currently there are only two devices in production: the Lenovo Phab 2 Pro (Petja, our head of development, loves it) and the Asus ZenFone AR, which we haven’t been able to test yet. Other Tango devices are by now obsolete and no longer supported.

As you can see, this is a huge problem for Tango: there are too few devices to make it a truly viable system.

The Lenovo Phab is a good phone and good at AR, the price point is excellent, and it is a lot of device for the money. The ZenFone is also getting good reviews, but here in Finland it costs at least twice as much.

As these are Android devices, it is easy to develop apps for them, and as always the apps are distributed through the Play Store.

So, what does Tango do?

Project Tango, or nowadays just Tango, is an augmented reality computing platform developed by Google. It uses computer vision to let mobile devices detect ("see") their position relative to the real world without GPS or other external signals or devices. Much of the technology, especially the sensors, builds on the ideas behind Microsoft’s Kinect system for the Xbox 360 and Xbox One, largely because the platform was led by Johnny Lee, one of the main contributors to the Microsoft product.

Tango does motion tracking using visual features of the environment, with the accelerometer and gyroscope helping to make the tracking extremely accurate. The system is also capable of area learning, that is, mapping a location, and that map can then be enhanced with metadata such as notes. Furthermore, Tango’s depth perception is the meat and potatoes that elevates it over ordinary AR systems. While HoloLens is capable of depth perception to a degree, Tango’s is much more accurate thanks to its sensors. All of this gives Tango devices superb six-degrees-of-freedom tracking unlike any mobile system yet.
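
To make the motion-tracking idea a bit more concrete, here is a minimal, purely illustrative Python sketch (not the Tango or ARCore API, and every name and number in it is made up): the IMU dead-reckons the position at a high rate and slowly drifts, while occasional visual feature fixes pull the estimate back, which is roughly why the extra sensors plus camera tracking give such accurate results.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0 / 200.0                          # IMU sample interval (200 Hz)
true_accel = np.array([0.1, 0.0, 0.0])    # pretend the device accelerates gently

def fuse(imu_estimate, visual_fix, alpha=0.98):
    """Complementary-filter blend: trust the fast IMU estimate short-term
    and the drift-free visual fix long-term."""
    return alpha * imu_estimate + (1.0 - alpha) * visual_fix

position = np.zeros(3)        # our fused estimate
velocity = np.zeros(3)
true_position = np.zeros(3)   # ground truth, for comparison only
true_velocity = np.zeros(3)

for step in range(2000):      # ten seconds of motion
    # Ground-truth motion of the device.
    true_velocity = true_velocity + true_accel * dt
    true_position = true_position + true_velocity * dt

    # IMU dead reckoning drifts because every accelerometer sample is noisy.
    noisy_accel = true_accel + rng.normal(scale=0.05, size=3)
    velocity = velocity + noisy_accel * dt
    position = position + velocity * dt

    # A visual feature fix arrives at roughly 10 Hz and corrects the drift.
    if step % 20 == 0:
        visual_fix = true_position + rng.normal(scale=0.01, size=3)
        position = fuse(position, visual_fix)

print("remaining error after 10 s:", np.linalg.norm(position - true_position))
```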

While in my own view HoloLens is more promising, because it is an HMD, Tango is the more mature and developed technology. There are problems with Tango, however, the chief of them being that it was never truly part of the Android package. This has now changed with Android 8/Oreo and Google’s ARCore. Looking at the source, it becomes apparent that ARCore, launched shortly after Apple’s ARKit, is essentially the Tango software suite with access to the devices’ extra sensors turned off. So we have a situation where every phone capable of running Android 8 will eventually be turned into an augmented reality handset, just as Tango was originally meant to be. My hope is that we will still see more Tango devices, as the software benefits a lot from the sensors, even though the suite is advanced enough to do most of what Tango does. It can even handle context and walls, unlike the competitors.


Competitors?

Currently ARCore is the most advanced AR system for commercial use, even though unveiling it was something of a knee-jerk reaction from Google in the wake of Apple’s ARKit. These two augmented reality suites are both powerful and very easy to use, but they have their quirks. ARKit is better at retaining the scanned area, while ARCore compensates with an extremely quick restart, so in practice performance is similar. Where ARCore wins is that it handles context and walls, whereas ARKit needs a workaround: you first have to manually designate and scan the area for the walls, or for context such as tables and books, to work.

Another problem with ARKit is that it drains the device battery very quickly, so Apple still needs to optimize its power use. A further oddity is that ARKit suffers from Apple’s "policy of sameness". The iPhone 8 has no special sensors, yet it works with ARKit perfectly well. The oddity comes with the iPhone X and its special sensor array pointing towards the user for facial identification. That sensor suite is advanced, extremely good and would work perfectly for augmented reality, but it points the wrong way, and so Apple’s "Tango" never made an appearance.
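
To illustrate what "handling walls and surfaces" means in practice, here is a small self-contained Python sketch (not the ARCore or ARKit API) of the geometric core of a plane hit test: intersecting a tap ray from the camera with a detected plane to find where virtual content should be anchored. The scene values are invented for the example.

```python
import numpy as np

def hit_test(ray_origin, ray_direction, plane_point, plane_normal):
    """Intersect a tap ray from the camera with a detected plane (floor, wall,
    table). Returns the 3D hit point, or None if the ray is parallel to the
    plane or the plane lies behind the camera."""
    ray_direction = ray_direction / np.linalg.norm(ray_direction)
    denom = np.dot(plane_normal, ray_direction)
    if abs(denom) < 1e-6:
        return None                      # ray runs parallel to the plane
    t = np.dot(plane_normal, plane_point - ray_origin) / denom
    if t < 0:
        return None                      # plane is behind the camera
    return ray_origin + t * ray_direction

# Tap forwards and downwards from 1.5 m above a horizontal floor plane at y = 0.
camera = np.array([0.0, 1.5, 0.0])
tap_direction = np.array([0.0, -1.0, 2.0])
floor_hit = hit_test(camera, tap_direction, np.zeros(3), np.array([0.0, 1.0, 0.0]))
print("place the virtual object at:", floor_hit)
```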

There is also a third player in this market; while its approach is quite different, it also complements ARCore. This is of course Sony’s Xperia XZ1 and its 3D Creator, which uses photogrammetry for scanning. Similar scanning software already exists in the Play Store, for example SCANN3D, but Sony’s device and scanning system appear more advanced, and the phone ships with Android Oreo straight out of the box.


Opinions

We like both, but due to the problems with ARKit and the openness of ARCore and Android, we prefer Android. Both are great platforms to build on, though, and Apple is quickly making strides towards catching up with Google.

Both are almost there, but not quite. I believe the extra sensors on the Tango devices are worth the extra price. And while augmented reality is a neat thing on mobile devices, it only becomes a real tool for professionals when combined with augmented reality headsets. Hopefully we will soon see the Android 8 operating system making it into AR glasses, as well as Apple finally bringing out glasses of their own. In my opinion Apple’s greatest strength is its fanbase and users: they tend to be early adopters, and because of this the whole paradigm for augmented reality has shifted virtually overnight. That is something Google cannot match due to the fragmented Android field, though perhaps they can rectify it with their new Pixel line.

Jaani, 10/12/2017 - 14:49

In this blog post I talk about the Microsoft HoloLens, holography and Things Seen In News™: what they will mean to us in the business and, more importantly, what they can mean to consumers and end-users.

First, let’s take a look at…

HoloLens Spectacles

You can find the news from here: https://www.vrfocus.com/2017/05/microsoft-reveals-ar-glasses-that-look-l...

While the specs themselves are at the moment fairly crude 3D-printed frames, what they can do is quite amazing. I personally hope that Microsoft also comes out with a version that either clips onto an existing pair of prescription glasses, like the ones I have, or lets me get the HoloSpecs with prescription lenses of my own.

The thing that is most important about these glasses is that Microsoft uses them to demonstrate something extremely neat.

Holographic Near-Eye Displays of Virtual and Augmented Reality

To me the most interesting thing about the subject (https://www.microsoft.com/en-us/research/project/holographic-near-eye-di...) is that it can correct errors in vision, so if I understood correctly it could be used in place of prescription glasses. More broadly, we are moving towards a time when we will have incredible-looking graphics that are truly complex and feel like they are "there" with you, in your space, instead of the graphics we have now, which feel stickered on. Of course, this is an incredibly narrow view on my part, and the science is truly interesting.

Usually holograms are considered slow and pre-rendered, but if you look at the paper’s notes on the graphics power needed, you see a very reasonable graphics card being used. And since the system already includes vision correction, we four-eyed people could use it without glasses. Sure, that leaves the problem of seeing the real world behind the hologram in a blurry or fuzzy way (in my case a fog that seems to echo the world in twelve or so overlapping copies, but I digress). Could we actually be on the verge of easier and safer laser imaging onto the eye itself, or a hologram projected between the eye and the glasses? Nonetheless, I’d love to give it a go and be a guinea pig.

All of the corrections are done live, as I understand it, so if you focus your eyes into the distance, the graphics follow suit and blur (or stay sharp) correctly.
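
As a back-of-the-envelope illustration of that focus-dependent blur (my own toy sketch in Python, not anything from the Microsoft paper), the mismatch between where the eye is focused and where a virtual object sits can be expressed in diopters, and a renderer can scale its blur in proportion to it:

```python
def defocus_diopters(eye_focus_m, virtual_object_m):
    """Defocus between where the eye is focused and where the virtual object
    sits, in diopters (1/metres). A renderer can scale its blur with this."""
    return abs(1.0 / eye_focus_m - 1.0 / virtual_object_m)

# Eye focused 2 m away, virtual label pinned at 0.5 m: 1.5 D of defocus,
# so the label should be drawn noticeably blurred. Refocus on the label
# and the defocus drops to zero, so it is drawn sharp.
print(defocus_diopters(2.0, 0.5))   # 1.5
print(defocus_diopters(0.5, 0.5))   # 0.0
```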

Distributed Computing

This news is older, from 2016, about how you can now use Wi-Fi to distribute computing for your HoloLens: https://hololens.reality.news/news/hololens-can-now-wirelessly-use-pcs-c.... This has also been shown in other news, on a plane, and I will touch upon it later in another part of this blog.

While other systems are tethered to the computer with a wire, HoloLens and (hopefully) HoloSpecs aren’t. This lets the user move freely within the Wi-Fi area and do things: work, play, live. It gives you the freedom to do what you want and need to do.
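
As a rough illustration of the offloading idea (this is generic Python socket code, not Microsoft’s holographic remoting protocol, and the address and messages are placeholders): the headset only tracks its pose and displays frames, while a PC on the same Wi-Fi network does the heavy rendering.

```python
import json
import socket
import threading

HOST, PORT = "127.0.0.1", 9000   # placeholder address for the rendering PC

# Pretend rendering PC: bind first so the "headset" can always connect.
server = socket.socket()
server.bind((HOST, PORT))
server.listen(1)

def rendering_pc():
    """Receive a head pose, do the heavy rendering, send the frame back."""
    conn, _ = server.accept()
    with conn:
        pose = json.loads(conn.recv(1024).decode())
        frame = {"rendered_for": pose, "pixels": "...heavy GPU work goes here..."}
        conn.sendall(json.dumps(frame).encode())

threading.Thread(target=rendering_pc, daemon=True).start()

# Pretend headset: a lightweight client that only sends pose and shows frames.
with socket.socket() as hmd:
    hmd.connect((HOST, PORT))
    hmd.sendall(json.dumps({"position": [0.0, 1.6, 0.0], "yaw_deg": 15}).encode())
    print(hmd.recv(4096).decode())

server.close()
```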

So, now we have three pieces of the forming picture together.

Safety

A minor but important piece of news is that the HoloLens has passed the protective tests required for workplace safety. This means it is also safe to use out in the open: in the city, in hospitals and schools, on construction sites and so on. We’ve already seen a few news stories about HoloLens being used in healthcare and coping with delicate procedures such as surgical operations. That says quite a lot about the robustness and power of the technology.

Work

I’ve been using https://hololens.reality.news to collate these ideas and news items, and they have another neat story about using the HoloLens as a virtual monitor: https://hololens.reality.news/news/microsofts-hololens-is-going-transfor....

Here we see a HUD and AR brought together to create a new sort of workspace. I for one would love this thing at home or anywhere else. No more monitors; do your work wherever you are. In my eyes this will be bigger and more important than tablets or mobile phones… and from here we get to the…

Future of HoloSpecs

Currently Microsoft is, in my opinion, one of the best hardware manufacturers. Having killed off their smartphones and diverted their interest into tablets that more than rival laptops and desktops, they have been moving towards AR and VR computing.

I believe that very soon we’ll be seeing consumer and professional technology that will revolutionize how we use VR and AR. All this from one source, Microsoft, unless Google is able to finally get going with more handsets using Tango and Seurat, as well as proper AR-glasses.

I believe the future of tablets and phones is a Mobile Computing Unit (MCU) tied to HoloSpecs: an operating system with properly good dictation and hand gestures, the ability to use Bluetooth mice and keyboards, and maybe even virtual haptic keyboards using ultrasound or similar systems.

The MCU will be the only device we need as the specs/glasses are our screens, and soon we will also have contact lenses or even implanted lenses in our eyes. Though that is still far in the future.

However, I really do expect some sort of MCU-type Surface product from Microsoft quite soon, and once it is coupled with the HoloLens or HoloSpecs our whole world will change and we will be at the AR and VR singularity. By then we will also see Cortana, Siri, Bixby, Eliza and other such VI/AI agents better integrated into the technology, and we will literally be living in science fact, not fiction.

What do you think?

Jaani, 08/14/2017 - 13:11

I was asked to do a post about the use of virtual reality in architecture, and I have spent over a week ruminating on it and perusing material that deals with it.

One of the best articles (one that tried to make an extremely complex thing simple) was AppReal’s Virtual Reality for Architects piece; another I enjoyed, and which made me think of possibilities, was BeeBreeders.com’s The Use of Virtual Reality in Architecture. This post also incorporates ideas and thoughts I got from Archsmarter’s 5 Ways Virtual Reality Will Change Architecture.

So, let’s start!

1. VR gives you a competitive edge

Sure, why not. You’re able to take your customers into the concept building and demonstrate what it is like and how it works. Given that you know where the building will stand, you can ray-trace the light and create a photorealistic or even ultrarealistic building they can look around and explore. Similarly, you can change materials and surfaces or add things on the fly. However, if you’re not careful you will end up with a mess on the planning side and will need hours upon hours to fully clean it up.

2. Be on the forefront of an industry trend

Again, yes, sure. However, as VR is used more and more, you will notice that everyone uses it, and while it allows a lot of things, it is flawed in the same way augmented reality is. Both suffer from the same problem of being the newest, hottest thing: people will be mesmerized by the "bling" and not see the depth for the surface, and as with AR, the depth of what can be done will remain largely unexplored. We have a fix for this problem, a conceptual idea we’d love to explore with professionals. Find out about it further down this post.

3. Low costs

True. From Google Cardboard to the Vive, the entry costs are low. In the end, however, computers capable of real-time ray tracing quickly run up the costs, and to really make your work shine you want the best, since the biggest companies will have it. Like I noted before, bling is a big thing. You must also factor in the cost of training for the software as well as the hardware. The starting costs are low, but the real costs might not be. Again, read on to see the idea we have.

4. Skip rounds of reworking things

Absolutely, and the more you invest in the hardware, the better and easier it is to sculpt the building or design you are creating. This again leads to the nitty-gritty of cleaning up the mathematical end of the design, which we can help with.

5. Simulate real-world

I’ve been following the list from Archsmarter, as you’ve most likely noticed, and I believe this simulation aspect is both important and something that should always be included, automatically. How else can you really design and display your work if not in its proper place?


VR programs and layers you can use:

There are quite a few of these, and they comply with industry standards: programs like Symmetry, IrisVR, ARQVR (aimed especially at architects) and TruVision, to name a few. However, all of these programs, while allowing minor changes, are mostly for walking through a building you have created with other tools.

Versoteq’s idea for architectural VR:

We would love to find a partner to create this with. We would love to create a program, a front end if you will, where you build the building in virtual reality. There would be a sophisticated system running in the background to help you, but the builder in VR would be the one designing everything. So, in many ways, we’d love to turn the design process upside down.

Build with tools and partly freehand; free your mind to create things as you see them, from the point of view of both the designer and the user. You needn’t worry about the mathematics or anything else as you run through the creative process. Once you have things set up as you wish, Versoteq’s proprietary system, Refine3D, takes over and you are presented with the final model, which you can then change again, give new surfaces to, and so on.

Of course, building is never this easy, but we can add automated systems for things like routing the plumbing and electricity into the model. Once all this is done, the model can once again be checked for errors, and you end up with final schematics, blueprints and everything else needed to have the project inspected. If you still have doubts, you can load the project onto your mobile device, take your building to the site you’re considering, and run it there in augmented reality to make sure everything is up to your ideas and standards.

This file can then be transferred to an architect or to building officials for inspection. Once everything has been checked and found correct, the building contractor can start their work for you.

What do you think of this idea? Would you like to build it with us?

Jaani, 05/15/2017 - 14:08

Close your eyes and for a moment go back to your childhood. Picture how you thought the world would be in the 21st century, with flying cars and holographic adverts and billboards. Now open your eyes and look at our dull and bland world.

I put it to you that we are too conservative in our use of augmented and virtual reality in our common, everyday world. This is the near future:

Let’s take a look at Kone, a manufacturer of elevators among other things. They can use tablets to keep track of elevator cabins in real time, with a system created by Etteplan. They have also created a virtual showroom where you can familiarize yourself with the finished boat. Similarly, Microsoft has made the HoloLens into a real tool for construction. While Gilbane Building Company does say there are problems with it, it is a massive leap towards the future, and a tool, not a toy. Martin Bros. has taken this even further: a small builder no longer needs blueprints, they just copy what the HoloLens shows them. Easy-peasy. Perhaps this is the future of building package houses?

More companies and developers are entering the augmented and virtual reality market, but at the same time we keep running into problems with CAD and designs. Unity, among others, is creating a VR system where you can build your designs from inside VR. To me this is the first time in a long while that AR and VR have taken the right steps. We can extrapolate from this by considering interior architects and renovators.

What could you do with AR and VR in construction?


3D Visualize Home Renovations

You can take the interior of the building and, if the blueprints map out wiring, plumbing and other such things, design the upcoming renovation from the inside. You can show your customer a few different ways of doing things, and you can "peek" inside the walls to see what you need to do to accomplish the required tasks.

View the Future Home in Virtual Reality

You can walk with your client inside the existing house in virtual reality, and together you can add or remove things. You can move things around and change materials, and, more importantly, you and your customer are always on the same page, which builds their confidence in you.

But how to get this done?

Of course, once these designs have been made, the 3D model will be a mathematical mess. Luckily Versoteq has an answer to that, as with geo-tracking. We have the expertise and experience to take measurement data that looks like a mess and turn it into an automated process that runs itself through and leaves you with the correct parameters you can use to build your product, in this case the renovation.

We have a great track record of minimizing the need for CAD experts in engineering projects. We have even created a push-button pipeline from 3D scanning to printing, with no CAD needed in between. With our help and expertise you can go from a model the customer likes to the measurements and the list of needed items in seconds, instead of planning and designing on a computer for days.
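
This isn’t Refine3D or our actual pipeline, but as a rough illustration of the kind of step such an automated scan-to-print flow contains, here is a minimal Python sketch using the open-source Open3D library: load a messy scanned point cloud, filter the outliers, reconstruct a surface and export a mesh ready for print preparation. The file names and parameter values are placeholders.

```python
import numpy as np
import open3d as o3d

# Load a raw scan (placeholder file name).
pcd = o3d.io.read_point_cloud("raw_scan.ply")

# Thin out the cloud and drop stray points that sit far from their neighbours.
pcd = pcd.voxel_down_sample(voxel_size=0.005)
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Surface reconstruction needs consistent normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30)
)

# Poisson reconstruction produces a closed surface; trim the low-density
# vertices it invents far away from the original points.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.02))

# STL needs triangle normals; write the result out for print preparation.
mesh.compute_triangle_normals()
o3d.io.write_triangle_mesh("cleaned_model.stl", mesh)
```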

The future of this field of technology is bright, and we’ve only scratched the surface. Get to know our VR and AR solutions and get in touch to start a project!

Jaani, 02/06/2017 - 14:20


2016 is the year when Virtual Reality (VR) and Augmented Reality (AR) went from virtual to reality, as the technologies matured enough to benefit mass audiences. Although the number of museums adopting them is still moderate, VR/AR is no longer a fancy technology seen only in large museums; it has been used in various smaller cultural sites in Europe: the Helsinki City Museum ("Time Machine"), the Heureka Science Center ("Excavation in VR"), the Norwegian Maritime Museum ("Nobody will drown") and the Danish Castle Centre ("Ghost Hunt and VR Guide"), to name a few. The surge in VR/AR and the release of more affordable devices create new opportunities for museums and make adopting the technologies more feasible.

2017 is around the corner, so we want to take a step back and review how AR/VR has transformed the museum experience and will continue to influence the cultural sector.

Why come to a museum if people can see objects virtually?

The fear of being replaced by VR and virtual platforms exists in some museums. What role is left for museums if people can access collections immersively from the comfort of their home? Some consider the virtual realm a threat, while others see an opportunity to offer new experiences and appeal to new audiences.

VR fosters audience’s curiosity and makes them want to see more

Many of us have perhaps heard about the British Museum’s Bronze Age VR project. It successfully demonstrated that displaying objects in virtual reality did not lessen or replace real-life experiences with the objects, but rather enhanced them. VR adds context to the objects; thus, as observed in the project, the VR tour fostered the audience’s curiosity, made them want to see more, and inspired them to seek out information in real life. (Read more about the case and tips for developing VR experiences.)

AR enlivens museum exhibits

Why are museums starting to use AR? There is something fascinating about the concept: the idea of adding life to static objects in the real world with sound, visual content and additional information, and the notion of extending the limits of physical space. A smartphone can be turned into a personal guide that not only provides textual stories but also shifts time and wakes the objects up.

Swordfish AR app

One of the most prominent examples of AR is the Skin & Bones AR app used at the Smithsonian National Museum of Natural History. The app adds flesh to the bones of creatures, providing a fun learning experience and a playful platform for imagination.


Approaching the millennials

Younger generations nowadays don’t visit museums as often as older generations. It isn’t that history, culture or art aren’t relevant to millennials, but rather that the museum-going experience doesn’t match their lifestyles and expectations.

Millennials in a study conducted by the Center for the Future of Museums emphasized that interactive, immersive and participatory activities are what they want from museums. They described museums as static places ("places that exhibit things"), educational places (but not necessarily places where the learning was fun or engaging), and places where you had to be quiet and stand outside looking in.

“Even if I didn’t want to touch the Mona Lisa, I want to have the option to touch it. You go to a museum and you’re just walking around looking at everything. And not even that you want to touch anything but it just seems like ‘OK this is the museum, and this is me.’ We’re not connecting on any level other than visual.” – says one millennial.

Whether we like it or not, technology is an inseparable part of millennials’ lives today. Museums are still finding their way to the use of new technologies. Increasingly sophisticated VR/AR will bring new opportunities and immersive storytelling tools to create the kind of impact and experience that better match what young people are looking for.



About Versoteq

Versoteq is one of the leading providers of 3D scanning and AR/VR services to cultural organizations in the Nordics. We work with museums and organizations to offer engaging and accessible visitor experiences by leveraging 3D technologies. More about us: www.versoteq.com/virtualmuseum.

Tram, 11/30/2016 - 12:52
