Our journey from a VR to an XR Editor

Problems and insights on mobile WebAR

For the last year or so, we’ve been working on deciphering XR design and prototyping, and on wrapping our minds around the boggling state of XR development. To be honest, we never would have gotten this far without the help of the amazing A-Frame community. This is our chance to give back.

In the next few weeks, we are going to share our insights from developing some of our core features. Already on the list are AR support, grouping, and animations, with many more to come.
Since we are all still learning the ropes of XR development, we encourage you to comment and ask questions, and if there are specific topics you’d like to learn about, please let us know!

Our first post will be about the development of our mobile AR support.

As AR is still uncharted territory for designers and developers alike, I’m going to walk you through my discovery process while working on our mobile AR support.

Hope you’ll enjoy reading it as much as we enjoyed working on it!
Dan from Halo Dev Team.


So, what does building an AR Prototype actually mean?

Today’s AR market can be divided into two main hardware categories:

  • AR HMDs such as HoloLens, Meta, and Magic Leap. That’s what most people picture when they think about AR.
  • Mobile AR applications, which run on Android (ARCore) or Apple (ARKit) devices. If you’ve ever used Snapchat or Pokemon Go, you’ve probably already experienced basic implementations of this.

One of our product goals is to allow designers to create prototypes for consumer applications. That’s why we decided to focus initially on mobile AR, since true AR HMDs are still out of the consumer price range.
That being said, as of today both mobile AR SDKs can only detect “surfaces” (horizontal planes) and use them to compute the relative position of the phone. This lets developers position objects in AR space even while the phone is moving.

A Porg demonstrates surface detection. Notice it remembers its position even when I’m not looking!

After trying several AR apps, we understood that the common denominators are:

  1. Making objects appear on a surface (AKA: “spawning”)
  2. Making them move locally.
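That second point, moving locally, maps naturally onto A-Frame’s declarative animations. Here is a minimal sketch using the 0.8-era a-animation syntax (the model asset ID is a placeholder):

```
<a-entity gltf-model="#robot-model">
  <!-- Animate the entity's local position so the robot bobs around its
       own spawn point rather than around the world origin. -->
  <a-animation attribute="position" from="0 0 0" to="0 0.2 0"
               direction="alternate" dur="500"
               repeat="indefinite"></a-animation>
</a-entity>
```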

So it was decided for us: our first goal is to make robots appear on a surface. AND MAKE THEM DANCE.

Our inspiration and goal.

How hard can that be, right? :)


Choosing our framework

First, we had to choose our framework. It was obvious we would keep using A-Frame; it had been working fantastically so far. However, A-Frame does not have its own XR support yet. After exploring the options, I came across aframe-xr, also built by the Mozilla team, and it became the foundation on top of which we implemented mobile AR support.

When we started out, the WebVR 1.1 API was the standard. Now, having evolved with the industry, we have the new WebXR API! It allows browsers to integrate with any XR platform, without distinction, in a simple and uniform manner.
Luckily, the Mozilla team is ahead of the curve: aframe-xr already takes advantage of this through the new webxr-polyfill, while still supporting the old WebVR API.
This allows us to render both AR and VR scenes on the same framework, hassle-free (for more about WebXR, I highly recommend you read this article).
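In practice, wiring this up is mostly a matter of loading the right scripts. A minimal page sketch (the script paths and A-Frame version are assumptions; use whatever builds you’re on):

```
<html>
  <head>
    <!-- A-Frame first, then aframe-xr, which bundles the webxr-polyfill
         so the same scene can run as VR or AR. -->
    <script src="https://aframe.io/releases/0.8.0/aframe.min.js"></script>
    <script src="aframe-xr.js"></script>
  </head>
  <body>
    <a-scene>
      <a-box position="0 0.5 -1" color="tomato"></a-box>
    </a-scene>
  </body>
</html>
```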

Setting up the environment

Google has developed AR-enabled Chromium builds for both iOS and Android devices. Android works very well (no surprise there), but the iOS build doesn’t seem to work with aframe-xr.
Pro-tip: Mozilla’s browser build (the WebXR Viewer) does the trick, and don’t bother building it with Xcode: it is already published on the App Store!

Integrating with our Editor

Time to get our hands dirty. I started experimenting by simply dropping the original aframe-xr components into our framework, tried to render, and…

Or, trying to catch some boxes

As expected, the boxes just floated around the room and didn’t spawn at a fixed point. This wouldn’t do for our robots; we need to spawn them on a surface, or, as we like to call it, anchor them.

Before we continue, let’s all get acquainted with some new AR terminology: anchors are “a fixed location and orientation in the real world” (from the ARCore API). Each anchor has a unique ID, plus position and quaternion values in the scene that we can retrieve through aframe-xr.

So, given an anchor ID, you can quite easily “lock” an object to a certain location within the AR scene by giving it the anchor’s pose!
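Here’s a minimal sketch of that idea as an A-Frame component. Note that getAnchorPose is a hypothetical helper standing in for however your framework exposes anchor data; aframe-xr surfaces it through its own components:

```
AFRAME.registerComponent('lock-to-anchor', {
  schema: { anchorId: { type: 'string' } },

  tick: function () {
    // Hypothetical helper returning { position, quaternion } for the anchor,
    // or null while the anchor isn't tracked yet.
    var pose = getAnchorPose(this.data.anchorId);
    if (!pose) { return; }

    // Copy the anchor's pose onto the entity every frame so it stays
    // pinned to the real-world location even as the phone moves.
    this.el.object3D.position.copy(pose.position);
    this.el.object3D.quaternion.copy(pose.quaternion);
  }
});
```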

With the help of aframe-xr, I hard-coded an AR scene and ran it with our player. But that was not our goal; we were trying to build a dynamic AR editor!

…That’s where things started to get a bit more complex, and where I had a eureka moment.


Problem: Positioning settings are irrelevant in AR

It didn’t take too long before I realized the first conflict with our platform: each of our elements has a position property, yet when I “spawned” them at the office, that position didn’t mean anything.

Unlike in VR, World Position has no significance in AR.

That was the first design principle I learned: you do not know where the user will choose to position your models in their environment. That’s why it is useless to place them in the editor space.
BUT, we can’t ignore the significance of local (relative) position.

Let me explain:
When working in 3D worlds, each object has an absolute, or world, position, determined by where the scene sets its origin. However, each object also has its own local coordinate system, in which it imagines itself at the center of the world.

We usually treat the center of the world as the player, and we place all our objects in relation to the player’s world. But when creating an AR scene, we do not know where the player will want to position our objects. We do know how the objects are placed in relation to each other, making each object the center of its own world: a local position.
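In A-Frame terms this is just entity nesting (the IDs here are placeholders): a child’s position is expressed in its parent’s coordinate system, not the world’s.

```
<!-- The world origin is wherever the scene starts. #spawn-point has a
     world position; #robot's position is local, i.e. half a meter to
     the right of #spawn-point, wherever that ends up. -->
<a-entity id="spawn-point" position="0 0 -2">
  <a-entity id="robot" gltf-model="#robot-model" position="0.5 0 0"></a-entity>
</a-entity>
```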

When dealing with a single object, even the local position has no meaning: each model lives in a world of its own, and there is no other object for it to be relative to.
However, if we want two different models to appear at the same spot, there’s no way for me to “force” the user to spawn them at the same relative distance I planned.

We understood we needed something like a group, but a bit more than that. This is how we came up with the idea of building our own Anchors component.


Solution: Introducing the Anchors Component

After a lot of brainstorming, we came to the conclusion that we can’t just scatter objects around the scene without any positional logic.

We had just implemented the grouping system (which Or will tell you all about in our next blog post), and as we already concluded in the last part, every object in the scene has to be connected to some kind of AR anchor.

Then we thought: what would happen if we just combined the two?

An anchor in action: two models that keep their relative position

The idea behind the anchor component is simple — it’s just a group with spawning abilities.

Now even the anchor component itself can have a position relative to the spawning point. (In the example you can see the circle helper; its center is where the anchor spawns, which means my element will spawn relative to the surface intersection point.)
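To make the idea concrete, here is a rough sketch (not our production code) of an anchor as a group that hides its children until the AR layer reports a tap-to-place intersection, then moves the whole group to that point. The surface-hit event and its payload are assumptions standing in for however your AR stack reports hit tests:

```
AFRAME.registerComponent('ar-anchor', {
  schema: {
    // Offset of the group relative to the surface intersection point.
    offset: { type: 'vec3', default: { x: 0, y: 0, z: 0 } }
  },

  init: function () {
    var el = this.el;
    var offset = this.data.offset;

    // Hide the children until the anchor has actually been placed.
    el.object3D.visible = false;

    el.sceneEl.addEventListener('surface-hit', function (evt) {
      // evt.detail.point: the intersection with the detected surface
      // (hypothetical payload; adapt to your AR layer's hit-test API).
      var p = evt.detail.point;
      el.object3D.position.set(p.x + offset.x, p.y + offset.y, p.z + offset.z);
      el.object3D.visible = true;
    });
  }
});
```

Because it behaves like a regular group, the children keep their relative layout wherever the user places them:

```
<a-entity ar-anchor>
  <a-entity gltf-model="#robot" position="-0.15 0 0"></a-entity>
  <a-entity gltf-model="#minion" position="0.15 0 0"></a-entity>
</a-entity>
```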

And the best thing: designers no longer need to browse through elements to see their spawning logic. One look at the anchors and they know what spawns where, and when.

And the Grand Finale…

After adding some animations and conditions, our prototype looked like this:

Our mighty pirate showing the power of anchors

In the next few weeks we will share insights on WebXR matters, React hacking and more.

But ’til then, as promised:

DANCING ROBOTS!

Thanks volkanongun for your monocycle robot model, estudio3d.com.es for your dancing robot model, WelsEvil for your minion model, and Don Carson for your Pirates of the Caribbean skull model. You are all awesome :)
The Porg model is from the great AR Stickers app by Google.


Feel free to connect with us: Halolabs.io | Twitter | LinkedIn

We’d love to hear your thoughts, and if you liked this, give a round of applause so other people will see it here on Medium.
