The journey continues — Grouping

This is the second part of our “From VR to XR” series, and in this post, I’m going to talk about our Grouping implementation.

At Halo Labs, we are developing a design, prototyping and collaboration platform for VR and AR. 
For those of you who haven’t read our previous post, in this series we share our insights from developing some of our core features.
In this post, I’m going to explore the different ways to implement grouping and give you some insights into the challenges we’ve faced and the tools we’ve used.

But first, what is Grouping?

Grouping is the ability to group together two or more entities, so they can be handled as a single entity.
Three boxes grouped together

We wanted to add grouping support for three main reasons:

  1. To allow users to create “Composed objects” — objects composed of multiple entities.
    For example, if you want to create a globe with tacks on it, you’ll want to group a globe model with multiple tack models.
    Such a composition lets you spin the globe together with the tacks, but also define a different “action” for each tack (e.g. a different onClick event for each tack).
  2. To simplify “crowded” scenes.
    When a scene contains a lot of entities, it gets hard to manage.
    Grouping entities together simplifies the scene and makes it easier to manage.
  3. Anchors.
    As Dan mentioned in the previous post, Anchors, which are needed in AR scenes, are basically groups. Implementing the grouping functionality takes us a big step towards implementing Anchors.

Now that it’s clear what grouping gives us, let’s dive into the implementation.

Grouping Types

Grouping can be implemented in one of two ways, each with its own pros and cons:

Parent/Child entity

In the Parent/Child entity method, any entity can have child entities.
If you want to group two entities, you simply set one entity as a child of the other. This method is very popular in 3D and CAD software.
A-Frame (and its “engine”, Three.js) supports this method by default: in A-Frame you can add children to any entity simply by nesting it under another entity in the hierarchy.
One of the main problems with this method arises when you try to change the position (or the rotation/scale) of grouped entities.
In that case, the parent entity is positioned relative to the scene (“world position”), while the child entity is positioned relative to the parent (“local position”), so the same coordinates mean different things for each entity.
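
To make the local/world distinction concrete, here is a minimal Three.js sketch; the globe and tack objects are only for illustration, not code from our editor:

```js
import * as THREE from 'three';

const scene = new THREE.Scene();
const globe = new THREE.Mesh(new THREE.SphereGeometry(1), new THREE.MeshBasicMaterial());
const tack = new THREE.Mesh(new THREE.BoxGeometry(0.1, 0.1, 0.1), new THREE.MeshBasicMaterial());

scene.add(globe);  // globe.position is relative to the scene (“world position”)
globe.add(tack);   // tack.position is relative to the globe (“local position”)

tack.position.set(0, 1, 0);   // one unit above the globe’s center, wherever the globe is
globe.position.set(5, 0, 0);  // moving the globe drags the tack along with it

// The same numbers mean different things for each entity:
scene.updateMatrixWorld(true);
const tackWorld = new THREE.Vector3();
tack.getWorldPosition(tackWorld); // (5, 1, 0) in scene space, even though tack.position is (0, 1, 0)
```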

Parent/Child Entity In Three.js editor. Notice how the ball contains the box.

“Folder” entity

This method is based on the previous one, with a tiny modification.
When grouping two entities, instead of setting one as the parent and the other as the child, we create a third, “dummy” entity (we’ll call it the “folder” entity) and set it as the parent of the other two. That way, all the entities under the group have a local position (position relative to the parent), and the “folder” entity has a world position (position relative to the scene).
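
In Three.js terms, the folder can simply be an empty Group. A minimal, illustrative sketch:

```js
import * as THREE from 'three';

const scene = new THREE.Scene();
const boxA = new THREE.Mesh(new THREE.BoxGeometry(), new THREE.MeshBasicMaterial());
const boxB = new THREE.Mesh(new THREE.BoxGeometry(), new THREE.MeshBasicMaterial());

const folder = new THREE.Group();  // a “dummy” entity with no geometry of its own
folder.add(boxA, boxB);            // the children’s positions are local to the folder
scene.add(folder);                 // the folder’s position is the group’s world position

// Transforming the folder moves/rotates/scales the whole group as one unit.
folder.rotation.y = Math.PI / 4;
folder.position.set(0, 1, -3);
```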

The folder is just a “dummy” entity

It took us some time to decide which method to use, but eventually, we chose the “folder” entity method, mainly because we think it’s clearer and more consistent.

Now that we’ve decided how to implement it, all that’s left is to actually implement it. How hard can it be, right?
Well, as you can guess, the real challenges hadn’t even started…
In the next two sections, I’ll share a few of the challenges we’ve faced.

Flat scenes are easier

Before adding the groups, the structure of our scenes was always flat. Adding groups made the scene structure hierarchical, which forced us to make some changes:

Entities list

On the left side of the editor, we show all the scene’s entities.
Until the groups were added, we presented a list of entities, divided by type (UI objects, Scene objects, etc.). Adding groups forced us to change the display method. The obvious solution was a Tree View.
Our platform is written in React, and it was clear to us that someone had already implemented a React Tree View component before, so it would be a shame to reinvent the wheel. After a quick search, we found three relevant libraries: React-treeview, rc-tree, and React-treebeard.
Eventually, we chose rc-tree, mainly because of its built-in drag-and-drop support and its declarative nature. rc-tree operates a bit differently from the way we do, so we occasionally had to work around it, but all in all, it saved us a lot of time.
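
For reference, here is a rough sketch of how such a tree can be wired up with rc-tree. The entity names, keys, and the onReparent handler are hypothetical, and the exact shape of the drop event depends on the rc-tree version:

```jsx
import React from 'react';
import Tree from 'rc-tree';
import 'rc-tree/assets/index.css';

// Hypothetical entity tree, just for illustration.
const treeData = [
  {
    key: 'group-1',
    title: 'Globe with tacks',
    children: [
      { key: 'globe', title: 'Globe' },
      { key: 'tack-1', title: 'Tack 1' },
    ],
  },
  { key: 'panel-1', title: 'UI panel' },
];

function EntityTree({ onReparent }) {
  return (
    <Tree
      treeData={treeData}
      draggable
      onDrop={(info) => {
        // A node was dragged onto another node; update our own scene state accordingly.
        // (The exact fields on `info` vary between rc-tree versions.)
        onReparent(info.dragNode.key, info.node.key);
      }}
    />
  );
}

export default EntityTree;
```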

Entity list (on the left) vs. Entity tree (on the right)

Data structure and scene rendering

It was clear to us that we wanted to keep the data structure (in the database) flat. Keeping hierarchical data flat isn’t very complex, but it complicated the scene’s rendering process. We soon realized that the rendering had to be recursive, which required some changes in the structure of the components and the division of responsibility between them.
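
As a rough sketch (the field names are illustrative, not our actual schema), each entity record can simply point at its parent, and the hierarchy is rebuilt before rendering:

```js
// Flat entity records, as they might be stored in the database.
const entities = [
  { id: 'group-1', parentId: null },
  { id: 'globe', parentId: 'group-1' },
  { id: 'tack-1', parentId: 'group-1' },
  { id: 'panel-1', parentId: null },
];

// Rebuild the hierarchy from the flat list.
function buildTree(flat) {
  const byId = new Map(flat.map((e) => [e.id, { ...e, children: [] }]));
  const roots = [];
  for (const node of byId.values()) {
    if (node.parentId && byId.has(node.parentId)) {
      byId.get(node.parentId).children.push(node);
    } else {
      roots.push(node);
    }
  }
  return roots;
}

// Rendering becomes recursive: each entity renders itself, then its children.
function renderEntity(node) {
  console.log('render', node.id); // stand-in for rendering an actual scene entity
  node.children.forEach(renderEntity);
}

buildTree(entities).forEach(renderEntity);
```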

Interacting with groups

Interacting with groups is different from interacting with “regular” entities.
Here are two examples of challenges we’ve faced that are unique to group entities:

What am I referring to?

When adding an element to a group, its reference point changes.
Until now, the element’s position was relative to the scene; now it’s relative to the parent entity, i.e. the group. As a result, dragging an element into a group causes it to “jump”.

Notice how the purple box “jumps”

Obviously, that’s not the desired behaviour, so we had to recalculate the position. When adding an object to a group, we calculate the relative position (object relative to the parent) that leaves the object in its current absolute position (object relative to the scene).
Luckily, Three.js has some built-in functions that saved us from writing those calculations ourselves (if you ever need to implement something similar, make sure to check the getWorldPosition and worldToLocal functions).
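
Put together, the recalculation looks roughly like this. It’s a sketch for the position only, against a plain Three.js scene rather than our actual editor code:

```js
import * as THREE from 'three';

// Reparent `object` into `group` without making it “jump”.
function addToGroup(object, group) {
  const worldPos = new THREE.Vector3();
  object.getWorldPosition(worldPos);  // remember the absolute (scene-relative) position

  group.add(object);                  // reparent: object.position is now local to the group

  group.updateMatrixWorld(true);      // make sure the group’s world matrix is up to date
  object.position.copy(group.worldToLocal(worldPos)); // the local position that keeps it in place
}
```

Note that this only preserves position; rotation and scale need similar treatment, and newer Three.js releases also ship Object3D.attach, which keeps the full world transform when reparenting.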

Are Groups visible?

During development, the question arose of whether a group entity should be visible. If so, what should it look like? Can you interact with a group, or only with specific objects?
The approach we finally chose was a hybrid one. 
Groups do not have a visual representation in the scene, which makes it impossible to interact with a group directly. However, we allow setting animations on a group, so that a specific object acts as the trigger while the animation is performed on the group as a whole. This approach is essentially a compromise between the desire to maximize the benefits of using groups on the one hand, and to reduce the complications that come with using them on the other.

Group animation, triggered by a specific entity

And here’s the result:

In this blog post, I tried to give you a taste of the challenges of implementing a grouping solution in a VR/AR editor. 
Working on it was really fun, and it helped me better understand the new challenges that arise from programming in a 3D world.
One of the most interesting questions I found myself dealing with is how to apply software development best practices to VR/AR development, and how should those practices change in the context of VR/AR.
I think that step by step, the WebXR community is transitioning from creating scenes and tools, to building more comprehensive systems that involve multiple domains. I believe that we’ll soon see more and more discussions on that subject, and I have no doubt that these discussions will push our community forward.


Feel free to connect with us: Halolabs.io | Twitter | LinkedIn

Would love to hear your thoughts, and if you liked this, give a round of applause so other people will see it here on Medium.