Behind the Scenes of the World’s Best XR Designers

Interview with Sam Brewton from R/GA

At Halo Labs we are on a mission to reveal the ultimate workflow for creating XR digital products. We believe that this mission can only be achieved by working together with the community, sharing knowledge and exchanging thoughts.

Behind the Scenes of the World’s Best XR Designers is a series of interviews with the best and brightest talents from top companies in the immersive space, sharing with the community their failures, best practices and lessons learned on their way to building outstanding XR products.

Sam Brewton, an interaction designer at R/GA, is one of the most creative and knowledgeable designers I’ve met in the XR industry. He is an agent of innovation and an XR evangelist within the organization, pushing R/GA’s AR/VR efforts forward. Besides that, Sam is involved in various activities for the XR community and consistently shares his knowledge. He is passionate about education and was happy to share his experience in this interview.

Can you tell us about your background and how you ended up working in the AR/VR space?

I am an interaction designer focused on creating game-changing products and services with AR and VR. I work with clients to define their business goals based on user needs. Then I see projects from concept through final deployment. I also teach interaction design to undergraduates at the School of Visual Arts here in New York City.

Like many forward-thinking designers, I’ve always been interested in designing for the nexus of the physical and the digital. In Singapore, I worked with a pre-Kinect gesture recognition system funded by government grants. At the time, augmented reality was only a feature. Since then, my efforts have revolved around applying gesture recognition to environments like projection mapping. It was only in the past few years that I realized information could be projected into your eye instead of onto the environment. That caused my views on the future of human-computer interaction to really change.

Please share more about R/GA and its AR/VR activities

In 2015, I joined the Nike team at R/GA in New York City. R/GA is a global design company with a background in special effects for film. R/GA created the title sequence for the original Superman. The text flying across the screen was a new application of design and technology to cinema. As an agency that reinvents itself every 9 years, being at the forefront of innovation is in our DNA. Working towards that future is what brings a lot of top talent and clients to R/GA. R/GA has provided me the opportunity to design 3D business applications and augmented reality campaigns. I have achieved more in a short span of three years than I believe I could have achieved elsewhere.

R/GA Studios created an early Tango prototype that understood the physical space, including vertical planes, before the HoloLens Development Edition was available. Since then, R/GA has created work across the mixed reality spectrum. Everything from award-winning VR experiences and AR that leads to commerce and prizes, to even more functional spatial computing that leverages financial information. As AR and VR technologies become available in every modern mobile device, brands are increasingly requesting AR experiences.

Tell us about your design team

With regard to teams and their roles, one particularly beautiful aspect about developing AR and VR is the creative people it brings together. Since it encompasses all disciplines, it attracts people from every creative background and specialization. We gather input for new AR experiences not only from developers and engineers, but creative technologists, 3D designers, product designers, and social media strategists.

With AR’s accelerating adoption rate and increase in demand, and R/GA’s heritage in the field, our teams have actively been testing and exploring the possibilities of AR and VR. In the wider creative communities, we see that people are extremely interested in learning how to start incorporating AR and VR, but they often don’t have any starting point or know what tools to use. That is one of the reasons we host a class on Prototyping with AR and VR at R/GA University here in New York City.

Can you walk us through your design workflow?

Our initial workflows usually revolved around prototyping within Unity or Unreal, pushing a build every time to test new functionality. But that is not really prototyping so much as iterative development. Early on we realized that designing on 2D screens for 3D experiences was a barrier. We explored various methods such as LARPing at room scale or using physical props like LEGO bricks. We found WebVR, specifically A-Frame, to be effective at defining 3D experiences. While A-Frame is easy to code for anyone who knows HTML, it is still not entirely accessible to all creatives. Furthermore, there is no collaborative cloud-based implementation of A-Frame that allows multiple users to edit the same scene.
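To illustrate why A-Frame is approachable for anyone who knows HTML: a complete, viewable 3D scene is just a handful of declarative tags. This is a minimal sketch, not a scene from the interview — the shapes, colors, and positions are placeholders:

```html
<!-- Minimal A-Frame scene: open this file in a browser to view it.
     Editing a shape's position, rotation, or color is a one-attribute change,
     which is what makes it workable for rapid, low-fidelity prototyping. -->
<html>
  <head>
    <script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- A primitive standing in for a product, one meter up and two meters away -->
      <a-box position="0 1 -2" rotation="0 45 0" color="#4CC3D9"></a-box>
      <!-- A 360-degree photo could replace this flat-color sky via <a-sky src="..."> -->
      <a-sky color="#ECECEC"></a-sky>
      <a-plane position="0 0 -2" rotation="-90 0 0" width="6" height="6" color="#7BC8A4"></a-plane>
    </a-scene>
  </body>
</html>
```

Because the whole scene is text, it can be versioned and shared like any web page, though, as noted above, not yet co-edited live by multiple users.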

An early WebVR prototype, made while on vacation, exploring basic interaction with a 3D object (a shoe) in a 360° environment (Blend’s San Diego shoe store), built in A-Frame.

My current efforts are focused on refining our prototyping processes so we can quickly manifest a concept that can be shared and iterated on before development begins. I’ve outlined three criteria for prototypes. The first is that the output breaks beyond the 2D canvas so it can be mutually understood; even the initial concepting output has to break beyond the 2D canvas. The second is that the output must be repeatable: the process has to be simple enough that it can be iterated upon and accessible enough for different roles to have creative input. And the third is that it is precise enough to lead to development.

AR Run Data Prototype — an example of how I prototype to expand my skills. Run data exported, made 3D in TinkerCAD, Google Maps background, placed into Halo Labs.

What is your general approach to VR/AR design?

People often want to think about AR at the micro-interaction level. With seemingly endless interaction possibilities in AR, that is an easy trap for someone to succumb to. Interaction concepts may also not be fully achievable because of technological limitations.

I prefer to think at a high level. You can still design an AR experience based on user needs. By defining a user journey, you can quickly see the moments where AR can apply. This also allows you to design how a user accesses the AR moment and where the AR leads to.

Creating multiple moments that feature AR allows us to make more compelling stories that literally change over space and time. But the point where our current methods for information architecture break is in defining those AR moments. As we design for spatial computing, site maps no longer sufficiently describe the experience of information across space.

Working with Cognitive 3D, an R/GA Ventures backed company, I learned about defining the difference between objects and the scene, a concept similar to set design where something is either a prop or part of the set. These new paradigms are vastly different from what 2D designers are accustomed to.

What is the biggest challenge you face around your workflow and how do you solve it?

Being able to create a compelling vision, even at low fidelity. By vision I mean both functional experiences that allow stakeholders to understand the purpose, and the ability to experience the interaction. Form and function for 3D applications means both the visual form and the interaction functions: visuals in relation to the context, and interaction in relation to the user’s body. To date, there has not been a quick way to share a vision with someone else.

We solve this by designing the logic and the interaction separate from the design of the content and visuals. Of course we design the logic with the content in mind and vice versa, as we want to be able to merge the form and the function together early in our process.

What were the biggest mistakes you made when designing for AR at first?

A common early mistake I see among product designers creating 3D experiences, whether AR or VR, is misunderstanding the Z-axis and the camera as the viewport.

They may say “We’ll have this button on top of the object,” when they mean in front. Or they may say “When the user interacts with the object, it scales,” because in a 2D platform like Photoshop, the image may scale, but in a 3D experience it just moves closer to the camera. But this is largely the result of designing and viewing 3D experiences on 2D computer screens. We can’t fault the creatives for the software world not keeping up with the hardware technology.
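The scale-versus-distance confusion above maps to two different transforms in 3D. A hedged A-Frame sketch of the distinction (the entity values are illustrative, not from any project described here):

```html
<!-- In 3D, "scale" changes an object's size in the world, while moving it
     along -Z only brings it closer to the camera. On a flat screen both can
     look like "the image got bigger", which is the source of the mix-up. -->
<a-box position="0 1 -3" scale="2 2 2" color="#4CC3D9"></a-box> <!-- actually twice as large -->
<a-box position="0 1 -1.5" color="#EF2D5E"></a-box>             <!-- same size, just nearer -->
```

A user standing beside the objects in VR would immediately see the difference, which is exactly what a 2D viewport hides.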

Earlier in the design process, the necessity for a visual reference in front of us is paramount. When workshopping a concept with clients, we can use a whiteboard to define the 2D user interface and refer to it. I found that miming 3D experiences is not beneficial, as “here” or “there” cannot be mutually agreed on without the environment in front of us. A mini-map can partially solve this conundrum, but there is still a disparity between the 2D UI and the non-visible environment, even if you are pointing to a location.

Later in the development cycle, QA often consisted of screenshots or recordings because we were checking the functionality, visuals, and lighting all at the same time. Prototyping interactions with primitive shapes or placeholder objects is desirable so we can focus discussion on the right questions at the right time.

What is the most exciting project you are working on these days?

I am extremely excited about teaching more people how to prototype AR and VR so we can collectively push the limit. Beyond dense game development tutorials, there are not that many solid sources of knowledge on how to start creating AR or VR experiences. Those that do exist are targeted towards people with highly technical backgrounds, not necessarily designers or other creatives. I recently completed technical editing on Google Daydream VR Cookbook, by Sam Keene, a UX Engineer at Google and my former colleague. It is the first book on both Google Daydream and ARCore. Sam Keene’s book is one early example that lowers the barrier to entry for creating AR and VR experiences. I believe there should be a lot more methods for creatives to learn these new ways of working.

My current efforts based around this problem space are twofold. The first is focused on refining our prototyping process and spreading it across creative networks. The second is teaching people how to prototype with AR and VR through platforms like R/GA University, where attendees experience some of the work we have created like NIKEiD VR Studio and even get to prototype with us.

One development I anticipate, but which does not yet exist, is an AR and VR learning platform specifically for creating AR and VR. How can someone learn the fundamentals of creating an AR or VR experience through a platform that is itself 3D and spatial? Similar to learning a foreign language in the target language, the best way to learn about designing for interactive three dimensions is in three dimensions. A platform like that could be game-changing on a meta-level.

Feel free to connect with us: | Twitter | LinkedIn


Like what you read? Give Dror Spindel a round of applause.
