Camera AR “Walk-Through” (Adobe Aero)

I recently downloaded Adobe Aero, a simple visual editor for AR experiences, and used it to create prototypes for an educational model of a camera.

  • Aero is extremely easy to use but quite basic in its features. I experimented within its constraints to come up with an experience for informal education. One of its native features is the ability to import a Photoshop file as a set of flat layers.

  • Inspired by this import feature, I used Adobe Illustrator and Photoshop to create graphic assets for a layered “exploded” view of a camera.

  • I believe one of AR’s greatest strengths is its ability to create dynamic spaces. With this in mind, my goal was to design a “surreal” physical exhibition for a science museum, where the user can walk through models, view them from all angles, and toggle different layers of the experience.

Tabletop model

First, I created a model scaled for a tabletop, envisioning an educational app in which students view and manipulate AR models on a personal device. This version also served as a scale model before I built the full room-sized experience.

Major design choices:

  • Tabletop models are viewed top-down, so the info cards are angled upwards.

  • The text is scaled to be legible on a screen as small as a smartphone.

  • The layers are spaced far enough apart to differentiate visually, but close enough together that a seated user can easily pan across all of them.

I also created a layer showing the paths of light rays that refract through the body of the camera. A user could toggle this layer on and off.

Room-scale model

Next, I prototyped my original goal of a model that a user can walk through and navigate from multiple angles to easily trace the path that light follows into the camera. I tested my design on both a tablet and a phone.

Major design decisions:

  • I chose to keep the camera layers as 2D assets to make it more intuitive to walk straight through them along a path that closely corresponds to the path of light into the lens. 3D pieces read as solid objects to be observed from the side, while 2D layers make the directionality clearer.

  • The info tiles always rotate to face the user, which keeps them legible from a distance in any direction. This behavior also lets users navigate the model along any path, as shown in the videos below.

  • I tested interactions such as the tiles appearing only when the user approaches each layer, or the camera layers fading when the user is about to walk through them. However, these interactions mostly distracted users from the content, and tapping the device's screen while holding it up and walking proved difficult.
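Aero is a no-code tool, so I configured these behaviors visually rather than scripting them. As a sketch of the underlying techniques, though, the face-the-user tiles and the proximity fade can be expressed as two small functions. This is an illustrative sketch, not Aero's API; the coordinate convention and fade thresholds are assumptions:

```python
import math

def billboard_yaw(tile_pos, user_pos):
    """Yaw (radians) about the vertical axis that turns a tile to face the user.

    Positions are (x, y, z) tuples with y up; a yaw of 0 faces +z.
    Only the horizontal offset matters, so the tile never tilts up or down.
    """
    dx = user_pos[0] - tile_pos[0]
    dz = user_pos[2] - tile_pos[2]
    return math.atan2(dx, dz)

def layer_opacity(distance, fade_start=1.5, fade_end=0.5):
    """Linearly fade a camera layer as the user approaches it.

    Fully opaque beyond fade_start metres from the layer,
    fully transparent once closer than fade_end metres.
    """
    t = (distance - fade_end) / (fade_start - fade_end)
    return max(0.0, min(1.0, t))
```

In a custom engine these would run every frame, re-orienting each tile toward the tracked device position and blending each layer's opacity as the user walks the light path.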


User feedback

Users reported that the camera visual was intuitive to understand in both prototypes, since the designs made the model's directionality clear. The light rays were a fun touch and helped them understand how light from the environment is manipulated inside the camera, a concept that is often confusing in a static 2D diagram. However, without guidance it was not clear that users could walk straight through the layers of the room-scale model, even though that path was an intended affordance.

Future work

I imagine a full implementation of this exhibit would include more affordances for walking through the layers instead of just around them, perhaps footprints on the floor. In keeping with the idea of a surreal physical exhibit, the AR could include a virtual kiosk for toggling the model instead of a button in the app: a kiosk preserves the immersion of manipulating a physical exhibit, while a button draws attention to the device and app being used.

Other material for the camera exhibit could include an animation of light entering the camera, the shutter snapping, the sensor absorbing the light, and the final image being displayed.

Appropriate media for the full-size AR experience could include a tablet, which I used here, or an AR headset with hand-tracking capabilities for interacting directly with the model.

