1st prize for best use of ESRI's AR SDK at the MIT Media Lab Reality Virtually Hackathon, 2019.
In January 2019, the MIT Media Lab hosted the Reality Virtually Hackathon. My four teammates and I built an AR app that lets you create "stop motion videos" you can walk through frame-by-frame. We won first prize for best use of ESRI's AR SDK.
The night before I got to Boston, I kept thinking about how we record and experience memories. Specifically, I got caught up in trying to understand nostalgia: why it's so powerful, and what triggers it. Photography and video are common triggers, but physically revisiting places, scents, and sounds can be just as evocative (or even more so!).
I came up with a bunch of different one-liners and analogies to describe what I was imagining: a stream of photos/videos you'd physically have to walk through in order to view them. "Memory worms" and "nostalgia tunnels" stuck with me the most.
We were heavily inspired by time-travel science fiction, in particular Donnie Darko. I won't explain too much about Donnie Darko here, but one part of the movie centers around "intention beams" which are streams that come out of people's chests that show their future life path.
My ideas were also heavily influenced by Imagining 10 Dimensions - the Movie, a documentary that describes how to imagine theoretically higher dimensions of spacetime beyond 3D. In physics, a "world line" essentially describes the path that an object traces as it goes through time. If you took an entire snapshot of your life, you could imagine it as a long, undulating snake that starts with you as a baby, and ends with your death. Below is an example of this world line.
After some more frantic brainstorming, I ended up stuck between making a photo app, a video app, or a tool for creating narratives, stories, or choose-your-own-adventure games. With Netflix's Bandersnatch having been released not long before, I pitched "make stories in space" to a bunch of people and managed to pull together a surprisingly perfect team.
We used paper prototypes to test out the idea of content floating in space. Surprisingly, no scissors, printers, or colored paper were available for us to use at this event, which was really frustrating when all we needed to do was see what floating content would look like. As honest millennials, we broke up The Road Not Taken into several stanzas and laid it out on our table.
We quickly iterated on and debated multiple aspects of this prototype, but most importantly we considered whether AR would actually improve the experience of reading the poem. We were skeptical about a textual AR experience, partly because of this prototype and partly because we kept coming back to photos and videos as the heart of the experience.
Learning how to balance UX design with just jumping into coding (especially for such an experience-heavy application) was tough. We worked on design and development in tandem, but made sure the core features and interactions were well planned out first, building only the basic pieces (placing content in space, recording the environment, etc.) before committing to specifics.
My teammates Jenny, Mira, and Adriana (bless their souls) all worked hard on the UX flows for onboarding, creation, and editing. We discussed and edited our flows collaboratively, which helped direct my development and feature focus. When we spoke to a few mentors and admitted we were unsure what to focus on (a story creation tool, a story viewing tool, etc.), they told us to focus on the one core part of the experience that makes it unique. After discussing our current wireframes and ideas, we realized it was really the idea of passing through a memory.
We were the only team that used Xcode and Swift for the implementation of the mobile app. Development-wise, everything was built with Apple's ARKit. While I was working on the frontend and primary interactions, my teammate James was working on integrating the ESRI API into our app to give it location persistence. One of our artistic pitches was being able to take a video in the summer and revisit that same experience in the winter, so prototyping location persistence was key to delivering on that pitch.
Our first coded prototype was able to capture the location of the camera in space, take a snapshot of the camera feed, and place it onto a plane at that location. If you pass through the photo's location it fills the camera view.
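The core of that first prototype can be sketched in plain Swift. The types and names below are hypothetical stand-ins (the real app reads the camera transform and snapshot from ARKit's `ARSCNView`); this sketch only models the logic of recording frames at positions and filling the view when the camera passes through one:

```swift
import Foundation

// Minimal stand-in for a 3D position (the real app uses ARKit transforms).
struct Vec3 {
    var x, y, z: Double
    func distance(to other: Vec3) -> Double {
        let dx = x - other.x, dy = y - other.y, dz = z - other.z
        return (dx * dx + dy * dy + dz * dz).squareRoot()
    }
}

// One captured "memory frame": a snapshot placed at the camera's position.
struct MemoryFrame {
    let position: Vec3    // where the camera was when the photo was taken
    let imageName: String // stand-in for the captured snapshot
}

final class MemoryTunnel {
    private(set) var frames: [MemoryFrame] = []

    // Record a frame at the current camera position.
    func capture(at position: Vec3, imageName: String) {
        frames.append(MemoryFrame(position: position, imageName: imageName))
    }

    // If the camera passes within `radius` of a frame, that frame fills
    // the view; return the nearest such frame, or nil if none is close.
    func frameFilling(view camera: Vec3, radius: Double = 0.15) -> MemoryFrame? {
        frames
            .filter { $0.position.distance(to: camera) <= radius }
            .min { $0.position.distance(to: camera) < $1.position.distance(to: camera) }
    }
}
```

In the app itself, each `MemoryFrame` is rendered as a textured plane in the scene, and the distance check runs against the live camera position every frame.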
Our first prototype showed each captured image as a preview inside its frame. This was pretty trippy, but we realized that having the previews cover each frame made the experience less mysterious than it could've been.
Our second prototype used plain gray frames, both to make each tunnel more mysterious and to make the frames easier to distinguish from the environment.
Trying out this prototype was definitely emotional. Here we can see our teammate Mira appearing then disappearing as we pass through a memory tunnel. Being able to capture a scene of a person then revisit it without that person there made nostalgia hit home.
Admittedly, we did a lot less user testing than we should have. But on the second-to-last day of coding, we had five different people try the app without any interference from us. Very quickly we saw a few unexpected behaviors and complaints:
Nobody could tell where these "memory tunnels" started
Most weren't aware when they were recording the tunnels
Most didn't know how to start recording
Nobody could stay perfectly in-line with the captured images
Nobody could see the surrounding frames
Some people were drawing with the frames instead of making straight paths (this was really cool!!!)
In the last 12 hours or so of the competition (including getting some sleep), we implemented several features to make it as user-friendly as possible for judging.
Added brief instructions to indicate what to do during first-time use
Added a large recording indicator that flashes while a tunnel is being created
Changed the frame previews from solid colors/images to just outlines
Added animated indicators at the start of every memory tunnel
Displayed the frame that is slightly ahead of the camera position rather than the frame that is perfectly overlapped with the position. This let us create a feeling of movement and gave the user context for the location of the rest of the frames, making curved paths easy to follow.
Added stronger haptic feedback during every frame change
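The look-ahead fix above can be sketched as a pure function over the ordered frame positions. The names here are hypothetical (the real app reads positions from ARKit); the idea is simply to find the frame nearest the camera and display the next one along the recorded path instead:

```swift
import Foundation

// Minimal stand-in for a 3D position.
struct Point3 {
    var x, y, z: Double
    func distance(to o: Point3) -> Double {
        let dx = x - o.x, dy = y - o.y, dz = z - o.z
        return (dx * dx + dy * dy + dz * dz).squareRoot()
    }
}

// Frames are ordered along the recorded path. Instead of displaying the
// frame the camera currently overlaps, display the one slightly ahead,
// which pulls the viewer forward and makes curved paths easy to follow.
func frameToDisplay(cameraAt camera: Point3, path: [Point3], lookahead: Int = 1) -> Int? {
    guard !path.isEmpty else { return nil }
    // Index of the frame nearest the camera.
    let nearest = path.indices.min {
        path[$0].distance(to: camera) < path[$1].distance(to: camera)
    }!
    // Step ahead along the path, clamped to the last frame.
    return min(nearest + lookahead, path.count - 1)
}
```

Because the displayed frame is always one step ahead, the viewer also gets implicit context about where the path bends next, which is what made the curved "drawing" tunnels followable.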
In a very well-caffeinated three days, we prototyped a functioning version of Continuum. Our final submission (in our opinion!) turned out awesome and really captured the idea of walking through a memory. Below is our submission video, and the devpost can be found here.
We won a thing!
We were so honored to have won ESRI's prize for best use of their API. James worked incredibly hard on the integration with my AR frontend to give our app location persistence, and while the final version didn't have every part of it working, we were able to save our photo sequences to ESRI's cloud service and link them together. This project was an incredible group effort; please check out the work of my teammates linked in the caption above.
After the awards ceremony, there was a public expo where we got to show our creations to visitors! We received so many warm compliments, and many people were excited about the app's future. We heard lots of proposed use cases, and several visitors expressed interest in developing the project further, perhaps as an open-source app.
We want to let you save your projects in space (location persistence) so you can access your older sequences, or other users' stop-motion photo sequences, in any space. Adding sound syncing to each photo path, plus photo and sound editing for each path, is also a top priority.
I personally think something like this will eventually become a common form of new media and a way we experience and share memories. Once Apple has some AR glasses out, this will be one of the first things I'm going to try to make for it 😊