Demo or Die: How Reality Labs’ Display Systems Research Team Is Pushing the VR Industry Toward the Future
UPDATE August 9, 2023: We’re honored to share that the Display Systems Research team at Reality Labs Research has been awarded three prizes as part of the SIGGRAPH 2023 Emerging Technologies program.
Flamera, the team’s reprojection-free light field passthrough demo, was named Best in Show, while the team’s Butterscotch Varifocal prototype received the Audience Choice Award and is an Official Selection for the Digital Content Association of Japan.
Congratulations to the entire DSR team!
Seeing is believing. It’s an old saying, sure, maybe even a little cliché. And yes, there are skeptics out there—but we’d argue they just haven’t found the right VR experience to make them a convert []-)
While there are few things as magical as your first VR “a-ha” moment, we’ve seen glimpses of the future in recent years that have proven equally powerful. And those lucky enough to attend SIGGRAPH 2023 in Los Angeles August 6 – 10 will have the opportunity to experience that “wow” factor twice over, thanks to two new demo prototypes from the Display Systems Research team at Reality Labs Research.
Optical Scientist Yang Zhao and Research Scientist Grace Kuo.
The first is Butterscotch Varifocal, which combines the varifocal technology from our Half Dome series of prototypes that we’ve been working on since 2015 (and talking about publicly since 2018) with a retinal-resolution VR display that we debuted in 2022.
The second is Flamera, a computational camera that uses light field technology for reprojection-free VR passthrough. Like our reverse passthrough demo, which we originally shared in 2021 in a far from consumer-ready form factor, it certainly doesn’t look or function like a contemporary headset, but it may not be as off-the-wall as it appears at first glance.
Both Butterscotch Varifocal and Flamera are very much in the research stage, and the technologies they’re investigating may never make their way into a consumer-facing product. And, as we hope you’ll come to see, that’s very much the point.
Butterscotch Varifocal: Fine Details in Focus
Optical Scientist Yang Zhao wearing the Butterscotch Varifocal research prototype.
Today’s headsets do a great job of immersing us in virtual worlds, whether they’re highly stylized or photorealistic. However, they’re currently limited by a fixed focal distance: anything about a meter or more away from your eyes appears clear, but hold something up close to your face and most people can’t bring it into sharp focus. Luckily, most VR developers design around that fact, keeping the action at a comfortable distance so everything stays in focus.
But what if you’re using VR for work and need to read text on a virtual screen? Or there’s an interesting object that you’d like to inspect up close to appreciate all the tiny details? That’s where varifocal comes in.
By leveraging eye tracking technology and moving the display closer to or farther away from your eyes depending on where you’re looking, the system allows you to focus at various depths for a more natural, realistic, and comfortable experience. Combine that with a retinal-resolution display and you get crisp, clear visuals that rival what you can see with the naked eye.
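To make that mechanism a bit more concrete, here’s a minimal sketch (our illustration, not the team’s actual control software) of how an eye-tracked varifocal loop might work: estimate the depth at which the eyes converge, then use a simple thin-lens model to decide where to place the display so its virtual image lands at that depth. The function names, the 40 mm focal length, and the vergence-only depth estimate are all assumptions made for illustration.

```python
import numpy as np

def vergence_depth(ipd_m, left_inward_rad, right_inward_rad):
    """Estimate fixation distance from how far each eye rotates inward.
    Returns the distance (in meters) at which the two gaze rays cross."""
    convergence = np.tan(left_inward_rad) + np.tan(right_inward_rad)
    return np.inf if convergence <= 0 else ipd_m / convergence

def display_position(focus_depth_m, lens_focal_length_m):
    """Thin-lens model: distance from lens to display such that the display's
    virtual image appears focus_depth_m in front of the user."""
    f = lens_focal_length_m
    if np.isinf(focus_depth_m):
        return f                        # collimated light: image at infinity
    D = focus_depth_m
    return f * D / (D + f)              # from 1/s_display = 1/f + 1/D

# Hypothetical numbers: 63 mm IPD, 40 mm lens, each eye rotated 1.8 degrees inward.
depth = vergence_depth(0.063, np.radians(1.8), np.radians(1.8))
offset_mm = 1e3 * display_position(depth, 0.040)
print(f"fixating ~{depth:.2f} m away -> display {offset_mm:.1f} mm behind the lens")
```

In practice the headset also has to filter noisy gaze data, render depth-of-field blur, and update distortion correction as the display moves, which is part of why the hardware and software have to be designed together.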
“This research aims to demonstrate a VR display system which provides visual clarity that can closely match the capabilities of the human eye,” says Optical Scientist Yang Zhao. “Retinal resolution means that the headset can provide sharp details that are close to the limit of what human eyes can perceive. On top of that, a varifocal display supports the range of accommodation that human eyes have so that the high resolution can be perceived at different focal depths.”
Optical Scientist Yang Zhao.
The culmination of eight years of research motivated by a desire to improve resolution while enhancing visual comfort, Butterscotch Varifocal is, to our knowledge, the first prototype headset to achieve varifocal with a retinal-resolution display of roughly 60 pixels per degree (PPD), which is sufficient for 20/20 visual acuity.
“In some sense, Butterscotch Varifocal is the ‘greatest hits’ of the Half Dome and Butterscotch series: drawing on the proven mechanical varifocal mechanism in Half Dome 1 and 2 to rapidly and reliably change the optical focus, integrating the full rendering, distortion correction, eye-tracking, and varifocal control software from past Half Dome headsets to support full PC VR content, and using the Butterscotch display and optics as the foundation to achieve retinal resolution,” explains Display Systems Research Director Douglas Lanman. “So why don’t we call this Half Dome 4? The reason is that this headset, unlike past Half Dome headsets, is solely focused on showcasing the experience of retinal resolution in VR—but not necessarily with hardware technologies that are ultimately appropriate for the consumer.”
Display Systems Research Director Douglas Lanman.
In order to achieve retinal resolution with LCD panels readily available on the market, Butterscotch Varifocal has a reduced field of view (50 degrees, compared to greater than 90 degrees for Meta Quest 2), and its form factor is somewhat larger than that of consumer headsets like Quest 2.
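To see why the field of view had to shrink, here’s some rough back-of-the-envelope arithmetic (ours, not the team’s, beyond the 60 PPD and 50-degree figures quoted above). 20/20 acuity corresponds to resolving about one arcminute, so “retinal resolution” works out to roughly 60 pixels per degree; with a panel of fixed pixel count, pixel density and field of view trade off directly. The 3,600-pixel panel width below is a hypothetical round number.

```python
# Rough arithmetic behind "retinal resolution" and the field-of-view tradeoff.
ARCMIN_PER_DEGREE = 60
acuity_limit_arcmin = 1.0                       # 20/20 vision resolves ~1 arcminute
retinal_ppd = ARCMIN_PER_DEGREE / acuity_limit_arcmin   # ~60 pixels per degree

fov_deg = 50                                    # Butterscotch Varifocal's field of view
print(retinal_ppd * fov_deg)                    # ~3,000 pixels needed across the FOV

# Spread a hypothetical 3,600-pixel-wide panel across different fields of view:
panel_width_px = 3600
print(panel_width_px / 90)                      # 40 PPD at a Quest 2-like 90 degrees
print(panel_width_px / 50)                      # 72 PPD at Butterscotch's 50 degrees
```

The same panel that falls well short of the acuity limit at a 90-degree field of view clears it comfortably at 50 degrees, which is exactly the tradeoff Lanman describes.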
“In contrast, our work on Half Dome 1 through 3 focused on miniaturizing varifocal in a fully practical manner, albeit with lower-resolution optics and displays more similar to today’s consumer headsets,” adds Lanman. “Our work on Half Dome prototypes continues, but we’re pausing to exhibit Butterscotch Varifocal to show why we remain so committed to varifocal and delivering better visual acuity and comfort in VR headsets. We want our community to experience varifocal for themselves and join in pushing this technology forward.”
While Half Dome 3 introduced electronic varifocal, which significantly reduced the size of the system compared to its mechanical predecessors, the team opted to return to a mechanical design for Butterscotch Varifocal in light of the tradeoffs involved.
“While the electronic varifocal solution in Half Dome 3 emphasizes the small form factor, a lighter weight, and being completely solid-state, adding another optical element or module into the optical design inevitably introduces performance loss like light transmission or reduces image quality, no matter how small those changes might be,” Zhao says. “For Butterscotch Varifocal, our top priority was to create the best visual quality. Thus, we chose to integrate a mechanical varifocal system to optimize the image quality at some cost to the system size and weight.”
And just as the hardware has evolved over time through both the Half Dome and Butterscotch projects, the software has improved and matured in tandem.
“To create a great varifocal experience, the hardware and software need to work together seamlessly,” says Research Scientist Olivier Mercier. “Our understanding of distortion correction, eye tracking, rendering, and latency have all been refined to create a high-quality experience that can use our best varifocal hardware to its maximum capabilities. Over the years, varifocal has also moved from a niche research topic to something that a lot more people have interest in. Our varifocal software has evolved from an unstable, research-y, one-off branch of the main code into something that’s now much better integrated with the rest of our VR platform. This makes collaboration with other teams much easier and makes for a much more polished experience where varifocal is an integral part of the rendering pipeline. This tighter integration also means that we support many more games and applications today, compared to our early varifocal headsets that would often use more simplistic or custom demo content.”
Research Scientist Olivier Mercier.
The end result is nothing short of extraordinary. As you look around a scene, the things that draw your attention snap magically into focus and tiny details that may have previously gone unnoticed shine. Take Lone Echo II from Ready At Dawn as an example. From the textures of your android chassis to the highlights of Liv’s hair, all of the intricacies pop. And the demo experience brings to life the developers’ years of passionate efforts and meticulous attention to detail like never before.
“With Butterscotch Varifocal, we can see things in VR in a similar resolution and quality compared to what we see in the physical world,” says Zhao, “which opens up doors to many new possibilities of building amazing experiences in VR.”
Flamera: A Radical New Take on Passthrough
Research Scientist Grace Kuo wearing the Flamera research prototype.
VR is great when you want a fully immersive experience, like being transported to new worlds, living out an interactive narrative, or watching a captivating film on the biggest possible screen. But there are also times when it pays to be a little more connected to the outside world—when something unexpectedly enters your playspace, for example, or if you want to bring virtual content into your physical environment. Cue mixed reality (MR)—and passthrough, which lets you see a digital recreation of the physical world inside your headset.
This is something that modern headsets like Quest Pro do well (and Quest 3 will do even better). However, today’s passthrough relies on cameras mounted on the headset, typically a couple inches in front of where your eyes sit. That means the cameras capture a different view than what you’d see if you weren’t wearing a headset. The images can be computationally reprojected to the “correct” view, but this can result in visual artifacts. And even if you placed the cameras directly in front of the eyes, the view would still be off due to the thickness of the headset.
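For some intuition about where those artifacts come from, here’s a minimal sketch (our own illustration, not Meta’s passthrough pipeline) of the usual approach: unproject each camera pixel using an estimated depth, translate it to where the eye sits, and project it again. It assumes a pinhole camera model, ignores any rotation between camera and eye, and uses hypothetical inputs; wherever the depth estimate is wrong, or the eye should see surfaces the camera never captured, the warped image shows holes and distortions.

```python
import numpy as np

def reproject_to_eye(image, depth, K, cam_to_eye_m):
    """Naive depth-based forward warp of a passthrough camera image into a
    virtual camera placed at the eye (pinhole model, camera and eye axes parallel)."""
    h, w = depth.shape
    out = np.zeros_like(image)
    v, u = np.mgrid[0:h, 0:w]
    pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).T   # 3 x N homogeneous pixels
    pts = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)          # 3D points in the camera frame
    proj = K @ (pts + cam_to_eye_m.reshape(3, 1))                  # shift to the eye, reproject
    uv = np.round(proj[:2] / proj[2]).astype(int)                  # eye-view pixel coordinates
    ok = (0 <= uv[0]) & (uv[0] < w) & (0 <= uv[1]) & (uv[1] < h)
    out[uv[1, ok], uv[0, ok]] = image.reshape(-1, image.shape[-1])[ok]
    return out   # holes remain wherever the eye sees surfaces the camera didn't capture
```

Flamera’s bet is that if the optics capture the right rays in the first place, most of this warping, and most of its failure modes, simply goes away.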
“To address this challenge, we brainstormed optical architectures that could directly capture the same rays of light that you’d see with your bare eyes,” says Research Scientist Grace Kuo. “By starting our headset design from scratch instead of modifying an existing design, we ended up with a camera that looks quite unique but can enable better passthrough image quality and lower latency.”
Research Scientist Grace Kuo.
Like a traditional light field camera, Flamera (think “flat camera”) uses a lens array, but it strategically places an aperture behind each lens in the array. Those apertures physically block unwanted rays of light so that only the rays matching what your eyes would see are captured (whereas a traditional light field camera captures far more than just those rays, resulting in unacceptably low image resolution). This architecture concentrates the sensor’s finite pixels on the relevant parts of the light field, resulting in much higher-resolution images.
The raw sensor data ends up looking like small circles of light that each contain just a part of the desired view of the physical world outside the headset. Flamera rearranges the pixels, estimating a coarse depth map to enable a depth-dependent reconstruction.
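For a rough feel of that rearrangement, here’s a toy “shift-and-add”-style reassembly of our own devising (a generic light field illustration, not the Flamera algorithm): each lenslet contributes a small patch of the scene, and where each patch lands in the output depends on the lenslet’s position and a disparity scale that, in a real system, would come from the estimated depth map. All names and the disparity model are illustrative assumptions.

```python
import numpy as np

def mosaic_lenslets(patches, centers_mm, disparity_px_per_mm, out_hw=(1080, 1080)):
    """Toy shift-and-add reassembly of per-lenslet patches into one view.

    patches:             list of small HxWx3 arrays, one per lenslet
    centers_mm:          list of (x, y) lenslet positions on the sensor, in mm
    disparity_px_per_mm: how far (in output pixels) a patch is displaced per mm of
                         lenslet offset; deriving this from the estimated scene depth
                         is what makes the reconstruction depth-dependent
    """
    H, W = out_hw
    acc = np.zeros((H, W, 3))
    weight = np.zeros((H, W, 1))
    for patch, (cx, cy) in zip(patches, centers_mm):
        ph, pw = patch.shape[:2]
        x0 = int(W / 2 + cx * disparity_px_per_mm - pw / 2)
        y0 = int(H / 2 + cy * disparity_px_per_mm - ph / 2)
        if x0 < 0 or y0 < 0 or x0 + pw > W or y0 + ph > H:
            continue                        # skip patches that fall off the canvas
        acc[y0:y0 + ph, x0:x0 + pw] += patch
        weight[y0:y0 + ph, x0:x0 + pw] += 1.0
    return acc / np.maximum(weight, 1.0)    # average where neighboring patches overlap
```

Because the optics do most of the ray selection up front, the remaining computation stays comparatively light, which is part of how the design enables the lower latency Kuo mentions above.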
All of this results in a view of the physical world, seen through the headset, that more closely approximates what the eye would see naturally, with fewer artifacts than commercial headsets on the market today and at a higher resolution than traditional light field cameras produce. That opens the door to even more realistic MR experiences that seamlessly blend virtual content with our view of the physical world in the future.
“The Flamera optical design works best when the headset is thin, which lets us put the passthrough cameras as close to the user’s eyes as possible,” Kuo notes. “However, the camera sensor we were using had a lot of electronics that made the headset substantially thicker, so we had to design custom flexes to move those electronics out of the way. In general, it was an exciting challenge to go from the concept of light field passthrough to a full, working headset.”
As with Butterscotch Varifocal, the goal of Flamera isn’t to show something that’s viable for a consumer product—at least, not yet.
“With our work on light field passthrough, we’re focused on previewing the experience of perspective-correct MR passthrough,” says Lanman. “We previously shared our work on MR passthrough with the SIGGRAPH community through the neural passthrough project, which was aimed at using machine learning methods to synthesize passthrough imagery with fewer artifacts than existing commercial systems. However, neural passthrough still produces reprojection artifacts and requires workstation-class GPUs to operate in real time. Light field passthrough takes a computational imaging approach, where the camera hardware and reprojection algorithms have been designed to work in concert—greatly simplifying the computational challenge of synthesizing a viewpoint with the correct perspective.”
Research Scientist Nathan Matsuda spearheaded this recent wave of experientially focused research, kicking off and managing the perspective-correct passthrough program, as well as programs on reverse passthrough and high dynamic range (HDR) VR that the team previously demoed at SIGGRAPH. Much like Half Dome Zero, which began the Display Systems Research team’s varifocal investigations, the hardware coming out of these programs is usually bulky, somewhat unwieldy, maybe even a little strange. But it hints at what the future might hold if researchers and product engineers buy into the underlying vision and continue to push the envelope.
Research Scientist Nathan Matsuda.
“Our light field passthrough demo gives you a preview of what MR passthrough might one day become,” Lanman says, “if we can fully refine the light field camera hardware driving this prototype. As with reverse passthrough, it may look odd in this prototype form, but we hope the community is inspired by their experience with this early demo. It points a path forward for perspective-correct passthrough that may seem ‘wacky’ today but could very well prove practical to meet the user experience and form factor challenges of tomorrow’s MR headsets.”
“We envision a passthrough experience that lets you seamlessly interact with your surroundings while wearing a headset,” adds Kuo. “High-quality passthrough lets us convincingly overlay digital content on our view of the physical world while leveraging the high-contrast, wide-field-of-view displays of VR headsets. We focused on one important aspect of passthrough, which is the camera perspective, and our demo shows a first look at what passthrough could look like with correct perspective and without the distortion artifacts caused by computational passthrough approaches.”
In other words: There’s still a long road ahead, but it’s a promising beginning.
Demo or Die: The Road (Back) to SIGGRAPH
While the Display Systems Research team has been researching and publishing in the open for years—its “Starburst” HDR VR prototype headset marked the team’s first public demo—it has long held to an ethos of “demo or die,” which has served it well under Lanman’s leadership.
“The expression ‘demo or die’ originates, for me, from the MIT Media Lab, where I and thousands of other researchers learned this mantra,” he says. “The point is to use the demo to explain the ‘why’ rather than the ‘what’—to help someone new to a technical challenge understand why they’d want a technology, like varifocal, light field passthrough, and HDR in the first place and, through the demo, to offer some decent approximation of what the ultimate experience would be if this technology could be fully realized in a practical, scalable manner. That is the core of Emerging Technologies at SIGGRAPH: to demo nascent, primarily hardware-driven technologies and, in doing so, to encourage the SIGGRAPH community to bring their talents to specific problems in computer graphics and interaction.”
The culmination of roughly a year of internal demos and user studies, Starburst was a rousing success and managed to take home SIGGRAPH 2022’s Emerging Technologies Best in Show Award.
“The experience was great for those of us involved,” recalls Matsuda. “We got to meet and run the demo for a number of luminaries in the HDR display field as well as many interesting people from a wide range of other backgrounds—CEOs, execs at major film studios, NASA scientists. The success of the demo motivated us to double down on showing our work to the community and served as a blueprint for the specifics of our demo setups this year. In particular, I wanted to try sort of a science museum approach to explaining our work last year by showcasing an exploded view of the hardware as the centerpiece for the booth. This proved to be an excellent mechanism for explaining the work to people with widely varying technical expertise, to draw in visitors and keep people in the long lines interested. We’re replicating that with explanatory hardware displays for both prototypes this year as a result. I’m very excited to see the new demo booths come together and for the conversations that we’ll have.”
The Display Systems Research team likes to place a heavy emphasis on the “systems” in its name—these are multifaceted, multitalented individuals working across disciplines to arrive at unique, holistic, and oftentimes unexpected solutions to the challenges facing the industry. Its North Star vision is to show the world that there’s a potential path forward—not with an emphasis on productization, but rather with a keen eye toward inspiring others to see the promise of this (admittedly difficult) work and take up the mantle as well in order to push all of us—academia, industry, and consumers—forward to a brighter future. And now, having expanded its show floor presence from one demo to two, the Display Systems Research team seems well on its way to doing just that.
“Demoing at SIGGRAPH has always been the plan,” says Lanman. “For myself, SIGGRAPH Emerging Technologies has been the most important event in my career, starting from my first demos of light field displays over a decade ago through to the two we’re exhibiting this year. We had hoped to demo past Half Dome headsets, but the COVID-19 pandemic put a stopper in those plans. The whole team is delighted to finally ‘demo or die.’ As a VR/MR enthusiast, I’m happy we can be judged on the merit of our prototypes, rather than the quality of our press releases.”

Before we close, we asked the team what one thing they hoped readers would take away from this story. Here’s what they had to say:
“I think passthrough is an important technology for letting people stay connected with their surroundings while in a headset. By building demos without the constraints of a commercializable product, we can test new technologies and showcase different experiences in VR. I hope people who try our demos can get a glimpse into what future VR could look like—even if the headsets have a very different appearance than what we’re used to today.” —Research Scientist Grace Kuo
“With our work, I hope people can see from a first-person point of view what experiences and abilities VR can unlock for them. It might not be something they can pick up in a store right away, but it could be some day if the demand is there and the technology continues to advance.” —Optical Scientist Yang Zhao
“There’s a big distinction between what’s possible in VR and what’s currently available on the market. Our two SIGGRAPH Emerging Technologies demos offer a unique chance to see how much passthrough and accommodation can help create more realistic virtual experiences if each axis is pushed to a much higher quality—without taking into account the constraints of a productizable headset.” —Research Scientist Olivier Mercier
“I hope that those planning to attend SIGGRAPH set aside time to see these two demos. While we can’t say when these technologies may be commercialized, we do believe that sharing them with the community will accelerate the development of both accommodation-supporting displays and perspective-correct passthrough. SIGGRAPH is an incredibly talented and academically broad community—if only a small fraction of its members move their attention to these challenges, that will pull in the timeline to develop considerably better visual experiences in VR/MR. For those outside of SIGGRAPH, I hope they take inspiration in the fact that this small team in industry continues to pursue solutions to these daunting challenges spanning both hardware and software, openly sharing them with the wider community. I also hope they’ll be excited about the specific problems being tackled: making better visuals for both fully VR and emerging passthrough MR experiences.” —Display Systems Research Director Douglas Lanman
For more details, check out the Butterscotch Varifocal abstract and Flamera full paper.
One More Thing... 😉
We asked Lanman one other question, and his response was just too good not to share:
Douglas Lanman: This question is a bit like answering which is better: the Beatles or the Rolling Stones? What I mean here is that, in industry, there’s a classic divide between working in research labs or in product teams. There are pros and cons with choosing to spend your career in one or the other.
For the members of the Display Systems Research team, we’ve decided to work on the frontier of the industry. With a small research team, we can build a small number of prototypes of any given concept. But we have to accept that the majority of those prototypes won’t ultimately find their way into consumer products—at our company or elsewhere. Some will be impractical to mass produce, some will be solving a problem that isn’t crucial enough to justify compromising the product size, weight, cost, or power, and some will need several other technologies to mature before it’s their turn.
Take varifocal as an example. We first learned about it from papers written in the 1990s, first started working on it ourselves in 2015, and first started speaking about it publicly in 2018. Even if it were in a product today, that would be a long, long journey. In that time, we’ve seen Oculus (now Meta Quest) ignite as a startup and ultimately ship three major generations of headsets, each building on the last. Product teams have the responsibility to move forward rapidly with intent, but only research has the privilege to dabble: to consider what might be one day, rather than what needs to be right now. Research can’t control when a product team might choose a technology; and, if we’re working on challenges they aren’t immediately faced with, it may be a long wait until they find themselves in need of a solution we’ve long since prototyped.
So I see this a bit differently after working for almost two decades as a research scientist: the primary responsibility of research is to point out what product doesn’t know already. No one asked for varifocal or to solve vergence-accommodation conflict. No one asked for light field passthrough or reverse passthrough. We first built these demos to show ourselves (and then our product colleagues) that these were even problems worth some attention. And we’re doing that now with the broader SIGGRAPH community. We hope sharing these demos more widely will help advance the larger industry and pull in the timeline when such computational imaging concepts could be practically realized.
We’ve spoken about emerging display technologies, such as varifocal, for nearly a decade now. We can’t speak to when the industry may land these technologies in products—but we’re confident that the demos will convince the majority that these are challenges worth solving. We’re excited to hear from SIGGRAPH attendees on what they think of these challenges and the two technological solutions we’re proposing to solve them with.
By the way, the better band is obviously the Beatles: I expect they’d also make the better researchers, as they were always willing to try something new, even if they ended up with Rocky Raccoon.


