Behind the Build: Bringing Connect 2024 to Life in Meta Horizon Worlds


In 2022, we brought Connect to Meta Horizon Worlds for the first time, letting people experience our flagship annual conference in fully immersive VR. While that world showcased everything you could accomplish with the tools native to Horizon at the time, we knew we could do something even better. This year, we upped the ante and pushed the limits of what could be achieved using Horizon’s integrations with tools like Unity and Blender, while improving the live, stereoscopic 3D video broadcast of the keynotes in the event space.

This year's world was more minimal and stylized, making it a fantastic canvas for the visual effects and animations that brought the experience to life. Full of light, colorful accents, and movement, the world relied on PopcornFX and Unity Asset Bundle animations to bring vibrance and excitement to the environment.

The cozier world was built as a floating island with a central viewing area facing the screen, a play area featuring portals to top worlds like Super Rumble, and a tech lounge showcasing new hardware announced during the keynote. Attendees could traverse the space within seconds, and its vistas of other floating islands and keynote-themed branded assets brought a sense of space and wonder that evoked the magic of VR while staying grounded in inspiration drawn from our Menlo Park campus.

Having a smaller environment created more natural opportunities for socializing, and social gaming activities helped spark interactions. These mini-games were crafted around three modalities for connection: sharing photos in the Epic Selfie, collecting coins in the Coin Hunt, and finding people with shared interests in the Badge activity. Each activity was easily approachable, understandable, and eye-catching, and they all helped people build meaningful connections.

The Epic Selfie stand transformed as more players posed for the camera, creating a dynamic and picture-perfect moment. We used the new Media Wall gizmo, just released to the Horizon Worlds community, to let people share their favorite shots and leave their mark on the world. Nearly 2,000 people took advantage of the selfie station—that’s about one in five attendees.

The layout encouraged natural movement toward a series of raised platforms behind a small tower surrounded by four pressure pads. To their surprise and delight, people who stepped on the pads saw coins shoot out of the tower and scatter across the world. The more people on the pads, the more coins released, though the activity was also playable solo, with haptic, audio, and visual feedback that made coin collecting engaging.
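The scaling rule described above, where each additional player on a pad releases more coins per burst, could be sketched roughly as follows. The function name, constants, and numbers here are illustrative assumptions, not the actual Horizon Worlds implementation:

```typescript
// Hypothetical sketch of the Coin Hunt scaling rule: more players standing
// on the pressure pads release more coins per burst. All values are
// assumptions for illustration.
const BASE_COINS = 5;       // coins per burst with a single player
const COINS_PER_EXTRA = 3;  // bonus coins for each additional player
const MAX_PADS = 4;         // the tower is surrounded by four pressure pads

function coinsForBurst(playersOnPads: number): number {
  // Clamp to the physical number of pads; no players means no coins.
  const active = Math.max(0, Math.min(playersOnPads, MAX_PADS));
  if (active === 0) return 0;
  return BASE_COINS + (active - 1) * COINS_PER_EXTRA;
}
```

Keeping the solo baseline nonzero is what makes the activity fun alone while still rewarding groups for piling onto the pads together.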

Virtual conference-goers could also customize a badge with multiple interests to spark conversation with others. Once a badge was created, participants could press a button to light up the badges of those around them with shared interests, while colorful visual trails pointed toward matches farther away, encouraging people to find each other and make connections.
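The matching step behind the button press could be sketched as partitioning other attendees into nearby matches (whose badges light up) and distant matches (who get a trail). The types, field names, and the `NEARBY_RADIUS` cutoff below are assumptions for illustration, not the production logic:

```typescript
// Hypothetical sketch of the Badge activity's matching step.
interface Attendee {
  name: string;
  interests: Set<string>;
  distance: number; // metres from the player pressing the button (assumed)
}

const NEARBY_RADIUS = 10; // assumed cutoff between "light up" and "send trail"

// True if the two interest sets overlap at all.
function sharesInterest(a: Set<string>, b: Set<string>): boolean {
  for (const interest of a) {
    if (b.has(interest)) return true;
  }
  return false;
}

function findMatches(mine: Set<string>, others: Attendee[]) {
  const lightUp: Attendee[] = [];    // nearby: their badge lights up
  const sendTrail: Attendee[] = [];  // far away: a visual trail points to them
  for (const other of others) {
    if (!sharesInterest(mine, other.interests)) continue;
    (other.distance <= NEARBY_RADIUS ? lightUp : sendTrail).push(other);
  }
  return { lightUp, sendTrail };
}
```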

These social activities enhanced the overall experience and left lasting delight as interactions were rewarded with Connect-exclusive swag, including avatar wearable rewards and stickers that could be unlocked and used on the spot.

The main event was the keynote itself, with the viewing area customized for everyone’s comfort. We designed different audio zones that allowed for local conversation, no conversation, or open conversation. Attendees were free to choose the right environment for them for the most enjoyable viewing experience.

To highlight the new products announced in the keynote and to showcase what’s possible in Horizon, larger-than-life versions of Meta Quest 3S and Ray-Ban Meta glasses appeared beside the screen, synced to their announcements during the keynote. This presented a challenge: there was about 15 seconds of latency between the action on stage and the moment the stream hit headsets. The animations were manually triggered to sync perfectly with the keynote for every viewer, no matter their time zone. While these timing triggers were custom-made for Connect, we hope creators were inspired by this glimpse of what may be possible in the future with Events.
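One way to picture the delay compensation described above: an operator marks the moment a product is announced on stage, and the in-world animation fires after the known stream latency so it lands when viewers actually see the announcement. The constant and function below are illustrative assumptions, not the custom system built for Connect:

```typescript
// Hypothetical sketch of delay-compensated triggering. The ~15 s figure
// comes from the observed gap between the stage and headsets; the function
// shape is an assumption for illustration.
const STREAM_DELAY_MS = 15_000;

// Returns how long to wait before firing the animation so it coincides
// with the cued moment reaching viewers; never schedules in the past.
function triggerDelayMs(markedAtMs: number, nowMs: number): number {
  return Math.max(0, markedAtMs + STREAM_DELAY_MS - nowMs);
}
```

Because every headset receives the same delayed stream, a single compensated trigger lines up for all viewers at once, regardless of time zone.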

This year’s Connect also marked the first time people could engage with the world on mobile and web in addition to fully immersive virtual reality. There were unique challenges in crafting an experience that was engaging from both a first- and third-person perspective, easily navigable on mobile, and intuitive to control whether you were using a touchscreen, a keyboard and mouse, or Quest’s Touch controllers. Reading text was tricky on a flat screen, and selfies couldn’t be taken by those outside of VR. In the viewing area, FOV camera adjustments automatically switched cross-screen users from third- to first-person so avatars wouldn’t block the screen. Facing these challenges was important to make the experience as accessible as possible to a larger audience.

That larger audience was also accommodated by the world itself in VR—with up to 25 people in each instance. Each choice for the world was carefully considered to ensure it remained as performant as possible, running at 72 frames per second without hitches or frame drops. The team used the newest tooling available for performance metrics including content traces to assess the weight of different aspects of the world, deep traces to check for calls on the server, and Unity profiling to check draw calls and vertices to stay within best practices. Creators can learn more about how to optimize their worlds through the developer portal.

For those who jumped into the world on Quest, we delivered a live stereoscopic 3D rectilinear video stream of both the main keynote and the developer keynote. We were able to reduce the size of our 3D camera rigs by 50% this year, improving audience sight lines and providing more seating IRL.


Not only did the newly released Blackmagic Micro 4K G2 camera bodies allow us to set the two cameras needed for stereoscopic capture close enough together to match the distance between a person’s eyes, they also enabled us to use automated zoom lenses, giving us more granular control of shot composition and letting us place the camera farther back in the venue. Special software was designed to control the camera and stereoscopic image in real time to ensure a safe and comfortable 3D experience in-headset.

We used four camera positions this year, giving us the coverage needed for the Connect keynotes:

  • Two positions behind the front seating areas, giving us dedicated perspectives on the two main presenter areas on stage
  • One wide shot situated at the back of the room on the main camera platform
  • One shot fixed to the bottom of the JitaCam, a small crane arm attached to a rail running front-to-back in the venue

This was a similar production to last year but with some major improvements to camera hardware and video production workflows. If you missed the Keynote and Developer Keynote in Horizon Worlds, you can check out both in stereoscopic 3D in Theatre Mode in Meta Quest TV, available now in-headset.

In the end, the team built a world that was more performant and expressive than its earlier counterparts—purposefully smaller and more intimate for a more vibrant experience. There were many moving parts in this year’s Connect world, and we had to develop a lot of custom solutions to make it work. We hope that work inspires the Horizon Worlds creator community to continue putting our current tooling to the test and to bring their own unique visions to life.