Charting the Next Frontier With AI

For the last decade we’ve been exploring the frontier of computing, trying to figure out what comes after smartphones and laptops. Exploration is tough work. You spend a lot of time charting terrain that turns out to be a dead end. You get excited about promising ridgelines only to find another mountain range behind them. And every so often, you hit a valley with great soil and sunlight, and you build something there — VR, hand tracking, spatial understanding, AI glasses.
But for most of that journey, the sheer scale of the frontier loomed large. Unmapped. Full of potential but also full of friction.
AI has been the watershed. It’s like someone handed us better instruments. Sharper compasses, more reliable weather forecasts, clearer satellite imagery. We’re still exploring, but the jumps we make each generation are bigger, and our confidence about the direction is higher. The platform we’ve been chasing for more than a decade is finally starting to come into focus.
This shift is happening most clearly with AI glasses. They’re now the defining new hardware category of the AI era, and we’ve sold millions of them since launching Ray-Ban Meta. Their growth trajectory today looks similar to some of the most popular consumer electronics devices of all time.
From Intention to Action
While the glasses themselves have already found their stride, this year a new technology arrived alongside them that offers a fundamentally new mode of human-computer interaction. Meta Neural Band is an entirely new kind of wearable, and by translating subtle neural signals into digital actions, it lets your glasses feel like a much more natural extension of you. Since the birth of personal computing, our industry has been working on ways to tighten the loop between human intentions and digital actions, from the mouse and keyboard all the way to today’s touchscreens. But we’ve never had anything like this.

This makes for a much more seamless connection between humans and machines. And as the saying goes for any new technology at the beginning of its S-curve, this is the worst it’ll ever be. The EMG technology that enables Meta Neural Band is capable of mapping thousands upon thousands of motor neurons to almost any gesture, so while we launched it with a simple set of actions like clicks, scrolls, and dials, it will be able to recognize increasingly complex inputs over time.
Historically neural interfaces only worked for a subset of people and required tedious calibration. AI changed the equation. With the right datasets and training regimes, our teams built generalizable models that decode neural signals across the broadest population ever. No calibration. No setup. It just works.
And it keeps getting better. Soon we’ll roll out EMG handwriting — dozens of new neural signals that allow the band to reliably recognize every letter of the alphabet. The more data we gather, the more capable it becomes. This is a foundational shift in how humans will interact with computers.
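To give a rough sense of what decoding means here, consider a heavily simplified sketch: multi-channel EMG is sliced into short windows, each window is reduced to a simple feature vector, and a classifier maps it to a gesture. The channel count, window size, features, and nearest-centroid classifier below are illustrative assumptions, not how Meta Neural Band’s models actually work.

import numpy as np

# Illustrative assumptions only: 16 EMG channels sampled at 2 kHz, decoded in
# 100 ms windows, with a toy nearest-centroid classifier. Not Meta's design.
CHANNELS = 16
WINDOW = 200  # samples per 100 ms window at 2 kHz

def window_features(emg_window):
    # Root-mean-square amplitude per channel, a classic EMG feature.
    return np.sqrt(np.mean(emg_window ** 2, axis=1))

class NearestCentroidDecoder:
    # Toy decoder: average the feature vectors seen for each gesture during
    # training, then label new windows by whichever centroid is closest.
    def fit(self, windows, labels):
        feats = np.array([window_features(w) for w in windows])
        self.labels_ = sorted(set(labels))
        self.centroids_ = np.array(
            [feats[[l == g for l in labels]].mean(axis=0) for g in self.labels_])
        return self

    def predict(self, emg_window):
        dists = np.linalg.norm(self.centroids_ - window_features(emg_window), axis=1)
        return self.labels_[int(np.argmin(dists))]

# Usage sketch with random data standing in for real recordings.
rng = np.random.default_rng(0)
train = [rng.normal(scale=s, size=(CHANNELS, WINDOW))
         for s in (0.5, 1.0, 2.0) for _ in range(20)]
labels = [g for g in ("click", "scroll", "dial") for _ in range(20)]
decoder = NearestCentroidDecoder().fit(train, labels)
print(decoder.predict(rng.normal(scale=2.0, size=(CHANNELS, WINDOW))))  # prints "dial"

The real problem is far harder, of course: making this work across the broadest population with no calibration is exactly what the AI-trained models described above make possible.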

Virtual and Physical
It’s not a coincidence that the first device to ship with the Meta Neural Band was also our first pair of display glasses. This is another hardware advance that fundamentally changes the way people interact with computers. Until now, digital life has always existed on screens that take you away from the world around you. You have to choose between one and the other, and as time goes on that’s a choice people are less and less happy to make.
Display glasses are a major step toward a time when we don’t have to make this choice anywhere near as often.

Technologies like these move us closer to a computing platform that complements our lives more than it intrudes on them. But it is AI that has the greatest potential to make more digital experiences feel like an extension of who we are. And it’s on glasses where AI does the best job bridging the gap between digital life and the world around you.
At Connect this year we previewed a new glasses feature that shows how powerful this can be. Conversation focus intelligently augments your hearing by amplifying the voice of the person you’re speaking with. Because your AI glasses understand your environment, they can single out that one voice even in crowded spaces with lots of other people around.
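To illustrate the basic principle of direction-selective amplification (a toy sketch with made-up geometry, not our actual approach), a simple delay-and-sum beamformer adds microphone signals so that sound from one direction reinforces itself while off-axis sound partially cancels:

import numpy as np

# Illustrative assumptions: two microphones 15 cm apart, 48 kHz audio.
SAMPLE_RATE = 48_000
MIC_SPACING_M = 0.15
SPEED_OF_SOUND = 343.0

def steering_delay_samples(angle_deg):
    # Inter-microphone delay, in samples, for a source at this angle
    # (0 degrees = straight ahead, 90 degrees = fully to one side).
    delay_s = MIC_SPACING_M * np.sin(np.radians(angle_deg)) / SPEED_OF_SOUND
    return int(round(delay_s * SAMPLE_RATE))

def delay_and_sum(mic_a, mic_b, look_angle_deg):
    # Align both channels toward the look direction and average them: sound from
    # that direction adds coherently, sound from elsewhere partially cancels.
    aligned_b = np.roll(mic_b, -steering_delay_samples(look_angle_deg))
    return 0.5 * (mic_a + aligned_b)

# Usage sketch: a 220 Hz "voice" straight ahead plus a 1500 Hz interferer at 60 degrees.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
voice = np.sin(2 * np.pi * 220 * t)
interferer = np.sin(2 * np.pi * 1500 * t)
mic_a = voice + interferer
mic_b = voice + np.roll(interferer, steering_delay_samples(60))
focused = delay_and_sum(mic_a, mic_b, look_angle_deg=0)
print(np.sqrt(np.mean((focused - voice) ** 2)))  # residual interference, well below its original level

Real conversation focus has to cope with moving speakers, reverberant rooms, and overlapping voices, which is where the glasses’ AI and understanding of your environment come in.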

This is the kind of thing that makes us so excited about AI glasses. When you combine digital intelligence with hardware that can act as a second pair of eyes and ears, you get an experience that can feel much more natural and human than the interfaces we’ve spent our entire lives adapting to. It can be smarter in ways that go far beyond answering questions or writing code. Eventually it will be a fully proactive helper that gets things done all on its own, like the way Oakley Meta Vanguard glasses can learn to take photos at just the right moment during your run or ride.

Every step in this direction gets us closer to a platform that anticipates your needs and understands your intentions without demanding your constant attention. It lets the technology fade into the background so you can focus on what actually matters.
From Imagination to Experience
There’s another side to our vision that’s just as important as this more natural interaction model. The next computing platform should also feel more like an extension of the life you’re living. The ability to simulate reality is one of VR’s real superpowers, and a technological shift now underway is enabling experiences that feel as real and expressive as the world around you.
A combination of new rendering technologies and a full spectrum of AI tooling is making it much easier for anyone to create fully immersive virtual worlds. Using 3D Gaussian splatting, it’s possible to capture the world around you and convert it into high-fidelity digital replicas — everything from entire explorable environments to ultra-realistic avatars in Horizon. We’ve begun integrating these new rendering technologies across the platform, beginning with the new Hyperscape worlds that rolled out this month.
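At its core, a Gaussian splatting renderer represents a scene as a huge number of translucent colored blobs and blends whichever blobs cover a pixel from front to back. Here’s a minimal sketch of that blending step for a single pixel; the scene structure, blob shapes, and everything else a real renderer handles are left out:

import numpy as np

def composite(splats):
    # splats: list of (depth, rgb, alpha) tuples covering one pixel.
    # Front-to-back alpha compositing: each splat contributes
    # color * alpha * (light still passing through everything in front of it).
    color = np.zeros(3)
    transmittance = 1.0
    for _, rgb, alpha in sorted(splats, key=lambda s: s[0]):  # nearest first
        color += transmittance * alpha * np.asarray(rgb, dtype=float)
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:  # stop once the pixel is effectively opaque
            break
    return color

# Usage sketch: a semi-transparent red splat in front of a mostly opaque blue one.
pixel = composite([
    (2.0, (0.0, 0.0, 1.0), 0.9),  # farther, mostly opaque blue
    (1.0, (1.0, 0.0, 0.0), 0.4),  # nearer, semi-transparent red
])
print(pixel)  # red contributes 0.4, blue contributes 0.9 * 0.6 = 0.54

Capturing a space means fitting a very large number of these splats to a capture of a real scene so that this kind of blending reproduces it from any viewpoint.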
With nothing more than a Quest headset, people can now quickly capture the scene around them and turn it into a realistic virtual space that can be shared online for friends to explore together. Just like Instagram helped people share their photos and videos, Hyperscape makes it possible for people to share their world.

And the same technologies behind Hyperscape can also enable a much higher level of realism and fidelity across every kind of experience. VR already lets people create and experience magical things that aren’t possible anywhere else, which is why immersive gaming has driven the success of the industry over the last decade. When rendered digital content can be converted to splats, games and other immersive content can take on new levels of realism.
AI is also making it much easier for creators and developers to build entire playable 3D worlds using nothing but text prompts. The tools we launched in Horizon this year let people generate realistic imagery, video, audio and 3D scenes, and over time these generations will also be capable of ever-increasing levels of realism.

Between this new mode of AI generation and the full fidelity of Hyperscape, we now have line of sight to a whole new class of experiences. They’ll combine the boundless creative possibilities of classic gaming with the realism of the natural world: Just describe the kind of space you’d like to be in, and suddenly you’re in it.
The end result will be an even wider spectrum of experiences to choose from. Spaces that can only exist in our imaginations, or an exact copy of your best friend’s home. Games that take you to the surface of an alien planet, or have you roaming the same city center you know from daily life.
When VR feels more like the world around us, even the most familiar parts of life can be experienced in new ways. This is especially true for the best movies and TV shows, which already look great on a big virtual screen in VR. But they’re getting even better as studios take advantage of what’s possible in a headset. When you see M3GAN in the new Blumhouse Enhanced Cinema app, the special effects spill off the screen into the room around you.
And a new wave of immersive entertainment is coming. Pioneering directors like James Cameron are working to scale up the creation of top-tier 3D content across the industry. This is another place where the range of experiences possible in VR will keep expanding.
What’s Next
The technologies that came to life in 2025 will be front and center as the computing platform of the future takes shape over the coming years. VR will offer a wider range of experiences that feel more like the world around you: going to movies with friends, visiting someone’s house, or just being together with other people.
Glasses will become increasingly capable of understanding your intentions and delivering real-life context to your personal AI, making it a better helper and assistant. And the same technologies that enable glasses to understand physical reality will also help robots do the same, which is why Meta stood up a new robotics research lab in 2025. That team is now racing toward the state of the art and benefiting from everything we’ve learned over the last decade of work on VR, AI, and wearables.
And ultimately, the metaverse will bring all these advances together. We’re well on the way to a new wave of technology that feels more like the world around us and less like looking at a screen.


