If you’ve chased a Pokémon down the street or visited Antarctica via the New York Times’ VR app, you’ve already experienced how quickly immersive digital experiences have begun to feel real. This first wave of virtual reality (VR) and augmented reality (AR) is using smartphones, glasses, and goggles to place users in the middle of a 360-degree digital environment or overlay digital artifacts on the physical world.
Other early forms of immersive experiences have appeared elsewhere in the marketplace. For $8, anyone can buy a Google Cardboard VR viewer to explore 360-degree visual environments on a smartphone, but that’s just the beginning. The British Army uses a 360-degree VR video for recruiting; Lowe’s has its VR Holoroom, where customers can build a mockup of their kitchens and bathrooms with new cabinets and appliances. L’Oréal has deployed an AR application that lets customers try on makeup at their laptops; medical schools are adopting Microsoft’s HoloLens so students can learn from diagnostic information and surgical instructions overlaid on patients’ bodies.
Diving deeper into a digital world
VR and AR hardware has advanced so quickly that the boundaries of what’s possible seem to be expanding by the day. Nvidia’s Project Holodeck, launched in May, generates photorealistic VR environments that multiple people can occupy and interact in simultaneously. Lenovo’s New Vision AR subsidiary (LNV) and AR engine firm Wikitude announced in June that they’re collaborating on a cloud-based platform that will deliver industrial AR applications over LNV’s next-generation smartglasses. At the same time, Meta, which introduced the first commercially available AR system in 2014, unveiled its new AR Workspace and Meta 2 headset, which let users interact with AR artifacts by simply touching them as if they were physical objects.
The speed at which these technologies are evolving — and, let’s face it, the sheer science-fiction thrill of using them — is good reason to get excited about the possible ways they could be used, from design and maintenance to customer service. It’s not just that we’ll use new tools to perform existing tasks like consuming content, viewing instructions, augmenting employee performance, or delivering more engaging customer experiences. It’s that we could create entirely new ways of doing things.
Before long, you’ll be able to create a VR avatar that looks like you, sounds like you, and can meet with other VR avatars in an entirely realistic virtual meeting space. You could sit around a table in a digital conference room — or tour the digital twin of a factory, or attend a keynote speech — with colleagues from around the world, and interact both with them and with your surroundings, all without leaving your desk or, if you prefer, your home. With sufficient computing power and a smart enough AI, you could even program your VR avatar to participate in a virtual meeting as your proxy, and (theoretically) to do a good enough job that your colleagues would never guess it wasn’t actually you. That will raise questions about how to tell an avatar being controlled live by a human from one being operated by a bot — and whether to make the difference both obvious and mandatory.
Soaking it in — and soaking in it
As of now, of course, a 100 percent immersive experience that’s indistinguishable from real life is impossible. For one thing, not every task is best done in a VR or AR environment. More importantly, though, current technologies lack the processing power, throughput, and battery life needed to stream enough data to build an entirely convincing digital world outside the confines of a purpose-built facility with wired headsets. In addition, although the eyes are the primary user interface for VR and AR, making experiences even more immersive will require companies to engage other senses as well, especially touch and sound.
Raw horsepower aside, making technology truly immersive means aligning it with the physical world. That’s going to require more sensors to make more objects interactive; technology infrastructure powerful enough to create convincingly realistic 3D models; and screens, glasses, and other interfaces smart enough not just to show data, but to interpret it and allow us to interact with it.
The smaller sensors get, the easier it will become to embed them in everything, or even to use 3D printers to make objects out of materials that function as sensors themselves. Our entire physical environment will be intelligent and interactive, gathering and responding to all kinds of information, from the ambient temperature to hand gestures, in real time.
Imagine being in a factory — or an operating room — in which every item has an AR overlay or VR presence that lets you drill into information about that item, handle a digital version of that object, or operate it remotely. Consider the possibilities of technology that lets you see your GPS location or visualize heat gradients, that automatically uses your unique biometrics to log you into your company’s network when you sit down at your desk, that launches a video chat when you make a certain hand gesture. When everything around us is intelligent and interactive, omnipresent sensors and ambient AI could even enable a true virtual assistant capable of responding to requests that you speak into thin air, or of inferring from your current actions what you’ll want to do next.
Potential use cases for immersive digital experiences will include both the highly specialized and the mass market. Areas like military combat, fire and rescue, remote maintenance and repair of delicate or dangerous equipment, professional handling of data, and extreme sports could give rise to niche immersive applications that require intensive training and sophisticated digital twins.
Meanwhile, at the consumer level, we’re likely to see easy-to-handle devices that add data overlays and immersive input to all kinds of experiences, from shopping and education to gaming and movies. Those could in turn give rise to open source platforms that make it easy for the public to create, crowdsource, and share their own VR and AR experiences. As immersive digital technology becomes commonplace, we could also see a backlash against it, with trendsetters insisting there’s no substitute for the authenticity of the physical world.
The definition of “immersive” is always a few steps ahead of wherever we happen to be. Remember how the racing-simulator arcade games of the early 1990s, with their surrounding video screens, working pedals, and haptic feedback in the steering wheel, seemed completely realistic at the time. Some say we’re still five to ten years away from a truly convincing immersive digital world, one that engages multiple senses and allows us to move through it in 360-degree space. If and when that world arrives, it will change our entire sense of what’s real, what’s relevant, and what we can tangibly affect.
Read the executive brief Diving Deep Into Digital Experiences.