I won’t mince words: Despite many advances, today’s VR and AR still suck. The former is cumbersome, requiring absurd hardware setups. The latter suffers from limited visuals and user interfaces. Leap Motion, the San Francisco company that’s notoriously secretive about its experimental computing technology, is debuting a new platform designed to eliminate both problems.
It’s called Project North Star. The new platform is an effort to get us closer to that magical turning point at which users feel truly immersed in synthetic worlds. North Star combines Leap Motion’s extremely precise real-time hand tracking (eliminating the need for controllers) with a new, Leap Motion-designed headset, as well as a new UX philosophy that seeks to achieve the industry’s Holy Grail: a truly natural mixed-reality computing paradigm.
Keiichi Matsuda, Leap Motion’s VP of design and global creative director, and Michael Buckwald, CEO and cofounder of Leap Motion, recently spoke with me over email about their design philosophy, how they went from gestures to a new skeuomorphism, and their vision for how mixed reality could change everything.
Project North Star
North Star is not a product but an open-source reference platform for Leap Motion’s hardware and UX design, meant for manufacturers and designers.
The headset itself is a prototype built around Leap Motion’s custom optics, which on paper give it the most advanced AR headset specs in existence. It projects 1,600 x 1,440 pixels per eye at 120 frames per second, with a combined field of view of 100 degrees. For comparison, Microsoft’s HoloLens 2, the most advanced AR headset to date, offers only a 70-degree field of view, while the human eye covers about 114 degrees with depth perception. North Star, in other words, nearly matches your eyes’ abilities. Crucially, the headset also includes hand tracking that runs at 150 frames per second, accurately following your hands and fingers in real time everywhere in front of you. This hardware is essential to making your brain believe that the AR layer is actually part of reality.
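To put those numbers in perspective, here’s a quick back-of-the-envelope calculation, a minimal Python sketch that uses only the figures quoted above (it’s illustrative arithmetic, not an official spec sheet):

```python
# Rough math on the North Star display specs cited in this article.
PER_EYE_RES = (1600, 1440)  # pixels per eye
REFRESH_HZ = 120            # frames per second
EYES = 2

pixels_per_eye = PER_EYE_RES[0] * PER_EYE_RES[1]
pixels_per_second = pixels_per_eye * REFRESH_HZ * EYES

print(f"Pixels per eye:    {pixels_per_eye:,}")      # 2,304,000
print(f"Pixels per second: {pixels_per_second:,}")   # 552,960,000

# How much of the ~114-degree binocular human field the headset covers
NORTH_STAR_FOV_DEG = 100
HUMAN_BINOCULAR_FOV_DEG = 114
print(f"FOV coverage: {NORTH_STAR_FOV_DEG / HUMAN_BINOCULAR_FOV_DEG:.0%}")  # 88%
```

That works out to more than half a billion pixels per second, which gives a sense of the rendering load a natural-feeling AR display demands.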
But the real key to making AR truly ubiquitous? That’s the third element of the new platform: the user experience.
Most VR platforms rely on one of two interaction modalities: gestures (you wave your hand to invoke a menu you can point at) or hardware interfaces (you point using a physical controller). But Matsuda believes the experience should be fully transparent and intuitive. Nothing should block users from directly manipulating the virtual world around them.
He describes this as “naturalism,” or “the ability to be able to interact with the virtual world in a way that is intuitive, without having to learn how to use a complicated interface or abstracted set of controls.” According to Buckwald, today’s implementations fail to provide this “naturalism.” Most products, he says, simply try to use hardware controllers to emulate your hands or tools. Oculus Rift, for example, gives you controllers with buttons that you push to “grab” something. “That will always feel less natural or magical than actually grabbing with your own hands,” he argues.
But both Matsuda and Buckwald believe that the basic input device will always be our hands. Matsuda notes that while “VR and AR are seen as ‘futuristic’ technologies . . . they actually have the potential to be the simplest and most natural.” These technologies are the most direct, intuitive, and accessible paradigm because our entire species evolved to interact with objects in the real world. There will be other inputs, he tells me, like voice, but it all starts with our hands: “Our hands are our original interface with the world, and foundational to any immersive experience.”
Project North Star enables this kind of natural interaction by tracking our hands with total precision, letting us manipulate virtual objects as we would real ones, even if we can’t truly “touch” them.
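What does manipulating objects “like we would in real life” look like in code? Here’s a minimal sketch of controller-free grabbing, assuming a tracker that reports 3D fingertip positions every frame. The data structures and thresholds are hypothetical stand-ins, not Leap Motion’s actual API:

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def dist(self, other: "Vec3") -> float:
        return ((self.x - other.x) ** 2 +
                (self.y - other.y) ** 2 +
                (self.z - other.z) ** 2) ** 0.5

PINCH_START_MM = 25.0  # fingers closer than this begin a grab
PINCH_END_MM = 40.0    # hysteresis: release only past this distance

def update_grab(thumb_tip: Vec3, index_tip: Vec3, grabbing: bool) -> bool:
    """Return the new grab state for one frame of tracking data.

    Two thresholds (hysteresis) keep the grab from flickering when the
    measured fingertip distance jitters around a single cutoff.
    """
    gap = thumb_tip.dist(index_tip)
    return gap < (PINCH_END_MM if grabbing else PINCH_START_MM)
```

An app would run something like this once per tracking frame (150 times a second, per the specs above) and attach the nearest virtual object to the hand for as long as the grab holds.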
Power Hands & Virtual Wearables
Leap Motion’s research with its hand-tracking platform, Orion, led the team to see gestural interfaces in a new light: the kind where waving your hand in a certain way pulls up a menu. These gestures feel futuristic, but they have to be learned, so you can’t base your entire interface on them. Matsuda describes gestures as a regression to the days of control-alt-delete. “Abstract gestures can be very powerful when used sparingly, but it limits the technology to power users,” he writes. “Gestures can be hard to teach, hard to perform, and hard to remember if you have more than a few.”
Their user testing, on the other hand, showed that people, from children to seniors, could interact with virtual representations of familiar objects with no teaching whatsoever, completing tasks with more than 99% accuracy. If you gave them something real and recognizable to play with, like a virtual clock or a book, they would just play with it.
So instead of building on gestures, they decided to limit them and embrace the “physicality” of augmented reality, where virtual representations of familiar objects help users access powerful features in an intuitive way. That research evolved into a North Star feature they call Power Hands. Power Hands, says Matsuda, give your hands superpowers. Each Power Hands ability lets you do something different, from grabbing a virtual object and rearranging it to painting in the air with your fingers, using a virtual palette attached to your wrist. “They are abilities that users can activate in their hands to influence the virtual world around them,” he writes.
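Conceptually, Power Hands map cleanly to a classic strategy pattern: the tracked hand stays the same, and the equipped ability decides what its motions mean. Here’s a Python sketch of that idea; the class and method names are illustrative, not Leap Motion’s actual SDK:

```python
from abc import ABC, abstractmethod

class PowerHand(ABC):
    """One swappable ability applied to the tracked hand each frame."""

    @abstractmethod
    def on_frame(self, hand_state: dict) -> None:
        ...

class GrabAbility(PowerHand):
    def on_frame(self, hand_state: dict) -> None:
        if hand_state.get("pinching"):
            print(f"moving object to {hand_state['palm_position']}")

class PaintAbility(PowerHand):
    def __init__(self, color: str = "red"):
        self.color = color
        self.stroke = []  # fingertip positions form a stroke in midair

    def on_frame(self, hand_state: dict) -> None:
        if hand_state.get("pinching"):
            self.stroke.append(hand_state["index_tip"])

# Equipping a different ability changes what the same motion does:
active: PowerHand = PaintAbility(color="blue")
active.on_frame({"pinching": True,
                 "index_tip": (0.10, 1.20, 0.30),
                 "palm_position": (0.00, 1.00, 0.20)})
```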
Their other new interface element, Virtual Wearables, provides access to the abilities that Power Hands enable. A Virtual Wearable is a virtual gadget around your hand or wrist that looks like a real-world display or interface. Think of it as a smartwatch that exists in virtual space, and can therefore morph based on context. Instead of memorizing countless gestures to call up different menus or buttons, users get Virtual Wearables that look and act like familiar interfaces: they can click, open, twist, turn, or swipe them, just as they would in real life.
They’re also a lot more flexible than gestures: “Virtual Wearables offer a much more accessible and exciting path,” Matsuda says. “Imagine having several wearables that you can equip depending on the task. Different wearables can serve different functions, and can be pitched to different levels of expertise.” You could use a specialized wearable for painting, another for writing, another for taking photos, and another for editing video, each interface tailored, quite literally, to the task at hand, with no controllers required.
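A Virtual Wearable, in this framing, is just a panel that re-anchors itself to the tracked wrist every frame and swaps its controls based on the active task. A minimal sketch, again with hypothetical names rather than anything from the North Star codebase:

```python
CONTROL_SETS = {
    "painting": ["color wheel", "brush size", "undo"],
    "photo":    ["shutter", "zoom", "gallery"],
    "video":    ["record", "trim", "timeline"],
}

class VirtualWearable:
    def __init__(self, context: str = "painting"):
        self.context = context

    def controls(self) -> list:
        # Same wrist-mounted surface, different widgets per task.
        return CONTROL_SETS.get(self.context, [])

    def follow(self, wrist_position, wrist_rotation) -> None:
        # Re-anchor to the tracked wrist each frame, so the panel
        # behaves like a physical smartwatch would.
        self.position = wrist_position
        self.rotation = wrist_rotation

watch = VirtualWearable(context="photo")
print(watch.controls())  # ['shutter', 'zoom', 'gallery']
```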
Power Hands and Virtual Wearables are what Jef Raskin, who led the development of the Apple Macintosh, would call information appliances. For Raskin, an information appliance was a computing device with one single purpose, like a toaster that makes toast or a microwave that heats up food: a gadget so easy to use that it becomes invisible to the user. Raskin wanted to create information appliances for everything, a different machine for word processing, photo processing, video editing, and so on. He soon realized that nobody would buy one specialized device for each task, so instead we got the Macintosh, with its graphical user interface and programs like MacWrite, which turned the Mac into a typewriter, or MacPaint, which turned it into a canvas. Then we got the iPhone, a single device that stands in for an almost infinite number of appliances (“a phone, a GPS, a web browser, an iPod!”).
Virtual Wearables are the realization of Raskin’s original dream of an information appliance tailored to each use-except they’re virtual, not physical devices. “They may seem esoteric now,” Matsuda explains, “but we believe that one day they could be as ubiquitous as the scrollbar or menu.”
Esoteric is not the first word that comes to mind. For me, it’s skeuomorphism: the virtual representation of physical objects that was all the rage in the early days of iOS, before flat design took over. While it may be unfashionable today, Matsuda argues that it can serve a very useful purpose for early technologies: “Skeuomorphism gets a bad rap, but served a very important purpose in the early days of mobile (and desktop): to give people some reference points, leveraging their past experience to allow them access to new possibilities.”
To make a completely new technology intuitive, humans need a familiar reference point, a context that can bridge the mental gap. In his own words:
“Designing for any new medium has to involve thinking on multiple scales. On the one hand, it’s important to jump into the future and try to seek out a ‘native’ aesthetic and interaction model for AR and VR: a utopian world of effortless interaction that eschews all of our current-day conventions. But on the other hand, it’s also important to connect to the lineage of where we’ve come from. Design exists as a way to connect people with new possibilities, in a way that they can use easily.”
So, will users embrace UX that relies entirely on their own hands, plus a cadre of virtual, wearable interfaces, over hardware controllers like Oculus Touch or Google Daydream’s remote? For Matsuda, those controllers are a very physical barrier between the virtual world and the user. “We are on the verge of a new era of human-computer interaction, that will ultimately supersede not only mobile and desktop, but also many of the physical interfaces that we rely on,” he writes, and “the more the technology progresses, the more absurd it will be to have to rely on controllers, keyboards, and touch screens.”
First came the punched card, then the character-based screen, then the mouse and GUI, then the touch screen, and finally this. With each iteration of the technology, the barrier between users and computing has faded, making computers more intuitive and accessible to everyone. It seems clear to me that technologies like North Star are perhaps the final step toward the end game of computing: completely merging the real and digital worlds. Leap Motion’s hand tracking, coupled with a UX that can add layers and objects to our perception, is second only to direct brain-computer interfaces.
What will this mean for humanity, which is already struggling with the powerful draw of immersive technology in the form of five-inch screens? Is there a future in which AR is ubiquitous but it empowers us, rather than enslaving us to corporations? Matsuda (who is also a movie director and reckoned with this dystopian future in his 2016 short, Hyper-Reality) believes in its positive potential.
“We are moving into a new age of extremely powerful technologies-immersive tech, AI and automation, synthetic biology, and pervasive computing, to name just a few,” Matsuda explains. “These technologies will have huge consequences within our lifetimes, and it’s our responsibility to understand them and proactively guide them to shape a world that we want to actually live in.” For Matsuda, his work at Leap Motion is part of that responsibility and a challenge in itself: “Getting there won’t be a straight or easy path, but their positive potential is enormous. If I didn’t believe that was possible, I wouldn’t call myself a designer.”
I hope that he succeeds at making these technologies tools for the greater good, for all of our sakes.