Heralded as a new frontier in human-computer interaction, the Leap Motion Controller was launched earlier this year, promising a 3D, gesture-controlled interface that would make us all feel a little more like Tony Stark. Retailing at just under €80, it’s affordable, but can it rival the mouse and trackpad in terms of usability and functionality? At our iQ Content Maker Friday session, our group of developers, UX specialists and content strategists took it for a test drive.
How it works
Leap’s two cameras and infrared LEDs give users a 150-degree field of view and eight cubic feet of virtual space, in which Leap senses hand movements with unprecedented accuracy. In layman’s terms, it lets users interact with their desktops using simple hand gestures. For those familiar with Microsoft Kinect, this might not seem that revelatory. But while Kinect tracks a much larger area, Leap trades range for accuracy. In fact, it can track all 10 fingers to within 1/100th of a millimetre, giving a great deal more sensitivity than something like Kinect can offer.
Setting it up
Clean, minimalist and uncomplicated, the device is not dissimilar from an Apple TV, albeit a tiny one.
Only a few of us had heard of Leap before, so Ciarán brought us through a quick unboxing. The first thing you’re struck by is that, visually, the device, box and packaging could easily be mistaken for an Apple product. Clean, minimalist and uncomplicated, the device is not dissimilar from an Apple TV, albeit a tiny one. And like Apple devices, setup is pretty straightforward. There’s not much need for calibration – just download the software and plug in.
The first thing we were brought to was the introduction screen, undoubtedly one of the highlights of the Leap Motion Controller. Three scenarios (a skeletal wireframe hand, cascading light that changes colour and direction as you move your hands, and a simple paint-like tool) introduce you to how you interact with the controller, all with soothing ambient music in the background. Part advertisement, part orientation, these introductory screens give real insight into the highlights of Leap, especially the wireframe hands. Palms, wrists and digits are sensed almost down to the joint, all with virtually no lag. So far, so impressive.
Giving it a workout
But digging a bit deeper and putting Leap to practical use, we found not everything was as easy as the orientation screen would have us believe. Perhaps the truest test of a device that positions itself as a potential replacement for the mouse, trackpad and touchscreen is navigating a desktop. Which, it turns out, is frustrating with Leap. Even the most rudimentary desktop functions of clicking, dragging and scrolling were an exercise in frustration. Take the back button as an example. Extend your finger mid-air, hover over the back button for a few seconds and Leap will automatically interpret this as a click. Sounds easy, right?
Even the most rudimentary desktop functions of clicking, dragging and scrolling were an exercise in frustration.
In practice, trying to navigate to and hover over an icon as small as the back button proves pretty taxing. The margin for error is massive. A misplaced knuckle obscuring the extended finger? You’ll find yourself on the other side of the screen. Had an extra coffee at lunch and finding it hard to keep a steady hand? Good luck trying to hover over that back button accurately.
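To see why that dwell-to-click mechanic is so fragile, here’s a minimal sketch of the idea (my own illustration, not the Leap SDK – the names, target size and dwell time are all assumptions): a click only fires if the tracked fingertip stays within a small radius of the target for the full dwell period, so a single wobble resets the timer.

```python
import math

TARGET = (40, 40)      # centre of a small on-screen button (pixels) -- assumed values
RADIUS = 8             # how far the fingertip may wander and still count as "on target"
DWELL_FRAMES = 120     # roughly two seconds of hovering at 60 fps

def dwell_click(positions):
    """Return True if a stream of (x, y) fingertip samples dwells on the target.

    Any sample that drifts outside RADIUS resets the dwell timer -- which is
    exactly why a shaky hand never manages to 'click'.
    """
    frames_on_target = 0
    for x, y in positions:
        if math.hypot(x - TARGET[0], y - TARGET[1]) <= RADIUS:
            frames_on_target += 1
            if frames_on_target >= DWELL_FRAMES:
                return True
        else:
            frames_on_target = 0  # one wobble and you start over
    return False

# A steady hand clicks; a hand that wobbles once halfway through does not.
steady = [(40, 41)] * 120
shaky = [(40, 41)] * 60 + [(70, 90)] + [(40, 41)] * 60
```

With a steady stream `dwell_click(steady)` succeeds, while the `shaky` stream – identical apart from one stray sample – never clicks at all, which matches what we saw on the day.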
Limited by imprecision
These situations might seem trite, but when you are dealing with day-to-day desktop usability, the permutations of what might go wrong are huge. The problem is clearly twofold: imprecise movements on the one hand, and on the other, an interface that was never designed to be controlled by spatial movement.
Apps like the NY Times app work well because they were designed specifically with Leap in mind, with easy scrolling and selection gestures. But the NY Times website, just like hundreds of thousands of other websites, was designed with a point-and-click user in mind. Using websites with a Leap is like trying to fit a square peg into a round hole – try as you might, it just won’t fit.
Can Leap’s makers be blamed that 3D interfaces do not yet exist on a substantial scale? Not necessarily. But launching a mass-market PC peripheral that promises comprehensive gesture control, and the potential to supplant mice and touchscreens, means that users will rightly expect it to work in conjunction with existing interfaces. This lack of day-to-day functionality means that at times Leap feels like a product still in beta, not the finished for-sale product that has been launched.
But there are more obvious limitations to the general usability of Leap. Many of the group highlighted the physical strain that a device like this brought. Extending your arm out fully and keeping it steady for potentially long periods of time could cause severe discomfort for many users. If you thought RSI was bad from a regular mouse, try to imagine the gorilla arm syndrome that Leap would impose on users.
Hey, at least there’s apps, right?
We fared a little better with some of Leap’s other apps, especially those with simple controls and, perhaps more importantly, a built-in margin for error. Paul mastered Google Earth pretty quickly, though it took a pretty steady hand to navigate with any sense of consistency. Kyoto, a moonlit puzzle game, and Flocking, a visually stunning underwater visualisation that lets you control a shoal of fish, illustrated how natural the human interaction with the screen could be.
You’re not so much trying to accomplish a task, as interacting with an environment.
But for both these apps it was the experience that counted, not the end result. With Flocking, you’re not so much trying to accomplish a task, as interacting with an environment. That, I’m afraid, is the general feeling users get with Leap. It feels like you’re experiencing something rather than doing anything. Without providing something tangible and useful, Leap runs the risk of being marginalised as a gaming device.
A language of movement
Finally, it was nigh on impossible to control Leap with any sort of consistency. The vagaries of human motion mean that what might work first time round might not work five minutes later. For example, with Digit Duel, a quick-draw, Wild West shooting game, your hand is the gun. This is the game that got the most traction amongst our group (an insatiable thirst for blood), but it was difficult to achieve absolute consistency of movement from user to user. The results tell the real story, with a paltry one kill to our name.
We cannot regulate our actions in a way that Leap can yet understand. Our swipe movements are not always exactly that, and a grab gesture is likely to vary widely from one person to the next.
This is not so much a problem with Leap Motion’s accuracy, which as stated earlier is pretty good. It’s more that we, as humans, do not yet possess the fluid, precise spatial movement that Leap requires. We cannot regulate our actions in a way that Leap can yet understand. Our swipe movements are not always exactly that, and a grab gesture is likely to vary widely from one person to the next. It sometimes feels that if only we possessed the wireframe hand that Leap presents, such precision might be possible.
A greater understanding and tolerance built into the motion sensing, recognising that we mess up all the time, would substantially improve the Leap Motion experience. More robust, real-life scenario testing to identify the most common gesture errors would also go a long way. In addition, a set of standardised movements shared across all apps, especially for the likes of clicking and scrolling, would have been of great benefit. Like any language, whether verbal or physical, a lexicon and grammar need to be established. And Leap also has to teach us this language.
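One cheap way app developers could build in that tolerance is to smooth the raw fingertip positions before interpreting them. A sketch (again my own illustration, not anything in the Leap SDK) using a simple moving average to damp hand tremor:

```python
from collections import deque

def smooth(samples, window=5):
    """Damp jitter by replacing each (x, y) fingertip sample with the
    average of the last `window` samples -- a crude but common tolerance trick."""
    buf = deque(maxlen=window)
    out = []
    for x, y in samples:
        buf.append((x, y))
        avg_x = sum(p[0] for p in buf) / len(buf)
        avg_y = sum(p[1] for p in buf) / len(buf)
        out.append((avg_x, avg_y))
    return out

# A single tremor spike of 20 pixels is pulled back to a 4-pixel nudge.
raw = [(40, 40)] * 4 + [(60, 40)] + [(40, 40)] * 4
smoothed = smooth(raw)
```

The trade-off is a little lag, but for hover-and-click targets that feels a far better bargain than a cursor that leaps across the screen with every wobble.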
The final word
In conceptual terms, there is no doubt that Leap delivers, giving an insight into how human-computer interaction may look in years to come. Its natural, intuitive interaction anticipates a time when the boundaries between the physical and digital worlds will not be so fixed.
But in terms of practicality and as a retail product, Leap still has a way to go. In order to move from an impressive visualisation tool to a day-to-day one, standardised gestures for controlling the device and a greater tolerance for margin of error will need to be factored in. Moreover, the real uses of Leap have not yet been realised. Each of our testers agreed that in terms of assistive technology, product design and educational resources, the potential is massive. But until developers become comfortable building apps with this sort of technology built in, and UIs are created with interaction design in mind, Leap’s real use will remain on the long finger.
What the gang said:
Paul Donnan, Strategist at Large
I was impressed by the accuracy with which it picked out individual joints and mapped their movement. That said, I found it hard to see how it could be used for productivity applications. Secretly, I hope I’m proved wrong.
Ciarán Harris, Director of Innovation
I was both overwhelmed and underwhelmed – overwhelmed by the new interaction possibilities it offers, but underwhelmed by the lack of the promised accuracy. I’m not saying the controller is inaccurate, I am saying the humans controlling the controllers weren’t as accurate and as consistent as the controllers can interpret. Some of the apps we tried didn’t allow enough tolerance for this. Guidance, tolerance & adapting to different user behaviours will be key for successful LEAP apps.
Piers Scott, Content Strategist
If I’m going to spend about €100 on a PC peripheral I want to be able to use it every day, but I couldn’t see myself plugging in the Leap Motion every time I booted my laptop. I can only see this tech taking off when it’s built into my laptop, tablet, and smartphone. When that happens, that’s when we’ll take a true leap forward.