Here’s a post I wrote a while back for our UX Community of Practice with ideas on how to use this new type of input device:
I too bought a Leap Motion controller and would like to think there is a lot of potential in this type of computing interface. The hardware is solid: the sensor is excellent and works well under office conditions, and I haven’t had any problems with the bright office lights. The software drivers are fine; the device is actually detected as an imaging device by Windows. The API is decent too. I have reviewed the API documentation and written some .NET test programs against it, and finger, hand, and pointing-tool detection is straightforward.
I’ve spent some time showing it off to fellow nerds here, and they all asked me the same question: “What’s it for?” I actually have a number of ideas on how this type of device should be used. First off, it is definitely not going to replace the mouse. The problem with computing right now is that everything is still driven by a pointer flying around the screen, and that paradigm will not work with devices like the Leap Motion (Touchless for Windows is an excellent example of how wrong it is). I believe this device would be more useful in two major ways. The first is as a steering mechanism that lets users manoeuvre through a 3D space along all axes. I’ve only found one application where this makes sense: Google Earth. Google Earth actually supports navigating with a motion controller, and it’s fairly intuitive (sort of).
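To make the steering idea concrete, here is a minimal sketch of how a palm position above the sensor could map to camera movement along all three axes. Every name and threshold here is my own assumption for illustration (rest height, dead zone, gain) — this is not the Leap SDK’s actual API.

```python
# Hypothetical steering sketch: map a palm offset (millimetres relative
# to the sensor) to a 3D camera velocity, the way a Google Earth-style
# navigation mode might. All constants below are assumed values.

DEAD_ZONE_MM = 20.0     # ignore small hand jitter near the rest position
REST_HEIGHT_MM = 150.0  # assumed comfortable hover height above the sensor
GAIN = 0.01             # assumed scaling from millimetres to camera units/frame

def palm_to_camera_velocity(x_mm, y_mm, z_mm):
    """Convert a palm offset into (pan, zoom, tilt) camera velocities.

    x: left/right pan; y: height above the sensor, used for zoom;
    z: forward/back tilt.
    """
    offsets = (x_mm, y_mm - REST_HEIGHT_MM, z_mm)
    velocity = []
    for o in offsets:
        if abs(o) < DEAD_ZONE_MM:
            velocity.append(0.0)  # inside the dead zone: no movement
        else:
            # subtract the dead zone so motion ramps up smoothly at its edge
            edge = DEAD_ZONE_MM if o > 0 else -DEAD_ZONE_MM
            velocity.append(GAIN * (o - edge))
    return tuple(velocity)
```

A hand hovering at the rest height produces no movement, and pushing it further from the rest position in any direction steers proportionally faster along that axis.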
The second way, which I think is more valuable, is to use it as a triggering mechanism, and this is what I am planning to write using their API. The idea is to use a combination of finger count plus swipe direction to trigger commands, applications, or keystrokes. For example, you could set a single-finger left swipe to Ctrl+C (copy) and a single-finger right swipe to Ctrl+V (paste), or a five-finger swipe up to show the desktop. You get the picture. The launcher would be configurable across 5 possible finger counts × 4 (potentially 6) swipe directions, which already gives 20 customizable commands. So imagine having the Leap Motion on the left-hand side of your keyboard and your mouse on the right: you use the mouse for manoeuvring the pointer and have the Leap Motion execute commands separately.
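The triggering launcher described above boils down to a lookup table keyed on (finger count, swipe direction). Here is a minimal sketch of that idea; the bindings and function names are my own placeholders, and the direction is classified from a raw swipe vector rather than any real Leap SDK gesture type.

```python
# Hypothetical launcher sketch: (finger count, swipe direction) -> command.
# 5 finger counts x 4 directions = 20 customizable slots, as described above.
BINDINGS = {
    (1, "left"): "Ctrl+C",        # single-finger left swipe -> copy
    (1, "right"): "Ctrl+V",       # single-finger right swipe -> paste
    (5, "up"): "Show Desktop",    # five-finger swipe up -> show desktop
}

def classify_direction(dx, dy):
    """Pick the dominant axis of a swipe vector in the sensor's x/y plane."""
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"

def trigger(finger_count, dx, dy):
    """Return the command bound to a swipe, or None if the slot is empty."""
    direction = classify_direction(dx, dy)
    return BINDINGS.get((finger_count, direction))
```

For example, `trigger(1, -0.9, 0.1)` classifies the swipe as a single-finger “left” and returns `"Ctrl+C"`; an unbound combination simply returns `None`, so the table can grow one binding at a time.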
I know from experience how much productivity can be gained (especially for developers, given the number of tools we use) by attaching common shortcuts to macros. I have window-docking apps (GridMove), custom keystroke launchers (AutoHotkey), and window always-on-top placement (WinTop) all running on my machine, and I cannot live without them.
One last point on this rambling: I have been using a Logitech mouse with a few extra buttons for a while now. It has a thumb button on the side.
I have mapped the Close function to that button, and I swear I cannot live without it anymore. Alt+F4 sounds easy enough, but clicking on a window and pressing this button is so much more fluid. This is what I want to achieve with the Leap Motion. Hopefully the UX CoP can help figure out where we can leverage this relatively simple but potentially time-saving technology.


