This past week I was able to speak to a large room of developers at the Gravity Center, who were all there to learn more about developing for Kinect. Which is really developing for sensors – though of course Kinect development sounds much more exciting than sensor development. You could also say it’s designing for NUI – Natural User Interfaces, the ways you’d expect to interact with interfaces if we didn’t have things like mice in the way. I was asked to speak, as a designer (and the only one in the room), on the trends in Kinect interfaces and give my perspective.
First up to start the day off right was Tim Huckaby – former Microsoft employee and currently the founder/chairman of InterKnowlogy. Tim has been working with Microsoft for a long time, and in recent years has been focusing on NUI interfaces around Surface, tablets, and now Kinect.
Tim set the stage with live demos and videos showing how some very interesting things around Kinect are being developed.
If you haven’t been keeping up with how Kinect is transforming from a gaming device to a UI for everything from home to business applications, here’s Microsoft’s official video:
What’s funny about this video is that it’s a very gestural view of the future. If you gave a designer a Kinect and had him/her dream up the future, you’d get a world where everything is based on an invisible UI. To contrast, if you gave a designer an iPad and tasked them with the same thing, you’d end up with a very different world where everything is a touch device. Here are some touch examples from Microsoft (they’re playing both sides!) and Corning – remember, they make the glass for the iPhone:
Which direction are we heading? Probably somewhere in between – or, as it should be, whatever technology ends up being best for each specific task and context we’re designing for.
In my presentation I went through the beginning – how the Kinect came to be inside a large enterprise company – then on to how it’s transitioning into business, shopping, and education, with rumors that Kinect may end up in your next Windows laptop, ready to guide Windows 8. And with Microsoft officially opening up the SDK for Kinect, they’re inviting developers and hackers with open arms to create and innovate in this space.
Frog Design has dubbed this new skill of designing for gestures “Interaction Choreography” – how would that look on a business card:
Designing for this new layer of interaction requires new thinking about dexterity, ergonomics, and whether someone might feel silly or offensive making certain gestures. We are so involved in this space right now that we’ve had to move our design technologists’ desks to make enough room for all the hand-waving design.
I went on to show a round of hacks – everything from controlling your home to going full avatar and controlling a robot. Many of the first-round business applications are around kiosks – things like trying on clothes or controlling a TV.
Jakob Nielsen’s Alertbox had this summary when the Kinect was first released:
Inconsistent gestures, invisible commands, overlooked warnings, awkward dialog confirmations. But fun to play.
And there are no real patterns to draw from yet; they went on to say:
That there are no universal standards for gestural interactions yet is a problem in its own right, because the UI cannot rely on learned behavior. The Kinect has a few system-wide standards, however, which do help users.
Microsoft does have some good rules they’re using when developing these first in-house Kinect applications and games:
Explain what the player can do.
Represent what they are doing.
Make it fun to match the two.
Test your implementation.
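The second rule – represent what the player is doing – usually starts with classifying tracked joint positions into named gestures. This isn’t official Kinect SDK code; it’s a minimal Python sketch assuming you already receive a stream of (timestamp, hand x-position) samples from a skeleton tracker, and the threshold values are illustrative, not published guidelines:

```python
def detect_swipe(samples, min_distance=0.4, max_duration=0.6):
    """Classify a horizontal swipe from (seconds, meters) hand samples.

    samples: list of (timestamp, hand_x) pairs from a skeleton tracker.
    Returns 'swipe_right', 'swipe_left', or None.
    Thresholds are illustrative defaults, not official Kinect values.
    """
    if len(samples) < 2:
        return None
    (t0, x0), (t1, x1) = samples[0], samples[-1]
    if t1 - t0 > max_duration:
        return None  # too slow: likely drift, not an intentional gesture
    dx = x1 - x0
    if dx >= min_distance:
        return 'swipe_right'
    if dx <= -min_distance:
        return 'swipe_left'
    return None  # movement too small to count as a gesture
```

For example, a hand moving half a meter to the right in 0.3 seconds classifies as `'swipe_right'`, while the same movement stretched over a full second is rejected – which is exactly the kind of tunable, testable boundary the “test your implementation” rule is about.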
I left the large group of developers with some user-centered questions to think about:
Distance and Environment?
How far do they need to stand? How far do they think they need to stand? How is the surrounding area designed? What about eyesight and the size of the UI?
Is it comfortable? What about age, or people with disabilities? Common movements vs. uncommon ones? How long will they interact?
Do they look silly? Will someone of a certain age/race/gender use this? What’s acceptable for an avatar?
What do they know from click and touch interfaces? Is there something more natural? Try to unlearn, and imagine.
Is it close or far away? What do we infer from spatial positioning? Can you get people to interact in 3D space on a 2D screen?
What’s the maximum number of objects you can fit on a screen? What’s the minimum size an object needs to be on the screen?
Can you tailor to age? Race? Gender? What is helpful and what is potentially scary? Can you keep a snapshot for marketing purposes?
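That minimum-size question actually has a rough geometric answer: an on-screen element needs to subtend some minimum visual angle at the user’s viewing distance, and the required physical size follows from size = 2 · d · tan(θ/2). A quick Python sketch – the one-degree default is my own illustrative comfort threshold, not a published Kinect guideline:

```python
import math

def min_element_size(viewing_distance_m, min_visual_angle_deg=1.0):
    """Physical size (in meters) an on-screen element must have to
    subtend a given visual angle at a given viewing distance.

    The 1-degree default is an illustrative assumption, not an
    official guideline.
    """
    half_angle = math.radians(min_visual_angle_deg) / 2
    return 2 * viewing_distance_m * math.tan(half_angle)
```

At a typical Kinect distance of 2 meters, a one-degree target works out to roughly 3.5 cm of physical screen – and doubling the distance doubles the required size, which is why far-field UIs end up so much chunkier than touch UIs.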
Here’s the full presentation:
Afterwards, one developer who had seen one of my earlier presentations on mobile was hoping for more practical design tips, but designing for Kinect is wide open right now. We’ve been designing for sensors for some time now, but gestural interaction by itself – how do we interact in space without tactile feedback? – that’s new (and interesting!).
And here are some photos from the event:
I’m looking forward to exploring these interfaces the way I’ve explored touch over the past 5 years – and, as always, can’t wait to see what mind-blowing interactions happen over the next 5 years.