
Kinect PowerPoint Mapper

I just published a Kinect mapping tool for PowerPoint that allows users to navigate through a PowerPoint slide deck using gestures.  It’s here on CodePlex: https://k4wppt.codeplex.com/.  There are already a lot of these out there, by the way – one of my favorites is the one Josh Blake published.

So why did I think the world needed one more? 

[Figure: Kinect for Windows]

The main thing is that, prior to the release of the Kinect SDK 1.7, controlling a slide deck with a Kinect was prone to error and absurdity.  Because they were almost universally written around the swipe gesture, prior PowerPoint controllers using Kinect had a tendency to recognize any sort of hand waving as a navigation event.  Consequently, as a speaker innocently gesticulated through his point, the slides would begin to wander on their own.

The Kinect for Windows team added the grip gesture as well as the push gesture in SDK 1.7.  It took several months of machine learning work to get these recognizers to perform reliably in a wide variety of circumstances, and they are extremely solid at this point.

The Kinect PowerPoint Mapper I just uploaded to CodePlex takes advantage of the grip gesture to implement a grab-and-throw for PowerPoint navigation.  This effectively disambiguates navigation gestures from other symbolic gestures a presenter might use during the course of a talk.

I see the Kinect PowerPoint Mapper serving several audiences:

1. It’s for people who just want a more usable Kinect-navigation tool for PowerPoint.

2. It’s a reference application for developers who want to learn how they can pull the grip and the push recognizers out of the Microsoft Kinect controls and use them in combination with other gestures.  (A word of warning, though – while double grip is working really well in this project, double push seems a little flaky.)  One of the peculiarities of the underlying interfaces is that the push notification is a state, when for most purposes it needs to be an event.  The grip, on the other hand, is basically a pair of events (grip and ungrip) which need to be transposed into states.  The source code for the Mapper demonstrates how these translations can be implemented.
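The translation in both directions can be sketched with a couple of small adapter classes.  This is only an illustration – the class and member names below are my own, not taken from the Kinect SDK or the Mapper source:

```csharp
using System;

// Push arrives as a per-frame state; raise an event only on the
// false -> true transition so a held push fires exactly once.
class PushStateToEvent
{
    private bool _wasPushed;
    public event EventHandler PushDetected;

    public void Update(bool isPushedThisFrame)
    {
        if (isPushedThisFrame && !_wasPushed && PushDetected != null)
            PushDetected(this, EventArgs.Empty);
        _wasPushed = isPushedThisFrame;
    }
}

// Grip arrives as a pair of discrete events; fold them into a
// state that other gesture logic can poll.
class GripEventsToState
{
    public bool IsGripped { get; private set; }

    public void OnGrip()        { IsGripped = true; }
    public void OnGripRelease() { IsGripped = false; }
}
```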

3. The Mapper is configuration-based, so users can actually use it with PC apps other than PowerPoint simply by remapping gestures to keystrokes.  The current mappings in KinectKeyMapper.exe.config look like this:

    <add key="DoubleGraspAction" value="{F5}" />
    <add key="DoublePushAction" value="{Esc}" />
    <add key="RightSwipeWithGraspAction" value="{Right}" />
    <add key="LeftSwipeWithGraspAction" value="{Left}" />
    <add key="RightSwipeNoGraspAction" value="" />
    <add key="LeftSwipeNoGraspAction" value="" />
    <add key="RightPush" value="" />
    <add key="LeftPush" value="" />
    <add key="TargetApplicationProcessName" value="POWERPNT"/>

Behind the scenes, this is basically translating gesture recognition algorithms (some complex, some not so much) into keystrokes.  To map a gesture to a different keystroke, just change the value associated with the gesture – making sure to include the curly brackets.  If the value is left blank, the gesture will be ignored.  Finally, TargetApplicationProcessName tells the application which process to send the keystroke to if multiple applications are open at the same time.  To find a process name in Windows, just open the Task Manager and look under the Processes tab.  The process name for every currently running application can be found there – just drop the .exe at the end of the name. 
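The lookup step can be sketched roughly like this.  The class name and the idea of pre-loading the config values into a dictionary are my own simplifications – the real Mapper reads the pairs out of KinectKeyMapper.exe.config:

```csharp
using System;
using System.Collections.Generic;

// Simplified gesture-to-keystroke lookup.  A blank value in the
// config means the gesture is disabled and nothing should be sent.
class GestureKeyMap
{
    private readonly Dictionary<string, string> _map;

    public GestureKeyMap(Dictionary<string, string> configuredMappings)
    {
        _map = configuredMappings;
    }

    // Returns the keystroke string (e.g. "{Right}") for a recognized
    // gesture, or null when the gesture is unmapped or disabled.
    public string KeystrokeFor(string gestureKey)
    {
        string keystroke;
        if (_map.TryGetValue(gestureKey, out keystroke)
            && !string.IsNullOrEmpty(keystroke))
            return keystroke;
        return null;
    }
}
```

Actually delivering the keystroke to POWERPNT is a separate step – presumably bringing the target process’s window to the foreground and sending the keys – which this sketch leaves out.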

4. The project ought to be extended as more gesture recognizers become available from Microsoft or as people just find good algorithms for gesture recognizers over time.  Ideally, there will ultimately be enough gestures to map onto your favorite MMO.  A key mapper created by the media lab at USC was actually one of the first Kinect apps I started following back in 2010.  It seemed like a cool idea then and it still seems cool to me today.

WP7 InputScope

[Figure: the QWERTY visual keyboard]

In Windows Phone development, InputScope is a property that can be attached to a TextBox control.  It is also one of the most convenient features of Silverlight for Windows Phone.

On a phone device, we cannot depend on having a keyboard to enter text in our applications.  The InputScope attached property provides a way to automatically associate a digital touch-aware keyboard to textboxes in our application.  The syntax, moreover, is extremely simple.  To create a keyboard like that shown above, all you have to do is set a value for the InputScope property like this:

<TextBox Name="myTextBox" InputScope="Text"/>

When the TextBox receives focus, the visual keyboard automatically pops up.  When focus leaves the TextBox, the keyboard hides itself.  Additionally, the “Text” input scope has predictive text completion built in.  If you begin typing “R-I-G” with the InputScope set to “Text”, the visual keyboard will make some suggestions on how to complete your word.

[Figure: the visual keyboard showing word suggestions]

I showed the short syntax for InputScope above.  In the Blend 4 RC, the XAML parser in design mode marks the short syntax as invalid (though it will still compile).  The longer syntax for setting the InputScope looks like this:

<TextBox x:Name="myTextBox">
    <TextBox.InputScope>
        <InputScope>
            <InputScopeName NameValue="Text"/>
        </InputScope>
    </TextBox.InputScope>
</TextBox>

I am currently still using the Windows Phone April CTP Refresh, in which not all of the Input Scope implementations are complete.  Hopefully in the next drop I will be able to show more examples of the various Input Scope keyboard designs.

Using the long syntax above allows intellisense to provide a listing of all the values that can be entered for InputScopeNameValue.  You can also list these values programmatically by using a little reflection (the Enum class on Windows Phone is a bit different from the full .NET Enum class, so Enum.GetNames isn’t available):

// at the top of the file:
using System.Collections.Generic;
using System.Reflection;
using System.Windows.Input;   // InputScopeNameValue

// in the page constructor or Loaded handler:
var inputScopes = new List<string>();

// Enum.GetNames isn't available here, so read the enum's
// public static fields via reflection instead.
FieldInfo[] fields = typeof(InputScopeNameValue).GetFields(
        BindingFlags.Public | BindingFlags.Static);
foreach (FieldInfo fi in fields)
{
    inputScopes.Add(fi.Name);
}

this.DataContext = inputScopes;

A simple app can be written to try out the different input scope keyboards as they become available.  If you use the code above to set the data context on your page, the following XAML provides a select list for experimenting with different visual keyboards:

<StackPanel>
    <TextBox x:Name="myTextBox"
             InputScope="{Binding ElementName=lbInputScopes,
                          Path=SelectedItem}"/>
    <ListBox x:Name="lbInputScopes"
             ItemsSource="{Binding}"
             Height="500" />
</StackPanel>

Here is the full list of InputScopes that are expected to be supported, based on the current enum names for InputScopeNameValue:

1. AddressCity
2. AddressCountryName
3. AddressCountryShortName
4. AddressStateOrProvince
5. AddressStreet
6. AlphanumericFullWidth
7. AlphanumericHalfWidth
8. ApplicationEnd
9. Bopomofo
10. Chat
11. CurrencyAmount
12. CurrencyAmountAndSymbol
13. CurrencyChinese
14. Date
15. DateDay
16. DateDayName
17. DateMonth
18. DateMonthName
19. DateYear
20. Default
21. Digits
22. EmailNameOrAddress
23. EmailSmtpAddress
24. EmailUserName
25. EnumString
26. FileName
27. FullFilePath
28. Hanja
29. Hiragana
30. KatakanaFullWidth
31. KatakanaHalfWidth
32. LogOnName
33. Maps
34. NameOrPhoneNumber
35. Number
36. NumberFullWidth
37. OneChar
38. Password
39. PersonalFullName
40. PersonalGivenName
41. PersonalMiddleName
42. PersonalNamePrefix
43. PersonalNameSuffix
44. PersonalSurname
45. PhraseList
46. PostalAddress
47. PostalCode
48. Private
49. RegularExpression
50. Search
51. Srgs
52. TelephoneAreaCode
53. TelephoneCountryCode
54. TelephoneLocalNumber
55. TelephoneNumber
56. Text
57. Time
58. TimeHour
59. TimeMinorSec
60. Url
61. Xml
62. Yomi