User goal or problem this is trying to solve
- Select an object or action without needing a handheld controller.
- Avoid the accidental selections that can occur when gaze targeting is used alone (i.e. with fusing as the confirmation mechanism)
This interaction is currently the main selection mechanism for the Microsoft HoloLens.
- The user moves the gaze cursor to hover over a button or action
- The underlying button that is now the focus of the cursor is highlighted or undergoes some visual change (as with standard gaze targeting)
- There is no fusing and no change to the cursor itself. The cursor on its own cannot confirm the selection, only highlight it.
- The user raises their hand, ready to perform the gesture
- The cursor changes appearance when the hand is detected in the trackable area. This shows that the system can see the hand and will track the gesture. It also informs the user that they are about to select the item that is currently targeted by the cursor.
- The user performs the gesture and the UI shows feedback to confirm the action. Note that in this case the gesture can be anything (as long as it’s simple and intuitive to the user and unambiguous for the system) and its specific action is not part of the pattern.
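The steps above can be sketched as a small state machine: gaze alone only targets, a detected hand changes the cursor's state, and only the gesture commits the selection. This is a minimal illustration in Python; the state names and callbacks are hypothetical, not a real HoloLens API.

```python
from enum import Enum, auto

class CursorState(Enum):
    IDLE = auto()        # gaze cursor visible, nothing targeted
    TARGETING = auto()   # cursor hovering a button; button highlighted
    HAND_READY = auto()  # hand detected in tracking area; cursor changes shape
    CONFIRMED = auto()   # gesture performed; action executed

class GazeGestureSelector:
    """Hypothetical sketch of the gaze-targeting + gesture-commit pattern."""

    def __init__(self):
        self.state = CursorState.IDLE
        self.target = None

    def on_gaze(self, target):
        # Hovering only highlights; gaze alone never confirms (no fusing).
        self.target = target
        self.state = CursorState.TARGETING if target else CursorState.IDLE

    def on_hand_detected(self, detected):
        # Cursor changes appearance when the hand enters the trackable area,
        # signalling that a gesture will select the current target.
        if detected and self.state == CursorState.TARGETING:
            self.state = CursorState.HAND_READY
        elif not detected and self.state == CursorState.HAND_READY:
            self.state = CursorState.TARGETING

    def on_gesture(self):
        # The gesture is the only way to commit the currently targeted item.
        if self.state == CursorState.HAND_READY and self.target is not None:
            self.state = CursorState.CONFIRMED
            return self.target  # UI would now show confirmation feedback
        return None             # gesture without hand + target is ignored
```

Note that `on_gesture` deliberately returns `None` unless both a target is under the cursor and the hand has been detected, which is what prevents the accidental selections listed below.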
- Prevents accidental selection of items with gaze
- Can be fatiguing as the hand must be kept high in front of the HMD to be trackable
- Users find it difficult to keep the cursor focused on a command while looking at their hand to perform the gesture. Larger button sizes could mitigate this.
- The gestures take time and skill to perform consistently, which can make the confirmation more effort than it needs to be. Note this may simply be the result of the specific gesture used by the HoloLens and the way the hardware tracks the hand.