Mapping input to controllers

User goal / Problem this is trying to solve

Help the user understand how to interact with the handheld controller with minimal break in presence

Interaction

Often the user needs the controllers to interact with objects in VR. This forces a break from the naturalistic interactions that VR encourages, requiring the user to stop and think about whether the controller is needed at all and which specific button to press.

By showing a visual representation of the controller in the virtual environment and superimposing a highlight on the exact input required, designers can reduce the cognitive processing needed to find the correct input.

  1. Show a representation of the controllers in the virtual 3D environment, including all interactive inputs and buttons
  2. Highlight the button required to trigger the desired interaction
  3. Accompany this with a prompt in the centre of the field of view directing the user to look down at the virtual controllers
  4. Provide feedback once the correct button is pressed, usually by immediately hiding or changing the button prompt; its disappearance alone can be sufficient. This is often accompanied by a positively toned sound (see the sketch after this list)
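A minimal sketch of these steps in TypeScript, polling controller state through the WebXR Gamepads Module (type definitions from @types/webxr). The render and audio helpers (showButtonHighlight, showCenterPrompt, hideAllPrompts, playConfirmationSound) are hypothetical stand-ins for engine-specific calls, and targeting the trigger is purely illustrative.

```ts
// On the WebXR 'xr-standard' gamepad mapping, buttons[0] is the primary trigger.
const TRIGGER_BUTTON = 0;

let awaitingPress = true;

// Hypothetical engine hooks, stubbed so the sketch compiles.
function showButtonHighlight(buttonIndex: number): void { /* glow the button mesh */ }
function showCenterPrompt(text: string): void { /* head-locked hint in the central FOV */ }
function hideAllPrompts(): void { /* remove highlight and hint */ }
function playConfirmationSound(): void { /* positively toned audio cue */ }

// Steps 2-3: highlight the required input and direct the user's gaze.
// (Step 1, rendering the controller model itself, is assumed to be
// handled by the engine's controller visualisation.)
function beginPrompt(): void {
  showButtonHighlight(TRIGGER_BUTTON);
  showCenterPrompt('Look down at your controller');
}

// Step 4: poll the controllers each frame and dismiss the prompt on success.
function updatePrompt(session: XRSession): void {
  if (!awaitingPress) return;
  for (const source of session.inputSources) {
    if (source.gamepad?.buttons[TRIGGER_BUTTON]?.pressed) {
      hideAllPrompts();        // the disappearance alone signals success
      playConfirmationSound(); // reinforced by a positive audio cue
      awaitingPress = false;
    }
  }
}
```

Note that hiding the prompt the instant the press registers is itself the feedback described in step 4; the sound only reinforces it.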

Linked to: Motion controller visualisation

Good 

  • Allows the user to understand which controller and which button to use, without needing to take off the headset.
  • Prevents the user from having to translate on-screen text in one part of the field of view into actions on the controller (see the Onward example below)
  • Reduces the translation of instructional prompts to the bare minimum, requiring only recognition and reaction rather than manipulation of information in working memory. This helps maintain immersion, as disruption is minimal.
  • It’s immediately clear when the action has been completed. The prompt is entirely contextual, presented only when required and vanishing immediately to indicate a positive outcome.

Bad

The main weakness to consider is that users must know to look down at the controller to see the prompt in the first place. In experiences where the user is looking elsewhere and uses the controllers and hands infrequently, this can introduce a frustrating barrier. This is where an on-screen prompt in the central field of view can help direct the user’s attention down to their hands, as in the sketch below.
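A minimal sketch of that mitigation, assuming a three.js scene with a WebXR renderer already set up. Parenting a quad to the camera keeps the hint head-locked in the central field of view, so the user sees it wherever they are looking; the plain quad stands in for a textured "look down at your controllers" label.

```ts
import * as THREE from 'three';

function addLookDownPrompt(scene: THREE.Scene, camera: THREE.Camera): THREE.Object3D {
  // Camera children only render when the camera is part of the scene graph.
  scene.add(camera);

  const prompt = new THREE.Mesh(
    new THREE.PlaneGeometry(0.3, 0.1),
    new THREE.MeshBasicMaterial({ color: 0xffffff, transparent: true, opacity: 0.8 })
  );
  // Slightly below eye level and about a metre ahead, nudging the gaze downward.
  prompt.position.set(0, -0.15, -1);
  camera.add(prompt);
  return prompt; // remove with camera.remove(prompt) once the button is pressed
}
```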

Examples

Bullet Train, Oculus

Best practice: the prompt is fully contextualised on the controller

Google Blocks

Prompt mapped to the exact input required

Raw Data, HTC Vive

Visual on-screen prompt showing controller interactions (however, the controller is not visualised in the game, leaving the user with some cognitive processing still to do)

Onward, Oculus

On-screen text describing which controller button to use. This requires the most cognitive processing and relies most heavily on the user’s memory