This pattern places AR graphics in a precise location and context using template matching, so users see relevant content exactly where it applies.
Sometimes augmented information relates to a specific dial, element or device, and it is most useful when visually mapped onto the element it refers to. This creates a contextual overlay that provides useful information, in context, that we would otherwise be unable to access.
Examples include a training manual or metadata about an object that would otherwise be invisible. When the augmented information is combined with the real-world object, it is immediately clear what the information relates to, without the user needing to look anything up or even think about it, reducing cognitive load and errors.
- User launches the app or AR experience
- User usually needs to select the artefact they want more information about
  - This can be done by selecting from a menu, based on location, by scanning a QR code, etc.
- Once the app knows which object or model the user wants to interact with, it provides a template that represents the shape and dimensions of the target object
- The user sees this template as an overlay on the screen and can move it around
- The user moves the template over the target object until the shape and orientation match
- The augmented information then appears.
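The steps above can be sketched as a small state machine: the app waits for a selection, shows the template overlay while the user aligns it, and reveals the augmentation once the pose matches. This is an illustrative sketch only; the types, names and tolerance values (`Template`, `poseMatches`, `POSITION_TOLERANCE`, and so on) are invented for the example and do not come from any real AR SDK.

```typescript
// Pose of the draggable template overlay on screen (assumed 2D for simplicity).
type Pose = { x: number; y: number; rotation: number };

interface Template {
  objectId: string;
  targetPose: Pose; // pose at which the template lines up with the real object
}

// The three stages of the interaction flow described above.
type State =
  | { kind: "selecting" }                      // user is picking the artefact
  | { kind: "aligning"; template: Template }   // template overlay is shown
  | { kind: "augmented"; template: Template }; // AR information is displayed

// Assumed alignment tolerances; a real app would tune these.
const POSITION_TOLERANCE = 0.05;
const ROTATION_TOLERANCE = 5; // degrees

function poseMatches(current: Pose, target: Pose): boolean {
  return (
    Math.abs(current.x - target.x) <= POSITION_TOLERANCE &&
    Math.abs(current.y - target.y) <= POSITION_TOLERANCE &&
    Math.abs(current.rotation - target.rotation) <= ROTATION_TOLERANCE
  );
}

// Advance the flow: a selection shows the template; moving the template
// over the target object until shape and orientation match reveals the
// augmented information. All other events leave the state unchanged.
function step(
  state: State,
  event:
    | { kind: "select"; template: Template }
    | { kind: "move"; pose: Pose }
): State {
  if (state.kind === "selecting" && event.kind === "select") {
    return { kind: "aligning", template: event.template };
  }
  if (state.kind === "aligning" && event.kind === "move") {
    if (poseMatches(event.pose, state.template.targetPose)) {
      return { kind: "augmented", template: state.template };
    }
  }
  return state;
}
```

Modelling the flow this way makes the "augmentation only after alignment" rule explicit: there is no path to the augmented state that skips the aligning step.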
Ask Mercedes app
If the space of interaction is predetermined and standard (e.g. a specific car model), detection can be simplified, to the benefit of users.
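One way this simplification can play out, sketched below under assumed names and data: when the product line is standard, a template for each model can be authored once and looked up directly by model ID, so the app never has to perform generic object detection at runtime. Everything here (`ModelTemplate`, the asset names, the hotspot list) is hypothetical.

```typescript
interface ModelTemplate {
  modelId: string;
  outline: string;             // reference to a pre-authored silhouette asset
  dashboardHotspots: string[]; // elements the overlay knows how to annotate
}

// Pre-authored templates for a known, standard set of models (invented data).
const TEMPLATES: Record<string, ModelTemplate> = {
  "sedan-2020": {
    modelId: "sedan-2020",
    outline: "sedan-2020-dashboard.svg",
    dashboardHotspots: ["fuel-gauge", "infotainment", "climate-dial"],
  },
};

// Known model: direct lookup, no detection needed.
// Unknown model: fall back to asking the user or to slower generic detection.
function templateFor(modelId: string): ModelTemplate | undefined {
  return TEMPLATES[modelId];
}
```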