Abstract: Commercial 3D scene acquisition systems such as the Microsoft Kinect sensor can reduce the cost barrier of realizing mid-air interaction. However, because such sensors can robustly sense hand position but not hand orientation, current mid-air interaction methods for 3D virtual object manipulation often require context or mode switching to perform translation, rotation, and scaling, preventing natural, continuous gestural interaction. A novel handle bar metaphor is proposed as an effective visual control mapping between the user's hand gestures and the corresponding virtual object manipulation operations. It mimics the familiar situation of handling objects skewered on a bimanual handle bar. Designing the mid-air interaction around the relative 3D motion of the two hands provides precise controllability despite the Kinect sensor's low image resolution. A comprehensive repertoire of 3D manipulation operations is proposed to manipulate single objects, perform fast constrained rotation, and pack/align multiple objects along a line. Three user studies were devised to demonstrate the efficacy and intuitiveness of the proposed interaction techniques in different virtual manipulation scenarios.
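To illustrate the core idea, the following minimal Python sketch (not the authors' implementation; the function name, tolerances, and use of NumPy are assumptions made for illustration) shows how translation, uniform scaling, and the rotation implied by the change in bar direction could be derived from the two tracked hand positions that define the virtual handle bar between consecutive frames. Rotation about the bar's own axis cannot be recovered from two points alone, so this sketch covers only the component given by the bar direction.

import numpy as np

def handle_bar_transform(left_prev, right_prev, left_curr, right_curr):
    """Derive a translation vector, rotation matrix, and uniform scale
    factor from the relative motion of the two hand positions that form
    the endpoints of the virtual handle bar (all inputs: 3D numpy arrays)."""
    # Translation: displacement of the handle bar's midpoint.
    mid_prev = (left_prev + right_prev) / 2.0
    mid_curr = (left_curr + right_curr) / 2.0
    translation = mid_curr - mid_prev

    # Uniform scaling: ratio of the current to previous hand-to-hand distance.
    dist_prev = np.linalg.norm(right_prev - left_prev)
    dist_curr = np.linalg.norm(right_curr - left_curr)
    scale = dist_curr / dist_prev if dist_prev > 1e-6 else 1.0

    # Rotation: rotate the previous bar axis onto the current bar axis
    # (axis-angle form, converted to a matrix with Rodrigues' formula).
    axis_prev = (right_prev - left_prev) / max(dist_prev, 1e-6)
    axis_curr = (right_curr - left_curr) / max(dist_curr, 1e-6)
    rot_axis = np.cross(axis_prev, axis_curr)
    sin_a = np.linalg.norm(rot_axis)
    cos_a = np.clip(np.dot(axis_prev, axis_curr), -1.0, 1.0)
    if sin_a < 1e-6:
        rotation = np.eye(3)          # bar direction essentially unchanged
    else:
        k = rot_axis / sin_a          # unit rotation axis
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        rotation = np.eye(3) + sin_a * K + (1.0 - cos_a) * (K @ K)
    return translation, rotation, scale

The returned transform would typically be applied about the bar's midpoint so that the manipulated object stays pinned to the virtual handle bar while it is translated, rotated, and scaled.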
Keywords: Virtual Object Manipulation, 3D Virtual Object, Virtual Reality Interfaces.
Title: A Handle Bar Metaphor for Virtual Object Manipulation with Mid-Air Interaction
Author: B. Amala
International Journal of Computer Science and Information Technology Research
ISSN 2348-120X (online), ISSN 2348-1196 (print)