In computing, 3D interaction is a form of human-machine interaction in which users move and interact in 3D space. Both human and machine process information where the physical position of elements in the 3D space is relevant.
The 3D space used for interaction can be the real physical space, a virtual space representation simulated in the computer, or a combination of both. When the real space is used for data input, humans perform actions or give commands to the machine using an input device that detects the 3D position of the human action. When it is used for data output, the simulated 3D virtual scene is projected onto the real environment through one output device or a combination of them.
The following sections present some of the techniques used in 3D interaction applications.
3D interaction techniques
3D interaction techniques are methods used to execute different types of tasks in 3D space. Techniques are classified according to the tasks they support.
Selection and manipulation
Users need to be able to manipulate virtual objects. Manipulation tasks involve selecting and moving an object. Sometimes, rotation of the object is
involved as well. Direct-hand manipulation is the most natural technique because manipulating physical objects with the hand is intuitive for humans.
However, this is not always possible. A virtual hand that can select and re-locate virtual objects will work as well.
3D widgets can be used to put controls on objects: these are usually called 3D gizmos or manipulators (a good example being the ones in Blender). Users can
employ these to re-locate, re-scale or re-orient an object (Translate, Scale, Rotate).
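Under the hood, dragging a gizmo handle applies an axis-constrained transform to the object. A minimal plain-Python sketch of the three operations (all function names here are illustrative, not Blender's API):

```python
import math

def translate(p, axis, amount):
    """Move point p along a unit axis by `amount` (Translate handle)."""
    return tuple(c + a * amount for c, a in zip(p, axis))

def scale(p, pivot, factor):
    """Uniformly scale p about a pivot point (Scale handle)."""
    return tuple(pv + (c - pv) * factor for c, pv in zip(p, pivot))

def rotate_z(p, pivot, angle):
    """Rotate p about the Z axis through `pivot` (Rotate handle, one axis)."""
    x, y, z = (c - pv for c, pv in zip(p, pivot))
    ca, sa = math.cos(angle), math.sin(angle)
    return (pivot[0] + x * ca - y * sa,
            pivot[1] + x * sa + y * ca,
            pivot[2] + z)

# Dragging the X handle of the translate gizmo by 2 units:
# translate((1.0, 0.0, 0.0), (1.0, 0.0, 0.0), 2.0) -> (3.0, 0.0, 0.0)
```

A full manipulator composes these into the object's 4x4 transform matrix, but the per-handle maths is essentially the above.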
Other techniques include the Go-Go technique and ray casting, where a virtual ray is used to point to and select an object. More recently, Richard White in
Kansas has spent three years researching and developing a 3D natural user interface known as Edusim, which combines interactive surfaces and classroom
interactive whiteboards for grade-school students.
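Both techniques reduce to simple geometry. A hedged sketch: the Go-Go technique maps real hand distance to virtual hand distance non-linearly beyond a threshold so the user can reach far objects, and ray-casting selection tests the pointing ray against an object's bounding sphere (the threshold `d` and gain `k` below are illustrative tuning values, not canonical):

```python
import math

def gogo_extend(r, d=0.6, k=4.0):
    """Go-Go mapping: real hand distance r (metres from the torso) to
    virtual hand distance. Linear inside d, quadratic growth beyond it."""
    return r if r < d else r + k * (r - d) ** 2

def ray_sphere_hit(origin, direction, center, radius):
    """Ray-casting selection: return the distance along a unit-length ray
    at which it first hits a bounding sphere, or None on a miss."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None                  # ray misses the sphere
    t = -b - math.sqrt(disc)
    return t if t >= 0 else None     # hit behind the origin also counts as a miss
```

For example, a ray from the eye at (0, 0, 0) looking down +Z hits a unit sphere centred at (0, 0, 5) at distance 4; the nearest hit among all objects becomes the selected one.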
Navigation
The computer needs to provide the user with information regarding location and movement. Navigation tasks have two components: travel and wayfinding. Travel
involves moving from the current location to the desired point. Wayfinding refers to finding and setting routes to reach a travel goal within the virtual
environment.
Wayfinding: Wayfinding in virtual space is more difficult than in the real world because synthetic environments often lack perceptual cues and movement
constraints. It can be supported with user-centred techniques, such as a larger field of view and motion cues, or with environment-centred techniques, such
as structural organization and wayfinding principles.
Travel: Good travel techniques allow the user to move easily through the environment. There are three types of travel tasks, namely exploration, search,
and manoeuvring. Travel techniques can be classified into the following five categories:
Physical movement – the user's body movement is used to travel through the virtual world
Manual viewpoint manipulation – the user's hand motions are used to achieve movement
Steering – direction specification
Target-based travel – destination specification
Route planning – path specification
System control
Tasks that involve issuing commands to the application in order to change system mode or activate some functionality fall under the category of system
control. Techniques that support system-control tasks in three dimensions are classified as:
Graphical menus
Voice commands
Gestural interaction
Virtual tools with specific functions
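Whatever the input modality, a system-control layer typically reduces to mapping a discrete command token, from a menu pick, a recognised voice keyword, or a gesture, to an application function. A hypothetical sketch (all command names are illustrative):

```python
def make_dispatcher():
    """Build a command dispatcher that maps command tokens to actions
    mutating a small application state. Returns the dispatch function."""
    state = {"mode": "select"}
    commands = {
        "menu:wireframe": lambda: state.update(mode="wireframe"),
        "voice:select":   lambda: state.update(mode="select"),
        "gesture:rotate": lambda: state.update(mode="rotate"),
    }
    def dispatch(cmd):
        """Run the action bound to cmd, if any; return the current mode."""
        if cmd in commands:
            commands[cmd]()
        return state["mode"]
    return dispatch

# dispatcher = make_dispatcher()
# dispatcher("menu:wireframe")  -> "wireframe"
# dispatcher("unknown")         -> "wireframe"  (unrecognised commands are ignored)
```

The point of the classification above is only where the token comes from; the dispatch step is the same for menus, voice, gestures, and virtual tools.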
Symbolic input
This task allows the user to enter and/or edit, for example, text, making it possible to annotate 3D scenes or 3D objects.
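A common data model for such annotations is a note anchored to an object and a 3D position; a minimal sketch (the structure and field names are illustrative):

```python
# Store text annotations anchored to a named object and a 3D position,
# so they can be rendered next to the object in the scene.
annotations = []

def annotate(target, position, text):
    """Attach a text note to `target` at a 3D `position` and return it."""
    note = {"target": target, "pos": position, "text": text}
    annotations.append(note)
    return note

# annotate("cube.001", (1.0, 0.5, 0.0), "check this edge loop")
```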