The key element of the project is the First Person (FP) controller asset. In Unity, a controller is the object that governs the camera and its movement. The VR controller resembles a standard FP controller, as provided by Unity, in its first-person point of view (POV) and its physically modelled properties and behaviour. The list of differences between the two, however, is considerably longer.
While standard controllers expect input from axis sources, e.g. a mouse or gamepad joysticks, the view of a VR controller should be driven solely by the head-tracking of the HMD. This requires some changes to the underlying model, since a connection between the tracker and the camera object has to be established and the correct transformation sequences have to be applied.
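As a minimal sketch of this connection, assuming Unity's InputTracking API from the UnityEngine.XR namespace (the original project would have relied on the Oculus integration of its time, so the class name and structure here are purely illustrative), the tracked head rotation can be applied directly to the camera transform instead of any axis input:

```csharp
using UnityEngine;
using UnityEngine.XR;

// Illustrative sketch: the camera orientation comes from the HMD tracker
// instead of mouse/joystick axes. The class name is an assumption and not
// taken from the original asset.
public class VRHeadLook : MonoBehaviour
{
    void Update()
    {
        // Apply the tracked head rotation in local space, so the parent
        // FP controller can still handle body yaw and translation.
        transform.localRotation = InputTracking.GetLocalRotation(XRNode.Head);
    }
}
```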
As explained in the "Best Practice" section, head-bobbing, motion blur, and similar effects that add realism to classic first-person games should be disabled, or removed completely, so that they cannot be re-enabled in future iterations.
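One way to enforce this, sketched here under the assumption that the offending effects are ordinary Behaviour components on the camera (the class and field names are illustrative), is to collect them in the Inspector and switch them off once at start-up:

```csharp
using UnityEngine;

// Sketch: comfort-breaking effects (head-bob, motion blur, ...) are assigned
// in the Inspector and disabled once at start-up, so they cannot be
// re-enabled by accident later on. Names are illustrative.
public class DisableComfortBreakingEffects : MonoBehaviour
{
    // Drag the head-bob / motion-blur components of the controller asset here.
    public Behaviour[] effectsToDisable;

    void Awake()
    {
        foreach (Behaviour effect in effectsToDisable)
        {
            if (effect != null)
                effect.enabled = false;
        }
    }
}
```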
The FP controller must not only simulate realistic head movement but also place the camera realistically, ideally with respect to the user's size. Since the variation in eye height among seated users remains relatively small, the height can be predefined to a uniform value. The external head-tracking device of the Oculus Rift HMD also registers vertical movement (e.g. standing up or stretching), which is passed on correctly to the FP controller. As with most features, this serves to synchronise the visual and vestibular input, thereby reducing nausea. Similarly, moving the head forward or backward could allow the user to inspect an object more closely, yet in a rotating scene it poses the threat of exposing objects that have been culled (either by backface culling or frustum culling).
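The following sketch illustrates this behaviour under the same InputTracking assumption as above: the tracked head position is added on top of a predefined seated eye height, and the forward lean is clamped so that the camera cannot reach geometry that is already culled. The field names and concrete values are assumptions, not taken from the project.

```csharp
using UnityEngine;
using UnityEngine.XR;

// Illustrative sketch of positional head tracking on top of a fixed seated
// eye height; all names and values are assumptions.
public class VRHeadPosition : MonoBehaviour
{
    public float seatedEyeHeight = 1.2f;  // uniform height for a seated user
    public float maxForwardLean = 0.25f;  // metres the head may move forward/backward

    void Update()
    {
        Vector3 head = InputTracking.GetLocalPosition(XRNode.Head);
        // Limit how far the user can lean towards (and into) nearby objects.
        head.z = Mathf.Clamp(head.z, -maxForwardLean, maxForwardLean);
        transform.localPosition = new Vector3(head.x, seatedEyeHeight + head.y, head.z);
    }
}
```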
If constrained correctly, this could be an extension for future projects. For exploration scenes, it is already included.
Importantly, the FP controller also handles most of the triggers for actions such as a scene change or the UI placement.
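A simplified example of such a trigger is sketched below; it assumes a trigger volume that loads a new scene when the FP controller (tagged "Player") enters it. The tag, field and scene names are placeholders, and the original project may well have used the older Application.LoadLevel call instead of SceneManager.

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Illustrative scene-change trigger: placed on a trigger volume in the scene,
// it reacts when the FP controller walks into it. All names are placeholders.
public class SceneChangeTrigger : MonoBehaviour
{
    public string targetScene = "NextScene";

    void OnTriggerEnter(Collider other)
    {
        // Only react to the FP controller entering the trigger volume.
        if (other.CompareTag("Player"))
            SceneManager.LoadScene(targetScene);
    }
}
```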
The UI is included as a child of the FP controller and therefore moves along with the user.
Camera objects are also included as child objects. These provide six anchor points to which various objects or triggers can be attached.
For example, the central anchor serves as the reference position for the "eye lids" and renders the image shown on the computer screen, while other anchors hold the left-eye and right-eye renderings for the Oculus Rift. These anchors may be extended by future projects.
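Attaching an object to one of these anchors could look as follows; the anchor name is an assumption and would have to match the child transforms of the camera rig in the actual asset.

```csharp
using UnityEngine;

// Sketch: parent an object (e.g. the UI root) to a named anchor point under
// the camera object, so that it moves with the user. Names are illustrative.
public class AttachToAnchor : MonoBehaviour
{
    public Transform cameraRig;           // the camera object inside the FP controller
    public string anchorName = "CenterAnchor";
    public Transform objectToAttach;      // e.g. the UI root

    void Start()
    {
        Transform anchor = cameraRig.Find(anchorName);
        if (anchor != null)
        {
            // Keep the local pose, so the object sits exactly on the anchor.
            objectToAttach.SetParent(anchor, false);
            objectToAttach.localPosition = Vector3.zero;
        }
    }
}
```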
To avoid conflicting behaviour, the properties should be left mostly unchanged, aside from obvious ones such as height and speed.
In the current settings, the lateral movement of the controller is significantly slower than its forward/backward movement in order to reduce discomfort. This is also more realistic when compared to average movement in the real world, which is forward most of the time. It might, however, unsettle users familiar with classic games, in which lateral or backward movement is quite frequent.
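A sketch of this asymmetry, assuming a standard CharacterController and illustrative speed values, is given below; the input axes are simply scaled differently before being handed to the controller.

```csharp
using UnityEngine;

// Illustrative sketch: lateral (strafe) input is scaled down relative to
// forward/backward input. The concrete speed values are assumptions.
[RequireComponent(typeof(CharacterController))]
public class AsymmetricMove : MonoBehaviour
{
    public float forwardSpeed = 3.0f;   // m/s forward and backward
    public float lateralSpeed = 1.0f;   // deliberately slower sideways

    void Update()
    {
        CharacterController controller = GetComponent<CharacterController>();
        Vector3 input = new Vector3(
            Input.GetAxis("Horizontal") * lateralSpeed,
            0f,
            Input.GetAxis("Vertical") * forwardSpeed);
        // SimpleMove expects a velocity in world space and applies gravity itself.
        controller.SimpleMove(transform.TransformDirection(input));
    }
}
```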
Figure 1: First Person controller with bounding box and placement of the camera object inside. The yellow cubes are exemplary path objects, as described in "Information Presentation".
As can be observed, the actual position of the camera is centered inside the bounding box. This allows a certain degree of freedom when moving in the vertical direction. Since the bounding box does not extend or move during non-positional movements (by this we mean movements performed only with the HMD, not with a controller), a certain safety margin against clipping is provided.
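Such a safety margin could be enforced by clamping the camera's local offset against the controller's radius, as sketched below; the component references and the use of the near-clip plane as the margin are assumptions made for illustration.

```csharp
using UnityEngine;

// Sketch: keep the tracked camera offset inside the FP controller's bounding
// capsule, minus the near-clip distance, so head-only movement cannot push
// the view through nearby geometry. Names are illustrative.
public class ClampCameraToBounds : MonoBehaviour
{
    public CharacterController body;   // the FP controller's bounding capsule
    public Camera eyeCamera;

    void LateUpdate()
    {
        float margin = Mathf.Max(0f, body.radius - eyeCamera.nearClipPlane);
        Vector3 offset = transform.localPosition;
        Vector3 horizontal = new Vector3(offset.x, 0f, offset.z);
        if (horizontal.magnitude > margin)
        {
            horizontal = horizontal.normalized * margin;
            transform.localPosition = new Vector3(horizontal.x, offset.y, horizontal.z);
        }
    }
}
```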