Virtual Reality

Please note, VR is currently only supported on Windows Desktop when using the C++ API and the DirectX 11 or OpenGL 2 drivers.

This section provides an overview of how to use Visualize to render in a VR headset. Most of the source code presented below is available in the hps_openvr_sandbox project, located in the samples/openvr_sandbox directory of the package; this project is also accessible in the samples_v1xx solution located in the root directory of the Visualize package.

In addition, the full source code for the move, select, and scale operators is provided in the hps_openvr_sandbox project. You are encouraged to tailor these operators to meet the specific requirements of your VR applications, or to develop new operators using these as a starting point.

OpenVR

Prerequisites

Visualize supports any VR headset compatible with OpenVR. To add VR to your application, you will need a VR-capable GPU. For current NVIDIA GPUs, this means a GTX 1060 or better; for AMD, an RX 480 or better.

To get started with OpenVR, follow these steps:

  • On your local system, clone OpenVR from the GitHub repository:

    git clone https://github.com/ValveSoftware/openvr
    
  • Set the OPENVR_SDK environment variable to the location of your OpenVR root folder.

  • Ensure that the OpenVR binaries are in your PATH.

  • Install the Steam application from: https://store.steampowered.com

  • Once Steam has been installed, run the application. In the top menu, select “Library”, go to “VR”, and choose “Install SteamVR” to install it on your system.

  • Run the installer for your particular hardware.

  • Establish your play space before launching the application. This is done from SteamVR if you are using a Vive, and from the Oculus app if you are using a Rift.

For Oculus Rift only:

  • Allow developer apps to run on the Oculus by opening the Oculus app, choosing Settings->General->Unknown Sources, and toggling it to ON.

The VR API

The VR API is packaged as source code within the samples_v1xx solution in the hps_openvr_sandbox project. The VR API is currently made up of these files:

  • vr.h/cpp – Basic VR class which handles viewing a model in VR

  • vr_move.h/cpp – VR operator which rotates and translates the model when a button is held down

  • vr_selection.h/cpp – VR operator which selects and highlights an entity when a button is pressed

  • vr_scale.h/cpp – VR operator which scales the whole model based on the distance between the two controllers when a button is pressed on each controller

To use the VR API, simply add the files above to your project. These files are located in samples/vr_shared. Note that while the code in the API is not platform specific, Virtual Reality is currently only supported on Windows.

How to create a VR Application using the VR Class

The VR class handles initializing and shutting down VR, processing events coming from the headset, and keeping all tracked devices and controllers up to date. It can be used both for simply viewing a model and for reacting to events and button presses. The first step is to create a class that derives from VR. The VR class has some pure virtual functions, so those functions must be implemented in the derived class.
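
A minimal sketch of such a derived class is shown below. The exact signatures are assumptions based on the function descriptions later in this section; consult vr.h in the hps_openvr_sandbox project for the authoritative declarations:

#include "vr.h"     // VR base class from samples/vr_shared
#include <openvr.h> // for vr::VREvent_t

class DemoVR : public VR
{
public:
        DemoVR(VROptions const & options) : VR(options) {}

protected:
        // Pure virtual functions of VR; the signatures shown are assumptions,
        // check vr.h for the exact declarations.
        void Setup() override;                                    // one-time setup before the first frame
        void ProcessEvent(vr::VREvent_t const & event) override;  // called for each event from the headset
        void ProcessInput() override;                             // once per frame, after controller input is collected
        void ProcessFrame() override;                             // once per frame, right before presenting
};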

The vr_view option is used in cases where you have a desktop application which has an optional VR mode (as can be seen in the HOOPS Demo Viewer). In such cases, you might want to preserve the current view as it is and show it in VR. This can be done by passing it as an option. If vr_view is not initialized, a brand new HPS::View will be created for VR, and will be disposed of once the VR session is over.
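
For example, a minimal sketch of reusing an existing desktop view (assuming vr_view is a public member of VROptions, as its name here suggests; my_existing_view is a hypothetical HPS::View from the desktop application):

VROptions options;
options.vr_view = my_existing_view; // reuse the current desktop view in VR
DemoVR vr_session(options);         // without this, a new HPS::View is created and destroyed with the session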

Starting a VR Session

A VR session starts once the user calls VR::Start(), and ends when the user calls VR::Stop() or the VR object goes out of scope. Starting a VR session means that the 3D scene will be rendered to the VR headset. Here’s a basic example of starting and stopping a session:

VROptions default_options;
DemoVR vr_session(default_options);
...
vr_session.Initialize();
...
vr_session.Start(); // VR Session starts here
...
vr_session.Stop(); // VR Session ends here

And an example of session lifetime with scoping:

{
        myVR scoped_session(default_options);
        scoped_session.Start(); // VR Session starts here
} // scoped VR Session ends here

The VR session will execute in a loop on a separate thread, and the calling thread will continue to the next instruction.

Initializing a VR Session

Optionally, the user can Initialize the VR session before Starting it (if Initialize is not called beforehand, the session will be initialized automatically when Start is called). The reason to Initialize a VR session before starting it is that after Initialize is called, every part of the VR session is valid, but rendering to the headset has not yet started. This means that users can, for example, check how many controllers are connected, or access the View to make changes to it before rendering starts. For example:

{
        myVR vr_session(default_options);
        vr_session.GetVRView()->GetViewKey(); // WRONG! Session hasn't been initialized yet.
        vr_session.Initialize();
        vr_session.GetVRView()->GetViewKey();
}
{
        myVR other_vr_session(default_options);
        other_vr_session.Start();
        other_vr_session.GetVRView()->GetViewKey(); // OK because the Start() function also initializes the session.
}

Interacting With VR

At this point, we’ve created and started a VR session. If we just wanted to view the model in VR, then nothing else is needed. Most likely, however, we’ll want to interact with the scene in VR, and therefore we’ll need to implement the four functions required in all classes that derive from VR:

  • Setup() – Called only once, before the first frame of a VR session begins. It is intended to contain any initialization of VR-specific objects used by your application.

  • ProcessEvent() – Called every time the VR headset reports a new event, potentially multiple times per frame. You can check the type of event received, handle the events you’re interested in, and ignore the rest.

  • ProcessInput() – Called once per frame, after the input from controllers has been collected and the position of the tracked devices in space has been updated. Use this function to make changes to the VR scene based on controller input.

  • ProcessFrame() – Called once per frame, right before the frame is presented to the headset. This is the place to perform per-frame operations that are not related to controller input.

Sample Implementation

Here is a basic example of how to implement a class derived from the VR class. In this example:

  • the derived class uses the move VR operator to move the model when the trigger is pressed

  • the frame rate is checked at every frame

  • VR events are checked to determine whether the controller we are using ever gets disconnected
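
A sketch of what this might look like, fleshing out the DemoVR skeleton from earlier, is shown below. The MoveOperator class name, its constructor and Attach() parameters, and the use of OpenVR's event types are assumptions made for illustration:

#include "vr.h"      // VR base class from samples/vr_shared
#include "vr_move.h" // move operator from samples/vr_shared
#include <openvr.h>
#include <chrono>

class DemoVR : public VR
{
public:
        DemoVR(VROptions const & options) : VR(options), move_operator(*this) {}

protected:
        void Setup() override
        {
                // Attach the move operator to controller one, activated by the trigger.
                move_operator.Attach(controller_one.device_index, vr::k_EButton_SteamVR_Trigger);
                last_frame_time = std::chrono::steady_clock::now();
        }

        void ProcessEvent(vr::VREvent_t const & event) override
        {
                // Detect whether the controller we are using ever gets disconnected.
                if (event.eventType == vr::VREvent_TrackedDeviceDeactivated &&
                        event.trackedDeviceIndex == controller_one.device_index)
                        move_operator.Detach();
        }

        void ProcessInput() override
        {
                // Let the move operator react to this frame's controller input.
                move_operator.HandleFrame();
        }

        void ProcessFrame() override
        {
                // Check the frame rate at every frame.
                auto now = std::chrono::steady_clock::now();
                double fps = 1.0 / std::chrono::duration<double>(now - last_frame_time).count();
                last_frame_time = now;
                // ... react here if fps drops below a comfortable threshold ...
        }

private:
        MoveOperator move_operator; // hypothetical name for the operator in vr_move.h
        std::chrono::steady_clock::time_point last_frame_time;
};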

For a more detailed implementation of this example, please refer to the hps_openvr_sandbox project source code.

Tracked Devices, Controllers, and the Head-Mounted Display

The base VR class keeps track of all tracked devices (i.e., the tracking towers, the controllers, the headset, etc.) and updates their properties each frame. Users can access data on tracked devices through the tracked_device_data member variable, using the device index associated with the tracked device they are interested in.

This is the data associated with each tracked device:

class TrackedDevice
{
public:
        HPS::SegmentKey instance;               // SegmentKey containing the model for this device. Can be Type::None
        HPS::KeyPath path_to_instance;  // KeyPath going from the VR Canvas to this device
        HPS::MatrixKit pose;                    // The current pose of this device
        HPS::MatrixKit previous_pose;   // The pose this device had during the previous frame
};

For example, the move operator calculates a matrix which represents the difference in position, from the previous frame to the current one, for the device associated with it:

HPS::MatrixKit controller_matrix = vr_session.tracked_device_data[device_index].pose;
HPS::MatrixKit previous_controller_matrix = vr_session.tracked_device_data[device_index].previous_pose;
HPS::MatrixKit previous_controller_matrix_inverse;
previous_controller_matrix.ShowInverse(previous_controller_matrix_inverse);

// get the matrix to transform the old controller pose into the current one
HPS::MatrixKit delta = previous_controller_matrix_inverse * controller_matrix;

A few additional notes about controllers:

  • Controllers are also available directly from the base VR class, through the member variables controller_one and controller_two.

  • Controllers can be valid or invalid. A controller is valid if it is connected.

  • Controllers have a variety of member variables describing their state, as well as a few helper functions designed to make it easy to get information about the controller.

It is important to note that controllers are also part of the tracked devices. This means that the position in space of a controller can be determined by using its device index:

tracked_device_data[controller_one.device_index].pose;

The Head Mounted Display (HMD) is the principal means of accessing the OpenVR API directly. Users will need it almost any time they wish to call OpenVR themselves. The HMD is accessible through the GetHeadMountedDisplay() function.
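
For example, a sketch of a direct OpenVR call through the HMD (assuming GetHeadMountedDisplay() returns OpenVR's vr::IVRSystem pointer):

vr::IVRSystem * hmd = vr_session.GetHeadMountedDisplay();
if (hmd != nullptr)
{
        // Direct OpenVR query: is the headset itself still connected?
        bool headset_connected = hmd->IsTrackedDeviceConnected(vr::k_unTrackedDeviceIndex_Hmd);
}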

Using the sample operators

The sample operators all work similarly. They are initialized with a reference to the VR session so that they have access to the tracked device data. A device index corresponding to a controller then needs to be “attached” to them, along with an enum describing which button will cause the operator to become active.

All of the operators have a HandleFrame() function which should be called once controller input has been collected. This function checks that the button specified in the Attach() function is active before proceeding.

All of the operators also have a Detach() function, which causes the operator to become inactive. These operators are simply suggestions for how interaction with VR can be accomplished; users are of course free to experiment with any paradigm that suits their applications.
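
Putting this together, a sketch of the attach / frame / detach lifecycle (the MoveOperator name and the exact parameter lists are assumptions based on the description above):

MoveOperator move_operator(vr_session); // a reference to the VR session gives access to tracked device data

// Attach a controller's device index, along with the button that activates the operator.
move_operator.Attach(vr_session.controller_one.device_index, vr::k_EButton_SteamVR_Trigger);

// Once per frame, after controller input has been collected:
move_operator.HandleFrame(); // does nothing unless the attached button is active

// When the interaction should end:
move_operator.Detach(); // the operator becomes inactive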

OpenVR sandbox

The OpenVR sandbox application uses the API discussed above and is a good starting point for developing a VR application with Visualize. The source code for this app is available and provides a template for a custom VR class using all three operators. Here are a few notes about the OpenVR sandbox application:

  • It accepts two parameters: -filename and -driver. Valid values for the -driver parameter are directx11 and opengl2; if no value is provided, it defaults to directx11 (see the sample invocation after this list).

  • There are three operators: move, scale, and selection.

    • Move is mapped to the trigger of the right controller

    • Select is mapped to the trigger of the left controller

    • Scale is enabled when both grips are pressed

  • The current FPS is shown in the window
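
For example, a sample invocation (the executable and model file names shown are placeholders):

hps_openvr_sandbox.exe -filename my_model.hsf -driver directx11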

Considerations

  • Performance is the main concern when using VR. For a smooth experience, 60 fps should be considered the minimum; lower or jumpy frame rates can make the user feel sick. Because of this, the main render loop should be as bare-bones as possible. If you need to perform expensive calculations, it is best to perform them on a separate thread.

  • When in VR mode, you should expect the camera to be in constant motion, since it is tied to the position of the VR headset. As a consequence, some of the approaches used in highlighting and in the default operators will not work in VR mode. For example:

    • The default orbit operator works by modifying the camera position. Since the position of the camera depends on the VR headset, it is necessary to change the modelling matrix instead (see the sketch after this list).

    • Because of the constant camera changes, Visualize will not cache the status of rendering buffers between updates. This means that the HPS::Drawing::Overlay highlighting heuristic, which relies on these cached buffers, cannot be used effectively: highlights using it in a VR application will be drawn sub-optimally.

    • Some view-dependent geometry, like non-transformable text and patterned lines, will not transform correctly in stereo mode. This creates artifacts in which two instances of the geometry appear to be visible unless the user closes one eye.
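
As an illustration of the modelling matrix approach mentioned above, here is a sketch which applies a controller's frame-to-frame delta to the model instead of moving the camera. The accessor chain and the Concatenate() call are assumptions based on common HPS patterns; verify them against the HPS reference manual and the move operator source:

// Compute the controller's frame-to-frame delta, as shown in the tracked device section.
HPS::MatrixKit delta = previous_controller_matrix_inverse * controller_matrix;

// Apply the delta to the model's segment rather than to the camera,
// since in VR the camera is driven by the headset.
HPS::SegmentKey model_segment = vr_session.GetVRView()->GetAttachedModel().GetSegmentKey(); // assumed accessor chain
model_segment.GetModellingMatrixControl().Concatenate(delta); // assumed call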