This section provides an overview of how to use Visualize to render to a VR headset. Most of the source code presented below is available in the openvr_simple sandbox, which is included with the Visualize package; this project is also accessible in the samples_v1xx solution located in the root directory of the Visualize package.
In addition, the full source code for the move, select, and scale operators is provided in the openvr_simple project. You are encouraged to tailor these operators to meet the specific requirements of your VR applications, or to develop new operators using these as a starting point.
Please note: AR/VR is currently only supported on Windows Desktop when using the C++ API and the DirectX 11 or OpenGL 2 drivers. To use VR with DirectX 11 on Windows 7 machines, please make sure Windows Update KB2670838 is installed. Additionally, stereo rendering will not work on machines that do not support DirectX 11.1.
OpenVR
Visualize supports any VR headset compatible with OpenVR. In order to add VR to your application, the following items are required:
- A VR-capable GPU. For current NVIDIA GPUs, this means a GTX 1060 or better; for AMD, an RX 480 or better.
To get started with OpenVR, follow these steps:
For Oculus Rift only:
The VR API is packaged as source code within the samples_v1xx solution in the openvr_simple_v1xx project.
The VR API is currently made up of these files:
| File name | Description |
|---|---|
| vr.h/cpp | Basic VR class which handles viewing a model in VR |
| vr_move.h/cpp | VR operator – rotates and translates the model when a button is held down |
| vr_selection.h/cpp | VR operator – selects and highlights an entity when a button is pressed |
| vr_scale.h/cpp | VR operator – scales the whole model based on the distance between the two controllers when a button is pressed on each controller |
To use the VR API, simply add the files above to your project.
These files are located in hoops_3df/Dev_Tools/hoops_vr.
Note that while the code in the API is not platform specific, Virtual Reality is currently only supported on Windows.
This section covers how to create a VR app and start a VR session.
The VR class handles initializing and shutting down VR, processing events coming from the headset, and keeping all tracked devices and controllers up to date.
It can be used both to simply view a model and to react to events and button presses.
The first step in creating a VR object is to define a class that derives from the VR class. The VR class has some pure virtual functions, so the derived class must implement them:
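A minimal sketch of such a derived class; the exact signatures here are assumptions based on this section, so check vr.h in the openvr_simple project for the authoritative declarations:

```cpp
#include "vr.h"  // the sample's VR class; assumed to bring in openvr.h

class MyVR : public VR
{
public:
    using VR::VR;  // reuse the base class constructors

    // The four pure virtual functions described later in this section:
    void Setup() override;                             // one-time initialization
    void ProcessEvent(vr::VREvent_t const &) override; // per headset/controller event
    void ProcessInput() override;                      // per-frame controller input
    void ProcessFrame() override;                      // per-frame, before presenting
};
```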
When creating a VR object, a set of options should be passed to it.

The `vr_model` option is used in cases where you have a desktop application with an optional VR mode (as can be seen in the HOOPS Demo Viewer). In such cases, you might want to preserve the current view as it is and show it in VR; this can be done by passing it as an option. If `vr_model` is not initialized, a brand new `HBaseModel` will be created for VR, and will be disposed of once the VR session is over.
A VR session starts once the user calls `VR::Start()`, and ends when the user calls `VR::Stop()` or the VR object goes out of scope.
Starting a VR session means that the 3D scene will be rendered to the VR headset. Here's a basic example of starting and stopping a session:
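A minimal sketch, assuming `MyVR` derives from the VR class as shown above and that its constructor takes the options discussed earlier:

```cpp
MyVR vr_session(options);  // 'options' stands in for the options described above

vr_session.Start();        // rendering to the headset begins

// ... the user works in VR ...

vr_session.Stop();         // rendering to the headset ends
```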
And an example of session lifetime with scoping:
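Under the same assumptions as above:

```cpp
{
    MyVR vr_session(options);
    vr_session.Start();

    // ... the VR session is active inside this scope ...

}  // vr_session goes out of scope here, which ends the session
```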
The VR session will execute in a loop on a separate thread, and the calling thread will continue to the next instruction.
Initializing a VR session
Optionally, the user can decide to Initialize the VR session before Starting it (if Initialize is not called beforehand, the session will be initialized automatically when Start is called). The reason to Initialize a VR session before starting it is that after Initialize is called, every part of the VR session is valid, but rendering to the headset has not yet started. This means that users can, for example, check how many controllers are connected, or access the View to make changes to it before rendering starts.
Example:
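A sketch under the same assumptions as above, plus the assumption that `controller_one` (described later in this section) is a public member holding an OpenVR device index:

```cpp
MyVR vr_session(options);

vr_session.Initialize();  // the session is now valid, but not yet rendering

// Everything in the session can be inspected or changed before rendering
// starts; for example, check whether a controller is connected:
if (vr_session.controller_one != vr::k_unTrackedDeviceIndexInvalid)
{
    // a controller is present; attach operators, adjust the view, etc.
}

vr_session.Start();       // rendering to the headset begins
```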
At this point, we've created and started a VR session. If we just wanted to view the model in VR, then nothing else is needed.
Most likely, however, we'll want to interact with the scene in VR, and therefore we'll need to implement the four functions required in all classes that derive from VR:
| Function | Description |
|---|---|
| Setup() | This function is called only once, before the very first frame of a VR session begins. This is intended to contain any initialization of VR-specific objects you're using in your application. |
| ProcessEvent() | You can check the type of event received, handle the events you're interested in, and ignore the rest. |
| ProcessInput() | You can use this function to make changes to the VR scene based on controller input. |
| ProcessFrame() | ProcessFrame() is called once per frame, right before the frame is presented to the headset. This is a place where you can perform per-frame operations that are not related to controller inputs. |
Here is a basic example of how to implement a class derived from the VR class. In this example:

- The `move` VR operator is used to move the model when the trigger is pressed. (For a more detailed implementation of this example, please refer to the `openvr_simple` sandbox source code.)

Example code:
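The sketch below assumes the operator interface described later in this section; `VRMoveOperator` and the exact signatures are illustrative names, so consult the openvr_simple sources for the real declarations:

```cpp
#include "vr.h"
#include "vr_move.h"

class MyVR : public VR
{
public:
    using VR::VR;

    void Setup() override
    {
        // Bind the move operator to the first controller; the operator
        // becomes active while the trigger is held down.
        move_operator.Attach(controller_one, vr::k_EButton_SteamVR_Trigger);
    }

    void ProcessEvent(vr::VREvent_t const &) override
    {
        // No event handling is needed for this example.
    }

    void ProcessInput() override
    {
        // Controller input has been collected; let the operator react to it.
        move_operator.HandleFrame();
    }

    void ProcessFrame() override
    {
        // No extra per-frame work is needed for this example.
    }

private:
    VRMoveOperator move_operator{*this};  // hypothetical type name; operators
                                          // hold a reference to the VR session
};
```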
The base VR class keeps track of all tracked devices (i.e., the tracking towers, the controllers, the headset, etc.), and updates their properties each frame. Users can access data on tracked devices through the member variable `tracked_device_data`, using the device index associated with the tracked device they are interested in.
This is the data associated with each tracked device:
Example:
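A sketch of iterating over the tracked device data, assuming `tracked_device_data` is indexed by OpenVR device index (the exact fields are declared in vr.h):

```cpp
for (vr::TrackedDeviceIndex_t i = 0; i < vr::k_unMaxTrackedDeviceCount; ++i)
{
    auto const & device = tracked_device_data[i];
    // use the per-device data (pose, connection state, etc.) for this frame
}
```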
The move operator calculates a matrix representing the difference in position of its associated device from the previous frame to the current one:
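A self-contained sketch of that computation (not the sample's actual code), assuming row-major 4×4 rigid poses in HOOPS's row-vector convention, with rotation in the upper 3×3 and translation in elements 12-14:

```cpp
#include <cstring>

// Invert a rigid pose: transpose the rotation, then rotate and negate
// the translation.
static void InvertRigidPose(float const m[16], float out[16])
{
    std::memset(out, 0, 16 * sizeof(float));
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            out[r * 4 + c] = m[c * 4 + r];
    for (int c = 0; c < 3; ++c)
        out[12 + c] = -(m[12] * out[c] + m[13] * out[4 + c] + m[14] * out[8 + c]);
    out[15] = 1.0f;
}

// Standard row-major 4x4 multiply: result = a * b.
static void MultiplyMatrices(float const a[16], float const b[16], float result[16])
{
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
        {
            result[r * 4 + c] = 0.0f;
            for (int k = 0; k < 4; ++k)
                result[r * 4 + c] += a[r * 4 + k] * b[k * 4 + c];
        }
}

// delta = inverse(previous_pose) * current_pose. Applying 'delta' to the
// model's modelling matrix moves it by the controller's frame-to-frame motion.
void ComputeDelta(float const previous_pose[16], float const current_pose[16], float delta[16])
{
    float inverse_previous[16];
    InvertRigidPose(previous_pose, inverse_previous);
    MultiplyMatrices(inverse_previous, current_pose, delta);
}
```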
The VR class also provides two member variables that hold the device indices of the controllers: `controller_one` and `controller_two`. It is important to note that controllers are also part of the tracked devices. This means that the position in space of a controller can be determined by using its device index:
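For example (a sketch; the pose field name is an assumption, with the layout matching the matrix convention above):

```cpp
auto const & controller = tracked_device_data[controller_one];
float const * pose = controller.pose;            // hypothetical field name
float x = pose[12], y = pose[13], z = pose[14];  // translation row of the 4x4 pose
```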
The Head Mounted Display (hmd) is the principal means of accessing the OpenVR API directly. Users will need it almost any time they wish to call OpenVR themselves.
The hmd is accessible through the `GetHeadMountedDisplay()` function.
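A sketch of using it, assuming `GetHeadMountedDisplay()` returns the `vr::IVRSystem` pointer the VR class obtained from OpenVR:

```cpp
vr::IVRSystem * hmd = GetHeadMountedDisplay();
if (hmd != nullptr)
{
    // e.g., ask OpenVR directly what kind of device an index refers to
    vr::ETrackedDeviceClass device_class = hmd->GetTrackedDeviceClass(controller_one);
    // ... use device_class ...
}
```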
The sample operators all work similarly. They need to be initialized with a reference to the VR session, so that they have access to the tracked device data. A device index that corresponds to a controller needs to be "Attached" to them, along with an `enum` describing which button will cause the operator to become active.
All of the operators have a `HandleFrame()` function which should be called once controller input has been collected. This function checks that the button specified in the `Attach()` function is active before proceeding.
They all have a `Detach()` function which causes the operator to become inactive. These are simply suggestions for how interaction with VR can be accomplished; users are of course free to experiment with any paradigm that suits their applications.
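Putting the pieces together, a sketch of that lifecycle, with the operator type name and signatures illustrative as above:

```cpp
// Inside a member function of your derived VR class:
VRMoveOperator move_op(*this);  // operators are initialized with the VR session

// Activate: bind the operator to a controller and a button
move_op.Attach(controller_one, vr::k_EButton_SteamVR_Trigger);

// Each frame, after controller input has been collected (typically from
// ProcessInput()); the operator only acts while its button is active:
move_op.HandleFrame();

// Deactivate the operator when it is no longer needed
move_op.Detach();
```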
The OpenVR Simple sandbox application uses the API discussed above and is a good starting point for developing a VR application with Visualize. The source code for this app is available, and provides a template for a custom VR class using all three operators.
Here are a few notes about the OpenVR Simple application:
- The application accepts two command-line parameters: `-filename` and `-driver`. Valid values for the `-driver` parameter are `directx11` and `opengl2`. If no value is provided for the `-driver` option, it defaults to `directx11`.

> When in VR mode, you should expect that the camera is in constant movement, since it is tied to the position of the VR headset. As a consequence of this, some of the approaches used in highlighting and the default operators will not work in VR mode.
For example, any logic that caches camera-dependent values will immediately go stale. A sketch, with assumed accessor names, of re-reading the camera each frame inside `ProcessFrame()`:
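```cpp
void MyVR::ProcessFrame()
{
    // The headset has already moved the camera for this frame, so read it
    // fresh here instead of reusing a value captured earlier.
    HCamera camera;                 // MVO camera settings object
    GetView()->GetCamera(&camera);  // hypothetical accessor names

    // ... camera-dependent logic (e.g., custom highlighting) goes here ...
}
```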