The HOOPS/OOC module is an out-of-core system that processes and renders large amounts of data. The system is designed to take ASCII point cloud files as its input. In a multi-stage process, HOOPS/OOC transforms the raw point cloud data into a compressed format and renders it with high visual quality without sacrificing performance.
To begin, a preprocessor loads the raw input data, then restructures and compresses it. The preprocessor creates an OOC file (a special-purpose HSF file) as well as a directory of data node files. This processed data can be as small as a third of the size of the original raw input. The resulting OOC file can be loaded and rendered in the HOOPS graphics system.
HOOPS/OOC is supported on Windows platforms. C++ is the only supported language.
HOOPS/OOC provides a simple command line executable for processing your data. It is called ooc.exe and can be found in the bin directory of the HOOPS installation.
When the preprocessor receives a point cloud file, it performs three passes over the data. In each pass, every data point is examined, so this stage can take a significant amount of time. We suggest that you allow for sufficient preprocessing time before loading and rendering your model.
The following list describes the three phases of input data preprocessing:
HOOPS/OOC loads in all the data points and calculates a bounding box that contains the entire data set. One or more temporary binary file(s) are created to house the data for rapid access in subsequent phases.
HOOPS/OOC reads the temporary binary file(s) and then spatially sorts the points. If a user-specified bounding box was defined, HOOPS/OOC will exclude any points outside of this volume. Additionally, users can specify a subsample percentage that tells HOOPS/OOC to sort only a portion of the points in the dataset.
HOOPS/OOC uses a variant of the binary space partitioning algorithm to sort the data. For each partition, it creates a node that stores a user-specifiable number of points. This number becomes the size of the shell that is later loaded into the HOOPS database. If the number of data points in a node exceeds the limit, HOOPS/OOC subdivides the node further until no node exceeds the threshold.
While sorting the points, HOOPS/OOC adheres to a memory limit. If the memory usage reaches the limit, HOOPS/OOC writes the least recently used nodes out to disk to free up memory so that it can continue processing the remaining data.
To preprocess a point cloud file, run ooc.exe from the command line. This program takes a number of options - but the most important is the input file name, which is passed with the -f option. The following example shows how you might preprocess the MyPointCloud.ptx ASCII point cloud file:
> ooc -f ../data/MyPointCloud.ptx -l ../data/MyPointCloudLog.txt
The above command line entry tells ooc.exe to process the ../data/MyPointCloud.ptx file. In the data directory, it creates a MyPointCloud_ptx.ooc file along with a MyPointCloud_ptx_nodes directory that contains a series of *.node data files. It also creates a MyPointCloudLog.txt log file. The preprocessor takes a number of options that let you control how your data is analyzed and organized. They are as follows:
HOOPS Visualize provides a set of API functions for point cloud operations. To use the point cloud API, you must include the file PointCloudAPI.h. All point cloud classes are members of the ooc namespace.
This API provides:
IMPORTANT: In order for the HOOPS/OOC module to work properly you need to enable multithreading:
HC_Define_System_Options("multi-threading = full");
Once an OOC file and its associated directory of node files have been preprocessed, the OOC file can be loaded into the HOOPS Visualize scene graph.
The following workflow describes what happens when an OOC file is loaded into HOOPS Visualize:
Because the loading and storing of data in memory can be a highly intensive process, HOOPS implements several mechanisms to balance the throughput of data with system performance. HOOPS/3dGS and HOOPS/OOC work in concert to control the amount of data loaded at any given time. Additionally, HOOPS/3dGS also carefully manages the amount of memory used. As parts of the model move out of the view frustum or are extent culled, HOOPS/3dGS depopulates the data from the associated segments. Geometry is flushed from memory. As areas become visible again, data is reloaded. HOOPS/3dGS uses caching logic so that in situations where the camera is panning back and forth or zooming in and out, data is not disposed of prematurely.
The HOOPS/OOC loader is built as an HIO plug-in. If you are familiar with the HIO paradigm or use other HIO plug-ins, you can use this method to keep all your module loaders consistent. Alternatively, you may skip the OOC loader and read the file directly using FileInputByKey. Both methods are described below.
The HOOPS/OOC HIO plug-in lets you import OOC files into HOOPS Visualize. It is part of the HIO Plug-in architecture. To use this integration, you must be using the HOOPS/MVO module.
To use the HOOPS/OOC HIO plug-in, please follow the steps below:
During start-up, when HOOPS/MVO finds the HOOPS/OOC HIO plug-in in your application's path, it will perform the following steps:
The input handler that is registered with the HIOManager is HIOUtilityOOC. You can load files directly with this input handler as shown in the sample code below:
HIOUtilityOOC ooc_reader;
HInputHandlerOptions ooc_opt;
ooc_opt.m_pHBaseView = view;
HFileInputResult result = ooc_reader.FileInputByKey(filename, &ooc_opt);
If you do not wish to use the HIO plug-in to import your file, you may alternatively import it directly using ooc::io::FileInputByKey. Both methods require the OOC model to be associated with an HBaseView, and both provide identical results.
HInputHandlerOptions input_options;
input_options.m_pHBaseView = view;
ooc::io::IOResult result = ooc::io::FileInputByKey(filename, dest_segment_key, input_options);
The point cloud data of a single file is stored in a HOOPS scene graph under a segment called "root", which is the root of all its point cloud data. No two point clouds may share the same point cloud root. That is, the following scene graph structure is not allowed:
./ooc-file-1/root/ooc-file-2/root
To work around this, one may store the roots in different segments as follows:
./ooc-file-1/root
./ooc-file-2/root
Alternatively, you can preprocess a set of point clouds into a single OOC file.
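As a sketch of the first approach, you can open a separate destination segment for each file and pass that segment's key to ooc::io::FileInputByKey. The segment names and file paths below are illustrative only, and the wide-string literals assume a wide-character filename parameter:

HInputHandlerOptions input_options;
input_options.m_pHBaseView = view;

// Load each point cloud under its own segment so that the "root" segments do not nest.
HC_KEY first_key = HC_KOpen_Segment("ooc-file-1");
HC_Close_Segment();
ooc::io::IOResult first_result = ooc::io::FileInputByKey(L"../data/first.ooc", first_key, input_options);

HC_KEY second_key = HC_KOpen_Segment("ooc-file-2");
HC_Close_Segment();
ooc::io::IOResult second_result = ooc::io::FileInputByKey(L"../data/second.ooc", second_key, input_options);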
Each point cloud instance has its own corresponding environment, represented by the ooc::Env object. The Env is a logical group of nodes and is used in many of the API functions to identify which point cloud the function should operate on. The developer will never instantiate their own Env - HOOPS/OOC will create it automatically.
As part of the preprocessor algorithm, HOOPS Visualize will logically divide a point cloud into nodes. A node contains a set of spatially-related points. ooc::NodeHandle is the interface to use to interact with each node. Using NodeHandle, you can interact with individual points by performing operations such as counting and deleting.
HOOPS/OOC will only load points into the scene graph if they are within the frustum of the camera. However, it is not possible to know which points are viewable before performing viewing calculations, which are computed as part of the normal update process. Therefore, in order to cause a point cloud to be displayed, you must trigger an update, wait for the OOC module to populate its segments with points, then call update again to actually display the points.
The ooc::PollNodesLoadingOrHaveBeenLoaded function is available to determine whether all viewable points have been loaded or whether some are still waiting to load. One possible method for performing this task is shown in the following code snippet:
static void update (HBaseView & view, ooc::Env env)
{
    view.Update(); // trigger OOC nodes to start populating

    // test if nodes are finished loading
    while (ooc::PollNodesLoadingOrHaveBeenLoaded(env)) {
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
        view.Update(); // render any newly populated OOC nodes
    }
}
Remember that if you have loaded multiple point clouds, you need to call this sequence for each ooc::Env object.
HOOPS Visualize limits the amount of memory it will use when populating point clouds. If the available memory is exhausted, no more points will be loaded for that particular view orientation (although the loaded points are not locked - if the camera is moved, some points will be removed and others will take their place). While this mechanic is an inherent feature of the OOC system, you can adjust the memory limit using the system option "populate memory limit". However, such an adjustment is not recommended. Contact Tech Soft 3D technical support for advice on this subject if you feel there is a problem with the memory limit.
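If support does recommend a change, the limit is set through Define_System_Options like any other system option. The value and unit shown below are placeholders and should be confirmed in the reference manual:

// Hypothetical adjustment of the populate memory limit - the value format is an assumption.
HC_Define_System_Options("populate memory limit = 512 MB");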
The HOOPS/OOC module takes advantage of multithreading to provide its functionality. It follows that modifying the point cloud is a potentially dangerous operation because data modified in one thread must conform to the expectations of all other threads. Therefore, HOOPS Visualize requires your code to be synchronized when performing edits. SynchronizeWith is the mechanism provided to address this scenario.
SynchronizeWith accepts a callback function which executes your editing logic. SynchronizeWith acquires a write lock on the scene graph, executes your code, then releases the lock when finished. For example, the following code snippet demonstrates how you would call a function to decimate nodes:
SyncResult sync_res = SynchronizeWith(env, [&](SyncToken const & sync_token) {
    for (size_t i = 0; i < node_handles.size(); ++i) {
        ooc::NodeHandle node_handle = node_handles[i];
        if (!decimate_node(fraction, sync_token, node_handle)) {
            success = false;
            return;
        }
    }
});
See example ooc-4-delete-points.cpp for a full code sample.
HOOPS Visualize provides two operators for use with OOC models: HOpSelectAreaOOC and HOpSelectPolygonOOC. These operators work similarly to HOpSelectArea and HOpSelectPolygon, but should be used with OOC models only. If you try to select non-OOC data with these operators, the functionality is undefined.
OOC operators differ from their corresponding operators in one significant way: they collect selection results into a special object called HSelectionSetOOC. This selection set has a few extra methods pertinent to OOC models. For example, you can use the HSelectionSetOOC::SetDeepSelection function to allow your operator to select points that are unloaded. The operators can select by rectangular area or by polygonal area. To add a selection set manually, you can call HSelectionSetOOC::AddRectangleWindow (rectangle selection) or HSelectionSetOOC::AddTriangleWindow (polygonal selection). If you have a complex polygon shape you want to select against, you should divide it into triangles and call AddTriangleWindow multiple times.
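As an illustration of the triangulation approach, the following sketch fans a convex window-space polygon into triangles and registers each one with the selection set. The three-point signature of AddTriangleWindow is an assumption here, and HPoint is the MVO point type; verify the actual parameter list in the reference documentation:

#include <vector>

// Decompose a convex window-space polygon into a triangle fan and add each
// triangle to the OOC selection set. Assumes AddTriangleWindow accepts three
// window-space points - confirm the real signature before use.
void add_convex_polygon(HSelectionSetOOC & selection_set, std::vector<HPoint> const & polygon)
{
    for (size_t i = 1; i + 1 < polygon.size(); ++i) {
        selection_set.AddTriangleWindow(polygon[0], polygon[i], polygon[i + 1]);
    }
}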
If you need to extend the functionality of the OOC operators, you are encouraged to do so by deriving from the existing OOC operators. See MVO Programming Guide Section 2.4 for details about creating a custom operator object. The source code for the OOC operators can be found in the MVO source directory.
IMPORTANT: The OOC operators are included with the MVO source, but are not built as part of the HOOPS/MVO package. In order to use the OOC operators, you need to include the following files in your project:
You also need to build these supporting files:
Filtering is a way to create a subset of points based on criteria you specify.
Step 1: Create the filter class
Any class that performs filtering must derive from ooc::query::Filter. You must override the filter's virtual methods to create logic that will satisfy your requirements. The following is a basic filter that will accept all points:
class CollectAllFilter : public ooc::query::Filter {
    // rejects an entire node of points
    virtual bool RejectNode (ooc::NodeHandle const & node_handle) { return false; }
    // rejects points based on the bounding parameters of a node
    virtual bool RejectBounding (ooc::Point const & min, ooc::Point const & max) { return false; }
    // rejects all points currently in view - only called if the node is in memory
    virtual bool RejectPointsInMemory () { return false; }
    // rejects all points not currently in view - only called if the node is on disk
    virtual bool RejectPointsOnDisk () { return false; }
    // accepts a point
    virtual bool AcceptPoint (ooc::Point const & point, size_t point_index) { return true; }
};
HOOPS/OOC executes the filter functions in the order they are listed above. The return value of each function determines whether a point or set of points is accepted or rejected. For example, if you want the filter to accept only points that are currently in memory, you would have RejectPointsOnDisk return true.
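For instance, a filter that keeps only points already resident in memory could look like the following sketch (the class name is arbitrary):

// Accepts every point that is currently in memory and rejects points that
// would have to be fetched from disk.
class InMemoryOnlyFilter : public ooc::query::Filter {
    virtual bool RejectNode (ooc::NodeHandle const & node_handle) { return false; }
    virtual bool RejectBounding (ooc::Point const & min, ooc::Point const & max) { return false; }
    virtual bool RejectPointsInMemory () { return false; } // keep points that are in memory
    virtual bool RejectPointsOnDisk () { return true; }    // skip points that are only on disk
    virtual bool AcceptPoint (ooc::Point const & point, size_t point_index) { return true; }
};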
Step 2: Execute the filter
After the filter is created, it must be instantiated and subsequently executed by calling QueryPoints. Note that this function returns an iterator to the set of points that meet the filter's criteria.
CollectAllFilter filter;
ooc::query::QueryIterator it = ooc::query::QueryPoints(env, filter);
Step 3: Iterate over the filtered points
Lastly, you use the iterator to operate on the point subset. The GetStatus function determines whether the iterator has finished looping over the point set. The loop below simply adds all the point coordinates together:
while (true) {
    ooc::query::QueryIterator::Status status = it.GetStatus();

    if (status == ooc::query::QueryIterator::Status_Dead) {
        break; // No more points to iterate.
    }
    if (status != ooc::query::QueryIterator::Status_Alive) {
        return ExitCode::OOC_Failure;
    }

    sum_of_all_coordinates = sum_of_all_coordinates + it->GetNodePoint();
    ++num_points;
    it.Advance();
}
For a complete example of the filtering process, see the sample code ooc-5-query-all.cpp.
The OOC module performs view-dependent loading and unloading of points into the scene graph. After an OOC file has been loaded, saving it to another file type will include the subset of the OOC data that happens to currently be loaded, in addition to any other scene graph contents.
If edits of a point cloud are saved to disk, they are saved out as OOCD (OOC delta) files. These delta files keep track of changes made between the original OOC file and the current point cloud state. They do so by storing the current changes as well as a reference to any previous changes. If no previous changes have been made, the OOCD file refers to the original OOC file. Thus, a collection of OOCD files may reference one another, together accumulating all the changes made to the point cloud. It is possible to skip loading the most recent OOCD file in the chain, in which case only the change history up to that specific delta is used. To commit point cloud edits and create the OOCD file, call ooc::io::CommitDeltasToFile.
If you have a need to combine delta files into a single file, you should perform a query on the entire point cloud and write those points to a new file. Then, process that file using ooc.exe.
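A sketch of that workflow, reusing the CollectAllFilter from the filtering example above, is shown below. It assumes that ooc::Point exposes x, y, and z members and that a plain "x y z" text file is acceptable input for ooc.exe:

#include <fstream>

// Query every point in the point cloud and write it to an ASCII file, which
// can then be preprocessed into a fresh, delta-free OOC file with ooc.exe.
bool export_all_points(ooc::Env env, char const * out_path)
{
    CollectAllFilter filter;
    ooc::query::QueryIterator it = ooc::query::QueryPoints(env, filter);
    std::ofstream out(out_path);

    while (it.GetStatus() == ooc::query::QueryIterator::Status_Alive) {
        ooc::Point point = it->GetNodePoint(); // assumes x/y/z members
        out << point.x << " " << point.y << " " << point.z << "\n";
        it.Advance();
    }
    return it.GetStatus() == ooc::query::QueryIterator::Status_Dead;
}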
It is important to understand the difference between conventional HOOPS Visualize database entities and OOC entities. The OOC module will allocate its own memory and data structures in order to provide the functionality demanded by large point cloud visualization. Much of the infrastructure associated with OOC rendering is logically separate from conventional Visualize infrastructure. Therefore, when you are finished with a point cloud, you must clean up using ooc::Release or ooc::Destroy, but not both. Release will clean up all OOC infrastructure but will leave the currently active points and scene graph in memory. All the points in view and their segments will remain and can be accessed and manipulated using conventional HC_ calls - but no OOC functionality will be available. Calling Destroy automatically calls Release and also removes all points that were loaded as OOC points as well as their segments.
Another way to release memory is to allow the HBaseView to be destructed. When the HBaseView destructor is called, a Release will be triggered. However, this will only occur if the load option HInputHandlerOptions::m_bOOCAutoCleanup == true (this is the default).
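As a sketch, cleanup for a single point cloud might look like one of the two calls below; both are assumed here to take the point cloud's ooc::Env:

// Tear down all OOC infrastructure but keep the currently loaded points and
// their segments in the scene graph.
ooc::Release(env);

// ...or remove the OOC infrastructure along with every OOC point and segment
// it loaded. Destroy implies Release, so call one or the other - never both.
ooc::Destroy(env);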
In addition to the intensive memory and data management HOOPS performs to ensure visual quality and performance for point cloud rendering, there are additional options developers can set that further assist in the optimization of point cloud visualization.
Within HOOPS/MVO, you can call HBaseView::SetFramerateMode, passing FrameRateMode::FramerateFixed along with a target frame rate and a culling threshold. Setting the frame rate lets HOOPS understand how best to balance performance with visual acuity.
By default, data points are rendered as vertex markers, but you can enable point splatting so that the points give the appearance of a smooth surface. To enable this feature, use the function HBaseView::SetSplatRendering. With HBaseView::SetSplatSize and HBaseView::SetSplatSymbol, you can set the size and symbol of your splats, respectively. If hardware acceleration is available, you can allow HOOPS to leverage it by calling HBaseView::SetFastMarkerDrawing.
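A minimal sketch of these MVO settings follows. It assumes both toggles take a boolean; the splat size and symbol calls are left as comments because their exact parameter types and value ranges should be confirmed in the reference manual:

// Render points as splats rather than plain vertex markers, and let HOOPS use
// hardware-accelerated marker drawing when it is available.
view->SetSplatRendering(true);
view->SetFastMarkerDrawing(true);
// view->SetSplatSize(...) and view->SetSplatSymbol(...) tune the splat appearance.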
Within HOOPS/3dGS, you can set several options that will improve rendering performance. For point cloud data, we recommend that shadows be disabled. In HOOPS, there are two kinds of shadows: simple shadows and shadow maps. Both are rendering options and can be disabled by calling Set_Rendering_Options. The following code snippet shows how you can turn these options off:
HC_Set_Rendering_Options("no simple shadow, shadow map = off");
There is also another rendering option you can use to improve performance: vertex decimation. Although HOOPS does use culling logic in fixed frame rate mode, vertex decimation is a quick and direct way to tell HOOPS to draw only a percentage of the vertices in the scene. If you are decimating vertices, the randomize vertices rendering option causes vertices that are compiled into display lists to be inserted in random order. This option is intended to produce a more uniform point distribution when applying vertex decimation to non-randomized data. Be aware that vertex decimation does not decrease the number of vertices loaded into memory - it only instructs HOOPS Visualize to draw fewer of them.
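For example, the two options might be combined as in the sketch below. The 0.25 value (draw roughly a quarter of the vertices) is illustrative; confirm the accepted value range in the Set_Rendering_Options documentation:

// Draw only a fraction of the vertices and randomize display-list insertion
// order so that the decimated result is more uniformly distributed.
HC_Set_Rendering_Options("vertex decimation = 0.25, randomize vertices = on");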
The sample code demonstrates how to perform common operations with point clouds.