Writing HOOPS Stream Files

Overview

The HOOPS/Stream Toolkit supports ‘black box’ writing of the HOOPS Stream File format (.hsf). ‘Black box’ refers to making a single function call to write an HSF file. The manner of usage is similar to writing an ASCII HOOPS Metafile (.hmf) using the standard HOOPS HC_Write_Metafile function.

The function supplied by the 3dGS-specific classes to perform black box writing of a HOOPS Stream File is HTK_Write_Stream_File. The 3dGS-specific classes only apply when you have a HOOPS/3dGS scene-graph that you want to export to an HSF file, or an HSF file that you want to import and map to a HOOPS/3dGS scene-graph.

In order to generate a HOOPS Stream File using the high-level Write function included with the 3dGS-specific classes, the geometry and attributes of interest must reside in the HOOPS/3dGS scene graph. Developers who are already using HOOPS/3dGS to store their geometric information can simply proceed with the writing process. Other developers who wish to export a .hsf file must first map their geometric information to segments, geometry and attributes in the HOOPS/3dGS scene graph. (Future versions of the HOOPS/Stream toolkit may allow developers to simply use the toolkit to directly export an .hsf file, without having to first map their data to the HOOPS/3dGS scene graph.) The following lists the general steps for creating a stream-enabled .hsf file:

  • Map the geometric hierarchy, data, and attributes of interest to the HOOPS/3dGS scene-graph.

  • Set the desired Level-Of-Detail options, such as number of LODs, and compression factor.

  • Export a stream-enabled .hsf file using HTK_Write_Stream_File.

Prior to calling HTK_Write_Stream_File, a HOOPS/3dGS segment must be open. The function walks the HOOPS/3dGS segment tree, starting at the currently open segment, and writes its contents to an .hsf file.

HTK_Write_Stream_File takes as arguments a filename, flags which denote writing options, and a pointer to a HStreamFileToolkit object. (The HStreamFileToolkit object is used for custom reading and writing and is discussed in the Controlling the reading and writing process section.)

The following code writes out a file called sample.hsf containing the contents of the segment called ‘?Picture’ in the HOOPS database:

#include "hc.h"
#include "HStream.h"

int main()
{
        int flags = 0; /* no writing flags set for now; use defaults */

        HC_Open_Segment("?Picture");
          HC_Insert_Shell(<some sample shell data>);   /* placeholder arguments */
          HC_Insert_Line(<a sample line>);             /* placeholder arguments */
          HTK_Write_Stream_File("sample.hsf", flags);
        HC_Close_Segment();

        return 0;
}
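
HTK_Write_Stream_File reports the outcome of the export through its return value, which is worth checking in production code. A minimal sketch, assuming the function returns a TK_Status and that TK_Complete is the success code (verify the exact names against your toolkit headers):

TK_Status status;

HC_Open_Segment("?Picture");
  /* ... insert geometry as above ... */
  status = HTK_Write_Stream_File("sample.hsf", flags);
HC_Close_Segment();

if (status != TK_Complete)  /* assumed success code; check your headers */
    fprintf(stderr, "HSF export failed\n");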

Controlling the Quality of the Streaming Process

The quality of the graphics streaming process is essentially based on how quickly the user gets an overall feel for the scene. The HOOPS/Stream Toolkit utilizes two separate techniques during HSF creation to improve this process. The first involves exporting simplified versions of 3D objects within the scene (since they can stream in more quickly), and the second involves ordering the objects within the file so that the most important objects in the scene are ordered towards the front of the file. Objects which are larger are typically the most important.

The 3dGS-specific classes provide built-in logic to smartly order geometry within the file, placing lower level-of-detail (LOD) representations of faceted objects at the beginning. However, to control which geometry is stored in the file and the quality of the LODs, you must first build up a HOOPS/3dGS database (scene-graph). After building up the database, the level-of-detail generator can be used to produce simplified versions of the faceted objects within the scene. LOD options are turned on and set via a HOOPS/3dGS rendering option: open the root of your segment tree, turn on levels of detail, and set the options as desired. To have HOOPS/3dGS generate the LODs, you must force an update under a display segment (this causes HOOPS/3dGS to render the scene). Note that if LODs are already in the tree, you do not need to regenerate them.

In some instances, developers may wish to create HSFs in batch mode and not render to the screen at all. In this case, developers can force an update under the HOOPS/3dGS image driver to generate the LODs. An example of this approach is outlined below.

Creating an HSF With LODs

LOD generation is controlled via a HOOPS rendering option:

HC_Set_Rendering_Options("lod = on, lod options = (...)");

Refer to the HOOPS/3dGS Reference Manual for details on the various LOD options.

In the example below, the scene graph exists under the segment that has a key of m_lMySceneKey. LODs have not yet been generated for the scene and consequently need to be generated. Additionally, the example utilizes the HOOPS/3dGS image driver to create a ‘virtual’, memory-based (background) rendering rather than performing an update to a display:

// Setup the levels of detail for this model

char cval[256];

sprintf(cval, "lod = on, lodo = (levels = 2, ratio = 0.2, preprocess)");

HC_Open_Segment_By_Key(m_lMySceneKey);
  HC_Set_Rendering_Options(cval);
HC_Close_Segment();

// Setup a raster image for the LOD
long image_key;
int width = 32;
int height = 32;

HC_Open_Segment("/null");
  image_key = HC_KInsert_Image(0.0, 0.0, 0.0, "rgb", width, height, NULL);
HC_Close_Segment();

HC_Open_Segment("/driver/image/foo");
  sprintf(cval, "use window id = %ld", image_key);
  HC_Set_Driver_Options(cval);
  HC_Set_Rendering_Options("hsra = szb");  /* hidden surface removal: software z-buffer */
  HC_Include_Segment_By_Key(m_ModelSegmentKey);
HC_Close_Segment();

// Tell HOOPS/3dGS to walk the tree and do an update
HC_Update_Display();

// now delete the segment to free any memory
HC_Delete_Segment("/driver/image/foo");

// write the HSF file; 'buffer' holds the target filename string
HC_Open_Segment_By_Key(m_lMySceneKey);
  HTK_Write_Stream_File(buffer, flags);
HC_Close_Segment();

Using the TKE_View Opcode

It is very useful to store some information at the beginning of the file which denotes the extents of the scene, so that an application that is going to stream the file can set up the proper camera at the beginning of the streaming process. Otherwise, the camera would have to be continually reset as each new object streamed in and changed the scene extents.

The TKE_View opcode is designed for this purpose. It denotes a preset view which contains camera information, and has a name. An HSF file could have several TKE_View objects, for example, to denote ‘top’, ‘iso’, and ‘side’ views.

The HOOPS Stream Control and Plug-In (an ActiveX control that can stream in HSF files over the web), along with the various PartViewers provided by Tech Soft 3D, all look for the presence of a TKE_View object near the beginning of the HSF file with the name ‘default’. If one is found, then the camera information stored with this ‘default’ TKE_View object is used to set up the initial camera.

If you (or your customers) are going to rely on the Stream Control or Plug-In to view your HSF data, then you should export a ‘default’ TKE_View opcode toward the beginning of the file. This is done by registering a custom TKE_View opcode-handler as the ‘prewalk handler’. The toolkit will call the prewalk handler prior to exporting the HOOPS/3dGS scene-graph information to the HSF file. An example of how to register a custom prewalk handler is provided in the Customizing the HSF file section.

If you are going to create your own HSF-reading application to stream in HSF files that you’ve generated, then that application should have some way of knowing the extents of the scene at the beginning of the reading process. This can only be achieved if your writing application has placed scene-extents information at the beginning of the HSF file (probably by using the TKE_View opcode) and your reader is aware of this information.

Referencing External Data Sources

The TKE_External_Reference opcode is used to represent a reference to an external data source. The reference would typically be a relative pathname but could also be a URL. This opcode is intended to be handled in a manner similar to TK_Referenced_Segment, where the scene-graph information located in the reference should be loaded into the currently open segment. For example, a reference of ./left_tire.hsf located immediately after a TKE_Open_Segment opcode would indicate that the HOOPS/3dGS scene-graph contained in left_tire.hsf should be created within the open segment. A reference of http://www.foobar.com/airplane.hsf would indicate that the HSF resides at a website, and the reader must access the data (it may choose to first download the entire file and then display it, or stream the data in and display it incrementally). When using the 3dGS-specific classes, this opcode could be exported by customizing HSF objects, or within pre/post-walk handlers, discussed in the section Customizing the HSF.
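
As a rough illustration of the read side, a reader built on the 3dGS-specific classes could subclass the handler for this opcode and pull a referenced local file into the currently open segment. This is only a sketch: it assumes the handler class is TK_External_Reference with a GetString accessor for the reference, that Execute is invoked while the enclosing segment is still open, and it ignores URL references and error handling entirely:

class MyExternalReference : public TK_External_Reference {
public:
    TK_Status Execute(BStreamFileToolkit & tk) {
        // GetString() is assumed to return the reference recorded in
        // the file, e.g. "./left_tire.hsf"; read it into the open segment.
        return HTK_Read_Stream_File(GetString());
    }
};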

Write Options

The HOOPS/Stream Toolkit supports a wide variety of compression and streaming features which are used when exporting an HSF file. It may be desirable to modify the default settings for these features based on how your model is organized in the HOOPS scene-graph, the size of the model, and the amount of preprocessing time that is acceptable.

When using the 3dGS-specific classes (which automatically map an existing HOOPS/3dGS scene-graph to an HSF file), the file write options can be modified by setting the flags argument of HTK_Write_Stream_File to one or more TK_File_Write_Options values. Alternatively, if you call the variant of HTK_Write_Stream_File which takes an HStreamFileToolkit object as an argument, the flags can be passed in with the toolkit object by calling the HStreamFileToolkit::SetWriteFlags method. The toolkit will automatically utilize these flags to determine what options to use when exporting the file. The default behavior of the 3dGS-specific classes is to write out a HOOPS Stream File which has all compression and streaming features enabled, with the one exception being ‘advanced compression’ (discussed later).
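
For example, a writer that wants full-resolution vertices and no instancing preprocess (both flags are described later in this section) could combine the bits directly, or carry them on a toolkit object. A sketch; the parameter order of the toolkit-object variant of HTK_Write_Stream_File is an assumption:

int flags = TK_Full_Resolution_Vertices | TK_Disable_Instancing;

// variant 1: pass the flags directly
HTK_Write_Stream_File("model.hsf", flags);

// variant 2: carry the flags on a toolkit object
HStreamFileToolkit * my_toolkit = new HStreamFileToolkit;
my_toolkit->SetWriteFlags(flags);
HTK_Write_Stream_File("model.hsf", my_toolkit);
delete my_toolkit;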

Compression

Our HSF writer offers compression options which can be enabled or disabled for certain situations. Some options are enabled by default.

Global compression

The toolkit performs LZ compression of the entire file using a freely available library called ‘zlib’. This is a lossless compression technique that permits pieces of the compressed file to be streamed and decompressed, and it is computationally efficient on both the compression and decompression sides.

This type of compression is on by default, and can be disabled by setting the TK_Disable_Global_Compression bit in the flags parameter.

The HOOPS/Stream Toolkit will also compress raster data by default, using a JPEG compression utility. The compression level of this data can be controlled by calling BStreamFileToolkit::SetJpegQuality.
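
For instance, to trade some raster fidelity for a smaller file, you might lower the JPEG quality on the toolkit object before writing. A sketch, assuming the conventional 0-100 JPEG quality scale:

my_toolkit->SetJpegQuality(50);  // lower value = smaller file, more raster artifacts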


Geometry compression

Geometry compression is currently focused on the ‘shell’ primitive (represented by the TKE_Shell opcode, and handled by the TK_Shell class). This is the primary primitive used to represent tessellated information; datasets that originate in MCAD/CAM/CAE applications typically consist primarily of shells.


Vertex compression

This involves encoding the locations of shell vertices, providing reduction in file size in exchange for loss of coordinate precision and slightly lower visual quality. The degradation in visual quality is highly dependent on the topology of the shell, as well as how the normals information is being exported.

Vertex compression is on by default, and can be disabled by setting the TK_Full_Resolution_Vertices bit in the flags parameter.

The function HStreamFileToolkit::SetNumVertexBits allows the developer to control the number of bits of precision for each vertex. The default is 24 (8 each for x, y and z).
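
For example, a model that needs tighter coordinate fidelity than the default could raise the precision before writing; a sketch, reusing the my_toolkit object from the examples in this section:

my_toolkit->SetNumVertexBits(48);  // 16 bits each for x, y and z (default is 24 total)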

Note

Vertex compression can cause precision issues when writing HSF files containing textures. If you are experiencing this problem, or think you may experience it, consider turning off position and parameter compression any time you are writing an HSF while using a texture.

A less draconian approach would depend on the type of texture(s) associated with the HSF. High-contrast, distinct edges (checkerboard, stripes) favor disabling compression, while general photographic textures (landscapes, materials like wood grain) can get away with minor compression-based distortion.
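
In flag form, the conservative workaround for a textured model looks like this (both flag names are covered in this section):

/* textured model: keep vertex positions and texture coordinates exact */
int flags = TK_Full_Resolution_Vertices | TK_Full_Resolution_Parameters;
HTK_Write_Stream_File("textured_model.hsf", flags);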


Normals compression

Normals compression involves encoding vertex normals, providing a reduction in file size in exchange for lower visual quality. Again, the degradation in visual quality is highly dependent on the topology of the shell, as well as how the normals information is being exported. HOOPS/Stream transmits compressed normals for vertices that have been compressed, or if a normal has been explicitly set on a vertex. Surfaces that have gradual curvature over a highly tessellated region can look faceted due to the aliasing of the compressed normals. The function HStreamFileToolkit::SetNumNormalBits allows the developer to greatly reduce or effectively remove such aliasing at the cost of transmitting more data per normal. The default is 10.

Normals compression is on by default and can be disabled by setting the TK_Full_Resolution_Normals bit in the flags parameter.

Compression of both vertices and normals can be disabled by setting the TK_Full_Resolution bit in the flags parameter.
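
For example, to reduce faceting artifacts on smooth, highly tessellated surfaces, you might raise the per-normal bit count rather than disable normals compression outright; a sketch using the my_toolkit object from earlier:

my_toolkit->SetNumNormalBits(16);  // more bits per compressed normal; default is 10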


Parameter compression

This involves encoding the vertex parameters (texture coordinates). Compression will reduce the file size, but it could also result in a loss of precision in texture mapping.

Parameter compression is on by default, and can be disabled by setting the TK_Full_Resolution_Parameters bit in the flags parameter.

The function HStreamFileToolkit::SetNumParameterBits allows the developer to control the number of bits of precision for each vertex parameter. The default is 24 bits per component.
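
Conversely, where texture fidelity matters less than file size, the per-component precision can be lowered; a sketch:

my_toolkit->SetNumParameterBits(16);  // fewer bits per texture coordinate component; default is 24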


Connectivity compression

This compresses ‘shell’ connectivity information. This compression technique can provide compelling reductions in file size for datasets that contain many ‘shell’ primitives, but it can also be computationally intensive, depending on the size of individual shells. Developers will need to decide for themselves whether the reduced file size is worth the extra computation time.

Additionally, depending on the topology of the shell, the algorithm may provide limited compression benefit or have to ‘punt’ after performing substantial work, thereby providing little to no additional file size reduction in exchange for extra computation time. Therefore, developers should do some experimentation with their specific class of datasets to see if the option is buying them any reduction in file size. If file sizes for typical files are the same both with and without the option set, then this compression option should be disabled when exporting an HSF file. Some specific examples of when the algorithm will punt or perform poorly are shells that contain many backwards faces (which also impact rendering performance and should generally be avoided anyway!), or that contain certain complex combinations of ‘handles’ (a teapot and a torus each have one handle) and holes (for example, a flat plate that has a hole in the middle). In general, the connectivity compression algorithm will perform well in most of these cases, but developers should still take some time to investigate the trade-off between extra export time and file-size reduction for their datasets with and without this option enabled.

Connectivity compression is off by default and needs to be manually enabled by setting the TK_Connectivity_Compression bit in the flags parameter.
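
A simple way to run the experiment suggested above is to export the same scene twice and compare the resulting file sizes. A sketch, reusing m_lMySceneKey from the earlier LOD example; file_size is a hypothetical helper (for example, built on stat()):

HC_Open_Segment_By_Key(m_lMySceneKey);
  HTK_Write_Stream_File("model_plain.hsf", 0);
  HTK_Write_Stream_File("model_conn.hsf", TK_Connectivity_Compression);
HC_Close_Segment();

// if the sizes are essentially the same, leave the option off
long plain = file_size("model_plain.hsf");  // hypothetical helper
long conn  = file_size("model_conn.hsf");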

Instancing

Before the toolkit begins writing the file, it walks the scene graph and checks whether any objects are geometrically equivalent versions of other objects already in the scene, perhaps with a translation, rotation, or uniform scale applied. Once it finds a match, it stores a special tag indicating that the object is a copy of one already in the file, along with the matrix that defines the transformation. When decoding the file, the toolkit automatically runs the referenced object through the modeling matrix and restores it to its original location within the scene graph. This is particularly useful in cases where you are not importing the data directly from a CAD system model and consequently don’t have the assembly structure of the model. The greatest benefits are found when there are large numbers of similar objects within a scene.

A significant amount of effort has been put into ensuring this algorithm is as efficient as possible. However, if you already know the structure of your models you may want to eliminate this preprocess cost by disabling the option.

Instancing logic is enabled by default during writing. It is disabled by setting the TK_Disable_Instancing bit in the flags parameter.

Object Priority

The toolkit prioritizes geometric objects within the HSF file by weighing their visual importance in the scene against their complexity. This enhances the overall ‘quality’ of the streaming process without requiring server-side logic, since the objects toward the beginning of the file will have the largest benefit:cost ratio.

Object prioritization is enabled by default during writing, and is disabled by setting the TK_Disable_Priority_Heuristic bit in the flags parameter.

LOD Export

As previously mentioned, all HOOPS/3dGS scene-graph objects will be written out to the HSF file by default, which includes LOD representations of objects. Consequently, to ensure that LODs are not inserted into the HSF for specific scene-graph objects which are ‘visually unimportant’ (i.e., very small), simply do not have HOOPS/3dGS generate LODs for such objects. However, the HOOPS/Stream Toolkit provides a high-level mechanism for suppressing export of all LODs during the writing process: export of LODs can be disabled by setting the TK_Suppress_LOD bit in the flags parameter.

Another option, TK_First_LOD_Is_Bounding_Box, causes the first LOD of any shell to be replaced with its axis-aligned bounding box.
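
For example, a writer targeting a very constrained connection could stream a box first and refine it with the real geometry as it arrives:

int flags = TK_First_LOD_Is_Bounding_Box;  /* coarsest LOD becomes a box */
HTK_Write_Stream_File("model.hsf", flags);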

Dictionary

Part of the HSF specification is a “dictionary” of file offsets. Its main purpose is to allow selective or ‘on-demand’ access and refinement of graphic database objects. The 3dGS-specific classes will write such a dictionary at the end of the file if the TK_Generate_Dictionary write option is set.

Tagging

The toolkit supports the concept of tagging, discussed in the Tagging HSF objects section. Setting TK_Force_Tags will cause tags to be automatically generated by the toolkit during the writing process. (Note: tags will always be generated for shells, regardless of the value of this write option.)

Global Quantization

Setting TK_Global_Quantization will cause any required quantization to be global (relative to the bounding box of the scene) instead of local (relative to the bounding box of the individual geometry). This is useful for situations where high-level objects are split up into multiple shells, since it avoids cracks between the sub-objects. (Using a solid modeling example, this would be a situation where a shell was used for each ‘face’, instead of a single shell for each higher-level ‘body’.) Regardless of this flag, however, local quantization applies until the first TKE_Bounding_Info. This flag is off by default.

Exporting Different HSF Versions

When writing information out to an HSF file, you will want to indicate what version of the format the file is written in. To specify this information, use the method BStreamFileToolkit::SetTargetVersion, passing an integer that indicates the file version. For instance, to export a version 16.00 file, you would pass the integer 1600 when calling BStreamFileToolkit::SetTargetVersion. The following code snippet shows how you would indicate what version of HSF you are exporting:

//setup the toolkit to export v15.00 HSF File
my_toolkit->SetTargetVersion(1500);

The default value for the target version is the most recent version in the File version table.