Customizing HSF Objects

In addition to writing and reading a standard HOOPS Stream File, which contains information stored in the HOOPS/3dGS scene-graph, the HOOPS/Stream Toolkit supports storing (and retrieving) user-defined data in the HSF file. This data can be associated with HSF objects, or it can simply be custom data that is convenient to store inside an HSF. The toolkit also supports tagging of objects in the HSF file, which allows HSF objects to be associated with user data.

A HOOPS Stream File consists of header/termination information, plus objects that represent HOOPS/3dGS scene-graph objects: segments, geometric primitives, and attributes. Each of these objects is uniquely identified in the HSF file by an 'opcode'. When an HSF file is written or read, the HOOPS/Stream Toolkit uses a list of 'opcode handlers'. This list contains C++ objects that are registered to handle each of the standard opcodes.

Opcode handlers are derived from BBaseOpcodeHandler, an abstract class used as a base for derived classes that manage logical pieces of binary information. BBaseOpcodeHandler provides virtual methods, implemented by derived classes, to handle reading, writing, execution, and interpretation of binary information. (The methods are called Read, Write, Execute and Interpret.) Execution refers to the process of populating application-specific data structures with the binary information that has been read from a file or user-provided buffer within the Read method. Interpretation refers to the process of extracting application-specific data to prepare it for subsequent writing to a file or user-provided buffer within the Write method.
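As a rough illustration of this division of labor, here is a toy model of the four-method pattern. This is not the real HOOPS/Stream API: the Status enum, CommentHandler class, and the use of a std::vector as the byte buffer are all invented for the example.

```cpp
#include <string>
#include <vector>

// Toy stand-in for BBaseOpcodeHandler: Read/Execute handle import,
// Interpret/Write handle export.
enum Status { Normal, Pending, Error };

class OpcodeHandler {
public:
    virtual ~OpcodeHandler() {}
    virtual Status Read(const std::vector<char>& buf) = 0;     // pull bytes in
    virtual Status Execute() = 0;                              // bytes -> app data
    virtual Status Interpret(const std::string& appData) = 0;  // app data -> bytes
    virtual Status Write(std::vector<char>& buf) = 0;          // push bytes out
};

// A trivial handler for a hypothetical "comment" opcode.
class CommentHandler : public OpcodeHandler {
    std::string m_text;       // staging area between the two phases
public:
    std::string executed;     // where Execute() deposits the imported result
    Status Read(const std::vector<char>& buf) {
        m_text.assign(buf.begin(), buf.end());
        return Normal;
    }
    Status Execute() { executed = m_text; return Normal; }
    Status Interpret(const std::string& appData) { m_text = appData; return Normal; }
    Status Write(std::vector<char>& buf) {
        buf.assign(m_text.begin(), m_text.end());
        return Normal;
    }
};
```

On export the toolkit calls Interpret then Write; on import it calls Read then Execute, which is the round trip the model above reproduces.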

Naming conventions for opcodes and opcode handlers are as follows:

  • HOOPS/Stream Toolkit opcodes - TKE_<opcode>

  • 3dGS-specific opcode handler objects - HTK_<object-type>

Some opcode handlers can be used to process more than one opcode; when using these objects, the desired opcode must be passed into the opcode handler's constructor. (To find out which opcode handler supports each opcode, refer to the source of HStreamFileToolkit::HStreamFileToolkit().) For example, the HTK_Color_By_Index opcode handler supports both the TKE_Color_By_Index and TKE_Color_By_Index_16 opcodes.

The 3dGS-specific classes provide built-in support for exporting an existing HOOPS/3dGS scene-graph to an HSF file, and for importing an HSF file and mapping it to a HOOPS/3dGS scene-graph. This is achieved by utilizing the HStreamFileToolkit object, which has HOOPS/3dGS-specific opcode handlers registered with it. The 3dGS-specific opcode handlers are all derived from the base opcode handlers, and provide HOOPS/3dGS-specific implementations of the Execute and Interpret methods.

  • During file writing, the GenerateBuffer method of the HStreamFileToolkit object traverses the HOOPS/3dGS scene-graph, and calls the Interpret method of each 3dGS-specific opcode handler. (The HOOPS/3dGS scene-graph is queried within the Interpret method.) After interpretation is complete, GenerateBuffer continually calls the Write method of the opcode handler until writing of the current opcode is complete.

  • During file reading, the ParseBuffer method of the HStreamFileToolkit object reads the opcode at the start of each piece of binary information and continually calls the Read method of the associated opcode handler. (The ParseBuffer method is actually implemented in the base class.) After the opcode handler reports that reading is complete, ParseBuffer calls the Execute method of the opcode handler. The objects which have been read and parsed are inserted into the HOOPS/3dGS scene-graph within the Execute method of each 3dGS-specific opcode handler.
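The "continually calls until complete" contract above is the key pattern: a handler may not finish in one call when the buffer fills, so the driver loops until completion is reported. The following self-contained mock sketches that resumable contract; the Generator class, the drive function, and the Status enum are invented for illustration and are not the toolkit's API, though the Pending-until-done convention mirrors it.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

enum Status { Normal, Pending };

// Produces `source` into fixed-size buffers, returning Pending until the
// data is exhausted -- the same resumable style GenerateBuffer follows.
class Generator {
    const std::vector<char>& m_src;
    std::size_t m_progress;          // how much has been emitted so far
public:
    explicit Generator(const std::vector<char>& src)
        : m_src(src), m_progress(0) {}

    Status GenerateBuffer(char* buf, std::size_t size, std::size_t& filled) {
        filled = std::min(size, m_src.size() - m_progress);
        std::copy(m_src.begin() + m_progress,
                  m_src.begin() + m_progress + filled, buf);
        m_progress += filled;
        return m_progress < m_src.size() ? Pending : Normal;
    }
};

// The driver keeps calling until the generator reports completion,
// flushing each filled buffer (here: appending to `out`; in a real
// exporter, writing it to the file).
std::vector<char> drive(Generator& gen) {
    std::vector<char> out;
    char buf[4];                     // deliberately tiny to force Pending
    std::size_t filled = 0;
    Status status;
    do {
        status = gen.GenerateBuffer(buf, sizeof(buf), filled);
        out.insert(out.end(), buf, buf + filled);
    } while (status == Pending);
    return out;
}
```

The staged Write and Read implementations later in this section are the handler-side half of this same contract: each call picks up at the stage where the previous call left off.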

The HStreamFileToolkit's default HOOPS/3dGS-specific handler for a particular opcode can be replaced with a custom opcode handler (which must itself be HOOPS/3dGS-specific), enabling writing and reading of user data. For example, suppose we want to write out an extra piece of user data at the end of each shell primitive (and, of course, retrieve it during reading) representing a temperature value for each of the vertices in the shell's points array. This involves the following steps:

1. Define a new class derived from TK_Shell that overloads the Write and Read methods to process the export and import of extra user data.

As previously mentioned, query/retrieval of the user data from custom data structures during the writing process would typically occur within the Interpret method of the opcode handler. Similarly, mapping of the imported user data to custom application data structures would typically occur in the Execute method. However, this work can be performed in the Write and Read methods as well, as the example indicates.

#include "HOpcodeShell.h"

class My_HTK_Shell : public HTK_Shell
{
protected:
    int my_stage;   // denotes the current processing stage
public:
    My_HTK_Shell() { my_stage = 0; }
    TK_Status Interpret (HStreamFileToolkit & tk, HC_KEY key, int lod=-1) alter;
    TK_Status Read (HStreamFileToolkit & tk) alter;
    TK_Status Write (HStreamFileToolkit & tk) alter;
    TK_Status Clone (HStreamFileToolkit & tk, BBaseOpcodeHandler **newhandler) const;
    void Reset () alter;
};

2. Implement the custom Write function.

This is done in stages, each of which corresponds to a discrete piece of data that needs to be written out for the custom shell. We use different versions of the HStreamFileToolkit's PutData method to output the user data, and we return from the writing function during each stage if the attempt to output the data fails. (This could happen due to an error, or because the user-supplied buffer is full.) At this point, review the process of Formatting User Data.

The following lists in detail the five writing stages for our custom shell opcode handler:

Stage 0: Output the default TK_Shell object by calling the base class's Write function (TK_Shell::Write).

Stages 1-4: These stages write out the custom data (the temperature array) as well as the formatting information required to denote a block of user data.

  1. Output the TKE_Start_User_Data opcode to identify the beginning of the user data.

  2. Output the number of bytes of user data.

  3. Output the user data itself.

  4. Output the TKE_Stop_User_Data opcode to identify the end of the user data.

TK_Status My_HTK_Shell::Write (HStreamFileToolkit & tk)
{
    TK_Status status = TK_Normal;

    switch (my_stage)
    {
        // call the base class's Write function to output the
        // default TK_Shell object
        case 0:
        {
            if ((status = HTK_Shell::Write (tk)) != TK_Normal)
                return status;
            my_stage++;
        }   nobreak;

        // output the TKE_Start_User_Data opcode
        case 1:
        {
            if ((status = PutData (tk, (unsigned char)TKE_Start_User_Data)) != TK_Normal)
                return status;
            my_stage++;
        }   nobreak;

        // output the amount of user data in bytes; we're writing out
        // 1 float for each vertex value, so we have 4*m_num_values
        case 2:
        {
            if ((status = PutData (tk, 4*m_num_values)) != TK_Normal)
                return status;
            m_progress = 0;
            my_stage++;
        }   nobreak;

        // output our custom data: an array of temperature values stored
        // in an application data structure called 'temperature_values'.
        // Since the array might be larger than the buffer, we can't just
        // "try again" with the whole array; we generate piecemeal, with
        // m_progress counting the number of values written so far
        case 3:
        {
            while (m_progress < m_num_values) {
                if ((status = PutData (tk, temperature_values[m_progress])) != TK_Normal)
                    return status;
                m_progress++;
            }
            my_stage++;
        }   nobreak;

        // output the TKE_Stop_User_Data opcode which denotes the end
        // of user data
        case 4:
        {
            if ((status = PutData (tk, (unsigned char)TKE_Stop_User_Data)) != TK_Normal)
                return status;
            my_stage = -1;
        }   break;

        default:
            return TK_Error;
    }

    return status;
}

3. Implement the custom Read function

This is also done in stages, each of which corresponds to a discrete piece of data that needs to be read in for the custom shell. We use different versions of the HStreamFileToolkit's GetData method to retrieve data, and we return from the reading function during each stage if the attempt to retrieve the data fails. Otherwise, the stage counter is incremented and we move on to the next stage.

The stages during the reading process are analogous to the stages during the writing process outlined above, with one exception. The TKE_Start_User_Data opcode would still be read during 'Stage 1', but rather than blindly attempting to read our custom data, we need to handle the case where there isn't any user data attached to this shell object. Perhaps the file isn't a custom file, or it was a custom file but this particular shell object simply didn't have any user data appended to it.

This is also a good time to bring up the issue of versioning and user data: there may be user data following this shell object that is not 'our' user data. That is, it is not temperature data written out by our custom shell object, and therefore it is data we don't understand; as a result, we could attempt to read too much or too little data. If custom versioning information was written at the beginning of our custom file, and this versioning information was used to verify that the file was written out by our custom logic, then it is generally safe to proceed with processing user data, since we 'know' what it is. The versioning issue, including details on how to write custom versioning information to the file, is discussed in more detail in the next section, Versioning and Storing Additional User Data.
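One common way to make user data self-identifying is to lead the payload with a magic/version value and have the reader verify it before trusting the rest. The following self-contained sketch illustrates that idea; the kMagic constant, byte layout, and function names are invented for the example and are not part of the HSF format.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical magic value identifying "our" temperature payload.
const std::uint32_t kMagic = 0x54454D50;  // spells "TEMP"

// Prepend a little-endian magic to the payload before it is written
// as user data.
std::vector<std::uint8_t> tag_user_data(const std::vector<std::uint8_t>& payload) {
    std::vector<std::uint8_t> out;
    for (int i = 0; i < 4; ++i)
        out.push_back(std::uint8_t(kMagic >> (8 * i)));
    out.insert(out.end(), payload.begin(), payload.end());
    return out;
}

// Returns true (and strips the tag) only if the data carries our magic;
// otherwise the caller should skip the block rather than misread it.
bool check_user_data(const std::vector<std::uint8_t>& data,
                     std::vector<std::uint8_t>& payload) {
    if (data.size() < 4)
        return false;
    std::uint32_t magic = 0;
    for (int i = 0; i < 4; ++i)
        magic |= std::uint32_t(data[i]) << (8 * i);
    if (magic != kMagic)
        return false;
    payload.assign(data.begin() + 4, data.end());
    return true;
}
```

A reader that finds user data without the expected magic can simply skip the block (the byte count written after TKE_Start_User_Data tells it how much to skip) instead of misinterpreting it.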

Note that to check whether there is any user data, we first call LookatData to simply look at (but not get) the next byte and verify that it is indeed a TKE_Start_User_Data opcode. If not, we return.

TK_Status My_HTK_Shell::Read (HStreamFileToolkit & tk)
{
    TK_Status status = TK_Normal;

    switch (my_stage)
    {
        case 0:
        {
            if ((status = HTK_Shell::Read (tk)) != TK_Normal)
                return status;
            my_stage++;
        }   nobreak;

        case 1:
        {
            unsigned char temp;

            // look at the next byte since it may not be the
            // TKE_Start_User_Data opcode
            if ((status = LookatData (tk, temp)) != TK_Normal)
                return status;
            if (temp != TKE_Start_User_Data)
                return TK_Normal;   // there isn't any user data, so return!

            // get the opcode from the buffer
            if ((status = GetData (tk, temp)) != TK_Normal)
                return status;
            my_stage++;
        }   nobreak;

        case 2:
        {
            int length;

            // the integer denoting the amount of user data; in this
            // example it is read and discarded, since m_num_values
            // already tells us how many floats to expect
            if ((status = GetData (tk, length)) != TK_Normal)
                return status;
            my_stage++;
        }   nobreak;

        case 3:
        {
            // get the temperature value array; this assumes we've
            // already determined the length of the array and identified
            // it using m_num_values
            if ((status = GetData (tk, temperature_values, m_num_values)) != TK_Normal)
                return status;
            my_stage++;
        }   nobreak;

        case 4:
        {
            unsigned char temp;

            // get the TKE_Stop_User_Data opcode which denotes the
            // end of user data
            if ((status = GetData (tk, temp)) != TK_Normal)
                return status;
            if (temp != TKE_Stop_User_Data)
                return TK_Error;
            my_stage = -1;
        }   break;

        default:
            return TK_Error;
    }

    return status;
}

4. Implement the custom Reset Function

The toolkit calls the opcode handler's Reset function after it has finished processing the opcode. This method should reinitialize any opcode handler variables, free any temporary data, and then call the base class implementation.

void My_HTK_Shell::Reset ()
{
    my_stage = 0;
    HTK_Shell::Reset();
}

5. Implement the custom Clone function

The toolkit uses Clone when it needs additional instances of an opcode handler; our version must return a new object of the derived type.

TK_Status My_HTK_Shell::Clone (HStreamFileToolkit & tk, BBaseOpcodeHandler **newhandler) const
{
    *newhandler = new My_HTK_Shell();
    if (*newhandler != null)
        return TK_Normal;
    else
        return tk.Error();
}

6. Instruct the toolkit to use our custom shell opcode handler in place of the default handler by calling SetOpcodeHandler.

We specify the type of opcode we want to replace and pass in a pointer to the new opcode handler object. We then pass the customized toolkit into the write function.

tk->SetOpcodeHandler (TKE_Shell, new My_HTK_Shell);
HTK_Write_Stream_File ("testfile.hsf", tk);

This also causes the toolkit to delete its default handler object for the TKE_Shell opcode. Note: As the HOOPS/Stream Reference Manual points out, all opcode handler objects stored in the HStreamFileToolkit object are deleted when the HStreamFileToolkit object is deleted. Therefore, we would not delete the My_HTK_Shell object created in the above example.
