EdgeBreaker Data
Shells with the TKSH_CONNECTIVITY_COMPRESSION bit set have their vertex location
and connectivity data combined into an encoding according to the algorithm presented
by Jarek Rossignac in 1998. The remainder of this description assumes familiarity
with that algorithm; please refer to Rossignac's original EdgeBreaker paper for the details.
Overall Organization
Header: byte scheme, byte mtable_scheme, byte points_scheme, byte normals_scheme, int opslen, int mtable_length, int points_length, int point_count, int normals_length
Opcodes:
MTable: int mtable_flags,
    [int mlengths_count (obsolete)],
    [int m2stackoffsets_count (obsolete)],
    [int dummies_count],
    [int patches_count],
    [array of int mlengths (obsolete)],
    [array of int m2stackoffsets (obsolete)],
    [array of int dummies],
    [array of int patches],
    [6*float bounding],
    [3*int quantizations],
    [3*int quantizations_normals]
MTable Flags:
Notes
Dummies and Patches
Both of these fields are instructions to postprocess the pseudo-manifolds produced by the standard algorithm into a generalized shell. As described below in "Points", these fields also interact with vertex location reconstruction. Dummy vertices, as well as any triangles that touch them, are removed. Patches are pairs of vertex ids: the first of each pair is the old vertex id, the second is the new vertex id. The pairs are sorted so that the first id of each pair is monotonically increasing. In processing the patches, we replace the first vertex id with the second. The old vertex ids are relative, the new ones absolute; that is, the old vertex ids are encoded as differences from the last one seen. These two postprocessing steps are what allow us to avoid the inherently unstable CASE_M and CASE_M2 operations.
Points
Points are encoded as positions relative to a "parallelogram completion" prediction scheme. Let N be an unknown vertex location introduced as part of a triangle that contains vertices {N,A,B}. Suppose further that there is another triangle {B,A,C} that shares edge {A,B} with triangle {N,A,B}. If vertices A, B, and C are all known, vertex N is predicted to lie at location A+(B-C). We must, however, account for the case where one or more of vertices A, B, and/or C are also unknown. In that case, we use the first valid vertex found among A, B, and C, in order, as the predicted location. If all three are unknown (e.g. for the very first vertex), the predicted location is the origin {0,0,0} (remember, though, that we are in integer space quantized to the axis-aligned bounding cuboid). A vertex can be unknown if it is part of the loop that begins a new set of triangles or if it is a "dummy" vertex (see "Dummies and Patches" above).
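The prediction rule above can be sketched as follows. This is an illustrative reconstruction, not the toolkit's actual code; the function and the validity flags are hypothetical names, and the arithmetic is in the quantized integer space described in the text.

```cpp
#include <array>
#include <cassert>

using Vec3i = std::array<int, 3>;

// Parallelogram completion: triangle {N,A,B} shares edge {A,B} with a
// known triangle {B,A,C}; the new vertex N is predicted at A + (B - C).
// The *_valid flags mark vertices whose locations are already decoded.
Vec3i predict_vertex(const Vec3i& a, const Vec3i& b, const Vec3i& c,
                     bool a_valid, bool b_valid, bool c_valid) {
    if (a_valid && b_valid && c_valid)
        return {a[0] + b[0] - c[0], a[1] + b[1] - c[1], a[2] + b[2] - c[2]};
    // Otherwise fall back to the first decoded vertex among A, B, C, in order.
    if (a_valid) return a;
    if (b_valid) return b;
    if (c_valid) return c;
    return {0, 0, 0};  // nothing decoded yet: predict the quantized origin
}
```

The decoder then adds the transmitted residual to this prediction to recover the actual quantized location.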
Points are quantized to a uniform grid along the axis-aligned bounding cuboid so that we may reap the compression benefits of using small integers (the better the prediction of parallelogram completion, the deeper the compression). The axis-aligned bounding cuboid is chopped into the number of buckets specified by mtable's three "quantizations" values. If points_scheme in the edgebreaker header is 0, these differences are signed 16-bit shorts left for the LZ postprocess. We found, however, that LZ required too much computation time to pack the 16-bit shorts. This was the motivation for points_scheme 1. If points_scheme is 1, we do the packing ourselves into a cascading sequence of progressively larger samples that each have one escape sequence. Each sample is of one of the following forms.
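A minimal sketch of per-axis quantization against the bounding cuboid follows. The function names, the rounding choice, and the use of buckets-1 as the maximum cell index are assumptions for illustration; the specification only states that each axis is chopped into the bucket count given by the MTable "quantizations" values.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Hypothetical sketch: map a coordinate in [lo, hi] (one axis of the
// axis-aligned bounding cuboid) to an integer cell index, and back.
int quantize(float v, float lo, float hi, int buckets) {
    float t = (v - lo) / (hi - lo);                       // normalize to [0,1]
    int q = static_cast<int>(t * (buckets - 1) + 0.5f);   // round to a cell
    return std::clamp(q, 0, buckets - 1);
}

float dequantize(int q, float lo, float hi, int buckets) {
    return lo + (hi - lo) * (static_cast<float>(q) / (buckets - 1));
}
```

Good parallelogram predictions make the residuals between predicted and actual cell indices small, which is exactly what the variable-length sample packing exploits.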
New to 7.0: edgebreaker data may now be encoded entirely without points, so that the TKE_Shell opcode may separately specify uncompressed floats after the end of the edgebreaker data.
Also new as of version 7.0, normals are encoded in much the same way as points. The same parallelogram-completion algorithm is used to predict their locations; the only difference is that, for the purpose of encoding, normals are assumed to lie in the standard 2x2x2 bounding cube: {[-1,-1,-1],[1,1,1]}. The same table of cascading ranges used in points_scheme 1 is used to pack the bits in normals_scheme 0, which is currently the only valid normals scheme. Within the context of edgebreaker, we allow only one normal per vertex, and every vertex must have a normal. The HSF opcode that uses the edgebreaker interface, TK_Shell, has a more general (though less compressed) encoding that allows some normals to be left undefined.
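Because the normals' bounding cube is fixed, their quantization needs no per-model bounds. A small illustrative sketch, with a hypothetical function name and rounding choice (`buckets` stands in for one of MTable's quantizations_normals values):

```cpp
#include <array>
#include <cassert>

// Sketch: normals are quantized like points, but against the fixed cube
// {[-1,-1,-1],[1,1,1]} rather than the model's bounding cuboid.
std::array<int, 3> quantize_normal(const std::array<float, 3>& n, int buckets) {
    std::array<int, 3> q{};
    for (int i = 0; i < 3; ++i) {
        float t = (n[i] + 1.0f) / 2.0f;                   // [-1,1] -> [0,1]
        int v = static_cast<int>(t * (buckets - 1) + 0.5f);
        q[i] = v < 0 ? 0 : (v > buckets - 1 ? buckets - 1 : v);
    }
    return q;
}
```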