PROGRAMMING GUIDE
Visualize supports rendering the scene to an offscreen window. This is accomplished using an offscreen image buffer, and OffScreenWindowKey serves as a high-level handle to the buffer. It allows you to inspect or manipulate the image data without the user seeing it rendered on the screen.
After you have the key, simply include a root segment as you would with any normal window. Once a segment is included by a window key, its entire hierarchy will enter the rendering pipeline for that window.
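For example, creating the window and attaching a scene might look like the following sketch (the 512 x 512 dimensions and segment variable are placeholders, and the CreateOffScreenWindow signature shown is an assumption that may differ between Visualize versions):

    // create an offscreen window backed by a 512 x 512 image buffer
    HPS::OffScreenWindowKey offscreenWindow = HPS::Database::CreateOffScreenWindow(512, 512);

    // include the root of the scene to be rendered, exactly as you would for an on-screen window
    HPS::SegmentKey sceneRoot = HPS::Database::CreateRootSegment();
    offscreenWindow.IncludeSegment(sceneRoot);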
If you want to choose a specific driver interface to use with the window you create, you must use an OffScreenWindowOptionsKit to specify it. If you do not specify a driver interface, Visualize will default to the Window::Driver::Default3D interface. Whatever hardware is used to draw into a conventional on-screen window is also used to draw into a memory buffer, so the scene should be identical in both contexts (Visualize will not use a software mode to do any offscreen rendering).
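For example, a sketch of requesting a particular driver interface; the SetDriver call on OffScreenWindowOptionsKit and the CreateOffScreenWindow overload that accepts the kit are assumptions:

    HPS::OffScreenWindowOptionsKit windowOpts;
    windowOpts.SetDriver(HPS::Window::Driver::Default3D);    // or a specific interface such as OpenGL2 or DirectX11

    HPS::OffScreenWindowKey offscreenWindow =
        HPS::Database::CreateOffScreenWindow(512, 512, windowOpts);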
The segment hierarchy in an offscreen window behaves the same way as a hierarchy in a normal window, so you must set all your cameras, lighting, and other attributes in the same way. If you need to render an existing scene into an offscreen window, it is recommended that you include the existing scene's root segment into the offscreen window using an include segment rather than duplicating the scene by rebuilding it. Using an include segment saves memory and time, and any changes to the existing scene are automatically reflected in the offscreen window.
It may be desirable to render images with a transparent background. This is only supported when the render target is an offscreen window.
To make the window background transparent, set the opacity level in the window options kit before creating the window:
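The snippet below is a sketch of that step; it assumes the options kit exposes a SetOpacity call and that an opacity of 0.0 corresponds to a fully transparent background:

    HPS::OffScreenWindowOptionsKit windowOpts;
    windowOpts.SetDriver(HPS::Window::Driver::Default3D);    // transparency requires a 3D shader driver
    windowOpts.SetOpacity(0.0f);                             // assumed: 0.0 means fully transparent

    HPS::OffScreenWindowKey offscreenWindow =
        HPS::Database::CreateOffScreenWindow(512, 512, windowOpts);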
Limitations: Rendering to a transparent background is only supported in the 3D shader drivers. Additionally, bloom is not supported.
Before accessing the image data, you have to make sure the rendering is complete. Normally, you would call WindowKey::Update to render a scene. However, due to the multithreaded nature of Visualize, the rendering may not be finished by the time you try to use the image. Therefore, it is appropriate to wait for the image to complete using an UpdateNotifier:
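A sketch of that wait, assuming UpdateWithNotifier on the window key returns an UpdateNotifier that can be waited on and that offscreenWindow is the key created earlier:

    // start the update and block until the offscreen rendering has completed
    HPS::UpdateNotifier updateNotifier = offscreenWindow.UpdateWithNotifier();
    updateNotifier.Wait();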
The raw image data is accessed through an ImageKit. Once you have the kit, you can manipulate the data in many ways. For example, you can write the image to a file, use it as a texture, or merely inspect it by iterating over its array. This code sample demonstrates how to write the image to a PNG:
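The sketch below assumes the rendered pixels have already been retrieved into an ImageKit (called renderedImage here) and that Image::File::Export accepts a target file name plus the kit; the file name is a placeholder and the exact way the PNG format is selected may differ:

    HPS::ImageKit renderedImage;    // assumed to have been populated from the offscreen window
    // write the kit out as a PNG file
    HPS::Image::File::Export("screenshot.png", renderedImage);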
Alternatively, you may want to modify part of the image data before doing something else with it. In that case, you can get the raw image as 24-bit RGB data using an ImageKit with an OffScreenWindowOptionsControl:
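A sketch of working with the raw data once it is in the kit; the retrieval call on the OffScreenWindowOptionsControl is not shown, and the ShowSize, ShowData, and SetData methods are assumed to follow the usual kit show/set pattern:

    HPS::ImageKit renderedImage;    // assumed to have been filled via the OffScreenWindowOptionsControl

    unsigned int width = 0, height = 0;
    renderedImage.ShowSize(width, height);

    HPS::ByteArray pixels;
    renderedImage.ShowData(pixels);    // 24-bit RGB: three bytes per pixel, assumed tightly packed

    // example modification: invert the red channel of every pixel
    for (size_t i = 0; i + 2 < pixels.size(); i += 3)
        pixels[i] = static_cast<unsigned char>(255 - pixels[i]);

    renderedImage.SetData(pixels);    // push the modified bytes back into the kit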
It is easy to capture a screenshot by rendering your scene into an OffScreenWindowKey and then showing the rendered image into an ImageKit as described above. However, if you don't need to inspect or alter the image data, there are more convenient and less memory-intensive ways of doing this. See section 4.4.1 and section 9.1 for instructions on using alternate methods.
It is possible to associate an offscreen window directly with an image definition. When you do so, rendering to the offscreen window automatically updates the image definition with new data. The image definition can then be used as a texture, a window background, or exported for other uses. If you want to render to a texture, you must make the association between the offscreen window and the target image definition at the time you create the window.
NOTE: While there are other ways of rendering to a texture, the method described here is recommended because it writes to GPU memory. Other methods that do not use the ImageDefinition target will render the image to main memory before transferring it to GPU memory. This incurs a performance penalty when continuously updating the render target.
Assuming you have the image definition already created, the last step is to apply it to a piece of geometry as a texture. High-level steps are shown below:
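A sketch of those steps, assuming the image definition (imageDef), its portfolio (portfolio), and the geometry's segment (mySegmentKey) already exist; DefineTexture, the portfolio control, and SetFaceTexture are used here as in an ordinary texturing workflow (see section 5.3):

    // define a texture in the portfolio that sources the image definition tied to the offscreen window
    HPS::TextureOptionsKit textureOpts;    // default texture options for this sketch
    portfolio.DefineTexture("offscreen_texture", imageDef, textureOpts);

    // make the portfolio visible to the geometry's segment, then map the texture onto its faces
    mySegmentKey.GetPortfolioControl().Push(portfolio);
    mySegmentKey.GetMaterialMappingControl().SetFaceTexture("offscreen_texture");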
Complete instructions on creating an image definition and applying it as a texture can be found in section 5.3.
HOOPS Visualize offers the ability to print (see limitations) as well as export to PostScript and 2D PDF. Collectively, these are referred to as "hardcopy". Hardcopy is an extensible class for writing hardcopy versions of your model, and it has one purpose: to reproduce your scene as accurately as possible in the output medium.
Hardcopy works by rendering the faces in your model to an image in one pass, then performing a hidden line rendering over that image in a second pass. Text, lines, edges, curves, and other vector data are drawn in this second pass. This results in output that can be handled efficiently by a printer.
The background image, generated in the first pass, can be extremely large. For example, an 8.5" x 11" image at 600 dpi is 5100 x 6600 pixels, which at 32 bits per pixel is roughly 128 MB of data. Visualize can't render an image that large, so Hardcopy divides the image into smaller parts that it can handle. The smaller parts are compressed, written to temporary files, then reassembled and cropped before being sent to the output file. This enables Hardcopy to produce arbitrarily large output files.
When printing, you must provide the window key from your scene hierarchy to Hardcopy. Additionally, there is a Hardcopy::File::ExportOptionsKit through which you can set size and resolution. For example:
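The sketch below shows a 2D PDF export; Hardcopy::File::Export, the Driver enumeration, and the SetSize/SetDPI calls are assumptions based on the description above, and the file name, dimensions, and resolution are placeholders:

    HPS::Hardcopy::File::ExportOptionsKit exportOpts;
    exportOpts.SetSize(8.5f, 11.0f);    // output size, assumed to be in inches
    exportOpts.SetDPI(300);             // output resolution

    // render the scene attached to 'myWindowKey' into a 2D PDF file
    HPS::Hardcopy::File::Export("output.pdf", HPS::Hardcopy::File::Driver::PDF,
                                myWindowKey, exportOpts);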
Printing directly to paper only works in Windows applications because it relies on the Windows GDI. One of the main differences from file-based export is that the options kit is a Hardcopy::GDI::ExportOptionsKit object instead of a Hardcopy::File::ExportOptionsKit.
On Linux and OS X, printing is a two-step process. First, export your scene to a 2D PDF or PostScript file. Next, make your own calls to CUPS (or the printing API of your choice) to send the data to the printer.