Developers seek benchmark solutions in 3-D rendering
Andrew Wilson
Traditional three-dimensional (3-D) graphics represent objects as mathematical models. Surfaces are extracted from these models and subdivided into many small triangles or polygons, which are then assigned colors, textures, and levels of transparency or opacity and continuously rendered to form images. In volume graphics, however, sample points such as computed-tomography or magnetic-resonance-imaging data are assigned color and transparency levels and are then projected onto the computer monitor.
Interior or subsurface structures of objects can be seen by varying the transparency of different samples according to their value and type. Volume graphics therefore has inherent advantages for applications that must visualize irregular objects or where interior structure is important, such as visualization of the human body.
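In practice, this value-to-color-and-opacity mapping is commonly called a transfer function. The following sketch (hypothetical Python code with purely illustrative thresholds, not taken from any particular product) shows how opacity can be varied by sample value so that soft tissue fades away while bone remains visible:

    # Hypothetical transfer function: map a CT-like sample value to an RGBA tuple.
    # The value ranges and colors below are illustrative only.
    def transfer_function(value):
        if value < 300:            # air and soft tissue: nearly transparent
            return (0.8, 0.5, 0.4, 0.02)
        elif value < 1000:         # denser tissue: faint and semi-transparent
            return (0.9, 0.7, 0.6, 0.15)
        else:                      # bone: bright and nearly opaque
            return (1.0, 1.0, 0.95, 0.90)

    print(transfer_function(150))    # soft tissue -> low opacity
    print(transfer_function(1400))   # bone -> high opacity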
To speed the rendering of such data, systems integrators can choose from a number of different systems from companies such as Hewlett-Packard (Palo Alto, CA), Evans and Sutherland (Salt Lake City, UT), and SGI (Mountain View, CA). These systems all use some form of graphics pipeline to quickly interpolate, shade, and light such data. To benchmark these and other graphics systems, the Application Performance Characterization project group at Standard Performance Evaluation Corp. (SPEC; Manassas, VA) offers performance results and free downloads for a benchmark based on Pro/Engineer Rev. 20 from Pro/E Design Associates (St. Augustine, FL).
During operation, the benchmark renders a model of a photocopy machine consisting of approximately 370,000 triangles and applies 16 graphics tests that measure such parameters as wireframe performance, shaded performance, shading modes, and 3-D transformations. Composite scores are provided for each set of graphics tests, and an overall composite score for graphics and CPU operations is also available.
Like SPECint and SPECfp benchmarks, the numbers in SPEC/GPC's Pro/Engineer Version 20 benchmark are ratios that compare the machine under test to a reference machine. This reference consists of a DK400LX motherboard from Intel Corp. (Santa Clara, CA) with two 300-MHz Pentium II processors running Windows NT, an AccelECLIPSE graphics board from Evans and Sutherland, and a Barracuda SCSI disk from Seagate Technology (Scotts Valley, CA).
Like other SPEC benchmarks, the overall score for the graphics benchmark is the geometric mean of the scores from the individual tests. Results comparing 15 different graphics workstations are published on SPEC's Web site: www.spec.org/gpc/apc.static/apc_proesummary.html. Although the Hewlett-Packard PA-RISC-based C360 Visualize fx6 and the Pentium-based X550 fx6 workstations tied for first place in overall performance, there is a price difference of more than $22,000 between the two machines.
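As a rough illustration of how such a composite score is formed, each test time on the machine under test is converted to a ratio against the reference machine, and the ratios are combined with a geometric mean. The timings below are invented for illustration and do not correspond to any published SPECapc result:

    import math

    # Hypothetical per-test times in seconds; the real benchmark uses 16 graphics tests.
    reference_times = [40.0, 55.0, 30.0, 62.0]   # reference machine (scores 1.0 by definition)
    measured_times = [25.0, 50.0, 20.0, 31.0]    # machine under test

    ratios = [ref / meas for ref, meas in zip(reference_times, measured_times)]
    composite = math.prod(ratios) ** (1.0 / len(ratios))   # geometric mean of the ratios
    print(f"Composite score relative to the reference machine: {composite:.2f}")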
Despite the existence of such benchmarks, companies addressing the needs of volume-rendering applications offer a caution to systems integrators. "Volume rendering is a relatively new area of graphics from a commercial point of view," says Steve Sandy, director of marketing and business development for Real Time Visualization (RTViz; Concord, MA), a business division of Mitsubishi Electronics America (Sunnyvale, CA). "Current benchmarking classes for graphics, such as Pro/Engineer, are all polygon-centric. Everything to date relates to how fast polygons fill a screen, how fast a system runs Pro/Engineer, or how fast someone can paint textures over polygons. None of this is relevant to voxels. There are huge differences between volume rendering and texture mapping," he adds.
To address the volume-rendering market, Mitsubishi has introduced the VolumePro 500, a PCI-based add-in board that performs volume rendering at 30 frames/s. "Because we process 500 million Phong-lit samples per second, all trilinearly interpolated," says Sandy, "it is hard to compare a polygon-per-second fill rate (Gouraud shaded and bilinearly interpolated) to Phong-shaded voxel samples per second."
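Trilinear interpolation reconstructs a sample anywhere inside a cell formed by eight neighboring voxels, whereas bilinear texture filtering uses only four neighboring texels. A minimal sketch of the operation (hypothetical code, not the VolumePro implementation):

    def lerp(a, b, t):
        # Linear interpolation between a and b by fraction t in [0, 1].
        return a + t * (b - a)

    def trilinear(c000, c100, c010, c110, c001, c101, c011, c111, x, y, z):
        # Interpolate among the eight corner voxels of a cell; x, y, z are in [0, 1].
        c00 = lerp(c000, c100, x)
        c10 = lerp(c010, c110, x)
        c01 = lerp(c001, c101, x)
        c11 = lerp(c011, c111, x)
        c0 = lerp(c00, c10, y)
        c1 = lerp(c01, c11, y)
        return lerp(c0, c1, z)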
Hanspeter Pfister, chief architect of the VolumePro, agrees. "Polygon rendering uses vastly different operations than volume rendering because it starts from a geometric description of objects, whereas volume rendering starts with a sampled representation of objects," he says. In a geometric description of an object such as a photocopy machine, the relevant operations performed on the data include 3-D transformations, rasterization, Gouraud shading, and texture mapping. With a volume-based representation consisting of sample data, the relevant operations include stepping through the volume data, interpolating the data, estimating gradients or surface normals, assigning colors to samples, shading, and compositing samples into pixels.
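A bare-bones software ray caster makes that difference in operations concrete. The sketch below is hypothetical and greatly simplified (nearest-neighbor sampling instead of trilinear interpolation, and no gradient estimation or Phong shading); dedicated hardware such as the VolumePro pipelines all of these steps:

    import numpy as np

    def cast_ray(volume, origin, direction, transfer_function, step=0.5):
        # Composite one pixel by stepping a ray front to back through a 3-D NumPy volume.
        color = np.zeros(3)
        alpha = 0.0
        pos = np.asarray(origin, dtype=float)
        direction = np.asarray(direction, dtype=float)
        for _ in range(int(2 * max(volume.shape) / step)):
            i, j, k = np.round(pos).astype(int)                   # nearest-neighbor sampling for brevity
            if not all(0 <= n < s for n, s in zip((i, j, k), volume.shape)):
                break                                             # ray has left the volume
            r, g, b, a = transfer_function(volume[i, j, k])       # classify: value -> color and opacity
            color += (1.0 - alpha) * a * np.array([r, g, b])      # front-to-back compositing
            alpha += (1.0 - alpha) * a
            if alpha > 0.99:                                      # early ray termination
                break
            pos += direction * step                               # step to the next sample point
        return color, alpha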
"Comparing the two rendering approaches is like comparing apples to oranges or vector graphics to raster graphics," Pfister says. "Current benchmarks only measure geometry rendering performance. And because there are no benchmarks for volume rendering, it does not make sense to publish any SPEC/APC Pro/E V20 Benchmark for VolumePro," he adds.
Until SPEC or other independent benchmarks are developed specifically to address such applications, systems developers will be left to independently evaluate the performance of systems such as Mitsubishi Electric's VolumePro 500.