A Shading Reuse Method for Efficient Micropolygon Ray Tracing
Qiming Hou Kun Zhou
State Key Lab of CAD&CG, Zhejiang University
Abstract
We present a shading reuse method for micropolygon ray tracing. Unlike previous shading reuse methods that require an explicit object-to-image space mapping for shading density estimation or shading accuracy, our method performs shading density control and actual shading reuse in different spaces with uncorrelated criteria. Specifically, we generate the shading points by shooting a user-controlled number of shading rays from the image space, while the evaluated shading values are assigned to antialiasing samples through object-space nearest neighbor searches. Shading samples are generated in separate layers corresponding to first-bounce ray paths to reduce spurious reuse across very different ray paths. This method eliminates the need for an explicit object-to-image space mapping, enabling the elegant handling of ray tracing effects such as reflection and refraction. The overhead of our shading reuse operations is minimized by a highly parallel implementation on the GPU. Compared to the state-of-the-art micropolygon ray tracing algorithm, our method reduces the required shading evaluations by an order of magnitude and achieves significant performance gains.
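To make the reuse step concrete, the following is a minimal C++ sketch of the object-space nearest-neighbor assignment described above. The struct names and the brute-force search are illustrative assumptions, not the paper's implementation, which performs the search with highly parallel GPU data structures.

#include <limits>
#include <vector>

struct Vec3 { float x, y, z; };

// One shading point, generated by a shading ray shot from image space.
struct ShadingPoint {
    Vec3 position;  // object-space hit position of the shading ray
    int  layer;     // first-bounce ray path this point belongs to
    Vec3 color;     // shading value evaluated once at this point
};

// One antialiasing (visibility) sample that receives a reused shading value.
struct AASample {
    Vec3 position;  // object-space hit position of the visibility ray
    int  layer;     // first-bounce ray path this sample belongs to
    Vec3 color;     // output: reused shading value
};

static float dist2(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Assign each antialiasing sample the color of its nearest shading point
// within the same layer, so very different ray paths never share shading.
void reuseShading(const std::vector<ShadingPoint>& points,
                  std::vector<AASample>& samples) {
    for (AASample& s : samples) {
        float best = std::numeric_limits<float>::max();
        for (const ShadingPoint& p : points) {
            if (p.layer != s.layer) continue;   // layers kept separate
            float d = dist2(s.position, p.position);
            if (d < best) { best = d; s.color = p.color; }
        }
    }
}

In this sketch the number of shading points, and hence the shading rate, is decoupled from the number of antialiasing samples, which is where the reduction in shading evaluations comes from.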
Keywords: micropolygon, GPU, Reyes, ray tracing
1 Introduction
Shading is typically the performance bottleneck in cinematic-quality rendering, which is often based on the Reyes architecture and uses micropolygons to represent high-order surfaces or highly detailed objects [Cook et al. 1987]. In order to reduce shading costs, state-of-the-art micropolygon renderers (e.g., Pixar's RenderMan) perform shading computation on micropolygon vertices, and reuse the shading values to evaluate the color of each visibility sample (or antialiasing sample) and composite the final image. Such a shading reuse strategy enables a shading rate significantly lower than the visibility sampling rate, which is vital for efficient high-quality rendering where extremely high supersampling of visibility is necessary, especially when rendering defocus and motion blur.
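As an illustration of this strategy (hypothetical structures, not RenderMan's actual interface), the sketch below evaluates shading once per micropolygon vertex and lets every visibility sample interpolate the cached values instead of invoking the shader again:

// Illustrative sketch of Reyes-style shading reuse: shading is evaluated
// once at the four vertices of a quad micropolygon, and each visibility
// (antialiasing) sample bilinearly interpolates those cached values.
struct Color { float r, g, b; };

struct Micropolygon {
    // Cached vertex shading, ordered (u,v) = (0,0), (1,0), (0,1), (1,1).
    Color vertexShade[4];
};

// Color a visibility sample at parametric position (u, v) inside the
// micropolygon without any additional shader invocation.
Color shadeVisibilitySample(const Micropolygon& mp, float u, float v) {
    float w[4] = { (1 - u) * (1 - v), u * (1 - v), (1 - u) * v, u * v };
    Color c = { 0, 0, 0 };
    for (int i = 0; i < 4; ++i) {
        c.r += w[i] * mp.vertexShade[i].r;
        c.g += w[i] * mp.vertexShade[i].g;
        c.b += w[i] * mp.vertexShade[i].b;
    }
    return c;
}

Because many visibility samples fall inside each micropolygon, the shading rate stays far below the visibility sampling rate.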
Existing shading reuse methods for micropolygon rendering are primarily designed for rasterization-based pipelines. Ray tracing effects such as reflection and refraction are typically treated as part of shading in such methods. Consequently, all reflected/refracted samples have to be shaded, incurring significant overhead. As ray tracing gains more significance in modern high-quality rendering [Parker et al. 2010], this may become a major obstacle in future applications.
In this paper, we introduce a simple but effective method to reuse shading evaluations for efficient micropolygon ray tracing. Compared to the state-of-the-art micropolygon ray tracing algorithm, our method is able to reduce the required shading evaluations by an order of magnitude and achieve significant performance gains.
1.1 Related Work
Extensive research has been done on micropolygon rendering and ray tracing.

Researchers have explored efficient parallel implementations of micropolygon rendering on GPUs [Wexler et al. 2005; Patney and Owens 2008; Zhou et al. 2009; Hou et al. 2010]. In particular, Hou et al. [2010] introduced a GPU-based micropolygon ray tracing algorithm. They demonstrated that for high-quality defocus and motion blur, ray tracing can greatly outperform rasterization methods. In their method, ray tracing is only used for visibility sampling and shading is still performed on micropolygon vertices. Another branch of research also seeks to accelerate micropolygon rendering using GPUs [Fisher et al. 2009; Fatahalian et al. 2009; Fatahalian et al. 2010; Ragan-Kelley et al. 2011; Burns et al. 2010]. The key difference between our work and theirs is that while they propose new GPU architecture designs that support real-time micropolygon rasterization, we aim to accelerate high-quality, off-line ray tracing using software approaches on current GPU hardware.