
[LEGACY] Tutorial 4: fast deferred rendering


Redner renders images using path tracing. While path tracing generates physically-plausible results by simulating inter-surface reflections and occlusions, it is slow. For some applications, a fast approximation is desirable. We can achieve this through a technique called deferred rendering. Redner supports outputting not just the color, but also various attributes of the intersections between the camera rays and the scene. We can render these attributes into a "G-buffer" and perform the lighting computation on the G-buffer in PyTorch.

This tutorial performs a similar task to the previous one: we again want to estimate the pose of a teapot. The target image is

and the initial guess is

The full source code is here: https://github.com/BachiLi/redner/blob/master/tutorials/04_fast_deferred_rendering.py

The beginning of the scene setup is identical to the previous tutorial, except this time we don't set up any light sources:

scene = pyredner.Scene(cam, shapes, materials, area_lights = [], envmap = None)
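
For context, here is a minimal sketch of the setup that precedes this call, assuming the same teapot asset and camera as the previous tutorial; the numbers and file path below are placeholders, and the conversion of mesh_list into shapes and materials follows the previous tutorial (see the linked source file for the authoritative version):

import torch
import pyredner

# Load the teapot geometry and its materials from a Wavefront .obj file,
# as in the previous tutorial.
material_map, mesh_list, light_map = pyredner.load_obj('teapot.obj')

# A camera placed in front of the teapot; the exact parameters are placeholders.
cam = pyredner.Camera(position = torch.tensor([0.0, 30.0, 200.0]),
                      look_at = torch.tensor([0.0, 30.0, 0.0]),
                      up = torch.tensor([0.0, 1.0, 0.0]),
                      fov = torch.tensor([45.0]),
                      clip_near = 1e-2,
                      resolution = (256, 256))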

When we serialize the scene, we specify an optional argument "channels", which dictates what redner outputs. In addition to color (radiance), redner supports various attributes such as the alpha value (1 if the pixel is foreground, 0 if it is background), depth, position, normal, albedo, etc. (see https://github.com/BachiLi/redner/blob/master/channels.h for the full list). Importantly, all of these outputs are still differentiable with respect to all scene parameters. Here we ask redner to output the position, the shading normal, and the diffuse albedo.

scene_args = pyredner.RenderFunction.serialize_scene(\
    scene = scene,
    num_samples = 16, # Still need some samples for anti-aliasing
    max_bounces = 0, # Set to 0 to avoid path tracing
    channels = [redner.channels.position,
                redner.channels.shading_normal,
                redner.channels.diffuse_reflectance])
render = pyredner.RenderFunction.apply
g_buffer = render(0, *scene_args)

Since we specified the outputs to be position, normal, and albedo, g_buffer is a 9-channel image. We extract the individual buffers as follows:

pos = g_buffer[:, :, :3]
normal = g_buffer[:, :, 3:6]
albedo = g_buffer[:, :, 6:9]

The G-buffers look like the following (with some normalization applied):
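
If you want to inspect the buffers yourself, a small sketch like the following works; the normalization used here (remapping normals from [-1, 1] to [0, 1] and dividing positions by a crude scene-dependent scale) is an assumption for display purposes, not part of the tutorial code:

# Visualize the G-buffers. pyredner.imwrite expects a CPU tensor with values in [0, 1].
pyredner.imwrite(((normal + 1.0) * 0.5).cpu(), 'results/normal.png')           # remap [-1, 1] -> [0, 1]
pyredner.imwrite(albedo.cpu(), 'results/albedo.png')                           # albedo is already in [0, 1]
pyredner.imwrite((pos / pos.abs().max()).abs().cpu(), 'results/position.png')  # crude scaling for display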

To shade the G-buffer into the final image in PyTorch, we define the following function:

def deferred_render(pos, normal, albedo):
    # We assume a single point light co-located with the camera at (0, 30, 200).
    # The shading consists of a geometry term cos/d^2, the diffuse albedo, and the light intensity.
    light_pos = torch.tensor([0.0, 30.0, 200.0], device = pyredner.get_device())
    light_pos = light_pos.view(1, 1, 3)
    light_intensity = torch.tensor([10000.0, 10000.0, 10000.0], device = pyredner.get_device())
    light_intensity = light_intensity.view(1, 1, 3)
    light_dir = light_pos - pos
    # the d^2 term:
    light_dist_sq = torch.sum(light_dir * light_dir, 2, keepdim = True)
    light_dist = torch.sqrt(light_dist_sq)
    # Normalize light direction
    light_dir = light_dir / light_dist
    dot_l_n = torch.sum(light_dir * normal, 2, keepdim = True)
    return light_intensity * dot_l_n * (albedo / math.pi) / light_dist_sq 

Then we can just call

img = deferred_render(pos, normal, albedo)

to obtain the final image.

The rest of the code is similar to the previous tutorial.
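
For completeness, here is a rough sketch of what that optimization loop looks like, assuming the same pose parameterization as the previous tutorial (a translation vector and Euler angles applied to the teapot vertices). The names shape0_vertices (a saved copy of the original vertices) and target (the target image) are assumptions here; the authoritative version is in the linked source file:

# Pose parameters we optimize over (initialized away from the target pose).
translation_params = torch.tensor([0.1, -0.1, 0.1],
    device = pyredner.get_device(), requires_grad = True)
euler_angles = torch.tensor([0.1, -0.1, 0.1], requires_grad = True)

optimizer = torch.optim.Adam([translation_params, euler_angles], lr = 1e-2)
for t in range(200):
    optimizer.zero_grad()
    # Apply the current pose to a saved copy of the original vertices.
    rotation_matrix = pyredner.gen_rotate_matrix(euler_angles).to(pyredner.get_device())
    shapes[0].vertices = shape0_vertices @ torch.t(rotation_matrix) + translation_params * 100.0
    # Re-serialize and re-render the G-buffer, then shade it in PyTorch.
    scene_args = pyredner.RenderFunction.serialize_scene(
        scene = scene,
        num_samples = 4, # fewer samples per iteration to speed things up
        max_bounces = 0,
        channels = [redner.channels.position,
                    redner.channels.shading_normal,
                    redner.channels.diffuse_reflectance])
    g_buffer = render(t + 1, *scene_args)
    img = deferred_render(g_buffer[:, :, :3], g_buffer[:, :, 3:6], g_buffer[:, :, 6:9])
    # Compare against the target image and backpropagate through the deferred shading
    # and the G-buffer rendering.
    loss = (img - target).pow(2).sum()
    loss.backward()
    optimizer.step()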

Here is our final optimized image:

And here is the optimization video: