Rendering Techniques

Dzmitry Malyshau edited this page Jan 25, 2022 · 8 revisions

Here is a brief comparison of rendering approaches for efficiently visualizing the Vangers worlds on today's hardware.

Original row fill

The game used a double-layer modification of the painter's algorithm on a row-by-row basis. It scanned each line of the level within the visible range, elevated it to the needed height, projected it onto the screen, and wrote the appropriate color. Closer rows would overwrite the farther ones.
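A minimal single-layer sketch of this fill, with hypothetical `height_at` and `color_at` callbacks standing in for the game's level sampling (the projection here is deliberately crude, one pixel per row of distance):

```rust
/// Row-by-row painter's fill: rows are drawn far-to-near, so nearer
/// rows simply overwrite farther ones. `row` is distance from camera.
fn fill_rows<H, C>(
    frame: &mut [u8],
    width: usize,
    screen_h: usize,
    depth: usize,
    height_at: H, // terrain height at (x, row) - hypothetical sampler
    color_at: C,  // terrain color at (x, row) - hypothetical sampler
) where
    H: Fn(usize, usize) -> usize,
    C: Fn(usize, usize) -> u8,
{
    for row in (0..depth).rev() { // farthest row first
        for x in 0..width {
            let h = height_at(x, row);
            // crude projection: each step away moves one pixel up the
            // screen, and the terrain elevation lifts the texel further up
            if let Some(y) = (screen_h - 1).checked_sub(row + h) {
                frame[y * width + x] = color_at(x, row);
            }
        }
    }
}
```

A tall near column painted last hides whatever a farther row wrote to the same screen pixel, which is the whole trick: no depth buffer is needed.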

Issues:

  • heavy on CPU and RAM usage
  • can't sustain 4K screens today
  • limited to narrow angles

Improvement vectors:

  • multi-threading

Point cloud

Done a long time ago by https://github.com/stalkerg. Renders each level texel as either a point or a vertical line on the GPU. Supports dynamic modification. Could also be approached by turning each line into a box, which would make it even heavier.

Issues:

  • uses GPU pixel shaders and the rasterizer inefficiently: the hardware shades in 2x2 quads anyway, so we end up with at least 5 shader invocations per texel (1 for the vertex, 4 for the pixel quad).
  • limited to narrow angles

Improvement vectors:

  • carefully cull the visible portion of the level on CPU

Sliced render

Implemented here: renders each slice as a plane, discarding the texels that don't belong to actual terrain. We render from top to bottom with depth test and depth write enabled. Currently heavily bound on pixel shader executions; this could be improved by taking a bounding mesh of the terrain and pre-filling Z accordingly.

Issues:

  • too many pixels to process: we need to fetch the surface info for all texels in each slice in order to even figure out if anything needs to be drawn.
  • somewhat limited to narrow angles

Improvement vectors:

  • better terrain bounding volume to prepare Z

Ray tracing

Implemented here: draws one big primitive on screen and computes the color of each pixel by tracing a ray from the camera to the nearest intersection with the ground. Done fully on GPU, supports dynamic level modifications. Allows adding native shadows and water reflections within the same shader.
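The per-pixel march can be sketched as follows, assuming a hypothetical `height_at` sampler and a fixed step size; on the GPU each loop iteration is a texture fetch, which is where the cost comes from:

```rust
/// March a ray through a heightfield and return the first sample point
/// where the ray dips below the terrain, or None on a miss.
fn march<F>(
    origin: [f32; 3],
    dir: [f32; 3], // assumed normalized
    height_at: F,  // hypothetical terrain sampler at (x, y)
    step: f32,
    max_steps: u32,
) -> Option<[f32; 3]>
where
    F: Fn(f32, f32) -> f32,
{
    let mut p = origin;
    for _ in 0..max_steps {
        // advance along the ray; one "texture fetch" per iteration
        p = [p[0] + dir[0] * step, p[1] + dir[1] * step, p[2] + dir[2] * step];
        if p[2] <= height_at(p[0], p[1]) {
            return Some(p); // the ray fell below the heightfield
        }
    }
    None
}
```

A fixed step is the simplest scheme; the precision artifacts at steep angles come from rays skimming the surface for many steps without a decisive hit.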

Issues:

  • heavy on the GPU due to shader loops with many texture fetches
  • precision artifacts at steep angles; handling the underground consistently is difficult

Improvement vectors:

  • an acceleration structure built from maximum mipmaps
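Such a structure could look like a mipmap pyramid where each level stores the maximum of the 2x2 block below it, so a ray can skip a whole region whenever it stays above that region's maximum. A sketch, assuming a square power-of-two map:

```rust
/// Build a "maximum mipmap" pyramid over a heightfield. Level 0 is the
/// base map; each further level halves the size and keeps 2x2 maxima.
fn build_max_mips(base: &[u8], size: usize) -> Vec<Vec<u8>> {
    let mut levels = vec![base.to_vec()];
    let mut s = size;
    while s > 1 {
        let prev = levels.last().unwrap();
        let half = s / 2;
        let mut next = vec![0u8; half * half];
        for y in 0..half {
            for x in 0..half {
                let (x2, y2) = (x * 2, y * 2);
                // max of the 2x2 block one level down
                next[y * half + x] = prev[y2 * s + x2]
                    .max(prev[y2 * s + x2 + 1])
                    .max(prev[(y2 + 1) * s + x2])
                    .max(prev[(y2 + 1) * s + x2 + 1]);
            }
        }
        levels.push(next);
        s = half;
    }
    levels
}
```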

Tessellation

Also implemented here: draws the level as a series of smaller patches, each getting tessellated by the hardware according to how it's seen by the camera. Done fully on GPU, supports dynamic level modifications.

Issues:

  • tessellation support is limited (Metal has a different API for it)
  • artifacts in places where the double layer splits or merges

Polygonalization in Unity

Implemented by "Andrey M". There is a plugin that imports the level into Unity as a terrain object, which is then rendered as polygons.

Issues:

  • locked to Unity
  • no dynamic level modification
  • no underground

Compute-based row fill

Implemented here. Replicates the original row-filling algorithm with compute shaders. Runs with high parallelism on the GPU. Supports dynamic level modification.

Relies on some way of ordering voxels. This could be done with Rasterizer-Ordered Views, or more hackily with atomic operations on u32 values that store the depth in the high bits and the material type in the low bits. The latter is the current approach here.
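The atomic trick can be sketched on the CPU like this. The field widths (24-bit depth key, 8-bit material) are an assumption for illustration, as is the convention that a larger depth key means closer; an atomic max on the packed word then resolves ordering regardless of which thread writes first:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

/// Pack depth into the high bits and material into the low bits, so that
/// comparing packed words compares depth first.
/// ASSUMPTION: 24-bit depth key (larger = closer), 8-bit material.
fn pack(depth: u32, material: u32) -> u32 {
    debug_assert!(depth < (1 << 24) && material < (1 << 8));
    (depth << 8) | material
}

fn depth_of(packed: u32) -> u32 {
    packed >> 8
}

fn material_of(packed: u32) -> u32 {
    packed & 0xFF
}

/// One framebuffer "pixel": many threads may write, and the closest
/// fragment wins via a single atomic max, no locking required.
fn resolve(pixel: &AtomicU32, depth: u32, material: u32) {
    pixel.fetch_max(pack(depth, material), Ordering::Relaxed);
}
```

On the GPU the same pattern would use atomic max on a storage buffer or storage texture of u32 per pixel.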

Issues:

  • synchronizing access to the same pixels from multiple threads
  • also limited to narrow angles

Bar painting

Implemented here. We draw each terrain point in the visible area as a bar: in fact as 2 bars, or 10 quads in total per point, to account for the lower and upper levels. Supports dynamic level modification.
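The quad count follows from treating a bar as a box without a bottom face: 4 sides plus 1 top is 5 quads, and two bars per point gives the 10 quads above. A sketch of one bar's geometry (names and the 1x1 texel footprint are illustrative):

```rust
/// Emit the 5 quads of one bar over a 1x1 texel, spanning z_lo..z_hi.
/// Each quad is returned as 4 corners in (x, y, z).
fn bar_quads(x: f32, y: f32, z_lo: f32, z_hi: f32) -> Vec<[[f32; 3]; 4]> {
    let (x1, y1) = (x + 1.0, y + 1.0);
    vec![
        // top face
        [[x, y, z_hi], [x1, y, z_hi], [x1, y1, z_hi], [x, y1, z_hi]],
        // four side faces (no bottom: it is never visible)
        [[x, y, z_lo], [x1, y, z_lo], [x1, y, z_hi], [x, y, z_hi]],
        [[x1, y, z_lo], [x1, y1, z_lo], [x1, y1, z_hi], [x1, y, z_hi]],
        [[x1, y1, z_lo], [x, y1, z_lo], [x, y1, z_hi], [x1, y1, z_hi]],
        [[x, y1, z_lo], [x, y, z_lo], [x, y, z_hi], [x, y1, z_hi]],
    ]
}
```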

Issues:

  • stresses the GPU quite a bit