GPU Shader Tutorial
This tutorial is currently a work in progress. Content may be added, updated, or removed at any time.

Shader Advanced - Transparency

Implementing support for transparency in shader code is not a complex task. The fragment shader returns a color vector of 4 components, with the 4th component representing the opacity (alpha) of the color.

By setting the alpha value of a fragment's color, you can make the fragment transparent. The GPU can then be told how translucent colors returned by the fragment shader should be blended to produce the final result.

Fragment Shader Code:

varying highp vec3 color;

void main() {
  gl_FragColor = vec4(color, 0.5);
}

As seen in this fragment shader code, the color of the fragment is set using whatever color value is received, and the alpha value of the color is set to 0.5. This means that the fragment is 50% opaque.

However, just setting the alpha value isn't enough. You will have to explicitly tell the graphics API you use to enable color/alpha blending.

Color blending is an operation that the GPU can perform where it blends the colors of multiple fragments into a final output color.

Alpha blending is a type of color blending where the colors from different fragments are blended together depending on the alpha component of each color.
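The most common alpha-blending rule, the "over" operator, weights the incoming (source) color by its alpha and the existing (destination) color by one minus that alpha. A minimal Python sketch of this formula (the function name and tuple representation are illustrative, not part of any graphics API):

```python
def blend_over(src, dst):
    """Blend a source color over a destination color.

    src, dst: (r, g, b, a) tuples with components in [0, 1].
    Implements: out = src * src_alpha + dst * (1 - src_alpha)
    """
    sa = src[3]
    return tuple(s * sa + d * (1.0 - sa) for s, d in zip(src, dst))

# A 50%-opaque red fragment blended over an opaque green background:
red = (1.0, 0.0, 0.0, 0.5)
green = (0.0, 1.0, 0.0, 1.0)
print(blend_over(red, green))  # (0.5, 0.5, 0.0, 0.75)
```

This is exactly the weighting that the blending configurations shown below select, with the source color scaled by its own alpha and the destination by the remainder.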

GPUs can blend colors in multiple ways, each with its own formula for how colors are mixed. While some can be used without issue, certain blending algorithms come with caveats.

For the examples we'll be showing, the WebGL color blending configuration is:

WebGL Code:

gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);

The equivalent OpenGL and DirectX color blending configurations are:

OpenGL Code:

void enableBlending() {
  glEnable(GL_BLEND);
  glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}

DirectX Code:

void enableBlending() {
  d3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
  d3dDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
  d3dDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
}

While this configuration is recommended for blending transparent colors, it has a requirement that can severely impact performance. Let's look at an example to see the limitations of the chosen color blending algorithm.

Example - Red Square and Green Square

Order of Faces:
    Depth-wise:
        Red Square (Front)
        Green Square (Back)
    Passed to GPU:
        Green Square (First)
        Red Square (Second)

Here, we're drawing two squares, one in red, and one in green. In this image, the red square is placed in front of the green square, but the green square is passed to the GPU first for rendering.

The GPU first draws the green square onto the image. Once the green square is drawn, it begins drawing the red square on top. When the red square is being drawn, the parts of the red square that overlay on top of the green square require the colors of both to be blended together.

The blending configuration that is being used, and was shown previously, tells the GPU to blend the colors in such a way that the new fragment color being added to the image should dominate over the color already present in the image.

Since the red square is drawn after the green square, its color will dominate over the green, making the color of the overlay section skew more towards red.

This is in line with our expectations, since the red square is also in front of the green square depth-wise, which should result in the common area looking red with a green hue.
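We can verify this with the over-blend formula. Assuming both squares are 50% opaque and the background is opaque black (hypothetical values chosen only for illustration), drawing green first and then red gives:

```python
def over(src, dst):
    # out = src * src_alpha + dst * (1 - src_alpha)
    sa = src[3]
    return tuple(s * sa + d * (1.0 - sa) for s, d in zip(src, dst))

background = (0.0, 0.0, 0.0, 1.0)   # opaque black
green = (0.0, 1.0, 0.0, 0.5)        # 50% opaque green, drawn first
red = (1.0, 0.0, 0.0, 0.5)          # 50% opaque red, drawn second

after_green = over(green, background)
after_red = over(red, after_green)
print(after_red)  # (0.5, 0.25, 0.0, 0.625)
```

The red channel of the overlap (0.5) ends up twice as strong as the green channel (0.25), so the last-drawn color dominates, matching the rendered result.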

However, what if we wish to draw the image with the green square in front? If we swap the vertex positions of the two squares, we get the following result.

Order of Faces:
    Depth-wise:
        Green Square (Front)
        Red Square (Back)
    Passed to GPU:
        Green Square (First)
        Red Square (Second)

This result is completely inaccurate. The color of the green square should be dominating over the red, but it is in fact the same as the first example, giving the impression that the red square is still in front.

Why is this the case? Because the blending operation has a major requirement that we have failed to satisfy with the second example.

In the first image, the green square was drawn onto the image first, and followed by the red square. The chosen blending operation operates in such a way that new objects being drawn will have their colors dominate over already existing colors in the image.

This means that the algorithm operates under the assumption that an object already drawn onto the image is present behind the object that is being drawn next.

Since the red square is being drawn second, its colors will dominate over the green due to the blending operation selected - the operation assumes that the red square is actually in front of the green square due to it being drawn second.

In order to fix this result, the order in which the squares are passed to the GPU should be sorted by the depth of the squares. Squares further away from the camera should be drawn before squares closer to the camera.
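This back-to-front ordering is the classic painter's algorithm. A minimal sketch of such a sort, where the object list, position fields, and camera position are hypothetical stand-ins for whatever scene representation your engine uses:

```python
def draw_back_to_front(objects, camera_pos, draw):
    """Sort transparent objects by distance from the camera,
    farthest first, before issuing draw calls."""
    def dist_sq(obj):
        dx = obj["pos"][0] - camera_pos[0]
        dy = obj["pos"][1] - camera_pos[1]
        dz = obj["pos"][2] - camera_pos[2]
        return dx * dx + dy * dy + dz * dz

    for obj in sorted(objects, key=dist_sq, reverse=True):
        draw(obj)

squares = [
    {"name": "green", "pos": (0.0, 0.0, -1.0)},  # closer to the camera
    {"name": "red", "pos": (0.0, 0.0, -2.0)},    # farther away
]
order = []
draw_back_to_front(squares, (0.0, 0.0, 0.0), lambda o: order.append(o["name"]))
print(order)  # ['red', 'green'] - the farther red square is drawn first
```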

Order of Faces:
    Depth-wise:
        Green Square (Front)
        Red Square (Back)
    Passed to GPU:
        Red Square (First)
        Green Square (Second)

Since the red square is now behind, passing it to the GPU first fixes the error in our result. The color of the green square now dominates over the red, as it should.

In our example, the issue was very simple to fix since we were only drawing two squares that are completely separated and can be sorted with ease.

However, in cases of objects with multiple polygons that connect or cross-over each other, each polygon needs to be sorted in the correct order.

This issue is further aggravated by a moving camera. Since all transparent polygons have to be sorted by depth relative to the camera, a moving camera requires re-sorting every polygon in view space each frame, which is expensive.
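A rough sketch of what that per-frame sort key looks like, assuming each triangle is given as three vertex positions and the view matrix is a row-major 4x4 list of lists (all hypothetical conventions for illustration):

```python
def view_space_depth(triangle, view_matrix):
    """Transform a triangle's centroid by the view matrix and return
    its view-space z, used as the per-frame sort key."""
    cx = sum(v[0] for v in triangle) / 3.0
    cy = sum(v[1] for v in triangle) / 3.0
    cz = sum(v[2] for v in triangle) / 3.0
    # Row-major multiply of (cx, cy, cz, 1) - only the z row is needed.
    m = view_matrix
    return m[2][0] * cx + m[2][1] * cy + m[2][2] * cz + m[2][3]

def sort_triangles(triangles, view_matrix):
    # Most view conventions look down -z, so more negative z means
    # farther away; ascending order draws the farthest triangles first.
    return sorted(triangles, key=lambda t: view_space_depth(t, view_matrix))

# Identity view matrix: camera at the origin looking down -z.
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
near = [(0, 0, -1), (1, 0, -1), (0, 1, -1)]
far = [(0, 0, -5), (1, 0, -5), (0, 1, -5)]
print(sort_triangles([near, far], identity) == [far, near])  # True
```

Note that this has to run every frame the camera moves, for every transparent triangle in the scene, which is where the performance cost comes from.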

As we can see, handling transparency becomes increasingly difficult as scene complexity grows. This is why transparent objects aren't abundantly found in video games, where real-time rendering is crucial for a good experience.

The transparency chapter of OpenGL Tutorial does provide possible steps that can be taken to reduce the performance impact, and also provides links on order-independent transparency techniques that can achieve the same effect, although they are not simple to implement.

There are also other possible configuration options to use that don't have such performance degrading requirements, but each blending algorithm produces a different output, so we recommend experimenting to see which configuration works best for your use-case.

Summary

  • Transparency on the shader side is simple to add, through the use of the 4th component of the color value, which represents the alpha (or opacity) of the color.
  • The GPU needs to be told how colors should be blended together to form a final color. This is required for transparency, since translucent colors need to be blended to form a resultant color.
  • While graphics APIs have multiple functions to blend colors in various ways, some of these have caveats that can cause significant performance degradation if an accurate result is required.
  • To learn more about how transparency works in various graphics APIs, check out the links below: