GPU Shader Tutorial
This tutorial is currently a work in progress. Content may be added, updated, or removed at any time.

Shader Basics - Introduction

What is a shader?

A shader is a program that is primarily used to shade an image: it generates color information while applying lighting effects to that image.

While shaders are mainly used for this purpose, they can now also be used to perform other types of computational tasks.
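As a rough mental model (not real shader code), a fragment shader is a function the GPU runs independently for every pixel, mapping that pixel's coordinates to a color. A minimal Python sketch of that idea, using a hypothetical `shade` function:

```python
# Conceptual sketch: a "shader" is a pure function run once per pixel.
# Real shaders run on the GPU in a language like GLSL; this Python loop
# only illustrates the per-pixel model.

def shade(x, y, width, height):
    """Map pixel coordinates to an (r, g, b) color, each channel 0-255."""
    r = int(255 * x / (width - 1))   # red ramps from left to right
    g = int(255 * y / (height - 1))  # green ramps from top to bottom
    return (r, g, 0)

width, height = 4, 4
image = [[shade(x, y, width, height) for x in range(width)]
         for y in range(height)]

print(image[0][0])  # top-left pixel → (0, 0, 0)
print(image[0][3])  # top-right pixel → (255, 0, 0)
```

On a real GPU, the `shade` call for every pixel runs in parallel rather than in a loop, which is where the speed comes from.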

Initially, graphics cards (GPUs) didn't support programmable shaders. Instead, GPUs had a fixed-function pipeline, where one could only invoke specific built-in units that performed fixed operations.

This fixed-function pipeline was later replaced with the programmable shading system used today.

Where are shaders used?

They can be used in a lot of areas:

  • Rendering video games.
  • Creating neat visual effects.
  • Post-processing videos.

They can be used to:

  • Change hue, saturation, brightness, and/or contrast of an image.
  • Produce effects like bloom, volumetric lighting, bokeh, and cel shading.
  • Perform edge detection and motion detection.
  • Even create cool psychedelic effects.
  • And much, much more.
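To give a flavor of one item on that list, edge detection is typically done per pixel with a convolution kernel. A Python sketch of a horizontal Sobel filter on a tiny made-up grayscale image (a shader would evaluate this same math once per pixel, in parallel, on the GPU):

```python
# Sketch of edge detection with a 3x3 horizontal Sobel kernel.
# Large absolute responses mark pixels sitting on a vertical edge.

SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def sobel_x_at(img, x, y):
    """Apply the horizontal Sobel kernel at interior pixel (x, y)."""
    total = 0
    for ky in range(3):
        for kx in range(3):
            total += SOBEL_X[ky][kx] * img[y + ky - 1][x + kx - 1]
    return total

# A 4x4 grayscale image with a vertical edge: dark (0) left, bright (255) right.
img = [[0, 0, 255, 255] for _ in range(4)]

print(sobel_x_at(img, 1, 1))  # on the edge → 1020 (strong response)
```

A uniform region (all pixels the same value) would produce a response of 0, which is why the filter highlights edges and nothing else.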

Can shaders only be used for visual effects and rendering?

No, they can be used in other areas too. If you've ever heard of machine learning, you might be interested to know that machine learning models can be trained on GPUs, and they can run on the GPU using a particular type of shader known as a compute shader.

Since a GPU is essentially a simple processor with a large number of cores that can perform complicated math very fast, machine learning systems can run thousands of scenarios and improve themselves much more quickly than they could on a CPU.

There are also cases where you don't need to render an image, but you have many parallel operations to run whose results can be stored in an image.

For example, if you needed to perform a particular calculation on 1000 input values, and the output values always fall within a specific range (like 0 to 255), then you can run the calculation on the GPU and store the results in a grayscale image that ranges from 0 (black) to 255 (white).
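Sketching that idea in Python (the per-element calculation here is a hypothetical stand-in; any function whose output fits the pixel range would do):

```python
# Sketch: run one calculation per input value and store each result as a
# grayscale intensity in the 0 (black) to 255 (white) range.
# On a GPU every element would be computed in parallel; the list
# comprehension below stands in for that.

def calculate(value):
    """Hypothetical per-element calculation whose output fits in 0-255."""
    return (value * value) % 256

inputs = list(range(1000))
pixels = [calculate(v) for v in inputs]  # one grayscale pixel per input

assert all(0 <= p <= 255 for p in pixels)
print(pixels[:5])  # → [0, 1, 4, 9, 16]
```

The resulting list can be written out as a 1000-pixel grayscale image, and reading the results back is just a matter of sampling pixel intensities.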

Sometimes this is even done on normal data just to visualize how it looks as an image, since certain data drawn out can bring out certain patterns that may not necessarily be obvious.

What do I need to know before getting started?

You will need to have a basic understanding of vector math, matrix math, and trigonometry. Any other prerequisite will have links or explanations provided before it is used.