GPU Shader Tutorial
This tutorial is currently a work in progress. Content may be added, updated, or removed at any time.

Shader Intermediates - Image Generation

While images can be generated by rendering objects passed to a GPU, an image can also be generated by drawing over the entire screen, with some logic deciding what color each pixel should be. Noise can also be added to the image to change the end result.

These generated images can be used as the final image to show to users, or can be stored as a texture for use somewhere else, such as to color the surface of a particular object.

Certain applications generate textures using shaders instead of having raw image files, in order to save space or to have a dynamic resolution for their textures depending on the system they are running on.

Images can also be passed as textures to these shaders, which can then apply filters or effects to such images, such as blurring, pixelating, changing color temperature (warm/cool), etc.

Since vertices are still required to be sent to the vertex shader, the vertices passed are:

[AsciiMath Syntax:] vertices = ((-1, -1), (-1, 1), (1, 1), (1, -1))

These four coordinates map to the vertices of the frame, so the vertex shader returns the vertex value as it receives it. No modifications are done to the received vertex since the passed vertices are already in the form of clip-space coordinates.

Vertex Shader Code:

attribute vec4 vertexPosition;

void main() {
  gl_Position = vertexPosition;
}

Example - A coordinate gradient


How it works

Fragment Shader Code:

uniform highp vec2 resolution;

void main() {
  highp vec2 coordinates = gl_FragCoord.xy / resolution;

  highp float fragmentColor = coordinates.x * coordinates.y;
  gl_FragColor = vec4(vec3(fragmentColor), 1.0);
}

This fragment shader generates a gradient that's darkest at one corner and brightest at the opposite corner. The darkest corner will always be at the origin, since that is the point where the coordinates of the pixel will be 0 (or at least nearest to 0).

In GLSL, the coordinates of the current fragment can be accessed through the built-in variable gl_FragCoord. It contains the position of the fragment as the X and Y values, and the depth of the fragment as the Z value.

In cases where a single fragment is generated per pixel, the position of this fragment will be the coordinates of the center of the pixel the fragment belongs to.

This means that for a fragment belonging to the lower-left-most pixel, the coordinates of the fragment are [AsciiMath Syntax:] (0.5, 0.5), while the location of the pixel itself is [AsciiMath Syntax:] (0, 0).

The coordinates of the fragments range from [AsciiMath Syntax:] (0, 0) to the width and height of the frame. In order to normalize these coordinates down to a range from 0 - 1, we can receive the total resolution of the frame from outside the shader (the shader by default has no way of knowing the resolution of the frame).

The fragment coordinates can then be divided by the resolution to get the normalized coordinates of the fragment within the range from 0 - 1, which can make calculations simpler since we now work within a constant range.

The color of the fragment is simply set as the product of the normalized X and Y coordinates of the fragment, and in grayscale (since R, G, and B components are all set to the same value).

From this render, we can see that the origin of the X and Y axis is at the lower-left corner. The value of X increases as you move right, and the value of Y increases as you go up.
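The full gradient calculation can also be sketched outside the shader. Here is a plain-JavaScript version of the same logic; the resolution and the sample fragment coordinates are assumed values used only for illustration:

```javascript
// CPU-side sketch of the coordinate gradient fragment shader.
const resolution = { x: 800, y: 600 }; // assumed frame resolution

function gradientColor(fragX, fragY) {
  // Normalize the fragment coordinates down to the 0 - 1 range.
  const x = fragX / resolution.x;
  const y = fragY / resolution.y;
  // The grayscale color is the product of the normalized coordinates.
  return x * y;
}

console.log(gradientColor(0.5, 0.5)); // lower-left fragment: nearly 0 (darkest)
console.log(gradientColor(799.5, 599.5)); // upper-right fragment: nearly 1 (brightest)
```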

Let's now look at the next example, where we tile the image.

Example - A tiled coordinate gradient


How it works

Tiling an image is pretty simple. The image needs to be split into blocks, and then each block can have operations performed on it.

As the fragment shader can only work on individual fragments, two values need to be calculated for each of them:

  • The tile the fragment belongs to
  • The position of the fragment within that tile (in the form of normalized coordinates).

Once these values are known, applying operations on each tile (or block) is simple.

In order to understand how the math would work, let's look at the current image that we have.

The current image we have is made up of pixels. The position of these pixels can be easily determined. Taking an image of resolution [AsciiMath Syntax:] (99, 99), if we consider the screen as a graph and the lower-left corner pixel as the origin, then the center pixel would be located at [AsciiMath Syntax:] (49, 49).

Similarly, the coordinates of the top-right corner would be [AsciiMath Syntax:] (98, 98), the coordinates of the top-left corner would be [AsciiMath Syntax:] (0, 98), the coordinates of the bottom-right corner would be [AsciiMath Syntax:] (98, 0), and the coordinates of the bottom-left corner would be [AsciiMath Syntax:] (0, 0) (since that's the origin).

We know that pixels contain multiple fragments. For this current situation, let's consider each pixel only containing one fragment, with its position at the center of the pixel.

Our previous explanation assumes that pixels are point sized (they have no area). However, in reality, they do occupy a region of space, just like tiles.

This means that the coordinates of the pixels don't exactly represent the position of that pixel, but instead a particular part of that pixel. As is with graphs and plotting, the coordinates of the pixel represents where the lower-left corner of the pixel is positioned.

Since fragments belong to a particular pixel, their position is generally determined relative to each pixel.

Since the coordinates of each pixel in an image are represented as integers, the coordinates of the fragments within the pixel can be represented as the decimal values of the integer coordinates.

For example, with the center pixel in our previous example, the coordinates [AsciiMath Syntax:] (49, 49) now represent the lower-left corner of that pixel. As the position of the fragment lies at the center of the pixel, its coordinates would become:

[AsciiMath Syntax:] "fragment coordinates" = (49 + 0.5, 49 + 0.5) = (49.5, 49.5)

Note: Since the coordinates of two consecutive pixels will have a difference of 1, a fragment within a pixel has to have coordinates within the range of 0 to 1 (excluding 1, including 0). As the fragment in our example is at the center, its coordinates would be [AsciiMath Syntax:] 50% "of 1" = 0.5.

Here we notice two interesting properties:

  • The integral part of the coordinates represents the pixel position.
  • The decimal part of the coordinates represents the fragment position within that pixel. They are also considered as normalized coordinates of the fragment w.r.t that pixel (we'll look into that later).
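These two properties can be checked with a small sketch (plain JavaScript), using the [AsciiMath Syntax:] (49.5, 49.5) fragment coordinates from the example:

```javascript
// Split a coordinate into the pixel it belongs to and the fragment's
// position within that pixel.
function splitCoordinate(value) {
  const pixel = Math.floor(value); // integral part: the pixel position
  const withinPixel = value - pixel; // decimal part: position inside the pixel
  return { pixel, withinPixel };
}

console.log(splitCoordinate(49.5)); // { pixel: 49, withinPixel: 0.5 }
```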

An important note about these coordinates is that they are dependent on the resolution of the image, which is [AsciiMath Syntax:] (99, 99). This is evident through a simple check.

We saw that the coordinates of the center pixel of the image is [AsciiMath Syntax:] (49, 49). However, the coordinates are relative to the size of the image itself. If the size of the image changes, the coordinates of the center pixel would also change.

These coordinates need to be converted into a format such that they are no longer dependent on the resolution of the image. By doing so, the coordinates become "normalized", and are independent of any resolution.

The value of the components of normalized coordinates is always within the range of 0 to 1 (excluding 1, including 0).

This can be done through a simple division operation of the pixel coordinates and the resolution of the image. The calculation is:

[AsciiMath Syntax:] text(normalized coordinates) = (text(pixel coordinates)_x / text(image resolution)_x, text(pixel coordinates)_y / text(image resolution)_y)

Once the normalized coordinates of any pixel are known, the coordinates of a pixel relative to any screen resolution can be determined through multiplication:

[AsciiMath Syntax:] text(new coordinates) = (text(normalized coordinates)_x times text(new resolution)_x, text(normalized coordinates)_y times text(new resolution)_y)
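As a quick sketch of both calculations (plain JavaScript; the [AsciiMath Syntax:] (198, 198) target resolution is an assumed value):

```javascript
// Normalize a pixel coordinate, then re-map it onto another resolution.
function normalize(coord, resolution) {
  return [coord[0] / resolution[0], coord[1] / resolution[1]];
}

function remap(normalized, newResolution) {
  return [normalized[0] * newResolution[0], normalized[1] * newResolution[1]];
}

// The center pixel of a (99, 99) image...
const normalized = normalize([49, 49], [99, 99]);
// ...lands on the center pixel of a (198, 198) image.
console.log(remap(normalized, [198, 198])); // ≈ [98, 98]
```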

Now consider the pixels as tiles. Tiles have an area as well, similar to pixels, as we've discussed. So this allows us to map the concepts discussed so far onto tiles.

Tiles also contain a group of points, similar to the concept of pixels, where a fragment is considered a point, and a pixel is considered a group of fragments (in this case, only one fragment, at the center).

By normalizing the coordinates of a fragment, all the fragments from the original image are grouped into an image that is one pixel in size.

This is evident by looking at the value of the components of the normalized coordinates. These coordinates will always have the integral component set to 0, which means that they all belong to the same pixel. It is the decimal component that determines where they are located within that pixel.

If we took an image of resolution [AsciiMath Syntax:] (100, 100) with one fragment per pixel, when the fragment coordinates are normalized, the fragments are now simply mapped onto an image of resolution [AsciiMath Syntax:] (1, 1).

This means the pixel in this "new image" will contain all the fragments of the original image, which is [AsciiMath Syntax:] 100 times 100 = 10000. The location of all these fragments within that pixel are determined from the decimal part of the values of the coordinates.

By multiplying the normalized coordinates against another resolution, these fragments are again spread across all the pixels in that resolution. This can result in a change in the number of fragments per pixel, depending upon the new resolution.

This is also evident from the resultant coordinates. The integral part of the coordinates will indicate which pixel the fragment belongs to, and the decimal part indicates where they are located within that pixel.

If the fragments from the example are now spread across an image of resolution [AsciiMath Syntax:] (10, 10), then each pixel in the new image will contain [AsciiMath Syntax:] 10000 / (10 times 10) = 10000 / 100 = 100 fragments.

A fragment present within pixel of coordinates [AsciiMath Syntax:] (5, 5) will have coordinates [AsciiMath Syntax:] ("5.x", "5.y"), where [AsciiMath Syntax:] "x" and [AsciiMath Syntax:] "y" are the decimal parts that tell the location of the fragment within that pixel.

We initially noted that the decimal part of the coordinates are the normalized coordinates of the fragment. The reason for this is simple.

We've just discussed how with normalized coordinates, the integral part is always constant, with the decimal part noting the actual location.

Similarly, with the fragment coordinates, since the decimal part determines where within a specific pixel the fragment is located, it can be considered as the normalized coordinates relative to the pixel the fragment belongs to.

The concept of tiling uses these principles, with a tile representing a pixel. When the coordinates of a fragment are normalized, all fragments are grouped into a single tile.

By multiplying these normalized coordinates with the "tiling resolution" (the number of tiles across the screen, similar to screen resolution and pixels), the fragment is then placed into the tile that it should belong to.

Using the final coordinates of the fragment (which we'll refer to as the tiling coordinates), we can:

  • Determine the tile the fragment is present in by looking at the integral part of its tiling coordinates (which we'll refer to as tile coordinates).
  • Determine its location within a tile (normalized coordinates w.r.t the tile) by looking at the decimal part of its tiling coordinates (which we'll refer to as tile normalized coordinates).
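Both values can be sketched on the CPU as well (plain JavaScript; the tiling resolution of (12, 9) matches the shader that follows):

```javascript
const tilingResolution = [12, 9];

// Compute the tile coordinates and tile normalized coordinates of a
// fragment from its normalized coordinates.
function tileInfo(normalizedCoords) {
  const scaled = [
    normalizedCoords[0] * tilingResolution[0],
    normalizedCoords[1] * tilingResolution[1],
  ];
  return {
    // Integral part: which tile the fragment belongs to.
    tileCoordinates: scaled.map(Math.floor),
    // Decimal part: where the fragment sits within that tile.
    tileNormalizedCoordinates: scaled.map((v) => v - Math.floor(v)),
  };
}

console.log(tileInfo([0.5, 0.5]));
// { tileCoordinates: [6, 4], tileNormalizedCoordinates: [0, 0.5] }
```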

Fragment Shader Code:

uniform highp vec2 resolution;

void main() {
  highp vec2 tilingResolution = vec2(12.0, 9.0);
  highp vec2 coordinates = gl_FragCoord.xy / resolution;

  highp vec2 blockCoordinates = fract(coordinates * tilingResolution);

  highp float fragmentColor = blockCoordinates.x * blockCoordinates.y;
  gl_FragColor = vec4(vec3(fragmentColor), 1.0);
}

Looking at our code, we can see the application of the discussed concepts. We set a tiling resolution of [AsciiMath Syntax:] (12.0, 9.0). The normalized coordinates of the fragment are then calculated.

The tiling coordinates of the fragment w.r.t. the tiling resolution are then calculated through the multiplication operation.

However, for the color of the fragment, we only care about its tile normalized coordinates, so we grab the decimal component of the resultant coordinates.

In GLSL, this can be done using the built-in function fract.

Note: In the shader code, any values w.r.t. a parent tile will be prefixed with block. Any values w.r.t. all tiles will be prefixed with tile. This is done to differentiate between them.

The tile normalized coordinates are then used to calculate the color of the fragment, just like in the first example.

Next, let's draw something else within the tiles.

Pattern Example - A tiled pattern with glowing center


How it works

Now equipped with the information on how to tile an image, and get the necessary values required to work within a tile, we can start drawing within tiles.

The algorithm for drawing a glowing circle that dims as you move towards the edge is simple:

  • Determine the tile normalized coordinates for the center of the circle.
  • Calculate the distance of the current fragment's tile normalized coordinates from the center.
  • The distance from the center determines the brightness of the fragment. The closer a fragment is to the center, the brighter it will be.

Fragment Shader Code:

uniform highp vec2 resolution;

void main() {
  highp vec2 tilingResolution = vec2(12.0, 9.0);
  highp vec2 coordinates = gl_FragCoord.xy / resolution;

  highp vec2 blockCenter = vec2(0.5, 0.5);
  highp vec2 blockCoordinates = fract(coordinates * tilingResolution);

  highp float distanceFromCenter = distance(blockCenter, blockCoordinates);
  highp float centerFactor = 1.0 - distanceFromCenter;

  highp float fragmentColor = pow(centerFactor, 3.0);
  gl_FragColor = vec4(vec3(fragmentColor), 1.0);
}

First, we need to determine the tile normalized coordinates of the center of a tile. The tile normalized coordinates of the center would be [AsciiMath Syntax:] (0.5, 0.5), since the center is located at the 50% width and height mark of a tile.

We can then calculate the distance between the fragment and the center of the tiles using their normalized coordinates. In GLSL, a built-in function exists that can provide this value, called distance.

Since we wish for fragments closer to the center of the tile to be brighter, we calculate the factor for the brightness as 1.0 (which is the maximum possible brightness) minus the distance of the fragment from the center.

The result of this factor is stored in a variable called centerFactor, since this brightness factor is based around the center of the tile.

This brightness factor can now be set as the grayscale color of the fragment. This will result in fragments further away from the center of the tile becoming darker.

One thing to note is that the brightness factor is raised to the power of 3 (using the GLSL function pow), and the result is then set as the grayscale color of the fragment.

There are two reasons this was done, which are also interlinked with each other:

  • It increases the range of brightness values that are possible for the fragments.
  • It increases the dropoff in brightness exponentially with distance.

Without using pow, the possible range for the brightness values of fragments would be 1.0 - 0.3.

  • Points at the center would have a distance of 0, their brightness would be 1.0.
  • Points at the edges of a tile would have a maximum distance of 0.7 (the distance of the corners from the center), making their brightness factor go down to 0.3.

By raising the factor to the power of 3, this range increases dramatically, to 1.0 - 0.027, since the lowest possible brightness gets cubed.

This also means that the dropoff in brightness increases exponentially with distance from the center. Since the edges are now darker, the fragments in between will drop off in brightness a lot more to maintain the transition and range.
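The effect on the range can be verified with a quick sketch (plain JavaScript; 0.7 is the approximate corner distance mentioned above):

```javascript
// Brightness factor for a fragment, with an optional pow exaggeration.
function brightness(distanceFromCenter, exponent) {
  return Math.pow(1.0 - distanceFromCenter, exponent);
}

console.log(brightness(0.0, 1.0)); // 1.0 at the center, with or without pow
console.log(brightness(0.7, 1.0)); // ~0.3 at a corner without pow
console.log(brightness(0.7, 3.0)); // ~0.027 at a corner when cubed
```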

Use this trick to your advantage if you need to exaggerate color values of your fragments.

Pattern Example - A tiled pattern with glowing diagonals


How it works

Drawing a pattern of diagonal lines across a tile may seem complex but, looking at the mathematics, we'll see that it's much simpler than it appears to be.

In this pattern, we wish for fragments to grow dimmer the further they are from either diagonal line. This requires calculating the distance of the fragment from either diagonal.

In our image, we are dealing with square tiles, so all our calculations will be respective to that particular shape. For non-square tiles, the calculations may differ slightly, but the same requirement exists.

Let's take the center of the tile as the origin of the graph. The tile is a square, and the tile normalized coordinates of the center of the tile is [AsciiMath Syntax:] (0.5, 0.5). This results in the boundaries of the tile in our graph being 0.5 units away from the origin.

The plot for this would be:

Tile Graph Plot

Image Generation Source

Adding the diagonals of the tile to the graph would result in the plot:

Tile With Diagonals Graph Plot

Image Generation Source

Here, we can notice a peculiar property of diagonals - the absolute values of the X and Y coordinates of points on the diagonal are always equal. This can be easily verified from the plot generated above.

So in order to determine if a point is on a diagonal line, we just need to find the difference between the absolute values of its X and Y coordinates. If it's zero, then we know it is on the diagonal.

This subtraction also provides us with another piece of information - an approximate distance from the closest diagonal line.

Any point that has a difference greater than zero between the absolute values of its X and Y coordinates is known to lie outside both diagonals. This difference provides an approximation of how far away the point is.

It will, in many cases, not be equal to the shortest distance between the point and the closest diagonal line, but for our purposes it is a good enough approximation.
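A quick sketch of this approximation (plain JavaScript; coordinates are measured from the center of the tile, as in the plot above):

```javascript
// Approximate distance of a point from the closest diagonal: the
// difference between the absolute values of its X and Y coordinates.
function distanceFromDiagonals(x, y) {
  return Math.abs(Math.abs(x) - Math.abs(y));
}

console.log(distanceFromDiagonals(0.25, 0.25)); // 0 - on a diagonal
console.log(distanceFromDiagonals(-0.3, 0.3)); // 0 - on the other diagonal
console.log(distanceFromDiagonals(0.5, 0.0)); // 0.5 - on an edge, farthest away
```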

Now that we can calculate the distance of a fragment from the closest diagonal line, we can apply the same principles we learnt from the previous shader example.

Fragment Shader Code:

uniform highp vec2 resolution;

void main() {
  highp vec2 tilingResolution = vec2(12.0, 9.0);
  highp vec2 coordinates = gl_FragCoord.xy / resolution;

  highp vec2 blockCenter = vec2(0.5, 0.5);
  highp vec2 blockCoordinates = fract(coordinates * tilingResolution);

  highp vec2 coordFromCenter = abs(blockCenter - blockCoordinates);
  highp float distanceFromDiagonals = abs(coordFromCenter.x - coordFromCenter.y);
  highp float diagonalFactor = 1.0 - distanceFromDiagonals;

  highp float fragmentColor = pow(diagonalFactor, 3.0);
  gl_FragColor = vec4(vec3(fragmentColor), 1.0);
}

The first few lines in the shader are similar to the previous shader example. After that, we first determine the coordinates of the fragment relative to the center of the tile.

This is done through simple subtraction of the tile normalized coordinates of the center of the tile from the tile normalized coordinates of the fragment, thanks to the properties of vector mathematics.

Since we don't care about the signs of the X and Y coordinates of the fragment, just their absolute values, we remove the signs from the result. In GLSL, the built-in function abs achieves this operation.

Next we determine the distance of the point from the diagonal through the subtraction of the X and Y coordinates of the fragment. Only the absolute value of this result matters, hence the use of the abs function again.

Since the smaller this distance is, the closer the fragment is to the diagonal, this value is subtracted from 1.0 so that points closer to the diagonal are brighter (similar to the previous shader example).

This result is stored in a variable called diagonalFactor, since this brightness factor is based on the distance from the diagonal.

From this point onwards, it is again similar to the previous shader - using the brightness value to set the grayscale color of the fragment, and exaggerating the color difference using pow.

Any pattern drawing that depends on tiling works on the same basics as the two patterns shown - splitting the image into tiles, and then operating within a single tile using the tile normalized coordinates.

A pattern may not necessarily be created through a completely unique set of rules. Multiple patterns can be combined to form new patterns that produce interesting effects.

Let's look at an example where we combine our previous two patterns into one and see the results.

Pattern Example - A tiled combination pattern


How it works

Fragment Shader Code:

uniform highp vec2 resolution;

void main() {
  highp vec2 tilingResolution = vec2(12.0, 9.0);
  highp vec2 coordinates = gl_FragCoord.xy / resolution;

  highp vec2 blockCenter = vec2(0.5, 0.5);
  highp vec2 blockCoordinates = fract(coordinates * tilingResolution);

  highp float distanceFromCenter = distance(blockCenter, blockCoordinates);
  highp float centerFactor = 1.0 - distanceFromCenter;

  highp vec2 coordFromCenter = abs(blockCenter - blockCoordinates);
  highp float distanceFromDiagonals = abs(coordFromCenter.x - coordFromCenter.y);
  highp float diagonalFactor = 1.0 - distanceFromDiagonals;

  highp float fragmentColor = pow(diagonalFactor * centerFactor, 3.0);
  gl_FragColor = vec4(vec3(fragmentColor), 1.0);
}

In this example, we can see that we perform the calculations for the center brightness factor and diagonal brightness factor exactly as shown previously.

The results of these two factors are combined (or merged) through a multiplication operation. This results in the image that you see above.

Looking at the image, you can see the effects of both factors:

  • The diagonal lines are visible, but are sharpened at the edges due to the effect of the center brightness factor.
  • The brightness decreases closer to the edges of the tiles, but it produces a more "square-ish" shape than a circular one due to the diagonal brightness factor.

Here we see the effects of both the diagonal brightness factor and the center brightness factor, with neither dominating the other. The reason for this is the multiplication operation.

A question that might be raised is why addition wasn't used instead. Addition is an operation that basically stacks such effects, but doesn't "combine" them into one.

Here's an analogy to explain the effects of addition and multiplication:

  • Addition is like taking a cake, applying a layer of frosting over it, and then adding a layer of chocolate sprinkles on top.
    • The frosting and sprinkles are in visibly separate layers, with the texture properties of both present only in their respective layers.
    • The frosting and sprinkles can be separated with ease.
    • If enough sprinkles are spread evenly enough across the frosting, it can obscure a majority of the frosting from the view.
  • Multiplication is like taking a bowl, putting in the frosting and chocolate sprinkles, and mixing it together into one mixture, and then layering this mixture on top of the cake.
    • The mixture combines the texture of the frosting and the sprinkles.
    • The frosting and sprinkles are relatively much harder to separate.
    • A relatively much larger ratio of sprinkles vs frosting is required to obscure the frosting.

By performing an addition operation between the two factors, one factor could completely dominate over the other in certain fragments, and appear to stack over each other instead.

By contrast, performing a multiplication operation "melds" the two factors together into a result where being able to separate the individual effects is harder to do visually.

In these types of cases, an addition operation stacks effects and factors, whereas a multiplication operation mixes effects and factors into one.

The effects of multiplication and addition for combining and stacking effects will be more apparent in the lighting and dithering chapter.

So far, we've looked into images generated through the use of patterns, specifically tiling and patterns within tiles. Next, let's look at how images can be generated using randomness and noise.

Randomness Example - Random noise


How it works

Fragment Shader Code:

uniform highp vec2 resolution;

highp float random(highp vec2 coords) {
   return fract(sin(dot(coords.xy, vec2(12.9898,78.233))) * 43758.5453);
}

void main() {
  highp vec2 coordinates = gl_FragCoord.xy / resolution;

  highp float fragmentColor = random(coordinates);
  gl_FragColor = vec4(vec3(fragmentColor), 1.0);
}

In this example, for each fragment a random number is generated within the range of 0 - 1 (excluding 1, including 0). For this, a user-defined random function was used to create a number from given 2D coordinates.

The function is provided with the normalized coordinates of the fragment as a parameter, and the random number returned is used as the grayscale color value for the fragment.

The random function used here will always generate the same random number given the same input, which allows for consistency.
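To see this determinism, here is a plain-JavaScript port of the same hash. Note that sin on the CPU is not bit-identical to sin on a GPU, so the exact values will differ from the shader's; the point is only that equal inputs give equal outputs:

```javascript
// fract returns the decimal part of a number, like GLSL's fract.
function fract(x) {
  return x - Math.floor(x);
}

// Port of the shader's hash: dot the coordinates with a fixed vector,
// take the sine, scale it up, and keep only the fractional part.
function random(x, y) {
  return fract(Math.sin(x * 12.9898 + y * 78.233) * 43758.5453);
}

const a = random(0.5, 0.5);
const b = random(0.5, 0.5);
console.log(a === b); // true - the same input always gives the same output
```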

However, if it is required to always be unique, modifying the passed 2D coordinates with the current time can be done to make sure a new random value is generated for every render.

Note: The random function code was taken from this GitHub Wiki page.

Randomness Example - A tiled pattern of random (grayscale) colors


How it works

Fragment Shader Code:

uniform highp vec2 resolution;

highp float random(highp vec2 coords) {
   return fract(sin(dot(coords.xy, vec2(12.9898,78.233))) * 43758.5453);
}

void main() {
  highp vec2 tilingResolution = vec2(12.0, 9.0);
  highp vec2 coordinates = gl_FragCoord.xy / resolution;

  highp vec2 tileCoord = floor(coordinates * tilingResolution);

  highp float fragmentColor = random(tileCoord / tilingResolution);
  gl_FragColor = vec4(vec3(fragmentColor), 1.0);
}

For setting up tiles of random grayscale colors, we first set up the tiles as learned in the previous pattern examples, but make certain changes.

In the pattern examples, we were only concerned with working within the block itself. So, when we calculated the tiling coordinates of the fragment (coordinates * tilingResolution), we ignored the integer component and only kept the decimal component using the function fract.

However, in the current situation, we will use the tile coordinates of the fragment to determine what color the tile will be. This requires taking the integer part of the fragment's tiling coordinates, with the fractional part ignored.

In GLSL, the built-in function floor (which rounds a number down to the nearest integer at or below it) removes the decimal component of the coordinates.

The resultant coordinates are the tile coordinates of the fragment. Next, these tile coordinates need to be converted into normalized coordinates, since their value is currently dependent on the tiling resolution.

By dividing the tile coordinates by the tiling resolution, we get back the normalized coordinates of the tile itself (to be specific, the lower-left corner of the tile).

These normalized coordinates are then passed to the random function, whose returned value is then used as the grayscale color value of the tile.

All fragments within a tile will have the same tile coordinates as they all belong to the same tile. This means that they will generate the same random number, ensuring that the entire tile is the same color.
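This can be sketched on the CPU (plain JavaScript; the two sample fragments are assumed values that fall inside the same tile):

```javascript
function fract(x) {
  return x - Math.floor(x);
}

// Same hash as in the shader.
function random(x, y) {
  return fract(Math.sin(x * 12.9898 + y * 78.233) * 43758.5453);
}

const tilingResolution = [12, 9];

// floor collapses every fragment in a tile to the same tile coordinates,
// so they all hash to the same grayscale color.
function tileColor(normX, normY) {
  const tileX = Math.floor(normX * tilingResolution[0]);
  const tileY = Math.floor(normY * tilingResolution[1]);
  return random(tileX / tilingResolution[0], tileY / tilingResolution[1]);
}

// Two different fragments inside the same tile get the same color:
console.log(tileColor(0.51, 0.52) === tileColor(0.53, 0.54)); // true
```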

Combined Example - A tiled pattern with random centers


How it works

Fragment Shader Code:

uniform highp vec2 resolution;

highp float random(highp vec2 coords) {
   return fract(sin(dot(coords.xy, vec2(12.9898,78.233))) * 43758.5453);
}

void main() {
  highp vec2 tilingResolution = vec2(12.0, 9.0);
  highp vec2 coordinates = gl_FragCoord.xy / resolution;

  highp vec2 blockCoordinates = fract(coordinates * tilingResolution);
  highp vec2 tileCoord = floor(coordinates * tilingResolution);

  highp vec2 randomizedBlockCenter = vec2(mix(0.1, 0.9, random(tileCoord / tilingResolution)));

  highp float distanceFromCenter = distance(randomizedBlockCenter, blockCoordinates);
  highp float centerFactor = 1.0 - distanceFromCenter;

  highp float fragmentColor = pow(centerFactor, 2.0);
  gl_FragColor = vec4(vec3(fragmentColor), 1.0);
}

In this example, the concepts of the glowing circle pattern example are combined with the concepts of the random tile example to randomize the position of the center of the circles.

Instead of taking the standard tile normalized coordinates of the center of the block, [AsciiMath Syntax:] (0.5, 0.5), we generate the tile normalized coordinates of the center randomly.

We've seen in the previous example how to generate a random number for each tile. We then use this number to select a value within a range of numbers.

In this example, the center will only be placed on the diagonal of the tile that goes from the lower-left corner to the top-right corner, with the range of its X and Y coordinates limited to 0.1 to 0.9. A value is then selected within this range using the random number for the tile.

To select a value within the set range using the random number as a factor, we can use a method called linear interpolation. But first, let's see what linear interpolation is.

Let's consider the range 0.0 to 1.0. We need to get a number within this range using a certain factor provided to us. From this factor, we figure out how far into the range the number is present, and return the number at that point.

So if the value of the provided factor is 0.3 (which is 30% in terms of percentages), the factor tells us that we should move 30% across the range of 0.0 - 1.0, starting at 0.0, and the number at this 30% mark should be returned.

The number that is present 30% away from 0.0 and 70% away from 1.0 would be 0.3. Hence, the result of this calculation will be 0.3.

This method of generating a value by travelling through a certain range of numbers by a certain factor, and returning the value at that point in the range, is called linear interpolation.

The formula for this calculation is simple - given a range [AsciiMath Syntax:] X - Y and a factor [AsciiMath Syntax:] F that is within the range [AsciiMath Syntax:] 0.0 - 1.0, the formula is:

[AsciiMath Syntax:] (X times (1 - F)) + (Y times F)
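A minimal sketch of this formula (plain JavaScript, mirroring the behavior of GLSL's mix):

```javascript
// Linear interpolation: travel F of the way from X to Y.
function mix(x, y, f) {
  return x * (1 - f) + y * f;
}

console.log(mix(0.0, 1.0, 0.3)); // 0.3 - 30% across the 0.0 - 1.0 range
console.log(mix(0.1, 0.9, 0.5)); // 0.5 - the midpoint of the 0.1 - 0.9 range
```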

In GLSL, a built-in function mix can perform this calculation, which we use to calculate the value of the tile normalized coordinates for our random center, with the X and Y components being the same.

Once the random center is calculated, the rest of the process is similar to the original glowing circle pattern example.

In this example, we modified the position of the center of the circle, thereby displacing it from the center within the tile.

In the next example, let's look at how the random factor can be used to make "decisions" on what should and should not be shown.

Combined Example - A tiled pattern with random diagonals

Cannot run WebGL examples (not supported)

How it works

In this example, either diagonal needs to be shown based on the random factor of the tile. First, let's take another look at the tile-with-diagonals plot.

Tile With Diagonals Graph Plot

Image Generation Source

Let's name the diagonal starting from the lower-left corner and ending at the upper-right corner diagonal 1, and name the other diagonal diagonal 2.

For the initial diagonal example, we were ignoring the sign of the values of the coordinates of the diagonal points, which simplified calculations to see if a point was on either diagonal, or how far away it was from either diagonal.

However, in the current example, we need to separately calculate the distance from diagonal 1 and diagonal 2 and, depending on which diagonal is to be shown, use either result.

For diagonal 1, both X and Y components of points on the diagonal increase together. By finding the difference between the X and Y components, we find the distance from diagonal 1, which is similar to the original diagonal pattern example.

For diagonal 2, the X and Y components of points on the diagonal progress in opposite directions. The X component increases and the Y component decreases as you go from left to right, with the X component starting at its lowest value and the Y component starting at its highest value.

This also means that the signs of the X and Y components of the points on the diagonal are also always opposite to each other. Looking at the points on diagonal 2, this becomes apparent.

The top-left corner of the square, which is one end of diagonal 2, has coordinates [AsciiMath Syntax:] (-0.5, 0.5). Similarly, the other end of the diagonal, at the bottom-right corner, has coordinates [AsciiMath Syntax:] (0.5, -0.5).

This difference compared to diagonal 1 means that the distance from diagonal 2 has to be calculated using an addition operation instead of a subtraction operation.

This change accounts for the opposite signs of the components and ensures that the result of the calculation doesn't differ from what's expected.
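The two distance calculations can be sketched in Python as follows (a rough sketch of the math only; x and y are coordinates measured from the tile center, as in the shader):

```python
def distance_from_diagonal1(x, y):
    # Points on diagonal 1 (lower-left to top-right) satisfy x == y,
    # so the difference measures how far off the diagonal we are.
    return abs(x - y)

def distance_from_diagonal2(x, y):
    # Points on diagonal 2 (top-left to bottom-right) satisfy x == -y,
    # so the sum measures the distance instead.
    return abs(x + y)

# The end of diagonal 2 at (-0.5, 0.5) lies on diagonal 2,
# and is as far from diagonal 1 as possible:
print(distance_from_diagonal2(-0.5, 0.5))  # 0.0
print(distance_from_diagonal1(-0.5, 0.5))  # 1.0
```

Note that, as in the original diagonal pattern example, this measures the coordinate difference rather than the true perpendicular distance, which is all the effect needs.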

Once the distances of the fragment from both diagonals are calculated, we can use a decision factor to determine which one gets applied.

Fragment Shader Code:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
uniform highp vec2 resolution;

highp float random(highp vec2 coords) {
  return fract(sin(dot(coords.xy, vec2(12.9898, 78.233))) * 43758.5453);
}

highp float invert_step(highp float edge, highp float x) {
  return 1.0 - step(edge, x);
}

void main() {
  highp vec2 tilingResolution = vec2(12.0, 9.0);
  highp vec2 coordinates = gl_FragCoord.xy / resolution;

  highp vec2 blockCenter = vec2(0.5, 0.5);
  highp vec2 blockCoordinates = fract(coordinates * tilingResolution);
  highp vec2 tileCoord = floor(coordinates * tilingResolution);

  highp vec2 coordFromCenter = blockCenter - blockCoordinates;
  highp float distanceFromDiagonal1 = abs(coordFromCenter.x - coordFromCenter.y);
  highp float distanceFromDiagonal2 = abs(coordFromCenter.x + coordFromCenter.y);

  highp float diagonal1Color = (1.0 - distanceFromDiagonal1);
  highp float diagonal2Color = (1.0 - distanceFromDiagonal2);

  highp float randomFactor = random(tileCoord / tilingResolution);

  highp float diagonal1Factor = step(0.5, randomFactor) * diagonal1Color;
  highp float diagonal2Factor = invert_step(0.5, randomFactor) * diagonal2Color;
  // highp float diagonal1Factor = step(0.4, randomFactor) * diagonal1Color;
  // highp float diagonal2Factor = invert_step(0.6, randomFactor) * diagonal2Color;
  highp float diagonalFactor = max(diagonal1Factor, diagonal2Factor);

  highp float fragmentColor = pow(diagonalFactor, 3.0);
  gl_FragColor = vec4(vec3(fragmentColor), 1.0);
}

Looking at the code, you can see the principles we've discussed so far applied. For calculating the distance from diagonal 1, we subtract the X and Y components of the fragment coordinates, same as our initial diagonal pattern example.

For diagonal 2, we perform an addition operation instead of a subtraction, since the X and Y components of points on it have opposite signs.

Once these two are calculated, we then calculate what the color of the fragment would be respective to each diagonal, which is similar to the original diagonal pattern example. After this, we calculate the random factor for the tile, and then decide which diagonal to show.

Let's define a threshold of 0.5 on the random factor. If the random factor is below this threshold, then we'll show only diagonal 2. However, if this is not the case, then diagonal 1 will be shown instead.

In GLSL, a built-in function called step allows us to achieve this objective. It is provided with the threshold (edge) and the value to check as parameters. If the value is below the threshold, it returns 0.0; otherwise it returns 1.0.

By multiplying this result with the appropriate diagonal color, we cancel out the color value of the diagonal that shouldn't be shown, and keep only the color value of the diagonal that should be shown.

For diagonal 1, it should only be shown if the random factor exceeds the threshold of 0.5. So its final factor is calculated by checking whether the random factor exceeds the threshold using step, and multiplying that result with the color value w.r.t. diagonal 1.

If the random factor is less than 0.5, step will return 0, meaning that the effective color factor of diagonal 1 is 0.

If the factor is above 0.5, step will return 1, which means the effective color factor for diagonal 1 will be the calculated color value for it.

Similar steps are followed for calculating the effective color factor for diagonal 2, except we use the inverse of the value returned by step (1.0 - step(...)) with the same threshold, using the user-defined function invert_step.

Since the factors for both diagonals use the same threshold value, only one of the diagonals will have a non-zero factor: if step returns 1, then invert_step will return 0, and vice-versa.
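This mutual exclusivity can be checked with a small Python sketch of step and invert_step (again only mirroring the GLSL math, not shader code):

```python
def step(edge, x):
    # Mirrors GLSL's step: 0.0 below the edge, 1.0 at or above it.
    return 0.0 if x < edge else 1.0

def invert_step(edge, x):
    # Mirrors the user-defined invert_step from the shader.
    return 1.0 - step(edge, x)

# With the same threshold, exactly one of the two factors is non-zero
# for any random factor:
for random_factor in (0.2, 0.5, 0.8):
    d1 = step(0.5, random_factor)
    d2 = invert_step(0.5, random_factor)
    assert d1 + d2 == 1.0 and d1 * d2 == 0.0
```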

The final diagonal factor of the fragment is chosen as the maximum value between the effective factor of diagonal 1 and diagonal 2, since one of them will be 0, and the other will be greater than 0. In GLSL, this can be done using the built-in function max.

The final color value of the fragment is set using this final chosen diagonal factor.

An additional case can be added where both diagonals can be shown. This can be done by commenting out lines 28 and 29, and uncommenting lines 30 and 31.

The threshold for showing diagonal 1 becomes 0.4. If the random factor is more than 0.4, then diagonal 1 will be shown.

Similarly, the threshold for showing diagonal 2 becomes 0.6. If the random factor is less than 0.6, then diagonal 2 will be shown.

This results in a range 0.4 to 0.6, where if the random factor of the tile lies within this range, then both diagonals will be shown on the tile.
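The overlapping thresholds can be sketched in Python like so (a sketch of the decision logic only; shown is our own helper name, not part of the shader):

```python
def step(edge, x):
    # Mirrors GLSL's step: 0.0 below the edge, 1.0 at or above it.
    return 0.0 if x < edge else 1.0

def shown(random_factor):
    # Thresholds from the commented-out lines 30 and 31:
    # diagonal 1 shows above 0.4, diagonal 2 shows below 0.6,
    # so factors in the 0.4 - 0.6 range show both.
    diag1 = step(0.4, random_factor)
    diag2 = 1.0 - step(0.6, random_factor)
    return diag1, diag2

print(shown(0.3))  # (0.0, 1.0): only diagonal 2
print(shown(0.5))  # (1.0, 1.0): both diagonals
print(shown(0.7))  # (1.0, 0.0): only diagonal 1
```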

Since now there is a case where the effective color factor of both diagonals can be non-zero at the same time, the final diagonal color factor becomes whichever diagonal color factor is greater.

If the color factor of diagonal 1 is greater, this means that the fragment is closer to diagonal 1 than diagonal 2, so the fragment should be colored w.r.t. diagonal 1.

Similarly, if the color factor of diagonal 2 is greater, this means that the fragment is closer to diagonal 2 than diagonal 1, so the fragment should be colored w.r.t. diagonal 2.

Since we're already selecting the maximum color value w.r.t. diagonal 1 and diagonal 2, this case is already accounted for.

The only change required is to change thresholds for either diagonal such that there will be an overlapping range where both diagonals can be shown for a tile.

Additional Notes

The concepts taught so far show the basics of pattern image generation, random noise image generation, and a combination of both.

These concepts can be further built upon to generate sub-patterns within patterns, or multiple layers of randomness and patterns combined together.

These patterns and randomness can also be animated by including time as a factor as well, which opens up options for dynamic and animated image generation. Time can be used to influence the randomness of an image, or the patterns of an image.

Summary

  • Images can be generated without using external data, but just using some algorithmic logic within shaders.
  • Images can be generated using patterns and procedural code, or with random values and noise.
  • Logic from pattern/procedural images and random images can be stacked or combined with each other to create many more types of images.