WebGL Series, Part 6: Post-Processing | Hendrik Erz

Abstract: In part six of this series on WebGL, I introduce the concepts behind post-processing a rendered image, and how to implement that in a WebGL program.


This week, I’ll be walking you through post-processing. I’ll start by adding a little bit of post-processing immediately, so that you can see what it does, and then reorganize the code quite a bit to enable us to run arbitrary post-processing stages. If you need a refresher, check out last week’s article.

View the demo page

Adding Tone Mapping

Before we actually get into the guts of post-processing, let us first do a single post-processing stage. Last time, I introduced HDR colors that you could control using an hdrFactor of 10.0, and I mentioned that you may want to set this factor to 1 for the time being. Tone mapping is what ensures that you can use HDR colors and still see everything.

If you haven’t changed the HDR factor, you may have noticed that the colors of the circle are very bright, maybe almost white (depending on your display). This is not nice, but it is to be expected: we are using HDR colors that must be brought back into the regular color range of $[0; 1]$. If we leave the colors as they are, OpenGL will simply clamp them to $[0; 1]$, resulting in a lot of white.

To bring the bright colors back into range ourselves (and thus avoid harsh clipping artifacts), we employ tone mapping. I essentially follow the guidance of LearnOpenGL here and adjust exposure and gamma down to manageable levels.

Since tone mapping involves changing the colors, we will have to add this to the fragment shader. I decided to write a simple function for this:

vec3 tonemap (vec3 color) {
  // Exposure tone mapping: compress the HDR values back into [0; 1]
  const float exposure = 0.5;
  color = vec3(1.0) - exp(-color * exposure);
  // Gamma correction
  const float gamma = 0.8;
  color = pow(color, vec3(1.0 / gamma));
  return color;
}

While I have seen “gamma correction” several times in video game settings before, I never knew what exactly it did. It is kind of interesting to see all of these little formulas in action that do something to the colors on our displays! You can play around a bit with both exposure and gamma to see what they do. At this point, however, all the function should do is restore the original, vibrant colors without over-exposure or washed-out whites.

This is already quite nice, and was a simple step to take. But there are a few additional post-processing steps that I want to add. And for this, we will have to rewrite the shader code quite a bit, and understand frame buffers. This is the second part of the OpenGL rendering pipeline that I announced earlier.

Understanding OpenGL’s Rendering Pipeline, Part Two

Rendering in OpenGL always follows some well-defined steps: First, you set up any data your shaders need. This includes both the vertex buffers for the actual geometry and additional values such as uniforms. Our little iris indicator thus far receives a transformation matrix, the triangle data, and colors and ratios for the segments. Once this is set up, we have to tell OpenGL that we want to draw something to the canvas by setting the frame buffer to null. Finally, we call gl.drawArrays to actually draw the geometry onto the canvas by running the vertices through the vertex shader, and assigning each pixel a color in the fragment shader.
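In code, a single frame so far boils down to roughly this sequence (a condensed sketch; the variable names are illustrative, not necessarily those used in the earlier parts of this series):

// 1. Provide the data the shaders need
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer)
gl.bufferData(gl.ARRAY_BUFFER, triangleVertices, gl.STATIC_DRAW)
gl.uniformMatrix3fv(matrixLocation, false, transformationMatrix)
// 2. Select the canvas as the rendering target
gl.bindFramebuffer(gl.FRAMEBUFFER, null)
// 3. Run the vertex and fragment shaders over the geometry
gl.drawArrays(gl.TRIANGLES, 0, vertexCount)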

However, one part I was consistently skipping over is that you aren’t required to write directly to the canvas. Moreover, the canvas is itself just a regular frame buffer. So what if we create some other frame buffer to write to instead? This is how we can add post-processing effects. To understand this a little better, let’s focus on frame buffers first.

A frame buffer in OpenGL is nothing but a data structure that you can write to. You tell OpenGL where your shaders should write their output by calling gl.bindFramebuffer. If you bind null, you effectively tell OpenGL to use the frame buffer that is the canvas. But you can also bind your own frame buffers. However, in order for OpenGL to write to something, you also have to attach either a texture or what is called a render buffer to the frame buffer.

You see, a frame buffer is less an actual “buffer” (as one might understand it in terms of data storage), and more like a container for other buffers that you then actually write to. The basic idea behind frame buffers is that they allow you to organize your rendering stages. For example, you may have one frame buffer to write your actual geometry to, then you may have a frame buffer that you use in the post-processing stage. And only once you’re done with the post-processing steps, then you finally select the canvas to draw to and effectively transfer all the processed pixels onto the screen for the user to see.

There are three additional concepts to understand here: first, how you actually perform post-processing using textures; second, the so-called “ping-pong” setup for chaining post-processing steps; and lastly, reading from and writing to multiple sources and targets at the same time.

First, how does post-processing actually work, given the tools we have? Well, I found the solution hilarious, but at the same time very smart. Rendering consists of taking some geometry and drawing it onto a frame buffer. At that point we only have basic geometry that doesn’t look very nice, but we also have an entire width × height picture. We can take that entire picture and run post-processing on it. When we then draw this picture to the canvas, we see a processed image. So we end up, quite literally, doing the browser’s work of displaying a simple image, except that we had to generate that image first.

In order to draw your geometry onto a picture, you will need to use a texture. Textures are the only way to transfer large chunks of data into and out of your GPU and to run shaders on them. If you write to a texture instead of the canvas, you can transform this texture and then simply paint the texture onto the canvas. But how? A texture must be attached to some geometry, and we already drew our geometry, right? The trick is to draw a rectangle that is the same size as the canvas, and to tell it to use our texture.

That’s it. That’s the entire magic trick. Effectively, to post-process some image, we first draw our actual geometry onto a texture that is the same size as our canvas. Then, we just have to draw a rectangle that is also the same size as our canvas, use the texture as a source for the fragment shader, and transform the colors in the fragment shader as we see fit. We can repeat this step ad infinitum if we so wish, constantly drawing a texture the size of our canvas onto a rectangle the size of our canvas, progressively adjusting the colors of that texture. That brings us to the second concept.

To do post-processing you could, in principle, create as many frame buffers as you have post-processing steps. But that can become cumbersome, and OpenGL does allow us to re-use a lot of our code. So why not re-use frame buffers, too?

This is where the “ping-pong” method comes into play. For this, you need two frame buffers and two textures, one attached to each frame buffer. You bind the first ping-pong frame buffer, bind your source image as the texture for the fragment shader to read, and draw the canvas-sized rectangle, letting the fragment shader transform the colors as it writes them out. The result of this step now lives in the texture of the first ping-pong frame buffer. Next, you bind that resulting texture as the source for the fragment shader, bind the other frame buffer as the target, and run the fragment shader again. You can keep doing this for as long as you want, and all you need are two frame buffers that you switch back and forth between (hence the name). After the post-processing is done, the texture attached to the last-used ping-pong buffer contains the result.
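Expressed in code, such a loop could look roughly like this (a sketch; the pingpong array of frame buffer/texture pairs, the passes count, and the drawFullscreenRect helper are my naming, not code from this series):

// Start by reading from the scene texture that holds the rendered geometry
let source = this.scenetarget.scene
for (let i = 0; i < passes; i++) {
  const target = this.pingpong[i % 2]
  gl.bindFramebuffer(gl.FRAMEBUFFER, target.fb) // write into this buffer ...
  gl.bindTexture(gl.TEXTURE_2D, source)         // ... while reading the previous result
  this.drawFullscreenRect()                     // runs the fragment shader over the full image
  source = target.texture                       // this pass' output is the next pass' input
}
// After the loop, 'source' refers to the fully processed image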

Finally, one last concept to understand is that you can read from and write to multiple sources and targets at the same time. When you call gl.bindTexture, for example, you tell OpenGL which texture the fragment shader should sample from. And whatever texture is attached to your target frame buffer is what the fragment shader will write its fragColor to. But we can also pass multiple textures, e.g., to combine two pictures. We do so by calling gl.activeTexture before gl.bindTexture to select one of the available texture slots. WebGL 2 guarantees at least 16 of these slots for the fragment shader; the exact number can be queried via gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS).

To tell your shader to use multiple textures as sources, you simply declare them all as uniforms (uniform sampler2D u_texture;). In your drawing code, you then use gl.activeTexture to select one of the slots, gl.bindTexture to provide the data, and finally tell your fragment shader where to find the texture by setting the corresponding uniform to the slot’s number (i.e., 1 if you want the second slot).
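For example, handing a second texture to the shader on slot 1 might look like this (u_blurTexture and someTexture are illustrative names of mine, not identifiers from this series):

// Setup: point the (hypothetical) u_blurTexture uniform at slot 1
const blurLocation = gl.getUniformLocation(this.program, 'u_blurTexture')
gl.uniform1i(blurLocation, 1)

// Draw code: place the actual texture into slot 1
gl.activeTexture(gl.TEXTURE1)
gl.bindTexture(gl.TEXTURE_2D, someTexture)
gl.activeTexture(gl.TEXTURE0) // switch back so later bindTexture calls affect slot 0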

Likewise, you can also specify multiple outputs of a fragment shader. For this, you must attach multiple textures to a frame buffer. A frame buffer likewise has several slots, but here they are called “color attachments” (WebGL 2 guarantees at least four; query gl.getParameter(gl.MAX_COLOR_ATTACHMENTS) for the exact number). To enable your shader to write to multiple targets, you have to define the output variables with a “layout location” that corresponds to the color attachment slot (i.e., layout (location = 0) out vec4 fragColor; will always write to color attachment zero). Additionally, you need to make sure that the frame buffer you are writing to has a color attachment (read: texture) assigned to each location that you are producing output for.
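On the WebGL side, writing to two attachments at once then boils down to attaching one texture per output location and declaring the active attachments via gl.drawBuffers (a sketch; brightTexture is a made-up name):

// With the target frame buffer bound: one texture per fragment shader output
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, sceneTexture, 0)
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT1, gl.TEXTURE_2D, brightTexture, 0)
// Declare which attachments are active, i.e. which layout locations will be written
gl.drawBuffers([gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1])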

There is a third part to the rendering pipeline that I will explain in due time, but with what we know now, we can continue to do the first “cool” post-processing step.

Preparing Post-Processing

Since I want the colors of the iris indicator to pop a bit more, my brain immediately jumped to “Oh, bloom!”

If you were around in the early 2000s, you may remember a video game called “The Elder Scrolls IV: Oblivion”, which I enjoyed as a child. One innovation it brought to computer gaming was the extensive use of a bloom filter. Apparently, video game developers used bloom to convey brightness before other techniques became possible. But because of how much Oblivion overdid the bloom filter, everyone was talking about it. To see how much bloom is too much, I invite you to go to the demo again and set the bloom intensity to 8×. Then you have an idea of what Oblivion looked like at times.

But an iris indicator is not a video game, and here I personally believe that it can really benefit from a filter that makes it look a little too bright. So, how do we actually implement a bloom filter? It is quite simple, and I’m indebted to LearnOpenGL for providing a simple algorithm for this.

Bloom involves three steps: First, extract only the bright spots of a rendered scene. Then, blur the hell out of those bright spots. Finally, combine the blurred highlights with the original image to get this impression of brightness. Let’s see how this is implemented in OpenGL.

First, let us focus on the WebGL engine. We now need to add a few frame buffers, and we need to stop directly rendering to the canvas. Instead, we want to render our rays onto a frame buffer and then, in a second pass, we want to process them to apply bloom.

Adding a Frame Buffer

To get started, we define a frame buffer and a texture called “scene target” because it’s going to be the frame buffer we write our geometry (the triangles) to:

this.scenetarget = {
  fb: gl.createFramebuffer(),
  scene: this.createTexture()
}

For this, we also need a routine to create a new texture. Why? Because we have to adjust some settings for each texture. Here’s how that works:

private createTexture (): WebGLTexture {
  const gl = this.gl
  const texture = gl.createTexture()
  gl.bindTexture(gl.TEXTURE_2D, texture)
  // Don't wrap around when sampling outside of the texture ...
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE)
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE)
  // ... and interpolate linearly when the texture is scaled down or up
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR)
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR)
  return texture
}

Again, you can see that we first have to “bind” a texture, and then we can adjust the settings of the currently bound texture. To learn what these settings do, I recommend reading WebGL fundamentals, from which I adapted this function. Back to the frame buffer, we now have to set up the frame buffer to use the texture. Because this involves modifying the settings of the frame buffer, we need to – you guessed it – bind it first:

gl.bindFramebuffer(gl.FRAMEBUFFER, this.scenetarget.fb)
gl.bindTexture(gl.TEXTURE_2D, this.scenetarget.scene)
gl.texImage2D(gl.TEXTURE_2D, 0, internalFormat, cWidth, cHeight, 0, format, type, null)
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, this.scenetarget.scene, 0)

In this code, we first bind the frame buffer and the texture, because we want to couple them. With texImage2D we effectively tell OpenGL to allocate enough space for the texture to hold an image of cWidth × cHeight pixels (the null at the end means we don’t provide any initial image data).
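Once frame buffer and texture are coupled like this, you can optionally ask WebGL whether the combination is actually renderable. This check is not part of the original code, but it makes debugging much easier:

if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
  throw new Error('The scene frame buffer is incomplete and cannot be rendered to')
}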

Changing the Texture Size

Where do cWidth and cHeight come from? Well, here’s another lesson: there are CSS pixels, and there are actual, physical pixels. Many modern displays have such a high resolution that a single physical pixel would be almost imperceptible. Instead, on such high-resolution displays, drawing one CSS pixel usually involves drawing several physical pixels (for example, a 2×2 square). The canvas size is reported in CSS pixels, which hide the actual resolution of the display. This is why we have been setting the frame buffer viewport to the reported canvas size whenever we drew to it. But when writing to a texture, we want that texture to have as much resolution as the display itself. This means that the textures should always have the actual resolution of the canvas, not its reported CSS size.

To do so, we simply multiply the reported canvas size by window.devicePixelRatio. This device pixel ratio is, e.g., 2 for my MacBook’s screen and 1 for my regular work monitor; for your displays, it might differ. In any case, by using the device pixel ratio, we ensure that one pixel of the textures we’re writing to corresponds to one actual, physical pixel. I decided to write a simple utility function for that:

textureSize (): { cWidth: number, cHeight: number } {
  const gl = this.gl
  const cWidth = Math.ceil(gl.canvas.clientWidth * this.textureSizeModifier)
  const cHeight = Math.ceil(gl.canvas.clientHeight * this.textureSizeModifier)
  return { cWidth, cHeight }
}

The textureSizeModifier is just a variable so that you can change the resolution of the textures to values other than your device pixel ratio. Go ahead and try it out on the demo page to see the effect.
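The article doesn’t spell out how this modifier is initialized; a natural default (and my assumption here) is the device pixel ratio itself:

// One texture pixel per physical display pixel by default
this.textureSizeModifier = window.devicePixelRatio
// Lowering it renders the scene at a reduced resolution, e.g. half:
// this.textureSizeModifier = 0.5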

But now, back to the frame buffers: by calling framebufferTexture2D we attach the texture to the frame buffer. We do so here as color attachment 0, but there are several attachments to choose from. The various settings for the texture are as follows:

const internalFormat = gl.RGBA16F
const format = gl.RGBA
const type = gl.FLOAT

It is common to simply use RGBA as the internal format and UNSIGNED_BYTE as the texture’s type. However, because we’re working with HDR colors that exceed the common maximum of 1.0, we can’t do that. One little-known fact that I had to research first is that OpenGL does not support rendering to floating point colors by default. This needs to be explicitly enabled:

gl.getExtension('EXT_color_buffer_float')
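getExtension returns null when the device doesn’t support the extension, so if you prefer to fail loudly rather than end up with an incomplete frame buffer later, you can guard the call (a sketch, not part of the original code):

if (gl.getExtension('EXT_color_buffer_float') === null) {
  throw new Error('Floating point render targets are not supported on this device')
}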

Now we have a texture to render to. After the setup, it’s a good habit to unbind both again, because if you accidentally leave a texture or frame buffer bound, WebGL can get kind of funny:

gl.bindFramebuffer(gl.FRAMEBUFFER, null)
gl.bindTexture(gl.TEXTURE_2D, null)

Adjusting the Draw Code

Next, we have to modify the draw code to draw not to the canvas, but to this frame buffer. We do so by setting our freshly created frame buffer as the rendering target. setFramebuffer is simply a method that binds the provided frame buffer and also sets the viewport; I have adapted it from WebGL fundamentals:

this.setFramebuffer(this.scenetarget.fb, cWidth, cHeight)
gl.clear(gl.COLOR_BUFFER_BIT)
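In case you don’t have the WebGL fundamentals version at hand, the helper can be as small as this (a sketch of what setFramebuffer does, based on the description above; it assumes the method lives on the engine class):

private setFramebuffer (fb: WebGLFramebuffer | null, width: number, height: number): void {
  const gl = this.gl
  // null means: render straight to the canvas
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb)
  // Map clip space onto the full size of the render target
  gl.viewport(0, 0, width, height)
}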

The gl.clear call is needed to reset all the colors in the texture. Otherwise, we would draw on top of the previous frame’s colors, which would lead to a smearing effect. Now when we call gl.drawArrays, the shaders will write to the scene texture, and we can then use this scene texture as a source for our post-processing. But how do we draw the result of that onto the canvas? Let’s implement that right away, so that we can later add our post-processing in between.

To draw to our canvas, we first have to bind the canvas as our rendering target, read: as the frame buffer:

this.setFramebuffer(null, this.gl.canvas.clientWidth, this.gl.canvas.clientHeight)

Next, we have to bind the texture we have just written to as the source for our fragment shader:

gl.bindTexture(gl.TEXTURE_2D, this.scenetarget.scene)

Now we have to add a way for the fragment shader to actually retrieve the colors from this texture. For this, we declare a new sampler uniform in the fragment shader:

uniform sampler2D u_texture;

Since this is a uniform we also have to fill, we have to adjust our engine code accordingly:

this.textureUniformLocation = gl.getUniformLocation(this.program, 'u_texture')
gl.uniform1i(this.textureUniformLocation, 0)

This tells OpenGL to always feed the texture from slot 0 into that uniform. Note that we only pass the index of the texture slot here; we don’t have to copy the texture itself. That is work OpenGL does for us whenever we call bindTexture.

Nota bene: While writing this article I realized that I never actually did this, and yet the animation worked. Why? Well, because OpenGL initializes every uniform with zero, and it keeps this value for as long as you don’t overwrite it by calling uniform1i. So even if you only declare a texture in your fragment shader and never set the value in your WebGL code, it will still work, because the value is implicitly preset to 0. It wouldn’t work with a second texture, of course.

Conditional Shading

Next, we have to change our shaders. Until now, both the vertex and the fragment shader could assume that they would receive a bunch of triangles, draw those onto a canvas, and compute an appropriate color for each pixel. This would be the point at which you could create another shader pair. But this entire project should only render an iris indicator, and we are currently at ~12,000 words, so I won’t go through the added complexity of juggling multiple shader programs.

Instead, we’re going to hide all our different shaders within the two we already have. To do so, we have to define a new variable that will tell our shaders how they should behave. So let’s define an enumeration to know which passes we have:

const float FRAGMENT_PASS_PASSTHROUGH = 0.0;
const float FRAGMENT_PASS_NORMAL = 1.0;
const float FRAGMENT_PASS_BLUR = 2.0;
const float FRAGMENT_PASS_COMPOSITE = 3.0;
const float FRAGMENT_PASS_TONEMAP = 4.0;
const float FRAGMENT_PASS_BRIGHTNESS = 5.0;

(There are a bunch of additional ones that we will slowly add to the shaders). You need to copy these definitions to both shaders. Then, in the rendering engine, you’ll also want to add them:

const FRAGMENT_PASS_PASSTHROUGH = 0.0
const FRAGMENT_PASS_NORMAL = 1.0
const FRAGMENT_PASS_BLUR = 2.0
const FRAGMENT_PASS_COMPOSITE = 3.0
const FRAGMENT_PASS_TONEMAP = 4.0
const FRAGMENT_PASS_BRIGHTNESS = 5.0

Now, we have to let our shaders know which pass we are currently running, using a simple uniform. We define it in the vertex shader and pass it through to the fragment shader:

uniform float u_pass;

out float v_pass;

void main () {
  // ... other code
  v_pass = u_pass;
}

The fragment shader:

in float v_pass;

Side note: I had to learn the hard way that, even though “uniforms” are kind of constants, WebGL will make funny noises if you simply define them in both shaders. So if you have one value that you need to address in both shaders, you’ll need to only declare it in the vertex shader, and pass it through to the fragment shader.

Of course, we also need to be able to change this value, so we’ll have to adapt the rendering engine. Nothing here is new:

this.passUniformLocation = gl.getUniformLocation(this.program, 'u_pass')

// At render time, for example to tell the shaders we do a regular pass:
gl.uniform1f(this.passUniformLocation, FRAGMENT_PASS_NORMAL)

Now we can write conditional logic in our shaders. First, let’s change the vertex shader. In the first pass, the vertex shader will receive triangles and is supposed to transform them (translate them to the center of the canvas and apply a rotation). But in all other passes, we will only give it a rectangle to draw, and it should not do anything fancy with it. So we have to change the code accordingly:

vec2 transformed = u_pass == FRAGMENT_PASS_NORMAL
    ? (u_matrix * vec3(a_position, 1)).xy
    : a_position;

vec2 normalized = transformed / u_resolution;

That is all our vertex shader needs to know: if there are triangles incoming (in our “normal” rendering pass), it should transform the coordinates; at all other times, it should simply convert them to clip space. The shader that actually needs all of these funny constants we just defined is the fragment shader, because from here on, everything we change affects the colors. For now, we can just check the pass value and either compute a pixel color, or literally pass through the texture value:

void main () {
  if (v_pass == FRAGMENT_PASS_NORMAL) {
    fragColor = compute_color();
  } else if (v_pass == FRAGMENT_PASS_PASSTHROUGH) {
    fragColor = texture(u_texture, v_texcoord);
  }
}

Drawing a Texture to the Canvas

Now all we have to do is give the fragment shader the correct texture, which we have already done. We now return to the rendering engine. The last thing we did there was bind the rendered scene texture as the source for the fragment shader:

gl.bindTexture(gl.TEXTURE_2D, this.scenetarget.scene)

Now we want to draw this texture onto the canvas. How do we do that? Well, first we must define a rectangle that matches the canvas size and write its coordinates into our position buffer:

this.setFramebufferRectangle(cWidth, cHeight)

This function is very simple:

setFramebufferRectangle (width: number, height: number): void {
  // Two triangles that together cover the full width × height rectangle
  const coords = new Float32Array([
    0.0, 0.0, width, 0.0, width, height,
    0.0, 0.0, width, height, 0.0, height
  ])
  this.gl.bufferData(this.gl.ARRAY_BUFFER, coords, this.gl.STATIC_DRAW)
}

Note that for gl.bufferData to do the right thing, you need to make sure that the position buffer is bound. Because we only use a single buffer in this project, the position buffer should still be bound. What we do here is effectively overwrite the triangle data with our single rectangle. At this point, we have the correct frame buffer bound (null for the canvas), the correct texture (this.scenetarget.scene), and a simple rectangle in the buffer. Now, let’s start up our rendering pipeline by issuing the draw command:

gl.drawArrays(gl.TRIANGLES, 0, 6)

This tells OpenGL to draw triangles, and to expect six vertices (read: x/y coordinate pairs). It will pass those first to the vertex shader, which now only converts the coordinates to clip space and ignores the matrix transformation. Then, OpenGL calculates which pixels are affected (which, for this full-canvas rectangle, is every pixel) and passes the information to the fragment shader, which is instructed to just copy the colors from the texture to the drawing target. Since the drawing target has the same size as the texture, this effectively copies the image one-to-one.

Final Thoughts

At this point, everything is set up to add whatever post-processing stages you like. Next week, I’ll explain how I added a bloom filter using this setup.

Also, if you’ve followed along, you may notice that the image suddenly looks rather rough and pixelated. That’s because, due to what we just did, we lost the ability to antialias the rendered image. Fixing that will follow in article 8 in two weeks. So, as always, stay tuned!

Suggested Citation

Erz, Hendrik (2026). “WebGL Series, Part 6: Post-Processing”. hendrik-erz.de, 13 Feb 2026, https://www.hendrik-erz.de/post/webgl-series-part-6-post-processing.

