WebGL Series, Part 4: Animating Things | Hendrik Erz

Abstract: In this fourth article of my eight-part series on WebGL, I explain how I animated the rendering so that it conveys a sense of motion.


This week, I’m going to take the rays we set up last week and start to animate the entire thing. This means that, after today’s article, your rays will move around and you’ll be roughly 50% through the ordeal! Read last week’s article here.

View the demo page

Understanding Transformations

So, cool, we now have a stationary image of some rays that should at least somewhat resemble an iris. But we can do more. The simplest step, which does not require many changes to the rendering engine, is to add animation. I wanted the animation to convey a sense of movement. We can do so in two ways: first, by slowly rotating all the rays around the origin, and second, by slowly varying the lengths of the rays over time.

This is actually quite simple to achieve. We already have most of the code in place, and strictly speaking we would not have to modify the rendering engine at all. We’ll do so anyway, because it’s simpler, faster, and helps you understand more complex setups.

Let us first talk about the rotation, as that is simpler. As you may have noticed when running just the code I provided earlier, the rays were all centered around the origin of the canvas – the top-left corner. That’s clearly not desirable. The reason is that the ray generation code only calculates positions based on the unit circle, and the unit circle is centered at the origin, $x = 0$ and $y = 0$. But we want the origin to be at the center of the canvas, i.e., at half its width and half its height.

How do we fix that? Well, we could indeed handle the translation of all the rays away from the origin and into the center of the canvas right in the ray coordinate generation. But as I mentioned earlier, we can instead move that work into the vertex shader – the same could have been done for the scaling that I do perform in the ray generation code. That would have been an optimization, but I decided against it, because this project has already eaten up so much time.

Note, February 1, 2026: This is not quite true. While this article series has been unfolding, I added a scale matrix to the mix, based on my impression that we should do only the minimal work in JavaScript and move everything parallelizable into the (parallel-running) vertex shader. However, there’s one thing I overlooked: when I calculate the ray coordinates by multiplying the cosine and sine of the ray’s angle with an inner and outer radius, that calculation has to happen in the JavaScript code. The only “optimization” I could have made would’ve been to use the fraction-of-1 inner and outer radii that I define in the ray generation code, but that is a calculation the JavaScript only has to perform very infrequently anyway, so there is not much to be gained. Keeping the scaling operation in the JavaScript code instead of the vertex shader really is the only option we have if we want to produce properly spaced vertices. So, please keep in mind in the following that when I talk about some “optimization,” it really isn’t one – it has to be done this way if we want to end up with visible rays.

So there are only two transformations we now have to add to the code: translation and rotation. We do so using matrices. In game development code you will encounter matrices all the time, simply because they are easier to work with. Indeed, in my actual research work I also have to work with matrices frequently, so for me this was a mere exercise. Essentially, what you do in game development (or any rendering, for that matter) is define three matrices. One describes the translation of a point from one position to another. The second describes the rotation of all vertices. And the last one describes how to scale all points uniformly up or down (make them bigger or smaller). The scaling is already done, so we only have two matrices left.
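For reference, the third of these matrices – the scaling matrix we will not need here – would look like this in the same 3×3 layout. This is just a sketch in the style of the helper functions defined in the next section (the Mat3 type is the same flat nine-number array used there):

// Not used in this project – scaling already happens during ray generation.
// Shown only for completeness as the third of the classic 2D transformation matrices.
function scalingMatrix (sx: number, sy: number): Mat3 {
  return [
    sx, 0, 0,
    0, sy, 0,
    0, 0, 1
  ]
}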

What we will do is produce these two matrices and then multiply them, because the matrix product lets us apply both transformations to each vertex at the same time. And because these matrices remain the same for all vertices (that is, they are “global”), we only have to perform that work once; we can then pass the combined matrix to the vertex shader, which transforms all vertices in parallel.

Creating Matrices

To achieve that, we again borrow three utility functions from WebGL Fundamentals. I was too lazy to derive the matrices myself, and they are so common that I didn’t want to reinvent the wheel here. To learn more, I recommend reading the corresponding WebGL Fundamentals guide.

The code is simple:

// Note: Mat3 is simply a flat array of nine numbers, laid out the way WebGL
// expects it (column by column).

// Moves every point by tx/ty
function translationMatrix (tx: number, ty: number): Mat3 {
  return [
    1, 0, 0,
    0, 1, 0,
    tx, ty, 1
  ]
}

// Rotates every point around the origin by the given angle (in radians)
function rotationMatrix (rad: number): Mat3 {
  const c = Math.cos(rad)
  const s = Math.sin(rad)
  return [
    c, -s, 0,
    s, c, 0,
    0, 0, 1
  ]
}

// Multiplies two 3×3 matrices. When the result is later applied to a vertex,
// mat2's transformation is applied first, followed by mat1's.
function mat3mul (mat1: Mat3, mat2: Mat3): Mat3 {
  const [a00, a01, a02] = [mat1[0 * 3 + 0]!, mat1[0 * 3 + 1]!, mat1[0 * 3 + 2]!]
  const [a10, a11, a12] = [mat1[1 * 3 + 0]!, mat1[1 * 3 + 1]!, mat1[1 * 3 + 2]!]
  const [a20, a21, a22] = [mat1[2 * 3 + 0]!, mat1[2 * 3 + 1]!, mat1[2 * 3 + 2]!]
  const [b00, b01, b02] = [mat2[0 * 3 + 0]!, mat2[0 * 3 + 1]!, mat2[0 * 3 + 2]!]
  const [b10, b11, b12] = [mat2[1 * 3 + 0]!, mat2[1 * 3 + 1]!, mat2[1 * 3 + 2]!]
  const [b20, b21, b22] = [mat2[2 * 3 + 0]!, mat2[2 * 3 + 1]!, mat2[2 * 3 + 2]!]
  return [
    b00 * a00 + b01 * a10 + b02 * a20,
    b00 * a01 + b01 * a11 + b02 * a21,
    b00 * a02 + b01 * a12 + b02 * a22,
    b10 * a00 + b11 * a10 + b12 * a20,
    b10 * a01 + b11 * a11 + b12 * a21,
    b10 * a02 + b11 * a12 + b12 * a22,
    b20 * a00 + b21 * a10 + b22 * a20,
    b20 * a01 + b21 * a11 + b22 * a21,
    b20 * a02 + b21 * a12 + b22 * a22
  ]
}

One thing I did learn while copying this code is how accustomed I have grown to how easy it is to work with matrices in numpy, and how cumbersome the same thing looks in JavaScript. But alas, these three functions do the job perfectly.

Now we have to modify the drawFrame function in the IrisIndicator class:

const now = Date.now()
const msPerRotation = 120_000
const rot = now % msPerRotation / msPerRotation
const moveByRadians = -rot * (2 * Math.PI)

const originX = this.gl.canvas.clientWidth / 2
const originY = this.gl.canvas.clientHeight / 2

const mat = mat3mul(
  translationMatrix(originX, originY),
  rotationMatrix(moveByRadians)
)

What we first do is determine the rotation of all the rays. We want this to be an endless spinning motion, and we want this to depend on time, not the frame rate. Why? Bear with me, I will explain below. First, let’s finish talking about this code block.

First, I define rot as a ratio between 0 and 1, based on time. So, if we say that the rays should take 120 seconds, or two minutes, for one full rotation, the third line essentially wraps the rotation into the range between 0 and 1, and does so in lockstep with time. For example, 30 seconds into the 120-second period, rot is 0.25, i.e., a quarter of a full turn. So even if you reload the page, the rotation picks up exactly where it should be, independent of the frame rate. We then turn the rotation into radians, because that is the unit most trigonometric functions expect.

Side note: Why have we, as a society, decided that we want to use degrees to describe portions of a circle, when all the math only works with radians? I have never understood that. I mean, I can intuitively say what 270° of rotation means, whereas it looks weird to call it $\frac{3}{2}\pi$ radians. But radians are what sine and cosine functions expect. Anyway, if you really insist on using degrees instead of radians, you can convert between the two using $rad = \frac{d}{180} \cdot \pi$.
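As code, such a conversion could look like this – two hypothetical helper functions that are not part of the project, shown only to illustrate the formula:

// Hypothetical helpers illustrating the degree/radian conversion; not used in the project.
function degToRad (deg: number): number {
  return deg / 180 * Math.PI
}

function radToDeg (rad: number): number {
  return rad / Math.PI * 180
}

degToRad(270) // ≈ 4.712, i.e. 3/2 π
radToDeg(Math.PI) // 180

Back to topic.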

As you may or may not know, the angle in radians moves “backwards” around the circle, in a counterclockwise motion. The code -rot * (2 * Math.PI) essentially negates the angle so that the rays end up moving in a clockwise direction.

The next lines are fortunately simple: Calculate the center of the canvas and create a translation matrix based on that. Finally, we just multiply the two matrices to arrive at one single matrix that performs both transformations at the same time. Now we have to provide that matrix to the WebGL engine.

To do so, we first add a third parameter to the draw function that receives this matrix:

this.engine.draw(triangleData, nComponents, mat)
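On the engine side this means the draw method gains a third parameter. The engine class was built in the earlier articles, so take the following merely as a sketch of the new shape; the class name and parameter types are assumptions:

// A sketch only – the real draw method lives in the rendering engine from the
// previous articles. The important change is the additional Mat3 parameter.
class Engine {
  // ... shaders, buffers, uniform locations, etc. ...
  public draw (triangleData: number[], nComponents: number, matrix: Mat3): void {
    // ... bind buffers and set attributes as before ...
    // ... upload the matrix uniform (shown below), then issue the draw call ...
  }
}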

Next, we have to tell the vertex shader that we will be passing a matrix in:

uniform mat3 u_matrix;

Again, we have to retrieve the uniform’s actual memory location:

this.matrixUniformLocation = gl.getUniformLocation(this.program, 'u_matrix')

And finally, we provide our matrix at draw time (the false argument simply tells WebGL not to transpose the matrix before uploading it):

gl.uniformMatrix3fv(this.matrixUniformLocation, false, matrix)

Lastly, we tell the vertex shader to make use of this matrix:

vec2 transformed = (u_matrix * vec3(a_position, 1)).xy;
vec2 normalized = transformed / u_resolution;
// ... the rest of the shader code

One thing to note: as you can see, the matrices are 3×3, but we are only dealing with x/y coordinates. To multiply our 2D coordinate with a 3×3 matrix, we have to add a third component (a constant 1), do the calculation, and then discard that component again by only extracting .xy from the result. That extra component is not depth – it is simply what allows the translation to be expressed as a matrix multiplication; everything here remains perfectly two-dimensional. If you want an actual third dimension, you can absolutely add one, but you would then have to switch to 4×4 matrices. I’ll leave that for you to google yourselves.
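To make this concrete, here is what that shader line computes, written out in plain TypeScript. This is a sketch for illustration only; applyMat3 is a hypothetical helper, and the indices follow how WebGL reads the flat array column by column:

// Hypothetical helper: apply a 3×3 matrix to a 2D point the way the vertex
// shader does – lift the point to [x, y, 1], multiply, keep only x and y.
function applyMat3 (mat: Mat3, x: number, y: number): [number, number] {
  return [
    mat[0]! * x + mat[3]! * y + mat[6]!,
    mat[1]! * x + mat[4]! * y + mat[7]!
  ]
}

// A pure translation by (100, 50) moves the point (10, 10) to (110, 60):
applyMat3(translationMatrix(100, 50), 10, 10) // → [110, 60]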

If you now re-render the entire thing, you should see that the rays are now actually centered in the canvas, and that their rotation differs every time you reload the page. But it would be great to actually see the animation in motion without hitting the reload button, right? For that, we have to make a slight modification to the IrisIndicator class: we need to create an infinite animation loop.

Defining a Rendering Loop

Let’s start with a simple solution:

setInterval(() => iris.drawFrame(), 1000/60)

The 1000/60 just means: “Run the function 60 times a second” – roughly once every 16.7 milliseconds – a.k.a.: render at 60 fps.

If you do so, you should see the rotation in action, but depending on your display, it might flicker badly. The reason is that setInterval doesn’t care about your display’s refresh rate, so it may draw right in the middle of a refresh, or skip one entirely, which leads to the flickering. To fix that, we instead call requestAnimationFrame() and, at the end of each frame, request the next animation frame:

class IrisIndicator {
  // ... other code
  loop (timestamp: number) {
    // The timestamp can be forwarded to drawFrame to compute time deltas
    // (see the code repository for how that is done)
    this.drawFrame()
    requestAnimationFrame(ts => this.loop(ts))
  }
}

// And, at the end of the entire setup code:
requestAnimationFrame(ts => iris.loop(ts))

This essentially tells the browser: “Please run this function at the appropriate time to make sure we can draw without flickering.” How often the function gets executed therefore depends on your display’s refresh rate. You can actually figure out that refresh rate by taking the timestamp that the animation frame passes to drawFrame (see the code repository for how that is done): computing 1000 / (currentTime - previousTime) gives you the frame rate. Neat! Using this information you can even implement a frame limiter that caps the refresh rate at, say, 30 fps, by simply not doing any work until at least 1000 / fpsLimit milliseconds have passed.
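Here is a minimal sketch of what such a limiter could look like, assuming a loop method like the one above; fpsLimit and lastDraw are hypothetical additions, not part of the project’s actual code:

class IrisIndicator {
  // Hypothetical fields for this sketch:
  private fpsLimit = 30  // target frame rate
  private lastDraw = 0   // timestamp of the last frame we actually drew

  // ... other code

  loop (timestamp: number) {
    // Only draw if enough time has passed since the last drawn frame
    if (timestamp - this.lastDraw >= 1000 / this.fpsLimit) {
      this.drawFrame()
      this.lastDraw = timestamp
    }
    // Always schedule the next check, even when we skip drawing
    requestAnimationFrame(ts => this.loop(ts))
  }
}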

This is also what I alluded to earlier: the rotation only depends on time, not on the frame rate. This means that, strictly speaking, the rotation “continues” even if your browser doesn’t render the animation at all (because, e.g., the browser window is minimized). But, more crucially, it ensures that the animation does not change speed regardless of the display’s refresh rate. This is generally a good habit to foster.

Animating the Rays

Now that our iris rotates successfully, it is time to also animate the individual rays. Earlier, I already introduced some properties for each ray that we will now use to actually animate them. I will only animate their lengths, so that the rays move inward, then outward, then inward again, and so on.

Fortunately, we can do this entirely in the drawFrame method without any changes to the rest of the code:

// deltaMs holds the milliseconds elapsed since the previous frame
// (derived from the animation frame timestamps; see the repository)
const speed = deltaMs / this.rayMovementSpeed
for (const ray of this.rays) {
  const { min, max } = ray.radius
  let { current, inc } = ray.radius
  const increment = (max - min) * speed
  current = inc ? current + increment : current - increment
  if (current <= min) {
    current = min
    inc = true
  } else if (current >= max) {
    current = max
    inc = false
  }

  ray.radius = { ...ray.radius, current, inc }
}

As you can see, the deeper we go down this rabbit hole, the more complex the code becomes. What do we do here? First, we determine the speed with which we want to adjust the current length of each ray – again based on time, not on the frame rate. this.rayMovementSpeed is essentially just another parameter we can set that determines how fast the rays change their length. Feel free to play around with some values.

In any case, here we actually change how the rays are calculated: first we extract the minimum and maximum radius. These are randomly allocated (within some limits), and the rays oscillate between the two. We extract current and inc separately because we need to modify them: current remembers the current radius of each ray, and inc simply tells us whether we are currently in a “lengthening” or a “shortening” motion. Then we determine the increment by which to adjust the ray. Because it is proportional to max - min, rays that have a longer distance to travel also move faster.

Then, based on inc, we either increase or decrease the current radius of the ray, and change direction if we overshoot the upper or undershoot the lower limit. Finally, we write the values back into the ray object itself. Afterwards, we can again call this.rays.map() to calculate the coordinates based on these updated radii.
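For reference, the fields used above imply that each ray’s radius object looks roughly like this. The actual type lives in the ray generation code from the earlier articles, so treat it as a sketch:

// Approximate shape inferred from the loop above; the real definition is in
// the ray generation code.
interface RayRadius {
  min: number      // smallest radius the ray may shrink to
  max: number      // largest radius the ray may grow to
  current: number  // the radius used for the current frame
  inc: boolean     // true while lengthening, false while shortening
}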

If you now re-run the code, you should see both the rotational movement and the rays moving in and out. This completes everything I wanted to animate in terms of geometry.

Final Thoughts

We’re four articles deep into the series, and we still haven’t quite completed the journey. At this point, you should have something that starts to resemble the final animation, but two things are still missing. First, and most crucially, there is no color variation yet – that is what I will go through in the next installment next week. Second, we will add some post-processing in the articles after that. So stay tuned!

Suggested Citation

Erz, Hendrik (2026). “WebGL Series, Part 4: Animating Things”. hendrik-erz.de, 30 Jan 2026, https://www.hendrik-erz.de/post/webgl-series-part-4-animating-things.
