Shaders get a bad rap. They contain a lot of codewords and follow some odd conventions, but truth be told they’re not actually all that hard to write. In this article I’ll be breaking down the 2D shader I use in Patch Quest. But first, a refresher on what shaders are and how they work.

Shading Crash Course

Your computer’s CPU probably has between one and four “cores”. Each core can independently run its own program, and cores can also communicate with each other (though safe communication between cores is often slow). One core might be running your web browser while another runs some background tasks, for example.

A graphics card (GPU) is different. Instead of 4 cores, modern GPUs have tens of thousands of cores. This makes them very, very fast – but it comes at a price. A CPU core can do just about anything, time permitting. But GPU cores are very simple and have a limited set of features.

One major limitation is that all GPU cores have to run the exact same program, in perfect synchronisation with all the other cores. This means you can’t have branches (if statements or loops) in your GPU programs. Some cores might branch one way while others branch another, desynchronising them and risking garbage output.

A shader is a program that runs on your GPU and renders something to the screen (like an image or a 3D mesh). It has two main components:

  1. The vertex shader determines the boundaries inside which your image will be drawn.
  2. The fragment shader then renders each pixel of the image within these boundaries.

There might be thousands of vertices or pixels that need rendering, but that’s why we use a GPU. Each core can handle one vertex or one pixel and they can all run simultaneously. We just need to make sure that our shader programs are simple and don’t contain any branches.

OK, now you know a bit about shading. Let’s take a deep dive into the “paper” shader I use for all my 2D rendering in Patch Quest.

Getting the Papery Feel

First off, I apply a paper texture to my images. This makes everything look just a little bit more detailed and textured. The texture I’m using is just a few layers of random noise overlaid at different scales (sort of like Perlin noise). The average brightness of the image is exactly 0.5 (middle gray), but some pixels are slightly lighter than that and some are slightly darker.

I take the source value, subtract 0.5 from it, then add it to the base color. This way, colors below middle gray darken the base color and colors above middle gray lighten it. Lightening is much more visible on dark colors and darkening is much more visible on light colors. So by combining both effects, this paper texture will look good on all colors!

I scale the texture by the mesh’s world coordinates, rather than its local coordinates. This keeps the texture at an even scaling across the screen regardless of whether it’s drawn on small sprites or large ones.

Here you can see a side by side comparison with this feature turned off (left) and on (right). The difference is subtle but definitely visible, especially when applied to everything on the screen!

Here is the code I use to do all this (in my fragment shader):
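In sketch form it looks something like this. I’m assuming the vertex shader passes world-space paper coordinates through i.paperUV (shown in the next snippet); _PaperTex and _PaperScale are stand-in names, not necessarily the ones Patch Quest uses:

    float4 frag (v2f i) : SV_Target
    {
        float4 col = tex2D(_MainTex, i.uv);

        // Sample the paper noise in world space so its on-screen
        // scale stays constant across sprites of any size.
        float paper = tex2D(_PaperTex, i.paperUV * _PaperScale).r;

        // The noise averages 0.5 (middle gray), so this darkens or
        // lightens the base color without changing its average brightness.
        col.rgb += paper - 0.5;

        return col;
    }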

Tech note: This approach has a downside! Since the texture is sampled in world space, as our mesh moves through the world its location will change but the texture will stay fixed in 3D space. This creates a weird “window” effect where it seems like you’re looking “through” the mesh onto a fixed background. To fix this, I subtract the mesh’s world space origin from my texture sampling location. Sampling will now be offset relative to the model’s position, while still scaled relative to the world. Here’s the code that finds these coordinates (this goes in my vertex shader):
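A minimal vertex shader sketch of that fix (unity_ObjectToWorld and UnityObjectToClipPos are standard Unity built-ins; the struct layout is illustrative):

    struct v2f
    {
        float4 vertex : SV_POSITION;
        float2 uv : TEXCOORD0;
        float2 paperUV : TEXCOORD1; // world-space paper coordinates
    };

    v2f vert (appdata_base v)
    {
        v2f o;
        o.vertex = UnityObjectToClipPos(v.vertex);
        o.uv = v.texcoord.xy;

        // World position of this vertex...
        float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;

        // ...and of the mesh's local origin.
        float3 origin = mul(unity_ObjectToWorld, float4(0, 0, 0, 1)).xyz;

        // Offset relative to the model, scaled relative to the world,
        // so the paper no longer acts like a window onto a fixed backdrop.
        o.paperUV = (worldPos - origin).xy;
        return o;
    }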

Colorised Tinting
As with many shaders, the user can supply a “tint” that modifies the mesh’s overall color. But instead of a standard multiplicative tint, I use a colorise tint.

A multiplicative tint is very simple: you just multiply one color by the other. Since the red, green and blue (RGB) channels of your color are all values between zero and one, this has the effect of darkening one color by the other. This looks really good when applied to something light (especially white), but it looks really bad when the colors are dark or very different.

Colorisation has an added step. You first find the grayscale of your input color by averaging its RGB channels. Only then do you multiply this grayscale color by your tint color. By eliminating the other colors before we tint, we preserve the luminosity of our texture (in other words, it will be colored but not darkened).

Here you can see a particularly tricky use-case: tinting a red image blue. The multiplicative tint has pretty much gone black. But the colorise tint, because we strip out the original color before multiplying, takes on the target color much better (though in practice I normally wouldn’t tint things to such an extreme degree!)

Finally, we apply a simple linear interpolation between the original texture and our colorised one. This lets us smoothly increase or decrease the intensity of this effect. Instead of using a separate variable for this, I just use the alpha channel of the target color.

Here is the code for all this (again, called from the fragment shader):
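A sketch of the whole tinting step, where _Tint is the user-supplied color and its alpha doubles as the effect’s intensity (the function name is a placeholder of mine):

    // Colorised tint: strip the original hue, then tint and blend.
    float3 ColoriseTint (float3 col, float4 tint)
    {
        // Average the RGB channels to get grayscale,
        // preserving the texture's luminosity.
        float gray = (col.r + col.g + col.b) / 3.0;

        // Multiply the grayscale by the tint color.
        float3 colorised = gray * tint.rgb;

        // Linearly interpolate between the original and the colorised
        // color, using the tint's alpha channel as the intensity.
        return lerp(col, colorised, tint.a);
    }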

I’ve also added a “silhouette” feature that lets me draw the image as a solid white blob. This is used in my post-processing for creating the glowing outline effects. But that’s a topic for another article.

HSV Shift

A hue/saturation/value (HSV) shift is a really nifty way of modifying the colors of an image. Whereas RGB colors are pretty unintuitive, HSV colors are very easy to think about.

Hue refers to the “tone” or “shade” of the image. Here you can see the same image with 3 hue shifts (original and two variants).

Saturation refers to how “washed out” a color is. Here you can see the same image with 3 different saturations (full, half and none).

Value refers to how bright an image is. Here you can see the same image with 3 different values (again – full, half and none).

By combining these you can easily create a lot of different effects. For example, when a player selects an invalid move in Patch Quest I slightly desaturate and darken all the invalid tiles to clarify to the user where they can legally move.

The code to do all this is well understood and easily found online. Here’s how it looks in my own shader (I copied this from an article I found a while back). The source color is converted from RGB into HSV, shifted, and then converted back again. This is called from my fragment shader.
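The conversion functions below are the widely circulated branchless HLSL versions that turn up in countless shader articles; the HsvShift wrapper and its parameter names are my own sketch:

    float3 rgb2hsv (float3 c)
    {
        float4 K = float4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
        float4 p = lerp(float4(c.bg, K.wz), float4(c.gb, K.xy), step(c.b, c.g));
        float4 q = lerp(float4(p.xyw, c.r), float4(c.r, p.yzx), step(p.x, c.r));
        float d = q.x - min(q.w, q.y);
        float e = 1.0e-10;
        return float3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
    }

    float3 hsv2rgb (float3 c)
    {
        float4 K = float4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
        float3 p = abs(frac(c.xxx + K.xyz) * 6.0 - K.www);
        return c.z * lerp(K.xxx, saturate(p - K.xxx), c.y);
    }

    // Convert, shift each channel, convert back.
    float3 HsvShift (float3 col, float hueShift, float satScale, float valScale)
    {
        float3 hsv = rgb2hsv(col);
        hsv.x = frac(hsv.x + hueShift);      // hue wraps around the color wheel
        hsv.y = saturate(hsv.y * satScale);  // saturation clamps to [0, 1]
        hsv.z = saturate(hsv.z * valScale);  // value clamps to [0, 1]
        return hsv2rgb(hsv);
    }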

This code is hard to follow because it has been heavily optimised. This is actually the slowest part of my shader, but it enables so many cool effects that I feel it’s very worthwhile.

Wind Effect
A wind effect can do wonders for making a 2D game seem more alive. Applying it to your grass sprites can turn a dull, static scene into something bristling with life. The code for this is pretty simple: I offset the x coordinate of each vertex based on its height above the ground, the current time and a scale factor (this is a vertex shader feature).
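A minimal sketch of that offset. It assumes the mesh origin sits at ground level, so a vertex’s height above its origin stands in for “height above the ground”; _WindSpeed and _WindStrength are placeholder parameters, and _Time is Unity’s built-in shader time:

    // Called from the vertex shader, after computing world positions.
    float3 ApplyWind (float3 worldPos, float3 worldOrigin)
    {
        float height = worldPos.y - worldOrigin.y;

        // A time-driven sine wave, phase-shifted by position so
        // neighbouring plants don't all sway in lockstep.
        float sway = sin(_Time.y * _WindSpeed + worldOrigin.x);

        // Taller points sway further than points near the ground.
        worldPos.x += sway * height * _WindStrength;
        return worldPos;
    }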

Here you can see an image with and without the wind effect (exaggerated for clarity in this article). Still images don’t really do this effect justice, though. You need to see it in motion to get the full benefit.

Texture Distortion
Sometimes you want your image to look gloopy, wobbly or slimy. By distorting the texture sampling in your fragment shader you can easily create these sorts of effects. Here’s a slug at 3 different points in time (the distortion has again been exaggerated for clarity in this article).

For this effect I just apply a sine wave to the x coordinate of the sampled pixel. This creates a wiggling effect where the image undulates from side to side. I offset the wave by the current time so the effect animates automatically. Here’s the code for this (a fragment shader feature):
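A sketch of that distortion (note the #if DISTORT_ON block; more on that in a moment). _DistortFrequency, _DistortSpeed and _DistortAmount are placeholder parameters:

    // In the fragment shader, before sampling the main texture.
    float2 uv = i.uv;
    #if DISTORT_ON
        // Undulate the x coordinate with a sine wave driven by the
        // y coordinate, offset by time so the wiggle animates itself.
        uv.x += sin(uv.y * _DistortFrequency + _Time.y * _DistortSpeed)
                * _DistortAmount;
    #endif
    float4 col = tex2D(_MainTex, uv);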

A careful reader might have spotted the #if DISTORT_ON directive. Didn’t I say earlier that shaders can’t contain branching code?

Indeed I did! And they can’t! What you’re seeing here is not truly a code branch, but an example of multi-compilation or “mega shading”. At compile time, two separate versions of our shader are created. One has the distortion feature, and one doesn’t. This feature can be enabled and disabled by swapping out shaders, rather than by branching within one shader.

This all happens automatically if I add this directive to the top of my shader:
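In Unity this is the multi_compile pragma (shader_feature is the alternative that strips unused variants from builds):

    // Compiles two shader variants: one with DISTORT_ON defined, one without.
    #pragma multi_compile __ DISTORT_ON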

Putting it All Together

Here’s what this final shader’s control panel looks like in the Unity editor. By fiddling with these parameters you can create a huge variety of effects, all within one shader.

Shaders are regularly used in 3D games but tend to be neglected in a lot of 2D games. I hope this article has shown that 2D shaders aren’t all that hard to write and you can get a great deal of value out of them.

Here’s the full code for my paper shader. Feel free to use some or all of these features within your own shaders. And with a little creativity you can extend these features to create a variety of special effects that suit the kind of game you’re working on.
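Assembled into a single ShaderLab file, the snippets from this article come together roughly like this. It’s a sketch with illustrative names and defaults (and the silhouette feature left out), not the verbatim Patch Quest source:

    Shader "Sketch/PaperSprite"
    {
        Properties
        {
            _MainTex ("Sprite", 2D) = "white" {}
            _PaperTex ("Paper Noise", 2D) = "gray" {}
            _PaperScale ("Paper Scale", Float) = 1
            _Tint ("Colorise Tint (A = intensity)", Color) = (1, 1, 1, 0)
            _Hsv ("Hue Shift / Sat Scale / Val Scale", Vector) = (0, 1, 1, 0)
            _WindStrength ("Wind Strength", Float) = 0
            _DistortAmount ("Distort Amount", Float) = 0.02
        }
        SubShader
        {
            Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
            Blend SrcAlpha OneMinusSrcAlpha
            Cull Off
            ZWrite Off
            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #pragma multi_compile __ DISTORT_ON
                #include "UnityCG.cginc"

                sampler2D _MainTex, _PaperTex;
                float4 _Tint, _Hsv;
                float _PaperScale, _WindStrength, _DistortAmount;

                struct v2f
                {
                    float4 vertex : SV_POSITION;
                    float2 uv : TEXCOORD0;
                    float2 paperUV : TEXCOORD1;
                };

                v2f vert (appdata_base v)
                {
                    v2f o;
                    float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
                    float3 origin = mul(unity_ObjectToWorld, float4(0, 0, 0, 1)).xyz;
                    // Wind: sway x by height above the mesh origin.
                    worldPos.x += sin(_Time.y + origin.x)
                                  * (worldPos.y - origin.y) * _WindStrength;
                    o.vertex = mul(UNITY_MATRIX_VP, float4(worldPos, 1));
                    o.uv = v.texcoord.xy;
                    o.paperUV = (worldPos - origin).xy; // model-relative, world-scaled
                    return o;
                }

                // Branchless RGB <-> HSV, as in the HSV Shift section.
                float3 rgb2hsv (float3 c)
                {
                    float4 K = float4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
                    float4 p = lerp(float4(c.bg, K.wz), float4(c.gb, K.xy), step(c.b, c.g));
                    float4 q = lerp(float4(p.xyw, c.r), float4(c.r, p.yzx), step(p.x, c.r));
                    float d = q.x - min(q.w, q.y);
                    float e = 1.0e-10;
                    return float3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
                }
                float3 hsv2rgb (float3 c)
                {
                    float4 K = float4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
                    float3 p = abs(frac(c.xxx + K.xyz) * 6.0 - K.www);
                    return c.z * lerp(K.xxx, saturate(p - K.xxx), c.y);
                }

                float4 frag (v2f i) : SV_Target
                {
                    float2 uv = i.uv;
                    #if DISTORT_ON
                    uv.x += sin(uv.y * 20.0 + _Time.y * 4.0) * _DistortAmount; // wobble
                    #endif
                    float4 col = tex2D(_MainTex, uv);
                    // Paper grain, centred on middle gray.
                    col.rgb += tex2D(_PaperTex, i.paperUV * _PaperScale).r - 0.5;
                    // Colorised tint, blended by the tint's alpha.
                    float gray = (col.r + col.g + col.b) / 3.0;
                    col.rgb = lerp(col.rgb, gray * _Tint.rgb, _Tint.a);
                    // HSV shift.
                    float3 hsv = rgb2hsv(col.rgb);
                    hsv = float3(frac(hsv.x + _Hsv.x), saturate(hsv.y * _Hsv.y),
                                 saturate(hsv.z * _Hsv.z));
                    col.rgb = hsv2rgb(hsv);
                    return col;
                }
                ENDCG
            }
        }
    }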

If you liked this article, why not read one of my other ones? (like this one on dynamic animation retargeting in Unity). And if you want to learn more about Patch Quest, you can watch this video.