
INTRODUCTION

Videogames need to pinpoint and highlight game objects for a multitude of reasons, from drawing attention to key items to targeting homing shots on enemies. I’d like to talk about a couple of methods for highlighting meshes using Unreal Surface Materials and the Rendering Buffers.

These screen-effect visuals were created in Unreal 4.14 back in 2016.

PROJECT SETUP

For this portfolio post, I created an empty Unreal project, and am just using the default Unreal assets for my shader experiments. The main focus is to highlight the Unreal Mannequin character through several visual effects. In the center of the Test Map, I’ve added a giant Reticle mesh that all the screenshots will be captured through. The idea is that players, in-game, would peer through a special view to see all the special highlight effects. In each section, I’m just switching which Material is applied to the Reticle mesh’s glass surface.

GOAL

The goal is to create these highlight effects strictly through translucent surface materials. Nothing is done through the post process, yet we still have access to several of the GPU Buffers because translucent surfaces are drawn after the opaque pass. Really, these effects could just as easily be part of the post process, but the biggest advantage of the surface approach is that you don’t have to worry about isolating a portion of the screen space; apply the surface shader to whatever needs the effect ( a scope on a rifle, a glass wall ) and you’re good to go. Of course, apart from the materials, I’ll need to modify the Mannequin’s Blueprint to make some of these effects work too.

RESEARCH

When researching Custom Depth for Unreal 4, the first search result I’m sure most people see is Tom Looman’s website. He has a great breakdown of Custom Depth 101, and some other amazing tutorials. This post is really just an appendix to Tom’s helpful tutorial ( he is using post process though ). Of course, I’ve also looked into how other engineers have authored similar effects in GLSL on the always amazing ShaderToy website. Lastly, I did some light reading on several filters and algorithms for various screen effects that I’ll point out later on.
http://www.tomlooman.com/multi-color-outline-post-process-in-unreal-engine-4/
https://www.shadertoy.com/results?query=outline

OUTLINES

The tried and true approach to highlighting game objects ( without compromising their visuals ) is to draw an outline border around their silhouette. To my knowledge, there are two primary methods for creating outlines around game meshes during the base pass: two-pass rendering and image edge detection. Both have their strengths and weaknesses.

TWO-PASS OUTLINE

To create a real outline in 3D space, we need to render the character twice. The additional render draws the border by extrusion. In the Mannequin’s Unreal Blueprint, we duplicate its skeletal mesh; I refer to the second one as the OutlineMesh. The tricky part with skinned characters is keeping them in sync, so we need to make sure the same Animation Blueprint is applied to both meshes. The key to this effect is a simple Outline Material applied to the OutlineMesh:

Outlines are typically full-bright, so the Material’s Shading Model is set to Unlit. All we need to worry about is the outline’s color parameter plugged into the Emissive output. To create the outline border, we offset each vertex’s world position along its vertex normal. To control the outline thickness, we scale the normal by a scalar parameter ( I named it Thickness ). Since the world normals are unit vectors, setting the Thickness parameter to 3 will result in a 3 cm border.
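As a minimal sketch, here’s the same extrusion expressed as code instead of material nodes ( the function and parameter names are just for illustration; this is the value you’d feed into World Position Offset ):

```hlsl
// Sketch of the OutlineMesh extrusion, written out instead of noded up.
float3 OutlineExtrusion(float3 vertexNormalWS, float thickness)
{
    // World normals are unit length, so 'thickness' maps directly to
    // world units ( 3 -> a 3 cm border in Unreal's units ).
    return vertexNormalWS * thickness;
}
```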

At this point, the last problem is that the OutlineMesh’s inflated verts are sorting over the Mannequin’s base mesh. So, the final step is to plug a scalar parameter into the material’s Pixel Depth Offset output to ensure that the OutlineMesh will always draw behind.

Adjusting the Pixel Depth Offset value will increase and decrease the outlines within the mesh’s silhouette. As the Pixel Depth Offset increases, only the most extreme changes in surface angle will be outlined. This inner detail within the mesh is the biggest advantage that the Two-Pass Outline has over the Image Filtering Process. Incidentally, this is how Toon Shaders are typically done. Just add a step gradient to the base mesh’s colors ( sketched below ) and voilà!
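For illustration, that step gradient really is just one extra quantize step; a hedged sketch ( names are mine, not Unreal’s ):

```hlsl
// A toon ramp quantizes the smooth lighting term into flat bands.
float3 ToonRamp(float3 baseColor, float nDotL, float steps)
{
    // floor() snaps the 0..1 lighting value into 'steps' discrete bands.
    float band = floor(saturate(nDotL) * steps) / steps;
    return baseColor * band;
}
```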

CUSTOM DEPTH AND STENCIL 

Since the next outline needs these buffers, it’s a good time to explain how to use Unreal’s Custom Depth and Custom Stencil Buffers. Unreal 4’s rendering system is quite involved, so I can’t give the full picture. As a simple rundown: all Opaque Material meshes write their pixels’ Z-values into the Scene Depth Buffer. The mesh fragment shaders read from the depth buffer to decide if they’re going to overwrite or discard their pixel.

Transparent Materials are rendered after all opaque materials, and can therefore read, but not write, the Scene Depth. The Custom Depth is an optional buffer that Opaque Materials can write their Z-values into. This feature is great for isolating parts of the screen and performing depth tests on unseen meshes. It’s enabled by toggling the “Render Custom Depth Pass” flag in the Rendering category of the mesh’s options in the Blueprint.

While the Custom Depth can isolate meshes, there is no way to distinguish between two meshes when reading from that buffer. The Custom Stencil was added in a later Unreal 4 version to address this. It’s really just an append to the Custom Depth, as you need to enable the former to use the latter ( in the Project Settings, the “Custom Depth-Stencil Pass” option needs to be set to Enabled with Stencil ). The Custom Stencil adds a one byte Layer Mask for every mesh queued to render into the Custom Depth. This allows us to isolate specific meshes for different effects, colors, etc.

STENCIL MATCH TEST

The Material itself needs to test whether each pixel has the correct Stencil value. First, the test subtracts the Mask we’re looking for from the CustomStencil pixel value. The idea is that if the two values do not match, the result will be non-zero. Passing the result through three nodes ( Abs, 1-x, and Clamp ) yields a binary float value ( 1 for a match, 0 for no match ). The result of the StencilMatchTest is then used as a Lerp Alpha, or multiplied into any of the Material’s outputs to cancel the result or let it pass through.
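Written out as code, the whole node chain collapses into one line; a minimal sketch ( function and parameter names are just for illustration ):

```hlsl
// The Subtract -> Abs -> 1-x -> Clamp node chain as a single expression.
float StencilMatchTest(float customStencil, float targetMask)
{
    // A match gives |diff| = 0, so the result clamps to 1.
    // Any mismatch gives |diff| >= 1, so the result clamps to 0.
    return saturate(1.0 - abs(customStencil - targetMask));
}
```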

IMAGE EDGE DETECTION OUTLINE

So for the second Outline method, instead of rendering a second mesh, we detect and create the outline strictly in 2D space. We’re using a variation of the Sobel-Feldman Operator, an algorithm commonly used to detect edges in an image. It’s probably most famously executed in videogames by the “Borderlands” series.

For each pixel, the Sobel Operator takes eight weighted samples from the adjacent pixels. The weighted luminosities are summed into horizontal and vertical gradients, and the magnitude of that gradient decides if the pixel is an edge or not, based on a threshold.
http://homepages.inf.ed.ac.uk/rbf/HIPR2/sobel.htm
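For reference, the two standard Sobel kernels look like this in code form ( we’ll only need a binary version of the idea below ):

```hlsl
// The classic Sobel convolution kernels. SobelX responds to horizontal
// luminosity gradients, SobelY to vertical ones; the edge strength is
// the magnitude of the resulting gradient vector, tested against a threshold.
static const float3x3 SobelX = { -1.0, 0.0, 1.0,
                                 -2.0, 0.0, 2.0,
                                 -1.0, 0.0, 1.0 };
static const float3x3 SobelY = { -1.0, -2.0, -1.0,
                                  0.0,  0.0,  0.0,
                                  1.0,  2.0,  1.0 };
```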

Typically, an Edge-Detection filter is applied to a fully-rendered image. However, for this surface shader, we’re sampling the Custom Stencil buffer to determine the silhouette of the Mannequin only. This means the outline will only appear around the character; hard edges within the mesh will not be outlined.

Our material still samples eight adjacent pixels, but because of the Custom Stencil buffer, we only have to make a binary sweep: is the pixel black or not? After filtering, the result is an extruded silhouette mask. By multiplying the extruded silhouette by a binary inverse of the original silhouette, we end up with a third mask that is just the isolated border outline. Using these masks, we can assign colors to the two areas of the Mannequin: the base image and the outline itself.
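Here’s a rough sketch of that binary sweep as code; StencilMask and texelSize are my own hypothetical helpers ( the StencilMatchTest result at a given UV, and one over the screen resolution, respectively ):

```hlsl
// Hypothetical helper: 1 where the Custom Stencil matches our mask, else 0.
float StencilMask(float2 uv);

float OutlineFromStencil(float2 uv, float2 texelSize, float thickness)
{
    const float2 offsets[8] = {
        float2(-1.0, -1.0), float2(0.0, -1.0), float2(1.0, -1.0),
        float2(-1.0,  0.0),                    float2(1.0,  0.0),
        float2(-1.0,  1.0), float2(0.0,  1.0), float2(1.0,  1.0)
    };

    // Dilate: a pixel belongs to the extruded silhouette if any of its
    // eight neighbors lands on the stenciled mesh.
    float extruded = 0.0;
    for (int i = 0; i < 8; ++i)
    {
        extruded = max(extruded, StencilMask(uv + offsets[i] * texelSize * thickness));
    }

    // Extruded silhouette times the inverse of the original silhouette
    // leaves only the isolated border.
    return extruded * (1.0 - StencilMask(uv));
}
```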

OUTLINE RESULTS

Either method has its performance drawbacks. The Two-Pass Outline needs to render each object twice, which can get even more difficult when dealing with skeletal meshes. On the other hand, the Edge Filter Outline is a far more expensive material, with at least eight samples per pixel for extruding the outline. As for visual results, both outline methods work great for a full-bright, solid color outline around the Mannequin. They’re intended to highlight objects for gameplay purposes, and not necessarily for aesthetic purposes. I haven’t tried this yet, but I wonder how an outline affected by lighting would work. The edge filter could be applied to the World Normal buffer, and a general Z-bias could be applied to the normals to account for the extrusion. Other cool effects could maybe be applied too, like outlines that look like neon tubing!

X-RAY VISION

By flagging the Mannequin to render into the Custom Depth, we were able to extend its silhouette to get a nice outline. But what if we combined both outline methods to create an entirely new effect? Instead of having a second mesh for extrusion, the mesh rendered underneath the Mannequin will be its skeleton! Flagging the Skeleton to render into the Custom Stencil will get our material to highlight the bones: X-Ray Vision!

The Custom Depth buffers carry no information from the normal Scene Depth. Therefore, anything we draw using them will render in front of everything, regardless of back-to-front order. This can be awesome if intended, but if we don’t want to see our skeleton through walls, we need to compare the Custom Depth with the Scene Depth. In addition, we need to add a depth bias in favor of the SceneDepth to make sure our Skeleton sorts behind. And in order to make the highlight outline appear behind as well, we would have to apply the same extrusion to the SceneDepth, making it much more expensive…
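The occlusion test itself is small; a sketch, assuming hypothetical CustomDepth and SceneDepth helpers that return linear depth at a UV:

```hlsl
// Hypothetical depth lookups, returning linear depth in world units.
float CustomDepth(float2 uv);   // depth of the Custom Depth meshes
float SceneDepth(float2 uv);    // depth of the opaque scene

float XRayVisible(float2 uv, float depthBias)
{
    // The bias covers the thickness of the body around the bones, so the
    // skeleton shows through its own character but stays hidden behind
    // walls that are much closer to the camera than the bones are.
    return (CustomDepth(uv) < SceneDepth(uv) + depthBias) ? 1.0 : 0.0;
}
```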

OTHER VISIONS

Apart from outlining characters and viewing their skeletons through the Reticle, I want to show a few other effects I experimented with. When it comes to highlighting characters and objects via screen overlays, Metroid Prime is one of the best videogame examples out there. Through her handful of Visors, Samus can view the game world through several spectrums to see hidden enemies, scan objects, etc. Even fifteen years later, the on-screen effects are still very impressive, especially coming from a fixed function pipeline on the GameCube! I’ve taken inspiration from the game to deliver two additional highlighting visuals.

THERMAL VISION

Thermography is the detection of infrared radiation ( within the electromagnetic spectrum ) emitted from objects. Basically, it’s taking temperature information and converting it to an image in our visible spectrum. Obviously, polygonal characters do not emit any type of radiation, so we need to fake the information somehow so that every item and character’s “heat” can be measured. There are many ways to do this, and it all depends on how “accurate” we want this gameplay feature to be.

Creating a separate texture map for every mesh could work, but it doubles the texture memory footprint. Metroid Prime seems to be using separate heat maps for its environments. Another idea is to assign the thermal values to the mesh’s vertex colors. For the scope of this post, I wanted to keep the Thermal Vision within the bounds of the Reticle Scope material, not assigning special data to each mesh viewed. So, for this post, we’re faking the Thermal Vision by using the World Normal Buffer and a 256×1 Thermal Gradient texture:

Using the World Normals, we’re basically doing a Fresnel Effect ( a dot product with the Camera Vector ). To create color banding, we run the result through a step curve first. Then, the result is actually used as a UV coordinate for our thermal gradient texture. Since our texture is one dimensional, we only have to worry about the X coordinate. The “hotter” pixels will sample further along the gradient until they reach the clamped white edge.
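A minimal sketch of that lookup, assuming a bound ThermalGradient texture and a clamped sampler ( both names are mine ):

```hlsl
Texture2D    ThermalGradient;    // the 256x1 heat ramp described above
SamplerState ClampSampler;       // clamped so the hot end stays white

float3 ThermalColor(float3 worldNormal, float3 cameraVector, float bands)
{
    // Fresnel-style facing term built from the World Normal buffer.
    float heat = saturate(dot(worldNormal, cameraVector));

    // Step curve: quantize into flat bands for the thermal-camera look.
    heat = floor(heat * bands) / bands;

    // The 1D texture only cares about the X coordinate.
    return ThermalGradient.Sample(ClampSampler, float2(heat, 0.5)).rgb;
}
```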

To make the Mannequin stand out from the rest of the environment, we can use the Stencil Match Test from the previous material to lerp between the Custom Depth meshes and the rest of the scene. For the remaining screen space, we’ll use a separate 256×1 Gradient that’s much cooler in color tones, reading from the Scene Depth instead of the World Normals. This way, the entire scene is in Thermal View, but the Mannequin is still the highlighted focus.

Even though our thermal imaging does not isolate true hot spots ( like the head, chest, etc. ), I think it at least conveys the look of actual thermal cameras. Another quick trick would have been to reuse the luminosity sampling from the Sobel Operator and use those values as the UV for the thermal gradient. However, I think that leads to more false readings, because whiter objects ( like a car ) would appear hotter than a human wearing all black.

GHOST VISION

Metroid Prime features an “X-Ray Visor” that allows Samus to see not just her own skeleton, but also through hidden walls and even ghosts! Since we already revealed a skeleton above, we’ll focus on the Ghost Vision portion. Once again, this effect derives from a lot of what we were doing above.

This time the Scene, rather than the Character, requires more attention. First, the color sampling is similar to Thermal Vision, but with two solid ethereal colors instead of the texture gradients. The two colors are lerped with a typical Fresnel effect using the World Normal buffer. Keeping the two colors at high luminosity gives the world an otherworldly glow. That’s it for the Mannequin. As for the rest of the Scene, let’s give it a hazy night-vision look, like all those “found footage” videos of ghosts on YouTube. To do this, we’ll sample the World Normals four times, but do something special with the sample offsets.

Using a Noise Node and adding it to the Screen Position goes beyond a simple blur, and introduces more artifacts ( sketched together with the shake below ).

On a side note, I’m generally not a fan of Material Nodes that don’t have exposed parameters for input ( you have to click on them instead ). It makes the Material Editor less readable, and more error prone in my opinion.

Anyway, after the noise blur, we’ll give the Ghost Vision a subtle “shaky-cam” look by adding some mild Sine and Cosine oscillation. Using both Sine and Cosine is important, otherwise the shake will look very repetitive. What’s cool about the “shaky-cam” is that it gives us a freebie effect. The oscillation will cause the Scene to occasionally sample the Custom Depth objects and Characters, causing a “Ghosting” effect around them. Normally, this would be bad, but it totally works for this screen effect. The haze plus the shake really causes the Mannequin to pop out. The Mannequin is perfectly clear and disturbingly motionless, floating in ether…
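Putting the noisy four-tap blur and the oscillation together, here’s a rough sketch ( SampleNormals and Noise are hypothetical stand-ins for the World Normal lookup and the Noise node ):

```hlsl
// Hypothetical lookups standing in for the material nodes.
float3 SampleNormals(float2 uv);   // World Normal buffer lookup
float  Noise(float2 uv);           // the material's Noise node, 0..1

float3 GhostHaze(float2 uv, float time, float blurRadius, float shakeAmp)
{
    // Sine and Cosine at different frequencies, so the shake traces a
    // drifting loop instead of a repetitive back-and-forth line.
    float2 shake = float2(sin(time * 13.0), cos(time * 17.0)) * shakeAmp;

    const float2 corners[4] = {
        float2(-1.0, -1.0), float2(1.0, -1.0),
        float2(-1.0,  1.0), float2(1.0,  1.0)
    };

    float3 haze = float3(0.0, 0.0, 0.0);
    for (int i = 0; i < 4; ++i)
    {
        // Adding noise to the screen position breaks the blur up into
        // the grainy "found footage" artifacts.
        float2 jitter = (Noise(uv + corners[i]) * 2.0 - 1.0) * blurRadius;
        haze += SampleNormals(uv + shake + corners[i] * blurRadius + jitter);
    }
    return haze / 4.0;
}
```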

PLAY AROUND

I’m sharing the project on GitHub. Please feel free to download the whole project, or simply the Material uasset files. I used Unreal Engine Version 4.14.

CONCLUSION

It’s quite neat to take one visual effect and see how far you can push it within arbitrary limits. Using Custom Depth and Custom Stencil within these test rooms was very easy, but I’m sure the complications would escalate dramatically when trying to apply it to a AAA dev environment. I could see a couple ways where edge cases might cause the effect to fail or be quite taxing; for example, combining these Highlight effects with other Transparent Materials in the scene!

I like doing exercises with limitations because it forces you to be more creative, and to be prepared for actual industry constraints. One constraint example is that some mobile platforms do not feature multiple render targets at all. I want to make it clear that any of these effects shown above could look 10x better if we didn’t set any limitations ( authoring heat maps for every mesh, for example ). However, the reality is that game development is a time-sensitive industry, and if a gameplay feature is minor, there’s no sense in wasting manpower or processing power to deliver higher quality results if simpler ones can be done faster and cheaper. I guess what I’m trying to say is that the focus of these exercises is problem solving rather than delivering amazing artwork.
