Post-processing effects are used extensively in modern games and have many applications: from HDR tone-mapping and colour correction to cartoon effects, the possibilities are almost limitless.
BigWorld Technology supports complete user customisation of the post-processing chain: for artists, via a built-in editing tool; for programmers, via full control over every parameter in Python, and drop-in shaders in the form of DirectX/HLSL effect files.
However, before you jump into creating your own swanky new effect, it pays to plan ahead. Effects should play nicely together, they should reuse render targets where possible, and you need to monitor performance.
After the opaque scene, translucencies, and lens effects are drawn, and before the GUI, the PostProcessing::Manager draws its current chain. A Chain contains a list of Effects that draw in order; internally, each Effect contains a list of Phases that draw in order. A Phase usually draws a full-screen quad to the screen using an effect file, although other transfer meshes are available.
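The Chain/Effect/Phase relationship amounts to nested draw loops. The sketch below uses plain Python stand-ins for illustration only; these are not the engine's actual classes.

```python
# Illustrative stand-ins for the Chain/Effect/Phase relationship.
# Not the real BigWorld API.

class Phase:
    def __init__(self, name):
        self.name = name

    def draw(self, log):
        # A real phase would draw a full-screen quad with its effect file.
        log.append(self.name)

class Effect:
    def __init__(self, phases):
        self.phases = phases

    def draw(self, log):
        for phase in self.phases:      # phases draw in order
            phase.draw(log)

class Chain:
    def __init__(self, effects):
        self.effects = effects

    def draw(self, log):
        for effect in self.effects:    # effects draw in order
            effect.draw(log)

# Two effects, each with its own phases, drawn strictly in order.
chain = Chain([
    Effect([Phase("copyBackBuffer"), Phase("downsample")]),
    Effect([Phase("bloom")]),
])
order = []
chain.draw(order)
print(order)   # ['copyBackBuffer', 'downsample', 'bloom']
```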
There are three parts to the implementation of post-processing. The core is written in C++, in the bigworld/src/lib/post_processing library. All of the features there are exposed to python via the _PostProcessing module.
In Python, the PostProcessing module exists in bigworld/res/scripts/client/PostProcessing and imports all of the methods from _PostProcessing. This allows you to override, or wrap, any of the C++ methods. Therefore, all Python calls should be made to PostProcessing, not _PostProcessing.
By default, the PostProcessing module registers three graphics settings. When the user selects high/medium/low, the appropriate post-processing chain is loaded. These chains are found in bigworld/res/system/post_processing/chains/ and are named "High Graphics Setting.ppchain", "Medium Graphics Setting.ppchain", and "Low Graphics Setting.ppchain". This makes it easy for developers to redefine the default post-processing chains: create new chains in World Editor and save over the top of these files.
In general, it is expected that a game will use a combination of the default post-processing chain files, dynamically mixed with gameplay-related effects. To achieve this, follow the examples in the PostProcessing module. Additionally, take a look at the Python API for PyMaterial, which demonstrates how to smoothly fade your dynamic post-processing effects in and out.
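Fading an effect in or out amounts to interpolating a shader parameter (typically an alpha or blend weight) over time, then writing it to the material each frame. A self-contained sketch of the idea, using a hypothetical per-frame update rather than the real PyMaterial API:

```python
def faded_alpha(elapsed, duration):
    """Linear fade from 0.0 to 1.0 over 'duration' seconds, clamped."""
    if duration <= 0.0:
        return 1.0
    return max(0.0, min(1.0, elapsed / duration))

# Fading an effect in over 2 seconds: sample the alpha each frame and
# write it to the material's blend parameter (not shown here).
samples = [faded_alpha(t * 0.5, 2.0) for t in range(6)]  # t = 0.0s .. 2.5s
print(samples)  # [0.0, 0.25, 0.5, 0.75, 1.0, 1.0]
```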
Finally in World Editor, the Post-Processing tab is a full-featured editor and preview tool for chains. It loads and saves .ppchain files. Please see the Content Creation Manual and the Content Tools Reference Guide for further information.
While BigWorld comes with a basic set of post-processing shaders, phases and effects, more likely than not you will find yourself needing to implement an effect that is unique to your game.
For this example, we will be creating a post-process that will invert all the colours on the screen.
We will author a post-processing effect that can be added as part of the client's overall post-processing chain, and we will write a custom shader that performs the actual colour inversion.
So how are we going to get the GPU to invert all the colours on the screen?
Because the BigWorld client supports drop-in DirectX Effect files (.fx), this step is relatively easy. All we need to do is author a .fx file that takes an input texture, inverts the colour, and outputs that value.
float4 ps_invert( PS_INPUT input ) : COLOR0
{
    float4 map = tex2D( inputSampler, input.tc0 );
    float4 invMap = float4(1,1,1,1) - map;
    invMap.w = 1;
    return invMap;
}
OK, so that was the easy bit. This pixel shader assumes a few things: that the vertex shader passes through a set of texture coordinates, that there is a sampler reading the correct texture map, and that a PS_INPUT structure is defined.
Luckily all of this has been taken care of, via the effect include file post_processing.fxh (bigworld/res/shaders/post_processing/post_processing.fxh)
The complete effect utilising this pixel shader is quite straightforward, and looks something like this:
#include "post_processing.fxh"

DECLARE_EDITABLE_TEXTURE( inputTexture, inputSampler, CLAMP, CLAMP, LINEAR, "Input texture/render target" )

float4 ps_invert( PS_INPUT input ) : COLOR0
{
    float4 map = tex2D( inputSampler, input.tc0 );
    float4 invMap = float4(1,1,1,1) - map;
    invMap.w = 1;
    return invMap;
}

STANDARD_PP_TECHNIQUE( compile vs_2_0 vs_pp_default(), compile ps_2_0 ps_invert() )
The shaders can also be edited in FX Composer by adding the following:
#define FX_COMPOSER 1
#include "post_processing.fxh"

FX_COMPOSER_STANDARD_VARS

DECLARE_EDITABLE_TEXTURE( inputTexture, inputSampler, CLAMP, CLAMP, LINEAR, "Input texture/render target" )

float4 ps_invert( PS_INPUT input ) : COLOR0
{
    float4 map = tex2D( inputSampler, input.tc0 );
    float4 invMap = float4(1,1,1,1) - map;
    invMap.w = 1;
    return invMap;
}

STANDARD_PP_TECHNIQUE( compile vs_2_0 vs_pp_default(), compile ps_2_0 ps_invert() )
STANDARD_FX_COMPOSER_TECHNIQUE( compile vs_2_0 vs_pp_default(), compile ps_2_0 ps_invert(), "RenderColorTarget0=inputTexture;" )
In the FX Composer settings, add the path to post_processing.fxh and the other post-processing headers to your include path. Comment out the #define FX_COMPOSER 1 line to use the shader in the client or World Editor.
As the post-processing chain contains many phases, which often write to intermediate, invisible render targets, it is often desirable to see the intermediate results of a post-processing effect or chain. There are two methods available for this purpose, PostProcessing.debug() and World Editor's preview feature.
In the client, you can register a render target of arbitrary size with the _PostProcessing module and have it record all intermediate steps. This render target can then be viewed by displaying it in the GUI. A helper class, ChainView, is available in the PostProcessing module; it displays the entire chain on-screen in real time.
In World Editor, there is a preview button in the post-processing editor, which displays the intermediate results inside each of the phase nodes in the editing graph.
Sometimes this simple preview is not suitable. By default, the preview directly displays the output of each phase's pixel shader. However, the output of an intermediate step is often written to a floating-point render target that does not map directly to the visible colour range; other times, information is encoded in ways that are not directly viewable.
Take, for example, a depth-of-field lens simulation. One possible implementation might decode the depth buffer and categorise the scene into seven separate areas: three levels of blur in front of the focal range, in focus, and three levels behind it. This information may be written into a single-component floating-point render target as a value between -3 and +3.
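The categorisation step itself is simple arithmetic. Here is a sketch of the idea in plain Python; the real implementation would run per-pixel in a shader, and the mapping and thresholds below are made up for illustration.

```python
def blur_level(depth, near, far):
    """Classify a depth value into one of 7 blur levels, -3..+3.

    Depths inside [near, far] are in focus (level 0); depths outside
    map to up to 3 levels of blur in front of or behind the focal
    range. The mapping is illustrative only.
    """
    if depth < near:
        # In front of the focal range: -1, -2, or -3.
        return -min(3, 1 + int((near - depth) / near * 3))
    if depth > far:
        # Behind the focal range: +1, +2, or +3.
        return min(3, 1 + int((depth - far) / far * 3))
    return 0

print(blur_level(5.0, 10.0, 20.0))   # -2 (two levels of blur in front)
print(blur_level(15.0, 10.0, 20.0))  # 0 (in focus)
print(blur_level(90.0, 10.0, 20.0))  # 3 (fully blurred background)
```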
When the output of a pixel shader is not directly viewable, you can author another pixel shader to be used by the preview function. To do this, add a new technique to your effect. This technique must be called "Preview" and, if available, will be used in lieu of the main technique when previewing the post-processing chain. It should output the data so that it is viewable and makes sense on an ordinary R8G8B8A8 render target when viewed on-screen.
In the above example, you could write a preview technique that displays three shades of red for blurred areas in front of the focal range, full green for all areas in focus, and three shades of blue for all areas blurred behind the focal range.
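Such a preview mapping is just a lookup from blur level to a viewable colour. A sketch in plain Python (the RGB values here are illustrative, not taken from any shipped shader):

```python
def preview_colour(level):
    """Map a blur level in -3..+3 to a viewable RGB colour.

    Negative levels (blur in front of focus) map to shades of red,
    0 (in focus) to full green, and positive levels to shades of blue.
    """
    if level < 0:
        return (abs(level) / 3.0, 0.0, 0.0)   # three shades of red
    if level > 0:
        return (0.0, 0.0, level / 3.0)        # three shades of blue
    return (0.0, 1.0, 0.0)                    # full green

print(preview_colour(-3))  # (1.0, 0.0, 0.0)
print(preview_colour(0))   # (0.0, 1.0, 0.0)
print(preview_colour(2))   # (0.0, 0.0, ~0.67)
```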
So how do we get our new effect to manipulate the screen at post-processing time? We have to author a PostProcessing::Effect. Most often, this is done via the post-processing editor in World Editor. World Editor saves out .ppchain files; these contain chains of effects and phases, and can simply be loaded and set as the current post-processing chain. However, it is also useful to know how to use the Python API: you have access to the entire chain, and often you will want to tie post-processing effects directly to game logic. It also helps to understand what is happening under the hood when authoring chains in the editor.
For this example effect, we must write every pixel in the backbuffer with the inverse of whatever colour is in the backbuffer.
PC hardware cannot read from the same texture that is being written to, so we first need to grab a copy of the back buffer and store it in another texture.
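This read/write restriction is why multi-pass chains typically ping-pong between two buffers: each pass reads the result written by the previous pass and writes somewhere else. A minimal sketch, with Python lists standing in for textures:

```python
def run_passes(source, passes):
    """Apply a list of per-pixel operations, ping-ponging between two
    buffers so that no pass ever reads the buffer it writes to."""
    read_buf = list(source)
    write_buf = [0.0] * len(source)
    for op in passes:
        for i, pixel in enumerate(read_buf):
            write_buf[i] = op(pixel)
        read_buf, write_buf = write_buf, read_buf  # swap for the next pass
    return read_buf

invert = lambda p: 1.0 - p
result = run_passes([0.0, 0.25, 1.0], [invert, invert])
print(result)  # two inversions cancel out: [0.0, 0.25, 1.0]
```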
This snippet of python code creates a CopyBackBuffer phase, creates a render target that is the size of the back buffer, and hooks the two up.
import BigWorld
import PostProcessing

phase1 = PostProcessing.CopyBackBuffer()
bbcRT = BigWorld.RenderTarget( "backBufferCopy", 0, 0 )
phase1.renderTarget = bbcRT
In World Editor, we can simply drop a "BackBufferCopy" phase into our effect and be done with it.
Note in this case, we are creating a new render target, however generally you want to share render targets between effects and phases, especially full-screen ones like the one above. So instead of creating a render target, we could instead use the RenderTargets module like this:
bbcRT = PostProcessing.RenderTargets.rt("backBufferCopy")
Now that we have a copy of the back buffer that we can read as a texture, it is time to do our colour-inversion pass:
phase2 = PostProcessing.Phase()
phase2.material = BigWorld.Material( "shaders/post_processing/colour_invert.fx" )
phase2.material.inputTexture = phase1.renderTarget.texture
phase2.renderTarget = None
phase2.filterQuad = PostProcessing.TransferQuad()
This code sample creates a new phase, this time a generic PyPhase object. A PyPhase object has a PyMaterial and a FilterQuad, which it uses to write to a RenderTarget.
We have created a new PyMaterial, from "colour_invert.fx", the shader we authored earlier.
The effect file uses a texture variable named "inputTexture". Since we marked this variable in the effect as 'editable', it shows up in the python dictionary for the material. Thus we can set it directly to the texture held by the back buffer copy render target.
We have set this phase's renderTarget attribute to None. Specifically, this means "don't set a render target"; in practice, it means we want to write directly to the main scene's back buffer instead of an off-screen render target.
Note that whenever we write to the back buffer, we change its contents, and the next post-processing effect or phase must use a new copy that contains these changes. The BackBufferCopy phase internally detects whether the back buffer has been modified since the last copy was taken. It is therefore fine to use BackBufferCopy phases liberally: there is no performance penalty when a copy is not actually needed.
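The dirty-flag optimisation works along these lines, sketched here with a version counter standing in for the back buffer contents (illustrative only, not engine code):

```python
class BackBufferCopy:
    """Skip the copy when the back buffer is unchanged since last time."""
    def __init__(self):
        self.last_version = None
        self.copies_made = 0

    def draw(self, back_buffer_version):
        if back_buffer_version != self.last_version:
            self.copies_made += 1            # actually copy the surface
            self.last_version = back_buffer_version
        # else: the previous copy is still valid, so do nothing

bbc = BackBufferCopy()
bbc.draw(1)   # first use: copies
bbc.draw(1)   # back buffer unchanged: skipped
bbc.draw(2)   # back buffer written to since: copies again
print(bbc.copies_made)  # 2
```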
Finally, the phase uses a FilterQuad to draw with. FilterQuads usually draw with n sets of filter taps; in this case, we just want to read a single pixel from the source texture for each pixel in the output render target, so we have created a PyTransferQuad, which has only a single sample point and no offsets. If we wanted to do some texture filtering, we could have used a PyFilterQuad instead and specified n sample points, with each sample point representing (u-offset in texels, v-offset in texels, weight, unused).
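Conceptually, each output pixel is the weighted sum of n taps read at the given offsets. A sketch on a one-dimensional "texture" (pure Python, for illustration only):

```python
def filter_pixel(texture, x, taps):
    """Weighted sum of samples at (offset, weight) taps around x,
    clamping reads at the texture edges."""
    total = 0.0
    for offset, weight in taps:
        i = max(0, min(len(texture) - 1, x + offset))
        total += texture[i] * weight
    return total

tex = [0.0, 0.0, 1.0, 0.0, 0.0]

# A transfer quad is the degenerate case: one tap, no offset, weight 1.
transfer = [(0, 1.0)]
print(filter_pixel(tex, 2, transfer))  # 1.0 -- a straight copy

# A 3-tap blur: neighbouring pixels weighted 1/4, 1/2, 1/4.
blur = [(-1, 0.25), (0, 0.5), (1, 0.25)]
print(filter_pixel(tex, 2, blur))      # 0.5
```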
colourInvert = PostProcessing.Effect()
colourInvert.phases = [ phase1, phase2 ]
colourInvert.name = "Invert Colours"
PostProcessing.chain( [colourInvert] )
This final code example wraps our two phases into a single Effect and registers that Effect as the post-processing chain. From here on, the colours on the screen will be inverted.
One of the main sets of resources used by post-processing chains are render targets. These tend to be multiples of the back buffer size, and have different surface formats and uses. The BigWorld client exposes the PyRenderTarget class which can be used to create custom render targets on demand. Please see the Python Client API for detailed instructions on how to use PyRenderTarget.
Note that it is fine to create as many render targets as you like, as the actual surface memory is only allocated when the render target is first drawn into (via RenderTarget.push). You can therefore define render targets that are never used, with negligible overhead. For the render targets that are in use, however, the video memory can quickly add up, so it pays to take care when designing your post-processing chains. The total memory used by any particular chain can be viewed in World Editor, or by calling the function PostProcessing.RenderTargets.reportMemoryUsage().
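The memory arithmetic is straightforward: one surface costs width × height × bytes-per-pixel, and only render targets that have actually been drawn into count. A sketch (the formats and sizes below are illustrative, not from any shipped chain):

```python
def chain_memory_bytes(render_targets):
    """Sum the surface memory of render targets that have been used.

    Each entry is (width, height, bytes_per_pixel, used); unused
    targets cost nothing because their surfaces are never allocated.
    """
    return sum(w * h * bpp for (w, h, bpp, used) in render_targets if used)

targets = [
    (1920, 1080, 4, True),    # full-screen R8G8B8A8, in use
    (960, 540, 8, True),      # half-size floating-point target, in use
    (1920, 1080, 4, False),   # defined but never drawn into: free
]
print(chain_memory_bytes(targets))  # 1920*1080*4 + 960*540*8 = 12441600
```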
In Python, the PostProcessing module has its own RenderTargets module, in bigworld/res/scripts/client/PostProcessing/RenderTargets. If you want to add more render targets for use by your post-processing chains, then add them to the render target list here. Doing this is necessary because this is where World Editor gets its list of available post-processing render targets.
There are two main performance metrics to be aware of when authoring post-processing chains: memory use (mainly by the render targets) and the time spent on the GPU. Render targets and their associated memory use are described in the previous chapter. As always, PostProcessing resources created dynamically should be loaded in the background thread to avoid stalling the rendering thread.
Post-processing chains tend to have a low CPU cost - involving simple iteration through effects and phases and simple geometry setup - but a high GPU cost, with complex pixel shaders that perform full-screen passes and many texture fetches per pixel. Therefore the main cost is normally GPU bandwidth: fill-rate and texture-fetch.
The BigWorld client has a Python API function, PostProcessing.profile( nIterations ), which measures the time taken by the GPU. The parameter nIterations should usually be around 10 to ensure an accurate value is measured. Since the main cost is fill-rate and texture-fetch, this value depends on the screen resolution, so it is necessary to profile on different GPUs and at different resolutions. Note that World Editor's Post-Processing panel also has a toolbar button that profiles the chain.
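Averaging over nIterations simply smooths out per-frame noise. The sketch below shows why a handful of iterations gives a more stable number than a single sample; the timings are fabricated for illustration.

```python
def profile(frame_times, n_iterations):
    """Average GPU time over the first n_iterations measurements."""
    samples = frame_times[:n_iterations]
    return sum(samples) / len(samples)

# Fabricated per-frame GPU times in milliseconds, with noise.
times = [2.1, 1.9, 2.0, 2.2, 1.8, 2.0, 2.1, 1.9, 2.0, 2.0]
print(profile(times, 1))   # 2.1 -- a single sample can mislead
print(profile(times, 10))  # about 2.0 -- averaging stabilises the estimate
```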
There is no direct support for background creation of .ppchain files, since the library uses many PyObject pointers which can only be created in the main thread. Instead, support for background loading of .ppchain files can be achieved via the PostProcessing.prerequisites() method. This extracts the appropriate resources (mainly EffectMaterials) from the XML file and returns a list of the required resources which can then be passed directly to BigWorld.loadResourceListBG().
For example:
filename = "system/post_processing/chains/underwater.ppchain"
BigWorld.loadResourceListBG( PostProcessing.prerequisites( filename ), onLoadBG )
PostProcessing chains loaded via the SFX system are automatically loaded in the background.
Because gathering the PostProcessing prerequisites relies on the .ppchain file's data section already being loaded, it is recommended that you preload all of your .ppchain XML files.