RAW to Albedo

As you probably know, in physically-based rendering the albedo is the base color of a surface, without any lighting information such as shadows, occlusion, or specular and diffuse lighting. And that is the primary problem with using photos (or photoscans) in PBR: they come with the lighting baked in.

While working with PBR and my photoscans, I found a simple technique to make ‘albedo’ maps from (almost) any tiled photo. Although it is possible to reproduce the effect manually (read how it works below), for example in Photoshop or GIMP, I made filters for Filter Forge and Allegorithmic Substance Designer to simplify this process. Here they are:

Filter Forge:

Download RAWtoAlbedo for Filter Forge and unzip the archive into your filters folder.

Substance Designer:

Download RAWtoAlbedo for Substance Designer (5 or newer), unzip it anywhere, find RAWtoAlbedo.sbs and add it to your library. There is also RAWtoAlbedoTest.sbs, which contains the examples from this article and uses RAWtoAlbedo.sbs as a child node.

Example results:

 

How it works

So, we want to suppress shadows and highlights while retaining the texture and color. Obviously, we want to edit the luma and chroma values of the image separately. The LAB color space is perfect for this: it splits the image into three channels, L (Lightness) and A/B, the chroma (color information).
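If you prefer to prototype outside of an image editor, here is a minimal sketch of that split in Python, assuming NumPy and scikit-image are available; the file name is just a placeholder:

```python
from skimage import color, io

# Load a seamlessly tiling photo and split it into LAB channels
rgb = io.imread("tiled_photo.png")[..., :3] / 255.0   # placeholder file name
lab = color.rgb2lab(rgb)       # L is in 0..100, A/B roughly -128..127
L = lab[..., 0] / 100.0        # normalized luma, edited separately from the chroma below
```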


Let’s work with the luma first. The core of the trick is to replace problematic areas with some ‘natural looking noise’, and best of all if this ‘noise’ matches the texture itself. Let’s try this: duplicate the layer, shift it randomly (or mirror it, or rotate it by 90°) and blend it using ‘Max’ mode (‘Lighten’ in Photoshop). As you might notice, this requires the source image to be tiled (seamless).

This suppresses some darker areas using the image pattern itself. Copy merged, paste into a new layer, and shift/mirror it randomly again. Set the blend mode to ‘Min’ (‘Darken’ in Photoshop); this will suppress overbright areas (mostly speculars). Now we have another level of ‘average noise’. Repeat these two steps until you get something really lighting-neutral, and that will be the average luminosity of your surface. I actually use only three blend steps, Max-Min-Max, and tweak the blend opacity. Here they are:
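The same Max-Min-Max sequence can be sketched in NumPy. Here np.roll stands in for the random shift (it wraps around, which only works because the texture tiles), and the opacities are just illustrative starting values, not numbers from the filters:

```python
import numpy as np

def blend_pass(luma, mode, opacity, rng):
    """One step: randomly shift the (tiling) luma and combine it with Max or Min blending."""
    dy, dx = rng.integers(luma.shape[0]), rng.integers(luma.shape[1])
    shifted = np.roll(luma, shift=(dy, dx), axis=(0, 1))   # wrap-around shift keeps the tiling intact
    blended = np.maximum(luma, shifted) if mode == "max" else np.minimum(luma, shifted)
    return luma * (1.0 - opacity) + blended * opacity      # tweakable blend opacity

def average_luma(luma, opacities=(1.0, 0.75, 0.5), seed=0):
    """Three blend steps, Max-Min-Max; add more passes if the result is not lighting-neutral enough."""
    rng = np.random.default_rng(seed)
    out = luma
    for mode, opacity in zip(("max", "min", "max"), opacities):
        out = blend_pass(out, mode, opacity, rng)
    return out
```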

As you can see, the idea behind it is pretty simple, but it gives us a nice average pattern. We can also extract some fake ambient occlusion by simply subtracting the result from the source and inverting it. Replacing the luminosity channel in LAB space gives us a simplified base color (to correct the albedo values, use the reference charts linked below).
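Continuing the sketches above, the fake occlusion and the simplified base color might look like this; the direction of the subtraction is my reading of the step (flip it if your shadows come out inverted):

```python
L_avg = average_luma(L)

# Fake ambient occlusion: where the source is darker than the averaged luma, mark it as occluded
ao = 1.0 - np.clip(L_avg - L, 0.0, 1.0)

# Simplified base color: put the averaged luma back into the L channel, keep the original chroma
lab_albedo = lab.copy()
lab_albedo[..., 0] = np.clip(L_avg, 0.0, 1.0) * 100.0
albedo = color.lab2rgb(lab_albedo)
```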

Not bad, but we can still make it better.

Preserving Details

Let’s look at the next image. It has many critical details, and the simplified method makes it just too messy:

We can resolve this by blending our ‘average noise’ with the source using a mask that contains our detail areas. Getting it is also pretty simple: most of the time the details lie in the brighter areas, and we can smoothly tweak the blending mask using curves to find a better result. I use the ‘Gain’ curve in Filter Forge and ‘Levels’ in Substance Designer. This is how it looks after tweaking the blending mask:

Maybe not ideal, but still better than the previous result.
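In code, the detail mask can be nothing more than a Levels-style remap of the source luma (brighter pixels are assumed to carry the detail); the low/high values here are placeholders to tweak per image:

```python
import numpy as np

def preserve_details(L_avg, L_src, low=0.5, high=0.9):
    """Blend the averaged luma back toward the source wherever the detail mask is strong."""
    mask = np.clip((L_src - low) / (high - low), 0.0, 1.0)   # Levels/Gain-style remap of the source luma
    return L_avg * (1.0 - mask) + L_src * mask
```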

Correct chroma in dark areas

Notice how the previously shadowed areas look in the result:

They are gray and wrong; it looks like there is almost no chromatic information in these areas. We can correct this by randomly shifting the source chroma (A/B channels) and blending it in using an inverted luminosity mask (which can also be tweaked with curves). This trick assumes that most of your image is not dark, i.e. that it has chromatic information somewhere.
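The chroma fix can be sketched the same way: an inverted luminosity mask selects the dark areas, and randomly shifted A/B channels fill them in. The threshold values are again placeholders:

```python
import numpy as np

def correct_dark_chroma(lab, low=0.1, high=0.4, seed=1):
    """Replace chroma in dark areas with randomly shifted chroma from elsewhere in the image."""
    rng = np.random.default_rng(seed)
    L = lab[..., 0] / 100.0
    mask = np.clip((high - L) / (high - low), 0.0, 1.0)      # inverted luminosity mask: 1 in shadows
    dy, dx = rng.integers(lab.shape[0]), rng.integers(lab.shape[1])
    shifted_ab = np.roll(lab[..., 1:], shift=(dy, dx), axis=(0, 1))
    out = lab.copy()
    out[..., 1:] = lab[..., 1:] * (1.0 - mask[..., None]) + shifted_ab * mask[..., None]
    return out
```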

Two-pass method

I found that sometimes it is better to process the source with two serial passes and tweak them both. The setup in SD looks like this:
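If the pieces above were wrapped into a single raw_to_albedo() helper (a hypothetical name, not something from the filters), the two-pass variant would simply feed the first result back in as a new source:

```python
# Two serial passes: the output of the first pass becomes the 'source' of the second
pass1 = raw_to_albedo(rgb, opacities=(1.0, 0.75, 0.5))    # hypothetical wrapper over the steps above
pass2 = raw_to_albedo(pass1, opacities=(0.6, 0.45, 0.3))  # gentler settings for the second pass
```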

Usage

To automate this process I made filters for Filter Forge and Substance Designer. There is no significant difference in the results, but the implementation is slightly different. Filter Forge can work with the LAB color space natively, and it has some pretty nice edge-based antialiasing, so the results sometimes look more ‘integrated’ and clean, but for me it is also quite slow. I didn’t find a LAB color space workflow in Substance Designer, but there is a built-in “Chroma” node, which gives exactly the same result as A/B with L=0.5, so there were no problems with the implementation, and SD works much faster with immediate realtime response.

Initially the filter just makes an ‘average noise’ albedo for you; if you are happy with that, just leave it as is. To preserve details, enable the “Use details mask” checkbox and tweak the parameters below it. ‘Correct Chroma in dark areas’ blends just that, and “Dark areas treshold” tweaks the curve of the mask. Enabling “Extract Occlusion” gives you an additional output in SD; in FF it outputs the occlusion as the main image.

It is also good practice to correct the result using some albedo reference charts:

Conclusion

So I use this trick to clean the lighting out of my photoscans and to add a little bit of ‘natural noise’ to my procedurally generated materials (because sometimes they look just too synthetic). It doesn’t generate any normals or heightmaps; it is much better to get those from photoscans, bake them from 3D, or make them manually.
