Sunday, July 31, 2016

Simple deferred translucent foliage rendering

Translucency seems to be one of the new illumination features that every graphics engine has to provide these days. The effect is most noticeable on organics, like skin (ears, for example) and plant leaves. For the latter, there's a nice and easy way to implement it - but this solution only works for very thin objects. This means double-sided triangles, like paper, curtains, or leaves. Other cases would be more complicated.

You should already have a mechanism to support multiple material types in your deferred renderer. You also need to disable backface culling, or the back side of the object won't be rendered at all. Let's talk about direct illumination first. Light that traveled through the object and direct light are treated at the same time. Since we assume that our translucent objects are infinitely thin, we don't need to determine a thickness, as you normally would. An artificial thickness can be provided via a per-object parameter or through a texture. One can assume that the texture coordinates are the same for the point on the front and the back side of the currently rendered fragment. For normals, another assumption is possible: the normal on the back side should be the current fragment's normal, just negated. Using different normals or a different diffuse texture for the back side is not easy to add here, since you can't know whether you are currently rendering the front or the back side of the object.
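To make the setup concrete, here is a minimal host-side sketch in C++. The program handle and the "thickness" uniform are made-up names for illustration, assuming the artificial thickness comes in as a per-object uniform:

```cpp
#include <GL/glew.h>

// Sketch: host-side state for the translucent pass. foliageProgram and the
// "thickness" uniform are illustrative names, not taken from any engine.
void drawTranslucentPass(GLuint foliageProgram, float thickness) {
    glDisable(GL_CULL_FACE);  // both sides of the thin geometry must rasterize
    glUseProgram(foliageProgram);
    glUniform1f(glGetUniformLocation(foliageProgram, "thickness"), thickness);
    // ... issue the draw calls for the translucent objects here ...
    glEnable(GL_CULL_FACE);   // restore culling for the opaque geometry
}
```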

After calculating the regular lighting, calculate the lighting with the artificial backface normal and multiply it by the object's thickness at this point. Add the two values and you're done. Even though the usage is limited to very thin, uniformly colored objects, this can look very nice, for example with a curtain:
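In code, the whole trick is just one extra diffuse term. Here is a sketch of that fragment logic, written in C++ with glm for readability - in the engine this would of course live in the fragment shader, and all names are illustrative:

```cpp
#include <glm/glm.hpp>
using glm::vec3;

// Sketch of the thin-translucency term described above. N is the front-face
// normal, L the direction towards the light; all vectors are normalized.
vec3 shadeThinTranslucent(vec3 albedo, vec3 lightColor,
                          vec3 N, vec3 L, float thickness) {
    // Regular diffuse lighting for the front side.
    float front = glm::max(glm::dot(N, L), 0.0f);
    // The same lighting evaluated with the negated normal stands in for the
    // back side, scaled by the artificial thickness at this fragment.
    float back = glm::max(glm::dot(-N, L), 0.0f) * thickness;
    // Add the two contributions.
    return albedo * lightColor * (front + back);
}
```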


Note the subtle shadow of the sphere above the curtain, which is now visible from below the curtain.

Why I dumped bindless textures

Nowadays, graphics APIs tend to provide bindless access to resources - no more texture binding points was the promise OpenGL made. Curious about how I could enhance my code with the ARB_bindless_texture extension, I started changing my engine so that no texture.bind(int index) call was necessary anymore.
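For reference, acquiring such a handle takes only a few calls. A minimal sketch, assuming an extension loader like GLEW that exposes ARB_bindless_texture:

```cpp
#include <GL/glew.h>

// Sketch: instead of glBindTexture plus a binding point, take a 64-bit handle
// once and make it resident; shaders can then sample via the handle directly.
GLuint64 makeBindlessHandle(GLuint texture) {
    GLuint64 handle = glGetTextureHandleARB(texture); // valid for the texture's lifetime
    glMakeTextureHandleResidentARB(handle);           // must be resident before shader use
    return handle;
}
```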

After I successfully implemented this feature, I was very glad, because instanced rendering can now be done with different textures per object in one draw call, since the texture ids are accessible via a global buffer object.
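A sketch of that global buffer: one 64-bit handle per instance, uploaded to a shader storage buffer that the shader would index with gl_InstanceID. The binding index and all names here are assumptions, not the engine's actual layout:

```cpp
#include <GL/glew.h>
#include <vector>

// Sketch: upload one bindless handle per instance into an SSBO. The shader
// would declare a matching buffer of handles and index it per instance.
GLuint uploadTextureHandles(const std::vector<GLuint64>& handles) {
    GLuint ssbo = 0;
    glGenBuffers(1, &ssbo);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
    glBufferData(GL_SHADER_STORAGE_BUFFER,
                 handles.size() * sizeof(GLuint64),
                 handles.data(), GL_STATIC_DRAW);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo); // binding = 0, assumed
    return ssbo;
}
```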

However, one thing is very, very, very uncomfortable with bindless textures (nowadays), and that's an even more important feature for a game engine: texture streaming. I had implemented texture streaming with regular textures: for each texture, a timer records when the texture was used last (meaning when it was bound). When a certain threshold is reached (which could depend on the amount of VRAM available, the distance to an object, etc.), all but the smallest mipmap levels of the texture are freed. No change is needed on the shader side of things. But this requires the texture to be mutable. Keep in mind that this has nothing to do with the question of whether the texture's contents are mutable or not. And that's the whole problem: bindless textures are immutable. No way around it. The consequence is that you can't modify your minimum mip levels after creation.
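A sketch of that bookkeeping, with made-up names and a deliberately simplistic eviction: the mutable texture is re-specified so that only a tiny mip remains, while the texture id - and therefore the shader side - stays untouched. Whether and when the driver actually releases the memory is up to the driver:

```cpp
#include <GL/glew.h>
#include <unordered_map>

// Hypothetical helper: returns a CPU-side copy of the texture's smallest mip.
const void* downloadSmallestMip(GLuint texture);

// Hypothetical bookkeeping: lastUse is refreshed on every bind.
std::unordered_map<GLuint, double> lastUse;

void bindTracked(GLuint texture, int unit, double now) {
    glActiveTexture(GL_TEXTURE0 + unit);
    glBindTexture(GL_TEXTURE_2D, texture);
    lastUse[texture] = now;
}

// Evict textures unused for longer than threshold seconds. With a *mutable*
// texture this is legal: re-specify the stack so the smallest mip's data
// becomes level 0. Same texture id, so shaders and binding code don't change.
// (A guard against re-evicting the same texture is omitted for brevity.)
void evictUnused(double now, double threshold) {
    for (auto& [texture, t] : lastUse) {
        if (now - t > threshold) {
            glBindTexture(GL_TEXTURE_2D, texture);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 4, 4, 0,  // 4x4 assumed
                         GL_RGBA, GL_UNSIGNED_BYTE, downloadSmallestMip(texture));
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
        }
    }
}
```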

So how to implement texture streaming in this scenario? I tried to create two texture objects per texture - one with the full mipmap stack, one with only the smallest level. My global buffer gets the ids of both. If a texture wasn't used for a long time, the complete one is discarded - but how would my shader then know whether it should sample the small-mipmap texture or the regular texture? There is a second part of the sparse_texture_ext extension that exposes an API letting your shader figure out whether the texture is resident - sadly, that wasn't available on my GTX 770. And then I have to recreate the regular texture if it is needed again, but its id changed, so I have to change all referenced ids in my global buffer. In the end, nothing is gained if you want to use bindless textures for your materials: you have to keep track of deleted/created textures, which causes overhead for buffer updates and increases the complexity of your code, and the APIs you need are most probably not available on your GPU. That's why I dropped them.
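For illustration, a rough sketch of the two-handle layout I tried; picking between the two handles per fragment is exactly what the shader can't do without the residency query from the missing extension part:

```cpp
#include <GL/glew.h>

// Sketch of the attempted workaround: one record per material texture,
// both handles stored in the global buffer.
struct StreamedTextureEntry {
    GLuint64 fullHandle;  // handle of the texture with the complete mip stack
    GLuint64 smallHandle; // handle of the tiny fallback texture
};

// When the full texture is discarded and later recreated, fullHandle changes,
// so every record referencing it must be rewritten in the buffer - the
// bookkeeping overhead described above.
void onTextureRecreated(StreamedTextureEntry& entry, GLuint newTexture) {
    entry.fullHandle = glGetTextureHandleARB(newTexture);
    glMakeTextureHandleResidentARB(entry.fullHandle);
    // ...and the containing SSBO region must be re-uploaded afterwards.
}
```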

If anyone out there has a hint on how to properly implement texture streaming with bindless textures, and how to get along with the missing second part of the extension, I'd be interested.