Tuesday, February 17, 2015

Using Go with eclipse

I like how different platforms like Java and C++, and build tools like maven, gradle etc., are integrated into eclipse via its extension mechanisms. So the first fight every tool that wants me to use it has to win is its eclipse integration. Because using Google's Go with eclipse isn't a complete no-brainer, I want to share the way I did my setup. So here's a guide on how you can set up Go on Windows, with eclipse Luna.


  1. Install your go distribution
    You can download your distribution from the official download page. The installation is quite easy. After completion, ensure that the environment variable GOROOT points to your Go folder. The variable GOPATH should point to your chosen workspace, meaning wherever you want to place your Go projects, or where you have libs or other Go code. This sounds like an easy job, but it seems to be confusing for half the internet, me included. It turns out that every directory in the Go path has to have a well-defined structure, which you can find here. We will take another look later, when the run configuration in eclipse is configured. You can check whether Go is properly installed via a shell with "go version".
  2. Install eclipse integration
    Within eclipse (Luna), go to Help -> Install new software and use http://goclipse.github.io/releases/ as the path. Choose the GoClipse project and install it.
  3. Create a Go project
    This is fairly easy: Restart your eclipse and choose File -> New Project -> Go Project. You can check the box for automatic main method generation. Now slow down, first chance to waste time ahead: You should create a subfolder in src, because if you just place your first Go file directly in the source folder and try to execute it, you will get go install: no install location for directory as a response. Also not nice: eclipse auto-generates a main.go file and places it right in the wrong folder... It took me half an hour to figure out that the cause was not my Go installation, but that I had placed the source file in the top source folder. Don't do that. Create a Go file with your main function, if not already done.
  4. Adjust your path
    As said before, the GOPATH variable points to your workspace. If you have several IDEs or several projects, it can be useful to use the run configurations to override the path. To do so, open Window -> Preferences -> Go. This is where you can set a path for your eclipse executions if your default path is not the right one.
  5. Add a debugger
    Probably the most interesting part. Since Go is a compiled language, you have to work with debug symbols. Go's toolchain emits debug information that GDB can read, so you can use the GDB. On Windows, you may have trouble finding a GDB build, so here is your link. Install it wherever you like and afterwards go back to your eclipse. Right-click your project and choose Debug Configurations. Select the Debugger tab and tell your IDE where your gdb.exe can be found. Uncheck the box Stop on startup at main, because this won't work with your Go program, and apply the settings. Now you can use eclipse debugging as you know it, with breakpoints and stepping and stuff. Remember that Go programs are not fully compatible with GDB, so there will be some issues. Let me correct myself: there will be many issues. It doesn't work very nicely at all, but at least you can use it.

Compute shader advice

Recently, I had a lot of pleasure with OpenGL's compute shaders. With this lot of pleasure came a lot of pain, because I made some (rookie) mistakes. So I want to share my experience and some advice I have, just in case you run into trouble too:

  • The first thing you should check is your texture formats! No, really, double-check them, don't repeat my mistakes. In your compute shaders, you can use your images (not textures) with

    glBindImageTexture(unit, textureId, 0, false, 0, GL_WRITE_ONLY, GL30.GL_RGBA16F);

    available since OpenGL version 4.2, as an output texture. Of course you can use GL_READ_ONLY or GL_READ_WRITE if you use the texture differently. Also keep in mind that this call binds an image, not a texture - that's why you have to provide the mipmap level you want to attach. I once used the wrong format, namely rgba32f, which my rendertarget attachments didn't have, and it resulted in nonexistent output from my compute shader. Very frustrating, but correct behaviour.
  • Keep in mind that you can use your regular textures via samplers in your compute shaders, too. Simply bind the texture and have a line similar to this in your shader

    layout(binding = 1) uniform sampler2D normalMap;

    That's helpful if you want to access mip levels easily.
  • Since even the OpenGL SuperBible has a typo that doesn't help with understanding the compute shader built-ins, I'll recap them.
    With dispatchCompute you have to provide three values: the group counts for each dimension. A compute shader pass is executed by a large number of threads, and defining clever group counts/sizes will help you process your data. In graphics-related cases, you will mostly use compute shaders to render to a texture target. So it would be clever to have a certain, two-dimensional number of threads, wouldn't it? Define your group sizes corresponding to your image size: a 320*320 image can be divided into 10*10 groups, or tiles, each with 32*32 pixels in it. So you define your group size as 32, 32, 1. Now you can dispatch 320/group size groups, which will be 10 groups, in the x and y dimensions. In your shader, the built-in gl_WorkGroupSize gives you the group size in every invocation of your shader's main method. To uniquely identify an invocation, you can use gl_GlobalInvocationID. If you use your shader as in this example, it contains exactly the position of the texel the invocation has to write. And that's how you can use compute shaders to manipulate your textures. Additionally, there is gl_WorkGroupID, which identifies the invocation's tile/group, and gl_LocalInvocationID, which is the pixel's position within its tile. Sometimes a flattened identifier can be useful - for example when you have a task that requires performing an action just 12 times, but it has to be done in the compute shader - and for that you can use gl_LocalInvocationIndex. You can use it in a conditional to limit some code paths like

    if(gl_LocalInvocationIndex < MAX_ITEMS) { processItem(); }

    For a better understanding, have a look at this post, which has a nice picture and another explanation of the group layout.
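The group math above can be sketched in plain Java. The helper and constant names here are illustrative, not from any real engine; the actual dispatch call (commented out) would be LWJGL's GL43.glDispatchCompute and requires a GL context:

```java
public class DispatchMath {
    // Local work group size, matching layout(local_size_x = 32, local_size_y = 32, local_size_z = 1)
    // declared in the compute shader.
    static final int GROUP_SIZE = 32;

    // Number of groups needed to cover 'extent' pixels, rounding up
    // so sizes that aren't a multiple of 32 are still fully covered.
    static int groupCount(int extent) {
        return (extent + GROUP_SIZE - 1) / GROUP_SIZE;
    }

    public static void main(String[] args) {
        // A 320*320 target needs 10*10 groups of 32*32 invocations each.
        System.out.println(groupCount(320)); // prints 10
        // With a GL context, the dispatch would then be:
        // GL43.glDispatchCompute(groupCount(width), groupCount(height), 1);
    }
}
```

For non-divisible sizes the rounding up means some invocations fall outside the image; guard the write with a bounds check on gl_GlobalInvocationID in the shader.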

What else? Compute shaders are awesome! I like how easy it is to invoke them, independent of something like the graphics pipeline. Use compute shaders!

Friday, February 13, 2015

Quick look at my OpenGL engine

I just want to share a small screenshot of my OpenGL rendering project. It includes physically based rendering, a global illumination concept developed by myself, and realtime (glossy) reflections, as you can see on the screenshot. Realtime of course - seen on my GTX 770 in full HD at up to 300 fps.


Java 8 default methods for your game engine transformations

Although transformations and class hierarchies in game engines are a topic of their own for sure, I finally arrived at a point where I just want every single one of my regular game objects to be a transformable entity. The objects that can't be transformed are outside of my interest; they get some default behaviour I don't care about. I think that is the way the Unity engine took, too. A shared superclass might look like a good idea, but we are often warned about that kind of class hierarchy.

While C++ offers multiple inheritance to implement things like this very easily, in languages like Java you probably have to use composition - which should be favored over inheritance anyway. The problem is that sometimes you get lost in interfaces and class fields... and find yourself writing interface implementations again and again, delegating interface calls to field objects.

That last sentence is the catchword: I tried to learn about Java 8's new features and stumbled across default methods. While mainly created to guarantee binary compatibility when changing interfaces, they offer a nice way to implement transformations within one file, with (nearly) no other implementations needed. Here's how I did it:

I have a class called Transform. It holds a position, an orientation and a scale, and is mostly a data holder. Additionally, I created an interface called Transformable. If this were a class, I would have implemented the state (which now lives in the Transform class) there. But it's an interface. My game objects implement this interface, so I would have to implement all those move(vec3 amount)-etc. methods in every implementation. With default methods, I can now provide implementations on the interface itself - combined with a pattern whose name I can't remember anymore, this is powerful: methods implemented by the interface can be called by the interface. This means I can use a non-default-implemented method getTransform() on the interface in my default methods.

For all classes that implement Transformable, it's sufficient to provide a transformation, because only getTransform() needs to be implemented. That's because interfaces are not allowed to have state, so one has to add the field to the implementing class.
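A minimal sketch of this pattern (the class and method names besides Transform, Transformable, getTransform() and move() are my illustrative choices, and a float[] stands in for a real vector type):

```java
// Plain data holder for the transformation state; here reduced to a position.
class Transform {
    float[] position = {0f, 0f, 0f};

    void translate(float[] amount) {
        for (int i = 0; i < 3; i++) position[i] += amount[i];
    }
}

// The interface carries the behaviour; implementors only provide the state.
interface Transformable {
    Transform getTransform(); // the single method a class has to implement

    // Default methods may call getTransform(), so all movement logic
    // lives here, in one file, instead of in every implementing class.
    default void move(float[] amount) {
        getTransform().translate(amount);
    }
}

// A game object only needs to hold a Transform field and hand it out.
class GameObject implements Transformable {
    private final Transform transform = new Transform();

    @Override
    public Transform getTransform() { return transform; }
}
```

A physics-driven object could instead return its physics component's transform from getTransform(), and every default method automatically operates on that.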

Where this really shines is in situations where you would use (method) pointers in C++: When you have an object besides your regular game objects that has to be transformable, but is attached to another object that controls its transformation, you can implement the getTransform() method to return a field object's transform. The best example is game objects that have a physics component attached, which should win every transformation war.

Additionally, I made the game object interface extend my transformable interface, so that I can have different entity types for game objects, lights, environment probes etc. Some of them are not movable, like a directional light - so I can override the directional light's move methods to do nothing. And then, the sub-interface can call its super-interface's methods, too: For example, the default implementation of the game object interface's isDirty() method uses the super-interface's hasMoved() method and its own animationHasPlayed() method, if it's an animated entity.
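The sub-interface composition might look like this (hasMoved(), isDirty() and animationHasPlayed() are named in the text; everything else, including how the flags are produced, is assumed for illustration):

```java
interface Movable {
    // Implemented by the entity, e.g. by comparing the current
    // transform against the one from the last frame.
    boolean hasMoved();
}

interface Entity extends Movable {
    boolean isAnimated();
    boolean animationHasPlayed();

    // The sub-interface's default method combines the super-interface's
    // hasMoved() with its own animation state.
    default boolean isDirty() {
        return hasMoved() || (isAnimated() && animationHasPlayed());
    }
}

// A static entity: never moves, never animates, so it is never dirty.
class Probe implements Entity {
    public boolean hasMoved() { return false; }
    public boolean isAnimated() { return false; }
    public boolean animationHasPlayed() { return false; }
}
```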

So for the maximum price of a method call that the interface has to do on itself when a transformation changes, you can have your transformations interfaced. My experience is that I have very few problems with undesired class hierarchies in the sense of "oh no, now I have to implement x or subclass y, but it's not clean code". As always, I'm still a bit uncertain whether this is a good use of default methods. But at least I gave them a try and I don't regret my design decision - let me know why this stuff is bullshit, I'm curious :)

Copy textures in OpenGL

Often, people need to postprocess textures, for example with a blur. While it's sometimes possible to render to and sample from the same texture in OpenGL, it's not recommended as long as rendering and sampling use the same mipmap level. Some cards and drivers let you do exactly this, but I guess most of the times you want to use kernels, you're screwed, because pixels are processed in parallel.

One common approach is to use something called ping-ponging. You bind the texture you sample from to a texture unit and render to another texture. However, all other application components then have to be aware that your first texture doesn't contain the result they need, and thus have to use the other texture, meaning the other texture handle id. This is sometimes very inconvenient and I didn't want to clutter my code - so I checked out an alternative approach that modern OpenGL provides us: copying textures.

With earlier OpenGL versions, you had to do a fullscreen quad render pass or a framebuffer blit to duplicate textures; with version 4.3 you can do the equivalent of a memcpy via glCopyImageSubData. I duplicated my texture, set my source texture as a color attachment of a temporary rendertarget and set the duplicated texture to a texture unit for sampling. My method looks like this (Java, using lwjgl):
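The original code block is missing here, so this is only a sketch of what such a method might look like with LWJGL's GL43.glCopyImageSubData (the method and parameter names are assumed; it requires a GL 4.3 context):

```java
// Sketch: copy mip level 0 of one complete 2D texture into another.
// Both textures must already exist, be mipmap-complete and have
// compatible internal formats.
static void copyTexture(int srcTextureId, int dstTextureId, int width, int height) {
    GL43.glCopyImageSubData(
        srcTextureId, GL11.GL_TEXTURE_2D, 0, 0, 0, 0, // source: level 0, offset (0,0,0)
        dstTextureId, GL11.GL_TEXTURE_2D, 0, 0, 0, 0, // destination: level 0, offset (0,0,0)
        width, height, 1);                            // extent: depth 1 for a 2D texture
}
```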


I modified the code so that it doesn't take a texture object but the attributes you know from OpenGL. Copying a 1280x720 texture takes around 0.2 ms on my GTX 770. I'm pretty sure it doesn't take much more time for a larger texture, but if you want me to test it, just leave a comment. Or if you need additional explanations. Somewhere I saw people having trouble with this simple functionality, and most of the time it was because their textures were incomplete. That's why I added all those filter attributes etc.