Wednesday, June 10, 2015

Ninja framework: Argument extractors

The last post introduced a simple way to automatically extract a collection of objects from a form and inject it into a controller's action. However, as classes get more complex, this approach falls short for two reasons: method signatures get bloated, and the collections have to be attached to the corresponding object (also injected into the action) by hand. If someone writes a new action using the built-in functionality and forgets to update one of the instance's fields... saves... and boom: the object's data is gone.

This functionality can be gathered into an argument extractor. Sadly, the official documentation only shows an example where a simple value is extracted from the session cookie. But what if you have to get complex form data? Ideally, one wants a clean action method signature where the instance is injected correctly. This can be done with a simple annotation:


public Result saveTrip(Context context, @argumentextractors.Trip Trip trip) {

It's important to note that you can't use the other built-in extractors (Param, Params) any more after you have parsed the request body. Additionally, your own extractor has to be the first extracting parameter in the signature.

A marker annotation specifies the extractor class:

@WithArgumentExtractor(TripExtractor.class)
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.PARAMETER})
public @interface Trip {
}

The magic then has to be implemented by yourself. To do so, extend a BodyAsExtractor<T>, where T is the type you want to extract. There are three methods to override. I don't have a clue what the third one (getFieldName()) does, but the important one is

public Trip extract(Context context) {}

Your form data has to be gathered now. It took me some time to find out how to do this - actually, you can do


// collect the raw request body line by line
StringBuilder body = new StringBuilder();
while (context.getReader().ready()) {
  body.append(context.getReader().readLine());
}

and that's it. I wasn't able to use the InputStream the context provides directly. Now the ugly part: the params string of a form like http://example.org/path/to/file?a=1&b=2&c=3 should result in a list of a=1, b=2, c=3. Since this is a common task, it's implemented in the apache commons htmlUtils - nice wordplay. I extracted a few single methods from their library, because I only use a couple of them. Now you have to apply the parsed values by hand. Worth mentioning: this can only work if the keys you use to extract everything don't change between forms. Otherwise, you would have to implement another extractor.
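Since the extracted library methods aren't shown here, the parsing step can be sketched in plain Java as well. This is a self-contained illustration (the class name FormParser and its API are mine, not from the post): it splits a URL-encoded body like a=1&b=2&c=3 into ordered name/value pairs.

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public final class FormParser {

    // Splits "a=1&b=2&c=3" into ordered (name, value) pairs,
    // URL-decoding both sides of each pair.
    public static List<Map.Entry<String, String>> parse(String body) {
        List<Map.Entry<String, String>> params = new ArrayList<>();
        for (String pair : body.split("&")) {
            if (pair.isEmpty()) continue;
            int eq = pair.indexOf('=');
            String name = eq >= 0 ? pair.substring(0, eq) : pair;
            String value = eq >= 0 ? pair.substring(eq + 1) : "";
            params.add(new AbstractMap.SimpleEntry<>(
                    URLDecoder.decode(name, StandardCharsets.UTF_8),
                    URLDecoder.decode(value, StandardCharsets.UTF_8)));
        }
        return params;
    }
}
```

A list (rather than a map) keeps the original parameter order, which matters below when the stops are read by index.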


trip.getStops().clear();
for (int x = 2; x < params.size(); x++) {
  trip.getStops().add(params.get(x).getName());
}
return trip;

The nice thing is that everyone who uses this model class can use the extractor and afterwards just has to save the instance regularly in the controller action:
manager.merge(trip);

I'm curious if this is the intended way to extract stuff from forms. It's a pity that such an important requirement isn't documented better.

Ninja framework: collection extraction from forms

Ninja quickly became one of my favorite web frameworks. For REST, MVC, dependency injection, database access and other basics, it is mostly very convenient. But what about the more complicated things web development often demands? Because the documentation is rather sparse here, this is how you can use built-in functionality to extract a collection of objects from a form.

My example has a simple edit form for a trip model. A trip can have multiple stops, for simplicity represented by Strings. With a POST route in the TripsController, Ninja can automatically parse the request, extract your form data and inject the Trip instance into the method call - one just has to add a Trip parameter to the controller's signature and it works. How great is that:


public Result saveTrip(Context context, Trip trip) {

However, the documentation states that the extraction only works with primitives and arrays of them. This means no other collections, like Lists, can be extracted automatically. But no one uses plain arrays as fields... So an easy way to work around this limitation is to add the collection's items to the form and give all of them the same name attribute:
<#list trip.stops as stop>
  <tr>
    <td><input type="text" class="form-control" id="stops[${stop_index}]" name="stops" value="${stop}" ></td>
  </tr>
</#list>

Then, add the String[] stops parameter to your signature and you're done.


public Result saveTrip(Context context, @Params("stops") String[] stops, Trip trip) {

In my case, I updated all of the trip instance's stops with the automatically injected stops and saved the object. Can't get any easier, I think.
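The update step can be sketched like this. Trip here is a minimal, hypothetical stand-in for the real model class (only the stops list matters for the example):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class StopsUpdateDemo {

    // Minimal stand-in for the Trip model from the post.
    static class Trip {
        private final List<String> stops = new ArrayList<>();
        List<String> getStops() { return stops; }
    }

    // Replaces the trip's stops with the values Ninja injected
    // from the form's String[] parameter.
    static Trip updateStops(Trip trip, String[] stops) {
        trip.getStops().clear();
        trip.getStops().addAll(Arrays.asList(stops));
        return trip;
    }
}
```

In the real controller action, this would be followed by persisting the instance, e.g. something like manager.merge(trip).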

I'm not yet sure if this would work for more complex (i.e. non-primitive) objects. For this purpose, argument extractors were introduced. The documentation is again a bit sparse about them - after a first try, it seems that argument extractors which parse the request data for object extraction tend to be a bit hacky. To be continued.

Friday, May 8, 2015

Microbenchmarking Java image loading libraries

During work on my 3D engine project, the demand to load arbitrary image formats came up. My choice was ImageIO, since it ships with Java. A short time ago, I realized that there is a cool Apache library called Commons Imaging. The main goal would be to speed up the loading process, so I finally have a reason to do some microbenchmarks, yay.

Since there are only very few tools for microbenchmarking and most of them offer poor features and documentation, I recommend using JMH. As always, documentation and examples are kind of confusing, so here's the workflow I used.

First of all, you need two dependencies - the jmh core lib and the annotation processor.

<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-core</artifactId>
    <version>1.9.1</version>
</dependency>
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-generator-annprocess</artifactId>
    <version>1.9.1</version>
</dependency>

The methods you want to benchmark can then be annotated to be automatically registered for measurement.


@Benchmark
public void commonsImage() {
    MainClass.loadImageCommonsImaging();
}

Now a piece of software is needed that runs all your annotated methods. This can be done from the command line; I prefer a solution that can be packaged as a jar or run directly from the IDE. Embedding your benchmark config in a class and writing a small main method does the job. The class you pass into the benchmark via the options is scanned for annotated methods.

public static void main(String[] args) throws RunnerException {
    Options opt = new OptionsBuilder()
            .include(ImageLoadBenchmark.class.getSimpleName())
            .forks(1)
            .build();

    new Runner(opt).run();
}

Run the main method from your IDE or export a jar. Note: if you export a jar, you have to provide the dependencies - if you don't want them to reside in your classpath, create a fat jar with the maven-assembly-plugin. I tested it, works fine. Here's the result:

Benchmark     Mode  Cnt   Score   Error  Units
commonsImage  thrpt   20   8,127 ± 0,077  ops/s
imageIOImage  thrpt   20  11,283 ± 0,105  ops/s

So it seems as if ImageIO can push more loading jobs per second than Commons Imaging. Dang it.

This is just a minimal benchmark setup - of course there's a ton of other things you could do with JMH. Have fun.

Thursday, April 16, 2015

Simple setup: Spring with Boot, Maven and IntelliJ from scratch

There will never be enough tutorials about how to use Spring with an IDE. Here's another one, in case someone wants to know how to set up a development environment with IntelliJ. The hot reloading features in particular are very important, and nobody wants to miss them. Here's how to do it.


  1. Create a new maven project. Don't use archetypes.
  2. Edit the pom.xml file to use a parent from Spring Boot that does a lot of configuration for you.
       <parent>  
         <groupId>org.springframework.boot</groupId>  
         <artifactId>spring-boot-starter-parent</artifactId>  
         <version>1.1.5.RELEASE</version>  
       </parent>  
    
  3. Also, the spring boot dependency has to be added to the pom.xml.
       <dependencies>  
         <dependency>  
           <groupId>org.springframework.boot</groupId>  
           <artifactId>spring-boot-starter-web</artifactId>  
         </dependency>  
       </dependencies>  
    
  4. The main application class is configured to enable auto configuration. For further information, one should read one of the thousands of Spring tutorials.
     @Configuration  
     @ComponentScan  
     @EnableAutoConfiguration  
     public class Application {  
       public static void main(String[] args) {  
         ConfigurableApplicationContext ctx = SpringApplication.run(Application.class, args);  
       }  
     }  
    
  5. Now you could add controllers and other stuff, and run the main class from your run configuration. To get an executable fat jar, you could use the maven-assembly-plugin (see one of my other posts). Class reloading with this run configuration should work out of the box.
  6. The convention seems to be to place resources in src/main/resources/static. Placing an index.html there makes it available via the application's root path. However, if you use src/main/webapp/ as your folder structure, you follow the standard Java web application convention and Tomcat automatically recognizes your content. You then have to access your static content via /static/index.html or similar, or you can reconfigure your routes (not covered here).
  7. If you work on your static content, you want it to be reloaded automatically. However, this doesn't happen with the configuration so far. That's because your static content is copied into a working directory - changing the original files doesn't change their copies. There may be other ways, but I successfully used the spring-boot-maven-plugin.
           <plugin>  
             <groupId>org.springframework.boot</groupId>  
             <artifactId>spring-boot-maven-plugin</artifactId>  
             <dependencies>  
               <dependency>  
                 <groupId>org.springframework</groupId>  
                 <artifactId>springloaded</artifactId>  
                 <version>1.2.0.RELEASE</version>  
               </dependency>  
             </dependencies>  
           </plugin>  
    
  8. Executing the goal spring-boot:run in your IDE will now launch the application and automatically reload your content. Debugging with breakpoints and hot reloading of classes does not seem to work with this run configuration any more. But if you want to work on the backend of your application, you can run the main class like before.
If anyone knows a better way to set up a development environment, I would be curious about it - just tell me. It would be especially nice to have only one run config for reloading static content and classes, with breakpoints and everything, altogether.

Saturday, April 11, 2015

Declarative Programming in Java With Dagger Dependency Injection

Most web developers are already used to working with Dependency Injection (DI), but other developers tend to avoid the topic in e.g. desktop applications. First of all: dependency injection only means that the inversion of control paradigm/pattern is used. As a consequence, domain objects don't obtain their dependencies (other objects, services etc.) by themselves, but have them injected (at construction time). This isn't necessarily done by a framework; the easiest way is to just have constructor arguments for all object fields. For example this
MyObject object = new MyObject(new Dependency());

class MyObject {
  private Dependency dependency;
  MyObject(Dependency dependency) {
    this.dependency = dependency;
  }
}
is better than this
MyObject object = new MyObject();

class MyObject {
  private Dependency dependency;
  MyObject() {
    this.dependency = new Dependency();
  }
}
for several reasons. The most important one is that the second example doesn't support different implementations for the dependency. Imagine you want to write tests and mock a heavy database connection: in those situations, using an interface and injecting an implementation is recommended. Second, if you change something about the dependency class, you have to touch the internals of the MyObject class, which doesn't feel right, because that class shouldn't be affected.
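The mocking argument can be made concrete with a small sketch. All names here (Repository, Service, FakeRepository) are illustrative, not from the original post:

```java
// Constructor injection lets tests swap in a lightweight fake
// for a heavy dependency, without touching the Service class.
public class DiExample {

    interface Repository {
        String load(int id);
    }

    // The "heavy" production implementation (stands in for real DB access).
    static class DatabaseRepository implements Repository {
        public String load(int id) { return "db:" + id; }
    }

    // A lightweight fake that tests can inject instead.
    static class FakeRepository implements Repository {
        public String load(int id) { return "fake:" + id; }
    }

    static class Service {
        private final Repository repository;

        Service(Repository repository) {  // the dependency is injected here
            this.repository = repository;
        }

        String describe(int id) {
            return "item " + repository.load(id);
        }
    }
}
```

Because Service only depends on the interface, exchanging DatabaseRepository for FakeRepository requires no change to Service at all.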

Once it's clear that DI should be used, one can go a step further. Instead of specifying how our dependencies are resolved, it would be nice to just say what dependencies we want to have: declarative programming. How they are resolved is easy most of the time and handled later, even in a declarative manner.

For Java SE applications, it's not trivial to choose the right framework for injection. For the platform standard CDI, an implementing library is needed. Other containers, like Google's Guice or Spring, are rather heavyweight and provide a rich set of features. However, there's a new shooting star I dove into: Dagger. It promises to be simple and fast, because it (optionally) works at compile time. And I can confirm that it's easy and nice.

For example: how many times have you used the Singleton pattern already? With Dagger, you can just annotate your class as a Singleton and the framework does everything else for you.
@Singleton
public class Config {
    public int WIDTH = 1280;
    public int HEIGHT = 720;

    @Inject
    public Config() {}
}
A small configuration for a renderer framework of mine. Can't get any simpler than this - the only odd thing is the @Inject annotation on the constructor, which tells Dagger that this is the constructor to use when instantiating the class. To obtain the service, you declare the injection in the consuming class - for example a context class of the application.
@Singleton
public class Context {
    @Inject Config config;
}
The context class is not meant to provide access to the configuration. The configuration can be injected directly into all other classes as well.

There is a ton of other features, and having objects wired together automatically is great. But I already found a job that Dagger cannot fulfill - something called assisted injection. A factory - for example for your game objects - can be injected into the context. You can inject something like a provider as well; this is an object that works like a factory but cannot take your parameters for construction. While other frameworks can inject objects with partially managed fields, Dagger can't do it yet - maybe it will in the future.

Sunday, April 5, 2015

Java + Maven: Use local dependencies with maven-assembly-plugin

Every now and then we come across dependencies that can't be found in a repository. While there is a simple way to tell Maven to use a local file as a dependency, there is one major issue: the maven-assembly-plugin will not copy your local libs into your fat jar. I found several ways to somehow tell Maven/the maven-assembly-plugin that the provided dependencies should be assembled as well - one looks scarier than the other, and none of them did what I wanted. To make it short: after ripping my hair out several times trying to satisfy such... screwed needs, the most satisfying answer is probably: don't do it this way.
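For reference, the "simple way" of using a local file is typically a system-scoped dependency; a sketch with placeholder coordinates and path (adjust to your jar):

```xml
<dependency>
  <groupId>com.example</groupId>
  <artifactId>locallib</artifactId>
  <version>1.0</version>
  <scope>system</scope>
  <systemPath>${project.basedir}/lib/locallib.jar</systemPath>
</dependency>
```

It is exactly this system scope that the maven-assembly-plugin ignores when building the fat jar.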

If there's no way to get your own dependency repository server where you can put your lib (which is usually the case for small personal projects), then just place all dependencies somewhere in your repo and let other programmers do the rest for themselves. Sounds lame, but it can be pretty easy and simple: install the jar into your local Maven repository. This is the standard way Maven recommends, and it is robust. It can be done with the install-file goal.
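The concrete command seems to have gone missing from the post. Assuming the standard install-file goal of the maven-install-plugin, the invocation looks roughly like this (file path and coordinates are placeholders):

```shell
# Install a local jar into the local Maven repository (~/.m2).
# Path and coordinates below are examples - adjust them to your lib.
mvn install:install-file \
  -Dfile=lib/locallib.jar \
  -DgroupId=com.example \
  -DartifactId=locallib \
  -Dversion=1.0 \
  -Dpackaging=jar
```

Afterwards the lib can be referenced in the pom.xml like any other dependency, with those same coordinates.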

This installs the given jar into your local Maven repository. Minimal pain, minimal effort, and it works with (probably) all plugins, because the lib then behaves like any regular dependency.

Tuesday, March 24, 2015

Get normal for cubemap texel

Since I came across this problem and wasn't able to find an easy solution on the internet, I decided to write a small recipe for calculating normals when you want to do manual mipmapping/radiance convolution with cubemaps in OpenGL. I use compute shaders; for the geometry/vertex/pixel pipeline, you could use layered rendering and other techniques.

First of all, the shader needs the current cubemap face index as a uniform variable. I recommend using the standard OpenGL indices (see link below).

Most likely, you are using the standard cubemap layout. If this is not the case, you have to change the vectors in my code. So with a given face index and a given texel position, the problem can be solved:
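The original code listing appears to be missing here. As a stand-in, here is a Java sketch of the mapping described below (the original was shader code). It assumes the standard OpenGL cubemap face order (+X, -X, +Y, -Y, +Z, -Z) and face orientations; class and method names are mine:

```java
public final class CubemapNormals {

    // Maps a cubemap face index (0..5, standard OpenGL order) and
    // normalized texel coordinates u, v in [0, 1] to the world-space
    // direction/normal pointing through that texel.
    public static float[] texelToNormal(int face, float u, float v) {
        // remap from [0, 1] to [-1, 1]
        float s = 2f * u - 1f;
        float t = 2f * v - 1f;
        float x, y, z;
        switch (face) {
            case 0: x =  1; y = -t; z = -s; break; // +X
            case 1: x = -1; y = -t; z =  s; break; // -X
            case 2: x =  s; y =  1; z =  t; break; // +Y
            case 3: x =  s; y = -1; z = -t; break; // -Y
            case 4: x =  s; y = -t; z =  1; break; // +Z
            case 5: x = -s; y = -t; z = -1; break; // -Z
            default: throw new IllegalArgumentException("face must be 0..5");
        }
        float len = (float) Math.sqrt(x * x + y * y + z * z);
        return new float[] { x / len, y / len, z / len };
    }
}
```

In a compute shader, u and v would come from the invocation ID divided by the face resolution; the switch over the face index translates directly into GLSL.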




What happens here is that I calculate the pixel position in texture space with the help of the invocation position. The compute shader is invoked with cubemapfaceResolution.x/16, cubemapfaceResolution.y/16, 1. Knowing which (OpenGL) world axis the virtual camera faces when looking at the current cubemap side from the inside (the cubemap's origin), the other two axes are the two orthogonal ones. These two axes' values grow with the texel coordinates we already have, but they have to be remapped from [0, 1] to [-1, 1] first. The resulting vector can be used to sample a cubemap as it is. Normalization may be unnecessary.