Tuesday, December 7, 2021

CHIP-8 emulator with Kotlin multiplatform

My main spare time project got a bit boring after some years and I wanted to do something small, rewarding and refreshing to test drive some cool new things. For example, I wanted to see what the current state of Kotlin multiplatform development is, after it was already really enjoyable for JVM/JS development for me back in 2019. I am also very interested in GraalVM; my mediocre experience from 2018/2019 needs to be overridden with something happier. So I decided to implement a CHIP-8 emulator, like ... nearly everyone who's doing software development already did before I had the idea. So I did it when it wasn't cool anymore, is that okay?

CHIP-8

I won't waste your time explaining all the details - CHIP-8 is already so well known and has been implemented a dozen times for every platform you can think of, you can easily google it yourself. I'll use my remaining words to say that I am very thankful for this blog, this post and this Wikipedia page! Those three pages contain all the information you need to implement an emulator yourself. Additionally, there are some repositories where you can get ROMs, like a nice ROM to test your emulator, or some fun game ROMs.

Kotlin Unsigned Types

Kotlin the language is heavily influenced by the JVM, as it is its main target. That causes some friction when implementing something as low level as an emulator. For example, it's very nice to have language support for unsigned types, which makes a lot of sense for indices. The support, however, ends where anything related to the JVM appears, which is unfortunately the case for array indexing: the API expects a signed int as an index. Since there is no implicit conversion in the language, a program counter that could essentially be an unsigned int needs to be explicitly converted with statements like

val firstByte = memory[programCounter.toInt()]
val secondByte = memory[(programCounter.toInt() + 1)]

which is not too nice. It's not super important, as CHIP-8 only uses 4K of memory, so we don't need all the bits of the int, but still. Furthermore, there are no bit shift operators on bytes, neither signed nor unsigned, so a conversion to integer is always necessary after fetching instruction bytes from memory, like so:

val a = (firstByte.toUInt() shr 4) and 0b0000000000001111u

Kotlin when statements

When expressions with a subject allow for very nice matching code for the opcodes. Take a look at this, as I think it is really readable. Maybe in the future, passing this into all OpCode constructors can be removed with contextual constructors. I had to smirk a bit, because my main source of info for the emulator recommended to just inline all the calls and not do a lot of architecture, but yeah, I couldn't resist and I think it was for the better.
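To give an idea without the full listing, here is a reduced sketch - the opcode classes and the handled cases are made up for illustration, the real decoder covers all opcodes and passes more state around:

sealed class OpCode
object ClearScreen : OpCode()
object Return : OpCode()
data class Jump(val address: UInt) : OpCode()
data class Call(val address: UInt) : OpCode()
data class Unknown(val instruction: UInt) : OpCode()

fun decode(instruction: UInt): OpCode {
    val firstNibble = (instruction shr 12) and 0xFu // selects the opcode family
    val nnn = instruction and 0xFFFu                // lowest 12 bits, usually an address

    return when (firstNibble) {
        0x0u -> when (instruction) {
            0x00E0u -> ClearScreen
            0x00EEu -> Return
            else -> Unknown(instruction)
        }
        0x1u -> Jump(nnn)
        0x2u -> Call(nnn)
        else -> Unknown(instruction)
    }
}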

Multiplatform

Spoiler alert: I wasn't able to complete the emulator for Kotlin native targets like Windows or Linux.

So I started implementing everything in the common source set that can be compiled to all supported platforms; I planned for JavaScript, Windows, Linux and the JVM. The first minor thing that was missing was a BitSet. I was able to resolve that with expect/actual pairs that point to the JVM implementation on the JVM and a simple implementation I wrote myself for native. The next thing that was missing in common sources was input handling. I wasn't able to just access the native APIs; I didn't even manage to get autocompletion working. I then tried to just add two multiplatform libraries for input handling and gave up after some weird linkage errors I wasn't able to resolve. I wasn't successful after 4 hours of work and I don't have that much time, so I conclude that the state of Kotlin multiplatform for native targets hasn't changed that much compared to my last try in 2019.
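Roughly sketched, such an expect/actual pair looks like this (the API is reduced to two operators here, the real thing offers more):

// common source set
expect class BitSet(size: Int) {
    operator fun get(index: Int): Boolean
    operator fun set(index: Int, value: Boolean)
}

// JVM source set: just point to the existing java.util.BitSet
actual class BitSet actual constructor(size: Int) {
    private val delegate = java.util.BitSet(size)
    actual operator fun get(index: Int): Boolean = delegate.get(index)
    actual operator fun set(index: Int, value: Boolean) { delegate.set(index, value) }
}

// native source set: a trivial implementation backed by a BooleanArray
actual class BitSet actual constructor(size: Int) {
    private val bits = BooleanArray(size)
    actual operator fun get(index: Int): Boolean = bits[index]
    actual operator fun set(index: Int, value: Boolean) { bits[index] = value }
}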

GraalVM

Now this one was really interesting for me. Simply bundling an existing web application with a (normally big) bunch of dependencies wasn't easy or even possible in 2019, as the ecosystem lacked tooling around Kotlin, reflection, GraalVM and the native image tool. This time, I had this nice Gradle plugin, which is how I wish tooling would always be. Sadly, this time I had to use Windows, and for Windows one needs to jump through some additional hoops, namely using either the Windows SDK or the Visual Studio Build Tools, which both need to be installed manually by clicking through a bunch of websites and wizards. Of course the described way didn't work for me out of the box, as the 2021 version of Visual Studio somehow uses different folder structures, so I needed to override the windowsVsVarsPath property in order to get it to run. After that, the compilation process just worked and finished my application in under 2 minutes, including some downloads, which is just NICE, I have to say. The size of the executable (a .exe file!) is around 7MB, which is nice, considering I included ROMs, Swing and all that stuff. You can download it from the 0.0.1 release here.
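For reference, a build.gradle.kts sketch assuming the Palantir gradle-graal plugin - the version, the main class and the Visual Studio path are placeholders, yours will differ:

plugins {
    id("com.palantir.graal") version "0.10.0" // placeholder version
}

graal {
    mainClass("de.hanno.chip8.MainKt") // hypothetical main class
    outputName("chip8")
    // point the plugin explicitly to wherever your Visual Studio installation put vcvars64.bat
    windowsVsVarsPath("C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat")
}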

Rendering

Even though there's no specification for the rate at which CHIP-8 runs, it's common to set something like 500Hz. This would mean 500 updates per second, or an update every 2ms. A game step needs to be finished within that budget though. Since the simulation step and the rendering step are coupled by specification for CHIP-8, both steps together may not exceed the budget. While that's not a problem for the game logic on modern machines, for rendering (if not to a console) it's a different story. I tested different implementations, for example one based on this console rendering library (which admittedly isn't meant to be used for games) or a very dumb implementation with Swing, rendering pixel by pixel on a graphics instance. It didn't meet the requirements, but I will write down the Swing journey as a separate post, I think :)
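Just to illustrate the budget: a naive loop (not my actual implementation) that runs step and render at roughly 500Hz on the JVM could look like this:

import kotlin.system.measureNanoTime

fun loop(step: () -> Unit, render: () -> Unit) {
    val budgetNanos = 1_000_000_000L / 500 // 500Hz -> 2ms per iteration

    while (true) {
        val elapsedNanos = measureNanoTime {
            step()   // simulate one cycle
            render() // coupled to the simulation step
        }
        val remainingNanos = budgetNanos - elapsedNanos
        if (remainingNanos > 0) {
            Thread.sleep(remainingNanos / 1_000_000, (remainingNanos % 1_000_000).toInt())
        }
        // if remainingNanos is negative, step + render blew the 2ms budget
    }
}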

Tuesday, August 10, 2021

How to pass multiple vars to terraform command in Gradle Exec task

I can't believe I wasn't able to find a single example on the internet of how to use the terraform executable with Java's ProcessBuilder, Runtime.exec, Gradle's Exec task or anything else. How hard can it be, you might ask. The problem is that it's not too intuitive how to pass args when you're not just typing everything by hand directly on the shell.

In my case, I needed to pass multiple var options into a terraform plan command. On the command line, this may look like

terraform plan -var foo=bar -var bar=baz -var-file=variables.prod.tfvars

Or as described in the official documentation, it could be

terraform apply -var="image_id=ami-abc123"
or
terraform apply -var='image_id_list=["ami-abc123","ami-def456"]' -var="instance_type=t2.micro"


There is even more documentation on how to make this work across different operating systems.
What tripped me up was apparently the whitespace between -var and the key-value pair itself. It took me about an hour to figure out that this is the correct way to feed the command into a Gradle Exec task (Kotlin DSL ahead):
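A minimal version of such a task, using the plan command and variables from the example above (task name and working directory are placeholders):

tasks.register<Exec>("terraformPlan") {
    workingDir("terraform") // wherever your .tf files live
    commandLine(
        "terraform", "plan",
        "-var", "foo=bar",   // "-var" and the key=value pair are two separate args!
        "-var", "bar=baz",
        "-var-file=variables.prod.tfvars"
    )
}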


Mind that every var requires you to pass one arg of -var and one arg of the key-value pair itself.

Friday, January 29, 2021

My master's thesis: Dynamic Global Illumination Using Realtime Importance Sampling And Image Based Lighting

I realized that I have never published or uploaded my master's thesis about global illumination in realtime rendering from 2015. Even though it's written in German, it may be helpful for someone out there, so feel free to take a look at the PDF version and the code examples.

https://drive.google.com/file/d/0B0j-0MDrGMAlSmtDNi1xWG5yMk0/view?usp=sharing

Quick summary: I utilized hand-placed, box-projected environment maps, rendered and filtered in-engine in realtime, in order to get a coarse scene representation. This representation is then traced against with ray tracing, ray marching and cone tracing. For environment map rendering I created a data structure per probe that is pretty much a cubemap g-buffer, so lighting updates can be performed very fast. The whole process is complemented by screen space reflections, because fine details and less rough reflections are not captured very well with this technique. I did some crazy experiments with volume interpolation in order to hide seams and leaks, added alpha information so that cone tracing through multiple volumes can be done, and evaluated some optimizations. The results can be satisfying for diffuse indirect lighting and even for rough reflections, as long as the projection volume and the actual geometry don't differ too much.

All the implementations were made in my custom game engine, which I wrote from scratch and used afterwards for all the other experiments I did and wrote about here, like voxel cone tracing, GPU occlusion culling etc. So most likely everything can be found in the repository (https://github.com/hannomalie/hpengine) in some git state :) The title picture in the readme is from the last state of my thesis technique; it looked like this


I also found two screen recordings I never uploaded.

This one is a demo scene with a narrow office or school corridor. It's a tough scene for illumination because sunlight can only enter through small windows and doors on the side, and the ceiling lights are thin area lights pointing downwards only. This means global illumination effects account for the largest part of the illumination in this scene, which is captured quite well by my technique. Changing the floor materials has a visible impact on the mood of the scene because of that.




This one is a special test scene that has a spot light as the only direct light source.

Tuesday, November 24, 2020

Kotlin dependency injection and modularization without a framework

Recently I found myself in yet another discussion about dependency injection frameworks. The internet showed me that there is a weird tension and a lot of discussion about runtime vs compile time di, reflection usage, compile time safety, service locator vs di pattern and much more.

Here's what I think: the only acceptable ... actually good implementation of a di framework is Scala implicits. The reason why the JVM world is so obsessed with di frameworks is that Java is such a limited language that implementing these things with the language alone is simply not feasible.

Pure di in Kotlin

It doesn't need many features, but those we need are key to making frameworkless di (I will call it pure di from now on) practical: primary constructors, default arguments and smart constructors. Implicit parameter passing like in Scala would be an optional bonus on top - this feature is too controversial to just require it for pure di.

About the "testability" requirement

So first, the elephant in the room: you don't need interfaces to create testable implementations of something. Mocking frameworks like mockk can just mock any class you have and replace the implementations. Conclusion: hiding things behind an interface is a good idea for a lot of reasons, but di doesn't care about them. You decide what you accept as a dependency in your class and that's it. There is no drawback for testability even when you can't use the default implementation for testing.
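A tiny sketch with a made-up class, assuming mockk is on the test classpath:

import io.mockk.every
import io.mockk.mockk

class PriceService {
    fun currentPrice(articleId: String): Int = TODO("talks to some remote backend")
}

fun main() {
    // no interface anywhere - the concrete class is mocked and its behaviour replaced
    val priceService = mockk<PriceService>()
    every { priceService.currentPrice("book") } returns 42

    println(priceService.currentPrice("book")) // prints 42
}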

No annotations 

I know there's a standard on the JVM, but as I said in the introduction, we should question it. When a class is declared, just from a domain driven perspective, why on earth should we annotate our constructor with @Inject, for example? It's a technical detail of a framework my caller may or may not use. And even if they use it, why is the declaration of the constructor not sufficient for anyone else to use it, be it automatically or by hand? From my point of view, using annotations on the dependency is a code smell that we got used to because of CDI. Even worse when configuration file keys are added into the annotation...

Module definitions

A module itself doesn't have to be put behind an interface. The components of a module can be. The module itself is just a plain old class that defines related components.
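A sketch of what I mean, with made-up component names:

interface UserRepository {
    fun findAll(): List<String>

    companion object {
        // smart constructor: callers get a default implementation without naming its class
        operator fun invoke(): UserRepository = InMemoryUserRepository()
    }
}

class InMemoryUserRepository : UserRepository {
    override fun findAll(): List<String> = emptyList()
}

class UserService(private val repository: UserRepository) {
    fun userNames() = repository.findAll()
}

// the module: a plain class, every component overridable through the primary constructor
class UserModule(
    val repository: UserRepository = UserRepository(), // default via the smart constructor
    val service: UserService = UserService(repository) // may reference parameters declared before it
)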


Note how Kotlin's primary constructor with default arguments completely replaces the need for any complex override mechanism that is sometimes needed for testing or for overriding bean definitions. Smart constructors (an operator fun invoke on an interface companion here) don't exactly relate to dependency injection, but they can serve as a factory for default implementations.

Multiple module definitions

Using multiple modules with frameworks is often not too easy because of a single container or service locator that flattens all definitions into a single pool which is used for service retrieval. The service locator will be discussed in a later paragraph; for now, let's take a brief look at how simple multiple modules can and should look in your applications:
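For example (component names made up again):

class DataSource(val url: String)
class InvoiceRepository(val dataSource: DataSource)
class BillingService(val repository: InvoiceRepository) {
    fun sendInvoices() { /* ... */ }
}

class DatabaseModule(
    val dataSource: DataSource = DataSource("jdbc:postgresql://localhost/app")
)

class BillingModule(
    dataSource: DataSource,
    val invoiceRepository: InvoiceRepository = InvoiceRepository(dataSource),
    val billingService: BillingService = BillingService(invoiceRepository)
)

// wiring is plain code, for example in main
fun main() {
    val databaseModule = DatabaseModule()
    val billingModule = BillingModule(databaseModule.dataSource)
    billingModule.billingService.sendInvoices()
}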


Note that it's not necessary to bundle all modules into a single super module - you can group whatever is meaningful for your domain, not what the framework requires you to do. When you really want to squash all definitions, all components of all of your modules, into a flat facade, you can either use Kotlin's built-in delegation and interfaces, like so
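Sketched: assuming the modules from above implement small component interfaces, the facade is a few lines of delegation.

interface DatabaseComponents { val dataSource: DataSource }
interface BillingComponents { val billingService: BillingService }

// the facade delegates to the actual modules and exposes all their components on one flat surface
class ApplicationModule(
    databaseComponents: DatabaseComponents,
    billingComponents: BillingComponents
) : DatabaseComponents by databaseComponents,
    BillingComponents by billingComponents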



Or you can use... Scala 3, which has a feature called exports - just kidding, we're doing Kotlin here - or something like what I implemented with this one: https://github.com/hannespernpeintner/kotlin-companionvals .

Factories, lazy, optional

All those features that di frameworks offer are already built into the Kotlin language. Singletons are given by just using val properties. Take a look at this example of how factories, lazy things and optional things can be implemented:
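(again just a sketch with made-up names)

class Connection
class Metrics

class NetworkModule(
    // singleton: just a val property
    val metrics: Metrics = Metrics(),
    // factory: a plain function type, invoked whenever a fresh instance is needed
    val connectionFactory: () -> Connection = { Connection() },
    // optional: nullability instead of a runtime "no bean found" error
    val tracing: Metrics? = null
) {
    // lazy: created on first access only
    val defaultConnection: Connection by lazy { connectionFactory() }
}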

  

Those features automatically work with IDE features such as auto completion and refactoring, which is one of the most important things in projects and a reason Kotlin is so successful. Also, you don't get runtime errors for optional dependencies, for example, as Kotlin's built-in nullability gives you compile time errors. An additional bonus is that you can have nullable dependencies on the interface and override them with non-nullable ones in an implementing module. Using those modules non-virtually, when it's okay to rely on the implementation (for example in testing), saves you from using the double-bang operator all over the place.

Service locator

Finally, the probably most important aspect of di frameworks: the piece of code that is the surface your application and your components are allowed to rely on (are they? :) ). The implementation of the service locator is the source of problems in most frameworks, as it always generifies your module graph into something unnecessarily generic that works more or less like a big map of types/names to instances/factories. This is also where compile time safety is lost.

Without any framework, you can just pass around the module instance (or its interface, when given) you want to use somewhere. I found the best strategy is to use the smallest possible dependencies in your components, even though that may make your primary constructors big - it's just cleaner and more appropriate than passing context objects aka modules around directly. For the caller's convenience - which is not an unimportant aspect! - you can provide a smart constructor that takes a complete module.

This is the point where manual declarations are more verbose than the magic wiring frameworks do for you. But hey, that's code. Plain old code. Everyone can go to declarations, refactor them, add more smart constructors and understand how they work without having to know any framework. This approach has proven to be appropriate for even big module graphs in my applications.

Inner-module dependencies in components

What if a component that is part of a module needs another component from that very module? Most frameworks solve this problem by making everything lazy. In code, we would reorder statements - with constructors, we either have to pull out the default arguments and wire and pass the arguments explicitly, or change the order of the properties like so
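Sketched with made-up components:

class OrderRepository
class OrderService(val repository: OrderRepository)

class OrderModule(
    // declared first, so the default argument below can reference it
    val repository: OrderRepository = OrderRepository(),
    val service: OrderService = OrderService(repository)
)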

Not too bad, I think.

Bonus Round 1: Constructor vs field injection

You may have noticed that I only wrote about constructor injection. The short reason is that everything else should never be used, as it introduces mutable and invalid state into your application. Whenever you have to deal with an environment that requires you to use such a lifecycle, Kotlin offers the lateinit keyword, which can be used perfectly well with pure di - but whether that's simple and robust to implement depends on the foreign framework. When your environment requires you to use CDI, you should probably stick to it. Or stop using those frameworks :)
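A tiny made-up sketch of the lateinit case:

class Clock

class ReportJob {
    // assigned by the framework's lifecycle after construction - no nullable type, no !! at usage sites
    lateinit var clock: Clock
}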

Bonus Round 2: Quasi mixins

Kotlin doesn't allow multiple inheritance of state, but interfaces and default implementations can become quite powerful and useful for a mix of data driven design and modularization. The idea is to place implementations into interfaces, declaring dependencies as abstract state. Interfaces can leverage multiple inheritance, and what's left is the implementation of the state, which can be done declaratively.

Let's say you have a typical webapp Controller class that fetches some Products and needs some dependencies for that because it's not trivial.
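A sketch of the idea with made-up types:

class Product
class ProductRepository { fun findAll(): List<Product> = emptyList() }
class RequestLogger { fun log(message: String) { println(message) } }

interface ProductFetching {
    val productRepository: ProductRepository // abstract dependency
    fun fetchProducts(): List<Product> = productRepository.findAll()
}

interface Logging {
    val logger: RequestLogger // abstract dependency
    fun logRequest(path: String) = logger.log("Request to $path")
}

// the controller mixes both capabilities in and only declares the state
class ProductController(
    override val productRepository: ProductRepository = ProductRepository(),
    override val logger: RequestLogger = RequestLogger()
) : ProductFetching, Logging {
    fun get(): List<Product> {
        logRequest("/products")
        return fetchProducts()
    }
}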

Using interface inheritance could be seen as an abuse of the language feature here, but let's try to stay pragmatic. Using it automatically brings local extensions into scope, enabling implicit parameter passing of contexts, and hence dependency injection. This approach gives you the freedom of not caring about modules at all and just thinking about fine grained dependencies. Pretty much what di frameworks give you, but without any runtime errors, because the source code is your module graph and it's already validated on the fly by the compiler :) This approach can also be combined with pure di - you can define generic implementations in interfaces and deliver some default implementations as final classes, just as you wish. I can't see any limits here.

Sunday, September 27, 2020

Rendering massive amounts of animated grass

I recently played Ghost of Tsushima and I was impressed by the amount of foliage that covers the world. Just as I was impressed when I played Horizon: Zero Dawn a few years back.

So my engine can already render a lot of instanced geometry, a lot of per-instance animations and so on, but for the amount of foliage that is needed for believable vegetation, this is too costly. The answer to the problem is pivot based animation, and with it some simple, stateless animation in the vertex shader.

In addition to that, the instances of, for example, grass need to be clustered and culled efficiently. My two-phase occlusion and frustum culling pipeline is exhausted pretty fast when we use it for 10,000,000 small instances without any hierarchical component. A cluster is, simply put, a small local volume that covers enough instances to not lose the benefit of batching. For example, it's not worth batching only 10 instances just to be able to cull them better; 1000 instances seem to work well for me. I generate a cluster's instances randomly, so that I can just render the first n instances and scale n by the distance between camera and cluster. This way, the cluster gets thinner and thinner until it is completely culled. Hard culling results in pop-ins. For a smooth fadeout without alpha blending enabled - which would again kill the performance of foliage - screen door transparency can be used. This is again just a few lines, this time in the pixel shader, and culling is mostly hidden.
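The distance based thinning boils down to something like this (the linear falloff is made up, a smoother curve works better in practice):

import kotlin.math.max

// returns how many of the cluster's randomly generated instances should be drawn this frame
fun visibleInstanceCount(totalInstances: Int, distanceToCamera: Float, maxDistance: Float): Int {
    val factor = max(0f, 1f - distanceToCamera / maxDistance)
    return (totalInstances * factor).toInt()
}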

Three things that are for themselves very efficient team up for a nice solution for foliage: Pivot based animation, cluster culling and screen door transparency fading.



As stated under the first video, I don't have nicely authored pivot textures, so I created a veeeeery simple one that just goes from 0 to 1 from the root to the tip of the grass.


Monday, July 13, 2020

Private routes with Kotlin JS and React Router

In order to implement login functionality in a Kotlin React app, I used the React Context API to save an optional instance of Login. This context can be used to create a wrap-around component that checks whether a given route can be accessed or not. If access is not allowed, the request is redirected to the public /login route.

The context class has to provide a way to get the current login data, plus functionality to log in and log out. The state resides in the main component and is updated through the onChange callback. The component1 function can be used to destructure the useContext result when only read access is needed.

class LoginContextData(login: Login?, private val onChange: (Login?) -> Unit) {
    var login: Login? = login
        private set
    fun login(potentialLogin: Login) {
        login = potentialLogin
        onChange(login)
    }
    fun logout() {
        login = null
        onChange(login)
    }
    operator fun component1() = login
}
// can be global
val LoginContext = createContext(LoginContextData(null) { })

The context is used like


val loginState = useState(null as Login?)
val (login, setLogin) = loginState

LoginContext.Provider(LoginContextData(login) { newLogin ->
    setLogin(newLogin)
}) {
// render your tree here
}

Just like React Router provides the route function, we can write a function that does an if-else on the context and either calls the known route function or gives a redirect:


fun RBuilder.privateRoute(
    path: String,
    exact: Boolean = false,
    strict: Boolean = false,
    children: RBuilder.(RouteResultProps<*>) -> ReactElement?
): ReactElement {
    val (login) = useContext(LoginContext)

    return route(path, exact, strict) { routerProps: RouteResultProps<*> ->
        if(login != null) {
            children(routerProps)
        } else {
            redirect(from = path, to = "/login")
        }
    }
}

The call site can just use privateRoute instead of route, and that's it. The login route remains public.

The context can also be used to decide whether a navigation item should be rendered or not.

My journey with Kotlin JS and React

Honestly, I have no idea what the purpose of this post could be, but I am really happy with Kotlin JS and React and I want to write down my thoughts after implementing a real world administration application in my spare time. Maybe some of the sources I link can help somebody, maybe my experience can help someone in a similar position.

This is the app I created:


It uses Kotlin JS, the Kotlin React wrapper, react, react-dom, react-router-dom, uuid, styled components, Kotlin coroutines, the React hooks API, the React context API and Bootstrap 4.

TLDR: Even though Kotlin JS is still experimental, I created a complete administration application using it and encountered only a few minor bugs. There's so much to gain for me in being able to use Gradle tooling and Kotlin as a language that I happily accept some minor bugs here.

Motivation

I have to admit: I love Kotlin, but I was very sceptical about its non-JVM targets, especially after reading a lot about Kotlin Native and its caveats. The thing is: I feel such a strong demand for frontend stuff that it would be super handy for a Kotlin team to be able to use the same language - and toolchain - for backend and frontend projects and even share libraries between the two targets. Maintaining builds and code at a high quality level is that much easier this way; going polyglot really shows its costs there.

Chicken and egg

I would love to introduce such a thing in the team at my company, but this is not an easy task: Kotlin JS is still experimental. The next Kotlin version (1.4) will break binary compatibility, for example, because the compiler backend is switched completely. Given there is already TypeScript, it's very hard to convince people that another experimental technology might do a good job for the frontend. This results in the chicken-and-egg problem: no one tries it, no one knows whether it works out well, no one gains experience, no one helps moving the platform forward, no one makes any progress. Arguing for a new technology reminds me of the time my team switched from Java to Kotlin for the backend. And after we did it, everyone was much happier than before. Getting the time to prove that a technology is worth it and can be used in production is key.

Elephant in the room: JSX

The biggest downside of React with JavaScript is most probably JSX. It reminds me of the wild days when JSP was in fashion. Not only does it require the build system to do very complex stuff, but I also don't think it's a good idea to extend code with something that makes it no longer code, mixing markup languages and programming languages and introducing many strange constructs that are mostly workarounds for naming clashes and identifiers that can't be mapped. Tooling has to be adjusted, knowledge has to be adjusted, code style has to be adjusted... This is a proper comment on that. Do you know what is a nice way to write the UI? kotlinx.html. This is basically what everyone would be happy with. I am. There was only one missing piece in the workflow: when you get HTML-based designs of the page you should implement, you have to convert the HTML to the Kotlin DSL. With JSX you can just paste the HTML into your component and modify it slightly for variable usage. For the Kotlin DSL, there is this, which lets you do the same, but additionally you can just start refactoring names, extracting methods and so on in one of the best IDEs out there.
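For reference, this is roughly how the markup looks with the Kotlin React wrapper I used (the component itself is made up, wrapper APIs are the circa-2020 ones):

import react.RProps
import react.dom.div
import react.dom.h1
import react.dom.p
import react.functionalComponent

val welcome = functionalComponent<RProps> {
    div("container") {
        h1 { +"Hello from Kotlin" }
        p { +"Markup is plain Kotlin code - renaming, extracting and formatting just work." }
    }
}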

Hooks 

The last time I used React was when hooks didn't exist. The usage of setState was so annoying for me that I just didn't warm up to the framework at all, because state is the single most important aspect of the application code. Hooks are such a nice addition and make functional components so pleasant to use. Take a look at a simple example with Kotlin. It's hard not to like that. Now the downside: hooks have some constraints that are not too intuitive. And now a proper downside: even though I never used any return statements in Kotlin and placed every hook usage at the top of the component, I got the infamous "Rendered fewer hooks than expected" error... I wasn't able to figure it out exactly, but I suspect it came from nested component usage, where I had a fairly complex list based component that nested a lot of stuff. I removed the complex component completely, but if I hadn't been able to do that because of design requirements, I would have had a hard time with it.
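A tiny made-up example in the same style as the login code above:

import kotlinx.html.js.onClickFunction
import react.RProps
import react.dom.button
import react.functionalComponent
import react.useState

val counter = functionalComponent<RProps> {
    // useState returns the current value and a setter, destructured like the login state above
    val (count, setCount) = useState(0)

    button {
        attrs.onClickFunction = { setCount(count + 1) }
        +"Clicked $count times"
    }
}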

Build

Everyone who knows me knows: builds are really my métier. I have been doing this extensively for many years with different build systems, and I always ensure projects have clean, stable, maintainable builds that enable proper development and testing workflows. Builds are one of those areas where having only one kind of them in the team is very beneficial, as all of the existing tooling can be reused. Being able to use Gradle is a big plus for me (note: Gradle with Kotlin, not Gradle with Groovy, brrrr). Convince yourself of how easy and simple the Gradle build of a Kotlin JS project can be here and here. I can confirm that it works like that for a complete application development cycle. The good thing is that the whole webpack stuff is hidden from you, so you don't have to bother with that whole mess. However, if necessary, you can configure things. And this is one of the two issues I faced during development: hot reloading with the webpack development server. I had to apply this workaround, as everyone seems to have to. Annoying to find out, not a problem anymore after the small fix.

Final thoughts

What can I say? I am very happy about what can already be done with Kotlin JS. Finally, I can get back to frontend development with pleasure again, keeping all my gradle and kotlin love :)