Writing specializations is generally pretty straightforward, but there is at least one common pitfall. When designing specializations, we need to remind ourselves that type-based specializations are, in effect, guards.
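To make that concrete, here is a minimal sketch of what such a type-based specialization looks like with the Truffle DSL. The `AddNode` class and its method names are illustrative assumptions, not code from the post; the point is only that each specialization's declared parameter types act as an implicit guard that decides whether the specialization applies.

```java
import com.oracle.truffle.api.dsl.Specialization;
import com.oracle.truffle.api.nodes.Node;

// Hypothetical addition node: each @Specialization is selected only while
// the observed argument types match, i.e., the types work as guards.
public abstract class AddNode extends Node {

    public abstract Object executeAdd(Object left, Object right);

    // Applies only to long operands; on overflow the node rewrites itself
    // and falls through to a more general specialization.
    @Specialization(rewriteOn = ArithmeticException.class)
    protected long addLongs(long left, long right) {
        return Math.addExact(left, right);
    }

    // More general case, reached once the long specialization no longer fits.
    @Specialization
    protected double addDoubles(double left, double right) {
        return left + right;
    }
}
```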
The third post of this series is about how I started using Truffle and Graal, roughly four years ago. It may be a bit ranty in places, but I started using them at a very early stage, and things are a lot better today.
Research on programming languages is often more fun when we can use our own languages. However, for research on performance optimizations, that can be a trap. In the end, we need to argue that what we did is comparable to state-of-the-art language implementations. Ideally, we can show that our own little language is not just a research toy, but that it is, at least performance-wise, competitive with, for instance, Java or JavaScript VMs.
Next weekend, one of the major conferences of the programming languages research community starts. The conference hosts many events, including our Meta’16 workshop on Metaprogramming, SPLASH-I with research and industry talks, the Dynamic Languages Symposium, and the OOPSLA research track.
With the Truffle language implementation framework, we have a powerful foundation for implementing languages as simple interpreters. In combination with the Graal compiler, Truffle interpreters execute their programs as very efficient native code.
Now that we get just-in-time compilation essentially “for free”, can we get IDE integration for our Truffle languages as well?
We, or more specifically our colleagues from the Software Languages Lab in Brussels, are looking for a post-doctoral researcher to work on a collaborative research project with us.
Today, I gave a talk on the main track at FOSDEM on implementing languages based on the ideas behind RPython and Truffle. Please find the abstract and slides below.
You have a big multicore, or even manycore, machine, but no clue how to actually use it, because your application doesn’t seem to scale naturally? Well, that seems to be a problem many people face in our new manycore age. One possible solution might be to accept less precise answers by relaxing synchronization constraints. That could allow us to circumvent Amdahl’s law when Gustafson’s law is out of reach.
The second day of the technical tracks started with a keynote by Markus Püschel. He is not the typical programming languages researcher you meet at OOPSLA; his research is on the automatic optimization of programs. In his keynote, he showed a number of examples of how to get the best performance for a given algorithm out of a particular processor architecture. Today’s compilers are still not up to the task, and probably never will be. Compared to a naïve implementation, hand-optimized C code can achieve a 10x speedup when dependencies are made explicit and the compiler knows that no aliasing can happen. He then discussed how this can be approached in an automated way, and also considered what programming languages could do.