Feb 23, 2024: Why Are My Bytecode Interpreters Slow? Hunting Truffles with VTune
As part of our work on the AST vs. Bytecode Interpreters paper, I briefly looked at what the native code of ahead-of-time-compiled bytecode loops looks like. Except for finding much more code than I expected, I didn’t look too closely at what was going on.
Feb 13, 2024: Rank 10 Language Implementations
Please rank 10 language implementations by their median performance, based on your best guess or estimate.
Sep 30, 2021: How do we do Benchmarking?
Impressions from Conversations with the Community
Feb 18, 2021: Open Postdoc Position on Language Implementation and Concurrency
Dec 7, 2020: Preventing Concurrency Bugs from Causing Harm, Automatically
Jul 7, 2020: Is This Noise, or Does This Mean Something? #benchmarking
Do my performance measurements allow me to conclude anything at all?
Apr 17, 2018: A Typical Truffle Specialization Pitfall
Writing specializations is generally pretty straightforward, but there is at least one common pitfall. When designing specializations, we need to remind ourselves that type-based specializations are technically guards.
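To make that concrete, here is a minimal sketch using the Truffle DSL; the `AddNode` and its specializations are hypothetical illustrations, not taken from the post. The parameter types of each specialization double as guards: a specialization is only activated when the incoming arguments match those types.

```java
import com.oracle.truffle.api.dsl.Specialization;
import com.oracle.truffle.api.nodes.Node;

// Hypothetical example node: the parameter types of each
// @Specialization act as implicit guards.
abstract class AddNode extends Node {
    abstract Object executeAdd(Object left, Object right);

    @Specialization
    long addLongs(long left, long right) {
        return left + right; // activated only when both arguments are longs
    }

    @Specialization
    double addDoubles(double left, double right) {
        // a long argument does not match here unless an implicit
        // cast is declared in the node's @TypeSystem
        return left + right;
    }

    @Specialization
    String concatStrings(String left, String right) {
        return left + right;
    }
}
```

Whether a specialization applies is decided by these implicit type checks, not only by explicit guard expressions, and forgetting that is exactly the kind of pitfall the post discusses.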
Jul 5, 2017: A 10 Year Journey, Stop 5: Growing the SOM Family
May 14, 2017: A 10 Year Journey, Stop 3: Performance, Performance, and Metaprogramming
The third post of this series is about how I started using Truffle and Graal, pretty much 4 years ago. It might be ranty in parts, but I started using them when they were at a very early stage. So, things are a lot better today.
Oct 25, 2016: Cross-Language Compiler Benchmarking: Are We Fast Yet?
Research on programming languages is often more fun when we can use our own languages. However, for research on performance optimizations, that can be a trap. In the end, we need to argue that what we did is comparable to state-of-the-art language implementations. Ideally, we are able to show that our own little language is not just a research toy, but that it is, at least performance-wise, competitive with, for instance, Java or JavaScript VMs.
Oct 22, 2016: Language Research with Truffle at the SPLASH'16 Conference
One of the major conferences of the programming languages research community starts next weekend. The conference hosts many events, including our Meta’16 workshop on Metaprogramming, SPLASH-I with research and industry talks, the Dynamic Languages Symposium, and the OOPSLA research track.
Aug 10, 2016: Can we get the IDE for free, too?
With the Truffle language implementation framework, we have a powerful foundation for implementing languages as simple interpreters. In combination with the Graal compiler, Truffle interpreters execute their programs as very efficient native code. Now that we get just-in-time compilation essentially “for free”, can we get IDE integration for our Truffle languages as well?
Feb 4, 2016: Open PostDoc Position on Programming Technology for Complex Concurrent Systems
We, or more specifically our colleagues from the Software Languages Lab in Brussels, are looking for a post-doctoral researcher to work on a collaborative research project with us.
Jan 31, 2015: FOSDEM 2015: Building High-Performance Language Implementations With Low Effort
Today, I gave a talk on implementing languages based on the ideas behind RPython and Truffle on the main track at FOSDEM. Please find the abstract and slides below.
Jun 26, 2012: Workshop on Relaxing Synchronization for Multicore and Manycore Scalability
You have a big multicore or manycore machine, but don’t have a clue how to actually use it, because your application doesn’t seem to scale naturally? Well, that seems to be a problem many people are facing in our new manycore age. One possible solution might be to accept less precise answers by relaxing synchronization constraints. That could allow us to circumvent Amdahl’s law when Gustafson is out of reach.
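For reference, the two laws behind that last sentence, with p the parallelizable fraction of a program and N the number of cores; the reading of relaxed synchronization as shrinking the serial fraction is my gloss, not a quote from the workshop:

```latex
% Amdahl's law: for a fixed problem size, speedup is capped by the serial part.
S_{\mathrm{Amdahl}}(N) = \frac{1}{(1 - p) + p/N} \;\le\; \frac{1}{1 - p}

% Gustafson's law: if the problem grows with N, scaled speedup keeps growing.
S_{\mathrm{Gustafson}}(N) = (1 - p) + p \, N
```

Gustafson sidesteps Amdahl’s cap by scaling the problem with the machine; when that is not possible, relaxing synchronization attacks the serial fraction 1 − p directly.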
Oct 31, 2011: OOPSLA 2011 @SPLASH2011, Day 2
The second day of the technical tracks started with a keynote by Markus Püschel. He is not the typical programming language researcher you meet at OOPSLA; instead, he does research on the automatic optimization of programs. In his keynote, he showed a number of examples of how to get the best performance for a given algorithm out of a particular processor architecture. Today’s compilers are still not up to the task, and will probably never be. Compared to a naïve implementation, hand-optimized C code can achieve a 10x speedup when dependencies are made explicit and the compiler knows that no aliasing can happen. He then discussed how that can be approached in an automated way, and also considered what programming languages could do.