Back in 2013, when I was looking for a way to show that my ideas on how to support concurrency in VMs were practical, I started to look into meta-compilation techniques. Truffle and RPython are the two most promising systems for building fast language implementations without having to implement a compiler on my own. While the two systems have many similarities, conceptually they take approaches that can be seen as opposite ends of a spectrum. So, I thought, it might be worthwhile to investigate them a little more closely.

This brings us to the paper that I am going to present at the end of the month at the OOPSLA conference: Tracing vs. Partial Evaluation. Based on Truffle and RPython, we studied how the two meta-compilation approaches differ in the performance they achieve, as well as which of the two seems to require less implementation effort. The abstract and links to the paper are posted below. But before we get to that, I'd still like to point out something that I consider a common misunderstanding:

Tracing isn’t an Instance of Partial Evaluation

To give a little bit of background: partial evaluation is often associated with the so-called first Futamura projection, a transformation that takes an interpreter together with a source program for it as input and produces an executable. The idea is that we can easily provide the semantics of a programming language as an interpreter, and with such a projection we would get a compiler for free. The important insight is that this projection is not bound to partial evaluation, but could be realized by other means as well. And meta-tracing-based compilation is one such mechanism.
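To make the projection concrete, here is a minimal sketch in Python using a made-up toy stack language (nothing here comes from the paper, and a real partial evaluator automates the hand-specialization step shown at the end):

    # Toy stack-based language; hypothetical example, not from the paper.
    def interp(program, x):
        """Generic interpreter: dispatches on opcodes at run time."""
        stack = [x]
        for op, arg in program:
            if op == 'push':
                stack.append(arg)
            elif op == 'add':
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == 'mul':
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
        return stack.pop()

    # The program is a known constant: it computes x * 2 + 1.
    PROG = [('push', 2), ('mul', None), ('push', 1), ('add', None)]

    # First Futamura projection, done by hand: specializing interp()
    # for PROG removes all dispatch and leaves straight-line code --
    # in effect, a compiled version of PROG.
    def interp_specialized(x):
        return x * 2 + 1

    assert interp(PROG, 5) == interp_specialized(5) == 11

The point of the projection is precisely that a tool derives interp_specialized from interp and PROG mechanically, by whatever means, be it partial evaluation or tracing.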

While other people argue that partial evaluation and tracing are very similar, my understanding is that the two approaches aim for the same goal, but come from opposite directions and are thereby very different. Partial evaluation, especially in combination with Truffle, tries to take all known information about how a program executes, combine it with the program itself, and generate an executable from that. The important part is that it needs to abstract over multiple executions to learn a program's behavior by recording profiling information. Especially for dynamic languages this is necessary, because the static knowledge is very incomplete. From an interpreter implementer's perspective, however, this requires a change in mindset. Instead of reasoning only about the concrete execution one is currently interested in, future executions have to be taken into account as well.
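As an illustration of that mindset, here is a minimal Python sketch of the node-specialization idea used in self-optimizing interpreters (Truffle itself is a Java framework; all names here are hypothetical). The node profiles the operand types it actually observes and rewrites itself accordingly, so that later compilation can rely on that profile:

    # Hypothetical sketch; real Truffle nodes are Java classes, and this
    # only mimics the self-rewriting idea.
    class UninitializedAdd:
        """Profiles the operand types it sees on first execution, then
        rewrites itself into a node specialized for those types."""
        def execute(self, left, right):
            if isinstance(left, int) and isinstance(right, int):
                self.__class__ = IntAdd      # speculate on integer operands
            else:
                self.__class__ = GenericAdd  # give up on speculation
            return self.execute(left, right)

    class IntAdd(UninitializedAdd):
        """Specialized node: partial evaluation can reduce this to a plain
        machine addition, protected by the type checks as guards."""
        def execute(self, left, right):
            if isinstance(left, int) and isinstance(right, int):
                return left + right
            self.__class__ = GenericAdd      # speculation failed: generalize
            return self.execute(left, right)

    class GenericAdd(UninitializedAdd):
        """Fallback that handles any operand types."""
        def execute(self, left, right):
            return left + right

    node = UninitializedAdd()
    assert node.execute(1, 2) == 3           # first execution specializes...
    assert isinstance(node, IntAdd)          # ...in anticipation of later ones

The key point is that IntAdd encodes an expectation about future executions; that recorded expectation is exactly the profiling information the partial evaluator combines with the program.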

Tracing, and specifically meta-tracing as done by RPython, on the other hand, uses a single execution to produce an executable. It does so by recording all operations that the interpreter performs for that single execution. This means that, theoretically, it has all the data and knows exactly what the program did for a specific case. However, since compilation is a runtime overhead, we cannot simply compile each observed execution. Instead, we need to leave out details that would make the executable too specific. And this is the main reason I see the two approaches as very different: partial evaluation needs to collect additional information to learn how a program behaves, while tracing has all the information, but needs heuristics to discard what is usually not beneficial. For the implementer, this means that with tracing one only has to reason about a single execution, without regard to any other executions, and thus does not have to switch to a compilation mindset. There is, however, one wrinkle with this view: in practice, the heuristic that leaves out information can be too aggressive, so that one has to mark some information explicitly as relevant or constant. Here we go somewhat back to a profiling view, moving a little toward the other end of the spectrum.
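A minimal sketch of that single-execution view, again in plain Python and with made-up names (RPython's actual machinery is far more involved): while one concrete execution runs, every operation the interpreter performs is appended to a linear trace, and each control-flow decision becomes a guard that encodes the path this particular execution took.

    # Hypothetical sketch of trace recording; not RPython's actual API.
    def traced_interp(program, x):
        trace, stack, pc = [], [x], 0
        while pc < len(program):
            op, arg = program[pc]
            if op == 'push':
                stack.append(arg)
                trace.append(('const', arg))           # recorded operation
            elif op == 'add':
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
                trace.append(('int_add',))             # recorded operation
            elif op == 'jump_if_neg':
                # Peeks at the top of the stack (kept simple for the sketch).
                taken = stack[-1] < 0
                # The branch outcome of *this* execution is baked into the
                # trace as a guard; later runs must take the same path,
                # otherwise execution falls back to the interpreter.
                trace.append(('guard_neg',) if taken else ('guard_not_neg',))
                pc = arg if taken else pc + 1
                continue
            pc += 1
        return stack.pop(), trace

    result, trace = traced_interp(
        [('push', 3), ('add', None), ('jump_if_neg', 0)], 4)
    print(result)  # 7
    print(trace)   # [('const', 3), ('int_add',), ('guard_not_neg',)]

The guards are where the heuristics come in: the trace is only valid as long as they hold, and deciding which observed values to treat as constants is exactly where RPython's hints, such as promotion, let the implementer add information back.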

For the comparison of Truffle and RPython, please have a look at the paper:

Abstract

Tracing and partial evaluation have been proposed as meta-compilation techniques for interpreters to make just-in-time compilation language-independent. They promise that programs executing on simple interpreters can reach performance of the same order of magnitude as programs executed on state-of-the-art virtual machines with highly optimizing just-in-time compilers built for a specific language. Tracing and partial evaluation approach this meta-compilation from two ends of a spectrum, resulting in different sets of tradeoffs.

This study investigates both approaches in the context of self-optimizing interpreters, a technique for building fast abstract-syntax-tree interpreters. Based on RPython for tracing and Truffle for partial evaluation, we assess the two approaches by comparing the impact of various optimizations on the performance of an interpreter for SOM, an object-oriented dynamically typed language. The goal is to determine whether either approach yields clear performance or engineering benefits. We find that tracing and partial evaluation both reach roughly the same level of performance: SOM based on meta-tracing is on average 3x slower than Java, while SOM based on partial evaluation is on average 2.3x slower than Java. With respect to engineering, however, tracing has significant benefits, because it requires language implementers to apply fewer optimizations to reach the same level of performance.

  • Tracing vs. Partial Evaluation: Comparing Meta-Compilation Approaches for Self-Optimizing Interpreters; Stefan Marr, Stéphane Ducasse; in ‘Proceedings of the 2015 ACM International Conference on Object Oriented Programming Systems Languages & Applications’ (OOPSLA ’15), pages 821–839.
  • Paper: PDF, HTML
  • BibTex: BibSonomy
  • DOI: 10.1145/2660193.2660194
  • Online Appendix: artifacts and experimental setup

Slides