Last September, I had a lot of fun putting together a lecture on language implementation techniques. It is something I had wanted to do for a while, but I never had a good excuse to actually do it.
When I was asked to give this lecture at a Dagstuhl summer school, I posted an outline on Twitter, and, as one might expect, people pointed out things that were missing. Indeed, the lecture is far from complete, and it is biased by my own experience, background, and research. Still, perhaps it is useful for others.
Dynamic languages leave all the hard problems to the runtime system. Some argue this allows programmers to focus on their application, but it also means that dynamic language implementations have to learn at run time what a program does in order to execute it efficiently.
This lecture will give a brief introduction to implementation techniques, starting from abstract-syntax-tree and bytecode interpreters, and then moving on to modern just-in-time compilation approaches based on partial evaluation or meta-tracing. We will review ideas such as inline caching, hidden classes, and storage strategies to better understand how dynamic languages can reach the performance of less dynamic languages such as Java or C.
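To make the hidden-class idea concrete, here is a minimal Python sketch (all class and field names are made up for illustration): objects that acquire the same fields in the same order share a single shape, so a field access becomes a plain index lookup, which an inline cache can then guard on at a call site.

```python
# Minimal sketch of hidden classes ("maps"/"shapes"): objects that gain
# the same fields in the same order end up sharing one Shape object, so
# field access is an index lookup rather than a dictionary search.

class Shape:
    def __init__(self, fields=()):
        self.fields = fields        # tuple of field names, in insertion order
        self.transitions = {}       # field name -> successor Shape

    def index_of(self, name):
        return self.fields.index(name)

    def add_field(self, name):
        # Caching transitions ensures structurally identical objects
        # reach the *same* Shape object, which makes guards cheap.
        if name not in self.transitions:
            self.transitions[name] = Shape(self.fields + (name,))
        return self.transitions[name]

EMPTY_SHAPE = Shape()

class Obj:
    def __init__(self):
        self.shape = EMPTY_SHAPE
        self.storage = []           # field values, indexed by the shape

    def set_field(self, name, value):
        if name in self.shape.fields:
            self.storage[self.shape.index_of(name)] = value
        else:
            self.shape = self.shape.add_field(name)
            self.storage.append(value)

    def get_field(self, name):
        return self.storage[self.shape.index_of(name)]

a, b = Obj(), Obj()
a.set_field("x", 1); a.set_field("y", 2)
b.set_field("x", 3); b.set_field("y", 4)
assert a.shape is b.shape           # same layout -> same hidden class
assert a.get_field("y") == 2
```

A real VM would additionally specialize the storage (e.g. unboxed slots) and attach the shape check to compiled code; this sketch only shows the shared-layout mechanism itself.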
Optimizations such as storage strategies unfortunately have a major impact on thread safety for dynamic languages such as Ruby and Python, which use shared-memory multithreading. To ensure that we can implement such languages with safe and efficient parallelism, we will also review variations of classic storage strategies and object models for parallel virtual machines.
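Here is a minimal sketch of the storage-strategy idea, with illustrative names: a list keeps a compact homogeneous representation for as long as possible and generalizes when a value of another type arrives. Note that the transition updates the strategy and the storage together, and it is exactly this kind of compound update that becomes racy once shared-memory threads are involved.

```python
# Minimal sketch of storage strategies: a list stores its elements in a
# compact, homogeneous representation (here: an array of machine ints)
# while it can, and generalizes to a boxed "object" strategy only when
# a value of another type is appended.

import array

class StrategyList:
    def __init__(self):
        self.strategy = "empty"
        self.storage = None

    def append(self, value):
        if self.strategy == "empty":
            if isinstance(value, int):
                self.strategy, self.storage = "int", array.array("l", [value])
            else:
                self.strategy, self.storage = "object", [value]
        elif self.strategy == "int" and isinstance(value, int):
            self.storage.append(value)        # stays unboxed
        else:
            if self.strategy == "int":
                # Generalize: box the ints into a plain object list.
                # Two fields change here -- unsafe without synchronization
                # if another thread reads the list concurrently.
                self.strategy, self.storage = "object", list(self.storage)
            self.storage.append(value)

    def get(self, index):
        return self.storage[index]

xs = StrategyList()
xs.append(1); xs.append(2)
assert xs.strategy == "int"         # unboxed representation
xs.append("three")
assert xs.strategy == "object"      # generalized after mixed-type append
assert xs.get(2) == "three"
```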
Last but not least, since the techniques are all about performance, we have to discuss how to effectively assess optimizations. Modern virtual machines and hardware systems are far from the deterministic machines we expect, which means we have to take extra care when measuring and reporting performance numbers.
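As a concrete illustration of the measurement problem, here is a small sketch of how one might measure peak performance on a JIT-compiled VM: discard warmup iterations while compilation is still happening, then report a summary over many steady-state samples instead of a single number. The iteration counts and the benchmark itself are illustrative assumptions, not a recommendation.

```python
# Sketch of steady-state benchmarking: run warmup iterations first so
# the JIT compiler can do its work, then collect many timed samples
# and summarize them, rather than reporting one noisy measurement.

import statistics
import time

def measure(benchmark, warmup=50, iterations=100):
    for _ in range(warmup):              # let the VM reach steady state
        benchmark()
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        benchmark()
        samples.append(time.perf_counter() - start)
    # Median is robust against outliers; also report dispersion.
    return statistics.median(samples), statistics.stdev(samples)

median, stdev = measure(lambda: sum(range(10_000)))
assert median > 0 and stdev >= 0
```

Even this is simplified: real experiments also have to contend with differences across VM invocations, frequency scaling, and other sources of nondeterminism, so multiple process runs are usually needed as well.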
The final agenda for the lecture included:
- abstract syntax trees
- optimizations at the interpreter level
- caching, and lookup caching in particular
- bytecode quickening
- self-optimizing AST interpreters
- just-in-time compilation
- basic ideas of compilation at run time
- metacompilation techniques based on meta-tracing and partial evaluation
- efficient data representation
- maps, hidden classes, shapes
- storage strategies
- data representations for multithreaded VMs
- concurrent shapes
- concurrent strategies
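As a taste of the first agenda items, here is a minimal sketch of lookup caching at a send site in an AST interpreter, with made-up class and method names: the first execution performs the full method lookup, then caches the result guarded by the receiver's class, so subsequent monomorphic sends skip the lookup entirely.

```python
# Minimal sketch of a lookup cache in an AST interpreter: a send node
# caches the last (receiver class, method) pair, guarded by a class
# check, so repeated sends to the same class avoid the full lookup.

class SendNode:
    def __init__(self, selector):
        self.selector = selector
        self.cached_class = None
        self.cached_method = None

    def execute(self, receiver, args):
        cls = type(receiver)
        if cls is not self.cached_class:              # guard failed: slow path
            self.cached_class = cls
            self.cached_method = getattr(cls, self.selector)  # full lookup
        return self.cached_method(receiver, *args)    # fast path

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def magnitude(self):
        return (self.x ** 2 + self.y ** 2) ** 0.5

send = SendNode("magnitude")
assert send.execute(Point(3, 4), ()) == 5.0
assert send.execute(Point(6, 8), ()) == 10.0   # second send hits the cache
```

Real interpreters extend this to polymorphic caches with a small chain of class/method pairs, and self-optimizing AST interpreters rewrite the node itself on such transitions.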
This means there are indeed many things I did not talk about. These include fundamental ideas such as garbage collection, compiler optimizations and their intermediate representations, tiered compilation, tooling for debugging and profiling, and metacircularity. Perhaps I will get to teach a full course at some point and can include them.
Until then, the following slides might be useful to others.
If you have any questions, I am more than happy to answer them, possibly on Twitter: @smarr.