Last week, I gave two lectures at the Programming Language Implementation Summer School (PLISS). PLISS was very well organized, and the students and other presenters made for a very enjoyable week of new ideas, learning, and discussions.

For my own lectures, I decided to take an approach that focuses more on high-level ideas and can introduce a wider audience to how we build interpreters and to a range of techniques for just-in-time compilation.

Of course, I also wanted to talk a little bit about our own work. Thus, both lectures come with a strong bias toward meta-compilation systems. My interpreter lecture is informed by our upcoming OOPSLA paper, which shows that in the context of meta-compilation systems, abstract-syntax-tree interpreters perform surprisingly well compared to bytecode interpreters.

My lecture on just-in-time compilation of course also went into how meta-compilation works and how it enables us to build languages that reach state-of-the-art performance by compiling user programs through our interpreters. While it still takes a lot of work today, the big vision is that one day, we might just define the grammar, provide a few extra details on how the language is to be executed, and have some kind of toolchain give us a language runtime that executes user programs with state-of-the-art performance.

One can still dream… 🤓

When preparing these lectures, I was also looking back at the lectures I gave in 2019 for a summer school at Dagstuhl. Perhaps this material will at some point form its own course on Virtual Machines. Another of those dreams…

Lectures

I have to admit, the original abstracts don’t quite represent the final lectures. So, I’ll also include the outlines in addition to the slides.

Interpreters: Everywhere And All The Time

Implementers often start with an interpreter to sketch how a language may work. Interpreters are easy to implement and great to experiment with. However, they are also an essential part of dynamic language implementations. We will talk about the basics of abstract syntax trees, bytecodes, and how these ideas can be used to implement a language. We will also look into optimizations for interpreters: how AST and bytecode interpreters can use run-time feedback to improve performance, and how super nodes and super instructions allow us to make effective use of modern CPUs.
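To give a flavor of what lookup caching looks like in an AST interpreter, here is a minimal sketch. The object model and names (Klass, Obj, CallNode) are made up for illustration and not taken from the slides: a call node remembers the class of the last receiver and the method it resolved to, so repeated calls with the same receiver class skip the lookup entirely.

```java
import java.util.Map;

// Minimal, hypothetical sketch of lookup caching in an AST interpreter.
public class LookupCacheSketch {

  interface Method { Object invoke(Object receiver); }

  // A toy "class" with a method table; stands in for a real object model.
  record Klass(String name, Map<String, Method> methods) {
    Method lookup(String selector) { return methods.get(selector); }
  }

  // Every object knows its class in this toy model.
  record Obj(Klass klass, int value) {}

  interface Node { Object execute(); }

  static final class CallNode implements Node {
    private final String selector;
    private final Node receiverExpr;

    // monomorphic inline cache: the class seen last time and the method found
    private Klass cachedClass;
    private Method cachedMethod;

    CallNode(String selector, Node receiverExpr) {
      this.selector = selector;
      this.receiverExpr = receiverExpr;
    }

    public Object execute() {
      Obj receiver = (Obj) receiverExpr.execute();
      if (receiver.klass() != cachedClass) {   // cache miss: do the full lookup once
        cachedClass = receiver.klass();
        cachedMethod = cachedClass.lookup(selector);
      }
      return cachedMethod.invoke(receiver);    // cache hit: invoke directly
    }
  }

  public static void main(String[] args) {
    Method dbl = r -> ((Obj) r).value() * 2;
    Klass integerKlass = new Klass("Integer", Map.of("double", dbl));

    Node constant = () -> new Obj(integerKlass, 21);
    Node call = new CallNode("double", constant);

    System.out.println(call.execute());  // 42; fills the cache
    System.out.println(call.execute());  // 42; hits the cache, no lookup
  }
}
```

Real systems use polymorphic caches with several entries and invalidate them when the class hierarchy changes, but the basic idea is the same.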

Outline
  • How are programming languages implemented?
  • Types of interpreters
    • Abstract syntax tree
    • Bytecode
  • Interpreter optimizations
    • Lookup caching
    • AST/bytecode-level inlining
    • Library lowering, library intrinsification
    • Super nodes, super instructions
    • Self-optimization, bytecode quickening
Slides

A Brief Introduction to Just-in-Time Compilation

Since the early days of object-oriented languages, run-time polymorphism has been a challenge for implementers. Smalltalk and Self pushed many ideas to an extreme, and their implementers had to invent techniques such as lookup caches, tracing and method-based compilation, deoptimization, and maps. While these ideas originated in the ’80s and ’90s, they are key ingredients of today’s just-in-time compilers for Java, Ruby, Python, and JavaScript.
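To illustrate the idea behind maps, today usually called hidden classes or shapes, here is another minimal sketch with made-up names (Shape, DynObject), not taken from the lecture material: objects that gain the same fields in the same order end up sharing the same shape, which records the mapping from field names to storage slots, so a field access becomes a plain array access.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Minimal, hypothetical sketch of maps / hidden classes / shapes.
public class HiddenClassSketch {

  static final class Shape {
    final Map<String, Integer> slots = new HashMap<>();      // field name -> slot index
    final Map<String, Shape> transitions = new HashMap<>();  // shared successor shapes

    // Adding a field moves an object to a successor shape; objects that gain
    // the same fields in the same order end up sharing the same shape object.
    Shape withField(String name) {
      return transitions.computeIfAbsent(name, n -> {
        Shape next = new Shape();
        next.slots.putAll(slots);
        next.slots.put(n, slots.size());
        return next;
      });
    }
  }

  static final class DynObject {
    Shape shape;
    Object[] storage = new Object[4];

    DynObject(Shape shape) { this.shape = shape; }

    void set(String field, Object value) {
      Integer slot = shape.slots.get(field);
      if (slot == null) {                     // new field: transition to the next shape
        shape = shape.withField(field);
        slot = shape.slots.get(field);
        if (slot >= storage.length) {
          storage = Arrays.copyOf(storage, storage.length * 2);
        }
      }
      storage[slot] = value;
    }

    Object get(String field) {
      return storage[shape.slots.get(field)]; // plain array access once the shape is known
    }
  }

  public static void main(String[] args) {
    Shape empty = new Shape();
    DynObject a = new DynObject(empty);
    DynObject b = new DynObject(empty);
    a.set("x", 1); a.set("y", 2);
    b.set("x", 3); b.set("y", 4);

    System.out.println(a.shape == b.shape);              // true: same transitions, same shape
    System.out.println(a.get("y") + ", " + b.get("x"));  // 2, 3
  }
}
```

Shapes are also what makes lookup caches cheap for field accesses: checking whether an object still matches the cached shape is a single pointer comparison.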

Outline
  • Just-in-time compilation
    • Basic assumptions and application behavior
    • Selection of compilation units
    • Executing dynamic languages
    • Using run-time feedback
    • Meta-compilation
  • Efficient Data Representation
    • Maps, hidden classes, shapes
    • Storage strategies
    • Handling concurrency and parallelism
Slides

If you have any questions, I am more than happy to answer them, for instance on Twitter @smarr or on Mastodon.