Oct 16, 2023: Which Interpreters are Faster, AST or Bytecode?
This post is a brief overview of our new study of abstract-syntax-tree and bytecode interpreters on top of the RPython and GraalVM metacompilation systems, which we are presenting next week at OOPSLA.
Jul 5, 2017: A 10 Year Journey, Stop 5: Growing the SOM Family
May 14, 2017: A 10 Year Journey, Stop 3: Performance, Performance, and Metaprogramming
The third post of this series is about how I started using Truffle and Graal, pretty much four years ago. It might be a bit ranty in parts, but I started using them at a very early stage, and things are a lot better today.
Oct 19, 2015: Tracing vs. Partial Evaluation: Comparing Meta-Compilation Approaches for Self-Optimizing Interpreters
Back in 2013, when looking for a way to show that my ideas on how to support concurrency in VMs are practical, I started to look into meta-compilation techniques. Truffle and RPython are the two most promising systems for building fast language implementations without having to implement a compiler on my own. While the two have many similarities, conceptually they take approaches that can be seen as opposite ends of a spectrum. So, I thought it might be worthwhile to investigate them a little more closely.
Apr 28, 2015: Zero-Overhead Metaprogramming
Runtime metaprogramming and reflection are slow. That’s common wisdom, unfortunately. Using reflection, for instance with Java’s reflection API or its dynamic proxies, Ruby’s #send or #method_missing, PHP’s magic methods such as __call, Python’s __getattr__, C#’s DynamicObjects, or really any metaprogramming abstraction in modern languages, comes at a price. Few language implementations optimize these operations. For instance, on Java’s HotSpot VM, reflective method invocation and dynamic proxies have an overhead of 6-7x compared to direct operations.
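To illustrate the kind of operation that usually goes unoptimized, here is a small Python sketch (the class and method names are made up for illustration) comparing a direct method call with one that is forwarded through __getattr__:

```python
import timeit

class Direct:
    def compute(self, x):
        return x + 1

class Proxy:
    """Forwards every unknown attribute access to a target via __getattr__."""
    def __init__(self, target):
        self._target = target

    def __getattr__(self, name):
        # Reflective lookup: the method is resolved anew on every access.
        return getattr(self._target, name)

direct = Direct()
proxy = Proxy(Direct())

# On CPython, the proxied call is noticeably slower than the direct one,
# because the dynamic lookup is not optimized away.
print(timeit.timeit(lambda: direct.compute(41), number=1_000_000))
print(timeit.timeit(lambda: proxy.compute(41), number=1_000_000))
```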
Jan 31, 2015: FOSDEM 2015: Building High-Performance Language Implementations With Low Effort
Today, I gave a talk on implementing languages based on the ideas behind RPython and Truffle at FOSDEM on the main track. Please find abstract and slides below.
Nov 19, 2014: SOM Performance Numbers
Today, I got a few more benchmarks running to get a better idea of where RTruffleSOM and TruffleSOM stand in terms of their absolute performance.
Sep 22, 2014: Are We There Yet? Simple Language-Implementation Techniques for the 21st Century
The first results of my experiments with self-optimizing interpreters were finally published in IEEE Software. It is a brief and very high-level comparison of the Truffle approach with a classic bytecode-based interpreter on top of RPython. If you aren’t familiar with either of these approaches, the article is hopefully a good starting point. The experiments described in it use SOM, a simple Smalltalk.
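For a taste of the self-optimization idea, here is a minimal, hypothetical Python sketch of an AST node that specializes itself after observing its operand types. A real Truffle-style interpreter replaces nodes in the tree; this sketch only approximates that by swapping the node’s execute method:

```python
class AddNode:
    """Generic addition node; specializes itself after observing operand types."""

    def __init__(self, left, right):
        self.left = left
        self.right = right

    def execute(self, frame):
        lhs = self.left.execute(frame)
        rhs = self.right.execute(frame)
        if isinstance(lhs, int) and isinstance(rhs, int):
            # Specialize: later executions skip the type dispatch entirely.
            self.execute = self._execute_ints
            return lhs + rhs
        raise TypeError("a real interpreter would rewrite to a more general node here")

    def _execute_ints(self, frame):
        return self.left.execute(frame) + self.right.execute(frame)


class Literal:
    """Leaf node returning a constant value."""
    def __init__(self, value):
        self.value = value

    def execute(self, frame):
        return self.value


tree = AddNode(Literal(20), Literal(22))
print(tree.execute(None))  # first run: checks types, then specializes
print(tree.execute(None))  # subsequent runs use the int-only fast path
```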
Feb 1, 2014: How to get a JIT Compiler for Free: Implementing SOM Smalltalk with RPython and Truffle
Today at FOSDEM, I gave a brief talk on implementing SOM, a little Smalltalk, with RPython and Truffle. RPython, probably best known for the PyPy implementation, uses meta-tracing JIT compilation to make simple interpreters fast. Truffle, a research project of Oracle Labs, is an approach for building self-optimizing interpreters, and in combination with Graal, it gives a JIT compiler for AST-based interpreters. In the talk, I briefly sketch both of them, without going into too many details.
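For a flavor of the meta-tracing side, here is a minimal sketch following the pattern from the RPython documentation: a tiny interpreter loop annotated with a JitDriver, so that translating it with the RPython toolchain yields a tracing JIT. The bytecode set is made up for illustration:

```python
from rpython.rlib.jit import JitDriver

jitdriver = JitDriver(greens=['pc', 'program'],  # identifies the position in the user program
                      reds=['acc'])              # mutable interpreter state

def interpret(program, acc):
    """Tiny bytecode loop: 'd' decrements acc, 'j' jumps back to 0 while acc != 0."""
    pc = 0
    while pc < len(program):
        jitdriver.jit_merge_point(pc=pc, program=program, acc=acc)
        op = program[pc]
        if op == 'd':
            acc -= 1
            pc += 1
        elif op == 'j':
            if acc != 0:
                pc = 0  # backward jump: marks a hot user-level loop for the tracer
                jitdriver.can_enter_jit(pc=pc, program=program, acc=acc)
            else:
                pc += 1
    return acc
```

Run as plain Python this is just an interpreter; the JIT only appears after translating the whole program with the RPython toolchain, which is the sense in which the compiler comes “for free”.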