Oct 16, 2023: Which Interpreters are Faster, AST or Bytecode?
This post is a brief overview of our new study of abstract-syntax-tree and bytecode interpreters on top of the RPython and GraalVM metacompilation systems, which we are presenting next week at OOPSLA.
Nov 8, 2022: How Effective are Classic Lookup Optimizations for Rails Apps?
We know that Ruby, and especially Rails, applications can be very dynamic and pretty large. However, many of the optimizations that interpreters and even just-in-time compilers use were invented in the 1980s and 1990s, before Ruby and Rails even existed. So, I was wondering: do these optimizations still have a chance of coping with the millions of lines of Ruby code that large Rails apps from Shopify, Stripe, or GitLab have? Unfortunately, we don’t have access to such applications. As the next best thing, we took the largest Ruby benchmarks we could get our hands on and analyzed those.
Oct 10, 2022: The Cost of Safety in Java
Overhead of Null Checks, Array Bounds, and Class Cast Exceptions in GraalVM Native Image
Jul 11, 2019: SOMns 0.7.0 Release with Extension Modules and Artifacts
It has been a while since we put together a release for SOMns. And it has been even longer since I last wrote about it on this blog.
Oct 15, 2017: Debugging Concurrency Is Hard, but We Can Do Something About It!
When we have to debug applications that use concurrency, perhaps written in Java, all we get from the debugger is a list of threads, perhaps some information about held locks, and the ability to step through each thread separately.
Mar 7, 2017: SOMns 0.2 Release with CSP, STM, Threads, and Fork/Join
Since SOMns is a pure research project, we don’t usually do releases for it. However, we have added many different concurrency abstractions since December and have plans for bigger changes. So, it seems like a good time to wrap up another step and get it into a somewhat stable shape.
Oct 22, 2016: Language Research with Truffle at the SPLASH'16 Conference
Next weekend marks the start of one of the major conferences of the programming languages research community. The conference hosts many events, including our Meta’16 workshop on Metaprogramming, SPLASH-I with research and industry talks, the Dynamic Languages Symposium, and the OOPSLA research track.
Jan 25, 2016: Towards Meta-Level Engineering and Tooling for Complex Concurrent Systems
Last December, we got a research project proposal accepted for a collaboration between the Software Languages Lab in Brussels and the Institute for System Software here in Linz. Together, we will be working on tooling for complex concurrent systems. By that I mean systems that combine multiple concurrency models to solve different problems, each with the appropriate abstraction. I have been working on these issues for a while already. Some pointers are available in an earlier post: Why Is Concurrent Programming Hard? And What Can We Do about It?
Dec 8, 2015: Add Graal JIT Compilation to Your JVM Language in 5 Easy Steps, Step 5
Step 5: Optimizing the Interpreter for Compilation
Dec 1, 2015: Add Graal JIT Compilation to Your JVM Language in 5 Easy Steps, Step 4
Step 4: Complete Support for Mandelbrot
Nov 24, 2015: Add Graal JIT Compilation to Your JVM Language in 5 Easy Steps, Step 3
Step 3: Interpreting a Simple Fibonacci Function with Golo+Truffle
Nov 17, 2015: Add Graal JIT Compilation to Your JVM Language in 5 Easy Steps, Step 2
Step 2: Adding Bit Operations To Golo
Nov 10, 2015: Add Graal JIT Compilation to Your JVM Language in 5 Easy Steps, Step 1
Over the course of the next four weeks, I plan to publish a new post every Tuesday to give a detailed introduction to using the Graal compiler and the Truffle framework to build fast languages. And this is the very first post, to set up the series. The next posts are going to provide a bit of background on Golo, the language we are experimenting with, and then build up a basic interpreter for executing a simple Fibonacci and later a Mandelbrot computation. To round off the series, we will also discuss how to use one of the tools that come with Graal to optimize the performance of an interpreter. But for today, let’s start with the basics.
Sep 22, 2014: Are We There Yet? Simple Language-Implementation Techniques for the 21st Century
The first results of my experiments with self-optimizing interpreters were finally published in IEEE Software. The article is a brief and very high-level comparison of the Truffle approach with a classic bytecode-based interpreter on top of RPython. If you aren’t familiar with either of these approaches, it is hopefully a good starting point. The experiments described in it use SOM, a simple Smalltalk.