Runtime metaprogramming and reflection are slow. That’s common wisdom, and unfortunately it holds. Using reflection, for instance with Java’s reflection API and its dynamic proxies, Ruby’s #send or #method_missing, PHP’s magic methods such as __call, Python’s __getattr__, C#’s DynamicObjects, or really any metaprogramming abstraction in modern languages, comes at a price. Few language implementations optimize these operations. On Java’s HotSpot VM, for instance, reflective method invocation and dynamic proxies have an overhead of 6-7x compared to direct operations.
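To make this concrete, here is a toy Ruby illustration (not from the paper; the names Calculator and add are made up) of the same method invoked directly and reflectively via #send:

```ruby
# Toy illustration: direct vs. reflective invocation in Ruby.
class Calculator
  def add(a, b)
    a + b
  end
end

calc = Calculator.new

direct     = calc.add(2, 3)        # direct call; a JIT can inline it
reflective = calc.send(:add, 2, 3) # reflective call: the method name is
                                   # just a runtime value here, so most
                                   # VMs fall back to a generic, slower
                                   # lookup on every call
```

Both calls return the same result; the difference is purely in how the VM can optimize them.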

But does it have to be that way? No, it doesn’t!

Actually, the solution is rather simple. In a paper that Chris Seaton, Stéphane Ducasse, and I worked on, and which was recently accepted for presentation at the PLDI conference, we show that a simple generalization of polymorphic inline caches can be used to optimize metaprogramming so that it doesn’t have any performance cost after just-in-time compilation.
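To give an intuition for the idea, here is a heavily simplified Ruby sketch (hypothetical code for illustration, not the paper’s implementation): a dispatch chain for a #send call site caches lookups not only on the receiver’s class, as a classic polymorphic inline cache does, but also on the method name. If the name at a call site is stable, the reflective call hits the cache just like a direct one would.

```ruby
# One entry in a dispatch chain: a cached lookup keyed on
# (receiver class, method name), plus a link to the next entry.
class CacheNode
  attr_reader :klass, :name, :method, :next_node
  def initialize(klass, name, method, next_node)
    @klass, @name, @method, @next_node = klass, name, method, next_node
  end
end

# A reflective call site with its own dispatch chain.
class SendSite
  def initialize
    @chain = nil
  end

  # Dispatch `receiver.name(*args)` through the chain.
  def dispatch(receiver, name, *args)
    node = @chain
    while node
      # Cache hit: same receiver class and same method name.
      if node.klass == receiver.class && node.name == name
        return node.method.bind(receiver).call(*args)
      end
      node = node.next_node
    end
    # Cache miss: do the full lookup once, then extend the chain
    # so subsequent calls with the same (class, name) pair are fast.
    method = receiver.class.instance_method(name)
    @chain = CacheNode.new(receiver.class, name, method, @chain)
    method.bind(receiver).call(*args)
  end
end
```

In an actual self-optimizing interpreter, each chain entry corresponds to a specialized node that the just-in-time compiler can compile down to a simple guard plus a direct call, which is where the overhead disappears.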

You might wonder, why do we care? Since it is slow, people don’t use it in performance-sensitive code, right? Well, as it turns out, in Ruby it is used everywhere, because it is convenient and allows for straightforward and general solutions. So, making metaprogramming fast will benefit many applications. But that’s not all. For my own research on concurrency, I proposed the ownership-based metaobject protocol (OMOP) as a foundation for implementing a wide range of different concurrent programming abstractions. Unfortunately, such metaobject protocols have been inherently difficult to optimize. Rather than finding a solution, researchers gave up on them and focused on designing aspect-oriented programming languages, which sidestep the performance issues by applying only to a minimal set of program points instead of pervasively throughout the whole program. For my use case, that wasn’t good enough. Now, however, by generalizing polymorphic inline caches, we have solved the performance issues of metaobject protocols as well.
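As a small taste of the kind of pervasive interception a metaobject protocol enables, here is a toy dynamic proxy in Ruby (illustrative only, and much simpler than the OMOP): every method call on the proxy is intercepted via #method_missing and then forwarded reflectively.

```ruby
# A toy logging proxy: intercepts all calls, records the method
# name, and forwards the call to the wrapped target object.
class LoggingProxy
  attr_reader :log

  def initialize(target)
    @target = target
    @log = []
  end

  def method_missing(name, *args, &block)
    @log << name                       # intercept: record the call
    @target.send(name, *args, &block)  # then forward it reflectively
  end

  def respond_to_missing?(name, include_private = false)
    @target.respond_to?(name, include_private) || super
  end
end
```

Because every single call goes through this interception path, a naive implementation pays the reflective overhead on every operation; this is exactly the pattern that dispatch chains make free after just-in-time compilation.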

The abstract of the paper, the PDF/HTML versions, and the artifact with all experiments are linked below.


Runtime metaprogramming enables many useful applications and is often a convenient solution to solve problems in a generic way, which makes it widely used in frameworks, middleware, and domain-specific languages. However, powerful metaobject protocols are rarely supported and even common concepts such as reflective method invocation or dynamic proxies are not optimized. Solutions proposed in literature either restrict the metaprogramming capabilities or require application or library developers to apply performance improving techniques.

For overhead-free runtime metaprogramming, we demonstrate that dispatch chains, a generalized form of polymorphic inline caches common to self-optimizing interpreters, are a simple optimization at the language-implementation level. Our evaluation with self-optimizing interpreters shows that unrestricted metaobject protocols can be realized for the first time without runtime overhead, and that this optimization is applicable for just-in-time compilation of interpreters based on meta-tracing as well as partial evaluation. In this context, we also demonstrate that optimizing common reflective operations can lead to significant performance improvements for existing applications.

  • Zero-Overhead Metaprogramming: Reflection and Metaobject Protocols Fast and without Compromises; Stefan Marr, Chris Seaton, Stéphane Ducasse; in ‘Proceedings of the 36th ACM SIGPLAN Conference on Programming Language Design and Implementation’ (PLDI ’15).
  • Paper: PDF, HTML
  • BibTeX: BibSonomy
  • DOI: 10.1145/2737924.2737963
  • Online Appendix: artifacts and experimental setup