Tag Archives: OMOP

Zero-Overhead Metaprogramming

Runtime metaprogramming and reflection are slow. That’s common wisdom, unfortunately. Using reflection, for instance with Java’s reflection API or its dynamic proxies, Ruby’s #send or #method_missing, PHP’s magic methods such as __call, Python’s __getattr__, C#’s DynamicObjects, or really any metaprogramming abstraction in modern languages, comes at a price. Few language implementations optimize these operations. For instance, on Java’s HotSpot VM, reflective method invocation and dynamic proxies have an overhead of 6-7x compared to direct operations.
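
To make concrete what kind of code takes this hit, here is a minimal, self-contained Java sketch contrasting a direct call with its reflective counterpart; the Point class and its accessor are invented purely for illustration.

import java.lang.reflect.Method;

public class ReflectionCost {
	public static class Point {
		private final int x;
		public Point(int x) { this.x = x; }
		public int getX() { return x; }
	}

	public static void main(String[] args) throws Exception {
		Point p = new Point(42);

		// Direct call: the JIT compiler can inline this down to a field read.
		int direct = p.getX();

		// Reflective call: the method is looked up by name and invoked via
		// Method.invoke, which boxes the result and is exactly the kind of
		// operation that ends up several times slower than the direct call.
		Method getX = Point.class.getMethod("getX");
		int reflective = (Integer) getX.invoke(p);

		System.out.println(direct + " == " + reflective);
	}
}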

But does it have to be that way? No, it doesn’t!

And actually, the solution is rather simple. In a paper that Chris Seaton, Stéphane Ducasse, and I worked on, and which was recently accepted for presentation at the PLDI conference, we show that a simple generalization of polymorphic inline caches can be used to optimize metaprogramming so that it doesn’t have any performance cost after just-in-time compilation.
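
To give an intuition for the idea, here is a deliberately simplified Java sketch of such a generalized cache, here called a dispatch chain, for a reflective call site. It is restricted to zero-argument methods, the class names are invented, and it is only meant to illustrate the principle of caching lookup results behind cheap guards per call site, not to reproduce the paper’s actual Truffle-based implementation.

import java.lang.reflect.Method;

// Simplified sketch of a dispatch chain: a linked list of cache nodes that
// specializes one reflective call site on the receiver classes and method
// names it has seen so far.
abstract class DispatchNode {
	abstract Object dispatch(Object receiver, String methodName) throws Exception;
}

final class CachedDispatchNode extends DispatchNode {
	private final Class<?> cachedClass;  // guard: receiver class seen before
	private final String cachedName;     // guard: method name seen before
	private final Method cachedMethod;   // lookup result, computed only once
	private final DispatchNode next;

	CachedDispatchNode(Class<?> clazz, String name, Method method, DispatchNode next) {
		this.cachedClass = clazz;
		this.cachedName = name;
		this.cachedMethod = method;
		this.next = next;
	}

	@Override
	Object dispatch(Object receiver, String methodName) throws Exception {
		// Fast path: both guards hold, so the cached lookup result is reused;
		// after just-in-time compilation this is two comparisons and a call.
		if (receiver.getClass() == cachedClass && cachedName.equals(methodName)) {
			return cachedMethod.invoke(receiver);
		}
		return next.dispatch(receiver, methodName);
	}
}

final class UncachedDispatchNode extends DispatchNode {
	@Override
	Object dispatch(Object receiver, String methodName) throws Exception {
		// Slow path: do the full lookup; a self-optimizing interpreter would
		// also extend the chain with a new CachedDispatchNode at this point.
		Method m = receiver.getClass().getMethod(methodName);
		return m.invoke(receiver);
	}
}

final class DispatchChainDemo {
	public static void main(String[] args) throws Exception {
		DispatchNode callSite = new UncachedDispatchNode();
		// The first send takes the slow path; a real call site would rewrite
		// itself so that later sends of #length to Strings hit a cached node.
		System.out.println(callSite.dispatch("hello", "length"));  // prints 5
	}
}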

You might wonder why we care. Since it is slow, people don’t use it in performance-sensitive code, right? Well, as it turns out, in Ruby it is used everywhere, because it is convenient and allows for straightforward and general solutions. So, making metaprogramming fast will benefit many applications. But that’s not all. For my own research on concurrency, I proposed the ownership-based metaobject protocol (OMOP) as a foundation for implementing a wide range of different concurrent programming abstractions. Unfortunately, such metaobject protocols have been inherently difficult to optimize. Instead of finding a solution, researchers gave up on them and focused on designing aspect-oriented programming languages, which sidestep the performance issues by applying only to a minimal set of program points instead of pervasively throughout the whole program. For my use case, that wasn’t good enough. Now, however, by generalizing polymorphic inline caches, we have solved the performance issues of metaobject protocols as well.

The abstract of the paper and the PDF/HTML versions, as well as the artifact with all experiments, are linked below.

Abstract

Runtime metaprogramming enables many useful applications and is often a convenient solution to solve problems in a generic way, which makes it widely used in frameworks, middleware, and domain-specific languages. However, powerful metaobject protocols are rarely supported and even common concepts such as reflective method invocation or dynamic proxies are not optimized. Solutions proposed in literature either restrict the metaprogramming capabilities or require application or library developers to apply performance improving techniques.

For overhead-free runtime metaprogramming, we demonstrate that dispatch chains, a generalized form of polymorphic inline caches common to self-optimizing interpreters, are a simple optimization at the language-implementation level. Our evaluation with self-optimizing interpreters shows that unrestricted metaobject protocols can be realized for the first time without runtime overhead, and that this optimization is applicable for just-in-time compilation of interpreters based on meta-tracing as well as partial evaluation. In this context, we also demonstrate that optimizing common reflective operations can lead to significant performance improvements for existing applications.

  • Zero-Overhead Metaprogramming: Reflection and Metaobject Protocols Fast and without Compromises; Stefan Marr, Chris Seaton, Stéphane Ducasse; in ‘Proceedings of the 36th ACM SIGPLAN Conference on Programming Language Design and Implementation’ (PLDI ’15).
  • Paper: PDF, HTML
  • BibTeX: BibSonomy
  • DOI: 10.1145/2737924.2737963
  • Online Appendix: artifacts and experimental setup

Slides

OMOP Ported to Opal on top of Pharo 3

To prepare some experiments with Pharo’s new compiler infrastructure and a simple AST interpreter, I ported my implementation of the Ownership-based Metaobject Protocol (OMOP) to Pharo 3. Loading the OMOP into an image will give you an STM implementation, a basic actor system, communicating sequential processes, Clojure-like agents, and active objects. Eventually, the goal is to provide a more extensive set of such concurrent programming mechanisms on top of the OMOP, but for now these five should already give an impression of how the OMOP itself works.

To try it out, it can be loaded into a recent Pharo 3 image with the following code snippet:

"Load the ConfigurationOfOmni from the 'Omni' repository on SqueakSource3, then load the ST and Isolate groups."
Gofer new
	squeaksource3: 'Omni';
	configuration; load.
(Smalltalk classNamed: 'ConfigurationOfOmni') load: #(ST Isolate)

Supporting Concurrency Abstractions in High-level Language Virtual Machines

Last Friday, I defended my PhD dissertation. After four years and a bit, I am finally done. I am very grateful to all the people who supported me along the way, and of course to my colleagues for their help.

My work focused on how to build VMs with support for all kinds of different concurrent programming abstractions. Since you don’t want to put them into a VM just one by one, I was looking for a unifying substrate that’s up to the task. Below, you’ll find the abstract as well as the slides.

In addition to the thesis text itself, the implementations and tools are available. Please see the project page for more details.

Abstract

During the past decade, software developers widely adopted JVM and CLI as multi-language virtual machines (VMs). At the same time, the multicore revolution burdened developers with increasing complexity. Language implementers devised a wide range of concurrent and parallel programming concepts to address this complexity but struggle to build these concepts on top of common multi-language VMs. Missing support in these VMs leads to tradeoffs between implementation simplicity, correctly implemented language semantics, and performance guarantees.

Departing from the traditional distinction between concurrency and parallelism, this dissertation finds that parallel programming concepts benefit from performance-related VM support, while concurrent programming concepts benefit from VM support that guarantees correct semantics in the presence of reflection, mutable state, and interaction with other languages and libraries.

Focusing on these concurrent programming concepts, this dissertation finds that a VM needs to provide mechanisms for managed state, managed execution, ownership, and controlled enforcement. Based on these requirements, this dissertation proposes an ownership-based metaobject protocol (OMOP) to build novel multi-language VMs with proper concurrent programming support.

This dissertation demonstrates the OMOP’s benefits by building concurrent programming concepts such as agents, software transactional memory, actors, active objects, and communicating sequential processes on top of the OMOP. The performance evaluation shows that OMOP-based implementations of concurrent programming concepts can reach performance on par with that of their conventionally implemented counterparts if the OMOP is supported by the VM.

To conclude, the OMOP proposed in this dissertation provides a unifying and minimal substrate to support concurrent programming on top of multi-language VMs. The OMOP enables language implementers to correctly implement language semantics, while simultaneously enabling VMs to provide efficient implementations.
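To give a rough idea of what such a protocol can look like in code, here is a small, hypothetical Java sketch of the managed-state part: every object is owned by a domain, and the domain’s metaobject is consulted for each field access. The names and the synchronization policy are invented for this illustration and do not reproduce the dissertation’s actual API, which also covers managed execution, ownership, and controlled enforcement.

// Hypothetical sketch: objects are owned by a domain, and the domain's
// metaobject intercepts every field read and write of the objects it owns.
interface Domain {
	Object readField(ManagedObject obj, int index);
	void writeField(ManagedObject obj, int index, Object value);
}

final class ManagedObject {
	final Domain owner;       // the domain this object belongs to
	final Object[] fields;

	ManagedObject(Domain owner, int numFields) {
		this.owner = owner;
		this.fields = new Object[numFields];
	}

	// All state access is routed through the owning domain's handlers.
	Object read(int index)              { return owner.readField(this, index); }
	void write(int index, Object value) { owner.writeField(this, index, value); }
}

// One possible policy: synchronize every access on the object itself. A
// domain for STM, agents, or actors would plug in its own semantics here.
final class SynchronizedDomain implements Domain {
	public Object readField(ManagedObject obj, int index) {
		synchronized (obj) { return obj.fields[index]; }
	}
	public void writeField(ManagedObject obj, int index, Object value) {
		synchronized (obj) { obj.fields[index] = value; }
	}
}
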

  • Supporting Concurrency Abstractions in High-level Language Virtual Machines, Stefan Marr. Software Languages Lab, Vrije Universiteit Brussel, Pleinlaan 2, B-1050 Brussels, Belgium, PhD Dissertation, January 2013. ISBN 978-90-5718-256-3.
  • Download: PDF.
  • BibTeX: BibSonomy

Slides

What If: Developing Applications in the Multicore Era

Yesterday was the first day of Smalltalks 2012 in Puerto Madryn. The organizers invited me to give a keynote on a topic of my choice, which I gladly did. Having just handed in my thesis draft, I chose to put my research into the context of Smalltalk and relate it to one of the main open questions: how do we actually want to program multicore systems?

The talk went ok, I think. Compared to academic conferences, I was surprised by the number of questions people asked. The discussions were also much more interesting for me than at a typical conference. Overall, a good experience.

Abstract

What if you needed to use all the processor cores you have to get your application to run with acceptable performance? This talk explores how we can support the various abstractions for concurrent and parallel programming that would help us master the challenges of the multicore era. We show a variant of the RoarVM with a novel metaobject protocol that allows us to implement agents, actors, software transactional memory, and others easily while preserving performance.

Slides

Recording