Last week, I started a series of posts to go over some of the projects I was involved in during my first 10 years working on language implementations. Today’s post focuses on my time as a PhD student.

Let’s do something fun with… concurrency and parallelism

After finishing my master’s thesis in 2008, I still wanted to continue this kind of work. And there was another topic, hot at the time, that I wanted to look into: concurrency. In 2008, software transactional memory was all the rage. The multicore revolution was going strong, and we all expected to be using 32-core processors by 2015. I guess the 32 cores didn’t quite work out. Nonetheless, concurrency and parallelism are relevant to a much larger group of people than they used to be.

As I said, the topic was kind of hot, and the people at the Software Languages Lab were interested in it as well and were doing cool things with concurrency and language implementations. Most widely known is perhaps AmbientTalk, an actor language for peer-to-peer applications on top of ad hoc mobile networks.

I got lucky: my project proposal to decouple abstract from concrete concurrency models was accepted by the IWT, which gave me funding for four years of PhD research. I have to say, it was a big vision. In the end, my PhD perhaps scratched the surface of a quarter of the things that would be necessary to realize the vision put forth in the proposal.

Either way, I had the chance to work on quite a few interesting ideas. Early on, I got involved with David Ungar’s and Sam Adams’s work on the Renaissance project. David worked on a Smalltalk VM for a manycore processor with 64 cores. In the beginning, I didn’t have access to those 64-core Tilera processors. Instead, I started porting what became the RoarVM to standard multicore systems. The RoarVM is essentially a reimplementation of the Squeak Smalltalk interpreter in C++. The goal was to support classic shared-memory concurrency and, instead of fearing race conditions, to handle them retroactively: race and repair. I haven’t really worked much on the race-and-repair idea myself, but the work on a fully concurrent and parallel VM was very exciting.
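To give a flavor of that mindset, here is a minimal C++ sketch (hypothetical code, nothing from the RoarVM): a seqlock-style version counter lets a reader proceed without locking, detect after the fact that a writer raced with it, and repair the result by retrying. The actual race-and-repair idea aims further, at repairing application-level inconsistencies, but the detect-afterwards-and-fix spirit is the same.

```cpp
#include <atomic>
#include <cstdio>
#include <thread>
#include <utility>

// One shared point; `version` is odd while a write is in progress.
struct Point {
    std::atomic<unsigned> version{0};
    std::atomic<int> x{0}, y{0};
};

void writePoint(Point& p, int newX, int newY) {
    p.version.fetch_add(1); // odd: a write has started
    p.x.store(newX);
    p.y.store(newY);
    p.version.fetch_add(1); // even again: the write is complete
}

// Read without locking; if a writer raced with us, repair by retrying.
std::pair<int, int> readPoint(const Point& p) {
    for (;;) {
        unsigned before = p.version.load();
        int x = p.x.load();
        int y = p.y.load();
        unsigned after = p.version.load();
        if (before == after && (before & 1u) == 0)
            return {x, y}; // no write overlapped: the snapshot is consistent
        // a write raced with our read: discard the torn snapshot and retry
    }
}

int main() {
    Point p;
    std::thread writer([&p] {
        for (int i = 1; i <= 10000; ++i) writePoint(p, i, -i);
    });
    std::thread reader([&p] {
        auto [x, y] = readPoint(p);
        std::printf("consistent snapshot: (%d, %d)\n", x, y);
    });
    writer.join();
    reader.join();
}
```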

As mentioned above, the lab was interested in actor languages. So, I guess, it isn’t really surprising that I started dabbling with them as well. One of the results was ActorSOM++, a simple actor language based on SOM++, an implementation of SOM (Simple Object Machine) in C++.
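The interesting parts of ActorSOM++ lived at the Smalltalk level, but the basic runtime mechanics such a language needs can be sketched in a few lines of C++ (hypothetical names, nothing from the actual codebase): each actor owns a mailbox and a thread, and processes one message at a time, so the state it owns is never accessed concurrently.

```cpp
#include <condition_variable>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Each actor owns a mailbox and a thread; messages are closures that run
// one at a time, so the state an actor owns is never accessed concurrently.
class Actor {
public:
    Actor() : worker_([this] { run(); }) {}

    ~Actor() { // drain the remaining messages, then shut the worker down
        { std::lock_guard<std::mutex> l(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();
    }

    // Asynchronous send: enqueue the message, never block on the receiver.
    void send(std::function<void()> msg) {
        { std::lock_guard<std::mutex> l(m_); mailbox_.push(std::move(msg)); }
        cv_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::function<void()> msg;
            {
                std::unique_lock<std::mutex> l(m_);
                cv_.wait(l, [this] { return done_ || !mailbox_.empty(); });
                if (mailbox_.empty()) return; // drained and shutting down
                msg = std::move(mailbox_.front());
                mailbox_.pop();
            }
            msg(); // messages run strictly sequentially
        }
    }

    std::queue<std::function<void()>> mailbox_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
    std::thread worker_;
};

int main() {
    int count = 0; // state owned by `counter`: only its messages touch it
    Actor counter;
    for (int i = 0; i < 3; ++i) counter.send([&count] { ++count; });
    counter.send([&count] { std::printf("count = %d\n", count); });
} // the destructor drains the mailbox before joining
```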

I also got involved in research on making the actor model more useful for commodity multicore systems. Together with Joeri De Koster, I worked on a few papers on a domain model. We wanted to preserve the basic guarantees of the actor model while still providing the data parallelism of shared-memory concurrency.
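As a hypothetical C++ sketch of the intuition (the model in the papers is richer, with asynchronous scheduling of the blocks): shared state lives inside a domain, and actors request shared read access, which may proceed in parallel, or exclusive write access, instead of touching the state directly.

```cpp
#include <cstdio>
#include <shared_mutex>
#include <thread>
#include <vector>

// Shared state lives inside a Domain; actors never touch it directly but
// schedule read blocks (which may run in parallel) or exclusive write blocks.
template <typename State>
class Domain {
public:
    explicit Domain(State s) : state_(std::move(s)) {}

    // Many readers may be inside the domain at once: data parallelism.
    template <typename F>
    void whenShared(F body) const {
        std::shared_lock<std::shared_mutex> l(rw_);
        body(state_);
    }

    // A writer gets the domain to itself: actor-style isolation is preserved.
    template <typename F>
    void whenExclusive(F body) {
        std::unique_lock<std::shared_mutex> l(rw_);
        body(state_);
    }

private:
    State state_;
    mutable std::shared_mutex rw_;
};

int main() {
    Domain<std::vector<int>> d(std::vector<int>(1000, 1));

    std::vector<std::thread> readers;
    for (int t = 0; t < 4; ++t)
        readers.emplace_back([&d] {
            d.whenShared([](const std::vector<int>& v) {
                long sum = 0;
                for (int x : v) sum += x;
                std::printf("sum = %ld\n", sum);
            });
        });

    d.whenExclusive([](std::vector<int>& v) { v.push_back(42); });
    for (auto& r : readers) r.join();
}
```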

And then there was that ‘Rete thing’. Lode Hoste used a Rete-based system to enable declarative multi-touch applications. As one might imagine, that’s the kind of stuff that’s great for giving impressive demos. So, the two of us decided to spend a week on parallelizing the CLIPS rule engine. Of course, a week wasn’t enough, but it gave us enough of an idea of what we were in for to start our own parallel Rete implementation. Well, actually, Thierry Renaux did most of the work. The result was PARTE, an actor-based parallel Rete engine. And of course, in 2013, there also had to be a version for the cloud.
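As a toy illustration of why Rete lends itself to this kind of parallelization (this is just the underlying intuition, not PARTE’s actual actor-based architecture): the alpha nodes of a Rete network are independent filters over the fact stream, so they can match concurrently without stepping on each other.

```cpp
#include <cstdio>
#include <functional>
#include <string>
#include <thread>
#include <vector>

struct Fact { std::string attr; int value; };

struct AlphaNode {
    std::function<bool(const Fact&)> test; // the node's constant test
    std::vector<Fact> matches;             // its alpha memory
};

int main() {
    std::vector<Fact> workingMemory = {
        {"temperature", 95}, {"pressure", 3}, {"temperature", 20}};

    std::vector<AlphaNode> nodes;
    nodes.push_back({[](const Fact& f) {
        return f.attr == "temperature" && f.value > 90; }, {}});
    nodes.push_back({[](const Fact& f) { return f.attr == "pressure"; }, {}});

    // Fan the same fact stream out to all alpha nodes, one thread per node;
    // each thread writes only to its own node's memory, so there is no race.
    std::vector<std::thread> workers;
    for (auto& n : nodes)
        workers.emplace_back([&n, &workingMemory] {
            for (const auto& f : workingMemory)
                if (n.test(f)) n.matches.push_back(f);
        });
    for (auto& w : workers) w.join();

    for (size_t i = 0; i < nodes.size(); ++i)
        std::printf("node %zu matched %zu facts\n", i, nodes[i].matches.size());
}
```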

These and various other experiments led me to propose a metaprogramming-based solution to the problem of supporting all kinds of different concurrency models on the same VM. In the end, this approach, the ownership-based meta-object protocol (OMOP), also became the focus of my PhD dissertation. The OMOP allowed me to customize the basic behavior of field accesses and method dispatches, for instance to enforce isolation between actors or to implement a basic STM. My implementation was based on the RoarVM, which means everything was pretty slow. So, performance remained one of the big open questions. The other open question was whether we could actually find ways to use all these different concurrency models safely together.
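Condensed into a hypothetical C++ sketch (the real implementation lived inside the RoarVM and covered method dispatch as well), the core of the OMOP is that every field access is routed through the meta-object of the object’s owner, which is then free to redefine what the access means:

```cpp
#include <cstdio>
#include <stdexcept>

// The default meta-object gives fields their plain read/write semantics.
struct Metaobject {
    virtual int  readField(const int& slot) { return slot; }
    virtual void writeField(int& slot, int v) { slot = v; }
    virtual ~Metaobject() = default;
};

// A domain that enforces actor-style isolation: no writes from outside.
struct ReadOnlyDomain : Metaobject {
    void writeField(int&, int) override {
        throw std::runtime_error("write to object owned by another actor");
    }
};

struct Obj {
    Metaobject* owner; // the domain this object currently belongs to
    int balance = 0;

    // All accesses go through the owner's meta-object, never at the slot.
    int  getBalance() { return owner->readField(balance); }
    void setBalance(int v) { owner->writeField(balance, v); }
};

int main() {
    Metaobject unprotected;
    ReadOnlyDomain isolated;

    Obj account{&unprotected};
    account.setBalance(100);        // plain semantics apply

    account.owner = &isolated;      // hand the object to another domain
    std::printf("read ok: %d\n", account.getBalance());
    try {
        account.setBalance(0);      // now intercepted and rejected
    } catch (const std::exception& e) {
        std::printf("blocked: %s\n", e.what());
    }
}
```

An STM-style meta-object would instead log the reads and writes into a transaction; reassigning the owner is all it takes to change which concurrency semantics an object lives under.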

But those questions didn’t really fit into the PhD anymore. And they might fit better into one of the next posts, on:

  • performance, performance, and metaprogramming
  • and safe combination of concurrency models