This post, the fourth in the series, is about my current work on concurrency and tooling. As mentioned before, I believe that there is not a single concurrency model that is suitable for all the problems we might want to solve. Actually, I think this can be stated even more strongly: no single concurrency model is appropriate for even a majority of the problems we want to solve.

In practice, this means that the programming languages we have today all get it wrong. They all pick a winner. Be it shared memory, a very common choice (e.g. C/C++, Java, C#, Python, …), or be it message-passing models with strong isolation (e.g. Erlang, JavaScript, Dart, …). At some point, the choice will turn out to be suboptimal and restricting. It will either lead to code riddled with data races or deadlocks, or it will become increasingly difficult to ensure data consistency or performance.
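To make the shared-memory pitfall concrete, here is a minimal sketch (my own illustration, not code from the post): two threads increment one counter without synchronization, so increments can be lost, while a second counter guarded by a lock stays consistent.

```java
// Minimal data-race illustration: `unsafeCounter` is updated without
// synchronization, `safeCounter` is guarded by a lock.
public class RaceDemo {
    static int unsafeCounter = 0;             // plain field: updates can be lost
    static int safeCounter = 0;               // always accessed under `lock`
    static final Object lock = new Object();

    static void runBothCounters() {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCounter++;                        // read-modify-write, not atomic
                synchronized (lock) { safeCounter++; }  // mutual exclusion keeps it consistent
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        runBothCounters();
        // safeCounter is always 200000; unsafeCounter typically is not.
        System.out.println("safe: " + safeCounter + ", unsafe: " + unsafeCounter);
    }
}
```

The race is exactly the kind of bug that is easy to write and hard to spot, which is what motivates better tooling.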

So, what’s the solution? I have been advocating for a while that we should investigate the combination of concurrency models. As part of this research, we started Project MetaConc. The goal of the project is not necessarily to design languages that combine models, but instead to provide more immediate solutions in the form of tools that allow us to reason about the complex programs we already have.

We outlined the vision in a short position paper last year. The general idea is to devise some form of meta-level interface for tools to abstract from the concrete concurrency models present in a language or library. By meta-level interface, I mean either some form of reflective API in a language, or an interface to the language implementation, which could be either an API or something like a debugger protocol. This would allow us to construct tools that can handle many different concurrency models, and ideally their combinations, too.
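To give a flavor of what such a meta-level interface might look like, here is a hypothetical sketch. All names are invented for illustration and are not taken from Project MetaConc’s actual design; the point is only that a tool programmed against the interface never mentions a specific concurrency model.

```java
import java.util.List;

// Hypothetical meta-level interface: a language implementation describes
// its concurrency model(s) to tools in an abstract way.
interface ConcurrencyModelMeta {
    String modelName();                // e.g. "actors" or "shared-memory threads"
    List<String> entityKinds();        // the units of concurrency the model exposes
    List<String> steppingOperations(); // model-specific stepping a debugger could offer
}

// One possible implementation, for an actor language:
class ActorModelMeta implements ConcurrencyModelMeta {
    public String modelName() { return "actors"; }
    public List<String> entityKinds() {
        return List.of("actor", "message", "turn");
    }
    public List<String> steppingOperations() {
        return List.of("step-to-next-turn", "step-to-message-receiver");
    }
}
```

A debugger or visualization tool would query these descriptions at run time instead of hard-coding knowledge about threads, actors, or any other model.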

We started the project by looking into debugging. Specifically, we focused on the actor model and identified the kinds of bugs that might occur and what kind of support we would want from a debugger to handle actor problems. Carmen, one of our PhD students, reported on this work at the AGERE’16 workshop.

I myself had good fun adding debugging support to SOMns. With Truffle’s debugger support, that’s pretty straightforward: you just need to add a few tags to your AST nodes, as described in my blog post. Since I was already busy with the debugger, I also invested some time into the language server protocol (LSP), a project pushed by Red Hat and Microsoft. I think that, with a platform like Truffle and a generic way of talking to an IDE like the LSP, it should be possible to get basic IDE support for a language just by implementing a Truffle interpreter. But since that’s getting a little off topic, I’ll just point at the Can we get the IDE for free, too? blog post. In practical terms, the LSP allowed me to provide very basic support for code completion, jump to definition, and debugging in Visual Studio Code.
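The tagging idea can be sketched without pulling in the Truffle dependency. In Truffle itself, nodes implement an instrumentation interface and report tags (such as the standard statement tag) that the debugger uses to decide where breakpoints and steps may land; the simplified, self-contained version below mimics that pattern with invented names.

```java
import java.util.Set;

// Simplified sketch of Truffle-style AST tagging (names are illustrative):
// each node reports which tags it carries, and the debugger only stops at
// nodes carrying the tags it cares about.
abstract class AstNode {
    // Roughly analogous to asking a Truffle node whether it has a given tag.
    abstract boolean hasTag(String tag);
}

class StatementNode extends AstNode {
    private static final Set<String> TAGS = Set.of("Statement", "Breakpoint");
    boolean hasTag(String tag) { return TAGS.contains(tag); }
}

class ExpressionNode extends AstNode {
    boolean hasTag(String tag) { return "Expression".equals(tag); }
}
```

A debugger walking the AST can then filter for, say, all nodes with the `"Breakpoint"` tag, without knowing anything else about the language’s node types.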

More recently, I demoed our own Kómpos debugger for various concurrency models at the <Programming>‘17 conference. It is a live debugger that abstracts from specific concurrency models, and instead allows us to use custom stepping operations and breakpoints as provided by the SOMns interpreter. In the demo, I wasn’t actually able to show that yet. That’s more recent work, which we wrote up and submitted for review to DLS’17. And at least for debuggers, I think we have come very close to the goal set out for the project. We devised a protocol that uses metadata to communicate from the interpreter to the debugger which breakpoints and stepping operations are supported. This makes the debugger completely independent of the concurrency models. We also showed that one can use such a concurrency-agnostic protocol to provide visualizations. And I hope that’s a good indication that we will be able to build other advanced debugging tools, too.
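The core idea of such a metadata protocol can be sketched as follows. This is my own illustration with invented names, not the actual Kómpos protocol: the interpreter advertises its breakpoint and stepping operations as plain data, so the debugger front end can build its UI from that description without any built-in knowledge of a concurrency model.

```java
import java.util.List;

// Hedged sketch of a metadata-based debugger protocol (invented names):
// the interpreter sends these descriptions once, and the debugger builds
// its breakpoint and stepping menus generically from them.
record BreakpointType(String id, String label) {}
record SteppingOperation(String id, String label) {}
record DebuggerCapabilities(List<BreakpointType> breakpoints,
                            List<SteppingOperation> stepping) {}

class Interpreter {
    // An actor-language interpreter might advertise operations like these:
    static DebuggerCapabilities describe() {
        return new DebuggerCapabilities(
            List.of(new BreakpointType("msg-receive", "On message receive")),
            List.of(new SteppingOperation("step-to-promise-resolution",
                                          "Step to promise resolution")));
    }
}
```

Adding a new concurrency model then only requires the interpreter to advertise new entries; the debugger itself stays unchanged.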

That’s it for today. There are so many other things I probably won’t get to mention. But in the next, and likely last, post in this series, I am going to look at the SOM family of language implementations.