Tag Archives: Parallelism

Domains: Sharing State in the Communicating Event-Loop Actor Model

It has been a while since we started working on how to extend the Actor model with mechanisms to safely share state. Our workshop paper on Tanks was published in 2013. Now, finally, an extended version of this work has been accepted for publication. Below you can find the abstract with a few more details, and of course a preprint of the paper itself.

Abstract

The actor model is a message-passing concurrency model that avoids deadlocks and low-level data races by construction. This facilitates concurrent programming, especially in the context of complex interactive applications where modularity, security and fault-tolerance are required. The tradeoff is that the actor model sacrifices expressiveness and safety guarantees with respect to parallel access to shared state.

In this paper we present domains as a set of novel language abstractions for safely encapsulating and sharing state within the actor model. We introduce four types of domains, namely immutable, isolated, observable, and shared domains, each of which is tailored to a certain access pattern on that shared state. The domains are characterized with an operational semantics. For each, we discuss how the actor model’s safety guarantees are upheld even in the presence of conceptually shared state. Furthermore, the proposed language abstractions are evaluated with a case study in Scala comparing them to other synchronisation mechanisms to demonstrate their benefits in deadlock freedom, parallel reads, and enforced isolation.
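
To give a flavor of what such a domain could look like, here is a minimal plain-Java sketch in the spirit of a shared domain: reads are synchronous and may proceed in parallel, while writes are serialized and applied asynchronously. The names and API below are illustrative assumptions, not the interface from the paper.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.locks.ReentrantReadWriteLock;
    import java.util.function.Consumer;
    import java.util.function.Function;

    // Illustrative sketch only; names are assumptions, not the paper's API.
    final class SharedDomain<S> {
      private final S state;
      private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
      private final ExecutorService writer = Executors.newSingleThreadExecutor();

      SharedDomain(S initialState) { this.state = initialState; }

      // Synchronous read: many actors can read the domain in parallel.
      <R> R read(Function<S, R> query) {
        lock.readLock().lock();
        try { return query.apply(state); }
        finally { lock.readLock().unlock(); }
      }

      // Asynchronous write: updates are serialized and exclusive,
      // so there are no low-level data races on the shared state.
      void writeAsync(Consumer<S> update) {
        writer.submit(() -> {
          lock.writeLock().lock();
          try { update.accept(state); }
          finally { lock.writeLock().unlock(); }
        });
      }
    }

Roughly speaking, an immutable domain would drop writeAsync entirely, while an isolated domain would restrict synchronous access to a single owning actor.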

  • Domains: Sharing State in the Communicating Event-Loop Actor Model; Joeri De Koster, Stefan Marr, Tom Van Cutsem, Theo D’Hondt; Computer Languages, Systems & Structures (2016)
  • Paper: PDF
  • BibTex: BibSonomy

Towards Meta-Level Engineering and Tooling for Complex Concurrent Systems

Last December, we got a research project proposal accepted for a collaboration between the Software Languages Lab in Brussels and the Institute for System Software here in Linz. Together, we will be working on tooling for complex concurrent systems. By that, I mean systems that use multiple concurrency models in combination to solve different problems, each with the appropriate abstraction. I have been working on these issues for a while already. Some pointers are available in an earlier post: Why Is Concurrent Programming Hard? And What Can We Do about It?

At the end of February, I am going to talk about this a little more at the Arbeitstagung Programmiersprachen in Vienna. Below, you can find the abstract and a link to the position paper. There is not a lot of concrete material in it yet, but it sketches the problems we will try to address in the years to come.

Abstract

With the widespread use of multicore processors, software becomes more and more diverse in its use of parallel computing resources. To address all application requirements, each with the appropriate abstraction, developers mix and match various concurrency abstractions made available to them via libraries and frameworks. Unfortunately, today’s tools such as debuggers and profilers do not support the diversity of these abstractions. Instead of enabling developers to reason about the high-level programming concepts they used to express their programs, the tools work only on the level of the library’s implementation. While this is a common problem for other libraries and frameworks as well, the complexity of concurrency exacerbates the issue further, and reasoning on the higher level of the concurrency abstractions is essential to manage the associated complexity.

In this position paper, we identify open research issues and propose to build tools based on a common meta-level interface to enable developers to reason about their programs based on the high-level concepts they used to implement them.
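
To make the idea of a common meta-level interface more tangible, here is a hypothetical Java sketch: concurrency libraries report their high-level events through a uniform interface, and tools such as debuggers and profilers subscribe to those events instead of working on the implementation level. All names here are assumptions for illustration; the paper does not define this interface.

    // Hypothetical sketch: a library implementing, e.g., actors or
    // transactions emits these events; debuggers and profilers subscribe.
    interface ConcurrencyEventListener {
      // A high-level entity was created, e.g., an actor, future, or transaction.
      void entityCreated(long entityId, String conceptName);

      // A high-level interaction occurred, e.g., a message send or a commit.
      void interaction(long sourceId, long targetId, String kind);

      // A unit of work, e.g., a message turn or a task, started or ended.
      void activityStarted(long entityId);
      void activityEnded(long entityId);
    }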

  • Towards Meta-Level Engineering and Tooling for Complex Concurrent Systems; Stefan Marr, Elisa Gonzalez Boix, Hanspeter Mössenböck; in ‘Proceedings of the 9th Arbeitstagung Programmiersprachen’ (ATPS ’16).
  • Paper: PDF, HTML
  • BibTex: BibSonomy

Why Is Concurrent Programming Hard? And What Can We Do about It? #vmm2015

Yesterday at the Virtual Machine Meetup, I gave a talk about why I think concurrent programming is hard, and what we can do about it.

The talk is very much related to an earlier blog post with the same title. My main point is that concurrent programming is hard because, on the one hand, there is no single concurrency abstraction that fits all problems, and, on the other hand, the various abstractions are rarely designed to be used in combination with each other. First work in that direction makes us hopeful that we can actually adapt interesting sets of concurrency abstractions to work together without causing combination issues such as unexpected deadlocks or data races.
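
To make such combination issues concrete, here is a small, self-contained Java example of the kind of unexpected deadlock that can arise when two abstractions are naively combined, in this case an event loop and blocking futures. It is my own illustration, not an example from the talk.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    public class CombinationDeadlock {
      public static void main(String[] args) throws Exception {
        // A single-threaded executor stands in for an actor-style event loop.
        ExecutorService eventLoop = Executors.newSingleThreadExecutor();

        Future<String> reply = eventLoop.submit(() -> {
          // The running task submits more work to its own event loop and
          // blocks on the result: the only thread now waits on itself.
          Future<String> inner = eventLoop.submit(() -> "pong");
          return inner.get();
        });

        try {
          System.out.println(reply.get(2, TimeUnit.SECONDS));
        } catch (TimeoutException e) {
          System.out.println("Deadlocked: the event loop is blocked on itself.");
        } finally {
          eventLoop.shutdownNow();
        }
      }
    }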

The blog post and the slide set below hopefully give a little more insight into what we have in mind.

Partitioned Global Address Space Languages

More than a decade ago, programmer productivity was identified as one of the main hurdles for future parallel systems. The so-called Partitioned Global Address Space (PGAS) languages try to improve productivity and explore a range of language design ideas. These PGAS languages are designed for large-scale high-performance parallel programming and provide the notion of a globally shared address space, while exposing explicit locality on the language level. Even though the main focus is high-performance computing, the language ideas are also relevant for the parallel and concurrent programming world in general.

As part of our research in the field of parallelism and concurrency, we studied the PGAS languages more closely to get a better understanding of the specific concepts they explore and to get a feeling for the tradeoffs of the various language design options. The result is a survey of the major PGAS languages, which was very recently accepted for publication in ACM Computing Surveys.

The preprint of the paper is available in PDF and HTML versions. The final edited version will probably take another eternity to appear, but oh well, that’s academia.

Abstract

The Partitioned Global Address Space (PGAS) model is a parallel programming model that aims to improve programmer productivity while at the same time aiming for high performance. The main premise of PGAS is that a globally shared address space improves productivity, but that a distinction between local and remote data accesses is required to allow performance optimizations and to support scalability on large-scale parallel architectures. To this end, PGAS preserves the global address space while embracing awareness of non-uniform communication costs.

Today, about a dozen languages exist that adhere to the PGAS model. This survey proposes a definition and a taxonomy along four axes: how parallelism is introduced, how the address space is partitioned, how data is distributed among the partitions and finally how data is accessed across partitions. Our taxonomy reveals that today’s PGAS languages focus on distributing regular data and distinguish only between local and remote data access cost, whereas the distribution of irregular data and the adoption of richer data access cost models remain open challenges.
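
The following plain-Java sketch mimics the core PGAS idea rather than any particular PGAS language: one logical array is partitioned across places, accesses to the local partition are direct, and remote accesses go through an explicit, more expensive path. All names are illustrative assumptions.

    // Plain-Java sketch of the PGAS idea (not any specific PGAS language):
    // one global address space, partitioned across "places", with the
    // local/remote distinction visible in the code.
    final class PartitionedArray {
      private final double[][] partitions; // partitions[place][localIndex]
      private final int partitionSize;

      PartitionedArray(int placeCount, int partitionSize) {
        this.partitionSize = partitionSize;
        this.partitions = new double[placeCount][partitionSize];
      }

      int ownerOf(int globalIndex) { return globalIndex / partitionSize; }

      // Read as seen from 'myPlace': local accesses are direct and cheap.
      double get(int myPlace, int globalIndex) {
        int owner = ownerOf(globalIndex);
        int local = globalIndex % partitionSize;
        if (owner == myPlace) {
          return partitions[owner][local]; // local partition: direct access
        }
        return fetchRemote(owner, local);  // remote: explicit, costlier path
      }

      // Stand-in for the communication a real PGAS runtime would perform.
      private double fetchRemote(int place, int localIndex) {
        return partitions[place][localIndex];
      }
    }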

  • Partitioned Global Address Space Languages; Mattias De Wael, Stefan Marr, Bruno De Fraine, Tom Van Cutsem, Wolfgang De Meuter; ACM Computing Surveys, to appear.
  • Paper: PDF, HTML
  • DOI: TBA
  • BibTex: BibSonomy

Fork/Join Parallelism in the Wild: Documenting Patterns and Anti-Patterns in Java Programs using the Fork/Join Framework

Parallel programming is frequently claimed to be hard, and all kinds of approaches have been proposed to address the complexity issues. The Fork/Join programming style introduced with Cilk enables the parallel decomposition of problems in a recursive divide-and-conquer style and, on the surface, looks very simple with its minimalistic approach of having just a fork and a join language construct. But is it actually simple to use? To find out, Mattias started to dig through the Java open source projects on GitHub and tried to identify common patterns. Next week, he will present our findings at PPPJ’14.
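
For readers unfamiliar with the framework, a minimal example (my own sketch, not code from the studied corpus) looks roughly like this: a task splits itself recursively, forks one half, computes the other half directly, and joins.

    import java.util.Arrays;
    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    // Minimal Fork/Join example: recursive divide-and-conquer summation.
    class SumTask extends RecursiveTask<Long> {
      private static final int CUTOFF = 10_000; // Sequential Cutoff pattern
      private final long[] data;
      private final int lo, hi;

      SumTask(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
      }

      @Override
      protected Long compute() {
        if (hi - lo <= CUTOFF) { // small enough: sum sequentially
          long sum = 0;
          for (int i = lo; i < hi; i++) { sum += data[i]; }
          return sum;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(data, lo, mid);
        left.fork();                                       // run left half asynchronously
        long right = new SumTask(data, mid, hi).compute(); // compute right half inline,
        return right + left.join();                        // avoiding an unnecessary fork
      }

      public static void main(String[] args) {
        long[] data = new long[1_000_000];
        Arrays.fill(data, 1L);
        long sum = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(sum); // prints 1000000
      }
    }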

The preprint of the paper is available below. Additionally, Mattias made the information on the corpus and how to obtain it available.

Abstract

Now that multicore processors are commonplace, developing parallel software has escaped the confines of high-performance computing and entered the mainstream. The Fork/Join framework, for instance, has been part of the standard Java platform since version 7. Fork/Join is a high-level parallel programming model advocated to make parallelizing recursive divide-and-conquer algorithms particularly easy. While, in theory, Fork/Join is a simple and effective technique to expose parallelism in applications, it has not been investigated before whether and how the technique is applied in practice. We therefore performed an empirical study on a corpus of 120 open source Java projects that use the framework for roughly 362 different tasks.

On the one hand, we confirm the frequent use of four best-practice patterns (Sequential Cutoff, Linked Subtasks, Leaf Tasks, and avoiding unnecessary forking) in actual projects. On the other hand, we also discovered three recurring anti-patterns that potentially limit parallel performance: sub-optimal use of Java collections when splitting tasks into subtasks as well as when merging the results of subtasks, and finally the inappropriate sharing of resources between tasks. We document these anti-patterns and study their impact on performance.
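
As an illustration of the resource-sharing anti-pattern (my own sketch, not code from the studied corpus), consider subtasks that collect their results through one shared collection instead of returning and merging them:

    import java.util.List;
    import java.util.concurrent.RecursiveAction;

    // Anti-pattern sketch: all subtasks write into one shared, unsynchronized
    // list. ArrayList is not thread-safe, so this races; even a synchronized
    // list would serialize the subtasks on a single lock and limit speedup.
    class CollectIntoShared extends RecursiveAction {
      private final int lo, hi;
      private final List<Integer> shared; // shared between all subtasks

      CollectIntoShared(int lo, int hi, List<Integer> shared) {
        this.lo = lo; this.hi = hi; this.shared = shared;
      }

      @Override
      protected void compute() {
        if (hi - lo <= 1) {
          shared.add(lo); // racy write to the shared list
          return;
        }
        int mid = (lo + hi) >>> 1;
        invokeAll(new CollectIntoShared(lo, mid, shared),
                  new CollectIntoShared(mid, hi, shared));
      }
    }
    // The pattern-conforming alternative is a RecursiveTask<List<Integer>>
    // where each subtask returns its own list and the parent merges them.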

  • Fork/Join Parallelism in the Wild: Documenting Patterns and Anti-Patterns in Java Programs using the Fork/Join Framework; Mattias De Wael, Stefan Marr, Tom Van Cutsem; in ‘Proceedings of the 2014 International Conference on Principles and Practices of Programming on the Java Platform: Virtual Machines, Languages, and Tools’ , pp. 39-50.
  • Paper: PDF
  • BibTex: BibSonomy
  • Corpus and additional material: online appendix