Tag Archives: STM

Why Is Concurrent Programming Hard? And What Can We Do about It? #vmm2015

Yesterday at the Virtual Machine Meetup, I gave a talk about why I think concurrent programming is hard, and what we can do about it.

The talk is very much related to an earlier blog post with the same title. My main point is that concurrent programming is hard because, on the one hand, there is no single concurrency abstraction that fits all problems, and, on the other hand, the various abstractions are rarely designed to be used in combination with each other. First work in that direction makes us hopeful that we can actually adapt interesting sets of concurrency abstractions to work together without causing combination issues such as unexpected deadlocks or data races.

The blog post and the slide set below hopefully give a few more insights into what we have in mind.

Why Is Concurrent Programming Hard?

In short, I think it is hard because, on the one hand, there is no single concurrency abstraction that fits all problems, and, on the other hand, the various abstractions are rarely designed to be used in combination with each other.

But let us start at the beginning; otherwise, the terminology might get in the way. For the purpose of this discussion, I distinguish concurrent programming and parallel programming.

Concurrent programming and its corresponding programming abstractions focus on the correctness of a computation and the consistency of state in the context of parallel or interleaved execution.

Parallel programming and its corresponding programming abstractions focus on structuring an algorithm in a way that parallel computational resources are used efficiently.

Thus, while both programming approaches are related and often used in combination, their goals and consequently their main abstractions are different. I am not claiming that parallel programming is solved and widely understood, but it is comparably easy to apply to an isolated problem when performance is an issue. The emphasis here is on isolated, because the integration into an existing system is the hard part and can expose all kinds of concurrency issues, for which concurrent programming techniques are required as a solution.

When Do Concurrent Programming Abstractions Break Down?

The first question might be: why and when do we use concurrent programming? The main reason is the desire to increase “performance”, either by increasing throughput, by reducing latency, or by improving interactivity by moving operations off the critical path. Sometimes, a concurrent design also happens to map best onto a problem by aligning the program structure with the domain model, for instance in terms of tasks or processes, and is thus chosen as the solution.

A classic example calling for concurrent execution is user interfaces. Independent of a particular solution, the overall goal is to move a computational or I/O task out of the loop that processes user-generated events, to maintain the interactivity of an application. This offloading can be done by using some form of library for asynchronous I/O and computation. To give but a few examples, C#’s async/await is frequently used for this purpose, as are Java’s ExecutorService and Clojure’s future.
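To make the offloading pattern concrete, here is a minimal Clojure sketch; the render-result! callback and the URL are hypothetical stand-ins for the application’s UI code and the slow operation:

    ;; Offload slow I/O from the event loop; `future` runs its body on a
    ;; background thread pool, so the event-processing loop stays responsive.
    (defn handle-click [render-result!]
      (future
        (let [report (slurp "https://example.org/report")] ; slow I/O off the event loop
          (render-result! report))))                       ; hand the result back to the UI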

In another simple scenario, the application is already parallel, and some form of execution monitoring needs to be added, either to steer optimizations or even for billing purposes. Depending on the concrete scenario, various solution approaches are available. In a non-performance-critical scenario, a simple atomically modified counter can be sufficient. When performance matters, it might be more appropriate to gather the initial counts locally to a single thread; however, this requires later communication to build the sum of all thread-local values, which might lead to consistency issues, because it is harder to get one globally consistent snapshot of all local counters. Depending on the requirements, all kinds of different solutions in between could be devised. If, for instance, the exact count does not matter, a scalable non-zero indicator might suffice. Either way, the problem remains the same: an existing parallel program needs to be changed, which potentially introduces concurrency issues.
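As a rough Clojure sketch of the first two options (the function names are mine; LongAdder is Java’s counter that keeps thread-locally striped counts and sums them on demand):

    (import 'java.util.concurrent.atomic.LongAdder)

    ;; Option 1: a single, atomically modified counter -- simple, but all
    ;; threads contend on one location.
    (def counter (atom 0))
    (defn record-event! [] (swap! counter inc))

    ;; Option 2: thread-locally striped counts -- increments scale well,
    ;; but (.sum adder) is only a weakly consistent snapshot while other
    ;; threads are still incrementing.
    (def adder (LongAdder.))
    (defn record-event-scalable! [] (.increment adder))
    (defn approximate-count [] (.sum adder))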

Keeping something like an independent counter consistent is, however, rather trivial compared to making parallel or concurrent operations on a large and complex shared data structure yield consistent and correct results. Imagine a tree or graph representation of a program, as frequently employed by IDEs. In such a scenario, various subsystems might want to change or annotate the graph: for instance, to include inferred types, add test coverage or history information, apply refactorings, or simply account for the changes done by the user in the editor. Often the relevant subsystems work concurrently. One could of course make the graph immutable and have ‘updates’ produce strictly new versions of it. However, for various reasons other choices might be made, and then the question arises how consistent updates are possible. Solutions could potentially include locks or software transactional memory (STM).
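As a sketch of the STM option in Clojure (the graph layout is invented for illustration): the program graph lives in a ref, and every subsystem annotates it inside a transaction, so concurrent updates either see and produce a consistent graph or are retried:

    ;; A program graph held in a ref; all updates go through transactions.
    (def graph (ref {:nodes {1 {:label "main" :annotations {}}}
                     :edges #{}}))

    (defn annotate-node! [id key value]
      (dosync
        (alter graph assoc-in [:nodes id :annotations key] value)))

    ;; Two subsystems annotating concurrently cannot corrupt the graph:
    ;; (future (annotate-node! 1 :inferred-type 'String))
    ;; (future (annotate-node! 1 :coverage 0.87))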

When making the decision for how to manage the consistency of such a concurrently updated graph, the rest of the system unfortunately has to be considered as well. Suddenly, our inconspicuous counter might need to take into account that the STM might retry transactions. Similarly, the library for asynchronous tasks might suddenly need to retract a task from the run queue when a transaction is retried. This is the point that makes concurrent programming really hard. Such ‘design’ decisions are not strictly local anymore, and the question arises, not only for STM but for all concurrent programming abstractions: do they compose well?
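Clojure makes this interaction easy to demonstrate. In the hypothetical sketch below, a transaction also bumps a statistics counter; because swap! on an atom is a side effect that the STM neither defers nor rolls back, the counter over-counts whenever the transaction is retried:

    (def graph (ref {}))   ; shared structure updated transactionally
    (def stats (atom 0))   ; the 'inconspicuous' statistics counter

    (defn update-and-count! [k v]
      (dosync
        (alter graph assoc k v) ; transactional; may be re-run on conflict
        (swap! stats inc)))     ; side effect: executed again on every retry,
                                ; so under contention stats can exceed the
                                ; number of committed transactions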

Huge Number of Different Abstractions

The question of whether concurrent programming abstractions compose is not at all straightforward to answer. As the discussion in the previous section indicates, there are many different possible requirements in any given situation, so that even the design of a simple counter becomes a complex undertaking. Over the decades, the huge number of tradeoffs has resulted in many different variations of a few at least superficially related concepts. The tradeoffs are also not only about performance, for instance in terms of how many guarantees a framework provides. Often, somewhat philosophical points enter the discussion. For instance, some people argue that blocking operations better preserve the local sequential view on a system and are therefore simpler to program with, often however at the cost of potential deadlocks. On the other hand, a completely asynchronous, non-blocking design might be deadlock free, but depending on how the language exposes it, one might end up in callback hell, and the code becomes hard to maintain. Yet another aspect is whether to allow non-determinism. It can be easier to reason about a strictly deterministic system; however, such a language or framework might restrict expressiveness so much that not all conceivable applications can be expressed in it.

To give a few examples: the futures of Clojure and Java are blocking, which always introduces the risk of deadlocks when they are used in conjunction with other blocking abstractions. The futures offered by AmbientTalk and E (called promises) are inherently non-blocking, fitting the overall nature of these two languages, which are non-blocking and deadlock free. Consequently, however, the two kinds of futures are used differently, and one might argue that one is preferable over the other in certain situations.
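A contrived Clojure sketch of the blocking risk: two futures that each block, via deref, on a promise the other is supposed to deliver will wait on each other forever; a non-blocking promise design as in E or AmbientTalk rules this out by construction:

    (def p1 (promise))
    (def p2 (promise))

    ;; Each future blocks on the other's promise before delivering its own.
    (future (deliver p2 (inc @p1))) ; waits for p1
    (future (deliver p1 (inc @p2))) ; waits for p2
    ;; Neither promise is ever delivered: a deadlock.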

The situation is similar when it comes to concrete implementations of communicating sequential processes. Personally, I consider the strict isolation between processes, and therefore the guarantee that any form of communication has to go via explicit channels, a major property that simplifies reasoning about the concurrent semantics and, for instance, ensures that programs are free from low-level data races. However, Go chose to adopt the notion of communicating via channels, but its goroutines are not isolated. JCSP, a Java library, goes the same way. The occam-pi language, on the other hand, chose to stick with the notion of fully isolated processes. The same design discussion can be had for implementations of the actor model: AmbientTalk and Erlang go with fully isolated processes, while Akka, for instance, makes the pragmatic decision that it cannot guarantee isolation because it runs on top of a JVM.

This discussion could go on for quite a while. Wikipedia currently lists more than 60 concurrent programming languages, most of which implement some specific variation of a concept. In previous work, we identified roughly a hundred concepts related to concurrent programming.

It can of course be argued that a single language will not support all of them, and thus applications will perhaps only have to cope with a handful of concurrent programming abstractions. However, looking at large open source applications such as IDEs, it seems that the various subsystems from time to time introduce their own abstractions. NetBeans, for instance, has various representations of asynchronous-task or future-like abstractions, and there are at least two implementations of somewhat ‘transactional’ systems, one in the refactoring subsystem and another in the profiling library. They seem to implement something along the lines of STM, with different degrees of complexity. And this again raises the question of how these different abstractions interact with each other. A look at NetBeans’ bug tracker yields more than 4000 bugs that contain the word “deadlock” and more than 500 bugs with the phrase “race condition”. While most of these bugs are marked as closed, this is probably a good indication that concurrent programming is hard and error prone.

Concurrent Programming Abstractions Not Designed for Use in Combination

Usually, concurrent programming is considered hard because low-level abstractions such as threads and locks are used. While NetBeans uses these to a significant extent, it also uses considerably higher-level concepts such as futures, asynchronous tasks, and STM. I would argue that the problem is not necessarily the abstraction level, but that various concurrent programming abstractions are used together even though they have not been designed for that purpose. While each abstraction in isolation is well tailored to its purpose, and thus reduces accidental complexity, concurrency often does not remain confined to modules or subsystems, and thus the interaction between the abstractions causes significant accidental complexity.

As far as I am aware, few languages have been designed from the ground up with concurrency in mind, and even fewer with the interaction of concurrent programming abstractions in mind. While Java, for instance, was designed with threads in mind and has the synchronized keyword to facilitate thread-based programming, its memory model and the java.util.concurrent libraries were only added in Java 5. Arguably, Java’s libraries are so low-level that languages such as Clojure and Scala try to close the gap. Clojure, consequently, was designed from the start with concurrent programming in mind. It started out with atoms, agents, and STM to satisfy the different use cases for concurrency. However, even though Clojure was designed with them from the start, they do not interact well. Atoms are considered low-level and do not regard transactions or agents at all. The STM, on the other hand, accounts for agents by deferring message sends until the end of a transaction, to make sure that a transaction can be retried safely. With only these three abstractions, Clojure could actually be considered a fine example. However, these abstractions were apparently not sufficient to cover all use cases equally well, and futures and promises, as well as CSP in the form of the core.async library, were added. Unfortunately, these were not designed to integrate well with the existing abstractions. Instead, they were merely added, and interactions can cause, for instance, unexpected deadlocks or race conditions (for more details see this paper).
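The asymmetry is easy to see in a small sketch (the names are mine): inside a transaction, a send to an agent is held back until the transaction commits, so retries dispatch it only once, while a swap! on an atom takes effect immediately on every attempt:

    (def account   (ref 100))
    (def audit-log (agent []))  ; integrated: sends are deferred to commit
    (def attempts  (atom 0))    ; not integrated: updated on every attempt

    (defn withdraw! [amount]
      (dosync
        (swap! attempts inc)     ; counts retried attempts as well
        (alter account - amount)
        (send audit-log conj [:withdraw amount]))) ; dispatched once, at commit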

To give a more academic example, which might be less governed by pragmatic concerns, Haskell is a reasonable candidate. Unfortunately, even in Haskell the notion of adding instead of integrating seems to be the prevalent one. I am not a Haskell expert, but its STM shows the same symptoms Clojure’s has, if in a slightly different way. The standard Control.Concurrent package comes, for instance, with MVar and Chan as abstractions for mutable state and communication channels. But instead of integrating the STM with these, it introduces its own variants, TMVar and TChan. Performance reasons might have led to this situation. However, from the perspective of engineering large applications, this can hardly be ideal, because the question of whether these abstractions can be used without problems in the same application remains unanswered.

Conclusion

I think that concurrent programming is hard because the abstractions we use today are not prepared for the job. They are good for one specific task, but they are not easily used in conjunction with each other. Instead, interactions can lead, for instance, to unexpected race conditions or deadlocks. And just to support the claim that interaction is an issue: it is not just NetBeans that uses a variety of concurrent programming concepts. Eclipse looks similar, and so do MonoDevelop and SharpDevelop. A study in the Scala world also suggests that application developers choose to combine the actor model with other abstractions, for instance for performance reasons.

So, what’s the solution? I think we need to design languages and libraries that properly integrate a variety of concurrent programming abstractions. How that should look concretely, I don’t know yet. The work of Joeri De Koster shows what solutions could look like for actor languages, and together with Janwillem Swalens, we are extending this work to a wider set of languages. Personally, I still believe that the ownership-based metaobject protocol is a useful foundation to experiment with various concurrent programming abstractions on top of one language. But we will see.

For comments, suggestions, ideas, or complaints that I did not consider your language that already solves the problem, please catch me on Twitter @smarr or send me a mail.

Towards Composable Concurrency Abstractions

One of the big questions that came up during my PhD was: OK, now you have your fancy ownership-based metaobject protocol, and you can implement actors, agents, communicating sequential processes, software transactional memory, and many others, but now what? How are you going to use all of these in concert in one application? Finding a satisfying answer is unfortunately far from trivial.

Since I am far from the first person thinking about these problems, we, that is, Tom, Joeri, and most notably Janwillem, put our heads together to figure out what the main issues are and what solutions others have come up with. Janwillem took the lead and started to write down our first preliminary findings in a paper for the PLACES workshop, co-located with ETAPS in April.

Below, you can find the preprint and abstract of the paper. It is only a first small step, but I hope it won’t be the last one because in the end, the OMOP is only going to be useful if we actually can figure out how to combine the various concurrent programming models it enables in a safe and efficient manner.

Abstract

In the past decades, many different programming models for managing concurrency in applications have been proposed, such as the actor model, Communicating Sequential Processes, and Software Transactional Memory. The ubiquity of multi-core processors has made harnessing concurrency even more important. We observe that modern languages, such as Scala, Clojure, or F#, provide not one, but multiple concurrency models that help developers manage concurrency. Large end-user applications are rarely built using just a single concurrency model. Programmers need to manage a responsive UI, deal with file or network I/O, asynchronous workflows, and shared resources. Different concurrency models facilitate different requirements. This raises the issue of how these concurrency models interact, and whether they are composable. After all, combining different concurrency models may lead to subtle bugs or inconsistencies.

In this paper, we perform an in-depth study of the concurrency abstractions provided by the Clojure language. We study all pairwise combinations of the abstractions, noting which ones compose without issues, and which do not. We make an attempt to abstract from the specifics of Clojure, identifying the general properties of concurrency models that facilitate or hinder composition.

  • Towards Composable Concurrency Abstractions; Janwillem Swalens, Stefan Marr, Joeri De Koster, Tom Van Cutsem; Proceedings of the workshop on Programming Language Approaches to Concurrency and Communication-cEntric Software, 2014, co-located with ETAPS.
  • Paper: PDF
  • BibTex: BibSonomy

What If: Developing Applications in the Multicore Era

Yesterday was the first day of Smalltalks 2012 in Puerto Madryn. The organizers invited me to give a keynote on a topic of my choice, which I gladly did. Having just handed in my thesis draft, I chose to put my research into the context of Smalltalk and to relate it to one of the main open questions: how do we actually want to program multicore systems?

The talk went OK, I think. Compared to academic conferences, I was surprised by the number of questions people asked. The discussions were also much more interesting for me than at a typical conference. Overall, a good experience.

Abstract

What if you needed to use all the processor cores you have to get your application to run with acceptable performance? This talk explores how we can support the various abstractions for concurrent and parallel programming that would help us master the challenges of the multicore era. We show a variant of the RoarVM with a novel metaobject protocol that allows us to implement agents, actors, software transactional memory, and others easily while preserving performance.

Slides

Recording

Trends in Concurrency 2010, part 1

I already posted the presentation I gave at the summer school earlier. In the following posts, I will report a bit about the lectures of the summer school, similar to my posts about the TPLI summer school of last year.

Like last year, my thanks go to FWO and the summer school, for providing me with grants to cover all the costs. Thanks 🙂

The first two days covered four different topics. Adam Welc started with an introduction to Software Transactional Memory, Jean-Pierre Talpin gave lectures on the topic Virtual Prototyping Embedded Architectures, Satnam Singh reported on his experience with Data-Parallel Programming, and last but not least, Peter Sewell introduced us to the mysteries of hardware memory models with his talk titled Low-Level Concurrency – For Real.

Adam Welc: Software Transactional Memory

Adam, who works for Intel Labs, talked in his first presentation mainly about the semantics of STM and the different approaches to designing an STM system. The second presentation concentrated on how the implementation of such a system works.

His presentations were very much in the spirit of the STM system Intel provides as a prototype, but the general ideas are used in other systems as well.

The different options for the semantics of an STM included open vs. closed transaction nesting, transaction flattening, as well as abort and retry. Furthermore, he discussed the interaction with, for instance, legacy code, which leads to the question of weak vs. strong atomicity properties and the semantics of single-global-lock atomicity. The last semantic question he discussed was how consistency should be provided and how pointer privatization could be supported by the system. Afterwards, he outlined the design space for STM systems, introducing optimistic vs. pessimistic transactions as well as write buffering vs. in-place updates.

In the second talk, he presented the implementation details.

For their C++-based STM, the compiler needs additional information about the characteristics of all code, which is provided by annotations at the class and method level. Similarly, the compiler supports atomic blocks as well as abort and retry statements. Besides the implementation details, he also discussed several optimizations the compiler attempts in order to reduce the STM overhead of read/write barriers wherever possible. My gut feeling tells me someone should write an STM+tracing-JIT story. Not sure whether that buys anything, but what I have seen of STM looks a lot like trace guards.

The last slides he presented were about performance. The basic conclusion still is: performance is an issue. However, he gave me an interesting pointer to one of his papers, which basically states that transactions can be used for locks without contention quite straightforwardly: Transparently reconciling transactions with locking for Java synchronization.

Jean-Pierre Talpin: Virtual Prototyping Embedded Architectures

Unfortunately, I missed the introductory part of these lectures due to problems with our accommodation *sigh*. However, as far as I understood, in the domain of embedded systems for large-scale applications like aircraft, there is always a need to combine a large number of different technologies, and the final goal is to simulate, analyze, and verify the software before it is deployed into production use. Especially the verification part is important for critical systems as in avionics.

The systems in this domain are mostly event-driven, i.e., in his terminology, an asynchronous composition of reactive processes.

For this purpose, they developed Polychrony, including an Eclipse integration. It allows using data-flow models for computation and mode automata for control. These models are verified with model-checking techniques and allow controller synthesis.

At the heart of this system is a data-flow language that allows expressing such an asynchronous event system in terms of synchronous modules, which have better characteristics with respect to verification. Furthermore, they established the means to prove certain correctness criteria.

For an aerospace project, they used this as a basis and code-generation framework. One of their techniques uses SSA (static single assignment form) as an intermediate representation to model existing software: they translate the input via SSA to their synchronous data-flow formalism, enabling their analyses.

Satnam Singh: Data-Parallel Programming

The motivation behind his work is to allow users to utilize parallelism in a civil manner, i.e., with a reasonable effort/gain ratio. His premise is that programming models like SIMD, OpenMP, and MPI are inherently low-level and require very detailed knowledge about the target architecture, which restricts their applicability. Not only does this increase development cost, it also restricts the application to a single platform. With Microsoft’s Accelerator, they are developing a high-level data-parallel library that targets various platforms, starting with standard multi-core CPUs with SIMD instruction extensions, GPGPUs, and FPGAs.

The user describes the computation by defining a data-flow graph, which expresses the intention instead of a detailed mapping to an execution. Most notably, they provide parallel array structures on which operations can be performed without necessarily using an index, deferring these details to the runtime, which can optimize the relevant array operations for the target architecture. Furthermore, the library provides certain standard mechanisms for working in a data-parallel way, including various operations on the parallel arrays that can, for instance, be combined with appropriate reduce operations. An important operation he emphasized is the array shift operation, which is used extensively to describe necessary data transformations. Instead of doing these kinds of transformations explicitly, the runtime system can highly optimize them for the target platform.

The resulting program description is just-in-time compiled for the target machine and only then executed. For the typical problems they approach, the construction of the graph describing the program and the JIT overhead are negligible.

However, this form of nested data parallelism also brings some problems. Especially the distribution of the problem over the computing platform is hard, since the shape of the resulting computational graph depends on the input data.

Peter Sewell: Low-Level Concurrency – For Real

The basic conclusion of his talk is: it is impossible to write portable, correct, and efficient concurrent code for any existing hardware platform, because the manufacturers do not actually tell you all the constraints/properties in their specifications.

So he basically starts to rant about all the processor manufacturers and their specs. He begins with a seemingly simple example and demonstrates that the expected outcome is not guaranteed.

Then he goes on to describe various examples for x86 and relates them to the tricks a CPU applies, which result in rather weak memory models.

Based on that, he presents a model they developed, called x86-TSO (Total Store Order), which allows one to reason about the correctness of concurrent x86 programs. This model is eventually used to introduce the notion of triangular races, which allows reasoning about a class of programs that can contain data races, for instance lock implementations.

From here, he goes on to rant about all the other existing hardware architectures and their semantics, or lack of specification. For years, they tried to come up with a formal model like x86-TSO for these architectures, but failed.

To conclude his lectures, he presents 10 different options for how to define a sane memory model, for instance, for a programming language. He starts out with option 1: DON’T, i.e., no concurrency. Well, my personal impression still is that this is his favorite option. It has several advantages; especially, it is very simple. In the end, he comes to option 10, which means having a memory model similar to that of C++0x: a data-race-free model that provides low-level concurrency primitives and allows one to break abstractions. Apart from the data-race-free part, that is something I had expected/thought about earlier for a good VM.