The first day of the technical tracks, including OOPSLA, started with a keynote by Ivan Sutherland titled The Sequential Prison. His main point was that the way we think, and the way we build machines and software, is based on sequential concepts. The words we use to communicate and express ourselves are often of a very sequential nature; his examples included call, do, repeat, program, and instruction. Our thinking is also shaped and restricted by basic data structures and concepts like strings (character sequences). However, there are also words that lend themselves much better to thinking about concurrency and parallelism; his examples here included configure, pipeline, connect, channel, network, and path.

After the talk, David asked him what he would do on the first day of a class on how to program a massively parallel system. His answer was something like: “I would probably retire!”, making the point that this is a hard problem which requires creative solutions.

Catching Concurrency Bugs

After the keynote, the first technical track started with a session on catching concurrency bugs. The first paper presented was Sheriff: Precise Detection and Automatic Mitigation of False Sharing. As far as I understood, it is a pthread replacement that abuses memory-protection features and copy-on-write tricks to detect write-write contention at cache-line granularity. The tool can attribute that contention back to the allocation site, which does not seem to be terribly useful if I manage my heap myself :-/
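To make the idea concrete, here is a toy sketch (not Sheriff itself, whose mechanism is page-protection based) of the analysis it performs: group a write trace by cache line, flag lines written by multiple threads at different addresses, and attribute them back to allocation sites. All names and the trace format are made up for illustration.

```python
# Toy sketch of cache-line-granularity write-write contention detection.
CACHE_LINE = 64  # bytes per cache line (typical on x86)

def false_sharing_candidates(write_trace, alloc_site_of):
    """write_trace: iterable of (thread_id, address) write events.
    alloc_site_of: maps an address to the allocation site that owns it.
    Returns cache lines written by multiple threads at *different*
    addresses -- candidates for false sharing -- with the allocation
    sites involved."""
    by_line = {}
    for tid, addr in write_trace:
        by_line.setdefault(addr // CACHE_LINE, set()).add((tid, addr))
    candidates = {}
    for line, events in by_line.items():
        threads = {tid for tid, _ in events}
        addrs = {addr for _, addr in events}
        # Multiple threads writing different addresses on one line: no
        # true sharing, but the line ping-pongs between caches.
        if len(threads) > 1 and len(addrs) > 1:
            candidates[line] = {alloc_site_of[a] for a in addrs}
    return candidates

# Two threads hammer adjacent counters allocated at the same site.
trace = [(0, 0x1000), (1, 0x1008)] * 3
sites = {0x1000: "counters.c:42", 0x1008: "counters.c:42"}
print(false_sharing_candidates(trace, sites))
# {64: {'counters.c:42'}}  (0x1000 // 64 == 64)
```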

Accentuating the Positive: Atomicity Inference and Enforcement Using Correct Executions was the second paper presented. The authors use an inference technique to place locks that prevent data races still present in the code. The most severe limitation seems to be that it only works for stack variables. However, the idea of making almost-correct code more correct looks interesting.

The third paper, SOS: Saving Time in Dynamic Race Detection with Stationary Analysis, focuses on identifying objects that do not change their state after a certain initialization period. They call such objects stationary, since they cannot participate in races. It would be interesting to see what we could get out of a similar analysis to decide when to promote an object into the RoarVM’s read-mostly heap.
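The stationarity criterion can be sketched in a few lines: an object is stationary if all writes to it happen before it becomes visible to other threads, so later concurrent reads are race-free. This is my own minimal restatement of the idea, not the paper's algorithm; the event encoding is invented.

```python
# Toy check of the "stationary object" property over an access trace.
def is_stationary(events):
    """events: chronological list of ('w'|'r', thread_id) accesses to one
    object. The object counts as stationary if no write occurs after the
    first read by a thread other than the (initial) writer, i.e. after
    the object has become shared."""
    first_foreign_read = None
    writer = None
    for i, (kind, tid) in enumerate(events):
        if kind == 'w' and writer is None:
            writer = tid
        if kind == 'r' and tid != writer and first_foreign_read is None:
            first_foreign_read = i
        if kind == 'w' and first_foreign_read is not None:
            return False  # write after the object became shared
    return True

# Initialized by thread 0, then only read by others: stationary.
assert is_stationary([('w', 0), ('w', 0), ('r', 1), ('r', 2)])
# Written again after thread 1 saw it: not stationary.
assert not is_stationary([('w', 0), ('r', 1), ('w', 0)])
```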

The last presentation in that session was about Testing Atomicity of Composed Concurrent Operations. Here they use commutativity specifications to reduce the search space and testing effort. The goal is to find, for instance, pairs of operations that are meant to be executed atomically but are not properly synchronized.
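The classic instance of the bug class they target is check-then-act: two operations that are each atomic, composed into an operation that is not. A minimal illustration (this is the generic textbook example, not the paper's technique):

```python
# Two individually atomic dict operations composed non-atomically.
import threading

d = {}
lock = threading.Lock()

def put_if_absent_racy(key, value):
    # 'key in d' and 'd[key] = value' are each atomic under CPython's
    # GIL, but another thread can interleave between them, so two racing
    # callers may both think they "won" the insertion.
    if key not in d:
        d[key] = value
        return True
    return False

def put_if_absent_atomic(key, value):
    # Hold one lock across the whole composed operation so it appears
    # atomic to other threads.
    with lock:
        if key not in d:
            d[key] = value
            return True
        return False

assert put_if_absent_atomic("a", 1) is True
assert put_if_absent_atomic("a", 2) is False and d["a"] == 1
```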

Parallelizing Compilers

The second session started with Hawkeye: Effective Discovery of Dataflow Impediments to Parallelization. They use a dynamic analysis to determine dependencies. The analysis further uses abstraction in the form of ADT semantics to interpret the recorded traces; the ADT semantics abstract from the details and avoid having to track everything precisely. Based on this analysis, they developed a tool that can be used to identify undesirable dependencies and iteratively improve a program.
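The core of such a trace-based dependence analysis can be sketched briefly: record which locations each loop iteration reads and writes, and report a flow dependence whenever a later iteration reads something an earlier one wrote. This is my own simplification, not Hawkeye's actual machinery; the trace format is invented.

```python
# Toy trace-based detection of loop-carried flow dependences.
def loop_carried_deps(trace):
    """trace: per-iteration list of (reads, writes) location sets.
    Returns the set of (i, j) pairs, i < j, where iteration j reads a
    location written by iteration i."""
    deps = set()
    for j, (reads_j, _) in enumerate(trace):
        for i in range(j):
            _, writes_i = trace[i]
            if reads_j & writes_i:
                deps.add((i, j))
    return deps

# Iteration 1 reads 'x', which iteration 0 wrote: not parallelizable.
assert loop_carried_deps([(set(), {'x'}), ({'x'}, {'y'})]) == {(0, 1)}
# If the accesses are really commutative ADT operations (say, inserts
# into a set), an ADT-aware abstraction can model them as writes with no
# reads, and the flow dependence disappears:
assert loop_carried_deps([(set(), {'s'}), (set(), {'s'})]) == set()
```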

The second paper was Automatic Fine-Grained Locking using Shape Properties. The goal is to go from unsynchronized code to fine-grained synchronized code. For that purpose, they describe a new locking protocol for object graphs called Domination Locking, which is supposed to be more general than earlier approaches.
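For a sense of what fine-grained locking over an object graph looks like, here is the classic hand-over-hand "lock coupling" scheme on a linked list, one of the earlier per-object approaches such protocols generalize. To be clear, this is not the paper's Domination Locking protocol, just a well-known point of comparison.

```python
# Hand-over-hand (lock coupling) traversal: take the next node's lock
# before releasing the current one, so no writer can slip into the gap.
import threading

class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt
        self.lock = threading.Lock()

class LockCoupledList:
    def __init__(self):
        self.head = Node(None)  # sentinel node, always present

    def insert_front(self, value):
        self.head.lock.acquire()
        self.head.next = Node(value, self.head.next)
        self.head.lock.release()

    def contains(self, value):
        prev = self.head
        prev.lock.acquire()
        while prev.next is not None:
            cur = prev.next
            cur.lock.acquire()   # grab the next lock...
            prev.lock.release()  # ...before dropping the current one
            if cur.value == value:
                cur.lock.release()
                return True
            prev = cur
        prev.lock.release()
        return False

lst = LockCoupledList()
lst.insert_front(1)
lst.insert_front(2)
assert lst.contains(1) and not lst.contains(3)
```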

The third paper, Safe Parallel Programming using Dynamic Dependency Hints, presented an extension of earlier work that uses hints like ‘possibly parallel region’. The system can execute such regions in parallel and uses an STM-like approach to preserve correctness in case of conflicts. The presented work introduces channels to better describe data dependencies.

The last paper, titled Sprint: Speculative Prefetching of Remote Data, targets distributed systems, but presents an approach that automatically prefetches data to hide latency and reduce overall runtime. Such a technique could be relevant for manycore systems exhibiting similar tradeoffs.
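The payoff of prefetching is easy to sketch: issue all remote fetches up front so their latency overlaps, instead of paying it serially at each use site. The `fetch` function below is a hypothetical stand-in for any remote call; this illustrates the general idea, not Sprint's speculation machinery.

```python
# Overlapping remote-fetch latency by issuing requests ahead of use.
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(key):
    time.sleep(0.05)  # stand-in for network round-trip latency
    return key * 2

def process(keys):
    with ThreadPoolExecutor(max_workers=8) as pool:
        # Speculatively start all fetches up front...
        futures = {k: pool.submit(fetch, k) for k in keys}
        # ...then the sequential code only blocks on results that are
        # not ready yet; the fetches ran concurrently in the background.
        return [futures[k].result() for k in keys]

assert process([1, 2, 3]) == [2, 4, 6]
```

With eight workers, processing three keys costs roughly one round-trip instead of three.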

Memory Management

The last session of the day started with the presentation of a nice VM paper: Why Nothing Matters: The Impact of Zeroing. Certainly a worthwhile read for everyone implementing safe languages who is concerned with efficiently initializing objects/tables/structures/… with NULL.

The second talk, Ribbons: a Partially Shared Memory Programming Model, presented a programming model in between threads and processes that uses memory-protection tricks to restrict the use of shared memory and isolate components. An interesting approach, especially since I have something similar in mind for the RoarVM.
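The model can be simulated in a few lines: each component gets an explicit permission on each shared region, and the runtime rejects accesses outside those permissions. A real implementation would enforce this with page-protection hardware; here a wrapper class plays that role, and all names are invented for illustration.

```python
# Toy simulation of partially shared memory with per-component permissions.
class SharedRegion:
    def __init__(self):
        self._data = {}
        self._perm = {}  # component name -> 'rw' | 'r' (absent = none)

    def grant(self, component, mode):
        self._perm[component] = mode

    def read(self, component, key):
        if self._perm.get(component) not in ('r', 'rw'):
            raise PermissionError(f"{component} may not read this region")
        return self._data[key]

    def write(self, component, key, value):
        if self._perm.get(component) != 'rw':
            raise PermissionError(f"{component} may not write this region")
        self._data[key] = value

region = SharedRegion()
region.grant("producer", "rw")
region.grant("consumer", "r")   # consumer sees the data but cannot mutate
region.write("producer", "job", 42)
assert region.read("consumer", "job") == 42
```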

The last talk of the day was about Asynchronous Assertions, which is also something that might be interesting for a paranoid VM hacker like me. It is an approach that allows the runtime to offload assertion checking to other threads. To make that work, it relies on snapshot semantics of the memory at the point where the assertion is offloaded.
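The essential trick can be sketched as: snapshot the state the assertion depends on, hand the check to a worker thread, and let the main thread continue; even if it mutates the data afterwards, the check still sees the snapshot. The real system uses efficient snapshotting in the runtime rather than a deep copy; `async_assert` is my own illustrative name.

```python
# Toy sketch of asynchronous assertion checking over a state snapshot.
import copy
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=2)

def async_assert(predicate, state):
    # A real runtime would snapshot lazily (e.g. via copy-on-write
    # pages); a deep copy gives the same semantics, just slowly.
    snapshot = copy.deepcopy(state)
    return pool.submit(predicate, snapshot)

data = [3, 1, 2]
pending = async_assert(lambda xs: sorted(xs) == [1, 2, 3], data)
data.append(99)  # mutations after the assertion point are invisible to it
assert pending.result() is True
```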