Tag Archives: javascript

Language Research with Truffle at the SPLASH’16 Conference

Next weekend starts one of the major conferences of the programming languages research community. The conference hosts many events including our Meta’16 workshop on Metaprogramming, SPLASH-I with research and industry talks, the Dynamic Languages Symposium, and the OOPSLA research track.

This year, the overall program includes 9 talks on Truffle and Graal-related topics. The talks cover a wide range of subjects, including optimizing high-level metaprogramming, executing low-level machine code, benchmarking, and parallel programming. I posted a full list including abstracts here: Truffle and Graal Presentations @SPLASH’16. Below is an overview and links to the talks:

Sunday, Oct. 30th

AST Specialisation and Partial Evaluation for Easy High-Performance Metaprogramming (PDF)
Chris Seaton, Oracle Labs
Meta’16 workshop 11:30-12:00

Towards Advanced Debugging Support for Actor Languages: Studying Concurrency Bugs in Actor-based Programs (PDF)
Carmen Torres Lopez, Stefan Marr, Hanspeter Mössenböck, Elisa Gonzalez Boix
Agere’16 workshop 14:10-14:30

Monday, Oct. 31st

Bringing Low-Level Languages to the JVM: Efficient Execution of LLVM IR on Truffle (PDF)
Manuel Rigger, Matthias Grimmer, Christian Wimmer, Thomas Würthinger, Hanspeter Mössenböck
VMIL’16 workshop 15:40-16:05

Tuesday, Nov. 1st

Building Efficient and Highly Run-time Adaptable Virtual Machines (PDF)
Guido Chari, Diego Garbervetsky, Stefan Marr
DLS 13:55-14:20

Optimizing R Language Execution via Aggressive Speculation
Lukas Stadler, Adam Welc, Christian Humer, Mick Jordan
DLS 14:45-15:10

Cross-Language Compiler Benchmarking—Are We Fast Yet? (PDF)
Stefan Marr, Benoit Daloze, Hanspeter Mössenböck
DLS 16:30-16:55

Thursday, Nov. 3rd

GEMs: Shared-memory Parallel Programming for Node.js (DOI)
Daniele Bonetta, Luca Salucci, Stefan Marr, Walter Binder
OOPSLA conference 11:20-11:45

Efficient and Thread-Safe Objects for Dynamically-Typed Languages (PDF)
Benoit Daloze, Stefan Marr, Daniele Bonetta, Hanspeter Mössenböck
OOPSLA conference 13:30-13:55

Truffle and Graal: Fast Programming Languages With Modest Effort
Chris Seaton, Oracle Labs
SPLASH-I 14:20-15:10

Adding Debugging Support to a Truffle Language

Besides the great performance after just-in-time compilation, the Truffle language implementation framework provides a few other highly interesting features for language implementers. One of them is the instrumentation framework, which includes a REPL, profiler, and debugger.

To support debugging, an interpreter needs to provide only a few tags on its AST nodes, instruct the framework to make them instrumentable, and have the debugger tool jar on the classpath. In this post, I’ll only point at the basics. Of course, for a perfectly working debugger, a language needs to carefully decide which elements should be considered for breakpoints, how values are displayed, and how to evaluate arbitrary code during debugging.

The most relevant aspect is tagging. In order to hide implementation details of the AST from users of a tool, only relevant Truffle nodes are tagged for use by a specific tool. For debugging support, we need to override the isTaggedWith(cls) method so that all nodes that correspond to a statement at which a debugger could stop return true for the StatementTag. Based on this information, the debugger can then instrument AST nodes to realize breakpoints, stepping execution, etc. In addition, we also need to make sure that a node has a proper source section attached, but that’s something the parser should already be taking care of.
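As a rough sketch of what this can look like, assuming the Truffle instrumentation API of that era (the ExprNode class and its isStatement flag are made-up placeholders, not from any particular language):

import com.oracle.truffle.api.instrumentation.StandardTags;
import com.oracle.truffle.api.nodes.Node;

// Hypothetical base class for the AST nodes of a Truffle language.
public abstract class ExprNode extends Node {

    // Set by the parser for nodes at which a debugger should be able to stop.
    private boolean isStatement;

    public void markAsStatement() {
        this.isStatement = true;
    }

    @Override
    protected boolean isTaggedWith(Class<?> tag) {
        // Report statement nodes to the framework so breakpoints and
        // stepping can be realized on them.
        if (tag == StandardTags.StatementTag.class) {
            return isStatement;
        }
        return super.isTaggedWith(tag);
    }
}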

To instrument nodes, the framework essentially wraps them. To allow Truffle to wrap a node, its class, or one of its superclasses, needs to be marked as @Instrumentable. This annotation is used to generate the necessary wrapper classes and allows such nodes to be wrapped. In most cases, this should work directly without requiring any custom code.
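A minimal sketch, reusing the hypothetical ExprNode from above (the wrapper class is produced by Truffle’s annotation processor, and its name here simply follows the usual ClassName + “Wrapper” convention of that API version):

import com.oracle.truffle.api.instrumentation.Instrumentable;
import com.oracle.truffle.api.nodes.Node;

// Marking the node class as @Instrumentable lets the framework wrap its
// instances; ExprNodeWrapper is generated by the annotation processor.
@Instrumentable(factory = ExprNodeWrapper.class)
public abstract class ExprNode extends Node {
    // execute methods, child nodes, source section, etc.
}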

Before the tool can be used, a language needs to explicitly declare that it supports the necessary tags by listing them on its subclass of TruffleLanguage using the @ProvidedTags annotation.
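For example, something along these lines (MyLanguage and its context class are placeholder names, and the exact set of tags depends on the language):

import com.oracle.truffle.api.TruffleLanguage;
import com.oracle.truffle.api.instrumentation.ProvidedTags;
import com.oracle.truffle.api.instrumentation.StandardTags;

// Declares which tags this language provides on its AST nodes.
@ProvidedTags({StandardTags.StatementTag.class, StandardTags.RootTag.class})
public abstract class MyLanguage extends TruffleLanguage<MyLanguage.MyContext> {

    // Placeholder for the language's run-time context.
    public static final class MyContext { }

    // parse(), createContext(), and the other TruffleLanguage methods go here.
}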

And lastly, we need to have the truffle-debug.jar on the classpath. With this in place, languages that use the mx build system should be able to access the REPL with mx build, which then also allows debugging.

With a little bit more work, one could also adapt my prototype for a web-based debugger interface from SOMns. This prototype is language independent, but requires a few more tags to support syntax highlighting. Perhaps more on that in another post.

Below, you can see a brief demo of the debugger for SOMns:

OOPSLA 2011 @SPLASH2011, Day 3

The third day started with Brendan Eich’s keynote on JavaScript’s world domination plan. It was a very technical keynote, not very typical I suppose. And he was rushing through his slides at enormous speed. Good that I have some JavaScript background. Aside from all the small things he mentioned, what was interesting for me is that he seemed very interested in getting Intel’s RiverTrail approach to data-parallelism into ECMAScript in one form or another. That kind of contradicts the position I had heard so far, namely that ECMAScript would be done with WebWorkers as its model for concurrency and parallel programming.

Language Implementation

The first session had two interesting VM talks for me. The first was JIT Compilation Policy for Modern Machines. Under the assumption that you cannot keep your multicore/manycore machines busy with application threads alone, they experimented with how additional compilation threads can be used to optimize code better. I do not remember the details completely, but I think after 7 compilation threads they reached a point where adding more threads was no longer worthwhile.

The second talk was on Reducing Trace Selection Footprint for Large-scale Java Applications with no Performance Loss. The goal here is to reduce the number of traces in a tracing JIT that need to be kept around and optimized. Might be something interesting for PyPy and LuaJIT2 to consider.

Parallel and Concurrent Programming

The last session I attended was again on parallel and concurrent programming. The first paper, A Simple Abstraction for Complex Concurrent Indexes, was a pretty formal one.

The second paper presents a pessimistic approach to implementing atomic statements meant for systems programming: Composable, Nestable, Pessimistic Atomic Statements. It is not optimistic like STM, and it does not use a global locking order, which would need to be determined statically. Instead, they annotate fields with so-called shelters. Shelters form a hierarchy that describes which parts need to be synchronized with.

The third paper was also a very interesting one for me: Delegated Isolation. It is again an approach similar to STM in a sense, but it avoids an unbounded number of transaction retries. For that, the object graph is essentially partitioned into growing subgraphs that can be processed in parallel. Only when a conflict, i.e., a race condition, occurs are subgraphs merged logically and the computation serialized. A neat approach and an interesting use of the object-ownership idea.

The last talk was on AC: Composable Asynchronous IO for Native Languages. It was a presentation of work done in the context of the Barrelfish manycore OS. The goal was to have an easy-to-use programming model that comes close to sequential programming but has performance properties similar to those of typical asynchronous APIs in operating systems. The result is a model based on async/finish that seems relatively nice. As I understand it, it is basically syntactic sugar and some library support to wrap typical asynchronous APIs. But it is a model that is purely focused on such request/response asynchrony and does not handle concurrency/parallelism.

And that was basically it. SPLASH is a nice conference when it comes to the content, but not so interesting when it comes to the social “events”; they weren’t much of an event anyway. Not even the food was notable…