Oct 17, 2021: Actors! And now?
An Implementer’s Perspective on High-level Concurrency Models, Debugging Tools, and the Future of Automatic Bug Mitigation
Sep 30, 2021: How do we do Benchmarking?
Impressions from Conversations with the Community
Jul 26, 2021: Interpreters, Compilation, and Concurrency Tooling in PLAS at Kent
Here at Kent, we have a large group of researchers working on Programming Languages and Systems (PLAS), and within it, a small team focusing on interpreters, compilation, and tooling to make programming easier.
Jun 16, 2021: Interpreter Generators: A Brief Look at Existing Work
Motivated by a mention on Twitter of Tiger, a tool for generating interpreters, I took a brief look at vmgen, Tiger, eJSTK, Truffle DSL, and DynSem. What follows are my rather rough notes and pointers. This is by no means a careful literature study, and I welcome further pointers.
Feb 18, 2021: Open Postdoc Position on Language Implementation and Concurrency
Jan 2, 2021: Towards a Synthetic Benchmark to Assess VM Startup, Warmup, and Cold-Code Performance
One of the hard problems in language implementation research is benchmarking. Some people argue that we should benchmark only applications that actually matter to people. However, this has various issues. Often, such applications are embedded in larger systems, and it is hard to isolate the relevant parts. In many cases, these applications also cannot be made available to other researchers. And, of course, things change over time, which means maintaining projects like DaCapo, Renaissance, or JetStream is a huge effort.
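To give a rough idea of what startup, warmup, and cold-code performance mean here, below is a minimal sketch (my own illustration, not code from the post): it times each iteration of a workload separately, so the cost of running cold code and of just-in-time compilation shows up in the first iterations, while later iterations approximate peak performance. The class and workload names are hypothetical.

```java
// Minimal warmup-observation sketch; not from the post.
public class WarmupSketch {
    // Hypothetical stand-in for a benchmark kernel.
    static long workload(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i * 31 % 7;
        }
        return sum;
    }

    public static void main(String[] args) {
        // The first iterations run cold and trigger JIT compilation;
        // later iterations approximate warmed-up, peak performance.
        for (int iter = 0; iter < 20; iter++) {
            long start = System.nanoTime();
            long result = workload(10_000_000);
            long elapsed = System.nanoTime() - start;
            System.out.printf("iteration %d: %.2f ms (result %d)%n",
                iter, elapsed / 1e6, result);
        }
    }
}
```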