Research on programming languages is often more fun when we can use our own languages. However, for research on performance optimizations, this can be a trap. In the end, we need to argue that what we did is comparable to state-of-the-art language implementations. Ideally, we can show that our own little language is not just a research toy, but that it is, at least performance-wise, competitive with, for instance, Java or JavaScript VMs.

Over the last couple of years, it has always been a challenge for me to argue that SOM or SOMns is competitive. Every paper had those two or three paragraphs that never felt quite as strong as they should have been, and the main reason was that we didn't really have good benchmarks to compare across languages.

With our Are We Fast Yet? project, I hope we finally have reasonable benchmarks for exactly that purpose. To track the performance of the benchmarks, we also set up a Codespeed site, which shows the various results. The preprint has been online for a little while already, but next week we are finally going to present the work at the Dynamic Languages Symposium in Amsterdam.

Please find the abstract and details below:

Abstract

Comparing the performance of programming languages is difficult because they differ in many aspects, including preferred programming abstractions, available frameworks, and their runtime systems. Nonetheless, the question about relative performance comes up repeatedly in the research community, in industry, and among the wider audience of enthusiasts.

This paper presents 14 benchmarks and a novel methodology to assess compiler effectiveness across language implementations. Using a set of common language abstractions, the benchmarks are implemented in Java, JavaScript, Ruby, Crystal, Newspeak, and Smalltalk. We show that the benchmarks exhibit a wide range of characteristics using language-agnostic metrics. Using four different languages on top of the same compiler, we show that the benchmarks perform similarly and therefore allow for a comparison of compiler effectiveness across languages. Based on anecdotes, we argue that these benchmarks help language implementers to identify performance bugs and optimization potential by comparing to other language implementations.
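
To give a rough idea of the approach: each benchmark restricts itself to a common core of language features (objects, polymorphic methods, arrays, closures) so that every port stays structurally identical, and each benchmark verifies its result so that a mistranslation cannot silently skew measurements. The following is a minimal, hypothetical sketch in Java of what such a harness could look like; the class and method names here are illustrative and not the project's actual API.

```java
import java.util.Arrays;

// Hypothetical sketch of a cross-language benchmark harness. Each benchmark
// uses only widely supported "core" language features, so the same structure
// can be ported 1:1 to the other languages in the study.
abstract class Benchmark {
    // Runs one iteration of the workload and returns a checkable result.
    abstract int run();

    // Verifies the result, so a faulty port is caught instead of producing
    // plausible-looking but meaningless timings.
    abstract boolean verify(int result);
}

// A classic sieve-of-Eratosthenes workload as an example benchmark.
final class Sieve extends Benchmark {
    @Override
    int run() {
        boolean[] flags = new boolean[5000];
        Arrays.fill(flags, true);
        int count = 0;
        for (int i = 2; i < flags.length; i++) {
            if (flags[i]) {
                count++;
                for (int k = i + i; k < flags.length; k += i) {
                    flags[k] = false;
                }
            }
        }
        return count;
    }

    @Override
    boolean verify(int result) {
        return result == 669; // number of primes below 5000
    }
}

final class Harness {
    public static void main(String[] args) {
        Benchmark bench = new Sieve();
        // Repeat the workload so a JIT-compiling VM can reach a steady state.
        for (int i = 0; i < 100; i++) {
            long start = System.nanoTime();
            int result = bench.run();
            long elapsedUs = (System.nanoTime() - start) / 1000;
            if (!bench.verify(result)) {
                throw new IllegalStateException("benchmark produced a wrong result");
            }
            System.out.println("Sieve: " + elapsedUs + "us");
        }
    }
}
```

The point of the fixed structure is that the same loop shape, object layout, and verification logic can be reproduced in, say, Ruby or Smalltalk, so differences in the timings reflect the compiler and runtime rather than differences in the programs.
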

  • Cross-Language Compiler Benchmarking: Are We Fast Yet? Stefan Marr, Benoit Daloze, Hanspeter Mössenböck; In Proceedings of the 12th Symposium on Dynamic Languages (DLS ’16), ACM, 2016.
  • Paper: HTML, PDF, DOI
  • BibTeX: BibSonomy