Irrationally Annoyed: The SIGPLAN Blog Post Writing 30 Years of PL Research Out of Existence
I started writing this post while being very, very annoyed by this blog post on the SIGPLAN blog. I could not understand how “THE SIGPLAN” blog could simply write 30 years of programming language research out of existence, barely acknowledging Self and JavaScript. It felt like duty called…
However, after multiple rounds of edits, I now assume that there is simply an issue of awareness. What follows is therefore an attempt to raise awareness of the various academic, but also industrial, efforts around the performance of dynamic languages.
Disclaimer: this blog post does not attempt to be complete in any way, but I hope it provides enough pointers for people interested in language implementation technology for dynamic languages to dig deeper and perhaps even contribute.
Languages vs. Language Implementations
Before going into technical details, it is important to distinguish between a language and its implementation. Especially when we talk about programming language research, the two are rarely the same.
While the languages mentioned in the post (Python, R, PHP) all have “standard” implementations, which are by far the most widely used ones, they are far from the only implementations. Let’s pick just three for each language: there are PyPy, IronPython, and Jython for Python; pqR, Renjin, and FastR for R; as well as HippyVM, PeachPie, and Quercus for PHP.
They all have benefits and drawbacks, and are perhaps not even 100% compatible, but various groups of people have clearly spent a lot of time and energy on more optimized language implementations. Some of these implementations have state-of-the-art just-in-time compilers and can reach performance competitive with JavaScript, which itself is certainly competitive with, for instance, Java, and is only slightly behind “metal” languages on some benchmarks and ahead on others.
How do I know? I did not benchmark all of these language implementations myself, but I worked on comparing compilers across languages with the Are We Fast Yet project [5]. One important insight is that language design can definitely have a performance impact. However, the ingenuity of compiler writers and language implementers has eliminated all major performance hurdles over the last 30 years.
This means that claims about the languages themselves, for instance that they are “all notoriously slow and bloated” and “not designed to be fast or space-efficient”, are rather misguided.
Just-in-time compilation technology, as used for JavaScript, Java, Python, and others, is very effective and not particularly language-specific. Some of the key techniques have been known for 30 years, as I discussed elsewhere. Others, such as storage strategies, mementos, and partial escape analysis, are much more recent and help with many issues. It is perhaps interesting to note that storage strategies were actually first done in PyPy, and are now pretty much standard in just-in-time compiling VMs for a whole range of languages, including JavaScript.
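To make the idea of storage strategies a bit more tangible, here is a minimal sketch in Python. It is a toy model, not how PyPy or any production VM actually implements it: a list keeps its elements unboxed as long as they are all integers, and transparently switches to a generic representation once that assumption breaks.

```python
import array

class StrategyList:
    """Toy model of a storage strategy: a list that stores
    homogeneous integers unboxed, and falls back to a generic
    object representation as soon as the assumption breaks."""

    def __init__(self):
        # Start optimistically with compact, unboxed int storage.
        self._ints = array.array('q')  # raw 64-bit signed integers
        self._objects = None           # generic fallback storage

    def append(self, value):
        if self._objects is None and isinstance(value, int):
            self._ints.append(value)   # fast path: stays unboxed
        else:
            self._generalize()
            self._objects.append(value)

    def _generalize(self):
        # Strategy switch: copy unboxed ints into generic storage once.
        if self._objects is None:
            self._objects = list(self._ints)
            self._ints = None

    def __getitem__(self, index):
        storage = self._ints if self._objects is None else self._objects
        return storage[index]

xs = StrategyList()
for i in range(1000):
    xs.append(i)        # stored as raw machine integers, no boxing
xs.append("hello")      # triggers a one-time switch to generic storage
```

A JIT that knows a list is currently using its integer strategy can read and write raw machine integers in a hot loop, without boxing and without per-element type checks.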
One may argue, though, that these techniques have not been successful enough, because people do not use the language implementations that actually benefit from them.
Fine, but I believe this is only partially a technical problem. Some of it is a social issue, too.
Others have spent time identifying potential reasons why people do not pick up our research. For instance, Laurie wrote blog posts on why more users are not happy with our VMs [1, 2].
Given the claim that JavaScript is somehow exceptional, I would argue that JavaScript was lucky in a way. JavaScript was hilariously slow to start with. For V8, this meant they did not have to worry about making “startup performance” worse. Their baseline native-code compiler produced code that was much better than the JavaScript interpreters of the time. Python, perhaps unfortunately, has a surprisingly fast interpreter, which gives VMs such as PyPy quite a challenge. Still, it is not impossible to beat such interpreters and build just-in-time compiling VMs that also have good startup performance. Examples include modern JavaScript VMs, but also much smaller and less well-funded projects such as LuaJIT, which has an incredibly fast interpreter and a very effective just-in-time compiler. But indeed, it takes time and effort to build these systems, and contrary to the claims in the blog post, this time and effort is indeed being invested in these languages.
Is Python doomed? Of course not!
So, what about Python then? The blog post picks up a claim that Python is incredibly slow when it comes to matrix multiplication, and later, without much context, claims that PyPy is perhaps 2 times faster than Python, but sometimes slower.
One could have asked the PyPy community, or looked at their blog. As far as I know, PyPy does not have auto-vectorization, so it indeed has a hard time reaching the performance of vectorized code, but it is much faster than what is implied. Such broad claims are not just unjustified and bad style; they are also painfully unscientific. No, Python is not necessarily slow and bloated. Maps, hidden classes, and object shapes make it possible to store even integers rather efficiently. With the previously mentioned storage strategies, this works extremely well for your huge array of integers, too.
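To illustrate what maps, hidden classes, and object shapes buy us, here is a hypothetical toy model in Python; real VMs implement this inside the runtime’s object layout. Objects that gain the same fields in the same order end up sharing a single shape, so field values can live in a flat storage array, and a compiler can turn a field read into a cheap shape check plus an indexed load.

```python
class Shape:
    """A shape (a.k.a. map or hidden class) records field names and
    their storage offsets; objects with the same fields share one shape."""

    def __init__(self, fields=()):
        self.fields = dict(fields)   # field name -> storage index
        self.transitions = {}        # field name -> successor shape

    def with_field(self, name):
        # Shape transitions are cached, so objects with identical
        # layouts converge on the identical Shape instance.
        if name not in self.transitions:
            next_fields = dict(self.fields)
            next_fields[name] = len(next_fields)
            self.transitions[name] = Shape(next_fields.items())
        return self.transitions[name]

EMPTY_SHAPE = Shape()

class Obj:
    def __init__(self):
        self.shape = EMPTY_SHAPE
        self.storage = []            # flat field storage

    def set_field(self, name, value):
        if name in self.shape.fields:
            self.storage[self.shape.fields[name]] = value
        else:
            self.shape = self.shape.with_field(name)
            self.storage.append(value)

    def get_field(self, name):
        # In a JIT, this becomes: guard(obj.shape == cached_shape),
        # then a direct indexed load from the flat storage.
        return self.storage[self.shape.fields[name]]

a, b = Obj(), Obj()
for o in (a, b):
    o.set_field('x', 1)
    o.set_field('y', 2)
assert a.shape is b.shape   # same layout -> shared shape metadata
```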
Now, perhaps more important is the question of whether auto-vectorization is something dynamic languages simply cannot support. Again, there is no fundamental reason why VMs for dynamic languages cannot support it. The GraalVM supports auto-vectorization and can speed up JavaScript, Python, Ruby, and other dynamic languages with it. Key techniques are speculative compilation, type feedback, object shapes, and storage strategies. With them, optimistic compilers can vectorize even the code of dynamic languages. For good measure, I should probably also mention Juan Fumero’s work on using just-in-time compilation for dynamic languages to target GPUs.
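As a rough illustration of how speculative compilation and type feedback fit together, consider the following sketch. It is a hypothetical model, not GraalVM’s actual machinery: the VM records the operand types it observes, specializes a fast path to them, and guards the speculation so that execution can deoptimize back to the generic path when the assumption no longer holds.

```python
class SpeculativeAdd:
    """Toy model of type feedback: record observed operand types,
    then execute a specialized fast path guarded against surprises."""

    def __init__(self):
        self.observed = None   # type feedback gathered while interpreting

    def call(self, a, b):
        if self.observed is None:
            # First execution: run generically and record type feedback.
            self.observed = (type(a), type(b))
            return a + b
        if (type(a), type(b)) == self.observed:
            # Speculated fast path: in a real JIT this would be unboxed
            # machine addition behind a cheap type guard, and loops over
            # such guarded operations become candidates for vectorization.
            return a + b
        # Guard failed: "deoptimize" back to the generic path and
        # invalidate the speculation.
        self.observed = None
        return a + b

add = SpeculativeAdd()
add.call(1, 2)        # records (int, int), enables the int fast path
add.call(3, 4)        # guard holds, fast path taken
add.call("a", "b")    # guard fails, falls back to the generic path
```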
Are Dynamic Languages Single-Threaded?
This one, I do indeed take personally.
First of all, being effectively single-threaded can be a Good Thing™. Why would you want to deal with shared-memory race conditions? This is a language design question, and so, in the end, a matter of taste. And, if one really wants to, some forms of shared memory are possible, even without introducing all of its drawbacks.
That aside, Python, R, and others have fairly nice ways of supporting parallelism, including multiprocess approaches. And, to blow my own horn a little more: if you really want shared-memory multithreading, you can have that efficiently, too, with our work on thread-safe object models [3] and thread-safe storage strategies [4].
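As a small illustration of the multiprocess approach, here is an example using Python’s standard-library multiprocessing module, which parallelizes work across processes and thereby avoids shared-memory races altogether:

```python
from multiprocessing import Pool

def work(n):
    # Some CPU-bound computation; each call runs in its own process,
    # so there is no shared mutable state to race on.
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    # Distribute the inputs over four worker processes and collect
    # the results, in order, in the parent process.
    with Pool(processes=4) as pool:
        results = pool.map(work, [10_000, 20_000, 30_000, 40_000])
    print(results)
```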
Where to go from here?
“To sum up: yes, metal languages will continue to be important, but so will the irrational exuberance languages. We as a community should lead the way in developing systems (at the hardware and software levels) that will make them run faster.”
Yes, we should. And the PL community has spent a lot of effort doing exactly this for the last 30 years. So, please join us. We are out there and could use the help!
MPLR has a keynote coming up titled “Hardware Support for Managed Languages: An Old Idea Whose Time Has Finally Come?”. Perhaps worth registering for. Attendance is free.
And of course, SPLASH is around the corner, too. With DLS and VMIL, SPLASH has a strong history of providing venues for discussing the implementation of dynamic languages. OOPSLA, as in the past, has pretty cool papers on dynamic language implementation.
Other relevant venues include, for instance, ICOOOLPS and MoreVMs.
My Introduction to Efficient and Safe Implementations of Dynamic Languages has many pointers to research, some of which also made it into an early overview of the field.
For questions or differences of opinion, find me on Twitter: @smarr.
Post Scriptum
Part of my annoyance with the blog post is its dismissive and condescending tone.
“Some of these so-called ‘scripting languages’ are essentially moribund, like Perl (1987)”
This seems a completely unnecessary point to make. It is also unclear how this was assessed. The Perl community is active, though yes, a lot of effort went into Raku.
Elsewhere, the post describes JavaScript as “an outlier”, refers to the “monoculture [of] (the browser)”, and says there is “no need for compatibility with legacy code”. JavaScript is certainly an interesting special case, and indeed, as acknowledged, browsers just happen to have attracted a lot of funding. But a monoculture? In which sense? At least in the past, “the browser” itself was rather diverse. And when it comes to backwards compatibility, the number one design goal of Ecma TC39, the group designing JavaScript, is “don’t break the web”.
And of course, browsers themselves are incredibly complex “legacy systems”. To get a taste, I’d recommend listening to Michael Lippautz talking about how C++ and JavaScript interact.
Post Post Scriptum
“Python programmers need to distinguish between time and memory spent in pure Python (optimizable) from time and memory spent in C libraries (not so much). They need help tracking down expensive and insidious traffic across the language boundaries (copying and serialization).”
We could also try to get rid of that boundary traffic instead. Or give more power to people like Stephen Kell, and avoid copying/serialization in a different way.