It's Thursday, and My Last* Day at Kent

Today is the 31st of July 2025, and from tomorrow on I’ll be “between jobs”, or as Gen Z allegedly calls it, on a micro-retirement.

When I first came to Kent for my interview, I was thinking, I’ll do this one for practice. I still had more than two years left on a research grant we had just received, which promised to be lots of fun, but academic jobs for PL systems people are rare, and even rarer these days. But then I got the call from Richard Jones, offering me the position, and I never regretted taking him up on it.

Kent’s School of Computing was just growing its Programming Languages and Systems (PLAS) group, and Richard, Simon Thompson, Andy King, Peter Rodgers, and many others at the School did a remarkable job in creating an environment and community that was truly supportive of young academics taking their first steps in a permanent academic post, be it wrestling with teaching duties, papers, reviews, reviewers, or, of course, grant writing. PLAS and the School of Computing were the right place for me.

Of course, many things have changed since my start in October 2017. Perhaps most notably, Computing is now in the Kennedy building, a very nice space. But there was also that moment when we, the young ones, became the “senior” ones. Mark, Laura, and Dominic grew well into their new roles, and I can only hope that I passed on some of the extensive support I received to the people who started after me.

There are many challenges ahead for my dear colleagues at Kent, but I hope that enough of the spirit of support and community remains in the School to enable PLAS and the next generation of academics to do great things.

Also a huge thank you to Kemi, Anna, and Janet for keeping the School afloat.

I’ll miss you all. Thanks for everything! And see you soon!

Most of PLAS in October 2023

* It’s a little more complicated than that, but for good reasons. Right, EPSRC? :)

Instrumentation-based Profiling on JVMs is Broken!

Last year, we looked at how well sampling profilers work on top of the JVM. Unfortunately, they suffer from issues such as safepoint bias and may not attribute observed run time to the correct methods because of the complexities introduced by inlining and other compiler optimizations.

After looking at sampling profilers, Humphrey started to investigate instrumentation-based profilers and found during his initial investigation that they were giving much more consistent numbers. Unfortunately, it quickly became clear that the state-of-the-art instrumentation-based profilers on the JVM also have major issues, which result in profiles that are not representative of production performance. Since profilers are supposed to help us identify performance issues, they fail at their one job.

When investigating them further, we found that they interact badly with inlining and other standard optimizations. Because the profilers we found instrument JVM bytecodes, they add a lot of extra code that compiler optimizations treat like any other application code. While this does not strictly prevent optimizations such as inlining, the extra code interferes enough with them that the observable behavior of a program with and without inlining is basically identical. In practice, this means that instrumentation-based profilers on the JVM may be easily portable, but they can’t effectively guide developers to the code that would benefit most from attention, which is their main purpose.
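What such instrumentation does can be sketched in a language-agnostic way: every method gets entry and exit probes that record invocation counts and elapsed time. The Python sketch below is purely illustrative (the decorator and counter names are made up; real JVM profilers rewrite bytecode rather than wrap functions), but it shows why the probe code can dominate for tiny methods:

```python
import time
from collections import defaultdict

# Per-method counters and accumulated time, as an instrumentation-based
# profiler would maintain them.
call_counts = defaultdict(int)
total_time = defaultdict(float)

def instrument(fn):
    """Wrap fn with entry/exit probes, analogous to the extra bytecode
    an instrumenting profiler inserts into every method."""
    def wrapped(*args, **kwargs):
        call_counts[fn.__name__] += 1              # entry probe
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:                                   # exit probe
            total_time[fn.__name__] += time.perf_counter() - start
    return wrapped

@instrument
def execute():
    pass  # an empty method still pays the full probe cost on every call

for _ in range(1000):
    execute()

print(call_counts["execute"])  # 1000
```

For an empty method like execute(), the probes are all the work there is, which is exactly the distortion described above.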

Profilers that do not capture production performance will misguide us!

While they can still identify the code that is executed most often, the interaction with optimizations means that developers see mostly unoptimized behavior. With today’s highly optimizing compilers this is unfortunate, because we may end up hand-optimizing code that the compiler would normally have optimized for us already, and so we spend time on things that likely won’t make a difference in production.

Let’s look at an example from our paper:

class ActionA { int id; void execute() {} }
class ActionB { int id; void execute() {} }
var actions = getMixOfManyActions();
bubbleSortById(actions);
framework.execute(actions);

In this admittedly somewhat contrived example, we use some kind of framework that applies actions for us. This is probably a worst case for profilers that instrument bytecodes: here, the execute() methods would be identified as the most problematic aspect, even though they don’t do anything. A just-in-time compiler like HotSpot’s C2 would likely see a bimorphic call site to execute() and inline both methods. And if the compiler heuristics are with us, it might even optimize out the empty loop in the framework.

So, if we assume a sufficiently smart compiler, the inefficient code that’s forced on us by the framework is taken care of by the compiler. A good profiler would ideally guide us to bubbleSortById(.) as the code of interest. Typically, we’d expect a good speedup here from switching to a more suitable sorting algorithm, especially since we implicitly assume there are many actions, so that this code matters in production.

To me this means that instrumentation-based profilers can only be a last resort for when sampling, with its own flaws, fails. As they are, they are just not useful enough.

Can we do better than profilers that instrument bytecode?

At the time, Humphrey was quite in favor of instrumentation, because it gives very consistent results. So, he wanted to make the results of instrumentation-based profilers more realistic. Inspired by the work of Basso et al., he built an instrumentation-based profiler into the Graal just-in-time compiler that works more like classic instrumentation-based profilers for ahead-of-time-compiled language implementations.

The basic idea is illustrated below:

Figure 1: Instrumentation-based profilers on the JVM typically insert instrumentation very early, before compilers optimize the code. In our profiler, instrumentation is inserted very late, to minimize interference with optimizations.

Instead of inserting the instrumentation right when the bytecode is loaded, for instance with an agent or some other form of bytecode rewriting, we move the addition of instrumentation code to a much later point in the just-in-time compilation. Most importantly, we insert it only after inlining and most other optimizations have been performed. To keep the prototype simple, we insert the probes right before the high-level IR is turned into the low-level IR. At this point, a few optimizations still remain, including instruction selection and register allocation, but in the grand scheme of things, these are minor.
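The phase-ordering idea can be illustrated with a toy model. To be clear, this is not Graal’s IR or API; the “IR”, the size heuristic, and all names below are hypothetical. The point is only that probes woven in before inlining can bloat a callee past the inliner’s size heuristic, while probes inserted after inlining leave the optimization untouched:

```python
# Toy "IR": a method body is a list of operation strings. This models
# only the phase ordering, not any real compiler's IR or API.
INLINE_SIZE_LIMIT = 2  # toy heuristic: only callees with <= 2 ops inline

def inline(body, callees):
    """Replace 'call f' with f's body if the callee is small enough."""
    out = []
    for op in body:
        if op.startswith("call "):
            callee = callees[op.split()[1]]
            if len(callee) <= INLINE_SIZE_LIMIT:
                out.extend(callee)  # call site disappears
                continue
        out.append(op)
    return out

def insert_probes(body):
    """Bracket a body with entry/exit probes."""
    return ["probe_enter"] + body + ["probe_exit"]

callees = {"execute": ["nop"]}
caller = ["call execute", "ret"]

# Early instrumentation (classic bytecode rewriting): probes bloat
# execute() past the inliner's size heuristic, so the call survives.
early = inline(caller, {"execute": insert_probes(callees["execute"])})

# Late instrumentation: inline first, insert probes afterwards.
late = insert_probes(inline(caller, callees))

print("call execute" in early)  # True: inlining was defeated
print("call execute" in late)   # False: inlining succeeded
```

Real inlining heuristics are far more involved, but the ordering effect is the same: instrumenting first changes what the optimizer sees.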

How much better is it?

With his prototype, Humphrey managed not only to achieve much better performance than classic instrumentation-based profilers, but also to minimize interference with optimizations. For a rough idea of the overall performance impact of this approach, let’s have a look at Figure 2:

Figure 2: Sampling-based profilers such as Async, Honest, JFR, Perf, and YourKit (in sampling mode) have very low overhead, though they suffer from safepoint bias and only observe samples. YourKit and JProfiler, when doing instrumentation, introduce overhead of two orders of magnitude and lead to unrealistic results because of their impact on optimizations. Bubo, our prototype, has much lower overhead and does not interfere with optimizations.

With a few extra tricks briefly sketched in the paper, we get good attribution of where time is spent, even in the presence of inlining, reduce the overhead, and benefit from the more precise results of instrumentation, which does not have sampling’s drawback of only occasionally obtaining data.

There’s one major open question though: what does a correct profile look like? At the moment, we can’t assess whether our approach is correct. Sampling profilers, as we saw last year, also do not agree on a single answer. So, while we believe our approach is much better than classic instrumentation, we still need to find out how correct it is.

All results so far, and a few more technical details, are in the paper linked below. Questions, pointers, and suggestions are greatly appreciated, perhaps on Mastodon or Twitter @smarr.

Abstract

Profilers are crucial tools for identifying and improving application performance. However, for language implementations with just-in-time (JIT) compilation, e.g., for Java and JavaScript, instrumentation-based profilers can have significant overheads and report unrealistic results caused by the instrumentation.

In this paper, we examine state-of-the-art instrumentation-based profilers for Java to determine the realism of their results. We assess their overhead, the effect on compilation time, and the generated bytecode. We found that the profiler with the lowest overhead increased run time by 82x. Additionally, we investigate the realism of results by testing a profiler’s ability to detect whether inlining is enabled, which is an important compiler optimization. Our results document that instrumentation can alter program behavior so that performance observations are unrealistic, i.e., they do not reflect the performance of the uninstrumented program.

As a solution, we sketch late-compiler-phase-based instrumentation for just-in-time compilers, which gives us the precision of instrumentation-based profiling with an overhead that is multiple magnitudes lower than that of standard instrumentation-based profilers, with a median overhead of 23.3% (min. 1.4%, max. 464%). By inserting probes late in the compilation process, we avoid interfering with compiler optimizations, which yields more realistic results.

  • Towards Realistic Results for Instrumentation-Based Profilers for JIT-Compiled Systems
    H. Burchell, O. Larose, S. Marr; In Proceedings of the 21st ACM SIGPLAN International Conference on Managed Programming Languages and Runtimes, MPLR'24, ACM, 2024.
  • Paper: PDF
  • DOI: 10.1145/3679007.3685058
  • BibTex: bibtex
    @inproceedings{Burchell:2024:InstBased,
      abstract = {Profilers are crucial tools for identifying and improving application performance. However, for language implementations with just-in-time (JIT) compilation, e.g., for Java and JavaScript, instrumentation-based profilers can have significant overheads and report unrealistic results caused by the instrumentation.
      
      In this paper, we examine state-of-the-art instrumentation-based profilers for Java to determine the realism of their results. We assess their overhead, the effect on compilation time, and the generated bytecode. We found that the profiler with the lowest overhead increased run time by 82x. Additionally, we investigate the realism of results by testing a profiler’s ability to detect whether inlining is enabled, which is an important compiler optimization. Our results document that instrumentation can alter program behavior so that performance observations are unrealistic, i.e., they do not reflect the performance of the uninstrumented program.
      
      As a solution, we sketch late-compiler-phase-based instrumentation for just-in-time compilers, which gives us the precision of instrumentation-based profiling with an overhead that is multiple magnitudes lower than that of standard instrumentation-based profilers, with a median overhead of 23.3% (min. 1.4%, max. 464%). By inserting probes late in the compilation process, we avoid interfering with compiler optimizations, which yields more realistic results.},
      author = {Burchell, Humphrey and Larose, Octave and Marr, Stefan},
      blog = {https://stefan-marr.de/2024/09/instrumenation-based-profiling-on-jvms-is-broken/},
      booktitle = {Proceedings of the 21st ACM SIGPLAN International Conference on Managed Programming Languages and Runtimes},
      doi = {10.1145/3679007.3685058},
      keywords = {Graal Instrumentation JVM Java MeMyPublication Optimization Profiler Profiling Sampling myown},
      month = sep,
      pdf = {https://stefan-marr.de/downloads/mplr24-burchell-et-al-towards-realistic-results-for-instrumentation-based-profilers-for-jit-compiled-systems.pdf},
      publisher = {ACM},
      series = {MPLR'24},
      title = {{Towards Realistic Results for Instrumentation-Based Profilers for JIT-Compiled Systems}},
      year = {2024},
      month_numeric = {9}
    }
    

5 Reasons Why Box Plots are the Better Default Choice for Visualizing Performance

Box Plots, Or Better!

This post is motivated by discussions I have been having for, ehm, forever?

To encourage others to use good research practices and avoid bar charts, I’ll argue that people should use box plots as their go-to choice when presenting performance results. Of course, box plots aren’t a one-size-fits-all solution. However, I believe they should be the preferred choice for many standard situations; for others, more appropriate chart types should be chosen after careful consideration.

Thus, box plots should be the default choice instead of the omnipresent bar chart. Or short: Box Plots, or Better!

When working on performance, I usually work with just-in-time compiling language runtimes, on which I would run various experiments that I want to compare. For examples, check the papers of Humphrey, Octave, and Sophie (copies are here). However, I believe the argument applies more generally beyond our own work.

Reason 1: Performance Measurements Are Samples from a Distribution

When we measure the performance of a system, we usually get a data point that has been influenced by many different factors. This is independent of whether we measure wall-clock time, the number of executed instructions, or perhaps memory. While we can control some factors and influence others, today’s systems are often too complex for us to fully understand them. For example, cache effects, thermal properties, as well as hard- and software interactions outside our control can change performance non-deterministically. In practice, we therefore often treat the system as a black box. (I’d encourage people to dig deeper, but I’m aware that time does not always allow for it.) Treating it as a black box then of course requires us to repeat our experiments multiple times to be able to characterize the range of results that are to be expected. Statisticians would perhaps describe our measuring as “sampling a distribution”.
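As a small illustration of this “sampling a distribution” view, the following Python snippet simulates 50 noisy measurements of one benchmark and summarizes them (the base run time and noise model are made up):

```python
import random
import statistics

random.seed(42)

# Simulate 50 repeated measurements of one benchmark: a base run time
# of about 10s plus nondeterministic noise from caches, scheduling, etc.
measurements = [10.0 + random.gauss(0, 1.5) for _ in range(50)]

# A single run tells us little; the sample characterizes the distribution.
median = statistics.median(measurements)
q1, _, q3 = statistics.quantiles(measurements, n=4)
print(f"median={median:.2f}s  IQR=[{q1:.2f}s, {q3:.2f}s]")
```

Any single element of `measurements` could mislead us; the percentiles over the whole sample are what characterize the system.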

And this is the point where box plots come in. They are designed to be a convenient way to characterize distributions. Let’s assume we have an experiment A and B, and we have taken 50 measurements each. Figure 1 shows the results of our experiments as box plots.

Figure 1: Box plot comparing A and B,
including annotations for the key elements of a box plot.

I annotated the box plot for A with some key elements, including the median and the 25th and 75th percentiles. We also see the notion of an interquartile range, which tells us a bit about the shape of the result distribution, and outliers, i.e., typically all measurements that are further than 1.5x the interquartile range from the 25th or 75th percentile.
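These statistics are straightforward to compute yourself. The following Python sketch derives the box plot elements, i.e., quartiles, interquartile range, and outlier fences, for a small made-up sample:

```python
import statistics

# 20 hypothetical measurements for an experiment (made-up numbers),
# including one slow outlier run at 19.0s.
a = [11.2, 12.5, 10.8, 13.1, 12.0, 11.7, 12.9, 10.4, 19.0, 12.2,
     11.9, 12.6, 13.4, 11.1, 12.3, 10.9, 12.8, 11.5, 12.1, 13.0]

# The three quartiles: 25th percentile, median, 75th percentile.
q1, med, q3 = statistics.quantiles(a, n=4)
iqr = q3 - q1  # interquartile range: the height of the box

# Points beyond 1.5x the IQR from the quartiles are typically drawn
# as individual outlier dots; the whiskers stop at these fences.
lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [x for x in a if x < lo_fence or x > hi_fence]

print(f"median={med:.2f}, IQR={iqr:.2f}, outliers={outliers}")
```

Plotting libraries compute exactly these quantities for us, but seeing them spelled out makes it clear what each box element means.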

Wikipedia has a good overview of box plots that also goes deeper.

Reason 2: Allows Detailed Visual Comparison

With box plots, we have enough details to see that the two experiments behave differently in a number of ways.

The median lines tell us that A is usually faster than B. However, we also see that A is not always faster than B, because its results are spread out further. In the worst case, A takes 19 seconds, which is more than B’s worst case of 15 seconds. While the middle halves of the two experiments’ data points don’t overlap, we see that a good chunk of A’s results still falls within the range that is typically not considered outliers, i.e., up to 1.5x the interquartile range beyond the 75th percentile.

By looking at the figure and comparing these plots, I believe we can get a reasonable intuition of the performance tradeoffs of the two options.

Reason 3: Box Plots Give Enough Details

The above analysis of the results would not have been possible for instance with a classic bar chart as shown in Figure 2.

Figure 2: Bar chart comparing A and B, showing the mean and the standard deviation as error bars.

Bar charts are often used to compare the performance of two or more systems or experiments. However, they show only three values per bar: typically a chosen “measure of centrality” and some form of “error”. Common choices for the former are the arithmetic mean, geometric mean, harmonic mean, and perhaps the median. Each of these has different properties, and one has to think carefully about which one to use based on the type of data one is working with (or perhaps not). At that point, one still has to choose how to characterize measurement errors.

This means that bar charts are less standardized than box plots, and one has to be explicit about what is shown.

Figure 3: Bar chart comparing A and B, showing the median and 25th and 75th percentile.

To just give one example, Figure 3 is the same data but shows the median and the 25th and 75th percentile instead of the mean and standard deviation.

Since we show different sets of statistics, our impression of the results somewhat changes. Of course, this is the power of visualization and picking statistics. We can draw attention to specific aspects of the data. Figure 3 would lead me to conclude that A is always better than B, while Figure 2 would make me wonder what the underlying data looks like to understand how we got to the depicted standard deviation.

Compared to our box plot in Figure 1, the choice of statistics to show, and the reduced number of details we see here can result in misleading others and ourselves. Thus, I’d strongly argue that bar charts are neither a good default to represent data during data analysis, nor when presenting the final insights in a paper. They show too few details, oversimplifying an often more complex story.

Reason 4: Box Plots Don’t Overwhelm With Details

Of course, we could also go in the other direction and choose a plot type that shows much more detail.

Let’s start with Figure 4, which shows a violin plot. I selected a version that shows just the density distribution of our results; one could of course highlight specific statistics on it for clarity. Just looking at Figure 4, we get a more detailed view of how our measurements are distributed. We see very clearly that B’s results are grouped much more tightly together, and that at each end, i.e., at 9 and 15 seconds, there are outliers. A, on the other hand, is much more stretched out, though a good chunk of the results is indeed roughly in the area indicated by the box plot previously. What we also see here is that this area is wider and stretches from perhaps 8 to 15, outside of which we likely have significantly fewer samples. We did not see these details in Figure 1.

Figure 4: A violin plot to compare A and B showing the density distribution of the results.

For data analysis, this way of looking at the data is very helpful, because it allows us to see the underlying distribution. For reporting data in a paper, this might however be too detailed, in the sense that it is not as easily interpretable visually and makes drawing conclusions harder.

Figure 5: A combination of violin, box plot, and raw data. The mean is indicated as a red dot.

While not ideal for final reports, violin plots are useful during analysis. Perhaps one even wants to go a step further and combine a violin and a box plot with the raw data and the mean during analysis. An example of this is shown in Figure 5. While the plot is very busy and, I’d think, not suitable for a paper, it prevents us from jumping to conclusions based on data summaries.

If you’re analyzing your data in R, a package like ggstatsplot might be a good solution.

Reason 5: They Are Very Versatile

Box plots can be used for many different purposes, independent of the type of distribution of data one wants to visualize, for different types of experiments, and to represent experimental data, as well as data summaries.

Because box plots visualize selected “percentile statistics”, we can use them without having to adapt them for specific experiments or types of distributions. They are nonparametric, i.e., one does not have to select any parameters for specific input data. This is useful for performance evaluations, because we do not generally know what type of distribution we are dealing with, and samples are not generally independent, which makes the use of various other statistical tools more complicated.

Figure 6: Comparing experiments A, B, and C. Their data is drawn from different distributions, none of which are normal distributions. The density plot at the bottom characterizes the sample distributions more precisely.

Figure 6 shows box plots for three different distributions. Important here is that none of these experiments gives normally distributed data. Nonetheless, we can use box plots to describe them more abstractly and see certain key details, such as A being skewed to the left, B slightly less so but much more narrow, and C having outliers to the left, with a small skew to the right.

So far, I have used examples where there was an experiment A and B, and perhaps C. Often, though, we may want to understand the relation between two variables, for instance when scaling a computation over multiple processor cores. Figure 7 shows a box plot that visualizes data for such a hypothetical experiment.

Figure 7: Comparing A and B as they change for increasing values of a second variable from 1 to 20.

While one would often use line charts for such scaling experiments, box plots can be used here as well. One can still see the rough shape of a line, but we do not lose sight of the distribution of our experimental data. Arguably, Figure 7 is very busy though, and a line chart with a confidence interval or similar would look better (Python, ggplot).

We can also see that box plots “scale” reasonably well themselves, in the sense that they work for data that is spread out as well as for data that is very closely grouped. For example, for B, the values at x-axis point 1 are very narrowly grouped. Similarly for A, at 20 the data is tightly grouped. In either case, we still have the full power of the box plot and can draw conclusions.

If we would now want to summarize these results, we can of course use box plots!

Figure 8: Summary of the data of Figure 7. A box plot of a box plot. In the upper half, it's a box plot over the medians. In the lower half, it's a box plot over all raw data.

Figure 8 shows a summary. The plot at the top uses the medians of each experiment over the variable that went from 1 to 20. So, for A and B, we have 20 values each and plot them as box plots. Note that for this to be a valid statistic, the medians technically have to be derived from independent samples, so you may need to consult your friendly neighborhood statistician.

In the bottom plot, I used the raw data of all experiments. In a way this still “works”, and in this case results in very similar box plots. However, the meaning changes, and whether you can do this with your data is again something to ask your statistician about. I think common wisdom in our field is to first normalize the data and then “bootstrap” it. This gives us bootstrapped medians, etc. The median is then technically from a normal distribution of independent samples, and standard statistics are legal again.
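The bootstrap idea mentioned above can be sketched as follows. This is a minimal illustration with made-up data, not a substitute for consulting your statistician:

```python
import random
import statistics

random.seed(1)

# Raw measurements for one hypothetical experiment (made-up data).
data = [random.gauss(10, 2) for _ in range(50)]

def bootstrap_medians(sample, n_resamples=1000):
    """Resample with replacement and record each resample's median.
    The bootstrapped medians are approximately normally distributed,
    so standard summary statistics apply to them again."""
    medians = []
    for _ in range(n_resamples):
        resample = random.choices(sample, k=len(sample))
        medians.append(statistics.median(resample))
    return medians

meds = sorted(bootstrap_medians(data))

# Approximate 95% confidence interval for the median: the 2.5th and
# 97.5th percentiles of the bootstrapped medians.
lo, hi = meds[25], meds[-26]
print(f"bootstrapped ~95% CI for the median: [{lo:.2f}, {hi:.2f}]")
```

The bootstrapped medians themselves could then be summarized with a box plot, as in the upper half of Figure 8.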

Conclusion: Box Plots Answer Important Questions At A Glance. Use Box Plots, Or Better!

When it comes to writing academic papers, I do believe that box plots are a much better default choice for communicating performance results than bar charts are.

The key reasons for me are:

  • they are a concise representation of the result distribution
  • they allow a visual comparison of more than the most basic statistics
  • and thus, answer more questions than bar charts
  • but without making things too complicated
  • they are also more standardized, and thus remain more readable when taken out of context
  • and they can be used sensibly for a wide range of use cases

So, for me box plots strike a good overall balance, which makes them a good standard choice for papers.

Though, as mentioned earlier, they are not a universally best choice either. For data analysis, one would want more details, and for specific use cases or types of data distributions, e.g., bi- or multi-modal distributions, other types of plots are more suitable. I can recommend this piece with many examples where other types of plots than box plots may be better choices.

For questions, comments, or suggestions, please find me on Twitter @smarr or Mastodon.
