Modern CPUs, operating systems, and software in general perform lots of smart and hard-to-track optimizations, leading to warmup behavior, cache effects, profile pollution, and other unexpected interactions. For us engineers and scientists, whether in industry or academia, this unfortunately means that we may not fully understand the system on which we are measuring the performance impact of, for instance, an optimization, a new feature, a data structure, or even a bug fix.
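
To make "warmup behavior" concrete, here is a minimal sketch, not a rigorous harness: the class name and workload are made up for illustration, but the effect it shows is real. On a typical JIT-compiling JVM, the first few iterations run in the interpreter and are noticeably slower than the compiled steady state, so reporting only the first run conflates compilation cost with the workload itself.

```java
public class WarmupDemo {
    // Hypothetical workload standing in for the code under test.
    static long workload() {
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) {
            sum += Integer.bitCount(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        // Run the same workload repeatedly; early iterations will
        // typically be slower until the JIT compiler kicks in.
        for (int iter = 1; iter <= 20; iter++) {
            long start = System.nanoTime();
            long result = workload();
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            // Print the result so the JIT cannot dead-code-eliminate the loop.
            System.out.printf("iter %2d: %4d ms (result %d)%n", iter, elapsedMs, result);
        }
    }
}
```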

Many of us even treat the hardware and software we run on as black boxes, relying on the scientific method to give us a good degree of confidence in our understanding of the performance results we see.

Unfortunately, with the complexity of today’s systems, we can easily miss important confounding variables. Did we account correctly for, e.g., CPU frequency scaling, garbage collection, JIT compilation, and network latency? If not, we can be led down the wrong, and possibly time-consuming, path of implementing experiments that do not yield the results we are hoping for, or that are too specific to allow us to draw general conclusions.
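
Recording the environment alongside every measurement does not remove such confounders, but it makes them visible after the fact. A minimal sketch of the idea, assuming a JVM setting; the class name and the particular selection of properties are my own illustration:

```java
public class EnvLog {
    public static void main(String[] args) {
        // Standard JVM system properties and Runtime queries; log these
        // next to the results so reviewers can spot confounders later.
        System.out.println("JVM:   " + System.getProperty("java.vm.name")
                + " " + System.getProperty("java.vm.version"));
        System.out.println("OS:    " + System.getProperty("os.name")
                + " " + System.getProperty("os.version"));
        System.out.println("Cores: " + Runtime.getRuntime().availableProcessors());
        System.out.println("Max heap (bytes): " + Runtime.getRuntime().maxMemory());
    }
}
```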

So, what’s the solution? What could a PhD student or industrial researcher do when planning their next large project?

How about getting early feedback?

Get Early Feedback at a Language Implementation Workshop!

At the MoreVMs and VMIL workshop series, we introduced a new category of submissions last year: Experimental Setups.

We solicited extended abstracts that focus on the experiments themselves, before an implementation is completed. This way, the experimental setup can receive feedback and guidance, improving the chances that the experiments lead to the desired outcomes. With early feedback, we can avoid common traps and pitfalls, share best practices, and deepen our understanding of the systems we are using.

With the complexity of today’s systems, one person, or even one group, is not likely to think of all the issues that may be relevant. Instead of encountering these issues only in the review process after all experiments are done, we can share knowledge and ideas ahead of time, and hopefully improve the science!

So, if you think you may benefit from such feedback, please consider submitting an extended abstract describing your experimental goals and methodology. No results needed!

The next submission deadlines for the MoreVMs’26 workshop are:

  • December 17th, 2025
  • January 12th, 2026

For questions and suggestions, find me on Mastodon, BlueSky, or Twitter, or send me an email!