Sep 17, 2024: Instrumentation-based Profiling on JVMs is Broken!
Last year, we looked at how well sampling profilers work on top of the JVM. Unfortunately, they suffer from issues such as safepoint bias and may fail to attribute observed run time to the correct methods because of the complexities introduced by inlining and other compiler optimizations.
Jun 18, 2024: 5 Reasons Why Box Plots are the Better Default Choice for Visualizing Performance
Box Plots, Or Better!
Sep 20, 2023: Don't Blindly Trust Your Java Profiler!
How do we know what to focus our attention on when trying to optimize the performance of a program? I suspect at least some of us will reach for sampling profilers. They keep the direct impact on the program's execution low and collect stack traces every so often while the program runs. This gives us an approximate view of where a program spends its time. As it turns out, though, this approximation can be surprisingly unreliable.
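To make the sampling idea concrete, here is a minimal Java sketch of a naive sampling profiler (not the profiler discussed in the post): it periodically grabs the stack traces of all threads and counts top frames. The class and method names are made up for this example. Note that Thread.getAllStackTraces() can only observe threads at safepoints, which is exactly the kind of safepoint bias the post talks about.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Naive sampling profiler sketch: every few milliseconds, record the
// top stack frame of each live thread and count how often it appears.
public class NaiveSamplingProfiler {
  private final Map<String, Integer> sampleCounts = new ConcurrentHashMap<>();
  private volatile boolean running = true;

  public void run(long intervalMillis) throws InterruptedException {
    while (running) {
      for (StackTraceElement[] frames : Thread.getAllStackTraces().values()) {
        if (frames.length > 0) {
          String top = frames[0].getClassName() + "." + frames[0].getMethodName();
          sampleCounts.merge(top, 1, Integer::sum);
        }
      }
      Thread.sleep(intervalMillis);
    }
  }

  public void stop() { running = false; }

  public void report() {
    sampleCounts.entrySet().stream()
      .sorted((a, b) -> b.getValue() - a.getValue())
      .limit(10)
      .forEach(e -> System.out.println(e.getValue() + "  " + e.getKey()));
  }

  public static void main(String[] args) throws InterruptedException {
    NaiveSamplingProfiler profiler = new NaiveSamplingProfiler();
    Thread worker = new Thread(() -> {
      double x = 0;
      for (long i = 0; i < 500_000_000L; i++) { x += Math.sin(i); }
      System.out.println(x);
    });
    Thread sampler = new Thread(() -> {
      try { profiler.run(10); } catch (InterruptedException ignored) { }
    });
    worker.start();
    sampler.start();
    worker.join();
    profiler.stop();
    sampler.join();
    profiler.report();
  }
}
```

Because sampling happens only at safepoints, the reported hot methods can differ noticeably from where the time is actually spent, which is what makes such approximations unreliable.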
Jun 6, 2023: Squeezing a Little More Performance Out of Bytecode Interpreters
Jan 2, 2021: Towards a Synthetic Benchmark to Assess VM Startup, Warmup, and Cold-Code Performance
One of the hard problems in language implementation research is benchmarking. Some people argue that we should benchmark only applications that actually matter to people. However, this has various issues. Often, such applications are embedded in larger systems, and it's hard to isolate the relevant parts. In many cases, these applications also cannot be made available to other researchers. And, of course, things change over time, which means maintaining projects like DaCapo, Renaissance, or JetStream is a huge effort.
Dec 30, 2020: The Shape of 6M Lines of Ruby
Following up on my last blog post, I am going to look at how Ruby is used, to get an impression of whether there are major differences between Ruby and Smalltalk in their usage.
Dec 15, 2020: The Shape of 1.7M Lines of Code
Recently, I was wondering what large code bases look like when it comes to the basic properties a compiler might care about. And here I am not thinking about dynamic properties, but simply static properties such as the length of methods, the number of methods per class, the number of fields, and so on.
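The post analyzes source code corpora; just to make the kind of metrics concrete, here is a small, purely illustrative Java sketch that reports a few shape properties of a class via reflection. The class name ShapeMetrics and the choice of metrics are assumptions for the example, not the post's methodology.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;

// Toy illustration of static "shape" metrics: for a given class, report
// the number of declared methods and fields, and the average parameter
// count per method.
public class ShapeMetrics {
  public static void describe(Class<?> cls) {
    Method[] methods = cls.getDeclaredMethods();
    Field[] fields = cls.getDeclaredFields();
    int totalParams = 0;
    for (Method m : methods) { totalParams += m.getParameterCount(); }
    System.out.printf("%s: %d methods, %d fields, %.1f params/method%n",
        cls.getName(), methods.length, fields.length,
        methods.length == 0 ? 0.0 : (double) totalParams / methods.length);
  }

  public static void main(String[] args) {
    describe(String.class);
    describe(java.util.HashMap.class);
  }
}
```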
Jul 7, 2020: Is This Noise, or Does This Mean Something? #benchmarking
Do my performance measurements allow me to conclude anything at all?
Jul 11, 2019: SOMns 0.7.0 Release with Extension Modules and Artifacts
It has been a while since we put together a release for SOMns. And it has been even longer since I last wrote about it on this blog.
Jul 5, 2017: A 10 Year Journey, Stop 5: Growing the SOM Family
Jun 6, 2017: A 10 Year Journey, Stop 4: Concurrency and Tooling
This post, the fourth in the series, is about my current work on concurrency and tooling. As mentioned before, I believe that there is no single concurrency model that is suitable for all problems we might want to solve. Actually, I think this can be stated even more strongly: not a single concurrency model is appropriate for even a majority of the problems we want to solve.
Apr 30, 2017: 10 Years of Language Implementations
First Stop: VMs, Compilers, and Modularity
Mar 7, 2017: SOMns 0.2 Release with CSP, STM, Threads, and Fork/Join
Since SOMns is a pure research project, we don't usually do releases for it. However, we have added many different concurrency abstractions since December and have plans for bigger changes. So, it seems like a good time to wrap up another step and get it into a somewhat stable shape.
Jun 25, 2016: Writing Papers with Completely Automated Data Processing
One of the first things that I found problematic about paper writing was the manual processing and updating of numbers based on experiments. Ever since my master's thesis, this felt like an unnecessary and error-prone step.
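As a sketch of the underlying idea (not the post's actual toolchain), one can generate LaTeX macros from experiment results so the paper always picks up the latest numbers instead of hand-copied ones. The file names, CSV layout, and macro naming scheme below are assumptions for this example.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Reads a hypothetical results.csv with a "benchmark,speedup" header and
// writes LaTeX \newcommand definitions, e.g. \SpeedupDeltaBlue, so the
// paper text references measured values instead of hand-copied numbers.
// Assumes benchmark names contain only letters (LaTeX macro names
// cannot contain digits).
public class ResultsToLatex {
  public static void main(String[] args) throws IOException {
    List<String> lines = Files.readAllLines(Path.of("results.csv"));
    StringBuilder macros = new StringBuilder();
    for (String line : lines.subList(1, lines.size())) {  // skip header
      String[] cols = line.split(",");
      String benchmark = cols[0];
      double speedup = Double.parseDouble(cols[1]);
      macros.append(String.format("\\newcommand{\\Speedup%s}{%.2f}%n",
          benchmark, speedup));
    }
    Files.writeString(Path.of("generated-numbers.tex"), macros.toString());
  }
}
```

With the generated file \input into the paper, rerunning the experiments and the generator updates every number in the text automatically.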
Jan 27, 2015: Partitioned Global Address Space Languages
More than a decade ago, programmer productivity was identified as one of the main hurdles for future parallel systems. The so-called Partitioned Global Address Space (PGAS) languages try to improve productivity and explore a range of language design ideas. These PGAS languages are designed for large-scale high-performance parallel programming and provide the notion of a globally shared address space, while exposing the notion of explicit locality on the language level. Even though the main focus is high-performance computing, the language ideas are also relevant for the parallel and concurrent programming world in general.
Sep 18, 2011: Using R to Understand Benchmarking Results
Why R?
Dec 7, 2010: The Price of the Free Lunch: Programming in the Multicore Era
Last Friday was the annual lab event of our Software Languages Lab. Like last year, many people related to the lab in one way or another came to get an overview of the current topics of our research.
Oct 30, 2010: Workshops at SPLASH 2010
As usual, I will write about a few of my personal highlights of SPLASH and the co-located workshops. This is mostly from my spotty notes and from memory, so I don't guarantee 100% accuracy, especially with respect to what other people might have said.
Feb 21, 2010: Towards an Actor-based Concurrent Machine Model
Quite a while ago, I was involved in writing a workshop paper about an actor model for virtual machines. The main idea was to find a concurrency model for a VM that supports multi-dimensional separation of concerns. However, AOP is not that interesting for me at the moment, so I am focusing on the concurrency aspects, especially the actor-based VM model.
Feb 7, 2010: Virtual Machine Support for Many-Core Architectures: Decoupling Abstract from Concrete Concurrency Models
Finally, my first workshop paper got published. It was a little odyssey with some misunderstandings, but anyway, now it is out. It is just a position paper, so do not expect too many insights. However, what it describes is my big plan, and hopefully the story of my PhD. I am working on it…