Are we there yet? Simple Language-Implementation Techniques for the 21st Century

This repository contains the performance evaluation setup for the paper published in IEEE Software [todo-ref]. The repository and its scripts are meant to facilitate simple re-execution of the experiments in order to reproduce and verify the performance numbers given in the paper.

Furthermore, we use this repository to extend the paper's performance discussion and to provide additional data for the microbenchmarks mentioned in it.

Setup for Re-Execution of Experiments

The following steps recreate the basic environment:

git clone --recursive -b papers/ieee-software-2014 https://github.com/smarr/selfopt-interp-performance
cd selfopt-interp-performance/implementations
./setup.sh

The setup.sh script performs, for each of the VMs used in the benchmark evaluation, the operations necessary to rebuild its binaries from source. In case the script fails or behaves unexpectedly, please consult its source and the build-*.sh files; they encode all necessary steps. Note that the setup has only been used and tested on Ubuntu and OS X, so it might need adaptation for other environments.
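
The following is only an illustrative sketch of what such a driver could look like, assuming setup.sh simply delegates to the individual build-*.sh scripts; the actual script in the repository is authoritative.

#!/bin/sh
# Illustrative sketch only: run every build-*.sh script in turn,
# aborting on the first failure. The real setup.sh may differ.
set -e
for build_script in build-*.sh; do
    echo "Running $build_script ..."
    ./"$build_script"
done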

In general, the following programs are required for execution: git (to clone the repository and its submodules), a POSIX shell (to run setup.sh and the build-*.sh scripts), a C/C++ build toolchain (to rebuild the VMs from source), and Python with pip (to install and run ReBench).
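
To quickly check that these tools are available, the following commands only print version information and do not change anything:

git --version
python --version
pip --version
cc --version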

Benchmark Execution

After downloading the source code and completing the compilation as described above, the benchmarks can be executed with ReBench. ReBench can be installed via Python's package manager pip, or directly from the sources:

pip install rebench==0.6.1
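
Alternatively, ReBench can be installed directly from its sources. Assuming the sources are hosted at https://github.com/smarr/ReBench (adjust the URL if it differs):

git clone https://github.com/smarr/ReBench.git
cd ReBench
pip install .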

The benchmarks and their parameters are specified in the are-we-there-yet.conf file, which is used by ReBench to execute them. To run the benchmarks and get additional information during their execution, use the -d flag and start ReBench like this:

rebench -d are-we-there-yet.conf
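
Without the -d flag, ReBench runs the same experiments but with less diagnostic output:

rebench are-we-there-yet.conf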

Licensing

The material in this repository is licensed under the terms of the MIT License. Please note that the repository links, in the form of submodules, to other repositories that are licensed under different terms.