MCBench - Monte Carlo Sampling Benchmark Suite
The MCBench benchmark suite is designed for quantitative comparisons of Monte Carlo (MC) samples. It offers a standardized method for evaluating MC sample quality and provides researchers and practitioners with a tool for validating, developing, and refining MC sampling algorithms.
For benchmarking, different metrics are applied to point clouds of both independent and identically distributed (iid) samples and correlated samples generated by MC techniques such as Markov Chain Monte Carlo or Nested Sampling. Through repeated comparisons, test statistics of the metrics are gathered, which allow the quality of the MC samples to be evaluated. A variety of target functions with different complexities and dimensionalities are available, providing a versatile platform for testing the capabilities of sampling algorithms.
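To make this idea concrete, here is a minimal, self-contained Julia sketch of such a repeated comparison. It is not MCBench code: the target (a 1-d standard normal), the correlated sampler (an AR(1) chain with the same stationary distribution), and the metric (difference of batch means) are all illustrative stand-ins.

```julia
# Minimal sketch of the benchmarking idea (not MCBench's actual API):
# repeatedly compare batches of iid samples against correlated samples
# using a simple metric, and collect the metric values as a test statistic.
using Random, Statistics

Random.seed!(42)

# iid samples from a 1-d standard normal target
iid_batch(n) = randn(n)

# "MCMC-like" correlated samples: an AR(1) chain whose stationary
# distribution is also the standard normal
function correlated_batch(n; rho=0.9)
    x = zeros(n)
    x[1] = randn()
    for i in 2:n
        x[i] = rho * x[i-1] + sqrt(1 - rho^2) * randn()
    end
    return x
end

# Gather the metric over repeated comparisons
metric(a, b) = mean(a) - mean(b)
stats = [metric(iid_batch(1_000), correlated_batch(1_000)) for _ in 1:200]
println("metric mean = ", mean(stats), ", std = ", std(stats))
```

The spread of the collected metric values reflects the extra variance introduced by correlation in the chain; MCBench applies the same repeated-comparison principle with its own metrics and target functions.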
MCBench is implemented as a Julia package, but users can run external sampling algorithms of their choice on these test functions and feed the resulting samples in to obtain detailed metrics that quantify the quality of their samples compared to the iid samples generated by MCBench.
Workflow
- Pick a test case from the list of available target functions.
- Implement this function in the sampling software of your choice. We provide basic implementations of the listed test cases in Julia, Python (for use with PyMC), and Stan.
- Generate samples of the target function with the algorithm you want to benchmark. Save the samples as a `.csv` file with `nparameters` columns and `nsamples` rows (see the sketch after this list).
- Use the Julia package `MCBench` to load your samples and benchmark them against iid samples (which are automatically generated by the package).
- See Using MCBench for an example of how to use the `MCBench` package.
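As an end-to-end illustration, the sketch below writes a sample matrix in the expected CSV layout using only the Julia standard library. The commented-out MCBench calls at the end are assumptions about the package's interface, not its actual API; see Using MCBench for the real usage.

```julia
# Save externally generated samples in the layout MCBench expects:
# nparameters columns, nsamples rows.
using DelimitedFiles

# Suppose `samples` is an nsamples x nparameters matrix produced by an
# external sampler (faked here with random numbers for illustration).
nsamples, nparameters = 10_000, 3
samples = randn(nsamples, nparameters)

writedlm("my_samples.csv", samples, ',')

# Hypothetical MCBench usage (function names are assumptions, not the
# package's documented API):
# using MCBench
# testcase = ...                           # one of the provided target functions
# theirs   = load_samples("my_samples.csv")
# report   = benchmark(testcase, theirs)   # compares against auto-generated iid samples
```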