The code was tangled from an Org Babel document and has been adapted to run standalone to the best of our ability. It is written in Hy, a dialect of Lisp that compiles to Python. Should you want to generate plain Python code, there is a utility called "hy2py".

A Nix shell environment is provided for complete reproducibility. After installing the Nix package manager (https://nixos.org/), use "nix develop .#cpu" or "nix develop .#cuda" to enter a shell environment with exactly the libraries (including CUDA versions) that were used to run the experiments.

The main experiments are located in the folders:
- two_circles
- alexnet_mnist
- resnet20_fashion_mnist
- mnist_lenet_testbench
- mnist_fcn_testbench
- fk_lenet_testbench
- fk_fcn_testbench

For experiments where hyperparameter tuning was done, there is a "tune.hy" that performs a grid search over a set of predefined parameters; "run.hy" then produces the actual results. Note that the parameters chosen when we ran the experiments are already hard-coded into "run.hy", so running "tune.hy" by itself will not change anything.
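The grid search in "tune.hy" amounts to exhaustively evaluating every parameter combination and keeping the best one. Below is a minimal Python sketch of that pattern; the parameter names, values, and scoring function are illustrative, not the ones actually used in the experiments:

```python
from itertools import product

def grid_search(train_and_eval, grid):
    """Evaluate every combination in `grid` and return the best one.

    `grid` maps parameter names to lists of candidate values;
    `train_and_eval` is assumed to return a score to maximise.
    """
    keys = list(grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_and_eval(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```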

For the testbench experiments the code needs to be transpiled to Python, as we use multiprocessing, which does not work well with Hy. Follow these steps in each testbench directory:
i. Run "hy2py run.hy > run.py"
ii. Then start the testbench with "python run.py $num_gpus", where $num_gpus is the number of GPUs available

Although the code was written for multiple GPUs, no experiments were actually run in a multi-GPU environment, as we were unable to get hold of one.
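The way the testbenches use the $num_gpus argument can be pictured as one worker process per GPU pulling jobs from a shared queue. The following is a hypothetical Python illustration of that pattern, not the actual "run.py"; `run_config` stands in for the real training code:

```python
import multiprocessing as mp

def run_config(config, gpu_id):
    # Placeholder for the real experiment: the actual code would train a
    # model on the given GPU (e.g. via CUDA_VISIBLE_DEVICES).
    return f"config {config} ran on gpu {gpu_id}"

def worker(gpu_id, jobs, results):
    while True:
        config = jobs.get()
        if config is None:          # sentinel: no more work for this worker
            break
        results.put(run_config(config, gpu_id))

def run_all(configs, num_gpus):
    ctx = mp.get_context("fork")    # assumes a Unix-like OS
    jobs, results = ctx.Queue(), ctx.Queue()
    procs = [ctx.Process(target=worker, args=(g, jobs, results))
             for g in range(num_gpus)]
    for p in procs:
        p.start()
    for c in configs:
        jobs.put(c)
    for _ in procs:                 # one sentinel per worker
        jobs.put(None)
    out = [results.get() for _ in configs]
    for p in procs:
        p.join()
    return out
```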

Run all the experiments first; afterwards, running each of the "plot.hy" files will generate all plots and tables found in the main article.
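Running every "plot.hy" in turn can be scripted. A sketch, assuming the experiment folders sit directly under the repository root and that "hy" is on the PATH (adjust the relative layout and PYTHONPATH to match your checkout):

```python
import os
import subprocess

# Experiment directories as listed in this README.
EXPERIMENTS = [
    "two_circles", "alexnet_mnist", "resnet20_fashion_mnist",
    "mnist_lenet_testbench", "mnist_fcn_testbench",
    "fk_lenet_testbench", "fk_fcn_testbench",
]

def plot_env(repo_root):
    """Environment with the repository root prepended to PYTHONPATH."""
    env = dict(os.environ)
    env["PYTHONPATH"] = repo_root + os.pathsep + env.get("PYTHONPATH", "")
    return env

def plot_all(repo_root):
    for d in EXPERIMENTS:
        subprocess.run(["hy", "plot.hy"],
                       cwd=os.path.join(repo_root, d),
                       env=plot_env(repo_root), check=True)
```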

Make sure your PYTHONPATH includes the root directory of the code folder, as some common code is imported from there, e.g. "PYTHONPATH=$PYTHONPATH:../../. hy run.hy".

Unfortunately we were not able to include the data due to its size.
