Some months ago, I was frustrated by the monotony of writing, running, and collecting data from experiments. I was tired of always facing the same challenges, writing the same code, and running into the same problems. In addition, every experiment ran on a different platform, and they quickly became difficult to replicate (and replicability should be the very point of every experiment).
Thus, I decided to write my personal framework for running experiments in Python: YoshiX. The idea behind YoshiX, inspired by classical unit-testing libraries, is quite simple. You write each experiment in a separate file, then you run yoshix, and it automatically finds every experiment in a specific folder, runs them, and collects the output in the format of your choice.
It is a simple tool; I have not spent a lot of time on it yet, but I think there are some interesting directions it could grow in. Let's look at some details.
Features
The slowly increasing list of features includes:
- Runs experiments in a replicable way (this will be truly enforced in later versions, I hope).
- Modular experiment specification! Each experiment is a Python file; YoshiX will automatically find every experiment in a folder and run each one of them!
- Automatically iterates over the parameter space according to your own generators (any generator can be used as a source, from the range function to a custom generator!).
- Collects the experiment results in a compact form (the YoshiX Egg).
- Exports the eggs into different formats (CSV, JSON and many others to come).
Usage
Suppose we want to run an experiment to find the average score of two dice over 100 rolls. The first die is a standard d6, a six-sided die. The second changes at each iteration, from a d2 (a coin) to a d100 (a one-hundred-sided die). The first thing to do is to initialize the YoshiX experiment.
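Here is a minimal sketch of what such an experiment file could look like. The class and method names (yoshix.YoshiExperiment, setup, setup_egg) are the ones described in this post; the calls that assign a role to each attribute (fix_parameter and vary_parameter below) are placeholder names I use for illustration, not necessarily the real YoshiX API.

import random  # used later in single_run

from yoshix import YoshiExperiment


class DiceExperiment(YoshiExperiment):

    def setup(self):
        # Declare the columns of the result table (the "egg").
        self.setup_egg(("Dice1", "Dice2", "AVG"))

        # Assign roles (placeholder calls, see the note above):
        # Dice1 is fixed at 6, Dice2 iterates from d2 to d100,
        # AVG stays a result attribute (the default behavior).
        self.fix_parameter("Dice1", 6)
        self.vary_parameter("Dice2", range(2, 101))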
To do that, we first create an empty file and extend the yoshix.YoshiExperiment class. This allows the system to identify the class as an experiment and provides us with some useful tools for writing the actual experiment code.
Then it is time to fill in the setup method. This method is called at the beginning of the experiment, and in it we initialize everything. The first thing you always want to do is specify the attributes of the experiment: the output of an experiment is a table, and each attribute is a column of it. To do that, we call the setup_egg method with a list/tuple of string identifiers.
The second step is to assign a role to each attribute. In general, there are three different roles:
- Fixed Parameters: these are constants you want to show in the result table. They stay the same for the whole duration of the experiment. In our example, Dice1 is a fixed parameter because it will always be 6.
- Variable Parameters: these parameters change during the execution of the experiment. In our example, Dice2 is a variable parameter. As you can see in the code, you specify a variable parameter through a generator; here we just use range(2,101) to obtain every die from d2 to d100, but you can use complex hand-made generators as well (see the sketch after this list).
- Result Attributes: these are attributes that will be filled in during the experiment; they are the dependent variables of the experiment. In our example, the AVG attribute is where we will put our results. You don't have to do anything with these attributes; they are the default behavior.
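For example, a hand-made generator could feed Dice2 with only even-sided dice. This is purely illustrative, and vary_parameter is the same placeholder name as in the sketch above.

def even_dice():
    # Yield only the even-sided dice between d2 and d100.
    for faces in range(2, 101, 2):
        yield faces

# Inside setup(), the generator would replace range(2, 101):
# self.vary_parameter("Dice2", even_dice())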
Now it is time to write the real experiment.
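Here is a sketch of that core method, as a continuation of the DiceExperiment class above. The single_run method and its parameters follow the description below; the way the result is written back (self.current_run) is a placeholder of mine, since the post only says the value ends up in the AVG attribute.

    # Method of DiceExperiment, continuing the class sketched above
    # (random is imported at the top of the file).
    def single_run(self, Dice1, Dice2):
        # Roll both dice 100 times and average the sum of the two rolls.
        total = 0
        for _ in range(100):
            total += random.randint(1, Dice1) + random.randint(1, Dice2)
        self.current_run["AVG"] = total / 100  # placeholder for storing the result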
The single_run method contains the core code of the experiment. It automatically receives as parameters each fixed and variable parameter we specified earlier, and it is executed once for every element of the Cartesian product of all the generators. In this case, since we have only a single variable parameter with 99 values, the single_run method will be called 99 times, with the Dice2 parameter going from 2 to 100.
In this method we write the actual experiment (or we can call an external function; we are not forced to put the experiment code directly in this method). In our case, we just compute the sum of the two dice 100 times and put the average value in the AVG output attribute.
Finally, we need to close the experiment. Common tasks in this step are cleaning up temporary files and producing the output file. We can do that in the after_run method.
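And a sketch of this final step, again as a method of DiceExperiment; the post says the egg can be exported to CSV, but the exact export call (export_egg) is an assumption on my part.

    # Method of DiceExperiment, continuing the class sketched above.
    def after_run(self):
        # Export the collected egg (the result table) to a CSV file.
        self.export_egg("dice_results.csv")  # placeholder export call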
In this example, we export the egg containing our experiment results into a CSV file. Done. Easy.
Now, how do we run the experiment? We just run yoshix on the right folder.
yoshi_run ./examples
In this case there is only a single file, so it will run only this experiment. If you have a folder full of experiments, it will run each experiment once.
Conclusion
There are a lot of things I can still implement in YoshiX (different exporters, a smarter yoshix command with more informative output, and so on), and many things are still not completely tested (YoshiX is tailored to my kind of experiments, which may be very different from other people's needs). But if you enjoy the idea, star the repository on GitHub, fork it, and submit as many pull requests as you want. I will be happy to share YoshiX with everyone.