Currently, the tooling just keeps whatever benchmark order
was present in the input. That is fine for now, but once benchmarks
can optionally be run interleaved, it will be rather suboptimal.
So, now that I have introduced a family index and a per-family instance index,
we can define an order for the benchmarks and sort them accordingly.
There is a caveat with aggregates: we assume that they are in order,
and hopefully we won't mess that order up.
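As an illustration, a stable sort keyed on the two new indices gives the
desired order while preserving the input order of aggregates within each
(family, instance) pair. This is a minimal sketch, not the exact patch; it
assumes the JSON report exposes the indices as 'family_index' and
'per_family_instance_index':

    import json

    def sort_benchmarks(results):
        # Python's sort is stable, so entries sharing a (family, instance)
        # pair -- notably aggregates -- keep the relative order they had in
        # the input, which is the order we assume (and hope) is correct.
        results["benchmarks"].sort(
            key=lambda b: (b.get("family_index", 0),
                           b.get("per_family_instance_index", 0)))
        return results

    with open("out.json") as f:  # "out.json" is a hypothetical report file
        sorted_results = sort_benchmarks(json.load(f))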
* Create pylint.yml
* improve file matching
* fix some pylint issues
* run on PR and push (force on master only)
* more pylint fixes
* suppress noisy exit code and filter to fatals (see the sketch after this list)
* add conan as a dep so the module is importable
* fix lint error on unreachable branch
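For the exit-code filtering, here is a minimal sketch, assuming the CI step
shells out to pylint from a small Python wrapper. Pylint bit-encodes message
categories into its exit status (1 = fatal, 2 = error, 4 = warning, and so
on), so the wrapper can fail the run only when the fatal bit is set:

    import subprocess
    import sys

    FATAL_BIT = 1  # pylint sets this bit when a fatal message was emitted

    def run_pylint(targets):
        # Run pylint, discard its noisy composite exit code, and report
        # failure only for fatal messages.
        proc = subprocess.run(["pylint", *targets])
        return 1 if proc.returncode & FATAL_BIT else 0

    if __name__ == "__main__":
        sys.exit(run_pylint(sys.argv[1:]))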
This patch cleans up a number of issues with how compare_bench.py handled
its command-line arguments.
* Use the 'argparse' Python module instead of hand-rolled parsing. This gives
  better usage messages (see the sketch below).
* Add diagnostics for certain --benchmark flags that cannot or should not
  be used with compare_bench.py (e.g. --benchmark_out_format=csv).
* Don't override the user-specified --benchmark_out flag if it's provided.
In the future I would like the user to be able to capture both benchmark output
files, but this change is big enough for now.
This fixes issue #313.
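A hedged sketch of what the argparse-based front end might look like; the
names and the diagnostic below are illustrative, not a verbatim excerpt of
the patch:

    import argparse

    def parse_args(argv):
        parser = argparse.ArgumentParser(
            description="Compare the results of two benchmarks.")
        parser.add_argument("old_benchmark",
                            help="benchmark executable or JSON output file")
        parser.add_argument("new_benchmark",
                            help="benchmark executable or JSON output file")
        # Everything after the two inputs is forwarded to the benchmarks.
        parser.add_argument("benchmark_options", nargs=argparse.REMAINDER)
        args = parser.parse_args(argv)
        # The comparison needs JSON output, so reject other output formats.
        for opt in args.benchmark_options:
            if (opt.startswith("--benchmark_out_format=")
                    and opt != "--benchmark_out_format=json"):
                parser.error("unsupported flag '%s': JSON output is required"
                             % opt)
        return args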
This patch adds the compare_bench.py utility, which can be used to compare the results of benchmarks.
The program is invoked like:
$ compare_bench.py <old-benchmark> <new-benchmark> [benchmark options]...
Where <old-benchmark> and <new-benchmark> each specify either a benchmark executable file or a JSON output file. The type of the input file is detected automatically. If a benchmark executable is specified, the benchmark is run to obtain the results; otherwise the results are simply loaded from the output file.
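The automatic detection could be as simple as trying to parse the input as
JSON and falling back to treating it as an executable. A minimal sketch of
that heuristic (not necessarily the utility's exact logic):

    import json
    import os

    def classify_input(path):
        # If the file parses as JSON, treat it as a saved output file.
        try:
            with open(path) as f:
                json.load(f)
            return "json"
        except (ValueError, UnicodeDecodeError):
            pass
        # Otherwise require an executable benchmark binary.
        if os.access(path, os.X_OK):
            return "executable"
        raise ValueError("%s is neither JSON output nor an executable" % path)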