Commit Graph

7 Commits

Author SHA1 Message Date
Roman Lebedev
6e32352c79
compare.py: sort the results (#1168)
Currently, the tooling simply keeps whatever benchmark order was present, and that is fine for now, but once the benchmarks can optionally be run interleaved, it will be rather suboptimal.

So, now that I have introduced a family index and a per-family instance index, we can define an order for the benchmarks and sort them accordingly.

There is a caveat with aggregates: we assume that they are in order, and hopefully we won't mess that order up.
2021-06-03 16:22:52 +03:00
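A minimal sketch of the ordering this commit describes, assuming the JSON benchmark entries carry "family_index" and "per_family_instance_index" fields (field names inferred from the message; this is not the actual compare.py code):

```python
# Rough sketch (not the actual compare.py code) of ordering benchmark
# records by family and by per-family instance within each family.
import json


def sorted_benchmarks(json_path):
    """Return benchmark entries ordered by family, then instance within a family."""
    with open(json_path) as f:
        results = json.load(f)
    benchmarks = results.get("benchmarks", [])
    # Entries missing the indices (e.g. older output) sort last; sorted() is
    # stable, so their relative order is preserved.
    return sorted(
        benchmarks,
        key=lambda b: (
            b.get("family_index", float("inf")),
            b.get("per_family_instance_index", float("inf")),
        ),
    )
```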
Dominic Hamon
beb360d03e
Create pylint.yml (#1039)
* Create pylint.yml

* improve file matching

* fix some pylint issues

* run on PR and push (force on master only)

* more pylint fixes

* suppress noisy exit code and filter to fatals

* add conan as a dep so the module is importable

* fix lint error on unreachable branch
2020-09-09 09:43:26 +01:00
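The "suppress noisy exit code and filter to fatals" step can be illustrated with a small sketch: pylint reports its findings through a bit-mask exit status, where the lowest bit signals a fatal message. Something along these lines (an illustration, not the actual workflow step) gates CI on fatals only:

```python
# Illustrative sketch of gating CI on pylint fatals only (not the real workflow).
# pylint's exit status is a bit mask; bit 0x01 means a fatal message was issued.
import subprocess
import sys


def run_pylint(paths):
    result = subprocess.run(["pylint", *paths])
    if result.returncode & 1:
        # Fail the job only on fatal messages; warnings etc. stay informational.
        sys.exit(1)
    sys.exit(0)


if __name__ == "__main__":
    run_pylint(sys.argv[1:])
```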
Jusufadis Bakamovic
eee8b05c97 [tools] Run autopep8 and apply fixes found. (#739) 2018-12-07 16:34:00 +03:00
Ray Glover
17298b2dc0 Python 2/3 compatibility (#361)
* [tools] python 2/3 support

* update authors/contributors
2017-03-29 03:39:18 -07:00
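The kind of change such a 2/3 compatibility pass typically makes can be sketched generically (an illustrative idiom, not the actual diff):

```python
# Generic sketch of common Python 2/3 compatibility idioms (not the actual diff).
from __future__ import print_function  # print() behaves the same under Python 2

import sys

IS_PY3 = sys.version_info[0] >= 3


def to_text(s):
    """Decode bytes to text in a way that works on both major versions."""
    if isinstance(s, bytes):
        return s.decode("utf-8")
    return s


print(to_text(b"benchmark output"))
```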
Eric Fiselier
a8aa40c596 Fix obvious typo in string formatting 2016-11-19 05:17:52 -07:00
Eric Fiselier
2373382284 Rewrite compare_bench.py argument parsing.
This patch cleans up a number of issues with how compare_bench.py handled
the command line arguments.

* Use the 'argparse' Python module instead of hand-rolled parsing. This gives
  better usage messages.

* Add diagnostics for certain --benchmark flags that cannot or should not
  be used with compare_bench.py (e.g. --benchmark_out_format=csv).

* Don't override the user-specified --benchmark_out flag if it's provided.

In the future I would like the user to be able to capture both benchmark output
files, but this change is big enough for now.

This fixes issue #313.
2016-11-18 15:42:02 -07:00
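A minimal sketch of the argparse-based parsing described above (option names mirror the message; this is not the actual compare_bench.py parser):

```python
# Minimal argparse sketch (not the actual compare_bench.py parser).
import argparse


def parse_args(argv=None):
    parser = argparse.ArgumentParser(
        description="Compare the results of two benchmark runs.")
    parser.add_argument("test_old",
                        help="old benchmark executable or JSON output file")
    parser.add_argument("test_new",
                        help="new benchmark executable or JSON output file")
    # Remaining arguments are forwarded to the benchmark binaries, after
    # rejecting flags the tool cannot handle (e.g. non-JSON output formats).
    parser.add_argument("benchmark_options", nargs=argparse.REMAINDER,
                        help="options forwarded to the benchmark executables")
    args = parser.parse_args(argv)
    for opt in args.benchmark_options:
        if opt.startswith("--benchmark_out_format=") and not opt.endswith("=json"):
            parser.error("compare_bench.py can only consume JSON output")
    return args


if __name__ == "__main__":
    print(parse_args())
```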
Eric
5eac66249c Add a "compare_bench.py" tooling script. (#266)
This patch adds the compare_bench.py utility, which can be used to compare the results of benchmarks.
The program is invoked like:

$ compare_bench.py <old-benchmark> <new-benchmark> [benchmark options]...
Where <old-benchmark> and <new-benchmark> each specify either a benchmark executable file or a JSON output file. The type of the input file is detected automatically. If a benchmark executable is specified, the benchmark is run to obtain the results; otherwise the results are simply loaded from the output file.
2016-08-09 12:33:57 -06:00
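The automatic input-type detection described above could, in principle, be as simple as the following sketch (an illustration of the idea, not the script's actual check):

```python
# Sketch of telling a JSON results file apart from a benchmark executable.
# Illustrative only; the real script's detection may differ.
import json
import os


def is_json_results_file(path):
    """Return True if the path looks like benchmark JSON output."""
    try:
        with open(path) as f:
            data = json.load(f)
        return isinstance(data, dict) and "benchmarks" in data
    except (ValueError, UnicodeDecodeError):
        return False


def classify_input(path):
    if is_json_results_file(path):
        return "json"
    if os.access(path, os.X_OK):
        return "executable"
    raise ValueError("%r is neither JSON output nor an executable" % path)
```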