Mirror of https://github.com/google/benchmark.git, synced 2024-12-28 05:20:14 +08:00

Commit d6ba952fc1
Despite the wide variety of features we provide, some people still have the audacity to complain and demand more. Concretely, I *very* often would like to see the overall result of the benchmark: is the 'new' better or worse, overall, across all the non-aggregate time/CPU measurements? This comes up for me most often when I want to quickly see what effect some LLVM optimization change has on the benchmark.

The idea is straightforward: produce four lists (wall times for the LHS benchmark, CPU times for the LHS benchmark, wall times for the RHS benchmark, CPU times for the RHS benchmark), then compute the geomean of each of those four lists, and compute the two percentage changes between

* the geomean wall time for the LHS benchmark and the geomean wall time for the RHS benchmark
* the geomean CPU time for the LHS benchmark and the geomean CPU time for the RHS benchmark

and voila! It is complicated by the fact that it needs to gracefully handle different time units, so a pandas.Timedelta dependency is introduced. That is the only library that does not barf on floating-point times; I have tried numpy.timedelta64 (it only takes integers) and Python's datetime.timedelta (it does not take nanoseconds), and they won't do.

Fixes https://github.com/google/benchmark/issues/1147
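The computation described above can be sketched as follows. This is a minimal illustration of the idea, not the actual code in this commit; the helper names (`geomean_seconds`, `overall_change`) are hypothetical, and pandas.Timedelta is used only to normalize mixed time units to seconds, as the commit message suggests.

```python
import math

import pandas as pd


def geomean_seconds(times):
    """Geometric mean of (value, unit) measurements, in seconds.

    Each measurement is a (float, unit) pair, e.g. (1500.0, "ms").
    pandas.Timedelta accepts floating-point values, which is why it
    is used here to normalize differing time units.
    """
    secs = [pd.Timedelta(value, unit=unit).total_seconds()
            for value, unit in times]
    # Geometric mean via logs to avoid overflow on long lists.
    return math.exp(sum(math.log(s) for s in secs) / len(secs))


def overall_change(lhs_times, rhs_times):
    """Percentage change of geomean(RHS) relative to geomean(LHS)."""
    g_lhs = geomean_seconds(lhs_times)
    g_rhs = geomean_seconds(rhs_times)
    return (g_rhs - g_lhs) / g_lhs * 100.0
```

With this sketch, measurements in mixed units compare cleanly: `overall_change([(1.0, "s"), (4.0, "s")], [(2000.0, "ms"), (8.0, "s")])` yields a +100% change, since the geomeans are 2 s and 4 s.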
Files in this commit:

* Inputs
* __init__.py
* report.py
* util.py