* Json reporter: passthrough fp, don't cast it to int; adjust tooling
Json output format is generally meant for further processing
using some automated tools. Thus, it makes sense not to
intentionally limit the precision of the values contained
in the report.
As can be seen, FormatKV() for doubles used the %.2f format,
which was meant to preserve at least some of the precision.
However, before that function is ever called, the doubles
have already been cast to integers via RoundDouble()...
The same cast happens in the console reporter, where it
makes sense because screen space is limited; the CSV
reporter, however, does output some decimal digits.
Thus I can only conclude that the loss of precision
was not really considered, so I have decided to adjust the
code of the json reporter to output the full fp precision.
There are several reasons why that is the right thing
to do: the bigger the time_unit used, the greater the
precision loss, so any sort of further processing
(like e.g. tools/compare_bench.py does) is best done
on the values with the most precision.
Also, that cast skewed the data away from zero, which
may result in false positives/negatives
in the output of tools/compare_bench.py.
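To make the effect concrete, here is a minimal Python
illustration (made-up values, not the reporter's actual C++
code) of how rounding to integers before computing a relative
difference of the kind a comparison tool reports can distort
the result:

    # Made-up values; illustrates the skew caused by rounding
    # before comparison.
    def relative_change(old, new):
        # Relative change of the kind a comparison tool would report.
        return (new - old) / old

    old_time, new_time = 2.4, 2.6  # times already scaled to a large time_unit
    print(relative_change(old_time, new_time))                # ~= +0.083
    print(relative_change(round(old_time), round(new_time)))  # == +0.5, heavily skewed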
* Json reporter: FormatKV(double): address review note
* tools/gbench/report.py: skip benchmarks with different time units
While it may be useful to teach it to operate on
measurements with different time units, which is now
possible since floats, not integers, are stored,
for now at least doing such a sanity check
is better than providing misinformation.
This prevents errors when additional non-timing data are present in
the JSON that is loaded, for example when complexity data has been
computed (see #379).
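A hedged sketch of the kind of pairing and filtering described
above; the field names ('name', 'real_time', 'time_unit') follow
the benchmark JSON output, but the function itself is an
assumption, not the actual tools/gbench/report.py code:

    def partition_benchmarks(json_old, json_new):
        # Pair up benchmarks by name, skipping entries that cannot be compared.
        new_by_name = {b['name']: b for b in json_new['benchmarks']}
        pairs = []
        for old in json_old['benchmarks']:
            new = new_by_name.get(old['name'])
            if new is None:
                continue
            # Skip non-timing entries, e.g. complexity results (see #379).
            if 'real_time' not in old or 'real_time' not in new:
                continue
            # Comparing values in different time units would be
            # misinformation, so for now such pairs are skipped entirely.
            if old.get('time_unit') != new.get('time_unit'):
                continue
            pairs.append((old, new))
        return pairs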
This patch cleans up a number of issues with how compare_bench.py handled
the command line arguments.
* Use the 'argparse' Python module instead of hand-rolled parsing. This gives
better usage messages.
* Add diagnostics for certain --benchmark flags that cannot or should not
be used with compare_bench.py (e.g. --benchmark_out_format=csv).
* Don't override the user specified --benchmark_out flag if it's provided.
In the future I would like the user to be able to capture both benchmark output
files, but this change is big enough for now.
This fixes issue #313.
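A rough sketch of the argparse-based handling described above;
the argument names and the exact diagnostics are assumptions,
not the actual compare_bench.py code:

    import argparse
    import sys

    def parse_args(argv):
        parser = argparse.ArgumentParser(
            description='Compare the results of two benchmarks.')
        parser.add_argument('test_old',
                            help='old benchmark executable or JSON output file')
        parser.add_argument('test_new',
                            help='new benchmark executable or JSON output file')
        parser.add_argument('benchmark_options', nargs=argparse.REMAINDER,
                            help='flags passed through to the benchmarks')
        args = parser.parse_args(argv)
        # Diagnose --benchmark flags that conflict with what this tool needs,
        # e.g. forcing a non-JSON output format.
        for flag in args.benchmark_options:
            if (flag.startswith('--benchmark_out_format=')
                    and flag != '--benchmark_out_format=json'):
                parser.error('this tool requires JSON output; remove ' + flag)
        return args

    if __name__ == '__main__':
        print(parse_args(sys.argv[1:]))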
* Change to using per-thread timers
* fix bad assertions
* fix copy-paste error on Windows
* Fix thread safety annotations
* Make null-log thread safe
* remove remaining globals
* use chrono for walltime since it is thread safe
* consolidate timer functions
* Add missing ctime include
* Rename to be consistent with Google style
* Format patch using clang-format
* cleanup -Wthread-safety configuration
* Don't trust _POSIX_FEATURE macros because OS X lies.
* Fix OS X thread timings
* attempt to fix mingw build
* Attempt to make mingw work again
* Revert old mingw workaround
* improve diagnostics
* Drastically improve OS X measurements
* Use average real time instead of max
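The timing code itself is C++, but as a conceptual illustration
of the per-thread CPU time vs. thread-safe walltime distinction
(and of averaging real time across threads rather than taking
the max), here is a small Python analogue; all names here are
illustrative assumptions, not the library's implementation:

    import threading
    import time

    def timed_worker(results, idx):
        wall_start = time.perf_counter()  # thread-safe monotonic walltime
        cpu_start = time.thread_time()    # CPU time of *this* thread only
        sum(range(1_000_000))             # some work
        results[idx] = (time.perf_counter() - wall_start,
                        time.thread_time() - cpu_start)

    results = [None] * 4
    threads = [threading.Thread(target=timed_worker, args=(results, i))
               for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Report the average real time across threads rather than the max.
    avg_real = sum(r[0] for r in results) / len(results)
    print('average real time:', avg_real)
    print('per-thread cpu times:', [r[1] for r in results])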
This patch adds the compare_bench.py utility, which can be used to compare the results of benchmarks.
The program is invoked like:
$ compare_bench.py <old-benchmark> <new-benchmark> [benchmark options]...
Where <old-benchmark> and <new-benchmark> each specify either a benchmark executable or a JSON output file. The type of the input file is detected automatically. If a benchmark executable is specified, the benchmark is run to obtain the results; otherwise the results are simply loaded from the output file.
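A sketch of the automatic input-type detection described above;
the function names and the temporary output path are assumptions,
not the actual tools/gbench code, though --benchmark_out and
--benchmark_out_format are the library's real flags:

    import json
    import subprocess

    def try_load_json(path):
        # Return parsed JSON if 'path' is a JSON output file, else None.
        try:
            with open(path, 'r') as f:
                return json.load(f)
        except (ValueError, UnicodeDecodeError):
            return None

    def load_benchmark_results(path, benchmark_flags):
        # Load results from a JSON file, or run the executable to produce them.
        results = try_load_json(path)
        if results is not None:
            return results
        # Not JSON: treat 'path' as a benchmark executable and ask it for JSON.
        out_file = path + '.json'  # hypothetical temporary output location
        subprocess.check_call([path, '--benchmark_out=' + out_file,
                               '--benchmark_out_format=json'] + benchmark_flags)
        return try_load_json(out_file)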