Commit Graph

8 Commits

Roman Lebedev
b9be142d1e Json reporter: don't cast floating-point to int; adjust tooling (#426)
* Json reporter: passthrough fp, don't cast it to int; adjust tooling

The JSON output format is generally meant for further processing
by automated tools. Thus, it makes sense not to intentionally
limit the precision of the values contained in the report.

As can be seen, FormatKV() for doubles used the %.2f format,
which was meant to preserve at least some of the precision.
However, before that function is ever called, the doubles
have already been cast to integers via RoundDouble()...
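
To illustrate the point (a hypothetical Python sketch, not code
from this patch): once the value has been rounded to an integer
upstream, the %.2f formatting has nothing left to preserve.

    # Hypothetical illustration: %.2f is moot once the value has
    # already been rounded to an integer upstream.
    value = 2.37
    rounded = round(value)    # what the RoundDouble() step effectively did
    print('%.2f' % value)     # '2.37' -- the intended output
    print('%.2f' % rounded)   # '2.00' -- the fraction is already gone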

The same rounding happens in the console reporter, where it
makes sense because screen space is limited; the CSV reporter,
however, does output some decimal digits.

Thus I can only conclude that the loss of precision was not
really considered, so I have decided to adjust the code of the
JSON reporter to output the full floating-point precision.

There are several reasons why that is the right thing to do.
The bigger the time_unit used, the greater the precision loss,
so any sort of further processing (like e.g.
tools/compare_bench.py does) is best done on the values with
the most precision.

Also, that cast skewed the data away from zero, which I think
may result in false positives/negatives in the output of
tools/compare_bench.py.
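
A quick worked example of the time_unit effect (hypothetical
numbers, not from this patch): the same measurement loses far
more precision when reported in a larger unit and then
truncated to an integer.

    # Hypothetical illustration of unit-dependent precision loss.
    real_time_ns = 1234567              # a measured time, in nanoseconds

    print(int(real_time_ns))            # 1234567 -- harmless in ns
    print(int(real_time_ns / 1e6))      # 1       -- 1.234567 ms becomes
                                        #            1 ms, a ~19% error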

* Json reporter: FormatKV(double): address review note

* tools/gbench/report.py: skip benchmarks with different time units

While it may be useful to teach it to operate on measurements
with different time units, which is now possible since floats
are stored instead of integers, for now at least doing such
sanity-checking is better than providing misinformation.
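
A minimal sketch of the kind of check meant here (illustrative
names; not the actual report.py code):

    # Hypothetical sketch: refuse to compare measurements whose time
    # units differ, instead of reporting a misleading delta.
    def compare_benchmarks(lhs, rhs):
        if lhs['time_unit'] != rhs['time_unit']:
            return None  # e.g. ns vs. ms: skip rather than misinform
        return rhs['real_time'] - lhs['real_time']

    print(compare_benchmarks({'time_unit': 'ns', 'real_time': 100.0},
                             {'time_unit': 'ms', 'real_time': 0.2}))  # None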
2017-07-24 16:13:55 -07:00
Felix Duvallet
feb69ae710 Ensure all the necessary keys are present before parsing JSON data (#380)
This prevents errors when additional non-timing data are present in
the JSON that is loaded, for example when complexity data has been
computed (see #379).
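
The shape of the guard (a sketch with illustrative data; the
exact key set checked is an assumption):

    # Hypothetical sketch: only process entries carrying the timing
    # keys; entries holding only complexity data are skipped.
    json_data = {'benchmarks': [
        {'name': 'BM_foo', 'real_time': 10.5, 'cpu_time': 10.1},
        {'name': 'BM_foo_BigO', 'big_o': 1.2},  # complexity-only entry
    ]}

    for bn in json_data['benchmarks']:
        if 'real_time' not in bn or 'cpu_time' not in bn:
            continue  # missing timing keys -- skip instead of crashing
        print(bn['name'], bn['real_time'])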
2017-05-02 08:19:35 -07:00
Ray Glover
17298b2dc0 Python 2/3 compatibility (#361)
* [tools] python 2/3 support

* update authors/contributors
2017-03-29 03:39:18 -07:00
Pavel Campr
e381139474 fix compare script - output formatting - correctly align numbers >9999 (#322)
* fix compare script - output formatting - correctly align numbers >9999

* fix failing test (report.py); fix compare script output formatting (large numbers alignment)
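
The general fix pattern (an illustrative sketch, not the exact
patch): derive the column width from the widest value rather
than hard-coding it, so values above 9999 still line up.

    # Hypothetical sketch: compute the column width from the data.
    times = [123.0, 45678.0, 9999999.0]
    width = max(len('{:.0f}'.format(t)) for t in times)
    for t in times:
        print('{:>{w}.0f}'.format(t, w=width))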
2016-12-09 05:24:31 -07:00
Eric Fiselier
a8aa40c596 Fix obvious typo in string formatting 2016-11-19 05:17:52 -07:00
Eric Fiselier
2373382284 Rewrite compare_bench.py argument parsing.
This patch cleans up a number of issues with how compare_bench.py handled
the command line arguments.

* Use the 'argparse' Python module instead of hand-rolled parsing. This
  gives better usage messages (see the sketch below).

* Add diagnostics for certain --benchmark flags that cannot or should not
  be used with compare_bench.py (e.g. --benchmark_out_format=csv).

* Don't override the user specified --benchmark_out flag if it's provided.

In the future I would like the user to be able to capture both benchmark
output files, but this change is big enough for now.

This fixes issue #313.
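
A sketch of what the argparse-based interface might look like
(the positional and option names here are assumptions, not the
exact patch):

    # Hypothetical sketch of an argparse-based CLI for compare_bench.py.
    import argparse

    parser = argparse.ArgumentParser(
        description='Compare the results of two benchmarks.')
    parser.add_argument('test_baseline',
                        help='benchmark executable or JSON output file')
    parser.add_argument('test_contender',
                        help='benchmark executable or JSON output file')
    # Unknown flags are passed through to the benchmark binaries.
    args, benchmark_options = parser.parse_known_args()

    # Diagnose flags that would break the comparison.
    if '--benchmark_out_format=csv' in benchmark_options:
        parser.error('--benchmark_out_format=csv is unsupported')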
2016-11-18 15:42:02 -07:00
Eric
cba945e37d Make PauseTiming() and ResumeTiming() per thread. (#286)
* Change to using per-thread timers

* fix bad assertions

* fix copy-paste error on Windows

* Fix thread safety annotations

* Make null-log thread safe

* remove remaining globals

* use chrono for walltime since it is thread safe

* consolidate timer functions

* Add missing ctime include

* Rename to be consistent with Google style

* Format patch using clang-format

* clean up -Wthread-safety configuration

* Don't trust _POSIX_FEATURE macros because OS X lies.

* Fix OS X thread timings

* attempt to fix mingw build

* Attempt to make mingw work again

* Revert old mingw workaround

* improve diagnostics

* Drastically improve OS X measurements

* Use average real time instead of max
2016-09-02 21:34:34 -06:00
Eric
5eac66249c Add a "compare_bench.py" tooling script. (#266)
This patch adds the compare_bench.py utility, which can be used to compare the results of two benchmarks.
The program is invoked like:

$ compare_bench.py <old-benchmark> <new-benchmark> [benchmark options]...
Where <old-benchmark> and <new-benchmark> each specify either a benchmark executable or a JSON output file. The type of the input file is detected automatically. If a benchmark executable is specified, the benchmark is run to obtain the results; otherwise the results are simply loaded from the output file.
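
One plausible way to implement the auto-detection described above (a sketch under assumed names; the actual script may differ):

    # Hypothetical sketch of the input-type auto-detection.
    import json
    import os

    def load_results_or_none(path):
        """Return parsed JSON results if 'path' is a JSON file, else None."""
        try:
            with open(path) as f:
                return json.load(f)
        except ValueError:
            return None

    def is_executable(path):
        return os.path.isfile(path) and os.access(path, os.X_OK)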
2016-08-09 12:33:57 -06:00