Commit Graph

20 Commits

Author SHA1 Message Date
Mike Apodaca
adb0d3d0bf
[FR] state.SkipWithMessage #963 (#1564)
* Add `SkipWithMessage`

* Added `enum Skipped`

* Fix: error at end of enumerator list

* Fix lint errors

---------

Co-authored-by: dominic <510002+dmah42@users.noreply.github.com>
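
A minimal usage sketch of the new API, assuming the signature implied by the PR title (`FeatureAvailable` is a hypothetical check; unlike `SkipWithError`, the run is reported as skipped rather than errored):
```
#include <benchmark/benchmark.h>

static bool FeatureAvailable() { return false; }  // hypothetical predicate

static void BM_NeedsFeature(benchmark::State& state) {
  if (!FeatureAvailable()) {
    // Marks the run as skipped (not errored) with an explanatory message.
    state.SkipWithMessage("required feature unavailable; skipping");
    return;
  }
  for (auto _ : state) {
    // ... measured work ...
  }
}
BENCHMARK(BM_NeedsFeature);
BENCHMARK_MAIN();
```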
2023-03-08 18:24:48 +00:00
Dominic Hamon
74ae567294
Small optimization to counter map management (#1382)
* Small optimization to counter map management

Avoids an unnecessary lookup.

* formatting
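
A generic sketch of the pattern (not the library's actual code): a single `try_emplace` replaces a find-then-insert sequence, so the map is walked once instead of twice:
```
#include <map>
#include <string>

void Increment(std::map<std::string, double>& counters,
               const std::string& name, double delta) {
  // Before (two lookups): find(name), then operator[](name) to write.
  // After (one lookup): try_emplace returns the iterator either way.
  auto it = counters.try_emplace(name, 0.0).first;
  it->second += delta;
}
```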
2022-04-07 14:37:22 +01:00
Dominic Hamon
fcef4fb669
clang-format Google on {src/,include/} (#1280) 2021-11-10 16:04:32 +00:00
Roman Lebedev
45b194e4d4
Introduce Coefficient of variation aggregate (#1220)
* Introduce Coefficient of variation aggregate

I believe it is much more useful and easier to understand,
because it is already normalized by the mean,
so it is not affected by the duration of the benchmark,
unlike the standard deviation.

Example of real-world output:
```
raw.pixls.us-unique/GoPro/HERO6 Black$ ~/rawspeed/build-old/src/utilities/rsbench/rsbench GOPR9172.GPR --benchmark_repetitions=27 --benchmark_display_aggregates_only=true --benchmark_counters_tabular=true
2021-09-03T18:05:56+03:00
Running /home/lebedevri/rawspeed/build-old/src/utilities/rsbench/rsbench
Run on (32 X 3596.16 MHz CPU s)
CPU Caches:
  L1 Data 32 KiB (x16)
  L1 Instruction 32 KiB (x16)
  L2 Unified 512 KiB (x16)
  L3 Unified 32768 KiB (x2)
Load Average: 7.00, 2.99, 1.85
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Benchmark                                                      Time             CPU   Iterations  CPUTime,s CPUTime/WallTime     Pixels Pixels/CPUTime Pixels/WallTime Raws/CPUTime Raws/WallTime WallTime,s
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
GOPR9172.GPR/threads:32/process_time/real_time_mean         11.1 ms          353 ms           27   0.353122          31.9473        12M       33.9879M        1085.84M      2.83232       90.4864  0.0110535
GOPR9172.GPR/threads:32/process_time/real_time_median       11.0 ms          352 ms           27   0.351696          31.9599        12M       34.1203M        1090.11M      2.84336       90.8425  0.0110081
GOPR9172.GPR/threads:32/process_time/real_time_stddev      0.159 ms         4.60 ms           27   4.59539m        0.0462064          0       426.371k        14.9631M    0.0355309       1.24692   158.944u
GOPR9172.GPR/threads:32/process_time/real_time_cv           1.44 %          1.30 %            27  0.0130136         1.44633m          0      0.0125448       0.0137802    0.0125448     0.0137802  0.0143795
```

Fixes https://github.com/google/benchmark/issues/1146

* Be consistent, it's CV, not 'rel std dev'
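
A minimal sketch of the computation being introduced, CV = stddev / mean, a dimensionless ratio reported as a percentage (helper names here are illustrative, not the library's internals):
```
#include <cmath>
#include <numeric>
#include <vector>

double Mean(const std::vector<double>& v) {
  return std::accumulate(v.begin(), v.end(), 0.0) / v.size();
}

double StdDev(const std::vector<double>& v) {
  const double mean = Mean(v);
  double acc = 0.0;
  for (double x : v) acc += (x - mean) * (x - mean);
  return v.size() > 1 ? std::sqrt(acc / (v.size() - 1)) : 0.0;
}

double CoefficientOfVariation(const std::vector<double>& v) {
  // e.g. 0.0144 is displayed as 1.44 % in the output above
  return StdDev(v) / Mean(v);
}
```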
2021-09-03 18:44:10 +01:00
Roman Lebedev
12dc5eeafc
Statistics: add support for percentage unit in addition to time (#1219)
* Statistics: add support for percentage unit in addition to time

I think the `stddev` statistic is useful, but confusing.

What does it mean if a `stddev` of `1ms` is reported?
Is that good or bad? If the `median` is `1s`,
then it means the measurements are pretty noise-free.

And what if a `stddev` of `100ms` is reported?
If the `median` is `1s` - awful; if the `median` is `10s` - good.

And hurray, there is just the statistic that we need:
https://en.wikipedia.org/wiki/Coefficient_of_variation

But, naturally, that produces a value as a percentage,
while the statistics are currently hardcoded to produce time.

So this refactors things a bit, and allows a percentage unit for statistics.

I'm not sure whether or not `benchmark` would be okay
with adding this `RSD` statistic by default,
but regardless, that is a separate patch.

Refs. https://github.com/google/benchmark/issues/1146

* Address review notes
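
A hedged sketch of registering a user-provided percentage-unit statistic under this change (the `StatisticUnit` spelling follows the PR; treat the exact signature as an assumption):
```
#include <benchmark/benchmark.h>
#include <cmath>
#include <numeric>
#include <vector>

static double RelStdDev(const std::vector<double>& v) {
  const double mean = std::accumulate(v.begin(), v.end(), 0.0) / v.size();
  double acc = 0.0;
  for (double x : v) acc += (x - mean) * (x - mean);
  return std::sqrt(acc / (v.size() - 1)) / mean;  // stddev / mean
}

static void BM_Work(benchmark::State& state) {
  for (auto _ : state) benchmark::DoNotOptimize(state.iterations());
}
BENCHMARK(BM_Work)
    ->Repetitions(16)
    ->ComputeStatistics("rsd", RelStdDev,
                        benchmark::StatisticUnit::kPercentage);
BENCHMARK_MAIN();
```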
2021-09-03 15:36:56 +01:00
Dominic Hamon
6a5bf081d3
prefix macros to avoid clashes (#1186) 2021-06-24 18:21:59 +01:00
Roman Lebedev
80a62618e8
Introduce per-family instance index (#1165)
Much like it makes sense to enumerate all the families,
it makes sense to enumerate the instances within each family.
Alternatively, we could have a global instance index,
but I'm not sure why that would be better.

This will be useful when benchmarks are not run in order,
so the tools can sort the results properly.
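
An illustration of the enumeration described above; the JSON key shown in the comments is assumed from the PR description:
```
#include <benchmark/benchmark.h>

static void BM_Copy(benchmark::State& state) {
  for (auto _ : state) {
  }
}
// One family, two instances:
BENCHMARK(BM_Copy)->Arg(8);   // "per_family_instance_index": 0
BENCHMARK(BM_Copy)->Arg(64);  // "per_family_instance_index": 1
BENCHMARK_MAIN();
```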
2021-06-02 23:45:41 +03:00
Roman Lebedev
4c2e32f1d0
Introduce "family index" field into JSON output (#1164)
It may be useful for those wishing to further post-process JSON results,
but it is mainly geared towards better support for run interleaving,
where results from the same family may not be close-by in the JSON.

While we won't be able to do much about that for the outputs themselves,
the tools can and perhaps should reorder the results so that
at least in their output they are in the proper order, not run order.

Note that this only counts the families that were filtered in,
so if e.g. there were three families and we filtered out
the second one, the two remaining families (originally the first
and third) will have family indexes 0 and 1.
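
A sketch of that renumbering, assuming a `--benchmark_filter` that matches only BM_A and BM_C (the `family_index` key follows the PR title):
```
#include <benchmark/benchmark.h>

static void BM_A(benchmark::State& s) { for (auto _ : s) {} }
static void BM_B(benchmark::State& s) { for (auto _ : s) {} }
static void BM_C(benchmark::State& s) { for (auto _ : s) {} }

BENCHMARK(BM_A);  // filtered in  -> "family_index": 0
BENCHMARK(BM_B);  // filtered out -> consumes no index
BENCHMARK(BM_C);  // filtered in  -> "family_index": 1
BENCHMARK_MAIN();
```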
2021-06-02 18:06:45 +03:00
Roman Lebedev
090faecb45
Use IterationCount in one more place
Found in -UNDEBUG build
2019-05-13 22:42:18 +03:00
BaaMeow
478eafa36b [JSON] add threads and repetitions to the json output (#748)
* [JSON] add threads and repetitions to the json output, for better ide…
[Tests] explicitly check for thread == 1
[Tests] specifically mark all repetition checks
[JSON] add repetition_index reporting, but only for non-aggregates (i…

* [Formatting] Be very, very explicit about pointer alignment so clang-format cannot put pointers/references on the wrong side of arguments.
[Benchmark::Run] Make sure to use an explanatory sentinel variable rather than a magic number.

* Do not pass redundant information
2019-03-26 09:53:07 +00:00
Daniel Harvey
f6e96861a3 BENCHMARK_CAPTURE() and Complexity() - naming problem (#761)
Created a BenchmarkName class which holds the full benchmark
name and allows specifying and retrieving the different components
of the name (e.g. ARGS, THREADS, etc.).

Fixes #730.
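
A hedged sketch of the idea (the member set and join logic are assumptions, not the exact class):
```
#include <string>

struct BenchmarkName {
  std::string function_name;  // e.g. "BM_Foo"
  std::string args;           // e.g. "16/64"
  std::string threads;        // e.g. "threads:2"

  // Rebuild the full name by joining the non-empty components with '/'.
  std::string str() const {
    std::string out = function_name;
    for (const std::string* part : {&args, &threads})
      if (!part->empty()) out += "/" + *part;
    return out;
  }
};
```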
2019-03-17 16:38:51 +03:00
Roman Lebedev
507c06e636
Aggregates: use non-aggregate count as iteration count. (#706)
It is incorrect to say that an aggregate is computed over
a run's iterations, because those iterations already got averaged.
Similarly, if there are N repetitions with 1 iteration each,
an aggregate will be computed over N measurements, not 1.
Thus it is best to simply use the count of separate reports.

Fixes #586.
2018-10-18 17:17:14 +03:00
Roman Lebedev
1b44120cd1
Un-deprecate [SG]et{Item,Byte}sProcessed, re-implement as custom counters. (#676)
As discussed with @dominichamon and @dbabokin, sugar is nice.
Well, maybe not for the health, but it's sweet.
Alright, enough puns.

Special care needs to be taken not to break the CSV reporter. UGH.
We end up shedding some code over this.
We no longer specially pretty-print them; they are printed just like the rest of the custom counters.

Fixes #627.
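
A sketch of the equivalence described above: the un-deprecated sugar behaves like a rate counter (the `items_alt` counter is added here only for comparison):
```
#include <benchmark/benchmark.h>

static void BM_Parse(benchmark::State& state) {
  for (auto _ : state) {
    // ... process one item per iteration ...
  }
  state.SetItemsProcessed(state.iterations());
  // Roughly the equivalent custom counter:
  state.counters["items_alt"] = benchmark::Counter(
      static_cast<double>(state.iterations()), benchmark::Counter::kIsRate);
}
BENCHMARK(BM_Parse);
BENCHMARK_MAIN();
```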
2018-09-13 22:03:47 +03:00
Roman Lebedev
58588476ce
Track two more details about runs - the aggregate name, and run name. (#675)
This is related to @BaaMeow's work in https://github.com/google/benchmark/pull/616 but is not based on it.

Two new fields are tracked, and dumped into JSON:
* If the run is an aggregate, the aggregate's name is stored.
  It can be RMS, BigO, mean, median, stddev, or any custom stat name.
* The aggregate-name-less run name is additionally stored.
  I.e. not just the name of the benchmark function, but the actual
  run name, only without the 'aggregate name' suffix.

This way one can group/filter all the runs,
and filter by the particular aggregate type.

I *might* need this for further tooling improvement.
Or maybe not.
But this is certainly worthwhile for custom tooling.
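
A hedged sketch of the grouping/filtering this enables; the struct is illustrative, with field names following the commit description:
```
#include <map>
#include <string>
#include <vector>

struct Run {
  std::string run_name;        // name without the aggregate suffix
  std::string aggregate_name;  // "mean", "median", "stddev", ... or ""
  double real_time;
};

// Keep only one aggregate type, keyed by the suffix-less run name.
std::map<std::string, double> FilterAggregate(const std::vector<Run>& runs,
                                              const std::string& which) {
  std::map<std::string, double> out;
  for (const Run& r : runs)
    if (r.aggregate_name == which) out[r.run_name] = r.real_time;
  return out;
}
```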
2018-09-13 15:08:15 +03:00
Roman Lebedev
caa2fcb19c
Counter(): add 'one thousand' param. (#657)
* Counter(): add 'one thousand' param.

Needed for https://github.com/google/benchmark/pull/654

Custom user counters are quite custom. It is not guaranteed
that the user *always* expects these to have 1k == 1000.
If the counter represents bytes/memory/etc., 1k should be 1024.

Some bikeshedding points:
1. Is this sufficient, or do we really want to go full on
   into custom types with names?
   I think just the '1000' is sufficient for now.
2. Should there be helper benchmark::Counter::Counter{1000,1024}()
   static 'constructor' functions, since these two, by far,
   will be the most used?
3. In the future, we should be somehow encoding this info into JSON.

* Counter(): use std::pair<> to represent 'one thousand'

* Counter(): just use a new enum with two values 1000 vs 1024.

Simpler is better. If someone comes up with a real reason
to need something more advanced, it can be added later on.

* Counter: just store the 1000 or 1024 in the One_K values directly

* Counter: s/One_K/OneK/
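
A usage sketch of the final design (enum spelling per the last bullet): a bytes counter that should use 1024-based prefixes:
```
#include <benchmark/benchmark.h>

#include <algorithm>
#include <vector>

static void BM_Fill(benchmark::State& state) {
  std::vector<char> buf(1 << 20);
  for (auto _ : state) {
    std::fill(buf.begin(), buf.end(), 0);
    benchmark::DoNotOptimize(buf.data());
  }
  state.counters["bytes"] = benchmark::Counter(
      static_cast<double>(state.iterations()) * buf.size(),
      benchmark::Counter::kIsRate, benchmark::Counter::kIs1024);
}
BENCHMARK(BM_Fill);
BENCHMARK_MAIN();
```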
2018-08-29 21:11:06 +03:00
Roman Lebedev
8688c5c4cf
Track 'type' of the run - is it an actual measurement, or an aggregate. (#658)
This is *only* exposed in the JSON. Not in CSV, which is deprecated.

This is *only* supposed to track these two states.
An additional field could later track which aggregate this is,
specifically (statistic name, rms, bigo, ...)

The motivation is that we already have ReportAggregatesOnly,
but it affects the entire report, both the display
and the reporters (JSON files), which isn't ideal.

It would be very useful to have a 'display aggregates only' option,
both in the library's console reporter and in the Python tooling.
This will be especially needed for the 'store separate iterations' work.
2018-08-28 18:11:36 +03:00
BaaMeow
4c2af07889 (clang-)format all the things (#610)
* format all documents according to contributor guidelines and specifications
use clang-format on/off to stop formatting when it makes excessively poor decisions

* format all tests as well, and mark blocks which change too much
2018-06-01 11:14:19 +01:00
Fred Tingaud
50ffc781b1 Optimize by using nth_element instead of partial_sort to find the median. (#565) 2018-04-09 13:40:58 +01:00
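
The gist of the optimization, as a standalone sketch: `std::nth_element` partially partitions in O(n) on average, which is all a median needs, whereas `std::partial_sort` fully sorts the prefix:
```
#include <algorithm>
#include <vector>

double Median(std::vector<double> v) {
  const auto mid = v.size() / 2;
  std::nth_element(v.begin(), v.begin() + mid, v.end());
  double median = v[mid];
  if (v.size() % 2 == 0) {
    // Lower middle element is the largest value left of the pivot.
    const double lower = *std::max_element(v.begin(), v.begin() + mid);
    median = (median + lower) / 2.0;
  }
  return median;
}
```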
Dominic Hamon
df415adb2a
Some small clang-tidy fixes (#520) 2018-01-29 08:38:47 -08:00
Roman Lebedev
a271c36af9 Drop Stat1, refactor statistics to be user-providable, add median. (#428)
* Drop Stat1, refactor statistics to be user-providable, add median.

My main goal was to add a median statistic. Since Stat1
calculated the stats incrementally, and did not store
the values themselves, that was not possible. Thus,
I have replaced Stat1 with a simple std::vector<double>
containing all the values.

Then, I refactored the current mean/stdev into
functions that are given the vector of values and
return the statistic. While there, it seemed to make
sense to deduplicate the code by storing all the
statistics functions in a map, and then simply iterating
over it. The interface to add new statistics is
intentionally exposed, so they may be added easily
(see the sketch after this list).

The notable change is that Iterations are no longer
displayed as 0 for stddev. This could be changed, but
I'm not sure how to nicely fit that into the API.

Similarly, the dance of sometimes (for some fields,
for some statistics) dividing by run.iterations, and
then multiplying the calculated statistic back, is also
dropped; if you do the math, I fail to see why
it was needed there in the first place.

Since that was the only use of stat.h, it is removed.

* complexity.h: attempt to fix MSVC build

* Update README.md

* Store statistics to compute in a vector, ensures ordering.

* Add a bit more tests for repetitions.

* Partially address review notes.

* Fix gcc build: drop extra ';'

clang, why didn't you warn me?

* Address review comments.

* double() -> 0.0
* early return
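
A hedged sketch of the refactor described in the first item above: statistics become named functions over the vector of per-repetition values (names here are illustrative, not the library's actual interface):
```
#include <cmath>
#include <numeric>
#include <string>
#include <utility>
#include <vector>

using StatFn = double (*)(const std::vector<double>&);

static double Mean(const std::vector<double>& v) {
  return std::accumulate(v.begin(), v.end(), 0.0) / v.size();
}

static double StdDev(const std::vector<double>& v) {
  const double m = Mean(v);
  double acc = 0.0;
  for (double x : v) acc += (x - m) * (x - m);
  return v.size() > 1 ? std::sqrt(acc / (v.size() - 1)) : 0.0;
}

// The registry: compute every entry over the same value vector; users
// append their own {name, function} pairs to extend it.
static const std::vector<std::pair<std::string, StatFn>> kStats = {
    {"mean", Mean}, {"stddev", StdDev}};
```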
2017-08-23 16:44:29 -07:00