It is incorrect to say that an aggregate is computed over
a run's iterations, because those iterations have already been averaged.
Similarly, if there are N repetitions with 1 iteration each,
an aggregate will be computed over N measurements, not 1.
Thus it is best to simply use the count of separate reports.
Fixes #586.
As discussed with @dominichamon and @dbabokin, sugar is nice.
Well, maybe not for your health, but it's sweet.
Alright, enough puns.
Special care needs to be taken not to break the CSV reporter. UGH.
We end up shedding some code over this.
We no longer specially pretty-print them; they are printed just like the rest of the custom counters.
Fixes #627.
This is related to @BaaMeow's work in https://github.com/google/benchmark/pull/616 but is not based on it.
Two new fields are tracked, and dumped into JSON:
* If the run is an aggregate, the aggregate's name is stored.
It can be RMS, BigO, mean, median, stddev, or any custom stat name.
* The run name without the aggregate suffix is additionally stored.
I.e. not just the name of the benchmark function, but the actual
run name, only without the 'aggregate name' suffix.
This way one can group/filter all the runs,
and filter by the particular aggregate type.
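As a rough illustration, an aggregate entry in the JSON would then carry something along these lines (field names as I currently have them, so treat the exact spelling as tentative; the usual timing fields are omitted):
```
{
  "name": "BM_Foo/8_mean",
  "run_name": "BM_Foo/8",
  "aggregate_name": "mean"
}
```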
I *might* need this for further tooling improvement.
Or maybe not.
But this is certainly worthwhile for custom tooling.
* Counter(): add 'one thousand' param.
Needed for https://github.com/google/benchmark/pull/654
Custom user counters are quite custom. It is not guaranteed
that the user *always* expects these to have 1k == 1000.
If the counter represents bytes/memory/etc, 1k should be 1024.
Some bikeshedding points:
1. Is this sufficient, or do we really want to go full on
into custom types with names?
I think just the '1000' is sufficient for now.
2. Should there be helper benchmark::Counter::Counter{1000,1024}()
static 'constructor' functions, since these two will, by far,
be the most used?
3. In the future, we should be somehow encoding this info into JSON.
* Counter(): use std::pair<> to represent 'one thousand'
* Counter(): just use a new enum with two values 1000 vs 1024.
Simpler is better. If someone comes up with a real reason
to need something more advanced, it can be added later on.
* Counter: just store the 1000 or 1024 in the One_K values directly
* Counter: s/One_K/OneK/
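A minimal usage sketch of where this ends up, assuming the enum members are spelled `kIs1000`/`kIs1024` as implied above:
```c++
#include <benchmark/benchmark.h>
#include <cstdint>

static void BM_Read(benchmark::State& state) {
  int64_t bytes = 0;
  for (auto _ : state) {
    // ... hypothetical work touching 4096 bytes per iteration ...
    bytes += 4096;
  }
  // Rate counter displayed with 1k == 1024, i.e. KiB/MiB/GiB per second.
  state.counters["bytes"] = benchmark::Counter(
      static_cast<double>(bytes), benchmark::Counter::kIsRate,
      benchmark::Counter::kIs1024);
}
BENCHMARK(BM_Read);
```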
This is *only* exposed in the JSON, not in the CSV, which is deprecated.
It is *only* supposed to track these two states.
An additional field could later track specifically which aggregate
this is (statistic name, RMS, BigO, ...).
The motivation is that we already have ReportAggregatesOnly,
but it affects all the reports: both the display
and the reporters (JSON files), which isn't ideal.
It would be very useful to have a 'display aggregates only' option,
both in the library's console reporter and in the Python tooling.
This will be especially needed for the 'store separate iterations' feature.
Found while working on reproducible builds for openSUSE.
To reproduce there:
osc checkout openSUSE:Factory/benchmark && cd $_
osc build -j1 --vm-type=kvm
High system load can skew benchmark results. By including system load averages
in the library's output, we help users identify a potential issue in the
quality of their measurements, and thus assist them in producing better (more
reproducible) results.
I got the idea for this from Brendan Gregg's checklist for benchmark accuracy
(http://www.brendangregg.com/blog/2018-06-30/benchmarking-checklist.html).
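For reference, a minimal sketch of how the load averages can be read on POSIX-ish systems; I assume the library does something along the lines of getloadavg(3), but the exact mechanism here is a guess:
```c++
#include <stdio.h>
#include <stdlib.h>  // getloadavg() on BSD/glibc systems

int main() {
  double loads[3] = {0, 0, 0};
  // Returns how many samples were retrieved (up to 3: 1, 5 and 15 minutes).
  if (getloadavg(loads, 3) == 3) {
    printf("Load Average: %0.2f, %0.2f, %0.2f\n", loads[0], loads[1], loads[2]);
  }
  return 0;
}
```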
* Set -Wno-deprecated-declarations for Intel
The Intel compiler silently ignores -Wno-deprecated-declarations,
so warning no. 1786 must be explicitly suppressed.
* Make std::int64_t → double casts explicit
While std::int64_t → double is a perfectly conformant
implicit conversion, Intel compiler warns about it.
Make them explicit via static_cast<double>.
* Make std::int64_t → int casts explicit
Intel compiler warns about emplacing an std::int64_t
into an int container. Just make the conversion explicit
via static_cast<int>.
* Cleanup Intel -Wno-deprecated-declarations workaround logic
Inspired by these [two](a1ebe07bea) [bugs](0891555be5) that I found and fixed in my own code, caused by the lack of these flags:
* `kIsIterationInvariant` - `* state.iterations()`
The value is constant for every iteration, and needs to be **multiplied** by the iteration count.
* `kAvgIterations` - `/ state.iterations()`
The value is global over all the iterations, and needs to be **divided** by the iteration count.
They play nice with `kIsRate`:
* `kIsIterationInvariantRate`
* `kAvgIterationsRate`.
I'm not sure how meaningful they are when combined with `kAvgThreads`.
I guess the `kIsThreadInvariant` can be added, too, for symmetry with `kAvgThreads`.
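A short usage sketch with both new flags side by side (counter names and numbers are made up):
```c++
#include <benchmark/benchmark.h>

static void BM_Process(benchmark::State& state) {
  const double bytes_per_iteration = 4096.0;  // hypothetical, constant per iteration
  double total_queue_depth = 0.0;             // hypothetical, accumulated over the run
  for (auto _ : state) {
    // ... per-iteration work ...
    total_queue_depth += 3.0;
  }
  // kIsIterationInvariantRate: multiplied by the iteration count,
  // then divided by the elapsed time (because of the Rate part).
  state.counters["Bytes"] = benchmark::Counter(
      bytes_per_iteration, benchmark::Counter::kIsIterationInvariantRate);
  // kAvgIterations: a total over the whole run, divided by the iteration count.
  state.counters["QueueDepth"] = benchmark::Counter(
      total_queue_depth, benchmark::Counter::kAvgIterations);
}
BENCHMARK(BM_Process);
```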
* Fix compilation on Android with GNU STL
GNU STL in Android NDK lacks string conversion functions from C++11, including std::stoul, std::stoi, and std::stod.
This patch reimplements these functions in benchmark:: namespace using C-style equivalents from C++03.
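Roughly, such a C++03-friendly fallback looks like the following sketch (not the patch's exact code), wrapping strtoul and friends:
```c++
#include <cerrno>
#include <cstddef>
#include <cstdlib>
#include <stdexcept>
#include <string>

namespace benchmark {
// Sketch of a std::stoul replacement built on C89's strtoul.
inline unsigned long stoul(const std::string& str, std::size_t* pos = 0,
                           int base = 10) {
  errno = 0;
  char* end = 0;
  const unsigned long result = ::strtoul(str.c_str(), &end, base);
  if (end == str.c_str()) throw std::invalid_argument("stoul: no conversion");
  if (errno == ERANGE) throw std::out_of_range("stoul: out of range");
  if (pos) *pos = static_cast<std::size_t>(end - str.c_str());
  return result;
}
}  // namespace benchmark
```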
* Avoid use of log2 which doesn't exist in Android GNU STL
GNU STL in Android NDK lacks the log2 function from C99/C++11.
This patch replaces its use in the code with the double log(double) function (using log(n) / log(2)).
* format all documents according to contributor guidelines and specifications
use clang-format on/off to stop formatting when it makes excessively poor decisions
* format all tests as well, and mark blocks which change too much
* Add benchmark_main library with support for Bazel.
* fix newline at end of file
* Add CMake support for benchmark_main.
* Mention optionally using benchmark_main in README.
* Allow support for negative regex filtering
This patch allows one to negate the entire regex filter
by prefixing it with a '-' character, much in the same style as
GoogleTest uses.
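Usage would look roughly like this (binary name and pattern are placeholders):
```
# Run everything except benchmarks whose names match BM_Slow.*
./mybenchmark --benchmark_filter=-BM_Slow.*
```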
* Address issues in PR
* Add unit tests for negative filtering
* Ensure 64-bit truncation doesn't happen for complexity results
* One more complexity_n 64-bit fix
* Missed another vector of int
* Piping through the int64_t
* Allow AddRange to work with int64_t.
Fixes #516
Also, tweak how we manage per-test build needs, and create a standard
_gtest suffix for googletest to differentiate from non-googletest tests.
I also ran clang-format on the files that I changed (but not the
benchmark include or main src as they have too many clang-format
issues).
* Add benchmark_gtest to cmake
* Set(Items|Bytes)Processed now take int64_t
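A small sketch of the widened counters in use; BM_Copy and the sizes are illustrative, the point being that the int64_t arithmetic no longer truncates large totals:
```c++
#include <benchmark/benchmark.h>
#include <cstring>
#include <vector>

static void BM_Copy(benchmark::State& state) {
  const size_t size = static_cast<size_t>(state.range(0));
  std::vector<char> src(size, 'x'), dst(size);
  for (auto _ : state) {
    std::memcpy(dst.data(), src.data(), size);
    benchmark::ClobberMemory();
  }
  // Both factors are widened to int64_t before the multiplication.
  state.SetBytesProcessed(static_cast<int64_t>(state.iterations()) *
                          static_cast<int64_t>(state.range(0)));
  state.SetItemsProcessed(static_cast<int64_t>(state.iterations()));
}
BENCHMARK(BM_Copy)->Range(8, 8 << 20);
```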
* Add tests to verify assembler output -- Fix DoNotOptimize.
For things like `DoNotOptimize`, `ClobberMemory`, and even `KeepRunning()`,
it is important to know exactly what assembly they generate. However, we currently
have no way to test this. Instead it must be manually validated every
time a change occurs -- including a change in compiler version.
This patch attempts to introduce a way to test the assembled output automatically.
It mirrors how LLVM verifies compiler output, and it uses LLVM FileCheck to run
the tests in a similar way.
The tests function by generating the assembly for a test in CMake, and then
using FileCheck to verify the // CHECK lines in the source file are found
in the generated assembly.
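For illustration, a test function with CHECK lines might look something like this sketch (the directives and function here are hypothetical; the real tests live under test/):
```c++
#include <benchmark/benchmark.h>

// FileCheck matches the CHECK comments below against the generated .s file.
extern "C" int test_div_by_two(int x) {
  int result = x / 2;
  benchmark::DoNotOptimize(result);
  return result;
}
// CHECK-LABEL: test_div_by_two:
// CHECK: ret
```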
Currently, the tests only run on 64-bit x86 systems under GCC and Clang,
and when FileCheck is found on the system.
Additionally, this patch tries to improve the code gen from DoNotOptimize.
This should probably be a separate change, but I needed something to test.
* Disable assembly tests on Bazel for now
* Link FIXME to github issue
* Fix Tests on OS X
* fix strip_asm.py to work on both Linux and OS X like targets
Having the copts set on a per-target level can lead to ODR violations
in some cases. Avoid this by ensuring the regex engine is picked
through compiler intrinsics in the header directly.
Older CMake versions, in particular 2.8, don't seem to correctly handle
interface include directories. This causes failures when building the
tests. Additionally, older CMake versions use a different library install
directory than expected (i.e. they use lib/<target-triple>). This caused
certain tests to fail to link.
This patch fixes both those issues. The first by manually adding the
correct include directory when building the tests. The second by specifying
the library output directory when configuring the GTest build.
* Add myself to CONTRIBUTORS under the corp CLA for Stripe, Inc.
* Add support for building with Bazel.
Limitations compared to existing CMake rules:
* Defaults to using C++11 `<regex>`, with an override via the Bazel
`--define` flag `google_benchmark.have_regex` (see the example after this
list). The TravisCI config sets the regex implementation to `posix` because
it uses ancient compilers.
* Debug vs Opt mode can't be set per test. TravisCI runs all the tests
in debug mode to satisfy `diagnostics_test`, which depends on `CHECK`
being live.
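For example, selecting the POSIX regex implementation looks roughly like this (the target pattern is illustrative):
```
bazel test --define google_benchmark.have_regex=posix //...
```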
* Set Bazel workspace name so other repos can refer to it by stable name.
This is recommended by the Bazel style guide to avoid each dependent
workspace defining its own name for the dependency.
* Print the executable name as part of the context.
A common use case of the library is to run two different
versions of a benchmark to compare them. In my experience
this often means compiling a benchmark twice, renaming
one of the executables, and then running the executables
back-to-back. In this case the name of the executable
is important contextual information. Unfortunately the
benchmark does not report this information.
This patch adds the executable name to the context reported
by the benchmark.
* attempt to fix tests on Windows
* attempt to fix tests on Windows
Due to argument-dependent lookup performed on the begin and end functions
of `for (auto _ : State)`, std::iterator_traits may get
incidentally instantiated. This patch ensures the library
can tolerate that.
The appveyor bot sometimes fails because the time it
outputs is 6 digits long, but the output test regex expects at most
5 digits. This patch increases the size to 6 digits to placate the
test. This should not *really* affect the correctness of the test.
* Support State::KeepRunningBatch().
State::KeepRunning() can take large amounts of time relative to quick
operations (on the order of 1ns, depending on hardware). For such
sensitive operations, it is recommended to run batches of repeated
operations.
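A rough usage sketch, with the batch size and workload chosen arbitrarily:
```c++
#include <benchmark/benchmark.h>

static void BM_Fast(benchmark::State& state) {
  // Pay the loop-management cost once per 1000 iterations instead of once per one.
  while (state.KeepRunningBatch(1000)) {
    for (int i = 0; i < 1000; ++i) {
      benchmark::DoNotOptimize(i);  // stand-in for a ~1ns operation
    }
  }
}
BENCHMARK(BM_Fast);
```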
This commit simplifies handling of total_iterations_. Rather than
predecrementing such that total_iterations_ == 1 signals that
KeepRunning() should exit, total_iterations_ == 0 now signals the
intention for the benchmark to exit.
* Create better fast path in State::KeepRunningBatch()
* Replace int parameter with size_t to fix signed mismatch warnings
* Ensure benchmark State has been started even on error.
* Simplify KeepRunningBatch()
* Add support for GTest based unit tests.
As Dominic and I have previously discussed, there is some
need/desire to improve the testing situation in Google Benchmark.
One step to fixing this problem is to make it easier to write
unit tests by adding support for GTest, which is what this patch does.
By default it looks for an installed version of GTest. However the
user can specify -DBENCHMARK_BUILD_EXTERNAL_GTEST=ON to instead
download, build, and use a copy of GTest from source. This is
quite useful when Benchmark is being built in non-standard configurations,
such as against libc++ or in 32 bit mode.
* Improve CPU Cache info reporting -- Add Windows support.
This patch does a couple of things regarding CPU cache reporting.
First, it adds an implementation on Windows. Second, it fixes
the JSONReporter to correctly (and actually) output the CPU
configuration information.
And finally, third, it detects and reports the number of
physical CPUs that share the same cache.
* Fix BM_SetInsert example
Move declaration of `std::set<int> data` outside the timing loop, so that the
destructor is not timed.
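After the fix, the example reads roughly as follows (ConstructRandomSet and RandomNumber are the helpers already used by that example; the exact Ranges values may differ):
```c++
static void BM_SetInsert(benchmark::State& state) {
  std::set<int> data;  // declared outside the timing loop, so its destructor is not timed
  for (auto _ : state) {
    state.PauseTiming();
    data = ConstructRandomSet(state.range(0));
    state.ResumeTiming();
    for (int j = 0; j < state.range(1); ++j) data.insert(RandomNumber());
  }
}
BENCHMARK(BM_SetInsert)->Ranges({{1 << 10, 8 << 10}, {128, 512}});
```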
* Speed up BM_SetInsert test
Since the time taken by ConstructRandomSet() is so large compared to the time
to insert one element, and only the latter is used to determine the number of
iterations, this benchmark now takes an extremely long time to run in
benchmark_test.
Speed it up two ways:
- Increase the Ranges() parameters
- Cache the ConstructRandomSet() result (it's not random anyway), and do only
an O(N) copy every iteration
* Fix same issue in BM_MapLookup test
* Make BM_SetInsert test consistent with README
- Use the same Ranges everywhere, but increase the 2nd range
- Change order of Args() calls in README to more closely match the result of Ranges
- Don't cache ConstructRandomSet, since it doesn't make sense in README
- Apply a smaller optimization inside it, by giving a hint to insert()
Recently the library added a new ranged-for variant of the KeepRunning
loop that is much faster. For this reason it should be preferred in all
new code.
Because a library, its documentation, and its tests should all embody
the best practices of using the library, this patch changes all but a
few usages of KeepRunning() into for (auto _ : state).
The remaining usages in the tests and documentation persist only
to document and test behavior that is different between the two formulations.
Also note that because the range-for loop requires C++11, the KeepRunning
variant has not been deprecated at this time.
* Add C++11 Ranged For loop alternative to KeepRunning
As pointed out by @astrelni and @dominichamon, the KeepRunning
loop requires a bunch of memory loads and stores every iteration,
which affects the measurements.
The main reason for these additional loads and stores is that the
State object is passed in by reference, making its contents externally
visible memory, and the compiler doesn't know it hasn't been changed
by non-visible code.
It's also possible the large size of the State struct is hindering
optimizations.
This patch allows the `State` object to be iterated over using
a range-based for loop. Example:
  void BM_Foo(benchmark::State& state) {
    for (auto _ : state) {
      [...]
    }
  }
This formulation is much more efficient, because the variable counting
the loop index is stored in the iterator produced by `State::begin()`,
which itself is stored in function-local memory and therefore not accessible
by code outside of the function. Therefore the compiler knows the iterator
cannot have been changed between iterations.
This initial patch and idea was from Alex Strelnikov.
* Fix null pointer initialization in C++03
* Fix #444 - Use BENCHMARK_HAS_CXX11 over __cplusplus.
MSVC incorrectly defines __cplusplus to report C++03, despite the compiler
actually providing C++11 or greater. Therefore we have to detect C++11 differently
for MSVC. This patch uses `_MSVC_LANG` which has been defined since
Visual Studio 2015 Update 3; which should be sufficient for detecting C++11.
Secondly this patch changes over most usages of __cplusplus >= 201103L to
check BENCHMARK_HAS_CXX11 instead.
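Conceptually, the detection amounts to something like this sketch (not the header's exact macro logic):
```c++
// _MSVC_LANG reports the real language level on MSVC (VS 2015 Update 3+),
// while __cplusplus is reliable everywhere else.
#if defined(_MSVC_LANG)
  #if _MSVC_LANG >= 201103L
    #define BENCHMARK_HAS_CXX11
  #endif
#elif __cplusplus >= 201103L
  #define BENCHMARK_HAS_CXX11
#endif
```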
* remove redundant comment
* Drop Stat1, refactor statistics to be user-providable, add median.
My main goal was to add a median statistic. Since Stat1
calculated the stats incrementally, and did not store
the values themselves, that was not possible. Thus,
I have replaced Stat1 with a simple std::vector<double>
containing all the values.
Then, I refactored the current mean/stdev to be
functions that are provided with the vector of values and
return the statistic. While there, it seemed to make
sense to deduplicate the code by storing all the
statistics functions in a map, and then simply iterating
over it. And the interface to add new statistics is
intentionally exposed, so they may be added easily.
The notable change is that Iterations are no longer
displayed as 0 for stdev. It could be changed, but
I'm not sure how to nicely fit that into the API.
Similarly, the dance of sometimes (for some fields,
for some statistics) dividing by run.iterations, and
then multiplying the calculated statistic back, is also
dropped; if you do the math, I fail to see why
it was needed there in the first place.
Since that was the only use of stat.h, it is removed.
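Coming back to the user-providable statistics: registering an extra one should look roughly like this sketch (I believe the registration call is ComputeStatistics(); the statistic itself is just an example):
```c++
#include <benchmark/benchmark.h>
#include <algorithm>
#include <vector>

// A user-provided statistic: takes all per-repetition values, returns one number.
static double MyMax(const std::vector<double>& v) {
  return *std::max_element(v.begin(), v.end());
}

static void BM_Something(benchmark::State& state) {
  int x = 0;
  for (auto _ : state) {
    benchmark::DoNotOptimize(x += 1);
  }
}
BENCHMARK(BM_Something)->Repetitions(10)->ComputeStatistics("max", MyMax);
```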
* complexity.h: attempt to fix MSVC build
* Update README.md
* Store statistics to compute in a vector, ensures ordering.
* Add a bit more tests for repetitions.
* Partially address review notes.
* Fix gcc build: drop extra ';'
clang, why didn't you warn me?
* Address review comments.
* double() -> 0.0
* early return
The benchmark library is compiled as C++11, but certain
tests are compiled as C++03. When -flto is enabled GCC 5.4
and above will diagnose an ODR violation in libstdc++'s <map>.
This ODR violation, although real, should likely be benign. For
this reason it seems sensible to simply suppress -Wodr when building
the C++03 test.
This patch fixes #420 and supersedes PR #424.
* Json reporter: passthrough fp, don't cast it to int; adjust tooling
Json output format is generally meant for further processing
using some automated tools. Thus, it makes sense not to
intentionally limit the precision of the values contained
in the report.
As can be seen, FormatKV() for doubles used the %.2f format,
which was meant to preserve at least some of the precision.
However, before that function is ever called, the doubles
have already been cast to integers via RoundDouble()...
The same happens in the console reporter, where it makes
sense because the screen space is limited; the CSV reporter,
however, does output some decimal digits.
Thus I can only conclude that the loss of precision
was not really considered, so I have decided to adjust the
code of the JSON reporter to output the full fp precision.
There are several reasons why that is the right thing
to do: the bigger the time_unit used, the greater the
precision loss, so I'd say any sort of further processing
(like e.g. tools/compare_bench.py does) is best done
on the values with the most precision.
Also, that cast skewed the data away from zero, which
I think may or may not result in false positives/negatives
in the output of tools/compare_bench.py.
* Json reporter: FormatKV(double): address review note
* tools/gbench/report.py: skip benchmarks with different time units
While it may be useful to teach it to operate on
measurements with different time units (which is now
possible, since floats are stored rather than integers),
for now at least doing such a sanity check
is better than providing misinformation.
* Make Benchmark a single header library (but not header-only)
This patch refactors benchmark into a single header, to allow
for slightly easier usage.
The initial reason for the header split was to keep C++ library
components from being included by benchmark_api.h, making that
part of the library STL agnostic. However this has since changed
and there seems to be little reason to separate the reporters from
the rest of the library.
* Fix internal_macros.h
* Remove more references to macros.h
* Add ClearRegisteredBenchmarks() function.
Since benchmarks can be registered at runtime using the RegisterBenchmark(...)
functions, it makes sense to have a ClearRegisteredBenchmarks() function too,
that can be used at runtime to clear the currently registered benchmarks and
re-register an entirely new set.
This allows users to run a set of registered benchmarks, get the output using
a custom reporter, and then clear and re-register new benchmarks based on the
previous results.
This fixes issue #400, at least partially.
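The intended workflow is sketched below; the two registered functions are placeholders:
```c++
#include <benchmark/benchmark.h>

static void BM_FirstPass(benchmark::State& state) {
  for (auto _ : state) { /* ... */ }
}
static void BM_SecondPass(benchmark::State& state) {
  for (auto _ : state) { /* ... */ }
}

int main(int argc, char** argv) {
  benchmark::Initialize(&argc, argv);

  benchmark::RegisterBenchmark("BM_FirstPass", BM_FirstPass);
  benchmark::RunSpecifiedBenchmarks();

  // Throw away the first set and build a new one, e.g. based on its results.
  benchmark::ClearRegisteredBenchmarks();
  benchmark::RegisterBenchmark("BM_SecondPass", BM_SecondPass);
  benchmark::RunSpecifiedBenchmarks();
}
```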
* Remove unused change
* Fix #342: DoNotOptimize causes compile errors on older GCC versions.
DoNotOptimize uses inline assembly constraints to tell
the compiler what the type of the input variable is. The 'g'
operand allows the input to be any register, memory, or
immediate integer operand. However this constraint seems
to be too weak on older GCC versions, and certain inputs
will cause compile errors.
This patch changes the constraint to 'X', which is documented
as "any operand whatsoever is allowed". This appears to fix
the issues with older GCC versions.
However, Clang doesn't seem to like "X", and will attempt
to put the input into a register even when it can't/shouldn't,
causing a compile error. Fortunately, "g" with Clang seems to work
the way "X" does with GCC, so Clang still uses "g".
* Try alternative formulation to placate GCC
This thing with the pragma ignore was getting out of hand: now
MinGW (and probably GCC) was erroring too. So I chose to move
the definition of IsZero() out of the anonymous namespace into
benchmark.cc.