This commit adds a job that runs after the wheel-building job and is responsible for uploading the built wheels to PyPI.
The job only runs on successful completion of all build jobs, and uploads to PyPI using a secret added to the Google Benchmark repo (TBD).
Also, the setup-python action has been bumped to the latest version v3.
* Add SetBenchmarkFilter() to set --benchmark_filter flag value in user code.
Use case: provide an API to set this flag independently of the flag's implementation (i.e., absl flag vs. benchmark's own flag facility).
* add test
* added notes on Initialize()
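A minimal usage sketch (assuming the new function is `benchmark::SetBenchmarkFilter` and takes the filter regex as a string; `BM_Example` is a hypothetical benchmark):

```cpp
#include <benchmark/benchmark.h>

static void BM_Example(benchmark::State& state) {
  for (auto _ : state) {
    benchmark::DoNotOptimize(state.iterations());
  }
}
BENCHMARK(BM_Example);

int main(int argc, char** argv) {
  benchmark::Initialize(&argc, argv);
  // Programmatically restrict which benchmarks run, without depending on how
  // --benchmark_filter is implemented (absl flag vs. benchmark's own flags).
  benchmark::SetBenchmarkFilter("BM_Example.*");
  benchmark::RunSpecifiedBenchmarks();
  benchmark::Shutdown();
  return 0;
}
```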
This commit adds the two fields `long_description` and `long_description_content_type` to `setup.py`. These can be used for proper project presentation on the PyPI project page, which is currently a placeholder.
* Add option to set the default time unit globally
This commit introduces the `--benchmark_time_unit={ns|us|ms|s}` command line argument. The argument only affects benchmarks where the time unit is not set explicitly.
* Update AUTHORS and CONTRIBUTORS
* Test `SetDefaultTimeUnit`
* clang format
* Use `GetDefaultTimeUnit()` for initializing `TimeUnit` variables
* Review fixes
* Export functions
* Add comment
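A sketch of how the global default might be set from user code (assuming the exported functions are `benchmark::SetDefaultTimeUnit` / `benchmark::GetDefaultTimeUnit`, mirroring the `--benchmark_time_unit` flag; benchmark names are hypothetical):

```cpp
#include <benchmark/benchmark.h>

// No explicit ->Unit(...), so this benchmark reports in the global default unit.
static void BM_UsesDefaultUnit(benchmark::State& state) {
  for (auto _ : state) {
  }
}
BENCHMARK(BM_UsesDefaultUnit);

// An explicitly set time unit still takes precedence over the global default.
static void BM_ExplicitUnit(benchmark::State& state) {
  for (auto _ : state) {
  }
}
BENCHMARK(BM_ExplicitUnit)->Unit(benchmark::kNanosecond);

int main(int argc, char** argv) {
  // Roughly equivalent to passing --benchmark_time_unit=ms on the command line.
  benchmark::SetDefaultTimeUnit(benchmark::kMillisecond);
  benchmark::Initialize(&argc, argv);
  benchmark::RunSpecifiedBenchmarks();
  benchmark::Shutdown();
  return 0;
}
```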
* Make generate_export_header.bzl work for Windows.
While I'm here, bring the generated code slightly closer to what CMake
would generate nowadays.
Fixes #1351.
* Fix define.
* Fix export_import_condition.
* Fix guard.
* introduce the possibility to customize the help printer function
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
* fixed naming convention and introduced the option function in the init method
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
* remove the macros to inject the helper function
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
* remove the default implementation and introduce nullptr
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
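A sketch of the resulting API (assuming `Initialize` now accepts an optional help-printer function pointer defaulting to `nullptr`, in which case the built-in usage text is used; the names below are hypothetical):

```cpp
#include <benchmark/benchmark.h>
#include <cstdio>

// Custom usage text printed when --help is passed, instead of the built-in one.
static void PrintMyHelp() {
  std::printf("usage: my_bench [--benchmark_filter=<regex>] [--my_flag=<value>]\n");
}

int main(int argc, char** argv) {
  // The third argument is the custom help printer; omitting it (or passing
  // nullptr) keeps the default behavior.
  benchmark::Initialize(&argc, argv, PrintMyHelp);
  if (benchmark::ReportUnrecognizedArguments(argc, argv)) return 1;
  benchmark::RunSpecifiedBenchmarks();
  benchmark::Shutdown();
  return 0;
}
```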
* Expose default display reporter creation in public API
This is useful when a custom reporter wants to fall back on the default
display reporter, but doesn't necessarily have access to the benchmark
library flag configuration.
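A sketch of the fallback pattern this enables (assuming the exported factory is `benchmark::CreateDefaultDisplayReporter()` returning a `BenchmarkReporter*`; the ownership/lifetime of the returned reporter and the `BannerReporter` name are assumptions here):

```cpp
#include <benchmark/benchmark.h>
#include <vector>

// A custom reporter that prepends a banner, then delegates all display output
// to whatever the default display reporter is, without needing access to the
// library's flag configuration.
class BannerReporter : public benchmark::BenchmarkReporter {
 public:
  BannerReporter() : inner_(benchmark::CreateDefaultDisplayReporter()) {}

  bool ReportContext(const Context& context) override {
    GetOutputStream() << "=== my custom banner ===\n";
    inner_->SetOutputStream(&GetOutputStream());
    inner_->SetErrorStream(&GetErrorStream());
    return inner_->ReportContext(context);
  }
  void ReportRuns(const std::vector<Run>& reports) override {
    inner_->ReportRuns(reports);
  }
  void Finalize() override { inner_->Finalize(); }

 private:
  // Assumed not owned here; adjust if the factory actually transfers ownership.
  benchmark::BenchmarkReporter* inner_;
};
```

Such a reporter could then be passed to `RunSpecifiedBenchmarks` in place of the default display reporter.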
* Make use of unique_ptr in the random interleaving test.
* clang-format
* The parameterized tests check both floating point and integral types. We might as well use types that avoid truncation warnings across the platforms
* static_cast version of how to avoid truncation warnings in basic_test
Co-authored-by: Staffan Tjernstrom <staffantj@users.noreply.github.com>
This commit contains a fix for macOS ARM64 wheel builds in Google Benchmark's wheel building CI.
Previously, while `cibuildwheel` itself properly identified the need for cross-compilation and produced valid ARM platform wheels, the included shared library containing the Python bindings
built by `bazel` was built for x86, resulting in immediate errors upon import.
To fix this, logic was added to the setup.py file that adds the "--cpu=darwin_arm64" and "--macos_cpus=arm64" switches to the `bazel build` command if
1) The current system platform is macOS Darwin running on the x86_64 architecture, and
2) The ARCHFLAGS environment variable, set by wheel build systems like conda and cibuildwheel, contains the tag "arm64".
This way, bazel correctly sets the target CPU to ARM64, and produces functional wheels for the macOS ARM line of CPUs.
This patch fixes #1306 by reducing the number of pinned instances of
PerfCounters.
The issue is caused by creating multiple pinned events in the
same thread; doing so makes Snapshot(PerfCounterValues* values)
fail, and that failure is now discoverable.
Creating multiple pinned events is currently unsupported behavior:
the error is detected at read() time, not at
perf_event_open() / ioctl() time.
The unsupported behavior above was confirmed by Stephane Eranian @seranian,
who also pointed out the detection method.
Finished this patch under the guidance of Mircea Trofin @mtrofin.
* Revert "Refine docs on changing cpufreq governor (#1325)"
This reverts commit 9e859f5bf5.
* Refine the User Guide CPU Frequency Scaling section
The text now describes the cpupower command, so users in a hurry
have something to copy/paste that will likely work. It then
suggests that there are probably more convenient options available
that people can look into.
This reverts the prior commit, which introduced a shell script
that doesn't work. It also retains the spirit of the original
fix: no longer recommend setting the frequency governor to
"powersave", which might not be appropriate or available.
Note: I did attempt to write a bash script that set the governor
to "powersave" for the duration of a single command, but I gave
up for many reasons:
1) it got complex, in part because the cpupower command does not
seem to be designed for scripts (e.g. it prints out complex
English phrases).
2) munging /proc/sys files directly feels unstable and less than
universal. libcpupower and cpupower are designed to abstract
those away, because the details can vary.
3) there are better options. E.g. various GUI programs, and
even Gnome's core Settings UI, let you adjust the system's
performance mode without root access.
Fixes #1325, #1327
* Address MSVC C4722 warning in tests
Some test paths deliberately exit, and it appears that the appropriate declspec
does not stop the compiler from generating the C4722 warning as one might expect.
Per https://github.com/google/benchmark/issues/826#issuecomment-851995549
this commit ignores the warning for the affected call site.
* Fix up Formatting
* Fix up formatting issue on pragmas
* Fix up formatting issue on pragmas take 2
Co-authored-by: Staffan Tjernstrom <staffantj@users.noreply.github.com>
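The suppression pattern is roughly the following (a hypothetical sketch, not the actual test code; C4722 fires on a destructor that never returns):

```cpp
#include <cstdlib>

#if defined(_MSC_VER)
#pragma warning(push)
// C4722: destructor never returns, potential memory leak
#pragma warning(disable : 4722)
#endif

// Hypothetical type standing in for a test path that deliberately exits.
struct ExitOnDestruction {
  ~ExitOnDestruction() { std::exit(1); }  // never returns, so MSVC warns
};

#if defined(_MSC_VER)
#pragma warning(pop)
#endif
```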
* Fix #1159: harmonize the types between the loop counter and the vector of indices
The loop counter is a size_t, but the indices were stored in a vector of int, leading to
potential overflow warnings. In order to avoid accidental run-time overflow,
this change modifies the vector's element type rather than the loop counter.
* Fix up line endings
* Update AUTHORS
Add Staffan Tjernstrom <staffantj@gmail.com>
Co-authored-by: Staffan Tjernstrom <staffantj@users.noreply.github.com>
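Illustratively (a hypothetical helper, not the actual benchmark code), the fix keeps the container's element type in agreement with the `size_t` loop counter instead of narrowing into `int`:

```cpp
#include <cstddef>
#include <vector>

// Before: std::vector<int> indices; pushing a size_t counter into it narrows
// the value and triggers truncation/overflow warnings on some platforms.
// After: the element type matches the counter type, so no conversion occurs.
std::vector<std::size_t> BuildIndices(std::size_t count) {
  std::vector<std::size_t> indices;
  indices.reserve(count);
  for (std::size_t i = 0; i < count; ++i) {
    indices.push_back(i);
  }
  return indices;
}
```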
This applies a fix that used to exist in LLVM's downstream copy of
this library, from
948ce4e6ed.
I presume this warning isn't present if built with MSVC or Clang-cl,
but it's printed in MinGW mode. As the benchmark library adds
-Werror, this is a fatal error when built in MinGW mode.
Despite the wide variety of features we provide,
some people still have the audacity to complain and demand more.
Concretely, I *very* often would like to see the overall result
of the benchmark: is the 'new' better or worse, overall,
across all the non-aggregate time/CPU measurements?
This comes up for me most often when I want to quickly see
what effect some LLVM optimization change has on the benchmark.
The idea is straightforward: just produce four lists:
wall times for the LHS benchmark, CPU times for the LHS benchmark,
wall times for the RHS benchmark, CPU times for the RHS benchmark;
then compute the geomean of each of those four lists,
and compute the two percentage changes between
* the geomean wall time for the LHS benchmark and the geomean wall time for the RHS benchmark
* the geomean CPU time for the LHS benchmark and the geomean CPU time for the RHS benchmark
and voila!
It is complicated by the fact that it needs to gracefully handle
different time units, so a pandas.Timedelta dependency is introduced.
That is the only library that does not barf upon floating-point times;
I have tried numpy.timedelta64 (only takes integers)
and Python's datetime.timedelta (does not take nanoseconds),
and they won't do.
Fixes https://github.com/google/benchmark/issues/1147
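For reference, the comparison sketched above reduces, in formula form (assuming both sides are first converted to a common time unit; the exact sign convention of the reported change may differ), to the geometric means of the two lists and their relative change:

$$\mathrm{GM}(t) = \Big(\prod_{i=1}^{n} t_i\Big)^{1/n}, \qquad \Delta = \frac{\mathrm{GM}(u) - \mathrm{GM}(t)}{\mathrm{GM}(t)}$$

where $t_1,\dots,t_n$ are the LHS measurements and $u_1,\dots,u_m$ the RHS measurements, computed once for wall times and once for CPU times.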
* Add Setup/Teardown option on Benchmark.
Motivations:
- feature parity with our internal library. (which has ~718 callers)
- more flexible than coordinating setup/teardown inside the benchmark routine
  (a usage sketch follows after the change list below).
* change Setup/Teardown callback type to raw function pointers
* add test file to cmake file
* move b.Teardown() up
* add const to param of Setup/Teardown callbacks
* fix comment and add doc to user_guide
* fix typo
* fix doc, fix test and add bindings to python/benchmark.cc
* fix binding again
* remove explicit C cast - that was wrong
* change policy to reference_internal
* try removing the bindings ...
* clean up
* add more tests with repetitions and fixtures
* more comments
* init setup/teardown callbacks to NULL
* s/nullptr/NULL
* removed unused var
* change assertion on fixture_interaction::fixture_setup
* move NULL init to .cc file
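A minimal usage sketch of the registration-time hooks (assuming `Setup`/`Teardown` take raw function pointers of type `void(const benchmark::State&)`, per the callback-type change above; the names here are hypothetical):

```cpp
#include <benchmark/benchmark.h>

// Invoked before the benchmark runs (outside the measured region).
static void DoSetup(const benchmark::State&) {
  // e.g. populate a shared data structure the benchmark will read
}

// Invoked after the corresponding benchmark run finishes.
static void DoTeardown(const benchmark::State&) {
  // e.g. release resources acquired in DoSetup
}

static void BM_WithHooks(benchmark::State& state) {
  for (auto _ : state) {
    // measured work
  }
}
BENCHMARK(BM_WithHooks)->Setup(DoSetup)->Teardown(DoTeardown)->Repetitions(3);

BENCHMARK_MAIN();
```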
According to [1], bazelbuild/rules_cc seems to have been put on hold,
and the recommended way, for now, is to use the native cc rules.
[1]: https://github.com/bazelbuild/rules_go/pull/2950
* Fix dependency typo and unpin cibuildwheel version in wheel building action
* Move to monolithic build jobs, restrict to x64 architectures
As of this commit, all wheel building jobs complete on GitHub Actions. Since some platform-specific options had to be set along the way to fix different types of build problems, the build job matrix was unrolled.
Still left TODO:
* Wheel testing after build (running the Python bindings test)
* Emulating bazel on other architectures to build aarch64/i686/ppc64le
* Enabling Win32 (this fails due to linker errors).
* Add binding test commands for all wheels, set macOSX deployment target to 10.9
* Add instructions for updating Python __version__ variable before release creation