* Enable Large-file Support
This should fix https://github.com/google/benchmark/issues/1725
* Use spaces instead of tabs in BUILD.bazel
---------
Co-authored-by: dominic <510002+dmah42@users.noreply.github.com>
Starting with Linux 6.6 [1], RDCYCLE is a privileged instruction on
RISC-V and can't be used directly from userland. There is a sysctl
option to change that as a transition period, but it will eventually
disappear.
Use RDTIME instead, which, while less accurate, has the advantage of being
synchronized between CPUs (and thus monotonic) and of running at a constant frequency.
[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=cc4c07c89aada16229084eeb93895c95b7eabaa3
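For reference, a minimal sketch of reading the time CSR with GCC/Clang-style inline assembly on RV64 (illustrative only; a real implementation also needs an RV32 path, where the 64-bit counter is read via RDTIME/RDTIMEH):

```cpp
#include <cstdint>

// RDTIME is a pseudoinstruction for csrr rd, time; unlike RDCYCLE it remains
// readable from userland on Linux 6.6+.
inline uint64_t ReadRiscvTime() {
  uint64_t t;
  __asm__ volatile("rdtime %0" : "=r"(t));
  return t;
}
```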
Co-authored-by: dominic <510002+dmah42@users.noreply.github.com>
The reason for this is that `setuptools-scm` installs a version relative
to the last release tag - if no tag is found, the default version is taken
to be v0.1.0. This was the case in GitHub Actions, where only the PR
branch is checked out.
Also unpins build system requirements in the `pyproject.toml`.
The sdist build system was changed to `build` from `python setup.py sdist`
for forward compatibility - `build` is superior in every way, and is the
solution advertised by both cibuildwheel and PyPA itself.
Bump `actions/setup-python` to v5, `pypa/gh-action-pypi-publish` to v1.8.11,
and `docker/setup-qemu-action` to v3.
* Run `pre-commit autoupdate`, fix stale pyproject.toml comments
* Set `--enable_bzlmod=false` for the moment
Until the newer nanobind tags are pushed to the BCR, it's best to disable
bzlmod for the bindings, because the Python CI breaks due to Bazel 7
enabling bzlmod by default.
* Remove E203 ignore, add linebreaks to semantically group ruff options
Bumps `rules_foreign_cc` to v0.10.1 (October 2023), `bazel_skylib` to
v1.5.0 (November 2023), `rules_python` to v0.27.1 (December 2023).
Also syncs GoogleTest to v1.12.1 (the last C++11 supporting version) to
be the same as in MODULE.bazel.
Since the latest `rules_python` changed its setup calling convention,
that is updated also in the WORKSPACE file.
This method was the culprit for the recent editable install breakage,
since it just tries to copy the generated extension file without checking
its existence.
Since the `BazelExtension` uses a non-standard location to store the
build artifacts, calling the copy method fails the build since the
extension is not found in the expected location.
But, since we already copy the file into the source tree as part of the
`BazelExtension.bazel_build` method, it's fine - the extension appears
in the right place, and the egg info is generated correctly as well.
This method also does not affect the general install, so it solves the
editable problem without regressing the fixed install.
Co-authored-by: dominic <510002+dmah42@users.noreply.github.com>
There is no bug here, but it gave me a scare the other day.
It is not incorrect to use `IterationCount` here,
since it's just an `int64_t` either way,
but it's wildly confusing. Let's not do that.
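For context, `IterationCount` is just an integer alias along these lines (a hedged reconstruction, not a quote of the header):

```cpp
#include <cstdint>

// Plain alias: nothing stops it from being used for unrelated totals, which
// compiles fine but reads as if the value were an iteration count.
typedef int64_t IterationCount;
```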
Co-authored-by: dominic <510002+dmah42@users.noreply.github.com>
For some reason, editable pip installs are now broken, which means that
they will break the pre-commit workflow due to the `pip install -e .`
instruction.
Since the normal install is unaffected, we can just drop the `-e` switch.
It does not matter which mode is used, since the environment is only
used for linting.
Saves one pre-commit hook and some pyproject.toml configuration,
and provides much better performance with almost identical behavior.
Co-authored-by: dominic <510002+dmah42@users.noreply.github.com>
donotoptimize_test.cc could not be compiled under non-GNU / non-MSVC compilers,
because only the deprecated version of DoNotOptimize is available for these
compilers, and tests are compiled with -Werror. The patch fixes test compilation
by providing a non-deprecated version of DoNotOptimize for compilers with C++11
standard support.
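A hedged, self-contained sketch of what such a fallback can look like (names and body are illustrative, not the library's exact implementation):

```cpp
namespace sketch {

// C++11-compatible fallback for compilers without GNU inline asm or MSVC
// intrinsics: publishing the address through a volatile pointer keeps the
// compiler from proving the value is unused.
template <class Tp>
inline void DoNotOptimize(Tp const& value) {
  static const volatile void* sink;
  sink = &value;
  (void)sink;
}

}  // namespace sketch
```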
Co-authored-by: dominic <510002+dmah42@users.noreply.github.com>
* Add `setuptools_scm` for dynamic zero-config Python versioning
This removes the need for manually bumping versions in the Python
bindings.
For the wheel uploads, the correct semver version is inferred in the case
of tagged commits, which is exactly the case in GitHub CI.
The docs were updated to reflect the changes in the release workflow.
* Add separate version variable and module, use PEP484-compliant exports
This is the best practice mentioned in the `setuptools_scm` docs, see
https://setuptools-scm.readthedocs.io/en/latest/usage/#version-at-runtime.
This behaves the same, and saves a pre-commit step. ruff just needs an
additional package location hint to correctly map first-party packages
(in this case, `google_benchmark`).
This revealed a misformat in the `google_benchmark.__init__`, which is
now fixed.
* Add pre-commit config and GitHub Actions job
Contains the following hooks:
* buildifier - for formatting and linting Bazel files.
* mypy, ruff, isort, black - for Python typechecking, import hygiene,
static analysis, and formatting.
The pylint CI job was changed to be a pre-commit CI job, where pre-commit
is bootstrapped via Python.
Pylint is currently no longer part of the
code checks, but can be re-added if requested. The reason for dropping it
is that it does not play nicely with pre-commit, and much of its
functionality and responsibilities are covered by ruff.
* Add dev extra to pyproject.toml for development installs
* Clarify that pre-commit contains only Python and Bazel hooks
* Add one-line docstrings to Bazel modules
* Apply buildifier pre-commit fixes to Bazel files
* Apply pre-commit fixes to Python files
* Supply --profile=black to isort to prevent conflicts
* Fix nanobind build file formatting
* Add tooling configs to `pyproject.toml`
In particular, set line length 80 for all Python files.
* Reformat all Python files to line length 80, fix return type annotations
Also ignores the `tools/compare.py` and `tools/gbench/report.py` files
for mypy, since they emit a barrage of errors which we can deal with
later. The errors are mostly related to dynamic classmethod definition.
* Add LTO builds on Windows+MSVC
Gates the MSVC switches behind an `@bazel_skylib:selects` statement.
This is a first experiment based on best guesses and a study of the MSVC docs.
* Fix misleading inline comment
* Reapply size optimization for clang, equivalent options for MSVC
Working towards cross-platform optimal nanobind building configurations.
* Add LTO back to non-Windows builds
The Windows case (the option name is "/GL") is more complicated, since
there, the compiler options also need to be passed to the linker if LTO
is enabled.
Since we are gating the linker options on platform at the moment instead
of compiler, we need to implement a Bazel boolean flag for the case
"Platform == MacOS && Compiler == AnyOf(gcc, clang)".
* Change nanobind linkage to response file approach
This change needs https://github.com/bazelbuild/bazel/pull/18952 to be
merged first. Fixes linkage of GBM's nanobind bindings on macOS by
supplying a linker response file instead of `-undefined dynamic_lookup`,
which has since been deprecated on macOS.
* Fix bazel_skylib checksum, bump skylib version in MODULE.bazel
* Bump Bazel to version 6.4.0 for linker response file support
* Add Python 3.12 support tag
* Bump nanobind to latest stable v1.6.2 tag
* Add PyPI trusted publishing to GitHub workflow, add Python 3.12 wheel builds
Trusted publishing has been available since v1.8.0 of the pypa-publish
action. It enables password-less authentication and wheel uploads from
the wheel upload job.
`cibuildwheel` was bumped to v2.16.2 to allow Python 3.12 wheel builds.
More info on trusted publishing:
https://github.com/marketplace/actions/pypi-publish#trusted-publishing
The Windows distribution was reverted to `latest` in the OS matrix,
since the discovery problem of MSVC was fixed in a Bazel patch release.
* Bump nanobind to stable v1.7.0 tag
We use assert() a lot in tests, and that can cause build breakages in some of the opt builds (since assert()s are compiled out).
It's not practical to sprinkle "(void)" casts everywhere, so I think setting this warning option is the best option for now.
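An illustrative example of the breakage (hypothetical code, just to show the mechanism):

```cpp
#include <cassert>

void Check(int (*compute)()) {
  const int result = compute();
  // Under NDEBUG (opt builds) the assert is compiled out, leaving `result`
  // unused; with -Werror that unused-variable warning becomes a build error
  // unless the warning is disabled or every such variable gets a (void) cast.
  assert(result == 42);
}
```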
* Increase the kMaxIterations limit
This fixes #1663. Note that as a result of this change, the columns in the console output can become misaligned if the actual iteration count is too high. This will be dealt with in a separate commit.
* Fix failing test on Windows
* Fix formatting
---------
Co-authored-by: dominic <510002+dmah42@users.noreply.github.com>
* Make json and csv output consistent.
Currently, the --benchmark_format=csv option does not output the correct value for the cv statistics. Also, the json output should not contain a time unit for the cv statistics, since the coefficient of variation is the standard deviation divided by the mean and is therefore unitless.
* fix formatting
* undo json change
---------
Co-authored-by: dominic <510002+dmah42@users.noreply.github.com>
There are three major compilers on Windows targeting the MSVC ABI (i.e.
linking with Microsoft's STL, etc.):
- `MSVC`
- `clang-cl` aka clang with the MSVC compatible CLI
- `clang++` aka clang with gcc compatible CLI
The CMake variable `MSVC` is only set for the first two, as it is defined in
terms of the CLI interface provided:
> Set to true when the compiler is some version of Microsoft Visual
> C++ or another compiler simulating the Visual C++ cl command-line syntax.
(from cmake docs)
For many of the tests in the library it's the ABI that matters, not the
command line, so check `CMAKE_CXX_SIMULATE_ID` too: if it is `MSVC`, the
current compiler is targeting the MSVC ABI. This handles `clang++`.
Previously, this could return the wrong result when there
was an even number of elements.
There were two `nth_element` calls. The second call could
change elements in `[center2, end)`, which was where
`center` pointed. Therefore, `*center` sometimes had the
wrong value after the second `nth_element` call.
Rewrite to use `max_element` instead of the second call to
`nth_element`. This avoids modifying the vector.
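A minimal sketch of the corrected approach, simplified from the library's statistics code (assumes a non-empty input):

```cpp
#include <algorithm>
#include <vector>

double Median(std::vector<double> v) {
  const auto center = v.begin() + v.size() / 2;
  std::nth_element(v.begin(), center, v.end());
  if (v.size() % 2 == 1) return *center;
  // Even size: the other middle value is the largest element before center;
  // max_element only reads, so *center stays valid.
  const auto center2 = std::max_element(v.begin(), center);
  return (*center2 + *center) / 2.0;
}
```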
* test: Use gtest_main only when needed
There are two types of tests. `*_gtest.cc` files use `gtest` and
`gtest_main`. `*_test.cc` files define their own main.
Only depend on `gtest`/`gtest_main` when needed. This is similar
to what `CMakeLists.txt` does.
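For illustration, a `*_test.cc`-style file supplies its own `main` along these lines, so it must not also link `gtest_main` (which would provide a second `main`):

```cpp
#include "benchmark/benchmark.h"

int main(int argc, char** argv) {
  benchmark::Initialize(&argc, argv);
  benchmark::RunSpecifiedBenchmarks();
  return 0;
}
```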
* comment-only: gunit => gtest
* Fix typo
* perf_counters: Initialize once only when needed
This works around some performance problems running Android under QEMU.
Calling `pfm_initialize` was very slow, and was called during dynamic
initialization (before `main` or when loaded as a shared library).
This happened whenever benchmark was linked, even if no benchmarks
were run.
Instead, call `pfm_initialize` at most once, and only when one of:
1. `PerfCounters::Initialize` is called
2. `PerfCounters::Create` is called with a non-empty counter list
3. `PerfCounters::IsCounterSupported` is called
The return value of the first `pfm_initialize()` is saved and
returned from all subsequent `PerfCounters::Initialize` calls.
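A hedged sketch of the lazy-initialization pattern, using libpfm4's `pfm_initialize()` (details of the real code may differ):

```cpp
#include <perfmon/pfmlib.h>

// Runs pfm_initialize() at most once and caches whether it succeeded;
// every later call just returns the saved result.
static bool InitLibPfmOnce() {
  static const bool success = [] { return pfm_initialize() == PFM_SUCCESS; }();
  return success;
}
```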
* perf_counters: Make success var const
* InitLibPfmOnce: Inline function
* State: Initialize counters with kAvgIteration in constructor
Previously, `counters` was updated in `PauseTiming()` with
`counters[name] += Counter(measurement, kAvgIteration)`.
The first `counters[name]` call inserts a counter with no flags.
There is no `operator+=` for `Counter`, so the insertion is done
by converting the `Counter` to a `double`, then constructing a
`Counter` to insert from the `double`, which drops the flags.
Pre-insert the `Counter` with the correct flags, then only
update `Counter::value`.
Introduced in 1c64a36 ([perf-counters] Fix pause/resume (#1643)).
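A self-contained illustration of the pitfall, using a simplified stand-in for `Counter` (not the library's real class):

```cpp
#include <cstdio>
#include <map>
#include <string>

struct Counter {
  double value;
  int flags;
  Counter(double v = 0.0, int f = 0) : value(v), flags(f) {}
  operator double&() { return value; }
  operator const double&() const { return value; }
};

int main() {
  std::map<std::string, Counter> counters;
  // operator[] first inserts Counter{0.0, flags = 0}; with this stand-in the
  // += only touches the converted double, so flags = 2 from the right-hand
  // side never reaches the stored counter.
  counters["instructions"] += Counter(100.0, /*flags=*/2);
  std::printf("value=%g flags=%d\n", counters["instructions"].value,
              counters["instructions"].flags);  // value=100 flags=0
  return 0;
}
```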
* perf_counters_test.cc: Don't divide by iterations
Perf counters are now divided by iterations, so dividing again
in the test is wrong.
* State: Fix shadowed param error
* benchmark.cc: Fix clang-tidy error
---------
Co-authored-by: dominic <510002+dmah42@users.noreply.github.com>
* perf_counters_gtest: Make test pass on Android
Tested on Pixel 3 and Pixel 6. Reduce test to the intersection of
what passes on all platforms.
Pixel 6 doesn't support BRANCHES, and only supports two perf
counters.
---------
Co-authored-by: dominic <510002+dmah42@users.noreply.github.com>