Commit Graph

13 Commits

Author SHA1 Message Date
Nicholas Junge
eaafe694d2
Add Python bindings build using bzlmod (#1764)
* Add a bzlmod Python bindings build

Uses the newly started `@nanobind_bazel` project to build nanobind
extensions. This means that we can drop all in-tree custom build defs
and build files for nanobind and the C++ Python headers.

Additionally, the temporary WORKSPACE overwrite hack naturally goes away,
since the WORKSPACE system is now obsolete.
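For illustration, a bzlmod consumer wires this up roughly as follows (a hedged sketch: the version number and target names are placeholders, and the exact rule names depend on the `@nanobind_bazel` release in use):

    # MODULE.bazel -- illustrative sketch only; the version is a placeholder.
    bazel_dep(name = "nanobind_bazel", version = "1.0.0")

    # BUILD -- a hypothetical nanobind extension target.
    load("@nanobind_bazel//:build_defs.bzl", "nanobind_extension")

    nanobind_extension(
        name = "_benchmark",
        srcs = ["bindings.cc"],
    )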

* Bump ruff -> v0.3.1, change ruff settings

The latest minor releases introduced some formatting and configuration
changes; this commit rolls them out.

---------

Co-authored-by: dominic <510002+dmah42@users.noreply.github.com>
2024-03-07 12:28:55 +00:00
Roman Lebedev
17bc235ab3
Output library / schema versions in JSON context block (#1742)
* CMake: `get_git_version()`: just use `--dirty` flag of `git describe`

* CMake: move version normalization out of `get_git_version()`

Mainly, I want `get_git_version()` to return the true version,
not something sanitized.

* JSON reporter: store library version and schema version in `context`

* Tools: discard inputs with unexpected `json_schema_version` (see the sketch after this list)

* Extract version string into `GetBenchmarkVersion()`
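A rough illustration of the tools-side schema check (a hedged sketch: the `context` and `json_schema_version` field names follow the commit description, while the helper and the supported version value are hypothetical):

    import json

    SUPPORTED_SCHEMA_VERSION = 1  # assumed value, for illustration only


    def load_benchmark_results(path):
        """Load a benchmark JSON file, discarding unexpected schema versions."""
        with open(path) as f:
            results = json.load(f)
        schema = results.get("context", {}).get("json_schema_version")
        if schema != SUPPORTED_SCHEMA_VERSION:
            raise ValueError(f"{path}: unexpected json_schema_version {schema!r}")
        return results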

---------

Co-authored-by: dominic <510002+dmah42@users.noreply.github.com>
2024-01-29 13:15:43 +00:00
Nicholas Junge
4682db08bc
Bump pre-commit dependencies (#1740)
Also fix a mypy error in `tools.gbench.util` - the condition behaves the
same as before, but in the new mypy version, the old condition results
in an unreachable code error for the final `return False` statement.

This is most likely a bug in mypy's reachability analysis, but the
workaround here is easy enough.
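For illustration, the shape of the problem and the workaround look roughly like this (a hedged, hypothetical sketch, not the literal `tools.gbench.util` code):

    from enum import Enum


    class InputType(Enum):
        EXECUTABLE = 1
        JSON = 2


    def is_json_input(kind: InputType) -> bool:
        # The old shape was two `if` checks followed by a bare
        # `return False`; once mypy narrows `kind` exhaustively, it can
        # flag that final statement as unreachable. A single comparison
        # behaves identically and sidesteps the diagnostic.
        return kind == InputType.JSON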
2024-01-18 13:35:57 +00:00
Nicholas Junge
b93f5a5929
Add pre-commit config and GitHub Actions job (#1688)
* Add pre-commit config and GitHub Actions job

Contains the following hooks:
* buildifier - for formatting and linting Bazel files.
* mypy, ruff, isort, black - for Python typechecking, import hygiene,
static analysis, and formatting.

The pylint CI job was replaced with a pre-commit CI job, where pre-commit
is bootstrapped via Python.
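A minimal sketch of that bootstrapping (hedged; the actual workflow steps may differ):

    # Install and run pre-commit from the CI job's Python interpreter.
    import subprocess
    import sys

    subprocess.check_call([sys.executable, "-m", "pip", "install", "pre-commit"])
    subprocess.check_call([sys.executable, "-m", "pre_commit", "run", "--all-files"])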

Pylint is no longer part of the code checks, but it can be re-added if
requested. The reason for dropping it is that it does not play nicely
with pre-commit, and much of its functionality and responsibilities are
already covered by ruff.

* Add dev extra to pyproject.toml for development installs

* Clarify that pre-commit contains only Python and Bazel hooks

* Add one-line docstrings to Bazel modules

* Apply buildifier pre-commit fixes to Bazel files

* Apply pre-commit fixes to Python files

* Supply --profile=black to isort to prevent conflicts

* Fix nanobind build file formatting

* Add tooling configs to `pyproject.toml`

In particular, set line length 80 for all Python files.

* Reformat all Python files to line length 80, fix return type annotations

Also ignores the `tools/compare.py` and `tools/gbench/report.py` files
for mypy, since they emit a barrage of errors which we can deal with
later. The errors are mostly related to dynamic classmethod definition.
2023-10-30 15:35:37 +00:00
Matt Armstrong
6bc17754f6
Support --benchmarks_filter in the compare.py 'benchmarks' command (#1486)
Previously compare.py ignored the --benchmarks_filter
argument when loading JSON. This defeated any workflow in
which the benchmark was run once, followed by multiple
"subset reports" run against it with the 'benchmarks'
command.

Concretely this came up with the simple case:

 compare.py benchmarks a.json b.json --benchmarks_filter=BM_Example

This has no practical impact on the 'filters' and
'benchmarkfiltered' commands, which apply their filtering
at a later stage.
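A hedged sketch of the load-time filtering this change introduces (the helper name is hypothetical; the real logic lives in the compare tooling):

    import json
    import re


    def load_results(path, benchmarks_filter=None):
        """Load benchmark JSON, applying the name filter at load time."""
        with open(path) as f:
            data = json.load(f)
        if benchmarks_filter is not None:
            pattern = re.compile(benchmarks_filter)
            data["benchmarks"] = [
                b for b in data["benchmarks"] if pattern.search(b["name"])
            ]
        return data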

Fixes #1484

Co-authored-by: dominic <510002+dmah42@users.noreply.github.com>
2023-02-06 16:57:07 +00:00
SunBlack
fe65457e80
Fix typos found by codespell (#1519) 2023-01-10 12:25:32 +00:00
Roman Lebedev
6e32352c79
compare.py: sort the results (#1168)
Currently, the tooling just keeps whatever benchmark order
was present, and this is fine nowadays, but once benchmarks
can optionally be run interleaved, that will be rather suboptimal.

So, now that I have introduced a family index and a per-family instance
index, we can define an order for the benchmarks and sort them accordingly.

There is a caveat with aggregates: we assume that they are in order,
and hopefully we won't mess that order up.
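A hedged sketch of the resulting ordering (the `family_index` and `per_family_instance_index` fields are the ones emitted in the JSON output; the helpers themselves are illustrative):

    def benchmark_sort_key(benchmark):
        # Order primarily by family, then by instance within the family.
        return (
            benchmark.get("family_index", 0),
            benchmark.get("per_family_instance_index", 0),
        )


    def sorted_benchmarks(results):
        # Python's sort is stable, so aggregates that already appear in
        # order within an instance keep their relative placement.
        return sorted(results["benchmarks"], key=benchmark_sort_key)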
2021-06-03 16:22:52 +03:00
Dominic Hamon
beb360d03e
Create pylint.yml (#1039)
* Create pylint.yml

* improve file matching

* fix some pylint issues

* run on PR and push (force on master only)

* more pylint fixes

* suppress noisy exit code and filter to fatals

* add conan as a dep so the module is importable

* fix lint error on unreachable branch
2020-09-09 09:43:26 +01:00
Jusufadis Bakamovic
eee8b05c97 [tools] Run autopep8 and apply fixes found. (#739) 2018-12-07 16:34:00 +03:00
Ray Glover
17298b2dc0 Python 2/3 compatibility (#361)
* [tools] python 2/3 support

* update authors/contributors
2017-03-29 03:39:18 -07:00
Eric Fiselier
a8aa40c596 Fix obvious typo in string formatting 2016-11-19 05:17:52 -07:00
Eric Fiselier
2373382284 Rewrite compare_bench.py argument parsing.
This patch cleans up a number of issues with how compare_bench.py handled
the command line arguments.

* Use the 'argparse' python module instead of hand-rolled parsing. This gives
  better usage messages (see the sketch after this list).

* Add diagnostics for certain --benchmark flags that cannot or should not
  be used with compare_bench.py (e.g. --benchmark_out_format=csv).

* Don't override the user specified --benchmark_out flag if it's provided.
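A hedged sketch of the argparse shape described above (the option handling is illustrative, not the script's exact interface):

    import argparse

    parser = argparse.ArgumentParser(
        description="Compare the results of two benchmarks."
    )
    parser.add_argument("test_old", help="old benchmark executable or JSON file")
    parser.add_argument("test_new", help="new benchmark executable or JSON file")
    parser.add_argument(
        "benchmark_options",
        nargs=argparse.REMAINDER,
        help="flags forwarded to the benchmark executables",
    )
    args = parser.parse_args()

    # Diagnose flags that cannot work here: the comparison needs JSON output.
    for flag in args.benchmark_options:
        if flag.startswith("--benchmark_out_format=") and flag != "--benchmark_out_format=json":
            parser.error("only --benchmark_out_format=json is supported")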

In future I would like the user to be able to capture both benchmark output
files, but this change is big enough for now.

This fixes issue #313.
2016-11-18 15:42:02 -07:00
Eric
5eac66249c Add a "compare_bench.py" tooling script. (#266)
This patch adds the compare_bench.py utility, which can be used to compare the results of benchmarks.
The program is invoked like:

$ compare_bench.py <old-benchmark> <new-benchmark> [benchmark options]...
where <old-benchmark> and <new-benchmark> each specify either a benchmark executable or a JSON output file. The type of the input file is detected automatically. If a benchmark executable is specified, the benchmark is run to obtain the results; otherwise the results are simply loaded from the output file.
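A hedged sketch of that automatic detection (the helper is hypothetical; the real check lives in the gbench utility module):

    import json
    import os


    def classify_input_file(path):
        """Return 'executable' or 'json' depending on what `path` holds."""
        if os.access(path, os.X_OK):
            return "executable"
        with open(path) as f:
            json.load(f)  # raises ValueError if this is not a JSON file
        return "json"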
2016-08-09 12:33:57 -06:00