c7192c8a9af0a1cb4d013c589af92d6dceedef60 |
|
18-Aug-2017 |
Roman Lebedev <lebedev.ri@gmail.com> |
compare_bench.py: fixup benchmark_options. (#435)

https://github.com/google/benchmark/commit/2373382284918fda13f726aefd6e2f700784797f reworked parsing and introduced a regression in the handling of the optional options that should be passed to both of the benchmarks. Unless the *first* optional argument started with '-', it would just complain about that argument:

  Unrecognized positional argument arguments: '['q']'

which is wrong. However, if some dummy argument like '-q' was passed first, it would then happily pass all of them through...

This commit fixes the benchmark_options behavior by restoring the original passthrough behavior for all the optional positional arguments.
/external/google-benchmark/tools/compare_bench.py
|
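The passthrough behavior described above can be sketched with argparse's `nargs=argparse.REMAINDER`, which consumes every remaining argument verbatim, whether or not it starts with '-'. This is a minimal illustration of the technique, not the actual compare_bench.py code; the argument names mirror the tool's usage but are assumptions here.

```python
import argparse

# Sketch: two required positionals for the benchmarks, then a REMAINDER
# positional that swallows all trailing arguments, flag-like or not, so
# they can be forwarded to both benchmark invocations.
parser = argparse.ArgumentParser()
parser.add_argument('test_old', help='old benchmark executable or JSON file')
parser.add_argument('test_new', help='new benchmark executable or JSON file')
parser.add_argument('benchmark_options', nargs=argparse.REMAINDER,
                    help='options passed through to both benchmarks')

# A first passthrough argument without a leading '-' is now accepted
# instead of triggering "Unrecognized positional argument".
args = parser.parse_args(['old_bench', 'new_bench', 'q', '-v', '--foo=1'])
```

With a plain `nargs='*'` positional, argparse would still try to interpret `-v` and `--foo=1` as its own options; REMAINDER is what makes the trailing flags opaque.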
17298b2dc0e6dc9f78b149ab9256064d0ac96520 |
|
29-Mar-2017 |
Ray Glover <ray.glover@uk.ibm.com> |
Python 2/3 compatibility (#361)

* [tools] python 2/3 support
* update authors/contributors
/external/google-benchmark/tools/compare_bench.py
|
2373382284918fda13f726aefd6e2f700784797f |
|
18-Nov-2016 |
Eric Fiselier <eric@efcs.ca> |
Rewrite compare_bench.py argument parsing.

This patch cleans up a number of issues with how compare_bench.py handled the command line arguments.

* Use the 'argparse' python module instead of hand-rolled parsing. This gives better usage messages.
* Add diagnostics for certain --benchmark flags that cannot or should not be used with compare_bench.py (e.g. --benchmark_out_format=csv).
* Don't override the user-specified --benchmark_out flag if it's provided. In the future I would like the user to be able to capture both benchmark output files, but this change is big enough for now.

This fixes issue #313.
/external/google-benchmark/tools/compare_bench.py
|
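The flag diagnostics described in the commit above could look roughly like the following. This is a hypothetical sketch (the helper name `check_flags` and its signature are assumptions, not the tool's actual code): it rejects an output format the comparison cannot consume and respects a user-supplied --benchmark_out instead of overriding it.

```python
# Hypothetical helper: validate passthrough --benchmark flags before
# running the benchmarks.
def check_flags(flags, default_out):
    """Return the output file to use, honoring a user --benchmark_out.

    Raises ValueError for flags the comparison cannot work with,
    e.g. a non-JSON --benchmark_out_format.
    """
    out = default_out
    for flag in flags:
        if flag.startswith('--benchmark_out_format='):
            fmt = flag.split('=', 1)[1]
            if fmt != 'json':
                # The comparison parses JSON results, so e.g. csv output
                # cannot be used here.
                raise ValueError('unsupported output format: ' + fmt)
        elif flag.startswith('--benchmark_out='):
            # Keep the user's choice rather than overriding it.
            out = flag.split('=', 1)[1]
    return out
```

For example, `check_flags(['--benchmark_out=my.json'], 'tmp.json')` yields `'my.json'`, while `--benchmark_out_format=csv` is rejected up front with a clear error instead of failing later.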
5eac66249ce28f6baae80a2565d8d53e1a3f3945 |
|
09-Aug-2016 |
Eric <eric@efcs.ca> |
Add a "compare_bench.py" tooling script. (#266)

This patch adds the compare_bench.py utility, which can be used to compare the results of benchmarks. The program is invoked like:

  $ compare_bench.py <old-benchmark> <new-benchmark> [benchmark options]...

Where <old-benchmark> and <new-benchmark> each specify either a benchmark executable file or a JSON output file. The type of the input file is detected automatically. If a benchmark executable is specified, the benchmark is run to obtain the results; otherwise the results are simply loaded from the output file.
/external/google-benchmark/tools/compare_bench.py
|