# Copyright (c) 2015-2016 The Bitcoin Core developers
# Copyright (c) 2014-2018 The Dash Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
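
# Benchmark runner binary and the build rules that feed it.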
bin_PROGRAMS += bench/bench_dash
BENCH_SRCDIR = bench
BENCH_BINARY = bench/bench_dash$(EXEEXT)

RAW_BENCH_FILES = \
  bench/data/block813851.raw
GENERATED_BENCH_FILES = $(RAW_BENCH_FILES:.raw=.raw.h)
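# Every .raw data file gets a matching generated .raw.h header; see the
# bench/data/%.raw.h pattern rule at the bottom of this file.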

bench_bench_dash_SOURCES = \
  $(RAW_BENCH_FILES) \
  bench/addrman.cpp \
  bench/bench_bitcoin.cpp \
  bench/bench.cpp \
  bench/bench.h \
  bench/bip324_ecdh.cpp \
  bench/block_assemble.cpp \
  bench/bls.cpp \
  bench/bls_dkg.cpp \
  bench/checkblock.cpp \
  bench/checkqueue.cpp \
  bench/data.h \
  bench/data.cpp \
  bench/duplicate_inputs.cpp \
  bench/ecdsa.cpp \
  bench/ellswift.cpp \
  bench/examples.cpp \
  bench/rollingbloom.cpp \
  bench/chacha20.cpp \
  bench/crypto_hash.cpp \
  bench/ccoins_caching.cpp \
  bench/gcs_filter.cpp \
  bench/hashpadding.cpp \
  bench/load_external.cpp \
  bench/merkle_root.cpp \
  bench/mempool_eviction.cpp \
  bench/mempool_stress.cpp \
  bench/nanobench.h \
  bench/nanobench.cpp \
  bench/peer_eviction.cpp \
  bench/pool.cpp \
  bench/rpc_blockchain.cpp \
  bench/rpc_mempool.cpp \
  bench/strencodings.cpp \
  bench/util_time.cpp \
  bench/base58.cpp \
  bench/bech32.cpp \
  bench/lockedpool.cpp \
  bench/poly1305.cpp \
  bench/prevector.cpp \
  bench/string_cast.cpp \
  bench/verify_script.cpp

nodist_bench_bench_dash_SOURCES = $(GENERATED_BENCH_FILES)
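
# -I$(builddir)/bench/ keeps the generated benchmark headers visible when
# building out of tree.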
bench_bench_dash_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES) $(EVENT_CFLAGS) $(EVENT_PTHREADS_CFLAGS) -I$(builddir)/bench/
bench_bench_dash_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
bench_bench_dash_LDADD = \
  $(LIBBITCOIN_SERVER) \
  $(LIBBITCOIN_WALLET) \
  $(LIBBITCOIN_COMMON) \
  $(LIBBITCOIN_UTIL) \
  $(LIBBITCOIN_CONSENSUS) \
  $(LIBBITCOIN_CRYPTO) \
  $(LIBDASHBLS) \
  $(LIBTEST_UTIL) \
  $(LIBLEVELDB) \
  $(LIBLEVELDB_SSE42) \
  $(LIBMEMENV) \
  $(LIBSECP256K1) \
  $(LIBUNIVALUE) \
  $(EVENT_PTHREADS_LIBS) \
  $(EVENT_LIBS)

if ENABLE_ZMQ
bench_bench_dash_LDADD += $(LIBBITCOIN_ZMQ) $(ZMQ_LIBS)
endif

if ENABLE_WALLET
bench_bench_dash_SOURCES += bench/coin_selection.cpp
bench_bench_dash_SOURCES += bench/wallet_balance.cpp
endif

bench_bench_dash_LDADD += $(BACKTRACE_LIB) $(BDB_LIBS) $(EVENT_PTHREADS_LIBS) $(EVENT_LIBS) $(MINIUPNPC_LIBS) $(NATPMP_LIBS) $(SQLITE_LIBS) $(GMP_LIBS)
bench_bench_dash_LDFLAGS = $(LDFLAGS_WRAP_EXCEPTIONS) $(RELDFLAGS) $(AM_LDFLAGS) $(LIBTOOL_APP_LDFLAGS) $(PTHREAD_FLAGS)

CLEAN_BITCOIN_BENCH = bench/*.gcda bench/*.gcno $(GENERATED_BENCH_FILES)

CLEANFILES += $(CLEAN_BITCOIN_BENCH)
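
# bench/data.cpp includes the generated block header; this explicit
# dependency makes sure the header is produced first.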
bench/data.cpp: bench/data/block813851.raw.h

bitcoin_bench: $(BENCH_BINARY)
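
# Build and run the whole benchmark suite (from src/: make bench).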
bench: $(BENCH_BINARY) FORCE
	$(BENCH_BINARY)

bitcoin_bench_clean : FORCE
	rm -f $(CLEAN_BITCOIN_BENCH) $(bench_bench_dash_OBJECTS) $(BENCH_BINARY)
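
# Turn each raw block file into a C++ header embedding the file's bytes as
# an array in namespace raw_bench. $(HEXDUMP) emits stray "0x ," entries
# when the input length is not a multiple of 8 bytes; the $(SED) call
# strips them. The result looks roughly like this (illustrative sketch;
# the real byte values come from the .raw file):
#
#   namespace raw_bench{
#   static unsigned const char block813851_raw[] = {
#   0x.., 0x.., 0x.., 0x.., 0x.., 0x.., 0x.., 0x..,
#   ...
#   };};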
bench/data/%.raw.h: bench/data/%.raw
	@$(MKDIR_P) $(@D)
	@{ \
	 echo "namespace raw_bench{" && \
	 echo "static unsigned const char $(*F)_raw[] = {" && \
	 $(HEXDUMP) -v -e '8/1 "0x%02x, "' -e '"\n"' $< | $(SED) -e 's/0x ,//g' && \
	 echo "};};"; \
	} > "$@.new" && mv -f "$@.new" "$@"
	@echo "Generated $@"