# Copyright (c) 2015-2016 The Bitcoin Core developers
# Copyright (c) 2014-2018 The Dash Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.

# Simple benchmarking framework, loosely based on Google's micro-benchmarking
# library (https://github.com/google/benchmark).
#
# Why not use the Google Benchmark framework itself? Because adding Even More
# Dependencies isn't worth it. If we get a dozen or three benchmarks and need
# nanosecond-accurate timings of threaded code, switching to the full-blown
# Google Benchmark library should be considered.
#
# The framework is hard-coded to run each benchmark for one wall-clock second,
# then prints .csv-format timing information to stdout. Command-line arguments
# to choose which benchmark(s) to run, how long to run them, how to format
# results, and so on are left as an exercise for later (or maybe never);
# again, see the Google Benchmark framework for where that might end up.
#
# See bench/examples.cpp for a sanity-test benchmark that just benchmarks
# 'sleep 100 milliseconds'.
#
# To compile and run the benchmarks:
#   cd src; make bench
#
# Sample output:
#   Benchmark,count,min,max,average
#   Sleep100ms,10,0.101854,0.105059,0.103881

bin_PROGRAMS += bench/bench_dash

BENCH_SRCDIR = bench
BENCH_BINARY = bench/bench_dash$(EXEEXT)
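# $(EXEEXT) is Autoconf's host executable suffix (".exe" when targeting
# Windows, empty on other platforms).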

RAW_BENCH_FILES = \
  bench/data/block813851.raw

GENERATED_BENCH_FILES = $(RAW_BENCH_FILES:.raw=.raw.h)
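# Each .raw data file gets a generated header alongside it, e.g.
# bench/data/block813851.raw -> bench/data/block813851.raw.h, built by the
# bench/data/%.raw.h pattern rule at the bottom of this file.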

bench_bench_dash_SOURCES = \
  $(RAW_BENCH_FILES) \
  bench/addrman.cpp \
  bench/bench_bitcoin.cpp \
  bench/bench.cpp \
  bench/bench.h \
  bench/block_assemble.cpp \
  bench/bls.cpp \
  bench/bls_dkg.cpp \
  bench/checkblock.cpp \
  bench/checkqueue.cpp \
  bench/data.h \
  bench/data.cpp \
  bench/duplicate_inputs.cpp \
  bench/ecdsa.cpp \
  bench/examples.cpp \
  bench/rollingbloom.cpp \
  bench/chacha20.cpp \
  bench/chacha_poly_aead.cpp \
  bench/crypto_hash.cpp \
  bench/ccoins_caching.cpp \
  bench/gcs_filter.cpp \
  bench/hashpadding.cpp \
  bench/merkle_root.cpp \
  bench/mempool_eviction.cpp \
  bench/mempool_stress.cpp \
  bench/nanobench.h \
  bench/nanobench.cpp \
  bench/rpc_blockchain.cpp \
  bench/rpc_mempool.cpp \
  bench/util_time.cpp \
  bench/base58.cpp \
  bench/bech32.cpp \
  bench/lockedpool.cpp \
  bench/poly1305.cpp \
  bench/prevector.cpp \
  bench/string_cast.cpp \
  bench/verify_script.cpp

nodist_bench_bench_dash_SOURCES = $(GENERATED_BENCH_FILES)
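# The nodist_ prefix keeps the generated headers out of "make dist" tarballs;
# they are regenerated from the shipped .raw files instead.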

bench_bench_dash_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES) $(EVENT_CFLAGS) $(EVENT_PTHREADS_CFLAGS) -I$(builddir)/bench/
bench_bench_dash_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
bench_bench_dash_LDADD = \
  $(LIBBITCOIN_SERVER) \
  $(LIBBITCOIN_WALLET) \
  $(LIBBITCOIN_COMMON) \
  $(LIBBITCOIN_UTIL) \
  $(LIBBITCOIN_CONSENSUS) \
  $(LIBBITCOIN_CRYPTO) \
  $(LIBDASHBLS) \
  $(LIBTEST_UTIL) \
  $(LIBLEVELDB) \
  $(LIBLEVELDB_SSE42) \
  $(LIBMEMENV) \
  $(LIBSECP256K1) \
  $(LIBUNIVALUE) \
  $(EVENT_PTHREADS_LIBS) \
  $(EVENT_LIBS)

if ENABLE_ZMQ
bench_bench_dash_LDADD += $(LIBBITCOIN_ZMQ) $(ZMQ_LIBS)
endif

if ENABLE_WALLET
bench_bench_dash_SOURCES += bench/coin_selection.cpp
bench_bench_dash_SOURCES += bench/wallet_balance.cpp
endif

bench_bench_dash_LDADD += $(BACKTRACE_LIB) $(BOOST_LIBS) $(BDB_LIBS) $(EVENT_PTHREADS_LIBS) $(EVENT_LIBS) $(MINIUPNPC_LIBS) $(NATPMP_LIBS) $(SQLITE_LIBS) $(GMP_LIBS)
bench_bench_dash_LDFLAGS = $(LDFLAGS_WRAP_EXCEPTIONS) $(RELDFLAGS) $(AM_LDFLAGS) $(LIBTOOL_APP_LDFLAGS) $(PTHREAD_FLAGS)

CLEAN_BITCOIN_BENCH = bench/*.gcda bench/*.gcno $(GENERATED_BENCH_FILES)

CLEANFILES += $(CLEAN_BITCOIN_BENCH)

bench/data.cpp: bench/data/block813851.raw.h
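# Explicit dependency so the header is generated before bench/data.cpp,
# which includes it, is compiled.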

bitcoin_bench: $(BENCH_BINARY)

bench: $(BENCH_BINARY) FORCE
	$(BENCH_BINARY)
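# Convenience target: "make bench" from src/ builds bench_dash if necessary
# and then runs all of the compiled-in benchmarks.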

bitcoin_bench_clean : FORCE
	rm -f $(CLEAN_BITCOIN_BENCH) $(bench_bench_dash_OBJECTS) $(BENCH_BINARY)

bench/data/%.raw.h: bench/data/%.raw
	@$(MKDIR_P) $(@D)
	@{ \
	 echo "namespace raw_bench{" && \
	 echo "static unsigned const char $(*F)_raw[] = {" && \
	 $(HEXDUMP) -v -e '8/1 "0x%02x, "' -e '"\n"' $< | $(SED) -e 's/0x  ,//g' && \
	 echo "};};"; \
	} > "$@.new" && mv -f "$@.new" "$@"
	@echo "Generated $@"
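
# For a hypothetical bench/data/example.raw, the rule above would emit a
# bench/data/example.raw.h looking roughly like:
#
#   namespace raw_bench{
#   static unsigned const char example_raw[] = {
#   0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
#   ...
#   };};
#
# hexdump formats eight input bytes per line as "0xNN, " entries; the sed
# removes the blank "0x  ," placeholders hexdump emits when the file length
# is not a multiple of eight.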