Wallet must come before crypto, otherwise linking fails on some platforms.
Rather than making the Makefile sloppier, this also includes a tangentially
related general cleanup.
leveldb's buildsystem causes us a few problems:
- breaks out-of-tree builds
- forces flags used for some tools
- limits cross builds
Rather than continuing to add wrappers around it, simply integrate it into our
build.
Split out the RPC methods to their respective modules, apart from 'help' and 'stop',
which are implemented in rpcserver.cpp itself.
- This makes it easier to add or remove RPC commands: not everything that includes
rpcserver.h has to be rebuilt whenever one of them changes.
- Cleans up `rpc/server.h` by getting rid of the huge cluttered list of function definitions.
- Removes most of the bitcoin-specific code from rpcserver.cpp and .h.
Continues #7307 for the non-wallet RPCs.
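As a rough illustration of the per-module pattern, a module could register its own
commands with the dispatch table along these lines (the CRPCCommand fields, the
appendCommand helper and the command names here are assumptions for the sketch, not
the exact code):

    // Hypothetical excerpt from a module such as rpc/net.cpp: the module owns its
    // own small dispatch table plus a registration function, so rpcserver.cpp no
    // longer needs a list of every RPC function.
    static const CRPCCommand commands[] =
    { //  category    name            actor (function)   okSafeMode
        { "network",  "getpeerinfo",  &getpeerinfo,      true },
        { "network",  "ping",         &ping,             true },
    };

    void RegisterNetRPCCommands(CRPCTable &tableRPC)
    {
        for (unsigned int i = 0; i < sizeof(commands) / sizeof(commands[0]); i++)
            tableRPC.appendCommand(commands[i].name, &commands[i]);
    }

Adding or removing a command then only touches the module that implements it.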
cf82d05 Build: Consensus: Make libbitcoinconsensus_la_SOURCES fully dynamic and dependent on both crypto and consensus packages (Jorge Timón)
4feadec Build: Libconsensus: Move libconsensus-ready files to the consensus package (Jorge Timón)
a3d5eec Build: Consensus: Move consensus files from common to its own module/package (Jorge Timón)
Add "bip125-replaceable" output field to listtransactions and gettransaction
which indicates if an unconfirmed transaction, or any unconfirmed parent, is
signaling opt-in RBF according to BIP 125.
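For illustration, an abbreviated listtransactions entry could then look roughly like
this (all values are placeholders; the new field is expected to report "yes", "no" or
"unknown"):

    {
      "category": "send",
      "amount": -0.10000000,
      "confirmations": 0,
      "txid": "<txid>",
      "bip125-replaceable": "yes"
    }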
A few extra bytes in libconsensus get all of the crypto code (except for signing, which
is in the common module) below the planned independent libconsensus repository (which has
libsecp256k1 as a subtree).
hmac_sha256.o seems to be the only part of crypto that libbitcoinconsensus doesn't depend
on; that is a few more bytes in the final libconsensus, but I'm not personally worried.
aa4b0c2 When not filtering blocks, getdata sends more in one test (Pieter Wuille)
d41e44c Actually only use filterInventoryKnown with MSG_TX inventory messages. (Gregory Maxwell)
b6a0da4 Only use filterInventoryKnown with MSG_TX inventory messages. (Patrick Strateman)
6b84935 Rename setInventoryKnown filterInventoryKnown (Patrick Strateman)
e206724 Remove mruset as it is no longer used. (Gregory Maxwell)
ec73ef3 Replace setInventoryKnown with a rolling bloom filter. (Gregory Maxwell)
58ef0ff doc: update docs for Tor listening (Wladimir J. van der Laan)
68ccdc4 doc: Mention Tor listening in release notes (Wladimir J. van der Laan)
09c1ae1 torcontrol improvements and fixes (Wladimir J. van der Laan)
2f796e5 Better error message if Tor version too old (Peter Todd)
8f4e67f net: Automatically create hidden service, listen on Tor (Wladimir J. van der Laan)
Starting with Tor version 0.2.7.1 it is possible, through Tor's control socket
API, to create and destroy 'ephemeral' hidden services programmatically.
https://stem.torproject.org/api/control.html#stem.control.Controller.create_ephemeral_hidden_service
This means that if Tor is running (and proper authorization is available),
Bitcoin automatically creates a hidden service to listen on, without any manual
configuration by the user. This should have a positive effect on the number of
available .onion nodes.
- When the node is started, connect to Tor through the control socket
- Send an `ADD_ONION` command (a sketch of the exchange follows this list)
- First time:
  - Make it create a hidden service key
  - Save the key in the data directory for later usage
- Make it redirect port 8333 to the local port 8333 (or whatever port we're listening on).
- Keep the control socket connection open for as long as the node is running. The hidden
service will (by default) automatically go away when the connection is closed.
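A rough sketch of the control socket exchange, assuming cookie authentication and the
default port 8333 (the onion address and key blob are placeholders):

    AUTHENTICATE <hex-encoded contents of Tor's auth cookie>
    250 OK
    ADD_ONION NEW:BEST Port=8333,127.0.0.1:8333
    250-ServiceID=<onion address, without the .onion suffix>
    250-PrivateKey=RSA1024:<base64 key blob, saved to the data directory>
    250 OK

On later startups the saved key can be passed to `ADD_ONION` instead of `NEW:BEST`, so
the node keeps the same .onion address.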
Until now there were quite a few leftovers, and only the coverage-related
files in `src/` were cleaned, while the ones in the other directories
remained. `qa/tmp/` is related to the BitcoinJ tests, and `cache/` is
related to the RPC tests.
Benchmarking framework, loosely based on Google's micro-benchmarking
library (https://github.com/google/benchmark).
Why not use the Google Benchmark framework? Because adding Even More Dependencies
isn't worth it. If we get a dozen or three benchmarks and need nanosecond-accurate
timings of threaded code then switching to the full-blown Google Benchmark library
should be considered.
The benchmark framework is hard-coded to run each benchmark for one wall-clock second,
and then spits out .csv-format timing information to stdout. It is left as an
exercise for later (or maybe never) to add command-line arguments to specify which
benchmark(s) to run, how long to run them for, how to format results, etc etc etc.
Again, see the Google Benchmark framework for where that might end up.
See src/bench/MilliSleep.cpp for a sanity-test benchmark that just benchmarks
'sleep 100 milliseconds.'
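As a rough sketch of what such a benchmark looks like, assuming the framework exposes a
benchmark::State with a KeepRunning() loop and a BENCHMARK() registration macro, and
reusing Bitcoin's existing MilliSleep() helper:

    #include "bench.h"
    #include "utiltime.h"

    // Sanity check: each iteration sleeps for 100 milliseconds, so the reported
    // min/max/average times should all come out close to 0.1 seconds.
    static void Sleep100ms(benchmark::State& state)
    {
        while (state.KeepRunning()) {
            MilliSleep(100);
        }
    }

    BENCHMARK(Sleep100ms);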
To compile and run benchmarks:
cd src; make bench
Sample output:
Benchmark,count,min,max,average
Sleep100ms,10,0.101854,0.105059,0.103881