This directory contains integration tests that test dashd and its utilities in their entirety. It does not contain unit tests, which can be found in /src/test, /src/wallet/test, etc.

This directory contains the following sets of tests:

  • functional which test the functionality of dashd and dash-qt by interacting with them through the RPC and P2P interfaces.
  • util which tests the dash utilities, currently only dash-tx.
  • lint which perform various static analysis checks.

The util tests are run as part of the make check target. The functional tests and lint scripts can be run as explained in the sections below.

Running tests locally

Before tests can be run locally, Dash Core must be built. See the building instructions for help.

Functional tests

Dependencies and prerequisites

Many Dash-specific tests require dash_hash. To install it:

  • Clone the repo git clone https://github.com/dashpay/dash_hash
  • Install dash_hash cd dash_hash && python3 setup.py install

The ZMQ functional test requires a Python ZMQ library. To install it:

  • on Unix, run sudo apt-get install python3-zmq
  • on macOS, run pip3 install pyzmq

On Windows the PYTHONUTF8 environment variable must be set to 1:

set PYTHONUTF8=1

Running the tests

Individual tests can be run by directly calling the test script, e.g.:

test/functional/wallet_hd.py

or can be run through the test_runner harness, e.g.:

test/functional/test_runner.py wallet_hd.py

You can run any combination (incl. duplicates) of tests by calling:

test/functional/test_runner.py <testname1> <testname2> <testname3> ...

Wildcard test names can be passed, if the paths are coherent and the test runner is called from a bash shell or similar that does the globbing. For example, to run all the wallet tests:

test/functional/test_runner.py test/functional/wallet*
functional/test_runner.py functional/wallet* (called from the test/ directory)
test_runner.py wallet* (called from the test/functional/ directory)

but not

test/functional/test_runner.py wallet*

Combinations of wildcards can be passed:

test/functional/test_runner.py ./test/functional/tool* test/functional/mempool*
test_runner.py tool* mempool*

Run the regression test suite with:

test/functional/test_runner.py

Run all possible tests with:

test/functional/test_runner.py --extended

By default, up to 4 tests will be run in parallel by test_runner. To specify how many jobs to run, append --jobs=n
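
For example, to run the suite with eight parallel jobs (the job count here is only an illustration; pick one suited to your machine):

test/functional/test_runner.py --jobs=8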

The individual tests and the test_runner harness have many command-line options. Run test/functional/test_runner.py -h to see them all.

Troubleshooting and debugging test failures

Resource contention

The P2P and RPC ports used by the dashd nodes-under-test are chosen to make conflicts with other processes unlikely. However, if there is another dashd process running on the system (perhaps from a previous test which hasn't successfully killed all its dashd nodes), then there may be a port conflict which will cause the test to fail. It is recommended that you run the tests on a system where no other dashd processes are running.

On Linux, the test framework will warn if there is another dashd process running when the tests are started.

If there are zombie dashd processes after test failure, you can kill them by running the following commands. Note that these commands will kill all dashd processes running on the system, so should not be used if any non-test dashd processes are being run.

killall dashd

or

pkill -9 dashd

Data directory cache

A pre-mined blockchain with 200 blocks is generated the first time a functional test is run and is stored in test/cache. This speeds up test startup times since new blockchains don't need to be generated for each test. However, the cache may get into a bad state, in which case tests will fail. If this happens, remove the cache directory (and make sure dashd processes are stopped as above):

rm -rf test/cache
killall dashd

Test logging

The tests contain logging at five different levels (DEBUG, INFO, WARNING, ERROR and CRITICAL). From within your functional tests you can log to these different levels using the logger included in the test_framework, e.g. self.log.debug(object); a short example follows the list below. By default:

  • when run through the test_runner harness, all logs are written to test_framework.log and no logs are output to the console.
  • when run directly, all logs are written to test_framework.log and INFO level and above are output to the console.
  • when run on Travis, no logs are output to the console. However, if a test fails, the test_framework.log and dashd debug.logs will all be dumped to the console to help troubleshooting.
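
For example, a test's run_test method might log like this (the messages and the getblockcount call are purely illustrative):

def run_test(self):
    self.log.info("Checking the block count on node 0")  # INFO is shown on the console when the test is run directly
    self.log.debug("node0 block count: %d" % self.nodes[0].getblockcount())  # DEBUG goes to test_framework.log only, by default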

These log files can be located under the test data directory (which is always printed in the first line of test output):

  • <test data directory>/test_framework.log
  • <test data directory>/node<node number>/regtest/debug.log.

The node number identifies the relevant test node, starting from node0; it corresponds to the node's position in the test's nodes list, e.g. self.nodes[0].

To change the level of logs output to the console, use the -l command line argument.
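
For example, to also print DEBUG-level logs to the console for a single test run:

test/functional/wallet_hd.py -l DEBUG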

test_framework.log and dashd debug.logs can be combined into a single aggregate log by running the combine_logs.py script. The output can be plain text, colorized text or HTML. For example:

test/functional/combine_logs.py -c <test data directory> | less -r

will pipe the colorized logs from the test into less.

Use --tracerpc to trace out all the RPC calls and responses to the console. For some tests (e.g. any that use submitblock to submit a full block over RPC), this can result in a lot of screen output.

By default, the test data directory will be deleted after a successful run. Use --nocleanup to leave the test data directory intact. The test data directory is never deleted after a failed test.
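
For example, to run a single test with full RPC tracing and keep its data directory afterwards for inspection (combining the two flags described above):

test/functional/wallet_hd.py --tracerpc --nocleanup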

Attaching a debugger

A python debugger can be attached to tests at any point. Just add the line:

import pdb; pdb.set_trace()

anywhere in the test. You will then be able to inspect variables, as well as call methods that interact with the dashd nodes-under-test.

If further introspection of the dashd instances themselves becomes necessary, this can be accomplished by first setting a pdb breakpoint at an appropriate location, running the test to that point, then using gdb (or lldb on macOS) to attach to the process and debug.

For instance, to attach to self.nodes[1] during a run, you can get the pid of the node from within pdb.

(pdb) self.nodes[1].process.pid

Alternatively, you can find the pid by inspecting the temp folder for the specific test you are running. The path to that folder is printed at the beginning of every test run:

2017-06-27 14:13:56.686000 TestFramework (INFO): Initializing test directory /tmp/user/1000/testo9vsdjo3

Use the path to find the pid file in the temp folder:

cat /tmp/user/1000/testo9vsdjo3/node1/regtest/dashd.pid

Then you can use the pid to start gdb:

gdb /home/example/dashd <pid>

Note: the gdb attach step may require ptrace_scope to be modified, or sudo preceding the gdb command. See this link for considerations: https://www.kernel.org/doc/Documentation/security/Yama.txt

Often while debugging RPC calls from functional tests, the test might reach its timeout before the process can return a response. Use --timeout-factor 0 to disable all RPC timeouts for that particular functional test, e.g. test/functional/wallet_hd.py --timeout-factor 0.

Profiling

An easy way to profile node performance during functional tests is provided for Linux platforms using perf.

Perf will sample the running node and will generate profile data in the node's datadir. The profile data can then be presented using perf report or a graphical tool like hotspot.

To generate a profile during test suite runs, use the --perf flag.
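
For example, to profile the nodes started by a single test (this assumes the individual test scripts accept the --perf flag, as they do in the upstream Bitcoin Core framework):

test/functional/wallet_hd.py --perf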

To render the output as text, run:

perf report -i /path/to/datadir/send-big-msgs.perf.data.xxxx --stdio | c++filt | less

For ways to generate more granular profiles, see the README in test/functional.

Util tests

Util tests can be run locally by running test/util/bitcoin-util-test.py. Use the -v option for verbose output.

Lint tests

Dependencies

Lint test          Dependency   Version used by CI   Installation
lint-python.sh     flake8       3.8.3                pip3 install flake8==3.8.3
lint-shell.sh      ShellCheck   0.7.1                details...
lint-spelling.sh   codespell    1.17.1               pip3 install codespell==1.17.1

Please be aware that on Linux distributions all dependencies are usually available as packages, but could be outdated.
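
For example, on Debian or Ubuntu the packaged versions can be installed with (package names assumed; the installed versions may differ from those pinned by CI):

sudo apt-get install shellcheck codespell python3-flake8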

Running the tests

Individual tests can be run by directly calling the test script, e.g.:

test/lint/lint-filenames.sh

You can run all the shell-based lint tests by running:

test/lint/lint-all.sh

Writing functional tests

You are encouraged to write functional tests for new or existing features. Further information about the functional test framework and individual tests is found in test/functional.
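
To get started, here is a minimal sketch of what such a test can look like. The class name is made up and the framework calls assume the upstream Bitcoin Core test framework that Dash Core inherits; check test/functional/example_test.py and the existing tests for the exact conventions used in this repository.

#!/usr/bin/env python3
"""An illustrative, minimal functional test skeleton."""
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import assert_equal

class ExampleSketchTest(BitcoinTestFramework):
    def set_test_params(self):
        # Two nodes, connected to each other by the framework's default setup.
        self.num_nodes = 2

    def run_test(self):
        self.log.info("Mine a block on node 0 and check that node 1 sees it")
        self.nodes[0].generate(1)
        self.sync_all()
        assert_equal(self.nodes[0].getblockcount(), self.nodes[1].getblockcount())

if __name__ == '__main__':
    ExampleSketchTest().main()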