This directory contains integration tests that test dashd and its
utilities in their entirety. It does not contain unit tests, which
can be found in [/src/test](/src/test), [/src/wallet/test](/src/wallet/test),
etc.

This directory contains the following sets of tests:

- [functional](/test/functional) which test the functionality of
dashd and dash-qt by interacting with them through the RPC and P2P
interfaces.
- [util](/test/util) which tests the dash utilities, currently only
dash-tx.
- [lint](/test/lint/) which perform various static analysis checks.

The util tests are run as part of the `make check` target. The functional
tests and lint scripts can be run as explained in the sections below.

# Running tests locally

Before tests can be run locally, Dash Core must be built. See the [building instructions](/doc#building) for help.

### Functional tests

#### Dependencies and prerequisites

Many Dash-specific tests require dash_hash. To install it:

- Clone the repo: `git clone https://github.com/dashpay/dash_hash`
- Install dash_hash: `cd dash_hash && python3 setup.py install`

The ZMQ functional test requires a Python ZMQ library. To install it:

- on Unix, run `sudo apt-get install python3-zmq`
- on macOS, run `pip3 install pyzmq`

On Windows, the `PYTHONUTF8` environment variable must be set to 1:

```cmd
set PYTHONUTF8=1
```

#### Running the tests

Individual tests can be run by directly calling the test script, e.g.:

```
test/functional/wallet_hd.py
```

or can be run through the test_runner harness, e.g.:

```
test/functional/test_runner.py wallet_hd.py
```

You can run any combination (including duplicates) of tests by calling:

```
test/functional/test_runner.py <testname1> <testname2> <testname3> ...
```

Wildcard test names can be passed, if the paths are coherent and the test runner
is called from a `bash` shell or similar that does the globbing. For example,
to run all the wallet tests:

```
test/functional/test_runner.py test/functional/wallet*
functional/test_runner.py functional/wallet* (called from the test/ directory)
test_runner.py wallet* (called from the test/functional/ directory)
```

but not

```
test/functional/test_runner.py wallet*
```

Combinations of wildcards can be passed:

```
test/functional/test_runner.py ./test/functional/tool* test/functional/mempool*
test_runner.py tool* mempool*
```

Run the regression test suite with:

```
test/functional/test_runner.py
```

Run all possible tests with:

```
test/functional/test_runner.py --extended
```

By default, up to 4 tests will be run in parallel by test_runner. To specify
how many jobs to run, append `--jobs=n`.
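
For example, to run up to eight tests in parallel:

```
test/functional/test_runner.py --jobs=8
```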

The individual tests and the test_runner harness have many command-line
options. Run `test/functional/test_runner.py -h` to see them all.

#### Troubleshooting and debugging test failures

##### Resource contention

The P2P and RPC ports used by the dashd nodes-under-test are chosen to make
conflicts with other processes unlikely. However, if there is another dashd
process running on the system (perhaps from a previous test which hasn't successfully
killed all its dashd nodes), then there may be a port conflict which will
cause the test to fail. It is recommended that you run the tests on a system
where no other dashd processes are running.

On Linux, the test framework will warn if there is another
dashd process running when the tests are started.

If there are zombie dashd processes after test failure, you can kill them
by running the following commands. **Note that these commands will kill all
dashd processes running on the system, so should not be used if any non-test
dashd processes are being run.**

```bash
killall dashd
```

or

```bash
pkill -9 dashd
```

##### Data directory cache

A pre-mined blockchain with 200 blocks is generated the first time a
functional test is run and is stored in test/cache. This speeds up
test startup times since new blockchains don't need to be generated for
each test. However, the cache may get into a bad state, in which case
tests will fail. If this happens, remove the cache directory (and make
sure dashd processes are stopped as above):

```bash
rm -rf test/cache
killall dashd
```

##### Test logging

The tests contain logging at five different levels (DEBUG, INFO, WARNING, ERROR
and CRITICAL). From within your functional tests you can log to these different
levels using the logger included in the test_framework, e.g.
`self.log.debug(object)` (see the sketch after this list). By default:

- when run through the test_runner harness, *all* logs are written to
`test_framework.log` and no logs are output to the console.
- when run directly, *all* logs are written to `test_framework.log` and INFO
level and above are output to the console.
- when run on Travis, no logs are output to the console. However, if a test
fails, the `test_framework.log` and dashd `debug.log`s will all be dumped
to the console to help troubleshooting.
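
A minimal sketch of how these levels are used from within a test (this exact test is hypothetical; the base class and `self.log` usage follow the test_framework conventions):

```py
#!/usr/bin/env python3
"""Hypothetical minimal test showing the different logging levels."""
from test_framework.test_framework import BitcoinTestFramework

class LoggingExampleTest(BitcoinTestFramework):
    def set_test_params(self):
        self.num_nodes = 1

    def run_test(self):
        # DEBUG lines are always written to test_framework.log, but reach
        # the console only if the log level is lowered (see `-l` below).
        self.log.debug("best block is %s", self.nodes[0].getbestblockhash())
        # INFO and above appear on the console when the test is run directly.
        self.log.info("starting checks")
        self.log.warning("this branch is rarely exercised")

if __name__ == '__main__':
    LoggingExampleTest().main()
```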

These log files can be located under the test data directory (which is always
printed in the first line of test output):
- `<test data directory>/test_framework.log`
- `<test data directory>/node<node number>/regtest/debug.log`.

The node number identifies the relevant test node, starting from `node0`, which
corresponds to its position in the nodes list of the specific test,
e.g. `self.nodes[0]`.

To change the level of logs output to the console, use the `-l` command line
argument.
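
For example, to show DEBUG-level messages on the console when running a test directly (the test name is illustrative):

```
test/functional/wallet_hd.py -l DEBUG
```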

`test_framework.log` and dashd `debug.log`s can be combined into a single
aggregate log by running the `combine_logs.py` script. The output can be plain
text, colorized text, or HTML. For example:

```
test/functional/combine_logs.py -c <test data directory> | less -r
```

will pipe the colorized logs from the test into less.
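
To write an HTML rendering instead, a sketch assuming the script's `--html` option:

```
test/functional/combine_logs.py --html <test data directory> > combined_logs.html
```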

Use `--tracerpc` to trace out all the RPC calls and responses to the console. For
some tests (e.g. any that use `submitblock` to submit a full block over RPC),
this can result in a lot of screen output.
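
For example (test name illustrative):

```
test/functional/wallet_hd.py --tracerpc
```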

By default, the test data directory will be deleted after a successful run.
Use `--nocleanup` to leave the test data directory intact. The test data
directory is never deleted after a failed test.

##### Attaching a debugger

A Python debugger can be attached to tests at any point. Just add the line:

```py
import pdb; pdb.set_trace()
```

anywhere in the test. You will then be able to inspect variables, as well as
call methods that interact with the dashd nodes-under-test.

If further introspection of the dashd instances themselves becomes
necessary, this can be accomplished by first setting a pdb breakpoint
at an appropriate location, running the test to that point, then using
`gdb` (or `lldb` on macOS) to attach to the process and debug.

For instance, to attach to `self.nodes[1]` during a run you can get
the pid of the node within `pdb`.

```
(pdb) self.nodes[1].process.pid
```

Alternatively, you can find the pid by inspecting the temp folder for the specific test
you are running. The path to that folder is printed at the beginning of every
test run:

```bash
2017-06-27 14:13:56.686000 TestFramework (INFO): Initializing test directory /tmp/user/1000/testo9vsdjo3
```

Use the path to find the pid file in the temp folder:

```bash
cat /tmp/user/1000/testo9vsdjo3/node1/regtest/dashd.pid
```

Then you can use the pid to start `gdb`:

```bash
gdb /home/example/dashd <pid>
```

Note: the gdb attach step may require `ptrace_scope` to be modified, or `sudo` preceding the `gdb`.
See this link for considerations: https://www.kernel.org/doc/Documentation/security/Yama.txt
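
A minimal sketch of one way to relax this restriction until the next reboot (assuming a kernel with the Yama LSM; review the link above before changing it):

```bash
# Allow gdb to attach to non-child processes (resets on reboot).
echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
```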

##### Profiling

An easy way to profile node performance during functional tests is provided
for Linux platforms using `perf`.

Perf will sample the running node and will generate profile data in the node's
datadir. The profile data can then be presented using `perf report` or a graphical
tool like [hotspot](https://github.com/KDAB/hotspot).

To generate a profile during test suite runs, use the `--perf` flag.
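
For example, to profile a single test directly (test name illustrative):

```
test/functional/wallet_hd.py --perf
```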

To render the output to text, run

```sh
perf report -i /path/to/datadir/send-big-msgs.perf.data.xxxx --stdio | c++filt | less
```

For ways to generate more granular profiles, see the README in
[test/functional](/test/functional).

### Util tests

Util tests can be run locally by running `test/util/bitcoin-util-test.py`.
Use the `-v` option for verbose output.
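
For example:

```
test/util/bitcoin-util-test.py -v
```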

### Lint tests

#### Dependencies

| Lint test | Dependency | Version [used by CI](../ci/lint/04_install.sh) | Installation
|-----------|:----------:|:-------------------------------------------:|--------------
| [`lint-python.sh`](lint/lint-python.sh) | [flake8](https://gitlab.com/pycqa/flake8) | [3.8.3](https://github.com/bitcoin/bitcoin/pull/19348) | `pip3 install flake8==3.8.3`
| [`lint-shell.sh`](lint/lint-shell.sh) | [ShellCheck](https://github.com/koalaman/shellcheck) | [0.7.1](https://github.com/bitcoin/bitcoin/pull/19348) | [details...](https://github.com/koalaman/shellcheck#installing)
| [`lint-spelling.sh`](lint/lint-spelling.sh) | [codespell](https://github.com/codespell-project/codespell) | [1.17.1](https://github.com/bitcoin/bitcoin/pull/19348) | `pip3 install codespell==1.17.1`

Please be aware that on Linux distributions all dependencies are usually available as packages, but could be outdated.

#### Running the tests

Individual tests can be run by directly calling the test script, e.g.:

```
test/lint/lint-filenames.sh
```

You can run all the shell-based lint tests by running:

```
test/lint/lint-all.sh
```

# Writing functional tests

You are encouraged to write functional tests for new or existing features.
Further information about the functional test framework and individual
tests is found in [test/functional](/test/functional).