Merge #6232: backport: merge bitcoin#22006, #22955, #23375, #23302, #22902, #23724, #26944, #23907, #24358, #25360, #25794, #26945, #27458, #28110 (userspace static defined tracing)

0a7a234bd3 merge bitcoin#28110: correct Fedora systemtap dep (Kittywhiskers Van Gogh)
c92eb67c40 merge bitcoin#27458: Detect USDT the same way how it is used in the code (Kittywhiskers Van Gogh)
f4a53ba8ce merge bitcoin#26945: systemtap 4.8 (Kittywhiskers Van Gogh)
88696129f3 merge bitcoin#25794: don't rely on block_connected USDT event order in tests (Kittywhiskers Van Gogh)
457bbd3f8b merge bitcoin#25360: SystemTap 4.7 (RISC-V support) (Kittywhiskers Van Gogh)
f3b219ad0d merge bitcoin#24358: USDT tracepoint interface tests (Kittywhiskers Van Gogh)
5b10a5a2fd merge bitcoin#23907: utxocache tracepoints follow up (Kittywhiskers Van Gogh)
c3d7e3a192 merge bitcoin#26944: fix systemtap download URL (Kittywhiskers Van Gogh)
264e02fcc7 merge bitcoin#23724: add systemtap's sys/sdt.h as depends for GUIX builds with USDT tracepoints (Kittywhiskers Van Gogh)
6cc596b99a merge bitcoin#22902: utxocache tracepoints (Kittywhiskers Van Gogh)
644a47ef9a merge bitcoin#23302: drop GetHash().ToString() argument from the `validation:block_connected` tracepoint (Kittywhiskers Van Gogh)
bfdc9ad364 merge bitcoin#23375: more deterministic coin selection for coinbase UTXOs (oldest first) (Kittywhiskers Van Gogh)
5718716cd2 merge bitcoin#22955: Rename fBlocksOnly, Add test (Kittywhiskers Van Gogh)
cacc31213b merge bitcoin#22006: first tracepoints and documentation on User-Space, Statically Defined Tracing (USDT) (Kittywhiskers Van Gogh)

Pull request description:

  ## Additional Information

  * In [bitcoin#22955](https://github.com/bitcoin/bitcoin/pull/22955), `fBlocksOnly` has not been renamed to `reject_tx_invs` because Dash uses the inventory system for more than relaying transaction data, and the block-relay-only blocklist covers a greater set of prohibited messages. This renders the new name misleading, but coining a new one could itself be a source of confusion, so retaining the legacy name is desirable.

    * Additionally, because the word "transaction" isn't hardcoded into the log message (opting instead to use `CInv::GetCommand()`'s output, which for transactions is "tx"), the expected log message in `p2p_blocksonly.py` has been adjusted accordingly.

  * [bitcoin#24358](https://github.com/bitcoin/bitcoin/pull/24358) depends on [bitcoin#22955](https://github.com/bitcoin/bitcoin/pull/22955) and [bitcoin#23375](https://github.com/bitcoin/bitcoin/pull/23375) in order to work, as without the latter backport, `interface_usdt_utxocache.py` will fail with the following error

    <details>

    <summary>Test run:</summary>

    ```
    debian@debian:~/dash$ sudo ./test/functional/interface_usdt_utxocache.py
    [sudo] password for debian:
    2024-08-26T17:08:05.234000Z TestFramework (INFO): Initializing test directory /tmp/dash_func_test_n5rb0xy4
    2024-08-26T17:08:07.023000Z TestFramework (INFO): testing the utxocache:uncache tracepoint API
    2024-08-26T17:08:07.026000Z TestFramework (ERROR): Assertion failed
    Traceback (most recent call last):
      File "/home/debian/dash/test/functional/test_framework/test_framework.py", line 160, in main
        self.run_test()
      File "/home/debian/dash/./test/functional/interface_usdt_utxocache.py", line 149, in run_test
        self.test_uncache()
      File "/home/debian/dash/./test/functional/interface_usdt_utxocache.py", line 172, in test_uncache
        invalid_tx = self.wallet.create_self_transfer(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/debian/dash/test/functional/test_framework/wallet.py", line 166, in create_self_transfer
        assert_equal(mempool_valid, tx_info['allowed'])
      File "/home/debian/dash/test/functional/test_framework/util.py", line 51, in assert_equal
        raise AssertionError("not(%s)" % " == ".join(str(arg) for arg in (thing1, thing2) + args))
    AssertionError: not(True == False)
    2024-08-26T17:08:07.533000Z TestFramework (INFO): Stopping nodes
    2024-08-26T17:08:08.535000Z TestFramework (WARNING): Not cleaning up dir /tmp/dash_func_test_n5rb0xy4
    2024-08-26T17:08:08.535000Z TestFramework (ERROR): Test failed. Test logging available at /tmp/dash_func_test_n5rb0xy4/test_framework.log
    2024-08-26T17:08:08.535000Z TestFramework (ERROR):
    2024-08-26T17:08:08.535000Z TestFramework (ERROR): Hint: Call /home/debian/dash/test/functional/combine_logs.py '/tmp/dash_func_test_n5rb0xy4' to consolidate all logs
    2024-08-26T17:08:08.536000Z TestFramework (ERROR):
    2024-08-26T17:08:08.536000Z TestFramework (ERROR): If this failure happened unexpectedly or intermittently, please file a bug and provide a link or upload of the combined log.
    2024-08-26T17:08:08.536000Z TestFramework (ERROR): https://github.com/dashpay/dash/issues
    2024-08-26T17:08:08.536000Z TestFramework (ERROR):
    ```

    </details>

    with the underlying error being the one alluded to in d2c4904

    ```
    {'txid': '44b58b10e69321dacf00724f1893c9ecb50fc1f89ed7c70a6c0b3c08f7dc750f', 'allowed': False, 'reject-reason': 'bad-txns-premature-spend-of-coinbase'}
    ```
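
    The failure surfaces through the test framework's `assert_equal` helper (the `util.py` frame in the traceback); a reduced sketch of its behavior, reproducing the `not(True == False)` message seen above:

    ```python
    def assert_equal(thing1, thing2, *args):
        # Reduced sketch of test_framework/util.py's helper: raise with the
        # joined values whenever any of the arguments differ.
        if thing1 != thing2 or any(thing1 != arg for arg in args):
            raise AssertionError("not(%s)" % " == ".join(str(arg) for arg in (thing1, thing2) + args))

    try:
        assert_equal(True, False)
    except AssertionError as e:
        print(e)  # not(True == False)
    ```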

  * In [bitcoin#24358](https://github.com/bitcoin/bitcoin/pull/24358), all `interface_usdt_*.py` tests needed minor syntax changes to account for how we invoke certain RPCs. `interface_usdt_utxocache.py` also required tweaking the number of blocks needed to be eligible for pruning (`450` vs. `350` upstream) and stopping the node explicitly to account for the governance validation warning.

    This is important as the placement assumes that `test_flush()` is the last test; should this change, the node may need to be manually restarted before continuing with the tests that follow.

  * Some `TRACE*` entries may not exactly match the backports they come from because the variables or functions they access may have been amended by a backport done _after_ the one that introduced the trace entry.

  ## Breaking Changes

  None observed.

  ## Checklist:

  - [x] I have performed a self-review of my own code
  - [x] I have commented my code, particularly in hard-to-understand areas **(note: N/A)**
  - [x] I have added or updated relevant unit/integration/functional/e2e tests
  - [x] I have made corresponding changes to the documentation
  - [x] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_

ACKs for top commit:
  UdjinM6:
    utACK 0a7a234bd3
  PastaPastaPasta:
    utACK 0a7a234bd3

Tree-SHA512: 8df11392dd8d152c18d55ac70a446d1eec336bdf1a984cbf41c3202c353358180e05ba4b7182ec2962ea09eefa41d1dc3cd383d358f9b3dec57ce8b67c6e6afd
commit 93e62f3e78
pasta 2024-09-05 11:53:00 -05:00
GPG Key ID: 52527BEDABE87984
29 changed files with 2339 additions and 30 deletions

configure.ac (modified)

```diff
@@ -138,11 +138,11 @@ AC_ARG_WITH([bdb],
   [use_bdb=$withval],
   [use_bdb=auto])
 
-AC_ARG_ENABLE([ebpf],
-  [AS_HELP_STRING([--enable-ebpf],
-  [enable eBPF tracing (default is yes if sys/sdt.h is found)])],
-  [use_ebpf=$enableval],
-  [use_ebpf=yes])
+AC_ARG_ENABLE([usdt],
+  [AS_HELP_STRING([--enable-usdt],
+  [enable tracepoints for Userspace, Statically Defined Tracing (default is yes if sys/sdt.h is found)])],
+  [use_usdt=$enableval],
+  [use_usdt=yes])
 
 AC_ARG_WITH([miniupnpc],
   [AS_HELP_STRING([--with-miniupnpc],
@@ -1408,17 +1408,20 @@ if test x$enable_wallet != xno; then
   fi
 fi
 
-if test x$use_ebpf != xno; then
-  AC_MSG_CHECKING([whether eBPF tracepoints are supported])
+if test x$use_usdt != xno; then
+  AC_MSG_CHECKING([whether Userspace, Statically Defined Tracing tracepoints are supported])
   AC_COMPILE_IFELSE([
     AC_LANG_PROGRAM(
       [#include <sys/sdt.h>],
-      [DTRACE_PROBE("context", "event");]
+      [DTRACE_PROBE(context, event);
+       int a, b, c, d, e, f, g;
+       DTRACE_PROBE7(context, event, a, b, c, d, e, f, g);]
     )],
-    [AC_MSG_RESULT(yes); have_sdt=yes; AC_DEFINE([ENABLE_TRACING], [1], [Define to 1 to enable eBPF user static defined tracepoints])],
-    [AC_MSG_RESULT(no); have_sdt=no;]
+    [AC_MSG_RESULT(yes); AC_DEFINE([ENABLE_TRACING], [1], [Define to 1 to enable tracepoints for Userspace, Statically Defined Tracing])],
+    [AC_MSG_RESULT(no); use_usdt=no;]
   )
 fi
+AM_CONDITIONAL([ENABLE_USDT_TRACEPOINTS], [test "$use_usdt" = "yes"])
 
 dnl Check for libminiupnpc (optional)
 if test x$use_upnp != xno; then
@@ -1795,7 +1798,6 @@ AM_CONDITIONAL([TARGET_WINDOWS], [test x$TARGET_OS = xwindows])
 AM_CONDITIONAL([ENABLE_WALLET],[test x$enable_wallet = xyes])
 AM_CONDITIONAL([USE_SQLITE], [test "x$use_sqlite" = "xyes"])
 AM_CONDITIONAL([USE_BDB], [test "x$use_bdb" = "xyes"])
-AM_CONDITIONAL([ENABLE_TRACING],[test x$have_sdt = xyes])
 AM_CONDITIONAL([ENABLE_TESTS],[test x$BUILD_TEST = xyes])
 AM_CONDITIONAL([ENABLE_FUZZ],[test x$enable_fuzz = xyes])
 AM_CONDITIONAL([ENABLE_FUZZ_BINARY],[test x$enable_fuzz_binary = xyes])
@@ -1943,7 +1945,7 @@ echo "  with bench    = $use_bench"
 echo "  with upnp     = $use_upnp"
 echo "  with natpmp   = $use_natpmp"
 echo "  use asm       = $use_asm"
-echo "  ebpf tracing  = $have_sdt"
+echo "  USDT tracing  = $use_usdt"
 echo "  sanitizers    = $use_sanitizers"
 echo "  debug enabled = $enable_debug"
 echo "  stacktraces enabled = $enable_stacktraces"
```

contrib/tracing/README.md (new file, +288 lines)
Example scripts for User-space, Statically Defined Tracing (USDT)
=================================================================
This directory contains scripts showcasing User-space, Statically Defined
Tracing (USDT) support for Dash Core on Linux. For more information on
USDT support in Dash Core see the [USDT documentation].
[USDT documentation]: ../../doc/tracing.md
Examples for the two main eBPF front-ends with USDT support, [bpftrace] and
[BPF Compiler Collection (BCC)], are listed. BCC is used
for complex tools and daemons and `bpftrace` is preferred for one-liners and
shorter scripts.
[bpftrace]: https://github.com/iovisor/bpftrace
[BPF Compiler Collection (BCC)]: https://github.com/iovisor/bcc
To develop and run bpftrace and BCC scripts you need to install the
corresponding packages. See [installing bpftrace] and [installing BCC] for more
information. For development, there are a [bpftrace Reference Guide], a
[BCC Reference Guide], and a [bcc Python Developer Tutorial].
[installing bpftrace]: https://github.com/iovisor/bpftrace/blob/master/INSTALL.md
[installing BCC]: https://github.com/iovisor/bcc/blob/master/INSTALL.md
[bpftrace Reference Guide]: https://github.com/iovisor/bpftrace/blob/master/docs/reference_guide.md
[BCC Reference Guide]: https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md
[bcc Python Developer Tutorial]: https://github.com/iovisor/bcc/blob/master/docs/tutorial_bcc_python_developer.md
## Examples
The bpftrace examples contain a relative path to the `dashd` binary. By
default, the scripts should be run from the repository root and assume a
self-compiled `dashd` binary. The paths in the examples can be changed, for
example, to point to release builds if needed. See the
[Dash Core USDT documentation] on how to list available tracepoints in your
`dashd` binary.
[Dash Core USDT documentation]: ../../doc/tracing.md#listing-available-tracepoints
**WARNING: eBPF programs require root privileges to be loaded into a Linux
kernel VM. This means the bpftrace and BCC examples must be executed with root
privileges. Make sure to carefully review any scripts that you run with root
privileges first!**
### log_p2p_traffic.bt
A bpftrace script logging information about inbound and outbound P2P network
messages. Based on the `net:inbound_message` and `net:outbound_message`
tracepoints.
By default, `bpftrace` limits strings to 64 bytes due to the limited stack size
in the eBPF VM. For example, Tor v3 addresses exceed the string size limit which
results in the port being cut off during logging. The string size limit can be
increased with the `BPFTRACE_STRLEN` environment variable (`BPFTRACE_STRLEN=70`
works fine).
```
$ bpftrace contrib/tracing/log_p2p_traffic.bt
```
Output
```
outbound 'ping' msg to peer 11 (outbound-full-relay, [2a02:b10c:f747:1:ef:fake:ipv6:addr]:8333) with 8 bytes
inbound 'pong' msg from peer 11 (outbound-full-relay, [2a02:b10c:f747:1:ef:fake:ipv6:addr]:8333) with 8 bytes
inbound 'inv' msg from peer 16 (outbound-full-relay, XX.XX.XXX.121:8333) with 37 bytes
outbound 'getdata' msg to peer 16 (outbound-full-relay, XX.XX.XXX.121:8333) with 37 bytes
inbound 'tx' msg from peer 16 (outbound-full-relay, XX.XX.XXX.121:8333) with 222 bytes
outbound 'inv' msg to peer 9 (outbound-full-relay, faketorv3addressa2ufa6odvoi3s77j4uegey0xb10csyfyve2t33curbyd.onion:8333) with 37 bytes
outbound 'inv' msg to peer 7 (outbound-full-relay, XX.XX.XXX.242:8333) with 37 bytes
```
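
The truncation mentioned above is simple arithmetic: a Tor v3 address is 62
characters before the port suffix is appended, which already nearly fills
bpftrace's default 64-byte string budget. A quick sanity check (the `:8333`
port here is illustrative):

```python
# A v3 onion address is 56 base32 characters plus the ".onion" suffix.
addr = "a" * 56 + ".onion"
full = addr + ":8333"

assert len(addr) == 62  # the address alone almost fills the 64-byte default
assert len(full) == 67  # address plus port exceeds the default limit
assert len(full) <= 70  # which is why BPFTRACE_STRLEN=70 suffices
```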
### p2p_monitor.py
A BCC Python script using curses for an interactive P2P message monitor. Based
on the `net:inbound_message` and `net:outbound_message` tracepoints.
Inbound and outbound traffic is listed for each peer together with information
about the connection. Peers can be selected individually to view recent P2P
messages.
```
$ python3 contrib/tracing/p2p_monitor.py ./src/dashd
```
Lists selectable peers and traffic and connection information.
```
P2P Message Monitor
Navigate with UP/DOWN or J/K and select a peer with ENTER or SPACE to see individual P2P messages
PEER OUTBOUND INBOUND TYPE ADDR
0 46 398 byte 61 1407590 byte block-relay-only XX.XX.XXX.196:8333
11 1156 253570 byte 3431 2394924 byte outbound-full-relay XXX.X.XX.179:8333
13 3425 1809620 byte 1236 305458 byte inbound XXX.X.X.X:60380
16 1046 241633 byte 1589 1199220 byte outbound-full-relay 4faketorv2pbfu7x.onion:8333
19 577 181679 byte 390 148951 byte outbound-full-relay kfake4vctorjv2o2.onion:8333
20 11 1248 byte 13 1283 byte block-relay-only [2600:fake:64d9:b10c:4436:aaaa:fe:bb]:8333
21 11 1248 byte 13 1299 byte block-relay-only XX.XXX.X.155:8333
22 5 103 byte 1 102 byte feeler XX.XX.XXX.173:8333
23 11 1248 byte 12 1255 byte block-relay-only XX.XXX.XXX.220:8333
24 3 103 byte 1 102 byte feeler XXX.XXX.XXX.64:8333
```
Showing recent P2P messages between our node and a selected peer.
```
----------------------------------------------------------------------
| PEER 16 (4faketorv2pbfu7x.onion:8333) |
| OUR NODE outbound-full-relay PEER |
| <--- sendcmpct (9 bytes) |
| inv (37 byte) ---> |
| <--- ping (8 bytes) |
| pong (8 byte) ---> |
| inv (37 byte) ---> |
| <--- addr (31 bytes) |
| inv (37 byte) ---> |
| <--- getheaders (1029 bytes) |
| headers (1 byte) ---> |
| <--- feefilter (8 bytes) |
| <--- pong (8 bytes) |
| <--- headers (82 bytes) |
| <--- addr (30003 bytes) |
| inv (1261 byte) ---> |
| … |
```
### log_raw_p2p_msgs.py
A BCC Python script showcasing eBPF and USDT limitations when passing data
larger than about 32kb. Based on the `net:inbound_message` and
`net:outbound_message` tracepoints.
Dash P2P messages can be larger than 32kb (e.g. `tx`, `block`, ...). The
eBPF VM's stack is limited to 512 bytes, and we can't allocate more than about
32kb for a P2P message in the eBPF VM. The **message data is cut off** when the
message is larger than MAX_MSG_DATA_LENGTH (see script). This can be detected
in user-space by comparing the data length to the message length variable. The
message is cut off when the data length is smaller than the message length.
A warning is included with the printed message data.
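
The user-space side of that detection is a single length comparison; a minimal
sketch (the function name is illustrative, not taken from the script):

```python
def is_truncated(msg_data: bytes, msg_size: int) -> bool:
    # The eBPF program copies at most MAX_MSG_DATA_LENGTH bytes, so a
    # submitted payload shorter than the advertised message size means
    # the tail was cut off inside the eBPF VM.
    return len(msg_data) < msg_size

assert is_truncated(b"\x00" * 32568, 53552)  # a cut-off 'tx' message
assert not is_truncated(b"\x00" * 253, 253)  # fully transferred message
```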
Data is submitted to user-space (i.e. to this script) via a ring buffer. The
throughput of the ring buffer is limited. Each p2p_message is about 32kb in
size. In- or outbound messages submitted to the ring buffer in rapid
succession fill the ring buffer faster than it can be read. Some messages are
lost. BCC prints: `Possibly lost 2 samples` on lost messages.
```
$ python3 contrib/tracing/log_raw_p2p_msgs.py ./src/dashd
```
```
Logging raw P2P messages.
Messages larger than about 32kb will be cut off!
Some messages might be lost!
outbound msg 'inv' from peer 4 (outbound-full-relay, XX.XXX.XX.4:8333) with 253 bytes: 0705000000be2245c8f844c9f763748e1a7…
Warning: incomplete message (only 32568 out of 53552 bytes)! inbound msg 'tx' from peer 32 (outbound-full-relay, XX.XXX.XXX.43:8333) with 53552 bytes: 020000000001fd3c01939c85ad6756ed9fc…
Possibly lost 2 samples
```
### connectblock_benchmark.bt
A `bpftrace` script to benchmark the `ConnectBlock()` function during, for
example, a blockchain re-index. Based on the `validation:block_connected` USDT
tracepoint.
The script takes three positional arguments. The first two arguments, the start
and end height, indicate between which blocks the benchmark should be run. The
third acts as a duration threshold in milliseconds. When the `ConnectBlock()`
function takes longer than the threshold, information about the block is
printed. For more details, see the header comment in the script.
The following command can be used to benchmark, for example, `ConnectBlock()`
between height 20000 and 38000 on SigNet while logging all blocks that take
longer than 25ms to connect.
```
$ bpftrace contrib/tracing/connectblock_benchmark.bt 20000 38000 25
```
In a different terminal, start Dash Core in SigNet mode with re-indexing
enabled.
```
$ ./src/dashd -signet -reindex
```
This produces the following output.
```
Attaching 5 probes...
ConnectBlock Benchmark between height 20000 and 38000 inclusive
Logging blocks taking longer than 25 ms to connect.
Starting Connect Block Benchmark between height 20000 and 38000.
BENCH 39 blk/s 59 tx/s 59 inputs/s 20 sigops/s (height 20038)
Block 20492 (000000f555653bb05e2f3c6e79925e01a20dd57033f4dc7c354b46e34735d32b) 20 tx 2319 ins 2318 sigops took 38 ms
BENCH 1840 blk/s 2117 tx/s 4478 inputs/s 2471 sigops/s (height 21879)
BENCH 1816 blk/s 4972 tx/s 4982 inputs/s 125 sigops/s (height 23695)
BENCH 2095 blk/s 2890 tx/s 2910 inputs/s 152 sigops/s (height 25790)
BENCH 1684 blk/s 3979 tx/s 4053 inputs/s 288 sigops/s (height 27474)
BENCH 1155 blk/s 3216 tx/s 3252 inputs/s 115 sigops/s (height 28629)
BENCH 1797 blk/s 2488 tx/s 2503 inputs/s 111 sigops/s (height 30426)
BENCH 1849 blk/s 6318 tx/s 6569 inputs/s 12189 sigops/s (height 32275)
BENCH 946 blk/s 20209 tx/s 20775 inputs/s 83809 sigops/s (height 33221)
Block 33406 (0000002adfe4a15cfcd53bd890a89bbae836e5bb7f38bac566f61ad4548c87f6) 25 tx 2045 ins 2090 sigops took 29 ms
Block 33687 (00000073231307a9828e5607ceb8156b402efe56747271a4442e75eb5b77cd36) 52 tx 1797 ins 1826 sigops took 26 ms
BENCH 582 blk/s 21581 tx/s 27673 inputs/s 60345 sigops/s (height 33803)
BENCH 1035 blk/s 19735 tx/s 19776 inputs/s 51355 sigops/s (height 34838)
Block 35625 (0000006b00b347390c4768ea9df2655e9ff4b120f29d78594a2a702f8a02c997) 20 tx 3374 ins 3371 sigops took 49 ms
BENCH 887 blk/s 17857 tx/s 22191 inputs/s 24404 sigops/s (height 35725)
Block 35937 (000000d816d13d6e39b471cd4368db60463a764ba1f29168606b04a22b81ea57) 75 tx 3943 ins 3940 sigops took 61 ms
BENCH 823 blk/s 16298 tx/s 21031 inputs/s 18440 sigops/s (height 36548)
Block 36583 (000000c3e260556dbf42968aae3f904dba8b8c1ff96a6f6e3aa5365d2e3ad317) 24 tx 2198 ins 2194 sigops took 34 ms
Block 36700 (000000b3b173de9e65a3cfa738d976af6347aaf83fa17ab3f2a4d2ede3ddfac4) 73 tx 1615 ins 1611 sigops took 31 ms
Block 36832 (0000007859578c02c1ac37dabd1b9ec19b98f350b56935f5dd3a41e9f79f836e) 34 tx 1440 ins 1436 sigops took 26 ms
BENCH 613 blk/s 16718 tx/s 25074 inputs/s 23022 sigops/s (height 37161)
Block 37870 (000000f5c1086291ba2d943fb0c3bc82e71c5ee341ee117681d1456fbf6c6c38) 25 tx 1517 ins 1514 sigops took 29 ms
BENCH 811 blk/s 16031 tx/s 20921 inputs/s 18696 sigops/s (height 37972)
Took 14055 ms to connect the blocks between height 20000 and 38000.
Histogram of block connection times in milliseconds (ms).
@durations:
[0] 16838 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[1] 882 |@@ |
[2, 4) 236 | |
[4, 8) 23 | |
[8, 16) 9 | |
[16, 32) 9 | |
[32, 64) 4 | |
```
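
The `@durations` histogram is produced by bpftrace's `hist()`, which bins
values into `[0]`, `[1]`, and then power-of-two intervals; the bucketing can
be sketched in Python (the label format mirrors the output above):

```python
def hist_bucket(ms: int) -> str:
    # hist() uses the buckets [0], [1], [2, 4), [4, 8), ... doubling each time.
    if ms <= 0:
        return "[0]"
    if ms == 1:
        return "[1]"
    lo = 1 << (ms.bit_length() - 1)  # largest power of two <= ms
    return f"[{lo}, {lo * 2})"

assert hist_bucket(0) == "[0]"
assert hist_bucket(3) == "[2, 4)"
assert hist_bucket(38) == "[32, 64)"  # e.g. the 38 ms block at height 20492
```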
### log_utxocache_flush.py
A BCC Python script to log the UTXO cache flushes. Based on the
`utxocache:flush` tracepoint.
```bash
$ python3 contrib/tracing/log_utxocache_flush.py ./src/dashd
```
```
Logging utxocache flushes. Ctrl-C to end...
Duration (µs) Mode Coins Count Memory Usage Prune
730451 IF_NEEDED 22990 3323.54 kB True
637657 ALWAYS 122320 17124.80 kB False
81349 ALWAYS 0 1383.49 kB False
```
### log_utxos.bt
A `bpftrace` script to log information about the coins that are added, spent, or
uncached from the UTXO set. Based on the `utxocache:add`, `utxocache:spend` and
`utxocache:uncache` tracepoints.
```bash
$ bpftrace contrib/tracing/log_utxos.bt
```
This should produce an output similar to the following. If you see bpftrace
warnings like `Lost 24 events`, the eBPF perf ring-buffer is filled faster
than it is being read. You can increase the ring-buffer size by setting the
ENV variable `BPFTRACE_PERF_RB_PAGES` (default 64) at a cost of higher
memory usage. See the [bpftrace reference guide] for more information.
[bpftrace reference guide]: https://github.com/iovisor/bpftrace/blob/master/docs/reference_guide.md#98-bpftrace_perf_rb_pages
```bash
Attaching 4 probes...
OP Outpoint Value Height Coinbase
Added 6ba9ad857e1ef2eb2a2c94f06813c414c7ab273e3d6bd7ad64e000315a887e7c:1 10000 2094512 No
Spent fa7dc4db56637a151f6649d8f26732956d1c5424c82aae400a83d02b2cc2c87b:0 182264897 2094512 No
Added eeb2f099b1af6a2a12e6ddd2eeb16fc5968582241d7f08ba202d28b60ac264c7:0 10000 2094512 No
Added eeb2f099b1af6a2a12e6ddd2eeb16fc5968582241d7f08ba202d28b60ac264c7:1 182254756 2094512 No
Added a0c7f4ec9cccef2d89672a624a4e6c8237a17572efdd4679eea9e9ee70d2db04:0 10072679 2094513 Yes
Spent 25e0df5cc1aeb1b78e6056bf403e5e8b7e41f138060ca0a50a50134df0549a5e:2 540 2094508 No
Spent 42f383c04e09c26a2378272ec33aa0c1bf4883ca5ab739e8b7e06be5a5787d61:1 3848399 2007724 No
Added f85e3b4b89270863a389395cc9a4123e417ab19384cef96533c6649abd6b0561:0 3788399 2094513 No
Added f85e3b4b89270863a389395cc9a4123e417ab19384cef96533c6649abd6b0561:2 540 2094513 No
Spent a05880b8c77971ed0b9f73062c7c4cdb0ff3856ab14cbf8bc481ed571cd34b83:1 5591281046 2094511 No
Added eb689865f7d957938978d6207918748f74e6aa074f47874724327089445b0960:0 5589696005 2094513 No
Added eb689865f7d957938978d6207918748f74e6aa074f47874724327089445b0960:1 1565556 2094513 No
```

contrib/tracing/connectblock_benchmark.bt (new file, +156 lines)
#!/usr/bin/env bpftrace
/*
USAGE:
bpftrace contrib/tracing/connectblock_benchmark.bt <start height> <end height> <logging threshold in ms>
- <start height> sets the height at which the benchmark should start. Setting
the start height to 0 starts the benchmark immediately, even before the
first block is connected.
- <end height> sets the height after which the benchmark should end. Setting
the end height to 0 disables the benchmark. The script only logs blocks
over <logging threshold in ms>.
- <logging threshold in ms> sets the duration threshold above which details
about a connected block are logged.
This script requires a 'dashd' binary compiled with eBPF support and the
'validation:block_connected' USDT. By default, it's assumed that 'dashd' is
located in './src/dashd'. This can be modified in the script below.
EXAMPLES:
bpftrace contrib/tracing/connectblock_benchmark.bt 300000 680000 1000
When run together with 'dashd -reindex', this benchmarks the time it takes to
connect the blocks between height 300000 and 680000 (inclusive) and prints
details about all blocks that take longer than 1000ms to connect. Prints a
histogram with block connection times when the benchmark is finished.
bpftrace contrib/tracing/connectblock_benchmark.bt 0 0 500
When run together with 'dashd', all newly connected blocks that
take longer than 500ms to connect are logged. A histogram with block
connection times is shown when the script is terminated.
*/
BEGIN
{
$start_height = $1;
$end_height = $2;
$logging_threshold_ms = $3;
if ($end_height < $start_height) {
printf("Error: start height (%d) larger than end height (%d)!\n", $start_height, $end_height);
exit();
}
if ($end_height > 0) {
printf("ConnectBlock benchmark between height %d and %d inclusive\n", $start_height, $end_height);
} else {
printf("ConnectBlock logging starting at height %d\n", $start_height);
}
if ($logging_threshold_ms > 0) {
printf("Logging blocks taking longer than %d ms to connect.\n", $3);
}
if ($start_height == 0) {
@start = nsecs;
}
}
/*
Attaches to the 'validation:block_connected' USDT and collects stats when the
connected block is between the start and end height (or the end height is
unset).
*/
usdt:./src/dashd:validation:block_connected /arg1 >= $1 && (arg1 <= $2 || $2 == 0 )/
{
$height = arg1;
$transactions = arg2;
$inputs = arg3;
$sigops = arg4;
$duration = (uint64) arg5;
@height = $height;
@blocks = @blocks + 1;
@transactions = @transactions + $transactions;
@inputs = @inputs + $inputs;
@sigops = @sigops + $sigops;
@durations = hist($duration / 1000);
if ($height == $1 && $height != 0) {
@start = nsecs;
printf("Starting Connect Block Benchmark between height %d and %d.\n", $1, $2);
}
if ($2 > 0 && $height >= $2) {
@end = nsecs;
$duration = @end - @start;
printf("\nTook %d ms to connect the blocks between height %d and %d.\n", $duration / 1000000, $1, $2);
exit();
}
}
/*
Attaches to the 'validation:block_connected' USDT and logs information about
blocks where the time it took to connect the block is above the
<logging threshold in ms>.
*/
usdt:./src/dashd:validation:block_connected /(uint64) arg5 / 1000 > $3/
{
$hash = arg0;
$height = (int32) arg1;
$transactions = (uint64) arg2;
$inputs = (int32) arg3;
$sigops = (int64) arg4;
$duration = (int64) arg5;
printf("Block %d (", $height);
/* Prints each byte of the block hash as hex in big-endian (the block-explorer format) */
$p = $hash + 31;
unroll(32) {
$b = *(uint8*)$p;
printf("%02x", $b);
$p -= 1;
}
printf(") %4d tx %5d ins %5d sigops took %4d ms\n", $transactions, $inputs, $sigops, (uint64) $duration / 1000);
}
/*
Prints stats about the blocks, transactions, inputs, and sigops processed in
the last second (if any).
*/
interval:s:1 {
if (@blocks > 0) {
printf("BENCH %4d blk/s %6d tx/s %7d inputs/s %8d sigops/s (height %d)\n", @blocks, @transactions, @inputs, @sigops, @height);
zero(@blocks);
zero(@transactions);
zero(@inputs);
zero(@sigops);
}
}
END
{
printf("\nHistogram of block connection times in milliseconds (ms).\n");
print(@durations);
clear(@durations);
clear(@blocks);
clear(@transactions);
clear(@inputs);
clear(@sigops);
clear(@height);
clear(@start);
clear(@end);
}
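
The unrolled loop above walks the hash from its last byte to its first because
block hashes are stored as 32 little-endian bytes in memory, while explorers
display them byte-reversed (big-endian). The same transformation in Python:

```python
def display_hash(raw: bytes) -> str:
    # Reverse the 32 in-memory (little-endian) bytes and hex-encode them
    # to obtain the familiar block-explorer representation.
    assert len(raw) == 32
    return raw[::-1].hex()

assert display_hash(bytes(31) + b"\x01") == "01" + "00" * 31
```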

contrib/tracing/log_p2p_traffic.bt (new file, +28 lines)
#!/usr/bin/env bpftrace
BEGIN
{
printf("Logging P2P traffic\n")
}
usdt:./src/dashd:net:inbound_message
{
$peer_id = (int64) arg0;
$peer_addr = str(arg1);
$peer_type = str(arg2);
$msg_type = str(arg3);
$msg_len = arg4;
printf("inbound '%s' msg from peer %d (%s, %s) with %d bytes\n", $msg_type, $peer_id, $peer_type, $peer_addr, $msg_len);
}
usdt:./src/dashd:net:outbound_message
{
$peer_id = (int64) arg0;
$peer_addr = str(arg1);
$peer_type = str(arg2);
$msg_type = str(arg3);
$msg_len = arg4;
printf("outbound '%s' msg to peer %d (%s, %s) with %d bytes\n", $msg_type, $peer_id, $peer_type, $peer_addr, $msg_len);
}

contrib/tracing/log_raw_p2p_msgs.py (new file, +180 lines)
#!/usr/bin/env python3
""" Demonstration of eBPF limitations and the effect on USDT with the
net:inbound_message and net:outbound_message tracepoints. """
# This script shows a limitation of eBPF when data larger than 32kb is passed to
# user-space. It uses BCC (https://github.com/iovisor/bcc) to load a sandboxed
# eBPF program into the Linux kernel (root privileges are required). The eBPF
# program attaches to two statically defined tracepoints. The tracepoint
# 'net:inbound_message' is called when a new P2P message is received, and
# 'net:outbound_message' is called on outbound P2P messages. The eBPF program
# submits the P2P messages to this script via a BPF ring buffer. The submitted
# messages are printed.
# eBPF Limitations:
#
# Dash P2P messages can be larger than 32kb (e.g. tx, block, ...). The eBPF
# VM's stack is limited to 512 bytes, and we can't allocate more than about 32kb
# for a P2P message in the eBPF VM. The message data is cut off when the message
# is larger than MAX_MSG_DATA_LENGTH (see definition below). This can be detected
# in user-space by comparing the data length to the message length variable. The
# message is cut off when the data length is smaller than the message length.
# A warning is included with the printed message data.
#
# Data is submitted to user-space (i.e. to this script) via a ring buffer. The
# throughput of the ring buffer is limited. Each p2p_message is about 32kb in
# size. In- or outbound messages submitted to the ring buffer in rapid
# succession fill the ring buffer faster than it can be read. Some messages are
# lost.
#
# BCC prints: "Possibly lost 2 samples" on lost messages.
import sys
from bcc import BPF, USDT
# BCC: The C program to be compiled to an eBPF program (by BCC) and loaded into
# a sandboxed Linux kernel VM.
program = """
#include <uapi/linux/ptrace.h>
#define MIN(a,b) ({ __typeof__ (a) _a = (a); __typeof__ (b) _b = (b); _a < _b ? _a : _b; })
// Maximum possible allocation size
// from include/linux/percpu.h in the Linux kernel
#define PCPU_MIN_UNIT_SIZE (32 << 10)
// Tor v3 addresses are 62 chars + 6 chars for the port (':12345').
#define MAX_PEER_ADDR_LENGTH 62 + 6
#define MAX_PEER_CONN_TYPE_LENGTH 20
#define MAX_MSG_TYPE_LENGTH 20
#define MAX_MSG_DATA_LENGTH PCPU_MIN_UNIT_SIZE - 200
struct p2p_message
{
u64 peer_id;
char peer_addr[MAX_PEER_ADDR_LENGTH];
char peer_conn_type[MAX_PEER_CONN_TYPE_LENGTH];
char msg_type[MAX_MSG_TYPE_LENGTH];
u64 msg_size;
u8 msg[MAX_MSG_DATA_LENGTH];
};
// We can't store the p2p_message struct on the eBPF stack as it is limited to
// 512 bytes and a P2P message can be bigger than 512 bytes. However, we can use
// a BPF-array with a length of 1 to allocate up to 32768 bytes (this is
// defined by PCPU_MIN_UNIT_SIZE in include/linux/percpu.h in the Linux kernel).
// Also see https://github.com/iovisor/bcc/issues/2306
BPF_ARRAY(msg_arr, struct p2p_message, 1);
// Two BPF perf buffers for pushing data (here P2P messages) to user-space.
BPF_PERF_OUTPUT(inbound_messages);
BPF_PERF_OUTPUT(outbound_messages);
int trace_inbound_message(struct pt_regs *ctx) {
int idx = 0;
struct p2p_message *msg = msg_arr.lookup(&idx);
// lookup() does not return a NULL pointer. However, the BPF verifier
// requires an explicit check that the `msg` pointer isn't a NULL
// pointer. See https://github.com/iovisor/bcc/issues/2595
if (msg == NULL) return 1;
bpf_usdt_readarg(1, ctx, &msg->peer_id);
bpf_usdt_readarg_p(2, ctx, &msg->peer_addr, MAX_PEER_ADDR_LENGTH);
bpf_usdt_readarg_p(3, ctx, &msg->peer_conn_type, MAX_PEER_CONN_TYPE_LENGTH);
bpf_usdt_readarg_p(4, ctx, &msg->msg_type, MAX_MSG_TYPE_LENGTH);
bpf_usdt_readarg(5, ctx, &msg->msg_size);
bpf_usdt_readarg_p(6, ctx, &msg->msg, MIN(msg->msg_size, MAX_MSG_DATA_LENGTH));
inbound_messages.perf_submit(ctx, msg, sizeof(*msg));
return 0;
};
int trace_outbound_message(struct pt_regs *ctx) {
int idx = 0;
struct p2p_message *msg = msg_arr.lookup(&idx);
// lookup() does not return a NULL pointer. However, the BPF verifier
// requires an explicit check that the `msg` pointer isn't a NULL
// pointer. See https://github.com/iovisor/bcc/issues/2595
if (msg == NULL) return 1;
bpf_usdt_readarg(1, ctx, &msg->peer_id);
bpf_usdt_readarg_p(2, ctx, &msg->peer_addr, MAX_PEER_ADDR_LENGTH);
bpf_usdt_readarg_p(3, ctx, &msg->peer_conn_type, MAX_PEER_CONN_TYPE_LENGTH);
bpf_usdt_readarg_p(4, ctx, &msg->msg_type, MAX_MSG_TYPE_LENGTH);
bpf_usdt_readarg(5, ctx, &msg->msg_size);
bpf_usdt_readarg_p(6, ctx, &msg->msg, MIN(msg->msg_size, MAX_MSG_DATA_LENGTH));
outbound_messages.perf_submit(ctx, msg, sizeof(*msg));
return 0;
};
"""
def print_message(event, inbound):
    print("%s %s msg '%s' from peer %d (%s, %s) with %d bytes: %s" %
          (
              "Warning: incomplete message (only %d out of %d bytes)!" % (
                  len(event.msg), event.msg_size) if len(event.msg) < event.msg_size else "",
              "inbound" if inbound else "outbound",
              event.msg_type.decode("utf-8"),
              event.peer_id,
              event.peer_conn_type.decode("utf-8"),
              event.peer_addr.decode("utf-8"),
              event.msg_size,
              bytes(event.msg[:event.msg_size]).hex(),
          )
          )
def main(bitcoind_path):
    bitcoind_with_usdts = USDT(path=str(bitcoind_path))

    # attaching the trace functions defined in the BPF program to the tracepoints
    bitcoind_with_usdts.enable_probe(
        probe="inbound_message", fn_name="trace_inbound_message")
    bitcoind_with_usdts.enable_probe(
        probe="outbound_message", fn_name="trace_outbound_message")

    bpf = BPF(text=program, usdt_contexts=[bitcoind_with_usdts])

    # BCC: perf buffer handle function for inbound_messages
    def handle_inbound(_, data, size):
        """ Inbound message handler.

        Called each time a message is submitted to the inbound_messages BPF table."""
        event = bpf["inbound_messages"].event(data)
        print_message(event, True)

    # BCC: perf buffer handle function for outbound_messages
    def handle_outbound(_, data, size):
        """ Outbound message handler.

        Called each time a message is submitted to the outbound_messages BPF table."""
        event = bpf["outbound_messages"].event(data)
        print_message(event, False)

    # BCC: add handlers to the inbound and outbound perf buffers
    bpf["inbound_messages"].open_perf_buffer(handle_inbound)
    bpf["outbound_messages"].open_perf_buffer(handle_outbound)

    print("Logging raw P2P messages.")
    print("Messages larger than about 32kb will be cut off!")
    print("Some messages might be lost!")

    while True:
        try:
            bpf.perf_buffer_poll()
        except KeyboardInterrupt:
            exit()


if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("USAGE:", sys.argv[0], "path/to/dashd")
        exit()
    path = sys.argv[1]
    main(path)


@@ -0,0 +1,104 @@
#!/usr/bin/env python3

import sys
import ctypes
from bcc import BPF, USDT

"""Example logging Dash Core utxo set cache flushes utilizing
the utxocache:flush tracepoint."""

# USAGE: ./contrib/tracing/log_utxocache_flush.py path/to/dashd

# BCC: The C program to be compiled to an eBPF program (by BCC) and loaded into
# a sandboxed Linux kernel VM.
program = """
# include <uapi/linux/ptrace.h>

struct data_t
{
    u64 duration;
    u32 mode;
    u64 coins_count;
    u64 coins_mem_usage;
    bool is_flush_for_prune;
};

// BPF perf buffer to push the data to user space.
BPF_PERF_OUTPUT(flush);

int trace_flush(struct pt_regs *ctx) {
    struct data_t data = {};
    bpf_usdt_readarg(1, ctx, &data.duration);
    bpf_usdt_readarg(2, ctx, &data.mode);
    bpf_usdt_readarg(3, ctx, &data.coins_count);
    bpf_usdt_readarg(4, ctx, &data.coins_mem_usage);
    bpf_usdt_readarg(5, ctx, &data.is_flush_for_prune);
    flush.perf_submit(ctx, &data, sizeof(data));
    return 0;
}
"""

FLUSH_MODES = [
    'NONE',
    'IF_NEEDED',
    'PERIODIC',
    'ALWAYS'
]


class Data(ctypes.Structure):
    # define output data structure corresponding to struct data_t
    _fields_ = [
        ("duration", ctypes.c_uint64),
        ("mode", ctypes.c_uint32),
        ("coins_count", ctypes.c_uint64),
        ("coins_mem_usage", ctypes.c_uint64),
        ("is_flush_for_prune", ctypes.c_bool)
    ]


def print_event(event):
    print("%-15d %-10s %-15d %-15s %-8s" % (
        event.duration,
        FLUSH_MODES[event.mode],
        event.coins_count,
        "%.2f kB" % (event.coins_mem_usage/1000),
        event.is_flush_for_prune
    ))


def main(bitcoind_path):
    bitcoind_with_usdts = USDT(path=str(bitcoind_path))

    # attaching the trace functions defined in the BPF program
    # to the tracepoints
    bitcoind_with_usdts.enable_probe(
        probe="flush", fn_name="trace_flush")
    b = BPF(text=program, usdt_contexts=[bitcoind_with_usdts])

    def handle_flush(_, data, size):
        """ Coins Flush handler.
        Called each time coin caches and indexes are flushed."""
        event = ctypes.cast(data, ctypes.POINTER(Data)).contents
        print_event(event)

    b["flush"].open_perf_buffer(handle_flush)

    print("Logging utxocache flushes. Ctrl-C to end...")
    print("%-15s %-10s %-15s %-15s %-8s" % ("Duration (µs)", "Mode",
                                            "Coins Count", "Memory Usage",
                                            "Flush for Prune"))
    while True:
        try:
            b.perf_buffer_poll()
        except KeyboardInterrupt:
            exit(0)


if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("USAGE: ", sys.argv[0], "path/to/dashd")
        exit(1)
    path = sys.argv[1]
    main(path)

contrib/tracing/log_utxos.bt Executable file

@@ -0,0 +1,86 @@
#!/usr/bin/env bpftrace
/*
USAGE:
bpftrace contrib/tracing/log_utxos.bt
This script requires a 'dashd' binary compiled with eBPF support and the
'utxocache' tracepoints. By default, it's assumed that 'dashd' is
located in './src/dashd'. This can be modified in the script below.
NOTE: requires bpftrace v0.12.0 or above.
*/
BEGIN
{
  printf("%-7s %-71s %16s %7s %8s\n",
         "OP", "Outpoint", "Value", "Height", "Coinbase");
}

/*
  Attaches to the 'utxocache:add' tracepoint and prints additions to the UTXO set cache.
*/
usdt:./src/dashd:utxocache:add
{
  $txid = arg0;
  $index = (uint32)arg1;
  $height = (uint32)arg2;
  $value = (int64)arg3;
  $isCoinbase = arg4;

  printf("Added ");
  $p = $txid + 31;
  unroll(32) {
    $b = *(uint8*)$p;
    printf("%02x", $b);
    $p -= 1;
  }
  printf(":%-6d %16ld %7d %s\n", $index, $value, $height, ($isCoinbase ? "Yes" : "No"));
}

/*
  Attaches to the 'utxocache:spent' tracepoint and prints coins spent from the UTXO set cache.
*/
usdt:./src/dashd:utxocache:spent
{
  $txid = arg0;
  $index = (uint32)arg1;
  $height = (uint32)arg2;
  $value = (int64)arg3;
  $isCoinbase = arg4;

  printf("Spent ");
  $p = $txid + 31;
  unroll(32) {
    $b = *(uint8*)$p;
    printf("%02x", $b);
    $p -= 1;
  }
  printf(":%-6d %16ld %7d %s\n", $index, $value, $height, ($isCoinbase ? "Yes" : "No"));
}

/*
  Attaches to the 'utxocache:uncache' tracepoint and prints UTXOs removed (uncached) from the UTXO set cache.
*/
usdt:./src/dashd:utxocache:uncache
{
  $txid = arg0;
  $index = (uint32)arg1;
  $height = (uint32)arg2;
  $value = (int64)arg3;
  $isCoinbase = arg4;

  printf("Uncache ");
  $p = $txid + 31;
  unroll(32) {
    $b = *(uint8*)$p;
    printf("%02x", $b);
    $p -= 1;
  }
  printf(":%-6d %16ld %7d %s\n", $index, $value, $height, ($isCoinbase ? "Yes" : "No"));
}

contrib/tracing/p2p_monitor.py Executable file

@@ -0,0 +1,250 @@
#!/usr/bin/env python3
""" Interactive dashd P2P network traffic monitor utilizing USDT and the
net:inbound_message and net:outbound_message tracepoints. """
# This script demonstrates what USDT for Dash Core can enable. It uses BCC
# (https://github.com/iovisor/bcc) to load a sandboxed eBPF program into the
# Linux kernel (root privileges are required). The eBPF program attaches to two
# statically defined tracepoints. The tracepoint 'net:inbound_message' is called
# when a new P2P message is received, and 'net:outbound_message' is called on
# outbound P2P messages. The eBPF program submits the P2P messages to
# this script via a BPF ring buffer.
import sys
import curses
from curses import wrapper, panel
from bcc import BPF, USDT
# BCC: The C program to be compiled to an eBPF program (by BCC) and loaded into
# a sandboxed Linux kernel VM.
program = """
#include <uapi/linux/ptrace.h>

// Tor v3 addresses are 62 chars + 6 chars for the port (':12345').
// I2P addresses are 60 chars + 6 chars for the port (':12345').
#define MAX_PEER_ADDR_LENGTH 62 + 6
#define MAX_PEER_CONN_TYPE_LENGTH 20
#define MAX_MSG_TYPE_LENGTH 20

struct p2p_message
{
    u64 peer_id;
    char peer_addr[MAX_PEER_ADDR_LENGTH];
    char peer_conn_type[MAX_PEER_CONN_TYPE_LENGTH];
    char msg_type[MAX_MSG_TYPE_LENGTH];
    u64 msg_size;
};

// Two BPF perf buffers for pushing data (here P2P messages) to user space.
BPF_PERF_OUTPUT(inbound_messages);
BPF_PERF_OUTPUT(outbound_messages);

int trace_inbound_message(struct pt_regs *ctx) {
    struct p2p_message msg = {};
    bpf_usdt_readarg(1, ctx, &msg.peer_id);
    bpf_usdt_readarg_p(2, ctx, &msg.peer_addr, MAX_PEER_ADDR_LENGTH);
    bpf_usdt_readarg_p(3, ctx, &msg.peer_conn_type, MAX_PEER_CONN_TYPE_LENGTH);
    bpf_usdt_readarg_p(4, ctx, &msg.msg_type, MAX_MSG_TYPE_LENGTH);
    bpf_usdt_readarg(5, ctx, &msg.msg_size);
    inbound_messages.perf_submit(ctx, &msg, sizeof(msg));
    return 0;
};

int trace_outbound_message(struct pt_regs *ctx) {
    struct p2p_message msg = {};
    bpf_usdt_readarg(1, ctx, &msg.peer_id);
    bpf_usdt_readarg_p(2, ctx, &msg.peer_addr, MAX_PEER_ADDR_LENGTH);
    bpf_usdt_readarg_p(3, ctx, &msg.peer_conn_type, MAX_PEER_CONN_TYPE_LENGTH);
    bpf_usdt_readarg_p(4, ctx, &msg.msg_type, MAX_MSG_TYPE_LENGTH);
    bpf_usdt_readarg(5, ctx, &msg.msg_size);
    outbound_messages.perf_submit(ctx, &msg, sizeof(msg));
    return 0;
};
"""
class Message:
    """ A P2P network message. """
    msg_type = ""
    size = 0
    data = bytes()
    inbound = False

    def __init__(self, msg_type, size, inbound):
        self.msg_type = msg_type
        self.size = size
        self.inbound = inbound


class Peer:
    """ A P2P network peer. """
    id = 0
    address = ""
    connection_type = ""
    last_messages = list()

    total_inbound_msgs = 0
    total_inbound_bytes = 0
    total_outbound_msgs = 0
    total_outbound_bytes = 0

    def __init__(self, id, address, connection_type):
        self.id = id
        self.address = address
        self.connection_type = connection_type
        self.last_messages = list()

    def add_message(self, message):
        self.last_messages.append(message)
        if len(self.last_messages) > 25:
            self.last_messages.pop(0)
        if message.inbound:
            self.total_inbound_bytes += message.size
            self.total_inbound_msgs += 1
        else:
            self.total_outbound_bytes += message.size
            self.total_outbound_msgs += 1
def main(bitcoind_path):
    peers = dict()

    bitcoind_with_usdts = USDT(path=str(bitcoind_path))

    # attaching the trace functions defined in the BPF program to the tracepoints
    bitcoind_with_usdts.enable_probe(
        probe="inbound_message", fn_name="trace_inbound_message")
    bitcoind_with_usdts.enable_probe(
        probe="outbound_message", fn_name="trace_outbound_message")

    bpf = BPF(text=program, usdt_contexts=[bitcoind_with_usdts])

    # BCC: perf buffer handle function for inbound_messages
    def handle_inbound(_, data, size):
        """ Inbound message handler.

        Called each time a message is submitted to the inbound_messages BPF table."""
        event = bpf["inbound_messages"].event(data)
        if event.peer_id not in peers:
            peer = Peer(event.peer_id, event.peer_addr.decode(
                "utf-8"), event.peer_conn_type.decode("utf-8"))
            peers[peer.id] = peer
        peers[event.peer_id].add_message(
            Message(event.msg_type.decode("utf-8"), event.msg_size, True))

    # BCC: perf buffer handle function for outbound_messages
    def handle_outbound(_, data, size):
        """ Outbound message handler.

        Called each time a message is submitted to the outbound_messages BPF table."""
        event = bpf["outbound_messages"].event(data)
        if event.peer_id not in peers:
            peer = Peer(event.peer_id, event.peer_addr.decode(
                "utf-8"), event.peer_conn_type.decode("utf-8"))
            peers[peer.id] = peer
        peers[event.peer_id].add_message(
            Message(event.msg_type.decode("utf-8"), event.msg_size, False))

    # BCC: add handlers to the inbound and outbound perf buffers
    bpf["inbound_messages"].open_perf_buffer(handle_inbound)
    bpf["outbound_messages"].open_perf_buffer(handle_outbound)

    wrapper(loop, bpf, peers)
def loop(screen, bpf, peers):
    screen.nodelay(1)
    cur_list_pos = 0
    win = curses.newwin(30, 70, 2, 7)
    win.erase()
    win.border(ord("|"), ord("|"), ord("-"), ord("-"),
               ord("-"), ord("-"), ord("-"), ord("-"))
    info_panel = panel.new_panel(win)
    info_panel.hide()

    ROWS_AVAILABLE_FOR_LIST = curses.LINES - 5
    scroll = 0

    while True:
        try:
            # BCC: poll the perf buffers for new events or timeout after 50ms
            bpf.perf_buffer_poll(timeout=50)
            ch = screen.getch()
            if (ch == curses.KEY_DOWN or ch == ord("j")) and cur_list_pos < len(
                    peers.keys()) - 1 and info_panel.hidden():
                cur_list_pos += 1
                if cur_list_pos >= ROWS_AVAILABLE_FOR_LIST:
                    scroll += 1
            if (ch == curses.KEY_UP or ch == ord("k")) and cur_list_pos > 0 and info_panel.hidden():
                cur_list_pos -= 1
                if scroll > 0:
                    scroll -= 1
            if ch == ord('\n') or ch == ord(' '):
                if info_panel.hidden():
                    info_panel.show()
                else:
                    info_panel.hide()
            screen.erase()
            render(screen, peers, cur_list_pos, scroll, ROWS_AVAILABLE_FOR_LIST, info_panel)
            curses.panel.update_panels()
            screen.refresh()
        except KeyboardInterrupt:
            exit()
def render(screen, peers, cur_list_pos, scroll, ROWS_AVAILABLE_FOR_LIST, info_panel):
    """ renders the list of peers and details panel

    This code is unrelated to USDT, BCC and BPF.
    """
    header_format = "%6s %-20s %-20s %-22s %-67s"
    row_format = "%6s %-5d %9d byte %-5d %9d byte %-22s %-67s"

    screen.addstr(0, 1, (" P2P Message Monitor "), curses.A_REVERSE)
    screen.addstr(
        1, 0, (" Navigate with UP/DOWN or J/K and select a peer with ENTER or SPACE to see individual P2P messages"), curses.A_NORMAL)
    screen.addstr(3, 0,
                  header_format % ("PEER", "OUTBOUND", "INBOUND", "TYPE", "ADDR"), curses.A_BOLD | curses.A_UNDERLINE)
    peer_list = sorted(peers.keys())[scroll:ROWS_AVAILABLE_FOR_LIST+scroll]
    for i, peer_id in enumerate(peer_list):
        peer = peers[peer_id]
        screen.addstr(i + 4, 0,
                      row_format % (peer.id, peer.total_outbound_msgs, peer.total_outbound_bytes,
                                    peer.total_inbound_msgs, peer.total_inbound_bytes,
                                    peer.connection_type, peer.address),
                      curses.A_REVERSE if i + scroll == cur_list_pos else curses.A_NORMAL)
        if i + scroll == cur_list_pos:
            info_window = info_panel.window()
            info_window.erase()
            info_window.border(
                ord("|"), ord("|"), ord("-"), ord("-"),
                ord("-"), ord("-"), ord("-"), ord("-"))

            info_window.addstr(
                1, 1, f"PEER {peer.id} ({peer.address})".center(68), curses.A_REVERSE | curses.A_BOLD)
            info_window.addstr(
                2, 1, f" OUR NODE{peer.connection_type:^54}PEER ",
                curses.A_BOLD)
            for i, msg in enumerate(peer.last_messages):
                if msg.inbound:
                    info_window.addstr(
                        i + 3, 1, "%68s" %
                        (f"<--- {msg.msg_type} ({msg.size} bytes) "), curses.A_NORMAL)
                else:
                    info_window.addstr(
                        i + 3, 1, " %s (%d byte) --->" %
                        (msg.msg_type, msg.size), curses.A_NORMAL)
if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("USAGE:", sys.argv[0], "path/to/dashd")
        exit()
    path = sys.argv[1]
    main(path)


@@ -39,6 +39,7 @@ NO_SQLITE ?=
NO_WALLET ?=
NO_ZMQ ?=
NO_UPNP ?=
NO_USDT ?=
NO_NATPMP ?=
MULTIPROCESS ?=
FALLBACK_DOWNLOAD_PATH ?= http://dash-depends-sources.s3-website-us-west-2.amazonaws.com
@@ -155,8 +156,9 @@ natpmp_packages_$(NO_NATPMP) = $(natpmp_packages)
zmq_packages_$(NO_ZMQ) = $(zmq_packages)
multiprocess_packages_$(MULTIPROCESS) = $(multiprocess_packages)
usdt_packages_$(NO_USDT) = $(usdt_$(host_os)_packages)

packages += $($(host_arch)_$(host_os)_packages) $($(host_os)_packages) $(qt_packages_) $(wallet_packages_) $(upnp_packages_) $(natpmp_packages_) $(usdt_packages_)
native_packages += $($(host_arch)_$(host_os)_native_packages) $($(host_os)_native_packages)

ifneq ($(zmq_packages_),)
@@ -234,6 +236,7 @@ $(host_prefix)/share/config.site : config.site.in $(host_prefix)/.stamp_$(final_
-e 's|@no_bdb@|$(NO_BDB)|' \
-e 's|@no_sqlite@|$(NO_SQLITE)|' \
-e 's|@no_upnp@|$(NO_UPNP)|' \
-e 's|@no_usdt@|$(NO_USDT)|' \
-e 's|@no_natpmp@|$(NO_NATPMP)|' \
-e 's|@multiprocess@|$(MULTIPROCESS)|' \
-e 's|@debug@|$(DEBUG)|' \


@@ -62,12 +62,16 @@ if test -z "$with_gui" && test -n "@no_qt@"; then
with_gui=no
fi

if test -n "@debug@" && test -z "@no_qt@" && test "x$with_gui" != xno; then
with_gui=qt5_debug
fi

if test -z "$enable_zmq" && test -n "@no_zmq@"; then
enable_zmq=no
fi

if test -z "$enable_usdt" && test -n "@no_usdt@"; then
enable_usdt=no
fi

if test "@host_os@" = darwin; then


@@ -22,6 +22,8 @@ natpmp_packages=libnatpmp
multiprocess_packages = libmultiprocess capnp
multiprocess_native_packages = native_libmultiprocess native_capnp

usdt_linux_packages=systemtap

darwin_native_packages = native_ds_store native_mac_alias

ifneq ($(build_os),darwin)


@@ -0,0 +1,13 @@
package=systemtap
$(package)_version=4.8
$(package)_download_path=https://sourceware.org/ftp/systemtap/releases/
$(package)_file_name=$(package)-$($(package)_version).tar.gz
$(package)_sha256_hash=cbd50a4eba5b261394dc454c12448ddec73e55e6742fda7f508f9fbc1331c223
$(package)_patches=remove_SDT_ASM_SECTION_AUTOGROUP_SUPPORT_check.patch fix_variadic_warning.patch
define $(package)_preprocess_cmds
patch -p1 < $($(package)_patch_dir)/remove_SDT_ASM_SECTION_AUTOGROUP_SUPPORT_check.patch && \
patch -p1 < $($(package)_patch_dir)/fix_variadic_warning.patch && \
mkdir -p $($(package)_staging_prefix_dir)/include/sys && \
cp includes/sys/sdt.h $($(package)_staging_prefix_dir)/include/sys/sdt.h
endef


@@ -0,0 +1,16 @@
Could be dropped after a migration to C++20.
See: https://github.com/bitcoin/bitcoin/issues/26916.
diff --git a/includes/sys/sdt.h b/includes/sys/sdt.h
index 4075a5f..7c6138c 100644
--- a/includes/sys/sdt.h
+++ b/includes/sys/sdt.h
@@ -276,7 +276,7 @@ __extension__ extern unsigned long long __sdt_unsp;
_SDT_ASM_1(.purgem _SDT_TYPE_) \
_SDT_ASM_1(.purgem _SDT_TYPE)
-#define _SDT_ASM_BODY(provider, name, pack_args, args, ...) \
+#define _SDT_ASM_BODY(provider, name, pack_args, args) \
_SDT_DEF_MACROS \
_SDT_ASM_1(990: _SDT_NOP) \
_SDT_ASM_3( .pushsection .note.stapsdt,_SDT_ASM_AUTOGROUP,"note") \


@@ -0,0 +1,27 @@
Remove _SDT_ASM_SECTION_AUTOGROUP_SUPPORT check
We assume that the assembler supports "?" in .pushsection directives.
This enables us to skip configure and make.
See https://github.com/bitcoin/bitcoin/issues/23297.
diff --git a/includes/sys/sdt.h b/includes/sys/sdt.h
index ca0162b..f96e0ee 100644
--- a/includes/sys/sdt.h
+++ b/includes/sys/sdt.h
@@ -241,12 +241,10 @@ __extension__ extern unsigned long long __sdt_unsp;
nice with code in COMDAT sections, which comes up in C++ code.
Without that assembler support, some combinations of probe placements
in certain kinds of C++ code may produce link-time errors. */
-#include "sdt-config.h"
-#if _SDT_ASM_SECTION_AUTOGROUP_SUPPORT
+/* PATCH: We assume that the assembler supports the feature. This
+ enables us to skip configure and make. In turn, this means we
+ require fewer dependencies and have shorter depend build times. */
# define _SDT_ASM_AUTOGROUP "?"
-#else
-# define _SDT_ASM_AUTOGROUP ""
-#endif
#define _SDT_DEF_MACROS \
_SDT_ASM_1(.altmacro) \


@@ -50,6 +50,7 @@ Optional dependencies:
libqrencode | QR codes in GUI | QR code generation (only needed when GUI enabled)
libzmq3 | ZMQ notification | ZMQ notifications (requires ZMQ version >= 4.0.0)
sqlite3 | SQLite DB | Wallet storage (only needed when descriptor wallet enabled)
systemtap | Tracing (USDT) | Statically defined tracepoints

For the versions used, see [dependencies.md](dependencies.md)

@@ -115,6 +116,12 @@ GMP dependencies (provides platform-optimized routines):
sudo apt-get install libgmp-dev
```

User-Space, Statically Defined Tracing (USDT) dependencies:

```sh
sudo apt install systemtap-sdt-dev
```

GUI dependencies:

If you want to build dash-qt, make sure that the required packages for Qt development

@@ -196,6 +203,12 @@ GMP dependencies (provides platform-optimized routines):
sudo dnf install gmp-devel
```

User-Space, Statically Defined Tracing (USDT) dependencies:

```sh
sudo dnf install systemtap-sdt-devel
```

GUI dependencies:

If you want to build dash-qt, make sure that the required packages for Qt development


@@ -24,6 +24,7 @@ These are the dependencies currently used by Dash Core. You can find instruction
| Qt | [5.15.11](https://download.qt.io/official_releases/qt/) | [5.11.3](https://github.com/bitcoin/bitcoin/pull/24132) | No | | |
| SQLite | [3.32.1](https://sqlite.org/download.html) | [3.7.17](https://github.com/bitcoin/bitcoin/pull/19077) | | | |
| XCB | | | | | [Yes](https://github.com/dashpay/dash/blob/develop/depends/packages/qt.mk) (Linux only) |
| systemtap ([tracing](tracing.md)) | [4.8](https://sourceware.org/ftp/systemtap/releases/) | | | | |
| xkbcommon | | | | | [Yes](https://github.com/dashpay/dash/blob/develop/depends/packages/qt.mk) (Linux only) |
| ZeroMQ | [4.3.1](https://github.com/zeromq/libzmq/releases) | 4.0.0 | No | | |
| zlib | | | | | [Yes](https://github.com/dashpay/dash/blob/develop/depends/packages/qt.mk) |

@@ -41,6 +42,7 @@ Some dependencies are not needed in all configurations. The following are some f
* libnatpmp is not needed with `--without-natpmp`.
* Qt is not needed with `--without-gui`.
* If the qrencode dependency is absent, QR support won't be added. To force an error when that happens, pass `--with-qrencode`.
* If the systemtap dependency is absent, USDT support won't be compiled in.
* ZeroMQ is needed only with the `--with-zmq` option.

#### Other

doc/tracing.md Normal file

@@ -0,0 +1,319 @@
# User-space, Statically Defined Tracing (USDT) for Dash Core
Dash Core includes statically defined tracepoints to allow for more
observability during development, debugging, code review, and production usage.
These tracepoints make it possible to keep track of custom statistics and
enable detailed monitoring of otherwise hidden internals. They have
little to no performance impact when unused.
```
eBPF and USDT Overview
======================
┌──────────────────┐ ┌──────────────┐
│ tracing script │ │ dashd │
│==================│ 2. │==============│
│ eBPF │ tracing │ hooks │ │
│ code │ logic │ into┌─┤►tracepoint 1─┼───┐ 3.
└────┬───┴──▲──────┘ ├─┤►tracepoint 2 │ │ pass args
1. │ │ 4. │ │ ... │ │ to eBPF
User compiles │ │ pass data to │ └──────────────┘ │ program
Space & loads │ │ tracing script │ │
─────────────────┼──────┼─────────────────┼────────────────────┼───
Kernel │ │ │ │
Space ┌──┬─▼──────┴─────────────────┴────────────┐ │
│ │ eBPF program │◄──────┘
│ └───────────────────────────────────────┤
│ eBPF kernel Virtual Machine (sandboxed) │
└──────────────────────────────────────────┘
1. The tracing script compiles the eBPF code and loads the eBPF program into a kernel VM
2. The eBPF program hooks into one or more tracepoints
3. When the tracepoint is called, the arguments are passed to the eBPF program
4. The eBPF program processes the arguments and returns data to the tracing script
```
The Linux kernel can hook into the tracepoints during runtime and pass data to
sandboxed [eBPF] programs running in the kernel. These eBPF programs can, for
example, collect statistics or pass data back to user-space scripts for further
processing.
[eBPF]: https://ebpf.io/
The two main eBPF front-ends with support for USDT are [bpftrace] and
[BPF Compiler Collection (BCC)]. BCC is used for complex tools and daemons and
`bpftrace` is preferred for one-liners and shorter scripts. Examples for both can
be found in [contrib/tracing].
[bpftrace]: https://github.com/iovisor/bpftrace
[BPF Compiler Collection (BCC)]: https://github.com/iovisor/bcc
[contrib/tracing]: ../contrib/tracing/
## Tracepoint documentation
The currently available tracepoints are listed here.
### Context `net`
#### Tracepoint `net:inbound_message`
Is called when a message is received from a peer over the P2P network. Passes
information about our peer, the connection and the message as arguments.
Arguments passed:
1. Peer ID as `int64`
2. Peer Address and Port (IPv4, IPv6, Tor v3, I2P, ...) as `pointer to C-style String` (max. length 68 characters)
3. Connection Type (inbound, feeler, outbound-full-relay, ...) as `pointer to C-style String` (max. length 20 characters)
4. Message Type (inv, ping, getdata, addrv2, ...) as `pointer to C-style String` (max. length 20 characters)
5. Message Size in bytes as `uint64`
6. Message Bytes as `pointer to unsigned chars` (i.e. bytes)
Note: The message is passed to the tracepoint in full. However, due to space
limitations in the eBPF kernel VM, it might not be possible to pass the message
to user space in full. Messages larger than about 32 kB might be cut off. This
can be detected in tracing scripts by comparing the message size to the length
of the passed message.
#### Tracepoint `net:outbound_message`
Is called when a message is sent to a peer over the P2P network. Passes
information about our peer, the connection and the message as arguments.
Arguments passed:
1. Peer ID as `int64`
2. Peer Address and Port (IPv4, IPv6, Tor v3, I2P, ...) as `pointer to C-style String` (max. length 68 characters)
3. Connection Type (inbound, feeler, outbound-full-relay, ...) as `pointer to C-style String` (max. length 20 characters)
4. Message Type (inv, ping, getdata, addrv2, ...) as `pointer to C-style String` (max. length 20 characters)
5. Message Size in bytes as `uint64`
6. Message Bytes as `pointer to unsigned chars` (i.e. bytes)
Note: The message is passed to the tracepoint in full. However, due to space
limitations in the eBPF kernel VM, it might not be possible to pass the message
to user space in full. Messages larger than about 32 kB might be cut off. This
can be detected in tracing scripts by comparing the message size to the length
of the passed message.
### Context `validation`
#### Tracepoint `validation:block_connected`
Is called *after* a block is connected to the chain. Can, for example, be used
to benchmark block connections together with `-reindex`.
Arguments passed:
1. Block Header Hash as `pointer to unsigned chars` (i.e. 32 bytes in little-endian)
2. Block Height as `int32`
3. Transactions in the Block as `uint64`
4. Inputs spent in the Block as `int32`
5. SigOps in the Block (excluding coinbase SigOps) as `uint64`
6. Time it took to connect the Block in microseconds (µs) as `uint64`
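Since the hash bytes arrive in little-endian order, a tracing script has to reverse them before printing the familiar block-hash representation. A minimal sketch (the helper name is ours, not part of Dash Core):

```python
def hash_to_display_hex(raw_hash: bytes) -> str:
    """Convert the 32 raw little-endian bytes passed by the tracepoint into
    the big-endian hex string shown by RPCs and block explorers."""
    assert len(raw_hash) == 32
    return raw_hash[::-1].hex()


raw = bytes([0x01] + [0x00] * 31)  # little-endian: lowest byte first
print(hash_to_display_hex(raw))    # "00" * 31 followed by "01"
```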
### Context `utxocache`
The following tracepoints cover the in-memory UTXO cache. UTXOs are, for example,
added to and removed (spent) from the cache when we connect a new block.
**Note**: Dash Core uses temporary clones of the _main_ UTXO cache
(`chainstate.CoinsTip()`). For example, the RPCs `generateblock` and
`getblocktemplate` call `TestBlockValidity()`, which applies the UTXO set
changes to a temporary cache. Similarly, mempool consistency checks, which are
frequent on regtest, also apply the UTXO set changes to a temporary cache.
Changes to the _main_ UTXO cache and to temporary caches trigger the tracepoints.
We can't tell if a temporary cache or the _main_ cache was changed.
#### Tracepoint `utxocache:flush`
Is called *after* the in-memory UTXO cache is flushed.
Arguments passed:
1. Time it took to flush the cache in microseconds as `int64`
2. Flush state mode as `uint32`. It's an enumerator class with values `0`
(`NONE`), `1` (`IF_NEEDED`), `2` (`PERIODIC`), `3` (`ALWAYS`)
3. Cache size (number of coins) before the flush as `uint64`
4. Cache memory usage in bytes as `uint64`
5. If pruning caused the flush as `bool`
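In user space, the mode value can be mapped back to its name. A sketch mirroring the enumerator documented above (the function name and output format are ours):

```python
# Index positions mirror the documented enumerator values 0..3.
FLUSH_MODES = ["NONE", "IF_NEEDED", "PERIODIC", "ALWAYS"]


def format_flush(duration_us, mode, coins_count, mem_bytes, for_prune):
    """Render the five utxocache:flush arguments as one log line."""
    return "%d µs, mode=%s, %d coins, %.2f kB%s" % (
        duration_us, FLUSH_MODES[mode], coins_count, mem_bytes / 1000,
        ", flush for prune" if for_prune else "")


print(format_flush(1500, 2, 42, 2048, False))
# 1500 µs, mode=PERIODIC, 42 coins, 2.05 kB
```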
#### Tracepoint `utxocache:add`
Is called when a coin is added to a UTXO cache. This can be a temporary UTXO cache too.
Arguments passed:
1. Transaction ID (hash) as `pointer to unsigned chars` (i.e. 32 bytes in little-endian)
2. Output index as `uint32`
3. Block height the coin was added to the UTXO-set as `uint32`
4. Value of the coin as `int64`
5. If the coin is a coinbase as `bool`
#### Tracepoint `utxocache:spent`
Is called when a coin is spent from a UTXO cache. This can be a temporary UTXO cache too.
Arguments passed:
1. Transaction ID (hash) as `pointer to unsigned chars` (i.e. 32 bytes in little-endian)
2. Output index as `uint32`
3. Block height the coin was spent, as `uint32`
4. Value of the coin as `int64`
5. If the coin is a coinbase as `bool`
#### Tracepoint `utxocache:uncache`
Is called when a coin is purposefully unloaded from a UTXO cache. This
happens, for example, when we load a UTXO into a cache when trying to accept
a transaction that turns out to be invalid. The loaded UTXO is uncached to avoid
filling our UTXO cache up with irrelevant UTXOs.
Arguments passed:
1. Transaction ID (hash) as `pointer to unsigned chars` (i.e. 32 bytes in little-endian)
2. Output index as `uint32`
3. Block height the coin was uncached, as `uint32`
4. Value of the coin as `int64`
5. If the coin is a coinbase as `bool`
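Because `add`, `spent`, and `uncache` pass the same five arguments in the same order, a BCC script could reuse a single ctypes structure for all three perf buffers. A sketch (the structure and field names are ours, not part of Dash Core):

```python
import ctypes


class CoinEvent(ctypes.Structure):
    """Mirrors the five arguments shared by the utxocache add/spent/uncache
    tracepoints, in the documented order."""
    _fields_ = [
        ("txid", ctypes.c_ubyte * 32),  # transaction hash, little-endian
        ("index", ctypes.c_uint32),     # output index
        ("height", ctypes.c_uint32),    # block height
        ("value", ctypes.c_int64),      # value of the coin
        ("is_coinbase", ctypes.c_bool),
    ]


event = CoinEvent(index=1, height=1000, value=50000000, is_coinbase=True)
print(event.index, event.height, event.value, event.is_coinbase)
```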
## Adding tracepoints to Dash Core
To add a new tracepoint, `#include <util/trace.h>` in the compilation unit where
the tracepoint is inserted. Use one of the `TRACEx` macros listed below
depending on the number of arguments passed to the tracepoint. Up to 12
arguments can be provided. The `context` and `event` specify the names by which
the tracepoint is referred to. Please use `snake_case` and try to make sure that
the tracepoint names make sense even without detailed knowledge of the
implementation details. Do not forget to update the tracepoint list in this
document.
```c
#define TRACE(context, event)
#define TRACE1(context, event, a)
#define TRACE2(context, event, a, b)
#define TRACE3(context, event, a, b, c)
#define TRACE4(context, event, a, b, c, d)
#define TRACE5(context, event, a, b, c, d, e)
#define TRACE6(context, event, a, b, c, d, e, f)
#define TRACE7(context, event, a, b, c, d, e, f, g)
#define TRACE8(context, event, a, b, c, d, e, f, g, h)
#define TRACE9(context, event, a, b, c, d, e, f, g, h, i)
#define TRACE10(context, event, a, b, c, d, e, f, g, h, i, j)
#define TRACE11(context, event, a, b, c, d, e, f, g, h, i, j, k)
#define TRACE12(context, event, a, b, c, d, e, f, g, h, i, j, k, l)
```
For example:
```C++
TRACE6(net, inbound_message,
pnode->GetId(),
pnode->m_addr_name.c_str(),
pnode->ConnectionTypeAsString().c_str(),
sanitizedType.c_str(),
msg.data.size(),
msg.data.data()
);
```
### Guidelines and best practices
#### Clear motivation and use-case
Tracepoints need a clear motivation and use-case. The motivation should
outweigh the impact on, for example, code readability. There is no point in
adding tracepoints that don't end up being used.
#### Provide an example
When adding a new tracepoint, provide an example. Examples can show the use case
and help reviewers test that the tracepoint works as intended. The examples
can be kept simple but should give others a starting point when working with
the tracepoint. See the existing examples in [contrib/tracing/].
[contrib/tracing/]: ../contrib/tracing/
#### No expensive computations for tracepoints
Data passed to the tracepoint should be inexpensive to compute. Although the
tracepoint itself only has overhead when enabled, the code to compute arguments
is always run - even if the tracepoint is not used. For example, avoid
serialization and parsing.
#### Semi-stable API
Tracepoints should have a semi-stable API. Users should be able to rely on the
tracepoints for scripting. This means tracepoints need to be documented, and the
argument order ideally should not change. If there is an important reason to
change argument order, make sure to document the change and update the examples
using the tracepoint.
#### eBPF Virtual Machine limits
Keep the eBPF Virtual Machine limits in mind. eBPF programs receiving data from
the tracepoints run in a sandboxed Linux kernel VM. This VM has a limited stack
size of 512 bytes. Check if it makes sense to pass larger amounts of data, for
example, with a tracing script that can handle the passed data.
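As a rough budget check, the `p2p_message` struct used by the functional tests stays well under that 512-byte stack (the field sizes below mirror the test's constants; `EBPF_STACK_LIMIT` is a name chosen here for illustration):

```python
import ctypes

# Field sizes mirror the constants used in the net:* functional test.
MAX_PEER_ADDR_LENGTH = 68
MAX_PEER_CONN_TYPE_LENGTH = 20
MAX_MSG_TYPE_LENGTH = 20
MAX_MSG_DATA_LENGTH = 150

class P2PMessage(ctypes.Structure):
    _fields_ = [
        ("peer_id", ctypes.c_uint64),
        ("peer_addr", ctypes.c_char * MAX_PEER_ADDR_LENGTH),
        ("peer_conn_type", ctypes.c_char * MAX_PEER_CONN_TYPE_LENGTH),
        ("msg_type", ctypes.c_char * MAX_MSG_TYPE_LENGTH),
        ("msg_size", ctypes.c_uint64),
        ("msg", ctypes.c_ubyte * MAX_MSG_DATA_LENGTH),
    ]

EBPF_STACK_LIMIT = 512  # bytes available to the sandboxed eBPF VM
struct_size = ctypes.sizeof(P2PMessage)
assert struct_size <= EBPF_STACK_LIMIT, "event struct would overflow the eBPF stack"
```

Running a check like this when designing an event struct helps catch layouts that would not fit before the eBPF verifier rejects them.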
#### `bpftrace` argument limit
While tracepoints can have up to 12 arguments, bpftrace scripts currently only
support reading the first six arguments (`arg0` through `arg5`) on `x86_64`.
bpftrace currently lacks real support for handling and printing binary data,
such as block header hashes and txids. When a tracepoint passes more than six
arguments, string and integer arguments should preferably be placed in the
first six argument fields. Binary data can be placed in later arguments. BCC
supports reading all 12 arguments.
#### Strings as C-style String
Generally, strings should be passed into the `TRACEx` macros as pointers to
C-style strings (a null-terminated sequence of characters). For a C++
`std::string`, [`c_str()`] can be used. It's recommended to document the
maximum expected string size if known.
[`c_str()`]: https://www.cplusplus.com/reference/string/string/c_str/
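Fixed-size buffers read from such tracepoints rely on that NUL terminator; on the consumer side, a `ctypes.c_char` array truncates at the first NUL byte (an illustrative sketch; the buffer size is arbitrary):

```python
import ctypes

MAX_MSG_TYPE_LENGTH = 20  # arbitrary buffer size for this sketch

class Event(ctypes.Structure):
    _fields_ = [("msg_type", ctypes.c_char * MAX_MSG_TYPE_LENGTH)]

event = Event()
event.msg_type = b"version"  # shorter than the buffer; the rest stays NUL

# A c_char array reads back as bytes truncated at the first NUL,
# recovering the original C-style string.
msg_type = event.msg_type.decode("utf-8")
```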
## Listing available tracepoints
Multiple tools can list the available tracepoints in a `dashd` binary with
USDT support.
### GDB - GNU Project Debugger
To list probes in Dash Core, use `info probes` in `gdb`:
```
$ gdb ./src/dashd
(gdb) info probes
Type Provider   Name             Where              Semaphore Object
stap net        inbound_message  0x000000000014419e           /src/dashd
stap net        outbound_message 0x0000000000107c05           /src/dashd
stap validation block_connected  0x00000000002fb10c           /src/dashd
```
### With `readelf`
The `readelf` tool can be used to display the USDT tracepoints in Dash Core.
Look for the notes with the description `NT_STAPSDT`.
```
$ readelf -n ./src/dashd | grep NT_STAPSDT -A 4 -B 2
Displaying notes found in: .note.stapsdt
  Owner                Data size        Description
  stapsdt              0x0000005d       NT_STAPSDT (SystemTap probe descriptors)
    Provider: net
    Name: outbound_message
    Location: 0x0000000000107c05, Base: 0x0000000000579c90, Semaphore: 0x0000000000000000
    Arguments: -8@%r12 8@%rbx 8@%rdi 8@192(%rsp) 8@%rax 8@%rdx
```
### With `tplist`
The `tplist` tool is provided by BCC (see [Installing BCC]). It displays kernel
tracepoints or USDT probes and their formats (for more information, see the
[`tplist` usage demonstration]). There are slight binary naming differences
between distributions. For example, on
[Ubuntu the binary is called `tplist-bpfcc`][ubuntu binary].
[Installing BCC]: https://github.com/iovisor/bcc/blob/master/INSTALL.md
[`tplist` usage demonstration]: https://github.com/iovisor/bcc/blob/master/tools/tplist_example.txt
[ubuntu binary]: https://github.com/iovisor/bcc/blob/master/INSTALL.md#ubuntu---binary
```
$ tplist -l ./src/dashd -v
b'net':b'outbound_message' [sema 0x0]
  1 location(s)
  6 argument(s)
```

Tracepoints added to the UTXO cache (`CCoinsViewCache`):

```diff
@@ -7,6 +7,7 @@
 #include <consensus/consensus.h>
 #include <logging.h>
 #include <random.h>
+#include <util/trace.h>
 #include <version.h>

 bool CCoinsView::GetCoin(const COutPoint &outpoint, Coin &coin) const { return false; }
@@ -98,6 +99,12 @@ void CCoinsViewCache::AddCoin(const COutPoint &outpoint, Coin&& coin, bool possi
         it->second.coin = std::move(coin);
         it->second.flags |= CCoinsCacheEntry::DIRTY | (fresh ? CCoinsCacheEntry::FRESH : 0);
         cachedCoinsUsage += it->second.coin.DynamicMemoryUsage();
+        TRACE5(utxocache, add,
+               outpoint.hash.data(),
+               (uint32_t)outpoint.n,
+               (uint32_t)coin.nHeight,
+               (int64_t)coin.out.nValue,
+               (bool)coin.IsCoinBase());
 }

 void CCoinsViewCache::EmplaceCoinInternalDANGER(COutPoint&& outpoint, Coin&& coin) {
@@ -123,6 +130,12 @@ bool CCoinsViewCache::SpendCoin(const COutPoint &outpoint, Coin* moveout) {
     CCoinsMap::iterator it = FetchCoin(outpoint);
     if (it == cacheCoins.end()) return false;
     cachedCoinsUsage -= it->second.coin.DynamicMemoryUsage();
+    TRACE5(utxocache, spent,
+           outpoint.hash.data(),
+           (uint32_t)outpoint.n,
+           (uint32_t)it->second.coin.nHeight,
+           (int64_t)it->second.coin.out.nValue,
+           (bool)it->second.coin.IsCoinBase());
     if (moveout) {
         *moveout = std::move(it->second.coin);
     }
@@ -273,6 +286,12 @@ void CCoinsViewCache::Uncache(const COutPoint& hash)
     CCoinsMap::iterator it = cacheCoins.find(hash);
     if (it != cacheCoins.end() && it->second.flags == 0) {
         cachedCoinsUsage -= it->second.coin.DynamicMemoryUsage();
+        TRACE5(utxocache, uncache,
+               hash.hash.data(),
+               (uint32_t)hash.n,
+               (uint32_t)it->second.coin.nHeight,
+               (int64_t)it->second.coin.out.nValue,
+               (bool)it->second.coin.IsCoinBase());
         cacheCoins.erase(it);
     }
 }
```

Outbound message tracepoint in `CConnman::PushMessage`:

```diff
@@ -32,6 +32,7 @@
 #include <util/system.h>
 #include <util/thread.h>
 #include <util/time.h>
+#include <util/trace.h>
 #include <util/translation.h>
 #include <util/wpipe.h>
@@ -4276,10 +4277,20 @@ void CConnman::PushMessage(CNode* pnode, CSerializedNetMsg&& msg)
 {
     AssertLockNotHeld(m_total_bytes_sent_mutex);
     size_t nMessageSize = msg.data.size();
-    LogPrint(BCLog::NET, "sending %s (%d bytes) peer=%d\n", SanitizeString(msg.m_type), nMessageSize, pnode->GetId());
+    LogPrint(BCLog::NET, "sending %s (%d bytes) peer=%d\n", msg.m_type, nMessageSize, pnode->GetId());
     if (gArgs.GetBoolArg("-capturemessages", false)) {
         CaptureMessage(pnode->addr, msg.m_type, msg.data, /* incoming */ false);
     }
+
+    TRACE6(net, outbound_message,
+        pnode->GetId(),
+        pnode->m_addr_name.c_str(),
+        pnode->ConnectionTypeAsString().c_str(),
+        msg.m_type.c_str(),
+        msg.data.size(),
+        msg.data.data()
+    );
+
     statsClient.count(strprintf("bandwidth.message.%s.bytesSent", msg.m_type), nMessageSize, 1.0f);
     statsClient.inc(strprintf("message.sent.%s", msg.m_type), 1.0f);
```

Inbound message tracepoint in `PeerManagerImpl::ProcessMessages`:

```diff
@@ -33,6 +33,7 @@
 #include <util/check.h> // For NDEBUG compile time check
 #include <util/system.h>
 #include <util/strencodings.h>
+#include <util/trace.h>

 #include <algorithm>
 #include <atomic>
@@ -5030,6 +5031,15 @@ bool PeerManagerImpl::ProcessMessages(CNode* pfrom, std::atomic<bool>& interrupt
     }
     CNetMessage& msg(msgs.front());

+    TRACE6(net, inbound_message,
+        pfrom->GetId(),
+        pfrom->m_addr_name.c_str(),
+        pfrom->ConnectionTypeAsString().c_str(),
+        msg.m_type.c_str(),
+        msg.m_recv.size(),
+        msg.m_recv.data()
+    );
+
     if (gArgs.GetBoolArg("-capturemessages", false)) {
         CaptureMessage(pfrom->addr, msg.m_type, MakeUCharSpan(msg.m_recv), /* incoming */ true);
     }
```

Block-connected and UTXO-cache-flush tracepoints in validation:

```diff
@@ -46,6 +46,7 @@
 #include <util/check.h> // For NDEBUG compile time check
 #include <util/hasher.h>
 #include <util/strencodings.h>
+#include <util/trace.h>
 #include <util/translation.h>
 #include <util/system.h>
 #include <validationinterface.h>
@@ -2302,6 +2303,15 @@ bool CChainState::ConnectBlock(const CBlock& block, BlockValidationState& state,
     statsClient.gauge("blocks.tip.NumTransactions", block.vtx.size(), 1.0f);
     statsClient.gauge("blocks.tip.SigOps", nSigOps, 1.0f);

+    TRACE6(validation, block_connected,
+        block.GetHash().data(),
+        pindex->nHeight,
+        block.vtx.size(),
+        nInputs,
+        nSigOps,
+        GetTimeMicros() - nTimeStart // in microseconds (µs)
+    );
+
     return true;
 }
@@ -2461,6 +2471,12 @@ bool CChainState::FlushStateToDisk(
             }
             nLastFlush = nNow;
             full_flush_completed = true;
+            TRACE5(utxocache, flush,
+                   (int64_t)(GetTimeMicros() - nNow.count()), // in microseconds (µs)
+                   (u_int32_t)mode,
+                   (u_int64_t)coins_count,
+                   (u_int64_t)coins_mem_usage,
+                   (bool)fFlushForPrune);
         }
     }

     if (full_flush_completed) {
```

Advertising USDT tracepoint support to the functional test framework:

```diff
@@ -23,3 +23,4 @@ RPCAUTH=@abs_top_srcdir@/share/rpcauth/rpcauth.py
 @BUILD_BITCOIND_TRUE@ENABLE_BITCOIND=true
 @ENABLE_FUZZ_TRUE@ENABLE_FUZZ=true
 @ENABLE_ZMQ_TRUE@ENABLE_ZMQ=true
+@ENABLE_USDT_TRACEPOINTS_TRUE@ENABLE_USDT_TRACEPOINTS=true
```

View File

@ -0,0 +1,171 @@
#!/usr/bin/env python3
# Copyright (c) 2022 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
""" Tests the net:* tracepoint API interface.
See https://github.com/dashpay/dash/blob/develop/doc/tracing.md#context-net
"""
import ctypes
from io import BytesIO
# Test will be skipped if we don't have bcc installed
try:
from bcc import BPF, USDT # type: ignore[import]
except ImportError:
pass
from test_framework.messages import msg_version
from test_framework.p2p import P2PInterface
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import assert_equal
# Tor v3 addresses are 62 chars + 6 chars for the port (':12345').
MAX_PEER_ADDR_LENGTH = 68
MAX_PEER_CONN_TYPE_LENGTH = 20
MAX_MSG_TYPE_LENGTH = 20
# We won't process messages larger than 150 byte in this test. For reading
# larger messanges see contrib/tracing/log_raw_p2p_msgs.py
MAX_MSG_DATA_LENGTH = 150
net_tracepoints_program = """
#include <uapi/linux/ptrace.h>
#define MAX_PEER_ADDR_LENGTH {}
#define MAX_PEER_CONN_TYPE_LENGTH {}
#define MAX_MSG_TYPE_LENGTH {}
#define MAX_MSG_DATA_LENGTH {}
""".format(
MAX_PEER_ADDR_LENGTH,
MAX_PEER_CONN_TYPE_LENGTH,
MAX_MSG_TYPE_LENGTH,
MAX_MSG_DATA_LENGTH
) + """
#define MIN(a,b) ({ __typeof__ (a) _a = (a); __typeof__ (b) _b = (b); _a < _b ? _a : _b; })
struct p2p_message
{
u64 peer_id;
char peer_addr[MAX_PEER_ADDR_LENGTH];
char peer_conn_type[MAX_PEER_CONN_TYPE_LENGTH];
char msg_type[MAX_MSG_TYPE_LENGTH];
u64 msg_size;
u8 msg[MAX_MSG_DATA_LENGTH];
};
BPF_PERF_OUTPUT(inbound_messages);
int trace_inbound_message(struct pt_regs *ctx) {
struct p2p_message msg = {};
bpf_usdt_readarg(1, ctx, &msg.peer_id);
bpf_usdt_readarg_p(2, ctx, &msg.peer_addr, MAX_PEER_ADDR_LENGTH);
bpf_usdt_readarg_p(3, ctx, &msg.peer_conn_type, MAX_PEER_CONN_TYPE_LENGTH);
bpf_usdt_readarg_p(4, ctx, &msg.msg_type, MAX_MSG_TYPE_LENGTH);
bpf_usdt_readarg(5, ctx, &msg.msg_size);
bpf_usdt_readarg_p(6, ctx, &msg.msg, MIN(msg.msg_size, MAX_MSG_DATA_LENGTH));
inbound_messages.perf_submit(ctx, &msg, sizeof(msg));
return 0;
}
BPF_PERF_OUTPUT(outbound_messages);
int trace_outbound_message(struct pt_regs *ctx) {
struct p2p_message msg = {};
bpf_usdt_readarg(1, ctx, &msg.peer_id);
bpf_usdt_readarg_p(2, ctx, &msg.peer_addr, MAX_PEER_ADDR_LENGTH);
bpf_usdt_readarg_p(3, ctx, &msg.peer_conn_type, MAX_PEER_CONN_TYPE_LENGTH);
bpf_usdt_readarg_p(4, ctx, &msg.msg_type, MAX_MSG_TYPE_LENGTH);
bpf_usdt_readarg(5, ctx, &msg.msg_size);
bpf_usdt_readarg_p(6, ctx, &msg.msg, MIN(msg.msg_size, MAX_MSG_DATA_LENGTH));
outbound_messages.perf_submit(ctx, &msg, sizeof(msg));
return 0;
};
"""
class NetTracepointTest(BitcoinTestFramework):
def set_test_params(self):
self.num_nodes = 1
def skip_test_if_missing_module(self):
self.skip_if_platform_not_linux()
self.skip_if_no_bitcoind_tracepoints()
self.skip_if_no_python_bcc()
self.skip_if_no_bpf_permissions()
def run_test(self):
# Tests the net:inbound_message and net:outbound_message tracepoints
# See https://github.com/dashpay/dash/blob/develop/doc/tracing.md#context-net
class P2PMessage(ctypes.Structure):
_fields_ = [
("peer_id", ctypes.c_uint64),
("peer_addr", ctypes.c_char * MAX_PEER_ADDR_LENGTH),
("peer_conn_type", ctypes.c_char * MAX_PEER_CONN_TYPE_LENGTH),
("msg_type", ctypes.c_char * MAX_MSG_TYPE_LENGTH),
("msg_size", ctypes.c_uint64),
("msg", ctypes.c_ubyte * MAX_MSG_DATA_LENGTH),
]
def __repr__(self):
return f"P2PMessage(peer={self.peer_id}, addr={self.peer_addr.decode('utf-8')}, conn_type={self.peer_conn_type.decode('utf-8')}, msg_type={self.msg_type.decode('utf-8')}, msg_size={self.msg_size})"
self.log.info(
"hook into the net:inbound_message and net:outbound_message tracepoints")
ctx = USDT(path=str(self.options.bitcoind))
ctx.enable_probe(probe="net:inbound_message",
fn_name="trace_inbound_message")
ctx.enable_probe(probe="net:outbound_message",
fn_name="trace_outbound_message")
bpf = BPF(text=net_tracepoints_program, usdt_contexts=[ctx], debug=0)
# The handle_* function is a ctypes callback function called from C. When
# we assert in the handle_* function, the AssertError doesn't propagate
# back to Python. The exception is ignored. We manually count and assert
# that the handle_* functions succeeded.
EXPECTED_INOUTBOUND_VERSION_MSG = 1
checked_inbound_version_msg = 0
checked_outbound_version_msg = 0
def check_p2p_message(event, inbound):
nonlocal checked_inbound_version_msg, checked_outbound_version_msg
if event.msg_type.decode("utf-8") == "version":
self.log.info(
f"check_p2p_message(): {'inbound' if inbound else 'outbound'} {event}")
peer = self.nodes[0].getpeerinfo()[0]
msg = msg_version()
msg.deserialize(BytesIO(bytes(event.msg[:event.msg_size])))
assert_equal(peer["id"], event.peer_id, peer["id"])
assert_equal(peer["addr"], event.peer_addr.decode("utf-8"))
assert_equal(peer["connection_type"],
event.peer_conn_type.decode("utf-8"))
if inbound:
checked_inbound_version_msg += 1
else:
checked_outbound_version_msg += 1
def handle_inbound(_, data, __):
event = ctypes.cast(data, ctypes.POINTER(P2PMessage)).contents
check_p2p_message(event, True)
def handle_outbound(_, data, __):
event = ctypes.cast(data, ctypes.POINTER(P2PMessage)).contents
check_p2p_message(event, False)
bpf["inbound_messages"].open_perf_buffer(handle_inbound)
bpf["outbound_messages"].open_perf_buffer(handle_outbound)
self.log.info("connect a P2P test node to our dashd node")
test_node = P2PInterface()
self.nodes[0].add_p2p_connection(test_node)
bpf.perf_buffer_poll(timeout=200)
self.log.info(
"check that we got both an inbound and outbound version message")
assert_equal(EXPECTED_INOUTBOUND_VERSION_MSG,
checked_inbound_version_msg)
assert_equal(EXPECTED_INOUTBOUND_VERSION_MSG,
checked_outbound_version_msg)
bpf.cleanup()
if __name__ == '__main__':
NetTracepointTest().main()

New functional test exercising the `utxocache:*` tracepoints:

```python
#!/usr/bin/env python3
# Copyright (c) 2022 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.

""" Tests the utxocache:* tracepoint API interface.
    See https://github.com/dashpay/dash/blob/develop/doc/tracing.md#context-utxocache
"""

import ctypes

# Test will be skipped if we don't have bcc installed
try:
    from bcc import BPF, USDT # type: ignore[import]
except ImportError:
    pass

from test_framework.messages import COIN
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import assert_equal
from test_framework.wallet import MiniWallet

utxocache_changes_program = """
#include <uapi/linux/ptrace.h>

typedef signed long long i64;

struct utxocache_change
{
    char txid[32];
    u32  index;
    u32  height;
    i64  value;
    bool is_coinbase;
};

BPF_PERF_OUTPUT(utxocache_add);
int trace_utxocache_add(struct pt_regs *ctx) {
    struct utxocache_change add = {};
    bpf_usdt_readarg_p(1, ctx, &add.txid, 32);
    bpf_usdt_readarg(2, ctx, &add.index);
    bpf_usdt_readarg(3, ctx, &add.height);
    bpf_usdt_readarg(4, ctx, &add.value);
    bpf_usdt_readarg(5, ctx, &add.is_coinbase);
    utxocache_add.perf_submit(ctx, &add, sizeof(add));
    return 0;
}

BPF_PERF_OUTPUT(utxocache_spent);
int trace_utxocache_spent(struct pt_regs *ctx) {
    struct utxocache_change spent = {};
    bpf_usdt_readarg_p(1, ctx, &spent.txid, 32);
    bpf_usdt_readarg(2, ctx, &spent.index);
    bpf_usdt_readarg(3, ctx, &spent.height);
    bpf_usdt_readarg(4, ctx, &spent.value);
    bpf_usdt_readarg(5, ctx, &spent.is_coinbase);
    utxocache_spent.perf_submit(ctx, &spent, sizeof(spent));
    return 0;
}

BPF_PERF_OUTPUT(utxocache_uncache);
int trace_utxocache_uncache(struct pt_regs *ctx) {
    struct utxocache_change uncache = {};
    bpf_usdt_readarg_p(1, ctx, &uncache.txid, 32);
    bpf_usdt_readarg(2, ctx, &uncache.index);
    bpf_usdt_readarg(3, ctx, &uncache.height);
    bpf_usdt_readarg(4, ctx, &uncache.value);
    bpf_usdt_readarg(5, ctx, &uncache.is_coinbase);
    utxocache_uncache.perf_submit(ctx, &uncache, sizeof(uncache));
    return 0;
}
"""

utxocache_flushes_program = """
#include <uapi/linux/ptrace.h>

typedef signed long long i64;

struct utxocache_flush
{
    i64  duration;
    u32  mode;
    u64  size;
    u64  memory;
    bool for_prune;
};

BPF_PERF_OUTPUT(utxocache_flush);
int trace_utxocache_flush(struct pt_regs *ctx) {
    struct utxocache_flush flush = {};
    bpf_usdt_readarg(1, ctx, &flush.duration);
    bpf_usdt_readarg(2, ctx, &flush.mode);
    bpf_usdt_readarg(3, ctx, &flush.size);
    bpf_usdt_readarg(4, ctx, &flush.memory);
    bpf_usdt_readarg(5, ctx, &flush.for_prune);
    utxocache_flush.perf_submit(ctx, &flush, sizeof(flush));
    return 0;
}
"""

FLUSHMODE_NAME = {
    0: "NONE",
    1: "IF_NEEDED",
    2: "PERIODIC",
    3: "ALWAYS",
}


class UTXOCacheChange(ctypes.Structure):
    _fields_ = [
        ("txid", ctypes.c_ubyte * 32),
        ("index", ctypes.c_uint32),
        ("height", ctypes.c_uint32),
        ("value", ctypes.c_uint64),
        ("is_coinbase", ctypes.c_bool),
    ]

    def __repr__(self):
        return f"UTXOCacheChange(outpoint={bytes(self.txid[::-1]).hex()}:{self.index}, height={self.height}, value={self.value}sat, is_coinbase={self.is_coinbase})"


class UTXOCacheFlush(ctypes.Structure):
    _fields_ = [
        ("duration", ctypes.c_int64),
        ("mode", ctypes.c_uint32),
        ("size", ctypes.c_uint64),
        ("memory", ctypes.c_uint64),
        ("for_prune", ctypes.c_bool),
    ]

    def __repr__(self):
        return f"UTXOCacheFlush(duration={self.duration}, mode={FLUSHMODE_NAME[self.mode]}, size={self.size}, memory={self.memory}, for_prune={self.for_prune})"


class UTXOCacheTracepointTest(BitcoinTestFramework):
    def set_test_params(self):
        self.setup_clean_chain = False
        self.num_nodes = 1
        self.extra_args = [["-txindex"]]

    def skip_test_if_missing_module(self):
        self.skip_if_platform_not_linux()
        self.skip_if_no_bitcoind_tracepoints()
        self.skip_if_no_python_bcc()
        self.skip_if_no_bpf_permissions()

    def run_test(self):
        self.wallet = MiniWallet(self.nodes[0])
        self.wallet.generate(101)

        self.test_uncache()
        self.test_add_spent()
        self.test_flush()

    def test_uncache(self):
        """ Tests the utxocache:uncache tracepoint API.
        https://github.com/dashpay/dash/blob/develop/doc/tracing.md#tracepoint-utxocacheuncache
        """
        # To trigger a UTXO uncache from the cache, we create an invalid transaction
        # spending a not-cached, but existing UTXO. During transaction validation,
        # this UTXO is added to the utxo cache, but as the transaction is invalid,
        # it's uncached again.
        self.log.info("testing the utxocache:uncache tracepoint API")

        # Retrieve the txid for the UTXO created in the first block. This UTXO is not
        # in our UTXO cache.
        EARLY_BLOCK_HEIGHT = 1
        block_1_hash = self.nodes[0].getblockhash(EARLY_BLOCK_HEIGHT)
        block_1 = self.nodes[0].getblock(block_1_hash)
        block_1_coinbase_txid = block_1["tx"][0]

        # Create a transaction and invalidate it by changing the txid of the previous
        # output to the coinbase txid of the block at height 1.
        invalid_tx = self.wallet.create_self_transfer(
            from_node=self.nodes[0])["tx"]
        invalid_tx.vin[0].prevout.hash = int(block_1_coinbase_txid, 16)

        self.log.info("hooking into the utxocache:uncache tracepoint")
        ctx = USDT(path=str(self.options.bitcoind))
        ctx.enable_probe(probe="utxocache:uncache",
                         fn_name="trace_utxocache_uncache")
        bpf = BPF(text=utxocache_changes_program, usdt_contexts=[ctx], debug=0)

        # The handle_* function is a ctypes callback function called from C. When
        # we assert in the handle_* function, the AssertError doesn't propagate
        # back to Python. The exception is ignored. We manually count and assert
        # that the handle_* functions succeeded.
        EXPECTED_HANDLE_UNCACHE_SUCCESS = 1
        handle_uncache_succeeds = 0

        def handle_utxocache_uncache(_, data, __):
            nonlocal handle_uncache_succeeds
            event = ctypes.cast(data, ctypes.POINTER(UTXOCacheChange)).contents
            self.log.info(f"handle_utxocache_uncache(): {event}")
            assert_equal(block_1_coinbase_txid, bytes(event.txid[::-1]).hex())
            assert_equal(0, event.index)  # prevout index
            assert_equal(EARLY_BLOCK_HEIGHT, event.height)
            assert_equal(500 * COIN, event.value)
            assert_equal(True, event.is_coinbase)
            handle_uncache_succeeds += 1

        bpf["utxocache_uncache"].open_perf_buffer(handle_utxocache_uncache)

        self.log.info(
            "testmempoolaccept the invalid transaction to trigger a UTXO-cache uncache")
        result = self.nodes[0].testmempoolaccept(
            [invalid_tx.serialize().hex()])[0]
        assert_equal(result["allowed"], False)

        bpf.perf_buffer_poll(timeout=100)
        bpf.cleanup()

        self.log.info(
            f"check that we successfully traced {EXPECTED_HANDLE_UNCACHE_SUCCESS} uncaches")
        assert_equal(EXPECTED_HANDLE_UNCACHE_SUCCESS, handle_uncache_succeeds)

    def test_add_spent(self):
        """ Tests the utxocache:add utxocache:spent tracepoint API
        See https://github.com/dashpay/dash/blob/develop/doc/tracing.md#tracepoint-utxocacheadd
        and https://github.com/dashpay/dash/blob/develop/doc/tracing.md#tracepoint-utxocachespent
        """
        self.log.info(
            "test the utxocache:add and utxocache:spent tracepoint API")

        self.log.info("create an unconfirmed transaction")
        self.wallet.send_self_transfer(from_node=self.nodes[0])

        # We mine a block to trace changes (add/spent) to the active in-memory cache
        # of the UTXO set (see CoinsTip() of CCoinsViewCache). However, in some cases
        # temporary clones of the active cache are made. For example, during mining with
        # the generate RPC call, the block is first tested in TestBlockValidity(). There,
        # a clone of the active cache is modified during a test ConnectBlock() call.
        # These are implementation details we don't want to test here. Thus, after
        # mining, we invalidate the block, start the tracing, and then trace the cache
        # changes to the active utxo cache.
        self.log.info("mine and invalidate a block that is later reconsidered")
        block_hash = self.wallet.generate(1)[0]
        self.nodes[0].invalidateblock(block_hash)

        self.log.info(
            "hook into the utxocache:add and utxocache:spent tracepoints")
        ctx = USDT(path=str(self.options.bitcoind))
        ctx.enable_probe(probe="utxocache:add", fn_name="trace_utxocache_add")
        ctx.enable_probe(probe="utxocache:spent",
                         fn_name="trace_utxocache_spent")
        bpf = BPF(text=utxocache_changes_program, usdt_contexts=[ctx], debug=0)

        # The handle_* function is a ctypes callback function called from C. When
        # we assert in the handle_* function, the AssertError doesn't propagate
        # back to Python. The exception is ignored. We manually count and assert
        # that the handle_* functions succeeded.
        EXPECTED_HANDLE_ADD_SUCCESS = 2
        EXPECTED_HANDLE_SPENT_SUCCESS = 1
        handle_add_succeeds = 0
        handle_spent_succeeds = 0

        expected_utxocache_spents = []
        expected_utxocache_adds = []

        def handle_utxocache_add(_, data, __):
            nonlocal handle_add_succeeds
            event = ctypes.cast(data, ctypes.POINTER(UTXOCacheChange)).contents
            self.log.info(f"handle_utxocache_add(): {event}")
            add = expected_utxocache_adds.pop(0)
            assert_equal(add["txid"], bytes(event.txid[::-1]).hex())
            assert_equal(add["index"], event.index)
            assert_equal(add["height"], event.height)
            assert_equal(add["value"], event.value)
            assert_equal(add["is_coinbase"], event.is_coinbase)
            handle_add_succeeds += 1

        def handle_utxocache_spent(_, data, __):
            nonlocal handle_spent_succeeds
            event = ctypes.cast(data, ctypes.POINTER(UTXOCacheChange)).contents
            self.log.info(f"handle_utxocache_spent(): {event}")
            spent = expected_utxocache_spents.pop(0)
            assert_equal(spent["txid"], bytes(event.txid[::-1]).hex())
            assert_equal(spent["index"], event.index)
            assert_equal(spent["height"], event.height)
            assert_equal(spent["value"], event.value)
            assert_equal(spent["is_coinbase"], event.is_coinbase)
            handle_spent_succeeds += 1

        bpf["utxocache_add"].open_perf_buffer(handle_utxocache_add)
        bpf["utxocache_spent"].open_perf_buffer(handle_utxocache_spent)

        # We trigger a block re-connection. This causes changes (add/spent)
        # to the UTXO-cache which in turn triggers the tracepoints.
        self.log.info("reconsider the previously invalidated block")
        self.nodes[0].reconsiderblock(block_hash)

        block = self.nodes[0].getblock(block_hash, 2)
        for (block_index, tx) in enumerate(block["tx"]):
            for vin in tx["vin"]:
                if "coinbase" not in vin:
                    prevout_tx = self.nodes[0].getrawtransaction(
                        vin["txid"], True)
                    prevout_tx_block = self.nodes[0].getblockheader(
                        prevout_tx["blockhash"])
                    spends_coinbase = "coinbase" in prevout_tx["vin"][0]
                    expected_utxocache_spents.append({
                        "txid": vin["txid"],
                        "index": vin["vout"],
                        "height": prevout_tx_block["height"],
                        "value": int(prevout_tx["vout"][vin["vout"]]["value"] * COIN),
                        "is_coinbase": spends_coinbase,
                    })
            for (i, vout) in enumerate(tx["vout"]):
                if vout["scriptPubKey"]["type"] != "nulldata":
                    expected_utxocache_adds.append({
                        "txid": tx["txid"],
                        "index": i,
                        "height": block["height"],
                        "value": int(vout["value"] * COIN),
                        "is_coinbase": block_index == 0,
                    })

        assert_equal(EXPECTED_HANDLE_ADD_SUCCESS, len(expected_utxocache_adds))
        assert_equal(EXPECTED_HANDLE_SPENT_SUCCESS,
                     len(expected_utxocache_spents))

        bpf.perf_buffer_poll(timeout=200)
        bpf.cleanup()

        self.log.info(
            f"check that we successfully traced {EXPECTED_HANDLE_ADD_SUCCESS} adds and {EXPECTED_HANDLE_SPENT_SUCCESS} spent")
        assert_equal(0, len(expected_utxocache_adds))
        assert_equal(0, len(expected_utxocache_spents))
        assert_equal(EXPECTED_HANDLE_ADD_SUCCESS, handle_add_succeeds)
        assert_equal(EXPECTED_HANDLE_SPENT_SUCCESS, handle_spent_succeeds)

    def test_flush(self):
        """ Tests the utxocache:flush tracepoint API.
        See https://github.com/dashpay/dash/blob/develop/doc/tracing.md#tracepoint-utxocacheflush"""
        self.log.info("test the utxocache:flush tracepoint API")
        self.log.info("hook into the utxocache:flush tracepoint")
        ctx = USDT(path=str(self.options.bitcoind))
        ctx.enable_probe(probe="utxocache:flush",
                         fn_name="trace_utxocache_flush")
        bpf = BPF(text=utxocache_flushes_program, usdt_contexts=[ctx], debug=0)

        # The handle_* function is a ctypes callback function called from C. When
        # we assert in the handle_* function, the AssertError doesn't propagate
        # back to Python. The exception is ignored. We manually count and assert
        # that the handle_* functions succeeded.
        EXPECTED_HANDLE_FLUSH_SUCCESS = 3
        handle_flush_succeeds = 0
        possible_cache_sizes = set()
        expected_flushes = []

        def handle_utxocache_flush(_, data, __):
            nonlocal handle_flush_succeeds
            event = ctypes.cast(data, ctypes.POINTER(UTXOCacheFlush)).contents
            self.log.info(f"handle_utxocache_flush(): {event}")
            expected = expected_flushes.pop(0)
            assert_equal(expected["mode"], FLUSHMODE_NAME[event.mode])
            possible_cache_sizes.remove(event.size)  # fails if size not in set
            # sanity checks only
            assert event.memory > 0
            assert event.duration > 0
            handle_flush_succeeds += 1

        bpf["utxocache_flush"].open_perf_buffer(handle_utxocache_flush)

        self.log.info("stop the node to flush the UTXO cache")
        UTXOS_IN_CACHE = 104  # might need to be changed if the earlier tests are modified
        # A node shutdown causes two flushes. One that flushes UTXOS_IN_CACHE
        # UTXOs and one that flushes 0 UTXOs. Normally the 0-UTXO-flush is the
        # second flush, however it can happen that the order changes.
        possible_cache_sizes = {UTXOS_IN_CACHE, 0}
        flush_for_shutdown = {"mode": "ALWAYS", "for_prune": False}
        expected_flushes.extend([flush_for_shutdown, flush_for_shutdown])

        self.stop_node(0)
        bpf.perf_buffer_poll(timeout=200)

        self.log.info("check that we don't expect additional flushes")
        assert_equal(0, len(expected_flushes))
        assert_equal(0, len(possible_cache_sizes))

        self.log.info("restart the node with -prune")
        self.start_node(0, ["-fastprune=1", "-prune=1"])
        BLOCKS_TO_MINE = 450
        self.log.info(f"mine {BLOCKS_TO_MINE} blocks to be able to prune")
        self.wallet.generate(BLOCKS_TO_MINE)
        # we added BLOCKS_TO_MINE coinbase UTXOs to the cache
        possible_cache_sizes = {BLOCKS_TO_MINE}
        expected_flushes.append(
            {"mode": "NONE", "for_prune": True, "size_fn": lambda x: x == BLOCKS_TO_MINE})

        self.log.info("prune blockchain to trigger a flush for pruning")
        self.nodes[0].pruneblockchain(415)

        bpf.perf_buffer_poll(timeout=500)
        bpf.cleanup()

        self.log.info(
            "check that we don't expect additional flushes and that the handle_* function succeeded")
        assert_equal(0, len(expected_flushes))
        assert_equal(0, len(possible_cache_sizes))
        assert_equal(EXPECTED_HANDLE_FLUSH_SUCCESS, handle_flush_succeeds)
        self.stop_node(0, expected_stderr='Warning: You are starting with governance validation disabled. This is expected because you are running a pruned node.')


if __name__ == '__main__':
    UTXOCacheTracepointTest().main()
```

View File

@@ -0,0 +1,137 @@
#!/usr/bin/env python3
# Copyright (c) 2022 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.

""" Tests the validation:* tracepoint API interface.
    See https://github.com/dashpay/dash/blob/develop/doc/tracing.md#context-validation
"""

import ctypes

# Test will be skipped if we don't have bcc installed
try:
    from bcc import BPF, USDT  # type: ignore[import]
except ImportError:
    pass

from test_framework.address import ADDRESS_BCRT1_UNSPENDABLE
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import assert_equal

validation_blockconnected_program = """
#include <uapi/linux/ptrace.h>

typedef signed long long i64;

struct connected_block
{
    char    hash[32];
    int     height;
    i64     transactions;
    int     inputs;
    i64     sigops;
    u64     duration;
};

BPF_PERF_OUTPUT(block_connected);
int trace_block_connected(struct pt_regs *ctx) {
    struct connected_block block = {};
    bpf_usdt_readarg_p(1, ctx, &block.hash, 32);
    bpf_usdt_readarg(2, ctx, &block.height);
    bpf_usdt_readarg(3, ctx, &block.transactions);
    bpf_usdt_readarg(4, ctx, &block.inputs);
    bpf_usdt_readarg(5, ctx, &block.sigops);
    bpf_usdt_readarg(6, ctx, &block.duration);
    block_connected.perf_submit(ctx, &block, sizeof(block));
    return 0;
}
"""


class ValidationTracepointTest(BitcoinTestFramework):
    def set_test_params(self):
        self.num_nodes = 1

    def skip_test_if_missing_module(self):
        self.skip_if_platform_not_linux()
        self.skip_if_no_bitcoind_tracepoints()
        self.skip_if_no_python_bcc()
        self.skip_if_no_bpf_permissions()

    def run_test(self):
        # Tests the validation:block_connected tracepoint by generating blocks
        # and comparing the values passed in the tracepoint arguments with the
        # blocks.
        # See https://github.com/dashpay/dash/blob/develop/doc/tracing.md#tracepoint-validationblock_connected

        class Block(ctypes.Structure):
            _fields_ = [
                ("hash", ctypes.c_ubyte * 32),
                ("height", ctypes.c_int),
                ("transactions", ctypes.c_int64),
                ("inputs", ctypes.c_int),
                ("sigops", ctypes.c_int64),
                ("duration", ctypes.c_uint64),
            ]

            def __repr__(self):
                return "ConnectedBlock(hash=%s height=%d, transactions=%d, inputs=%d, sigops=%d, duration=%d)" % (
                    bytes(self.hash[::-1]).hex(),
                    self.height,
                    self.transactions,
                    self.inputs,
                    self.sigops,
                    self.duration)

        # The handle_* function is a ctypes callback function called from C. When
        # we assert in the handle_* function, the AssertError doesn't propagate
        # back to Python. The exception is ignored. We manually count and assert
        # that the handle_* functions succeeded.
        BLOCKS_EXPECTED = 2
        blocks_checked = 0
        expected_blocks = dict()

        self.log.info("hook into the validation:block_connected tracepoint")
        ctx = USDT(path=str(self.options.bitcoind))
        ctx.enable_probe(probe="validation:block_connected",
                         fn_name="trace_block_connected")
        bpf = BPF(text=validation_blockconnected_program,
                  usdt_contexts=[ctx], debug=0)

        def handle_blockconnected(_, data, __):
            nonlocal expected_blocks, blocks_checked
            event = ctypes.cast(data, ctypes.POINTER(Block)).contents
            self.log.info(f"handle_blockconnected(): {event}")
            block_hash = bytes(event.hash[::-1]).hex()
            block = expected_blocks[block_hash]
            assert_equal(block["hash"], block_hash)
            assert_equal(block["height"], event.height)
            assert_equal(len(block["tx"]), event.transactions)
            assert_equal(len([tx["vin"] for tx in block["tx"]]), event.inputs)
            assert_equal(0, event.sigops)  # no sigops in coinbase tx
            # only plausibility checks
            assert(event.duration > 0)
            del expected_blocks[block_hash]
            blocks_checked += 1

        bpf["block_connected"].open_perf_buffer(
            handle_blockconnected)

        self.log.info(f"mine {BLOCKS_EXPECTED} blocks")
        block_hashes = self.nodes[0].generatetoaddress(
            BLOCKS_EXPECTED, ADDRESS_BCRT1_UNSPENDABLE)
        for block_hash in block_hashes:
            expected_blocks[block_hash] = self.nodes[0].getblock(block_hash, 2)

        bpf.perf_buffer_poll(timeout=200)
        bpf.cleanup()

        self.log.info(f"check that we traced {BLOCKS_EXPECTED} blocks")
        assert_equal(BLOCKS_EXPECTED, blocks_checked)
        assert_equal(0, len(expected_blocks))


if __name__ == '__main__':
    ValidationTracepointTest().main()
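The `bytes(event.hash[::-1]).hex()` conversion in the handler above exists because the tracepoint passes the block hash as 32 raw bytes in internal (little-endian) byte order, while RPC displays hashes byte-reversed. A self-contained sketch of that round trip, using a made-up 32-byte value in place of the bytes read via `bpf_usdt_readarg_p`:

```python
import ctypes

# Stand-in for the ctypes struct filled from the perf buffer; only the
# hash field matters for this sketch.
class Block(ctypes.Structure):
    _fields_ = [("hash", ctypes.c_ubyte * 32)]

raw = bytes(range(32))  # hypothetical internal-order hash bytes
block = Block()
ctypes.memmove(block.hash, raw, 32)

# Same conversion the handler uses to match RPC's display encoding:
block_hash = bytes(block.hash[::-1]).hex()
assert block_hash == raw[::-1].hex()
```

Skipping the reversal is a common pitfall: the hex string would then never match any hash returned by `getblock`.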

View File

@@ -6,7 +6,7 @@
 import time

-from test_framework.messages import msg_tx
+from test_framework.messages import msg_tx, msg_inv, CInv, MSG_TX
 from test_framework.p2p import P2PInterface, P2PTxInvStore
 from test_framework.test_framework import BitcoinTestFramework
 from test_framework.util import assert_equal
@@ -15,15 +15,13 @@ from test_framework.wallet import MiniWallet

 class P2PBlocksOnly(BitcoinTestFramework):
     def set_test_params(self):
-        self.setup_clean_chain = True
         self.num_nodes = 1
         self.extra_args = [["-blocksonly"]]

     def run_test(self):
         self.miniwallet = MiniWallet(self.nodes[0])
         # Add enough mature utxos to the wallet, so that all txs spend confirmed coins
-        self.miniwallet.generate(2)
-        self.nodes[0].generate(100)
+        self.miniwallet.rescan_utxos()

         self.blocksonly_mode_tests()
         self.blocks_relay_conn_tests()
@@ -35,11 +33,18 @@ class P2PBlocksOnly(BitcoinTestFramework):
         self.nodes[0].add_p2p_connection(P2PInterface())
         tx, txid, tx_hex = self.check_p2p_tx_violation()

+        self.log.info('Check that tx invs also violate the protocol')
+        self.nodes[0].add_p2p_connection(P2PInterface())
+        with self.nodes[0].assert_debug_log(['tx (0000000000000000000000000000000000000000000000000000000000001234) inv sent in violation of protocol, disconnecting peer']):
+            self.nodes[0].p2ps[0].send_message(msg_inv([CInv(t=MSG_TX, h=0x1234)]))
+            self.nodes[0].p2ps[0].wait_for_disconnect()
+            del self.nodes[0].p2ps[0]
+
         self.log.info('Check that txs from rpc are not rejected and relayed to other peers')
         tx_relay_peer = self.nodes[0].add_p2p_connection(P2PInterface())
         assert_equal(self.nodes[0].getpeerinfo()[0]['relaytxes'], True)

         assert_equal(self.nodes[0].testmempoolaccept([tx_hex])[0]['allowed'], True)
-        with self.nodes[0].assert_debug_log(['received getdata for: tx {} peer=1'.format(txid)]):
+        with self.nodes[0].assert_debug_log(['received getdata for: tx {} peer'.format(txid)]):
             self.nodes[0].sendrawtransaction(tx_hex)
             self.bump_mocktime(60)
             tx_relay_peer.wait_for_tx(txid)
@@ -83,7 +88,7 @@ class P2PBlocksOnly(BitcoinTestFramework):
         # Ensure we disconnect if a block-relay-only connection sends us a transaction
         self.nodes[0].add_outbound_p2p_connection(P2PInterface(), p2p_idx=0, connection_type="block-relay-only")
         assert_equal(self.nodes[0].getpeerinfo()[0]['relaytxes'], False)
-        _, txid, tx_hex = self.check_p2p_tx_violation(index=2)
+        _, txid, tx_hex = self.check_p2p_tx_violation()

         self.log.info("Check that txs from RPC are not sent to blockrelay connection")
         conn = self.nodes[0].add_outbound_p2p_connection(P2PTxInvStore(), p2p_idx=1, connection_type="block-relay-only")
@@ -96,11 +101,9 @@ class P2PBlocksOnly(BitcoinTestFramework):
         conn.sync_send_with_ping()
         assert(int(txid, 16) not in conn.get_invs())

-    def check_p2p_tx_violation(self, index=1):
+    def check_p2p_tx_violation(self):
         self.log.info('Check that txs from P2P are rejected and result in disconnect')
-        input_txid = self.nodes[0].getblock(self.nodes[0].getblockhash(index), 2)['tx'][0]['txid']
-        utxo_to_spend = self.miniwallet.get_utxo(txid=input_txid)
-        spendtx = self.miniwallet.create_self_transfer(from_node=self.nodes[0], utxo_to_spend=utxo_to_spend)
+        spendtx = self.miniwallet.create_self_transfer(from_node=self.nodes[0])

         with self.nodes[0].assert_debug_log(['tx sent in violation of protocol peer=0']):
             self.nodes[0].p2ps[0].send_message(msg_tx(spendtx['tx']))
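The new test case sends `msg_inv([CInv(t=MSG_TX, h=0x1234)])` and expects a disconnect. As a sketch of what actually goes over the wire there, the standard `inv` payload is a compact-size entry count followed by (type, hash) pairs, with the 32-byte hash little-endian; `MSG_TX` is type 1. The helper below is hypothetical and not part of the test framework:

```python
import struct

MSG_TX = 1  # inventory type for a transaction

def serialize_inv(inv_type: int, hash_int: int) -> bytes:
    """Serialize a one-entry inv payload: compact-size count, then
    a 4-byte little-endian type and a 32-byte little-endian hash."""
    count = b"\x01"  # counts below 253 are a single byte
    entry = struct.pack("<I", inv_type) + hash_int.to_bytes(32, "little")
    return count + entry

payload = serialize_inv(MSG_TX, 0x1234)
assert len(payload) == 1 + 4 + 32
```

This is why the debug-log assertion shows the hash padded out to 64 hex digits ending in `1234`: the integer `0x1234` occupies the low-order bytes of the 32-byte field.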

View File

@@ -12,6 +12,7 @@ from enum import Enum
 import argparse
 import logging
 import os
+import platform
 import pdb
 import random
 import re
@@ -955,6 +956,29 @@ class BitcoinTestFramework(metaclass=BitcoinTestMetaClass):
         except ImportError:
             raise SkipTest("python3-zmq module not available.")

+    def skip_if_no_python_bcc(self):
+        """Attempt to import the bcc package and skip the tests if the import fails."""
+        try:
+            import bcc  # type: ignore[import] # noqa: F401
+        except ImportError:
+            raise SkipTest("bcc python module not available")
+
+    def skip_if_no_bitcoind_tracepoints(self):
+        """Skip the running test if dashd has not been compiled with USDT tracepoint support."""
+        if not self.is_usdt_compiled():
+            raise SkipTest("dashd has not been built with USDT tracepoints enabled.")
+
+    def skip_if_no_bpf_permissions(self):
+        """Skip the running test if we don't have permissions to do BPF syscalls and load BPF maps."""
+        # check for 'root' permissions
+        if os.geteuid() != 0:
+            raise SkipTest("no permissions to use BPF (please review the tests carefully before running them with higher privileges)")
+
+    def skip_if_platform_not_linux(self):
+        """Skip the running test if we are not on a Linux platform"""
+        if platform.system() != "Linux":
+            raise SkipTest("not on a Linux system")
+
     def skip_if_no_bitcoind_zmq(self):
         """Skip the running test if dashd has not been compiled with zmq support."""
         if not self.is_zmq_compiled():
@@ -1019,6 +1043,10 @@ class BitcoinTestFramework(metaclass=BitcoinTestMetaClass):
         """Checks whether the zmq module was compiled."""
         return self.config["components"].getboolean("ENABLE_ZMQ")

+    def is_usdt_compiled(self):
+        """Checks whether the USDT tracepoints were compiled."""
+        return self.config["components"].getboolean("ENABLE_USDT_TRACEPOINTS")
+
     def is_sqlite_compiled(self):
         """Checks whether the wallet module was compiled with Sqlite support."""
         return self.config["components"].getboolean("USE_SQLITE")
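The four new `skip_*` helpers form a guard chain: USDT tests only run on Linux, as root, with the `bcc` module importable, against a tracepoint-enabled binary. A standalone sketch of that chain (the `SkipTest` class here is a stand-in for the framework's, and the binary-capability check is omitted since it needs the build config):

```python
import os
import platform

class SkipTest(Exception):
    """Stand-in for the test framework's SkipTest exception."""

def check_usdt_prerequisites():
    # Order matters: cheap environment checks first, import last.
    if platform.system() != "Linux":
        raise SkipTest("not on a Linux system")
    if os.geteuid() != 0:
        raise SkipTest("no permissions to use BPF")
    try:
        import bcc  # noqa: F401
    except ImportError:
        raise SkipTest("bcc python module not available")
```

Checking the platform before calling `os.geteuid()` also keeps the guard importable on systems where `geteuid` does not exist.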

View File

@@ -78,6 +78,14 @@ class MiniWallet:
         self._address = ADDRESS_BCRT1_P2SH_OP_TRUE
         self._scriptPubKey = bytes.fromhex(self._test_node.validateaddress(self._address)['scriptPubKey'])

+    def rescan_utxos(self):
+        """Drop all utxos and rescan the utxo set"""
+        self._utxos = []
+        res = self._test_node.scantxoutset(action="start", scanobjects=[f'raw({self._scriptPubKey.hex()})'])
+        assert_equal(True, res['success'])
+        for utxo in res['unspents']:
+            self._utxos.append({'txid': utxo['txid'], 'vout': utxo['vout'], 'value': utxo['amount'], 'height': utxo['height']})
+
     def scan_blocks(self, *, start=1, num):
         """Scan the blocks for self._address outputs and add them to self._utxos"""
         for i in range(start, start + num):
@@ -89,7 +97,7 @@ class MiniWallet:
         """Scan the tx for self._scriptPubKey outputs and add them to self._utxos"""
         for out in tx['vout']:
             if out['scriptPubKey']['hex'] == self._scriptPubKey.hex():
-                self._utxos.append({'txid': tx['txid'], 'vout': out['n'], 'value': out['value']})
+                self._utxos.append({'txid': tx['txid'], 'vout': out['n'], 'value': out['value'], 'height': 0})

     def sign_tx(self, tx, fixed_length=True):
         """Sign tx that has been created by MiniWallet in P2PK mode"""
@@ -110,8 +118,9 @@ class MiniWallet:
         """Generate blocks with coinbase outputs to the internal address, and append the outputs to the internal list"""
         blocks = self._test_node.generatetodescriptor(num_blocks, f'raw({self._scriptPubKey.hex()})')
         for b in blocks:
-            cb_tx = self._test_node.getblock(blockhash=b, verbosity=2)['tx'][0]
-            self._utxos.append({'txid': cb_tx['txid'], 'vout': 0, 'value': cb_tx['vout'][0]['value']})
+            block_info = self._test_node.getblock(blockhash=b, verbosity=2)
+            cb_tx = block_info['tx'][0]
+            self._utxos.append({'txid': cb_tx['txid'], 'vout': 0, 'value': cb_tx['vout'][0]['value'], 'height': block_info['height']})
         return blocks

     def get_address(self):
@@ -143,7 +152,7 @@ class MiniWallet:
     def create_self_transfer(self, *, fee_rate=Decimal("0.003"), from_node, utxo_to_spend=None, mempool_valid=True, locktime=0, sequence=0):
         """Create and return a tx with the specified fee_rate. Fee may be exact or at most one satoshi higher than needed."""
-        self._utxos = sorted(self._utxos, key=lambda k: k['value'])
+        self._utxos = sorted(self._utxos, key=lambda k: (k['value'], -k['height']))
         utxo_to_spend = utxo_to_spend or self._utxos.pop()  # Pick the largest utxo (if none provided) and hope it covers the fee
         if self._priv_key is None:
             vsize = Decimal(85)  # anyone-can-spend
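The sort-key change from `k['value']` to `(k['value'], -k['height'])` is what makes coin selection deterministic (bitcoin#23375): among equally-valued UTXOs, `pop()` now takes the one from the oldest block instead of whichever the UTXO-set scan happened to list last. A self-contained sketch with hypothetical UTXO dicts:

```python
# Sketch of the MiniWallet sort-key change: among equal-value UTXOs,
# prefer the oldest (lowest height). Values and txids are made up.
utxos = [
    {'txid': 'aa', 'vout': 0, 'value': 500, 'height': 102},
    {'txid': 'bb', 'vout': 0, 'value': 500, 'height': 101},
    {'txid': 'cc', 'vout': 0, 'value': 499, 'height': 103},
]

# Ascending by value, then descending by height, so pop() (the last
# element) yields the largest value and, on ties, the lowest height.
utxos = sorted(utxos, key=lambda k: (k['value'], -k['height']))
picked = utxos.pop()
```

Here `picked` is the UTXO from block 101: both 500-value coins outrank the 499-value one, and the older of the two wins the tie.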

View File

@@ -186,6 +186,9 @@ BASE_SCRIPTS = [
     'wallet_watchonly.py --usecli --legacy-wallet',
     'interface_http.py',
     'interface_rpc.py',
+    'interface_usdt_net.py',
+    'interface_usdt_utxocache.py',
+    'interface_usdt_validation.py',
     'rpc_psbt.py --legacy-wallet',
     'rpc_psbt.py --descriptors',
     'rpc_users.py',