From cacc31213b14864d32fda0d1ba0b8b41938d1d39 Mon Sep 17 00:00:00 2001 From: Kittywhiskers Van Gogh <63189531+kwvg@users.noreply.github.com> Date: Sun, 25 Aug 2024 15:13:52 +0000 Subject: [PATCH 01/14] merge bitcoin#22006: first tracepoints and documentation on User-Space, Statically Defined Tracing (USDT) --- contrib/tracing/README.md | 241 ++++++++++++++++++++ contrib/tracing/connectblock_benchmark.bt | 150 ++++++++++++ contrib/tracing/log_p2p_traffic.bt | 28 +++ contrib/tracing/log_raw_p2p_msgs.py | 180 +++++++++++++++ contrib/tracing/p2p_monitor.py | 250 ++++++++++++++++++++ doc/build-unix.md | 13 ++ doc/dependencies.md | 2 + doc/tracing.md | 266 ++++++++++++++++++++++ src/net.cpp | 13 +- src/net_processing.cpp | 10 + src/validation.cpp | 11 + 11 files changed, 1163 insertions(+), 1 deletion(-) create mode 100644 contrib/tracing/README.md create mode 100755 contrib/tracing/connectblock_benchmark.bt create mode 100755 contrib/tracing/log_p2p_traffic.bt create mode 100755 contrib/tracing/log_raw_p2p_msgs.py create mode 100755 contrib/tracing/p2p_monitor.py create mode 100644 doc/tracing.md diff --git a/contrib/tracing/README.md b/contrib/tracing/README.md new file mode 100644 index 0000000000..08cc7078df --- /dev/null +++ b/contrib/tracing/README.md @@ -0,0 +1,241 @@ +Example scripts for User-space, Statically Defined Tracing (USDT) +================================================================= + +This directory contains scripts showcasing User-space, Statically Defined +Tracing (USDT) support for Dash Core on Linux using. For more information on +USDT support in Dash Core see the [USDT documentation]. + +[USDT documentation]: ../../doc/tracing.md + + +Examples for the two main eBPF front-ends, [bpftrace] and +[BPF Compiler Collection (BCC)], with support for USDT, are listed. BCC is used +for complex tools and daemons and `bpftrace` is preferred for one-liners and +shorter scripts. + +[bpftrace]: https://github.com/iovisor/bpftrace +[BPF Compiler Collection (BCC)]: https://github.com/iovisor/bcc + + +To develop and run bpftrace and BCC scripts you need to install the +corresponding packages. See [installing bpftrace] and [installing BCC] for more +information. For development there exist a [bpftrace Reference Guide], a +[BCC Reference Guide], and a [bcc Python Developer Tutorial]. + +[installing bpftrace]: https://github.com/iovisor/bpftrace/blob/master/INSTALL.md +[installing BCC]: https://github.com/iovisor/bcc/blob/master/INSTALL.md +[bpftrace Reference Guide]: https://github.com/iovisor/bpftrace/blob/master/docs/reference_guide.md +[BCC Reference Guide]: https://github.com/iovisor/bcc/blob/master/docs/reference_guide.md +[bcc Python Developer Tutorial]: https://github.com/iovisor/bcc/blob/master/docs/tutorial_bcc_python_developer.md + +## Examples + +The bpftrace examples contain a relative path to the `dashd` binary. By +default, the scripts should be run from the repository-root and assume a +self-compiled `dashd` binary. The paths in the examples can be changed, for +example, to point to release builds if needed. See the +[Dash Core USDT documentation] on how to list available tracepoints in your +`dashd` binary. + +[Dash Core USDT documentation]: ../../doc/tracing.md#listing-available-tracepoints + +**WARNING: eBPF programs require root privileges to be loaded into a Linux +kernel VM. This means the bpftrace and BCC examples must be executed with root +privileges. 
Make sure to carefully review any scripts that you run with root +privileges first!** + +### log_p2p_traffic.bt + +A bpftrace script logging information about inbound and outbound P2P network +messages. Based on the `net:inbound_message` and `net:outbound_message` +tracepoints. + +By default, `bpftrace` limits strings to 64 bytes due to the limited stack size +in the eBPF VM. For example, Tor v3 addresses exceed the string size limit which +results in the port being cut off during logging. The string size limit can be +increased with the `BPFTRACE_STRLEN` environment variable (`BPFTRACE_STRLEN=70` +works fine). + +``` +$ bpftrace contrib/tracing/log_p2p_traffic.bt +``` + +Output +``` +outbound 'ping' msg to peer 11 (outbound-full-relay, [2a02:b10c:f747:1:ef:fake:ipv6:addr]:8333) with 8 bytes +inbound 'pong' msg from peer 11 (outbound-full-relay, [2a02:b10c:f747:1:ef:fake:ipv6:addr]:8333) with 8 bytes +inbound 'inv' msg from peer 16 (outbound-full-relay, XX.XX.XXX.121:8333) with 37 bytes +outbound 'getdata' msg to peer 16 (outbound-full-relay, XX.XX.XXX.121:8333) with 37 bytes +inbound 'tx' msg from peer 16 (outbound-full-relay, XX.XX.XXX.121:8333) with 222 bytes +outbound 'inv' msg to peer 9 (outbound-full-relay, faketorv3addressa2ufa6odvoi3s77j4uegey0xb10csyfyve2t33curbyd.onion:8333) with 37 bytes +outbound 'inv' msg to peer 7 (outbound-full-relay, XX.XX.XXX.242:8333) with 37 bytes +… +``` + +### p2p_monitor.py + +A BCC Python script using curses for an interactive P2P message monitor. Based +on the `net:inbound_message` and `net:outbound_message` tracepoints. + +Inbound and outbound traffic is listed for each peer together with information +about the connection. Peers can be selected individually to view recent P2P +messages. + +``` +$ python3 contrib/tracing/p2p_monitor.py ./src/dashd +``` + +Lists selectable peers and traffic and connection information. +``` + P2P Message Monitor + Navigate with UP/DOWN or J/K and select a peer with ENTER or SPACE to see individual P2P messages + + PEER OUTBOUND INBOUND TYPE ADDR + 0 46 398 byte 61 1407590 byte block-relay-only XX.XX.XXX.196:8333 + 11 1156 253570 byte 3431 2394924 byte outbound-full-relay XXX.X.XX.179:8333 + 13 3425 1809620 byte 1236 305458 byte inbound XXX.X.X.X:60380 + 16 1046 241633 byte 1589 1199220 byte outbound-full-relay 4faketorv2pbfu7x.onion:8333 + 19 577 181679 byte 390 148951 byte outbound-full-relay kfake4vctorjv2o2.onion:8333 + 20 11 1248 byte 13 1283 byte block-relay-only [2600:fake:64d9:b10c:4436:aaaa:fe:bb]:8333 + 21 11 1248 byte 13 1299 byte block-relay-only XX.XXX.X.155:8333 + 22 5 103 byte 1 102 byte feeler XX.XX.XXX.173:8333 + 23 11 1248 byte 12 1255 byte block-relay-only XX.XXX.XXX.220:8333 + 24 3 103 byte 1 102 byte feeler XXX.XXX.XXX.64:8333 +… +``` + +Showing recent P2P messages between our node and a selected peer. + +``` + ---------------------------------------------------------------------- + | PEER 16 (4faketorv2pbfu7x.onion:8333) | + | OUR NODE outbound-full-relay PEER | + | <--- sendcmpct (9 bytes) | + | inv (37 byte) ---> | + | <--- ping (8 bytes) | + | pong (8 byte) ---> | + | inv (37 byte) ---> | + | <--- addr (31 bytes) | + | inv (37 byte) ---> | + | <--- getheaders (1029 bytes) | + | headers (1 byte) ---> | + | <--- feefilter (8 bytes) | + | <--- pong (8 bytes) | + | <--- headers (82 bytes) | + | <--- addr (30003 bytes) | + | inv (1261 byte) ---> | + | … | + +``` + +### log_raw_p2p_msgs.py + +A BCC Python script showcasing eBPF and USDT limitations when passing data +larger than about 32kb. 
Based on the `net:inbound_message` and +`net:outbound_message` tracepoints. + +Dash P2P messages can be larger than 32kb (e.g. `tx`, `block`, ...). The +eBPF VM's stack is limited to 512 bytes, and we can't allocate more than about +32kb for a P2P message in the eBPF VM. The **message data is cut off** when the +message is larger than MAX_MSG_DATA_LENGTH (see script). This can be detected +in user-space by comparing the data length to the message length variable. The +message is cut off when the data length is smaller than the message length. +A warning is included with the printed message data. + +Data is submitted to user-space (i.e. to this script) via a ring buffer. The +throughput of the ring buffer is limited. Each p2p_message is about 32kb in +size. In- or outbound messages submitted to the ring buffer in rapid +succession fill the ring buffer faster than it can be read. Some messages are +lost. BCC prints: `Possibly lost 2 samples` on lost messages. + + +``` +$ python3 contrib/tracing/log_raw_p2p_msgs.py ./src/dashd +``` + +``` +Logging raw P2P messages. +Messages larger that about 32kb will be cut off! +Some messages might be lost! + outbound msg 'inv' from peer 4 (outbound-full-relay, XX.XXX.XX.4:8333) with 253 bytes: 0705000000be2245c8f844c9f763748e1a7… +… +Warning: incomplete message (only 32568 out of 53552 bytes)! inbound msg 'tx' from peer 32 (outbound-full-relay, XX.XXX.XXX.43:8333) with 53552 bytes: 020000000001fd3c01939c85ad6756ed9fc… +… +Possibly lost 2 samples +``` + +### connectblock_benchmark.bt + +A `bpftrace` script to benchmark the `ConnectBlock()` function during, for +example, a blockchain re-index. Based on the `validation:block_connected` USDT +tracepoint. + +The script takes three positional arguments. The first two arguments, the start, +and end height indicate between which blocks the benchmark should be run. The +third acts as a duration threshold in milliseconds. When the `ConnectBlock()` +function takes longer than the threshold, information about the block, is +printed. For more details, see the header comment in the script. + +By default, `bpftrace` limits strings to 64 bytes due to the limited stack size +in the kernel VM. Block hashes as zero-terminated hex strings are 65 bytes which +exceed the string limit. The string size limit can be set to 65 bytes with the +environment variable `BPFTRACE_STRLEN`. + +The following command can be used to benchmark, for example, `ConnectBlock()` +between height 20000 and 38000 on SigNet while logging all blocks that take +longer than 25ms to connect. + +``` +$ BPFTRACE_STRLEN=65 bpftrace contrib/tracing/connectblock_benchmark.bt 20000 38000 25 +``` + +In a different terminal, starting Dash Core in SigNet mode and with +re-indexing enabled. + +``` +$ ./src/dashd -signet -reindex +``` + +This produces the following output. +``` +Attaching 5 probes... +ConnectBlock Benchmark between height 20000 and 38000 inclusive +Logging blocks taking longer than 25 ms to connect. +Starting Connect Block Benchmark between height 20000 and 38000. 
+BENCH 39 blk/s 59 tx/s 59 inputs/s 20 sigops/s (height 20038) +Block 20492 (000000f555653bb05e2f3c6e79925e01a20dd57033f4dc7c354b46e34735d32b) 20 tx 2319 ins 2318 sigops took 38 ms +BENCH 1840 blk/s 2117 tx/s 4478 inputs/s 2471 sigops/s (height 21879) +BENCH 1816 blk/s 4972 tx/s 4982 inputs/s 125 sigops/s (height 23695) +BENCH 2095 blk/s 2890 tx/s 2910 inputs/s 152 sigops/s (height 25790) +BENCH 1684 blk/s 3979 tx/s 4053 inputs/s 288 sigops/s (height 27474) +BENCH 1155 blk/s 3216 tx/s 3252 inputs/s 115 sigops/s (height 28629) +BENCH 1797 blk/s 2488 tx/s 2503 inputs/s 111 sigops/s (height 30426) +BENCH 1849 blk/s 6318 tx/s 6569 inputs/s 12189 sigops/s (height 32275) +BENCH 946 blk/s 20209 tx/s 20775 inputs/s 83809 sigops/s (height 33221) +Block 33406 (0000002adfe4a15cfcd53bd890a89bbae836e5bb7f38bac566f61ad4548c87f6) 25 tx 2045 ins 2090 sigops took 29 ms +Block 33687 (00000073231307a9828e5607ceb8156b402efe56747271a4442e75eb5b77cd36) 52 tx 1797 ins 1826 sigops took 26 ms +BENCH 582 blk/s 21581 tx/s 27673 inputs/s 60345 sigops/s (height 33803) +BENCH 1035 blk/s 19735 tx/s 19776 inputs/s 51355 sigops/s (height 34838) +Block 35625 (0000006b00b347390c4768ea9df2655e9ff4b120f29d78594a2a702f8a02c997) 20 tx 3374 ins 3371 sigops took 49 ms +BENCH 887 blk/s 17857 tx/s 22191 inputs/s 24404 sigops/s (height 35725) +Block 35937 (000000d816d13d6e39b471cd4368db60463a764ba1f29168606b04a22b81ea57) 75 tx 3943 ins 3940 sigops took 61 ms +BENCH 823 blk/s 16298 tx/s 21031 inputs/s 18440 sigops/s (height 36548) +Block 36583 (000000c3e260556dbf42968aae3f904dba8b8c1ff96a6f6e3aa5365d2e3ad317) 24 tx 2198 ins 2194 sigops took 34 ms +Block 36700 (000000b3b173de9e65a3cfa738d976af6347aaf83fa17ab3f2a4d2ede3ddfac4) 73 tx 1615 ins 1611 sigops took 31 ms +Block 36832 (0000007859578c02c1ac37dabd1b9ec19b98f350b56935f5dd3a41e9f79f836e) 34 tx 1440 ins 1436 sigops took 26 ms +BENCH 613 blk/s 16718 tx/s 25074 inputs/s 23022 sigops/s (height 37161) +Block 37870 (000000f5c1086291ba2d943fb0c3bc82e71c5ee341ee117681d1456fbf6c6c38) 25 tx 1517 ins 1514 sigops took 29 ms +BENCH 811 blk/s 16031 tx/s 20921 inputs/s 18696 sigops/s (height 37972) + +Took 14055 ms to connect the blocks between height 20000 and 38000. + +Histogram of block connection times in milliseconds (ms). +@durations: +[0] 16838 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@| +[1] 882 |@@ | +[2, 4) 236 | | +[4, 8) 23 | | +[8, 16) 9 | | +[16, 32) 9 | | +[32, 64) 4 | | +``` diff --git a/contrib/tracing/connectblock_benchmark.bt b/contrib/tracing/connectblock_benchmark.bt new file mode 100755 index 0000000000..f9b7bb52e3 --- /dev/null +++ b/contrib/tracing/connectblock_benchmark.bt @@ -0,0 +1,150 @@ +#!/usr/bin/env bpftrace + +/* + + USAGE: + + BPFTRACE_STRLEN=65 bpftrace contrib/tracing/connectblock_benchmark.bt + + - The environment variable BPFTRACE_STRLEN needs to be set to 65 chars as + strings are limited to 64 chars by default. Hex strings with Dash block + hashes are 64 hex chars + 1 null-termination char. + - sets the height at which the benchmark should start. Setting + the start height to 0 starts the benchmark immediately, even before the + first block is connected. + - sets the height after which the benchmark should end. Setting + the end height to 0 disables the benchmark. The script only logs blocks + over . + - Threshold + + This script requires a 'dashd' binary compiled with eBPF support and the + 'validation:block_connected' USDT. By default, it's assumed that 'dashd' is + located in './src/dashd'. This can be modified in the script below. 
+ + EXAMPLES: + + BPFTRACE_STRLEN=65 bpftrace contrib/tracing/connectblock_benchmark.bt 300000 680000 1000 + + When run together 'dashd -reindex', this benchmarks the time it takes to + connect the blocks between height 300.000 and 680.000 (inclusive) and prints + details about all blocks that take longer than 1000ms to connect. Prints a + histogram with block connection times when the benchmark is finished. + + + BPFTRACE_STRLEN=65 bpftrace contrib/tracing/connectblock_benchmark.bt 0 0 500 + + When running together 'dashd', all newly connected blocks that + take longer than 500ms to connect are logged. A histogram with block + connection times is shown when the script is terminated. + +*/ + +BEGIN +{ + $start_height = $1; + $end_height = $2; + $logging_threshold_ms = $3; + + if ($end_height < $start_height) { + printf("Error: start height (%d) larger than end height (%d)!\n", $start_height, $end_height); + exit(); + } + + if ($end_height > 0) { + printf("ConnectBlock benchmark between height %d and %d inclusive\n", $start_height, $end_height); + } else { + printf("ConnectBlock logging starting at height %d\n", $start_height); + } + + if ($logging_threshold_ms > 0) { + printf("Logging blocks taking longer than %d ms to connect.\n", $3); + } + + if ($start_height == 0) { + @start = nsecs; + } +} + +/* + Attaches to the 'validation:block_connected' USDT and collects stats when the + connected block is between the start and end height (or the end height is + unset). +*/ +usdt:./src/dashd:validation:block_connected /arg1 >= $1 && (arg1 <= $2 || $2 == 0 )/ +{ + $height = arg1; + $transactions = arg2; + $inputs = arg3; + $sigops = arg4; + $duration = (uint64) arg5; + + @height = $height; + + @blocks = @blocks + 1; + @transactions = @transactions + $transactions; + @inputs = @inputs + $inputs; + @sigops = @sigops + $sigops; + + @durations = hist($duration / 1000); + + if ($height == $1 && $height != 0) { + @start = nsecs; + printf("Starting Connect Block Benchmark between height %d and %d.\n", $1, $2); + } + + if ($2 > 0 && $height >= $2) { + @end = nsecs; + $duration = @end - @start; + printf("\nTook %d ms to connect the blocks between height %d and %d.\n", $duration / 1000000, $1, $2); + exit(); + } +} + +/* + Attaches to the 'validation:block_connected' USDT and logs information about + blocks where the time it took to connect the block is above the + . +*/ +usdt:./src/dashd:validation:block_connected / (uint64) arg5 / 1000> $3 / +{ + $hash_str = str(arg0); + $height = (int32) arg1; + $transactions = (uint64) arg2; + $inputs = (int32) arg3; + $sigops = (int64) arg4; + $duration = (int64) arg5; + + printf("Block %d (%s) %4d tx %5d ins %5d sigops took %4d ms\n", $height, $hash_str, $transactions, $inputs, $sigops, (uint64) $duration / 1000); +} + + +/* + Prints stats about the blocks, transactions, inputs, and sigops processed in + the last second (if any). 
+*/ +interval:s:1 { + if (@blocks > 0) { + printf("BENCH %4d blk/s %6d tx/s %7d inputs/s %8d sigops/s (height %d)\n", @blocks, @transactions, @inputs, @sigops, @height); + + zero(@blocks); + zero(@transactions); + zero(@inputs); + zero(@sigops); + } +} + +END +{ + printf("\nHistogram of block connection times in milliseconds (ms).\n"); + print(@durations); + + clear(@durations); + clear(@blocks); + clear(@transactions); + clear(@inputs); + clear(@sigops); + clear(@height); + clear(@start); + clear(@end); +} + diff --git a/contrib/tracing/log_p2p_traffic.bt b/contrib/tracing/log_p2p_traffic.bt new file mode 100755 index 0000000000..f105dab368 --- /dev/null +++ b/contrib/tracing/log_p2p_traffic.bt @@ -0,0 +1,28 @@ +#!/usr/bin/env bpftrace + +BEGIN +{ + printf("Logging P2P traffic\n") +} + +usdt:./src/dashd:net:inbound_message +{ + $peer_id = (int64) arg0; + $peer_addr = str(arg1); + $peer_type = str(arg2); + $msg_type = str(arg3); + $msg_len = arg4; + printf("inbound '%s' msg from peer %d (%s, %s) with %d bytes\n", $msg_type, $peer_id, $peer_type, $peer_addr, $msg_len); +} + +usdt:./src/dashd:net:outbound_message +{ + $peer_id = (int64) arg0; + $peer_addr = str(arg1); + $peer_type = str(arg2); + $msg_type = str(arg3); + $msg_len = arg4; + + printf("outbound '%s' msg to peer %d (%s, %s) with %d bytes\n", $msg_type, $peer_id, $peer_type, $peer_addr, $msg_len); +} + diff --git a/contrib/tracing/log_raw_p2p_msgs.py b/contrib/tracing/log_raw_p2p_msgs.py new file mode 100755 index 0000000000..004b0f2467 --- /dev/null +++ b/contrib/tracing/log_raw_p2p_msgs.py @@ -0,0 +1,180 @@ +#!/usr/bin/env python3 + +""" Demonstration of eBPF limitations and the effect on USDT with the + net:inbound_message and net:outbound_message tracepoints. """ + +# This script shows a limitation of eBPF when data larger than 32kb is passed to +# user-space. It uses BCC (https://github.com/iovisor/bcc) to load a sandboxed +# eBPF program into the Linux kernel (root privileges are required). The eBPF +# program attaches to two statically defined tracepoints. The tracepoint +# 'net:inbound_message' is called when a new P2P message is received, and +# 'net:outbound_message' is called on outbound P2P messages. The eBPF program +# submits the P2P messages to this script via a BPF ring buffer. The submitted +# messages are printed. + +# eBPF Limitations: +# +# Dash P2P messages can be larger than 32kb (e.g. tx, block, ...). The eBPF +# VM's stack is limited to 512 bytes, and we can't allocate more than about 32kb +# for a P2P message in the eBPF VM. The message data is cut off when the message +# is larger than MAX_MSG_DATA_LENGTH (see definition below). This can be detected +# in user-space by comparing the data length to the message length variable. The +# message is cut off when the data length is smaller than the message length. +# A warning is included with the printed message data. +# +# Data is submitted to user-space (i.e. to this script) via a ring buffer. The +# throughput of the ring buffer is limited. Each p2p_message is about 32kb in +# size. In- or outbound messages submitted to the ring buffer in rapid +# succession fill the ring buffer faster than it can be read. Some messages are +# lost. +# +# BCC prints: "Possibly lost 2 samples" on lost messages. + +import sys +from bcc import BPF, USDT + +# BCC: The C program to be compiled to an eBPF program (by BCC) and loaded into +# a sandboxed Linux kernel VM. +program = """ +#include + +#define MIN(a,b) ({ __typeof__ (a) _a = (a); __typeof__ (b) _b = (b); _a < _b ? 
_a : _b; }) + +// Maximum possible allocation size +// from include/linux/percpu.h in the Linux kernel +#define PCPU_MIN_UNIT_SIZE (32 << 10) + +// Tor v3 addresses are 62 chars + 6 chars for the port (':12345'). +#define MAX_PEER_ADDR_LENGTH 62 + 6 +#define MAX_PEER_CONN_TYPE_LENGTH 20 +#define MAX_MSG_TYPE_LENGTH 20 +#define MAX_MSG_DATA_LENGTH PCPU_MIN_UNIT_SIZE - 200 + +struct p2p_message +{ + u64 peer_id; + char peer_addr[MAX_PEER_ADDR_LENGTH]; + char peer_conn_type[MAX_PEER_CONN_TYPE_LENGTH]; + char msg_type[MAX_MSG_TYPE_LENGTH]; + u64 msg_size; + u8 msg[MAX_MSG_DATA_LENGTH]; +}; + +// We can't store the p2p_message struct on the eBPF stack as it is limited to +// 512 bytes and P2P message can be bigger than 512 bytes. However, we can use +// an BPF-array with a length of 1 to allocate up to 32768 bytes (this is +// defined by PCPU_MIN_UNIT_SIZE in include/linux/percpu.h in the Linux kernel). +// Also see https://github.com/iovisor/bcc/issues/2306 +BPF_ARRAY(msg_arr, struct p2p_message, 1); + +// Two BPF perf buffers for pushing data (here P2P messages) to user-space. +BPF_PERF_OUTPUT(inbound_messages); +BPF_PERF_OUTPUT(outbound_messages); + +int trace_inbound_message(struct pt_regs *ctx) { + int idx = 0; + struct p2p_message *msg = msg_arr.lookup(&idx); + + // lookup() does not return a NULL pointer. However, the BPF verifier + // requires an explicit check that that the `msg` pointer isn't a NULL + // pointer. See https://github.com/iovisor/bcc/issues/2595 + if (msg == NULL) return 1; + + bpf_usdt_readarg(1, ctx, &msg->peer_id); + bpf_usdt_readarg_p(2, ctx, &msg->peer_addr, MAX_PEER_ADDR_LENGTH); + bpf_usdt_readarg_p(3, ctx, &msg->peer_conn_type, MAX_PEER_CONN_TYPE_LENGTH); + bpf_usdt_readarg_p(4, ctx, &msg->msg_type, MAX_MSG_TYPE_LENGTH); + bpf_usdt_readarg(5, ctx, &msg->msg_size); + bpf_usdt_readarg_p(6, ctx, &msg->msg, MIN(msg->msg_size, MAX_MSG_DATA_LENGTH)); + + inbound_messages.perf_submit(ctx, msg, sizeof(*msg)); + return 0; +}; + +int trace_outbound_message(struct pt_regs *ctx) { + int idx = 0; + struct p2p_message *msg = msg_arr.lookup(&idx); + + // lookup() does not return a NULL pointer. However, the BPF verifier + // requires an explicit check that that the `msg` pointer isn't a NULL + // pointer. See https://github.com/iovisor/bcc/issues/2595 + if (msg == NULL) return 1; + + bpf_usdt_readarg(1, ctx, &msg->peer_id); + bpf_usdt_readarg_p(2, ctx, &msg->peer_addr, MAX_PEER_ADDR_LENGTH); + bpf_usdt_readarg_p(3, ctx, &msg->peer_conn_type, MAX_PEER_CONN_TYPE_LENGTH); + bpf_usdt_readarg_p(4, ctx, &msg->msg_type, MAX_MSG_TYPE_LENGTH); + bpf_usdt_readarg(5, ctx, &msg->msg_size); + bpf_usdt_readarg_p(6, ctx, &msg->msg, MIN(msg->msg_size, MAX_MSG_DATA_LENGTH)); + + outbound_messages.perf_submit(ctx, msg, sizeof(*msg)); + return 0; +}; +""" + + +def print_message(event, inbound): + print(f"%s %s msg '%s' from peer %d (%s, %s) with %d bytes: %s" % + ( + f"Warning: incomplete message (only %d out of %d bytes)!" 
% ( + len(event.msg), event.msg_size) if len(event.msg) < event.msg_size else "", + "inbound" if inbound else "outbound", + event.msg_type.decode("utf-8"), + event.peer_id, + event.peer_conn_type.decode("utf-8"), + event.peer_addr.decode("utf-8"), + event.msg_size, + bytes(event.msg[:event.msg_size]).hex(), + ) + ) + + +def main(bitcoind_path): + bitcoind_with_usdts = USDT(path=str(bitcoind_path)) + + # attaching the trace functions defined in the BPF program to the tracepoints + bitcoind_with_usdts.enable_probe( + probe="inbound_message", fn_name="trace_inbound_message") + bitcoind_with_usdts.enable_probe( + probe="outbound_message", fn_name="trace_outbound_message") + bpf = BPF(text=program, usdt_contexts=[bitcoind_with_usdts]) + + # BCC: perf buffer handle function for inbound_messages + def handle_inbound(_, data, size): + """ Inbound message handler. + + Called each time a message is submitted to the inbound_messages BPF table.""" + + event = bpf["inbound_messages"].event(data) + print_message(event, True) + + # BCC: perf buffer handle function for outbound_messages + + def handle_outbound(_, data, size): + """ Outbound message handler. + + Called each time a message is submitted to the outbound_messages BPF table.""" + + event = bpf["outbound_messages"].event(data) + print_message(event, False) + + # BCC: add handlers to the inbound and outbound perf buffers + bpf["inbound_messages"].open_perf_buffer(handle_inbound) + bpf["outbound_messages"].open_perf_buffer(handle_outbound) + + print("Logging raw P2P messages.") + print("Messages larger that about 32kb will be cut off!") + print("Some messages might be lost!") + while True: + try: + bpf.perf_buffer_poll() + except KeyboardInterrupt: + exit() + + +if __name__ == "__main__": + if len(sys.argv) < 2: + print("USAGE:", sys.argv[0], "path/to/dashd") + exit() + path = sys.argv[1] + main(path) diff --git a/contrib/tracing/p2p_monitor.py b/contrib/tracing/p2p_monitor.py new file mode 100755 index 0000000000..2ebe293363 --- /dev/null +++ b/contrib/tracing/p2p_monitor.py @@ -0,0 +1,250 @@ +#!/usr/bin/env python3 + +""" Interactive dashd P2P network traffic monitor utilizing USDT and the + net:inbound_message and net:outbound_message tracepoints. """ + +# This script demonstrates what USDT for Dash Core can enable. It uses BCC +# (https://github.com/iovisor/bcc) to load a sandboxed eBPF program into the +# Linux kernel (root privileges are required). The eBPF program attaches to two +# statically defined tracepoints. The tracepoint 'net:inbound_message' is called +# when a new P2P message is received, and 'net:outbound_message' is called on +# outbound P2P messages. The eBPF program submits the P2P messages to +# this script via a BPF ring buffer. + +import sys +import curses +from curses import wrapper, panel +from bcc import BPF, USDT + +# BCC: The C program to be compiled to an eBPF program (by BCC) and loaded into +# a sandboxed Linux kernel VM. +program = """ +#include + +// Tor v3 addresses are 62 chars + 6 chars for the port (':12345'). +// I2P addresses are 60 chars + 6 chars for the port (':12345'). +#define MAX_PEER_ADDR_LENGTH 62 + 6 +#define MAX_PEER_CONN_TYPE_LENGTH 20 +#define MAX_MSG_TYPE_LENGTH 20 + +struct p2p_message +{ + u64 peer_id; + char peer_addr[MAX_PEER_ADDR_LENGTH]; + char peer_conn_type[MAX_PEER_CONN_TYPE_LENGTH]; + char msg_type[MAX_MSG_TYPE_LENGTH]; + u64 msg_size; +}; + + +// Two BPF perf buffers for pushing data (here P2P messages) to user space. 
+BPF_PERF_OUTPUT(inbound_messages); +BPF_PERF_OUTPUT(outbound_messages); + +int trace_inbound_message(struct pt_regs *ctx) { + struct p2p_message msg = {}; + + bpf_usdt_readarg(1, ctx, &msg.peer_id); + bpf_usdt_readarg_p(2, ctx, &msg.peer_addr, MAX_PEER_ADDR_LENGTH); + bpf_usdt_readarg_p(3, ctx, &msg.peer_conn_type, MAX_PEER_CONN_TYPE_LENGTH); + bpf_usdt_readarg_p(4, ctx, &msg.msg_type, MAX_MSG_TYPE_LENGTH); + bpf_usdt_readarg(5, ctx, &msg.msg_size); + + inbound_messages.perf_submit(ctx, &msg, sizeof(msg)); + return 0; +}; + +int trace_outbound_message(struct pt_regs *ctx) { + struct p2p_message msg = {}; + + bpf_usdt_readarg(1, ctx, &msg.peer_id); + bpf_usdt_readarg_p(2, ctx, &msg.peer_addr, MAX_PEER_ADDR_LENGTH); + bpf_usdt_readarg_p(3, ctx, &msg.peer_conn_type, MAX_PEER_CONN_TYPE_LENGTH); + bpf_usdt_readarg_p(4, ctx, &msg.msg_type, MAX_MSG_TYPE_LENGTH); + bpf_usdt_readarg(5, ctx, &msg.msg_size); + + outbound_messages.perf_submit(ctx, &msg, sizeof(msg)); + return 0; +}; +""" + + +class Message: + """ A P2P network message. """ + msg_type = "" + size = 0 + data = bytes() + inbound = False + + def __init__(self, msg_type, size, inbound): + self.msg_type = msg_type + self.size = size + self.inbound = inbound + + +class Peer: + """ A P2P network peer. """ + id = 0 + address = "" + connection_type = "" + last_messages = list() + + total_inbound_msgs = 0 + total_inbound_bytes = 0 + total_outbound_msgs = 0 + total_outbound_bytes = 0 + + def __init__(self, id, address, connection_type): + self.id = id + self.address = address + self.connection_type = connection_type + self.last_messages = list() + + def add_message(self, message): + self.last_messages.append(message) + if len(self.last_messages) > 25: + self.last_messages.pop(0) + if message.inbound: + self.total_inbound_bytes += message.size + self.total_inbound_msgs += 1 + else: + self.total_outbound_bytes += message.size + self.total_outbound_msgs += 1 + + +def main(bitcoind_path): + peers = dict() + + bitcoind_with_usdts = USDT(path=str(bitcoind_path)) + + # attaching the trace functions defined in the BPF program to the tracepoints + bitcoind_with_usdts.enable_probe( + probe="inbound_message", fn_name="trace_inbound_message") + bitcoind_with_usdts.enable_probe( + probe="outbound_message", fn_name="trace_outbound_message") + bpf = BPF(text=program, usdt_contexts=[bitcoind_with_usdts]) + + # BCC: perf buffer handle function for inbound_messages + def handle_inbound(_, data, size): + """ Inbound message handler. + + Called each time a message is submitted to the inbound_messages BPF table.""" + event = bpf["inbound_messages"].event(data) + if event.peer_id not in peers: + peer = Peer(event.peer_id, event.peer_addr.decode( + "utf-8"), event.peer_conn_type.decode("utf-8")) + peers[peer.id] = peer + peers[event.peer_id].add_message( + Message(event.msg_type.decode("utf-8"), event.msg_size, True)) + + # BCC: perf buffer handle function for outbound_messages + def handle_outbound(_, data, size): + """ Outbound message handler. 
+ + Called each time a message is submitted to the outbound_messages BPF table.""" + event = bpf["outbound_messages"].event(data) + if event.peer_id not in peers: + peer = Peer(event.peer_id, event.peer_addr.decode( + "utf-8"), event.peer_conn_type.decode("utf-8")) + peers[peer.id] = peer + peers[event.peer_id].add_message( + Message(event.msg_type.decode("utf-8"), event.msg_size, False)) + + # BCC: add handlers to the inbound and outbound perf buffers + bpf["inbound_messages"].open_perf_buffer(handle_inbound) + bpf["outbound_messages"].open_perf_buffer(handle_outbound) + + wrapper(loop, bpf, peers) + + +def loop(screen, bpf, peers): + screen.nodelay(1) + cur_list_pos = 0 + win = curses.newwin(30, 70, 2, 7) + win.erase() + win.border(ord("|"), ord("|"), ord("-"), ord("-"), + ord("-"), ord("-"), ord("-"), ord("-")) + info_panel = panel.new_panel(win) + info_panel.hide() + + ROWS_AVALIABLE_FOR_LIST = curses.LINES - 5 + scroll = 0 + + while True: + try: + # BCC: poll the perf buffers for new events or timeout after 50ms + bpf.perf_buffer_poll(timeout=50) + + ch = screen.getch() + if (ch == curses.KEY_DOWN or ch == ord("j")) and cur_list_pos < len( + peers.keys()) -1 and info_panel.hidden(): + cur_list_pos += 1 + if cur_list_pos >= ROWS_AVALIABLE_FOR_LIST: + scroll += 1 + if (ch == curses.KEY_UP or ch == ord("k")) and cur_list_pos > 0 and info_panel.hidden(): + cur_list_pos -= 1 + if scroll > 0: + scroll -= 1 + if ch == ord('\n') or ch == ord(' '): + if info_panel.hidden(): + info_panel.show() + else: + info_panel.hide() + screen.erase() + render(screen, peers, cur_list_pos, scroll, ROWS_AVALIABLE_FOR_LIST, info_panel) + curses.panel.update_panels() + screen.refresh() + except KeyboardInterrupt: + exit() + + +def render(screen, peers, cur_list_pos, scroll, ROWS_AVALIABLE_FOR_LIST, info_panel): + """ renders the list of peers and details panel + + This code is unrelated to USDT, BCC and BPF. 
+ """ + header_format = "%6s %-20s %-20s %-22s %-67s" + row_format = "%6s %-5d %9d byte %-5d %9d byte %-22s %-67s" + + screen.addstr(0, 1, (" P2P Message Monitor "), curses.A_REVERSE) + screen.addstr( + 1, 0, (" Navigate with UP/DOWN or J/K and select a peer with ENTER or SPACE to see individual P2P messages"), curses.A_NORMAL) + screen.addstr(3, 0, + header_format % ("PEER", "OUTBOUND", "INBOUND", "TYPE", "ADDR"), curses.A_BOLD | curses.A_UNDERLINE) + peer_list = sorted(peers.keys())[scroll:ROWS_AVALIABLE_FOR_LIST+scroll] + for i, peer_id in enumerate(peer_list): + peer = peers[peer_id] + screen.addstr(i + 4, 0, + row_format % (peer.id, peer.total_outbound_msgs, peer.total_outbound_bytes, + peer.total_inbound_msgs, peer.total_inbound_bytes, + peer.connection_type, peer.address), + curses.A_REVERSE if i + scroll == cur_list_pos else curses.A_NORMAL) + if i + scroll == cur_list_pos: + info_window = info_panel.window() + info_window.erase() + info_window.border( + ord("|"), ord("|"), ord("-"), ord("-"), + ord("-"), ord("-"), ord("-"), ord("-")) + + info_window.addstr( + 1, 1, f"PEER {peer.id} ({peer.address})".center(68), curses.A_REVERSE | curses.A_BOLD) + info_window.addstr( + 2, 1, f" OUR NODE{peer.connection_type:^54}PEER ", + curses.A_BOLD) + for i, msg in enumerate(peer.last_messages): + if msg.inbound: + info_window.addstr( + i + 3, 1, "%68s" % + (f"<--- {msg.msg_type} ({msg.size} bytes) "), curses.A_NORMAL) + else: + info_window.addstr( + i + 3, 1, " %s (%d byte) --->" % + (msg.msg_type, msg.size), curses.A_NORMAL) + + +if __name__ == "__main__": + if len(sys.argv) < 2: + print("USAGE:", sys.argv[0], "path/to/dashd") + exit() + path = sys.argv[1] + main(path) diff --git a/doc/build-unix.md b/doc/build-unix.md index c105aeb19c..db31540a7b 100644 --- a/doc/build-unix.md +++ b/doc/build-unix.md @@ -50,6 +50,7 @@ Optional dependencies: libqrencode | QR codes in GUI | QR code generation (only needed when GUI enabled) libzmq3 | ZMQ notification | ZMQ notifications (requires ZMQ version >= 4.0.0) sqlite3 | SQLite DB | Wallet storage (only needed when descriptor wallet enabled) + systemtap | Tracing (USDT) | Statically defined tracepoints For the versions used, see [dependencies.md](dependencies.md) @@ -115,6 +116,12 @@ GMP dependencies (provides platform-optimized routines): sudo apt-get install libgmp-dev ``` +User-Space, Statically Defined Tracing (USDT) dependencies: + +```sh +sudo apt install systemtap-sdt-dev +``` + GUI dependencies: If you want to build dash-qt, make sure that the required packages for Qt development @@ -196,6 +203,12 @@ GMP dependencies (provides platform-optimized routines): sudo dnf install gmp-devel ``` +User-Space, Statically Defined Tracing (USDT) dependencies: + +```sh +sudo dnf install systemtap +``` + GUI dependencies: If you want to build dash-qt, make sure that the required packages for Qt development diff --git a/doc/dependencies.md b/doc/dependencies.md index 11203e63c3..4d7b9c49e0 100644 --- a/doc/dependencies.md +++ b/doc/dependencies.md @@ -24,6 +24,7 @@ These are the dependencies currently used by Dash Core. 
You can find instruction | Qt | [5.15.11](https://download.qt.io/official_releases/qt/) | [5.11.3](https://github.com/bitcoin/bitcoin/pull/24132) | No | | | | SQLite | [3.32.1](https://sqlite.org/download.html) | [3.7.17](https://github.com/bitcoin/bitcoin/pull/19077) | | | | | XCB | | | | | [Yes](https://github.com/dashpay/dash/blob/develop/depends/packages/qt.mk) (Linux only) | +| systemtap ([tracing](tracing.md))| | | | | | | xkbcommon | | | | | [Yes](https://github.com/dashpay/dash/blob/develop/depends/packages/qt.mk) (Linux only) | | ZeroMQ | [4.3.1](https://github.com/zeromq/libzmq/releases) | 4.0.0 | No | | | | zlib | | | | | [Yes](https://github.com/dashpay/dash/blob/develop/depends/packages/qt.mk) | @@ -41,6 +42,7 @@ Some dependencies are not needed in all configurations. The following are some f * libnatpmp is not needed with `--without-natpmp`. * Qt is not needed with `--without-gui`. * If the qrencode dependency is absent, QR support won't be added. To force an error when that happens, pass `--with-qrencode`. +* If the systemtap dependency is absent, USDT support won't compiled in. * ZeroMQ is needed only with the `--with-zmq` option. #### Other diff --git a/doc/tracing.md b/doc/tracing.md new file mode 100644 index 0000000000..12c671680f --- /dev/null +++ b/doc/tracing.md @@ -0,0 +1,266 @@ +# User-space, Statically Defined Tracing (USDT) for Dash Core + +Dash Core includes statically defined tracepoints to allow for more +observability during development, debugging, code review, and production usage. +These tracepoints make it possible to keep track of custom statistics and +enable detailed monitoring of otherwise hidden internals. They have +little to no performance impact when unused. + +``` +eBPF and USDT Overview +====================== + + ┌──────────────────┐ ┌──────────────┐ + │ tracing script │ │ dashd │ + │==================│ 2. │==============│ + │ eBPF │ tracing │ hooks │ │ + │ code │ logic │ into┌─┤►tracepoint 1─┼───┐ 3. + └────┬───┴──▲──────┘ ├─┤►tracepoint 2 │ │ pass args + 1. │ │ 4. │ │ ... │ │ to eBPF + User compiles │ │ pass data to │ └──────────────┘ │ program + Space & loads │ │ tracing script │ │ + ─────────────────┼──────┼─────────────────┼────────────────────┼─── + Kernel │ │ │ │ + Space ┌──┬─▼──────┴─────────────────┴────────────┐ │ + │ │ eBPF program │◄──────┘ + │ └───────────────────────────────────────┤ + │ eBPF kernel Virtual Machine (sandboxed) │ + └──────────────────────────────────────────┘ + +1. The tracing script compiles the eBPF code and loads the eBPF program into a kernel VM +2. The eBPF program hooks into one or more tracepoints +3. When the tracepoint is called, the arguments are passed to the eBPF program +4. The eBPF program processes the arguments and returns data to the tracing script +``` + +The Linux kernel can hook into the tracepoints during runtime and pass data to +sandboxed [eBPF] programs running in the kernel. These eBPF programs can, for +example, collect statistics or pass data back to user-space scripts for further +processing. + +[eBPF]: https://ebpf.io/ + +The two main eBPF front-ends with support for USDT are [bpftrace] and +[BPF Compiler Collection (BCC)]. BCC is used for complex tools and daemons and +`bpftrace` is preferred for one-liners and shorter scripts. Examples for both can +be found in [contrib/tracing]. 
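As a quick illustration of the bpftrace side (a minimal sketch, not part of the patch itself): assuming a `dashd` binary at `./src/dashd` compiled with the `net:inbound_message` tracepoint, a one-liner can count inbound messages per message type:

```
$ bpftrace -e 'usdt:./src/dashd:net:inbound_message { @msgs[str(arg3)] = count(); }'
```

Here `arg3` is the message type string passed as the fourth tracepoint argument (see the `net:inbound_message` documentation below); bpftrace prints the per-type counters when the one-liner is terminated with Ctrl-C.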
+ +[bpftrace]: https://github.com/iovisor/bpftrace +[BPF Compiler Collection (BCC)]: https://github.com/iovisor/bcc +[contrib/tracing]: ../contrib/tracing/ + +## Tracepoint documentation + +The currently available tracepoints are listed here. + +### Context `net` + +#### Tracepoint `net:inbound_message` + +Is called when a message is received from a peer over the P2P network. Passes +information about our peer, the connection and the message as arguments. + +Arguments passed: +1. Peer ID as `int64` +2. Peer Address and Port (IPv4, IPv6, Tor v3, I2P, ...) as `pointer to C-style String` (max. length 68 characters) +3. Connection Type (inbound, feeler, outbound-full-relay, ...) as `pointer to C-style String` (max. length 20 characters) +4. Message Type (inv, ping, getdata, addrv2, ...) as `pointer to C-style String` (max. length 20 characters) +5. Message Size in bytes as `uint64` +6. Message Bytes as `pointer to unsigned chars` (i.e. bytes) + +Note: The message is passed to the tracepoint in full, however, due to space +limitations in the eBPF kernel VM it might not be possible to pass the message +to user-space in full. Messages longer than a 32kb might be cut off. This can +be detected in tracing scripts by comparing the message size to the length of +the passed message. + +#### Tracepoint `net:outbound_message` + +Is called when a message is send to a peer over the P2P network. Passes +information about our peer, the connection and the message as arguments. + +Arguments passed: +1. Peer ID as `int64` +2. Peer Address and Port (IPv4, IPv6, Tor v3, I2P, ...) as `pointer to C-style String` (max. length 68 characters) +3. Connection Type (inbound, feeler, outbound-full-relay, ...) as `pointer to C-style String` (max. length 20 characters) +4. Message Type (inv, ping, getdata, addrv2, ...) as `pointer to C-style String` (max. length 20 characters) +5. Message Size in bytes as `uint64` +6. Message Bytes as `pointer to unsigned chars` (i.e. bytes) + +Note: The message is passed to the tracepoint in full, however, due to space +limitations in the eBPF kernel VM it might not be possible to pass the message +to user-space in full. Messages longer than a 32kb might be cut off. This can +be detected in tracing scripts by comparing the message size to the length of +the passed message. + +### Context `validation` + +#### Tracepoint `validation:block_connected` + +Is called *after* a block is connected to the chain. Can, for example, be used +to benchmark block connections together with `-reindex`. + +Arguments passed: +1. Block Header Hash as `pointer to C-style String` (64 characters) +2. Block Height as `int32` +3. Transactions in the Block as `uint64` +4. Inputs spend in the Block as `int32` +5. SigOps in the Block (excluding coinbase SigOps) `uint64` +6. Time it took to connect the Block in microseconds (µs) as `uint64` +7. Block Header Hash as `pointer to unsigned chars` (i.e. 32 bytes in little-endian) + +Note: The 7th argument can't be accessed by bpftrace and is purposefully chosen +to be the block header hash as bytes. See [bpftrace argument limit] for more +details. + +[bpftrace argument limit]: #bpftrace-argument-limit + +## Adding tracepoints to Dash Core + +To add a new tracepoint, `#include ` in the compilation unit where +the tracepoint is inserted. Use one of the `TRACEx` macros listed below +depending on the number of arguments passed to the tracepoint. Up to 12 +arguments can be provided. The `context` and `event` specify the names by which +the tracepoint is referred to. 
Please use `snake_case` and try to make sure that +the tracepoint names make sense even without detailed knowledge of the +implementation details. Do not forget to update the tracepoint list in this +document. + +```c +#define TRACE(context, event) +#define TRACE1(context, event, a) +#define TRACE2(context, event, a, b) +#define TRACE3(context, event, a, b, c) +#define TRACE4(context, event, a, b, c, d) +#define TRACE5(context, event, a, b, c, d, e) +#define TRACE6(context, event, a, b, c, d, e, f) +#define TRACE7(context, event, a, b, c, d, e, f, g) +#define TRACE8(context, event, a, b, c, d, e, f, g, h) +#define TRACE9(context, event, a, b, c, d, e, f, g, h, i) +#define TRACE10(context, event, a, b, c, d, e, f, g, h, i, j) +#define TRACE11(context, event, a, b, c, d, e, f, g, h, i, j, k) +#define TRACE12(context, event, a, b, c, d, e, f, g, h, i, j, k, l) +``` + +For example: + +```C++ +TRACE6(net, inbound_message, + pnode->GetId(), + pnode->m_addr_name.c_str(), + pnode->ConnectionTypeAsString().c_str(), + sanitizedType.c_str(), + msg.data.size(), + msg.data.data() +); +``` + +### Guidelines and best practices + +#### Clear motivation and use-case +Tracepoints need a clear motivation and use-case. The motivation should +outweigh the impact on, for example, code readability. There is no point in +adding tracepoints that don't end up being used. + +#### Provide an example +When adding a new tracepoint, provide an example. Examples can show the use case +and help reviewers testing that the tracepoint works as intended. The examples +can be kept simple but should give others a starting point when working with +the tracepoint. See existing examples in [contrib/tracing/]. + +[contrib/tracing/]: ../contrib/tracing/ + +#### No expensive computations for tracepoints +Data passed to the tracepoint should be inexpensive to compute. Although the +tracepoint itself only has overhead when enabled, the code to compute arguments +is always run - even if the tracepoint is not used. For example, avoid +serialization and parsing. + +#### Semi-stable API +Tracepoints should have a semi-stable API. Users should be able to rely on the +tracepoints for scripting. This means tracepoints need to be documented, and the +argument order ideally should not change. If there is an important reason to +change argument order, make sure to document the change and update the examples +using the tracepoint. + +#### eBPF Virtual Machine limits +Keep the eBPF Virtual Machine limits in mind. eBPF programs receiving data from +the tracepoints run in a sandboxed Linux kernel VM. This VM has a limited stack +size of 512 bytes. Check if it makes sense to pass larger amounts of data, for +example, with a tracing script that can handle the passed data. + +#### `bpftrace` argument limit +While tracepoints can have up to 12 arguments, bpftrace scripts currently only +support reading from the first six arguments (`arg0` till `arg5`) on `x86_64`. +bpftrace currently lacks real support for handling and printing binary data, +like block header hashes and txids. When a tracepoint passes more than six +arguments, then string and integer arguments should preferably be placed in the +first six argument fields. Binary data can be placed in later arguments. The BCC +supports reading from all 12 arguments. + +#### Strings as C-style String +Generally, strings should be passed into the `TRACEx` macros as pointers to +C-style strings (a null-terminated sequence of characters). For C++ +`std::strings`, [`c_str()`] can be used. 
It's recommended to document the +maximum expected string size if known. + + +[`c_str()`]: https://www.cplusplus.com/reference/string/string/c_str/ + + +## Listing available tracepoints + +Multiple tools can list the available tracepoints in a `dashd` binary with +USDT support. + +### GDB - GNU Project Debugger + +To list probes in Dash Core, use `info probes` in `gdb`: + +``` +$ gdb ./src/dashd +… +(gdb) info probes +Type Provider Name Where Semaphore Object +stap net inbound_message 0x000000000014419e /src/dashd +stap net outbound_message 0x0000000000107c05 /src/dashd +stap validation block_connected 0x00000000002fb10c /src/dashd +… +``` + +### With `readelf` + +The `readelf` tool can be used to display the USDT tracepoints in Dash Core. +Look for the notes with the description `NT_STAPSDT`. + +``` +$ readelf -n ./src/dashd | grep NT_STAPSDT -A 4 -B 2 +Displaying notes found in: .note.stapsdt + Owner Data size Description + stapsdt 0x0000005d NT_STAPSDT (SystemTap probe descriptors) + Provider: net + Name: outbound_message + Location: 0x0000000000107c05, Base: 0x0000000000579c90, Semaphore: 0x0000000000000000 + Arguments: -8@%r12 8@%rbx 8@%rdi 8@192(%rsp) 8@%rax 8@%rdx +… +``` + +### With `tplist` + +The `tplist` tool is provided by BCC (see [Installing BCC]). It displays kernel +tracepoints or USDT probes and their formats (for more information, see the +[`tplist` usage demonstration]). There are slight binary naming differences +between distributions. For example, on +[Ubuntu the binary is called `tplist-bpfcc`][ubuntu binary]. + +[Installing BCC]: https://github.com/iovisor/bcc/blob/master/INSTALL.md +[`tplist` usage demonstration]: https://github.com/iovisor/bcc/blob/master/tools/tplist_example.txt +[ubuntu binary]: https://github.com/iovisor/bcc/blob/master/INSTALL.md#ubuntu---binary + +``` +$ tplist -l ./src/dashd -v +b'net':b'outbound_message' [sema 0x0] + 1 location(s) + 6 argument(s) +… +``` diff --git a/src/net.cpp b/src/net.cpp index f94011c899..4fcbbfb3c1 100644 --- a/src/net.cpp +++ b/src/net.cpp @@ -32,6 +32,7 @@ #include #include #include +#include #include #include @@ -4276,10 +4277,20 @@ void CConnman::PushMessage(CNode* pnode, CSerializedNetMsg&& msg) { AssertLockNotHeld(m_total_bytes_sent_mutex); size_t nMessageSize = msg.data.size(); - LogPrint(BCLog::NET, "sending %s (%d bytes) peer=%d\n", SanitizeString(msg.m_type), nMessageSize, pnode->GetId()); + LogPrint(BCLog::NET, "sending %s (%d bytes) peer=%d\n", msg.m_type, nMessageSize, pnode->GetId()); if (gArgs.GetBoolArg("-capturemessages", false)) { CaptureMessage(pnode->addr, msg.m_type, msg.data, /* incoming */ false); } + + TRACE6(net, outbound_message, + pnode->GetId(), + pnode->m_addr_name.c_str(), + pnode->ConnectionTypeAsString().c_str(), + msg.m_type.c_str(), + msg.data.size(), + msg.data.data() + ); + statsClient.count(strprintf("bandwidth.message.%s.bytesSent", msg.m_type), nMessageSize, 1.0f); statsClient.inc(strprintf("message.sent.%s", msg.m_type), 1.0f); diff --git a/src/net_processing.cpp b/src/net_processing.cpp index 8747950df7..4c9cb376da 100644 --- a/src/net_processing.cpp +++ b/src/net_processing.cpp @@ -33,6 +33,7 @@ #include // For NDEBUG compile time check #include #include +#include #include #include @@ -5030,6 +5031,15 @@ bool PeerManagerImpl::ProcessMessages(CNode* pfrom, std::atomic& interrupt } CNetMessage& msg(msgs.front()); + TRACE6(net, inbound_message, + pfrom->GetId(), + pfrom->m_addr_name.c_str(), + pfrom->ConnectionTypeAsString().c_str(), + msg.m_type.c_str(), + msg.m_recv.size(), + 
msg.m_recv.data() + ); + if (gArgs.GetBoolArg("-capturemessages", false)) { CaptureMessage(pfrom->addr, msg.m_type, MakeUCharSpan(msg.m_recv), /* incoming */ true); } diff --git a/src/validation.cpp b/src/validation.cpp index 5808589e2a..a53f4ba15e 100644 --- a/src/validation.cpp +++ b/src/validation.cpp @@ -46,6 +46,7 @@ #include // For NDEBUG compile time check #include #include +#include #include #include #include @@ -2302,6 +2303,16 @@ bool CChainState::ConnectBlock(const CBlock& block, BlockValidationState& state, statsClient.gauge("blocks.tip.NumTransactions", block.vtx.size(), 1.0f); statsClient.gauge("blocks.tip.SigOps", nSigOps, 1.0f); + TRACE7(validation, block_connected, + block.GetHash().ToString().c_str(), + pindex->nHeight, + block.vtx.size(), + nInputs, + nSigOps, + GetTimeMicros() - nTimeStart, // in microseconds (µs) + block.GetHash().data() + ); + return true; } From 5718716cd28de1102876c52da67d770b9f4b1666 Mon Sep 17 00:00:00 2001 From: Kittywhiskers Van Gogh <63189531+kwvg@users.noreply.github.com> Date: Sun, 25 Aug 2024 16:46:04 +0000 Subject: [PATCH 02/14] merge bitcoin#22955: Rename fBlocksOnly, Add test `fBlocksOnly` has not been renamed as Dash uses the inv system for relaying far more than transaction data and the blocklist of invs in block-relay-only mode extends beyond what is conveyed by `reject_tx_invs` --- test/functional/p2p_blocksonly.py | 23 +++++++++++++---------- test/functional/test_framework/wallet.py | 8 ++++++++ 2 files changed, 21 insertions(+), 10 deletions(-) diff --git a/test/functional/p2p_blocksonly.py b/test/functional/p2p_blocksonly.py index 9e2f2bcba6..e7f9592e91 100755 --- a/test/functional/p2p_blocksonly.py +++ b/test/functional/p2p_blocksonly.py @@ -6,7 +6,7 @@ import time -from test_framework.messages import msg_tx +from test_framework.messages import msg_tx, msg_inv, CInv, MSG_TX from test_framework.p2p import P2PInterface, P2PTxInvStore from test_framework.test_framework import BitcoinTestFramework from test_framework.util import assert_equal @@ -15,15 +15,13 @@ from test_framework.wallet import MiniWallet class P2PBlocksOnly(BitcoinTestFramework): def set_test_params(self): - self.setup_clean_chain = True self.num_nodes = 1 self.extra_args = [["-blocksonly"]] def run_test(self): self.miniwallet = MiniWallet(self.nodes[0]) # Add enough mature utxos to the wallet, so that all txs spend confirmed coins - self.miniwallet.generate(2) - self.nodes[0].generate(100) + self.miniwallet.rescan_utxos() self.blocksonly_mode_tests() self.blocks_relay_conn_tests() @@ -35,11 +33,18 @@ class P2PBlocksOnly(BitcoinTestFramework): self.nodes[0].add_p2p_connection(P2PInterface()) tx, txid, tx_hex = self.check_p2p_tx_violation() + self.log.info('Check that tx invs also violate the protocol') + self.nodes[0].add_p2p_connection(P2PInterface()) + with self.nodes[0].assert_debug_log(['tx (0000000000000000000000000000000000000000000000000000000000001234) inv sent in violation of protocol, disconnecting peer']): + self.nodes[0].p2ps[0].send_message(msg_inv([CInv(t=MSG_TX, h=0x1234)])) + self.nodes[0].p2ps[0].wait_for_disconnect() + del self.nodes[0].p2ps[0] + self.log.info('Check that txs from rpc are not rejected and relayed to other peers') tx_relay_peer = self.nodes[0].add_p2p_connection(P2PInterface()) assert_equal(self.nodes[0].getpeerinfo()[0]['relaytxes'], True) assert_equal(self.nodes[0].testmempoolaccept([tx_hex])[0]['allowed'], True) - with self.nodes[0].assert_debug_log(['received getdata for: tx {} peer=1'.format(txid)]): + with 
self.nodes[0].assert_debug_log(['received getdata for: tx {} peer'.format(txid)]): self.nodes[0].sendrawtransaction(tx_hex) self.bump_mocktime(60) tx_relay_peer.wait_for_tx(txid) @@ -83,7 +88,7 @@ class P2PBlocksOnly(BitcoinTestFramework): # Ensure we disconnect if a block-relay-only connection sends us a transaction self.nodes[0].add_outbound_p2p_connection(P2PInterface(), p2p_idx=0, connection_type="block-relay-only") assert_equal(self.nodes[0].getpeerinfo()[0]['relaytxes'], False) - _, txid, tx_hex = self.check_p2p_tx_violation(index=2) + _, txid, tx_hex = self.check_p2p_tx_violation() self.log.info("Check that txs from RPC are not sent to blockrelay connection") conn = self.nodes[0].add_outbound_p2p_connection(P2PTxInvStore(), p2p_idx=1, connection_type="block-relay-only") @@ -96,11 +101,9 @@ class P2PBlocksOnly(BitcoinTestFramework): conn.sync_send_with_ping() assert(int(txid, 16) not in conn.get_invs()) - def check_p2p_tx_violation(self, index=1): + def check_p2p_tx_violation(self): self.log.info('Check that txs from P2P are rejected and result in disconnect') - input_txid = self.nodes[0].getblock(self.nodes[0].getblockhash(index), 2)['tx'][0]['txid'] - utxo_to_spend = self.miniwallet.get_utxo(txid=input_txid) - spendtx = self.miniwallet.create_self_transfer(from_node=self.nodes[0], utxo_to_spend=utxo_to_spend) + spendtx = self.miniwallet.create_self_transfer(from_node=self.nodes[0]) with self.nodes[0].assert_debug_log(['tx sent in violation of protocol peer=0']): self.nodes[0].p2ps[0].send_message(msg_tx(spendtx['tx'])) diff --git a/test/functional/test_framework/wallet.py b/test/functional/test_framework/wallet.py index 87068b6887..38d1301b22 100644 --- a/test/functional/test_framework/wallet.py +++ b/test/functional/test_framework/wallet.py @@ -78,6 +78,14 @@ class MiniWallet: self._address = ADDRESS_BCRT1_P2SH_OP_TRUE self._scriptPubKey = bytes.fromhex(self._test_node.validateaddress(self._address)['scriptPubKey']) + def rescan_utxos(self): + """Drop all utxos and rescan the utxo set""" + self._utxos = [] + res = self._test_node.scantxoutset(action="start", scanobjects=[f'raw({self._scriptPubKey.hex()})']) + assert_equal(True, res['success']) + for utxo in res['unspents']: + self._utxos.append({'txid': utxo['txid'], 'vout': utxo['vout'], 'value': utxo['amount']}) + def scan_blocks(self, *, start=1, num): """Scan the blocks for self._address outputs and add them to self._utxos""" for i in range(start, start + num): From bfdc9ad364ab5c8d4c3a44e766b4246e4dfefc13 Mon Sep 17 00:00:00 2001 From: Kittywhiskers Van Gogh <63189531+kwvg@users.noreply.github.com> Date: Wed, 27 Oct 2021 17:34:54 +0200 Subject: [PATCH 03/14] merge bitcoin#23375: more deterministic coin selection for coinbase UTXOs (oldest first) --- test/functional/test_framework/wallet.py | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/test/functional/test_framework/wallet.py b/test/functional/test_framework/wallet.py index 38d1301b22..6323b7f39f 100644 --- a/test/functional/test_framework/wallet.py +++ b/test/functional/test_framework/wallet.py @@ -84,7 +84,7 @@ class MiniWallet: res = self._test_node.scantxoutset(action="start", scanobjects=[f'raw({self._scriptPubKey.hex()})']) assert_equal(True, res['success']) for utxo in res['unspents']: - self._utxos.append({'txid': utxo['txid'], 'vout': utxo['vout'], 'value': utxo['amount']}) + self._utxos.append({'txid': utxo['txid'], 'vout': utxo['vout'], 'value': utxo['amount'], 'height': utxo['height']}) def scan_blocks(self, *, start=1, num): """Scan 
the blocks for self._address outputs and add them to self._utxos""" @@ -97,7 +97,7 @@ class MiniWallet: """Scan the tx for self._scriptPubKey outputs and add them to self._utxos""" for out in tx['vout']: if out['scriptPubKey']['hex'] == self._scriptPubKey.hex(): - self._utxos.append({'txid': tx['txid'], 'vout': out['n'], 'value': out['value']}) + self._utxos.append({'txid': tx['txid'], 'vout': out['n'], 'value': out['value'], 'height': 0}) def sign_tx(self, tx, fixed_length=True): """Sign tx that has been created by MiniWallet in P2PK mode""" @@ -118,8 +118,9 @@ class MiniWallet: """Generate blocks with coinbase outputs to the internal address, and append the outputs to the internal list""" blocks = self._test_node.generatetodescriptor(num_blocks, f'raw({self._scriptPubKey.hex()})') for b in blocks: - cb_tx = self._test_node.getblock(blockhash=b, verbosity=2)['tx'][0] - self._utxos.append({'txid': cb_tx['txid'], 'vout': 0, 'value': cb_tx['vout'][0]['value']}) + block_info = self._test_node.getblock(blockhash=b, verbosity=2) + cb_tx = block_info['tx'][0] + self._utxos.append({'txid': cb_tx['txid'], 'vout': 0, 'value': cb_tx['vout'][0]['value'], 'height': block_info['height']}) return blocks def get_address(self): @@ -151,7 +152,7 @@ class MiniWallet: def create_self_transfer(self, *, fee_rate=Decimal("0.003"), from_node, utxo_to_spend=None, mempool_valid=True, locktime=0, sequence=0): """Create and return a tx with the specified fee_rate. Fee may be exact or at most one satoshi higher than needed.""" - self._utxos = sorted(self._utxos, key=lambda k: k['value']) + self._utxos = sorted(self._utxos, key=lambda k: (k['value'], -k['height'])) utxo_to_spend = utxo_to_spend or self._utxos.pop() # Pick the largest utxo (if none provided) and hope it covers the fee if self._priv_key is None: vsize = Decimal(85) # anyone-can-spend From 644a47ef9a5886e6ec560b4deacc7b549a6272de Mon Sep 17 00:00:00 2001 From: Kittywhiskers Van Gogh <63189531+kwvg@users.noreply.github.com> Date: Mon, 18 Oct 2021 13:19:13 +0200 Subject: [PATCH 04/14] merge bitcoin#23302: drop GetHash().ToString() argument from the `validation:block_connected` tracepoint --- contrib/tracing/README.md | 7 +------ contrib/tracing/connectblock_benchmark.bt | 22 ++++++++++++++-------- doc/tracing.md | 9 +-------- src/validation.cpp | 7 +++---- 4 files changed, 19 insertions(+), 26 deletions(-) diff --git a/contrib/tracing/README.md b/contrib/tracing/README.md index 08cc7078df..99901f62b6 100644 --- a/contrib/tracing/README.md +++ b/contrib/tracing/README.md @@ -176,17 +176,12 @@ third acts as a duration threshold in milliseconds. When the `ConnectBlock()` function takes longer than the threshold, information about the block, is printed. For more details, see the header comment in the script. -By default, `bpftrace` limits strings to 64 bytes due to the limited stack size -in the kernel VM. Block hashes as zero-terminated hex strings are 65 bytes which -exceed the string limit. The string size limit can be set to 65 bytes with the -environment variable `BPFTRACE_STRLEN`. - The following command can be used to benchmark, for example, `ConnectBlock()` between height 20000 and 38000 on SigNet while logging all blocks that take longer than 25ms to connect. 
``` -$ BPFTRACE_STRLEN=65 bpftrace contrib/tracing/connectblock_benchmark.bt 20000 38000 25 +$ bpftrace contrib/tracing/connectblock_benchmark.bt 20000 38000 25 ``` In a different terminal, starting Dash Core in SigNet mode and with diff --git a/contrib/tracing/connectblock_benchmark.bt b/contrib/tracing/connectblock_benchmark.bt index f9b7bb52e3..14d79beb83 100755 --- a/contrib/tracing/connectblock_benchmark.bt +++ b/contrib/tracing/connectblock_benchmark.bt @@ -4,11 +4,8 @@ USAGE: - BPFTRACE_STRLEN=65 bpftrace contrib/tracing/connectblock_benchmark.bt + bpftrace contrib/tracing/connectblock_benchmark.bt - - The environment variable BPFTRACE_STRLEN needs to be set to 65 chars as - strings are limited to 64 chars by default. Hex strings with Dash block - hashes are 64 hex chars + 1 null-termination char. - sets the height at which the benchmark should start. Setting the start height to 0 starts the benchmark immediately, even before the first block is connected. @@ -23,7 +20,7 @@ EXAMPLES: - BPFTRACE_STRLEN=65 bpftrace contrib/tracing/connectblock_benchmark.bt 300000 680000 1000 + bpftrace contrib/tracing/connectblock_benchmark.bt 300000 680000 1000 When run together 'dashd -reindex', this benchmarks the time it takes to connect the blocks between height 300.000 and 680.000 (inclusive) and prints @@ -31,7 +28,7 @@ histogram with block connection times when the benchmark is finished. - BPFTRACE_STRLEN=65 bpftrace contrib/tracing/connectblock_benchmark.bt 0 0 500 + bpftrace contrib/tracing/connectblock_benchmark.bt 0 0 500 When running together 'dashd', all newly connected blocks that take longer than 500ms to connect are logged. A histogram with block @@ -107,14 +104,23 @@ usdt:./src/dashd:validation:block_connected /arg1 >= $1 && (arg1 <= $2 || $2 == */ usdt:./src/dashd:validation:block_connected / (uint64) arg5 / 1000> $3 / { - $hash_str = str(arg0); + $hash = arg0; $height = (int32) arg1; $transactions = (uint64) arg2; $inputs = (int32) arg3; $sigops = (int64) arg4; $duration = (int64) arg5; - printf("Block %d (%s) %4d tx %5d ins %5d sigops took %4d ms\n", $height, $hash_str, $transactions, $inputs, $sigops, (uint64) $duration / 1000); + + printf("Block %d (", $height); + /* Prints each byte of the block hash as hex in big-endian (the block-explorer format) */ + $p = $hash + 31; + unroll(32) { + $b = *(uint8*)$p; + printf("%02x", $b); + $p -= 1; + } + printf(") %4d tx %5d ins %5d sigops took %4d ms\n", $transactions, $inputs, $sigops, (uint64) $duration / 1000); } diff --git a/doc/tracing.md b/doc/tracing.md index 12c671680f..f8cce5c93f 100644 --- a/doc/tracing.md +++ b/doc/tracing.md @@ -101,19 +101,12 @@ Is called *after* a block is connected to the chain. Can, for example, be used to benchmark block connections together with `-reindex`. Arguments passed: -1. Block Header Hash as `pointer to C-style String` (64 characters) +1. Block Header Hash as `pointer to unsigned chars` (i.e. 32 bytes in little-endian) 2. Block Height as `int32` 3. Transactions in the Block as `uint64` 4. Inputs spend in the Block as `int32` 5. SigOps in the Block (excluding coinbase SigOps) `uint64` 6. Time it took to connect the Block in microseconds (µs) as `uint64` -7. Block Header Hash as `pointer to unsigned chars` (i.e. 32 bytes in little-endian) - -Note: The 7th argument can't be accessed by bpftrace and is purposefully chosen -to be the block header hash as bytes. See [bpftrace argument limit] for more -details. 
- -[bpftrace argument limit]: #bpftrace-argument-limit ## Adding tracepoints to Dash Core diff --git a/src/validation.cpp b/src/validation.cpp index a53f4ba15e..78333db21b 100644 --- a/src/validation.cpp +++ b/src/validation.cpp @@ -2303,14 +2303,13 @@ bool CChainState::ConnectBlock(const CBlock& block, BlockValidationState& state, statsClient.gauge("blocks.tip.NumTransactions", block.vtx.size(), 1.0f); statsClient.gauge("blocks.tip.SigOps", nSigOps, 1.0f); - TRACE7(validation, block_connected, - block.GetHash().ToString().c_str(), + TRACE6(validation, block_connected, + block.GetHash().data(), pindex->nHeight, block.vtx.size(), nInputs, nSigOps, - GetTimeMicros() - nTimeStart, // in microseconds (µs) - block.GetHash().data() + GetTimeMicros() - nTimeStart // in microseconds (µs) ); return true; From 6cc596b99a8b547abb0c9867d630ca059f07748f Mon Sep 17 00:00:00 2001 From: Kittywhiskers Van Gogh <63189531+kwvg@users.noreply.github.com> Date: Fri, 3 Sep 2021 18:46:13 +0530 Subject: [PATCH 05/14] merge bitcoin#22902: utxocache tracepoints --- contrib/tracing/README.md | 59 ++++++++++++++ contrib/tracing/log_utxocache_flush.py | 107 +++++++++++++++++++++++++ contrib/tracing/log_utxos.bt | 86 ++++++++++++++++++++ doc/tracing.md | 49 +++++++++++ src/coins.cpp | 19 +++++ src/validation.cpp | 7 ++ 6 files changed, 327 insertions(+) create mode 100755 contrib/tracing/log_utxocache_flush.py create mode 100755 contrib/tracing/log_utxos.bt diff --git a/contrib/tracing/README.md b/contrib/tracing/README.md index 99901f62b6..581fb95863 100644 --- a/contrib/tracing/README.md +++ b/contrib/tracing/README.md @@ -234,3 +234,62 @@ Histogram of block connection times in milliseconds (ms). [16, 32) 9 | | [32, 64) 4 | | ``` + +### log_utxocache_flush.py + +A BCC Python script to log the cache and index flushes. Based on the +`utxocache:flush` tracepoint. + +```bash +$ python3 contrib/tracing/log_utxocache_flush.py ./src/dashd +``` + +``` +Logging utxocache flushes. Ctrl-C to end... +Duration (µs) Mode Coins Count Memory Usage Prune Full Flush +0 PERIODIC 28484 3929.87 kB False False +1 PERIODIC 28485 3930.00 kB False False +0 PERIODIC 28489 3930.51 kB False False +1 PERIODIC 28490 3930.64 kB False False +0 PERIODIC 28491 3930.77 kB False False +0 PERIODIC 28491 3930.77 kB False False +0 PERIODIC 28496 3931.41 kB False False +1 PERIODIC 28496 3931.41 kB False False +0 PERIODIC 28497 3931.54 kB False False +1 PERIODIC 28497 3931.54 kB False False +1 PERIODIC 28499 3931.79 kB False False +. +. +. +53788 ALWAYS 30076 4136.27 kB False False +7463 ALWAYS 0 245.84 kB False False +``` + +### log_utxos.bt + +A `bpftrace` script to log information about the coins that are added, spent, or +uncached from the UTXO set. Based on the `utxocache:add`, `utxocache:spend` and +`utxocache:uncache` tracepoints. + +```bash +$ bpftrace contrib/tracing/log_utxos.bt +``` + +It should produce an output similar to the following. + +```bash +Attaching 4 probes... 
+OP Outpoint Value Height Coinbase +Added 6ba9ad857e1ef2eb2a2c94f06813c414c7ab273e3d6bd7ad64e000315a887e7c:1 10000 2094512 No +Spent fa7dc4db56637a151f6649d8f26732956d1c5424c82aae400a83d02b2cc2c87b:0 182264897 2094512 No +Added eeb2f099b1af6a2a12e6ddd2eeb16fc5968582241d7f08ba202d28b60ac264c7:0 10000 2094512 No +Added eeb2f099b1af6a2a12e6ddd2eeb16fc5968582241d7f08ba202d28b60ac264c7:1 182254756 2094512 No +Added a0c7f4ec9cccef2d89672a624a4e6c8237a17572efdd4679eea9e9ee70d2db04:0 10072679 2094513 Yes +Spent 25e0df5cc1aeb1b78e6056bf403e5e8b7e41f138060ca0a50a50134df0549a5e:2 540 2094508 No +Spent 42f383c04e09c26a2378272ec33aa0c1bf4883ca5ab739e8b7e06be5a5787d61:1 3848399 2007724 No +Added f85e3b4b89270863a389395cc9a4123e417ab19384cef96533c6649abd6b0561:0 3788399 2094513 No +Added f85e3b4b89270863a389395cc9a4123e417ab19384cef96533c6649abd6b0561:2 540 2094513 No +Spent a05880b8c77971ed0b9f73062c7c4cdb0ff3856ab14cbf8bc481ed571cd34b83:1 5591281046 2094511 No +Added eb689865f7d957938978d6207918748f74e6aa074f47874724327089445b0960:0 5589696005 2094513 No +Added eb689865f7d957938978d6207918748f74e6aa074f47874724327089445b0960:1 1565556 2094513 No +``` diff --git a/contrib/tracing/log_utxocache_flush.py b/contrib/tracing/log_utxocache_flush.py new file mode 100755 index 0000000000..b602c21f1c --- /dev/null +++ b/contrib/tracing/log_utxocache_flush.py @@ -0,0 +1,107 @@ +#!/usr/bin/env python3 + +import sys +import ctypes +from bcc import BPF, USDT + +"""Example logging Dash Core utxo set cache flushes utilizing + the utxocache:flush tracepoint.""" + +# USAGE: ./contrib/tracing/log_utxocache_flush.py path/to/dashd + +# BCC: The C program to be compiled to an eBPF program (by BCC) and loaded into +# a sandboxed Linux kernel VM. +program = """ +# include +struct data_t +{ + u64 duration; + u32 mode; + u64 coins_count; + u64 coins_mem_usage; + bool is_flush_prune; + bool is_full_flush; +}; + +// BPF perf buffer to push the data to user space. +BPF_PERF_OUTPUT(flush); + +int trace_flush(struct pt_regs *ctx) { + struct data_t data = {}; + bpf_usdt_readarg(1, ctx, &data.duration); + bpf_usdt_readarg(2, ctx, &data.mode); + bpf_usdt_readarg(3, ctx, &data.coins_count); + bpf_usdt_readarg(4, ctx, &data.coins_mem_usage); + bpf_usdt_readarg(5, ctx, &data.is_flush_prune); + bpf_usdt_readarg(5, ctx, &data.is_full_flush); + flush.perf_submit(ctx, &data, sizeof(data)); + return 0; +} +""" + +FLUSH_MODES = [ + 'NONE', + 'IF_NEEDED', + 'PERIODIC', + 'ALWAYS' +] + + +class Data(ctypes.Structure): + # define output data structure corresponding to struct data_t + _fields_ = [ + ("duration", ctypes.c_uint64), + ("mode", ctypes.c_uint32), + ("coins_count", ctypes.c_uint64), + ("coins_mem_usage", ctypes.c_uint64), + ("is_flush_prune", ctypes.c_bool), + ("is_full_flush", ctypes.c_bool) + ] + + +def print_event(event): + print("%-15d %-10s %-15d %-15s %-8s %-8s" % ( + event.duration, + FLUSH_MODES[event.mode], + event.coins_count, + "%.2f kB" % (event.coins_mem_usage/1000), + event.is_flush_prune, + event.is_full_flush + )) + + +def main(bitcoind_path): + bitcoind_with_usdts = USDT(path=str(bitcoind_path)) + + # attaching the trace functions defined in the BPF program + # to the tracepoints + bitcoind_with_usdts.enable_probe( + probe="flush", fn_name="trace_flush") + b = BPF(text=program, usdt_contexts=[bitcoind_with_usdts]) + + def handle_flush(_, data, size): + """ Coins Flush handler. 
+ Called each time coin caches and indexes are flushed.""" + event = ctypes.cast(data, ctypes.POINTER(Data)).contents + print_event(event) + + b["flush"].open_perf_buffer(handle_flush) + print("Logging utxocache flushes. Ctrl-C to end...") + print("%-15s %-10s %-15s %-15s %-8s %-8s" % ("Duration (µs)", "Mode", + "Coins Count", "Memory Usage", + "Prune", "Full Flush")) + + while True: + try: + b.perf_buffer_poll() + except KeyboardInterrupt: + exit(0) + + +if __name__ == "__main__": + if len(sys.argv) < 2: + print("USAGE: ", sys.argv[0], "path/to/dashd") + exit(1) + + path = sys.argv[1] + main(path) diff --git a/contrib/tracing/log_utxos.bt b/contrib/tracing/log_utxos.bt new file mode 100755 index 0000000000..cf2ba00a4d --- /dev/null +++ b/contrib/tracing/log_utxos.bt @@ -0,0 +1,86 @@ +#!/usr/bin/env bpftrace + +/* + + USAGE: + + bpftrace contrib/tracing/log_utxos.bt + + This script requires a 'dashd' binary compiled with eBPF support and the + 'utxochache' tracepoints. By default, it's assumed that 'dashd' is + located in './src/dashd'. This can be modified in the script below. + + NOTE: requires bpftrace v0.12.0 or above. +*/ + +BEGIN +{ + printf("%-7s %-71s %16s %7s %8s\n", + "OP", "Outpoint", "Value", "Height", "Coinbase"); +} + +/* + Attaches to the 'utxocache:add' tracepoint and prints additions to the UTXO set cache. +*/ +usdt:./src/dashd:utxocache:add +{ + $txid = arg0; + $index = (uint32)arg1; + $height = (uint32)arg2; + $value = (int64)arg3; + $isCoinbase = arg4; + + printf("Added "); + $p = $txid + 31; + unroll(32) { + $b = *(uint8*)$p; + printf("%02x", $b); + $p-=1; + } + + printf(":%-6d %16ld %7d %s\n", $index, $value, $height, ($isCoinbase ? "Yes" : "No" )); +} + +/* + Attaches to the 'utxocache:spent' tracepoint and prints spents from the UTXO set cache. +*/ +usdt:./src/dashd:utxocache:spent +{ + $txid = arg0; + $index = (uint32)arg1; + $height = (uint32)arg2; + $value = (int64)arg3; + $isCoinbase = arg4; + + printf("Spent "); + $p = $txid + 31; + unroll(32) { + $b = *(uint8*)$p; + printf("%02x", $b); + $p-=1; + } + + printf(":%-6d %16ld %7d %s\n", $index, $value, $height, ($isCoinbase ? "Yes" : "No" )); +} + +/* + Attaches to the 'utxocache:uncache' tracepoint and uncache UTXOs from the UTXO set cache. +*/ +usdt:./src/dashd:utxocache:uncache +{ + $txid = arg0; + $index = (uint32)arg1; + $height = (uint32)arg2; + $value = (int64)arg3; + $isCoinbase = arg4; + + printf("Uncache "); + $p = $txid + 31; + unroll(32) { + $b = *(uint8*)$p; + printf("%02x", $b); + $p-=1; + } + + printf(":%-6d %16ld %7d %s\n", $index, $value, $height, ($isCoinbase ? "Yes" : "No" )); +} diff --git a/doc/tracing.md b/doc/tracing.md index f8cce5c93f..5b25eb3816 100644 --- a/doc/tracing.md +++ b/doc/tracing.md @@ -108,6 +108,55 @@ Arguments passed: 5. SigOps in the Block (excluding coinbase SigOps) `uint64` 6. Time it took to connect the Block in microseconds (µs) as `uint64` +### Context `utxocache` + +#### Tracepoint `utxocache:flush` + +Is called *after* the caches and indexes are flushed depending on the mode +`CChainState::FlushStateToDisk` is called with. + +Arguments passed: +1. Duration in microseconds as `int64` +2. Flush state mode as `uint32`. It's an enumerator class with values `0` + (`NONE`), `1` (`IF_NEEDED`), `2` (`PERIODIC`), `3` (`ALWAYS`) +3. Number of coins flushed as `uint64` +4. Memory usage in bytes as `uint64` +5. If the flush was pruned as `bool` +6. If it was full flush as `bool` + +#### Tracepoint `utxocache:add` + +It is called when a new coin is added to the UTXO cache. 
+ +Arguments passed: +1. Transaction ID (hash) as `pointer to unsigned chars` (i.e. 32 bytes in little-endian) +2. Output index as `uint32` +3. Block height the coin was added to the UTXO-set as `uint32` +4. Value of the coin as `int64` +5. If the coin is a coinbase as `bool` + +#### Tracepoint `utxocache:spent` + +It is called when a coin is spent from the UTXO cache. + +Arguments passed: +1. Transaction ID (hash) as `pointer to unsigned chars` (i.e. 32 bytes in little-endian) +2. Output index as `uint32` +3. Block height the coin was spent, as `uint32` +4. Value of the coin as `int64` +5. If the coin is a coinbase as `bool` + +#### Tracepoint `utxocache:uncache` + +It is called when the UTXO with the given outpoint is removed from the cache. + +Arguments passed: +1. Transaction ID (hash) as `pointer to unsigned chars` (i.e. 32 bytes in little-endian) +2. Output index as `uint32` +3. Block height the coin was uncached, as `uint32` +4. Value of the coin as `int64` +. If the coin is a coinbase as `bool` + ## Adding tracepoints to Dash Core To add a new tracepoint, `#include ` in the compilation unit where diff --git a/src/coins.cpp b/src/coins.cpp index 4a1cccd0ab..e0e455d632 100644 --- a/src/coins.cpp +++ b/src/coins.cpp @@ -7,6 +7,7 @@ #include #include #include +#include #include bool CCoinsView::GetCoin(const COutPoint &outpoint, Coin &coin) const { return false; } @@ -98,6 +99,12 @@ void CCoinsViewCache::AddCoin(const COutPoint &outpoint, Coin&& coin, bool possi it->second.coin = std::move(coin); it->second.flags |= CCoinsCacheEntry::DIRTY | (fresh ? CCoinsCacheEntry::FRESH : 0); cachedCoinsUsage += it->second.coin.DynamicMemoryUsage(); + TRACE5(utxocache, add, + outpoint.hash.data(), + (uint32_t)outpoint.n, + (uint32_t)coin.nHeight, + (int64_t)coin.out.nValue, + (bool)coin.IsCoinBase()); } void CCoinsViewCache::EmplaceCoinInternalDANGER(COutPoint&& outpoint, Coin&& coin) { @@ -123,6 +130,12 @@ bool CCoinsViewCache::SpendCoin(const COutPoint &outpoint, Coin* moveout) { CCoinsMap::iterator it = FetchCoin(outpoint); if (it == cacheCoins.end()) return false; cachedCoinsUsage -= it->second.coin.DynamicMemoryUsage(); + TRACE5(utxocache, spent, + outpoint.hash.data(), + (uint32_t)outpoint.n, + (uint32_t)it->second.coin.nHeight, + (int64_t)it->second.coin.out.nValue, + (bool)it->second.coin.IsCoinBase()); if (moveout) { *moveout = std::move(it->second.coin); } @@ -273,6 +286,12 @@ void CCoinsViewCache::Uncache(const COutPoint& hash) CCoinsMap::iterator it = cacheCoins.find(hash); if (it != cacheCoins.end() && it->second.flags == 0) { cachedCoinsUsage -= it->second.coin.DynamicMemoryUsage(); + TRACE5(utxocache, uncache, + hash.hash.data(), + (uint32_t)hash.n, + (uint32_t)it->second.coin.nHeight, + (int64_t)it->second.coin.out.nValue, + (bool)it->second.coin.IsCoinBase()); cacheCoins.erase(it); } } diff --git a/src/validation.cpp b/src/validation.cpp index 78333db21b..5ba14db708 100644 --- a/src/validation.cpp +++ b/src/validation.cpp @@ -2472,6 +2472,13 @@ bool CChainState::FlushStateToDisk( nLastFlush = nNow; full_flush_completed = true; } + TRACE6(utxocache, flush, + (int64_t)(GetTimeMicros() - nNow.count()), // in microseconds (µs) + (u_int32_t)mode, + (u_int64_t)coins_count, + (u_int64_t)coins_mem_usage, + (bool)fFlushForPrune, + (bool)fDoFullFlush); } if (full_flush_completed) { // Update best block in wallet (so we can detect restored wallets). 
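As a quick illustration of how the `utxocache:add` and `utxocache:spent` argument order documented above can be consumed, here is a minimal `bpftrace` sketch. It assumes a tracepoint-enabled `dashd` binary at `./src/dashd` (the convention used by the scripts in this series) and root privileges; the map names are arbitrary.

```
$ bpftrace -e '
  /* arg3 is the coin value (int64) per the utxocache argument order above */
  usdt:./src/dashd:utxocache:add   { @adds   = count(); @value_added = sum(arg3); }
  usdt:./src/dashd:utxocache:spent { @spends = count(); @value_spent = sum(arg3); }'
```

On exit (Ctrl-C), `bpftrace` prints the populated maps, giving the number of coins added to and spent from the cache and their summed values since the probes were attached.
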
From 264e02fcc72e55fd42c94847e73dd6d792925bf9 Mon Sep 17 00:00:00 2001 From: Kittywhiskers Van Gogh <63189531+kwvg@users.noreply.github.com> Date: Tue, 7 Dec 2021 14:46:48 +0100 Subject: [PATCH 06/14] merge bitcoin#23724: add systemtap's sys/sdt.h as depends for GUIX builds with USDT tracepoints --- configure.ac | 21 ++++++------- depends/Makefile | 5 ++- depends/config.site.in | 8 +++-- depends/packages/packages.mk | 2 ++ depends/packages/systemtap.mk | 12 +++++++ ..._ASM_SECTION_AUTOGROUP_SUPPORT_check.patch | 31 +++++++++++++++++++ doc/dependencies.md | 2 +- 7 files changed, 66 insertions(+), 15 deletions(-) create mode 100644 depends/packages/systemtap.mk create mode 100644 depends/patches/systemtap/remove_SDT_ASM_SECTION_AUTOGROUP_SUPPORT_check.patch diff --git a/configure.ac b/configure.ac index c7a008fb81..d0617acca8 100644 --- a/configure.ac +++ b/configure.ac @@ -138,11 +138,11 @@ AC_ARG_WITH([bdb], [use_bdb=$withval], [use_bdb=auto]) -AC_ARG_ENABLE([ebpf], - [AS_HELP_STRING([--enable-ebpf], - [enable eBPF tracing (default is yes if sys/sdt.h is found)])], - [use_ebpf=$enableval], - [use_ebpf=yes]) +AC_ARG_ENABLE([usdt], + [AS_HELP_STRING([--enable-usdt], + [enable tracepoints for Userspace, Statically Defined Tracing (default is yes if sys/sdt.h is found)])], + [use_usdt=$enableval], + [use_usdt=yes]) AC_ARG_WITH([miniupnpc], [AS_HELP_STRING([--with-miniupnpc], @@ -1408,15 +1408,15 @@ if test x$enable_wallet != xno; then fi fi -if test x$use_ebpf != xno; then - AC_MSG_CHECKING([whether eBPF tracepoints are supported]) +if test x$use_usdt != xno; then + AC_MSG_CHECKING([whether Userspace, Statically Defined Tracing tracepoints are supported]) AC_COMPILE_IFELSE([ AC_LANG_PROGRAM( [#include ], [DTRACE_PROBE("context", "event");] )], - [AC_MSG_RESULT(yes); have_sdt=yes; AC_DEFINE([ENABLE_TRACING], [1], [Define to 1 to enable eBPF user static defined tracepoints])], - [AC_MSG_RESULT(no); have_sdt=no;] + [AC_MSG_RESULT(yes); AC_DEFINE([ENABLE_TRACING], [1], [Define to 1 to enable tracepoints for Userspace, Statically Defined Tracing])], + [AC_MSG_RESULT(no); use_usdt=no;] ) fi @@ -1795,7 +1795,6 @@ AM_CONDITIONAL([TARGET_WINDOWS], [test x$TARGET_OS = xwindows]) AM_CONDITIONAL([ENABLE_WALLET],[test x$enable_wallet = xyes]) AM_CONDITIONAL([USE_SQLITE], [test "x$use_sqlite" = "xyes"]) AM_CONDITIONAL([USE_BDB], [test "x$use_bdb" = "xyes"]) -AM_CONDITIONAL([ENABLE_TRACING],[test x$have_sdt = xyes]) AM_CONDITIONAL([ENABLE_TESTS],[test x$BUILD_TEST = xyes]) AM_CONDITIONAL([ENABLE_FUZZ],[test x$enable_fuzz = xyes]) AM_CONDITIONAL([ENABLE_FUZZ_BINARY],[test x$enable_fuzz_binary = xyes]) @@ -1943,7 +1942,7 @@ echo " with bench = $use_bench" echo " with upnp = $use_upnp" echo " with natpmp = $use_natpmp" echo " use asm = $use_asm" -echo " ebpf tracing = $have_sdt" +echo " USDT tracing = $use_usdt" echo " sanitizers = $use_sanitizers" echo " debug enabled = $enable_debug" echo " stacktraces enabled = $enable_stacktraces" diff --git a/depends/Makefile b/depends/Makefile index 69ceb518cc..5f427bb2e3 100644 --- a/depends/Makefile +++ b/depends/Makefile @@ -39,6 +39,7 @@ NO_SQLITE ?= NO_WALLET ?= NO_ZMQ ?= NO_UPNP ?= +NO_USDT ?= NO_NATPMP ?= MULTIPROCESS ?= FALLBACK_DOWNLOAD_PATH ?= http://dash-depends-sources.s3-website-us-west-2.amazonaws.com @@ -155,8 +156,9 @@ natpmp_packages_$(NO_NATPMP) = $(natpmp_packages) zmq_packages_$(NO_ZMQ) = $(zmq_packages) multiprocess_packages_$(MULTIPROCESS) = $(multiprocess_packages) +usdt_packages_$(NO_USDT) = $(usdt_$(host_os)_packages) -packages += 
$($(host_arch)_$(host_os)_packages) $($(host_os)_packages) $(qt_packages_) $(wallet_packages_) $(upnp_packages_) $(natpmp_packages_) +packages += $($(host_arch)_$(host_os)_packages) $($(host_os)_packages) $(qt_packages_) $(wallet_packages_) $(upnp_packages_) $(natpmp_packages_) $(usdt_packages_) native_packages += $($(host_arch)_$(host_os)_native_packages) $($(host_os)_native_packages) ifneq ($(zmq_packages_),) @@ -234,6 +236,7 @@ $(host_prefix)/share/config.site : config.site.in $(host_prefix)/.stamp_$(final_ -e 's|@no_bdb@|$(NO_BDB)|' \ -e 's|@no_sqlite@|$(NO_SQLITE)|' \ -e 's|@no_upnp@|$(NO_UPNP)|' \ + -e 's|@no_usdt@|$(NO_USDT)|' \ -e 's|@no_natpmp@|$(NO_NATPMP)|' \ -e 's|@multiprocess@|$(MULTIPROCESS)|' \ -e 's|@debug@|$(DEBUG)|' \ diff --git a/depends/config.site.in b/depends/config.site.in index a20e54068e..bcfc2af171 100644 --- a/depends/config.site.in +++ b/depends/config.site.in @@ -62,12 +62,16 @@ if test -z "$with_gui" && test -n "@no_qt@"; then with_gui=no fi +if test -n "@debug@" && test -z "@no_qt@" && test "x$with_gui" != xno; then + with_gui=qt5_debug +fi + if test -z "$enable_zmq" && test -n "@no_zmq@"; then enable_zmq=no fi -if test -n "@debug@" && test -z "@no_qt@" && test "x$with_gui" != xno; then - with_gui=qt5_debug +if test -z "$enable_usdt" && test -n "@no_usdt@"; then + enable_usdt=no fi if test "@host_os@" = darwin; then diff --git a/depends/packages/packages.mk b/depends/packages/packages.mk index c4ca002ccb..bdf235b19b 100644 --- a/depends/packages/packages.mk +++ b/depends/packages/packages.mk @@ -22,6 +22,8 @@ natpmp_packages=libnatpmp multiprocess_packages = libmultiprocess capnp multiprocess_native_packages = native_libmultiprocess native_capnp +usdt_linux_packages=systemtap + darwin_native_packages = native_ds_store native_mac_alias ifneq ($(build_os),darwin) diff --git a/depends/packages/systemtap.mk b/depends/packages/systemtap.mk new file mode 100644 index 0000000000..833e75b978 --- /dev/null +++ b/depends/packages/systemtap.mk @@ -0,0 +1,12 @@ +package=systemtap +$(package)_version=4.5 +$(package)_download_path=https://sourceware.org/systemtap/ftp/releases/ +$(package)_file_name=$(package)-$($(package)_version).tar.gz +$(package)_sha256_hash=75078ed37e0dd2a769c9d1f9394170b2d9f4d7daa425f43ca80c13bad6cfc925 +$(package)_patches=remove_SDT_ASM_SECTION_AUTOGROUP_SUPPORT_check.patch + +define $(package)_preprocess_cmds + patch -p1 < $($(package)_patch_dir)/remove_SDT_ASM_SECTION_AUTOGROUP_SUPPORT_check.patch && \ + mkdir -p $($(package)_staging_prefix_dir)/include/sys && \ + cp includes/sys/sdt.h $($(package)_staging_prefix_dir)/include/sys/sdt.h +endef diff --git a/depends/patches/systemtap/remove_SDT_ASM_SECTION_AUTOGROUP_SUPPORT_check.patch b/depends/patches/systemtap/remove_SDT_ASM_SECTION_AUTOGROUP_SUPPORT_check.patch new file mode 100644 index 0000000000..eae0cf72d6 --- /dev/null +++ b/depends/patches/systemtap/remove_SDT_ASM_SECTION_AUTOGROUP_SUPPORT_check.patch @@ -0,0 +1,31 @@ +commit b92d4c121486f3c6e8a2cea537c53eb09894479a +Author: 0xb10c <0xb10c@gmail.com> +Date: Tue Dec 7 11:02:07 2021 +0100 + + Remove _SDT_ASM_SECTION_AUTOGROUP_SUPPORT check + + We assume that the assembler supports "?" in .pushsection directives. + This enables us to skip configure and make. + + See https://github.com/bitcoin/bitcoin/issues/23297. 
+ +diff --git a/includes/sys/sdt.h b/includes/sys/sdt.h +index 97766e710..352b4ee25 100644 +--- a/includes/sys/sdt.h ++++ b/includes/sys/sdt.h +@@ -230,12 +230,10 @@ __extension__ extern unsigned long long __sdt_unsp; + nice with code in COMDAT sections, which comes up in C++ code. + Without that assembler support, some combinations of probe placements + in certain kinds of C++ code may produce link-time errors. */ +-#include "sdt-config.h" +-#if _SDT_ASM_SECTION_AUTOGROUP_SUPPORT ++/* PATCH: We assume that the assembler supports the feature. This ++ enables us to skip configure and make. In turn, this means we ++ require fewer dependencies and have shorter depend build times. */ + # define _SDT_ASM_AUTOGROUP "?" +-#else +-# define _SDT_ASM_AUTOGROUP "" +-#endif + + #define _SDT_ASM_BODY(provider, name, pack_args, args) \ + _SDT_ASM_1(990: _SDT_NOP) \ diff --git a/doc/dependencies.md b/doc/dependencies.md index 4d7b9c49e0..4e24e49967 100644 --- a/doc/dependencies.md +++ b/doc/dependencies.md @@ -24,7 +24,7 @@ These are the dependencies currently used by Dash Core. You can find instruction | Qt | [5.15.11](https://download.qt.io/official_releases/qt/) | [5.11.3](https://github.com/bitcoin/bitcoin/pull/24132) | No | | | | SQLite | [3.32.1](https://sqlite.org/download.html) | [3.7.17](https://github.com/bitcoin/bitcoin/pull/19077) | | | | | XCB | | | | | [Yes](https://github.com/dashpay/dash/blob/develop/depends/packages/qt.mk) (Linux only) | -| systemtap ([tracing](tracing.md))| | | | | | +| systemtap ([tracing](tracing.md))| [4.5](https://sourceware.org/systemtap/ftp/releases/) | | | | | | xkbcommon | | | | | [Yes](https://github.com/dashpay/dash/blob/develop/depends/packages/qt.mk) (Linux only) | | ZeroMQ | [4.3.1](https://github.com/zeromq/libzmq/releases) | 4.0.0 | No | | | | zlib | | | | | [Yes](https://github.com/dashpay/dash/blob/develop/depends/packages/qt.mk) | From c3d7e3a1920416f82e953204358e50c303e4abed Mon Sep 17 00:00:00 2001 From: Kittywhiskers Van Gogh <63189531+kwvg@users.noreply.github.com> Date: Sat, 24 Aug 2024 16:28:11 +0000 Subject: [PATCH 07/14] merge bitcoin#26944: fix systemtap download URL --- depends/packages/systemtap.mk | 2 +- doc/dependencies.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/depends/packages/systemtap.mk b/depends/packages/systemtap.mk index 833e75b978..498db4982f 100644 --- a/depends/packages/systemtap.mk +++ b/depends/packages/systemtap.mk @@ -1,6 +1,6 @@ package=systemtap $(package)_version=4.5 -$(package)_download_path=https://sourceware.org/systemtap/ftp/releases/ +$(package)_download_path=https://sourceware.org/ftp/systemtap/releases/ $(package)_file_name=$(package)-$($(package)_version).tar.gz $(package)_sha256_hash=75078ed37e0dd2a769c9d1f9394170b2d9f4d7daa425f43ca80c13bad6cfc925 $(package)_patches=remove_SDT_ASM_SECTION_AUTOGROUP_SUPPORT_check.patch diff --git a/doc/dependencies.md b/doc/dependencies.md index 4e24e49967..e50f8113a1 100644 --- a/doc/dependencies.md +++ b/doc/dependencies.md @@ -24,7 +24,7 @@ These are the dependencies currently used by Dash Core. 
You can find instruction | Qt | [5.15.11](https://download.qt.io/official_releases/qt/) | [5.11.3](https://github.com/bitcoin/bitcoin/pull/24132) | No | | | | SQLite | [3.32.1](https://sqlite.org/download.html) | [3.7.17](https://github.com/bitcoin/bitcoin/pull/19077) | | | | | XCB | | | | | [Yes](https://github.com/dashpay/dash/blob/develop/depends/packages/qt.mk) (Linux only) | -| systemtap ([tracing](tracing.md))| [4.5](https://sourceware.org/systemtap/ftp/releases/) | | | | | +| systemtap ([tracing](tracing.md))| [4.5](https://sourceware.org/ftp/systemtap/releases/) | | | | | | xkbcommon | | | | | [Yes](https://github.com/dashpay/dash/blob/develop/depends/packages/qt.mk) (Linux only) | | ZeroMQ | [4.3.1](https://github.com/zeromq/libzmq/releases) | 4.0.0 | No | | | | zlib | | | | | [Yes](https://github.com/dashpay/dash/blob/develop/depends/packages/qt.mk) | From 5b10a5a2fd2e0d4506f87268f1559f1d8f11adff Mon Sep 17 00:00:00 2001 From: Kittywhiskers Van Gogh <63189531+kwvg@users.noreply.github.com> Date: Wed, 22 Dec 2021 11:57:44 +0530 Subject: [PATCH 08/14] merge bitcoin#23907: utxocache tracepoints follow up --- contrib/tracing/README.md | 31 ++++++++++-------------- contrib/tracing/log_utxocache_flush.py | 21 +++++++--------- contrib/tracing/log_utxos.bt | 2 +- doc/tracing.md | 33 +++++++++++++++++--------- src/validation.cpp | 13 +++++----- 5 files changed, 50 insertions(+), 50 deletions(-) diff --git a/contrib/tracing/README.md b/contrib/tracing/README.md index 581fb95863..063ba2b140 100644 --- a/contrib/tracing/README.md +++ b/contrib/tracing/README.md @@ -237,7 +237,7 @@ Histogram of block connection times in milliseconds (ms). ### log_utxocache_flush.py -A BCC Python script to log the cache and index flushes. Based on the +A BCC Python script to log the UTXO cache flushes. Based on the `utxocache:flush` tracepoint. ```bash @@ -246,23 +246,10 @@ $ python3 contrib/tracing/log_utxocache_flush.py ./src/dashd ``` Logging utxocache flushes. Ctrl-C to end... -Duration (µs) Mode Coins Count Memory Usage Prune Full Flush -0 PERIODIC 28484 3929.87 kB False False -1 PERIODIC 28485 3930.00 kB False False -0 PERIODIC 28489 3930.51 kB False False -1 PERIODIC 28490 3930.64 kB False False -0 PERIODIC 28491 3930.77 kB False False -0 PERIODIC 28491 3930.77 kB False False -0 PERIODIC 28496 3931.41 kB False False -1 PERIODIC 28496 3931.41 kB False False -0 PERIODIC 28497 3931.54 kB False False -1 PERIODIC 28497 3931.54 kB False False -1 PERIODIC 28499 3931.79 kB False False -. -. -. -53788 ALWAYS 30076 4136.27 kB False False -7463 ALWAYS 0 245.84 kB False False +Duration (µs) Mode Coins Count Memory Usage Prune +730451 IF_NEEDED 22990 3323.54 kB True +637657 ALWAYS 122320 17124.80 kB False +81349 ALWAYS 0 1383.49 kB False ``` ### log_utxos.bt @@ -275,7 +262,13 @@ uncached from the UTXO set. Based on the `utxocache:add`, `utxocache:spend` and $ bpftrace contrib/tracing/log_utxos.bt ``` -It should produce an output similar to the following. +This should produce an output similar to the following. If you see bpftrace +warnings like `Lost 24 events`, the eBPF perf ring-buffer is filled faster +than it is being read. You can increase the ring-buffer size by setting the +ENV variable `BPFTRACE_PERF_RB_PAGES` (default 64) at a cost of higher +memory usage. See the [bpftrace reference guide] for more information. + +[bpftrace reference guide]: https://github.com/iovisor/bpftrace/blob/master/docs/reference_guide.md#98-bpftrace_perf_rb_pages ```bash Attaching 4 probes... 
diff --git a/contrib/tracing/log_utxocache_flush.py b/contrib/tracing/log_utxocache_flush.py index b602c21f1c..8985d8fa73 100755 --- a/contrib/tracing/log_utxocache_flush.py +++ b/contrib/tracing/log_utxocache_flush.py @@ -13,14 +13,14 @@ from bcc import BPF, USDT # a sandboxed Linux kernel VM. program = """ # include + struct data_t { u64 duration; u32 mode; u64 coins_count; u64 coins_mem_usage; - bool is_flush_prune; - bool is_full_flush; + bool is_flush_for_prune; }; // BPF perf buffer to push the data to user space. @@ -32,8 +32,7 @@ int trace_flush(struct pt_regs *ctx) { bpf_usdt_readarg(2, ctx, &data.mode); bpf_usdt_readarg(3, ctx, &data.coins_count); bpf_usdt_readarg(4, ctx, &data.coins_mem_usage); - bpf_usdt_readarg(5, ctx, &data.is_flush_prune); - bpf_usdt_readarg(5, ctx, &data.is_full_flush); + bpf_usdt_readarg(5, ctx, &data.is_flush_for_prune); flush.perf_submit(ctx, &data, sizeof(data)); return 0; } @@ -54,19 +53,17 @@ class Data(ctypes.Structure): ("mode", ctypes.c_uint32), ("coins_count", ctypes.c_uint64), ("coins_mem_usage", ctypes.c_uint64), - ("is_flush_prune", ctypes.c_bool), - ("is_full_flush", ctypes.c_bool) + ("is_flush_for_prune", ctypes.c_bool) ] def print_event(event): - print("%-15d %-10s %-15d %-15s %-8s %-8s" % ( + print("%-15d %-10s %-15d %-15s %-8s" % ( event.duration, FLUSH_MODES[event.mode], event.coins_count, "%.2f kB" % (event.coins_mem_usage/1000), - event.is_flush_prune, - event.is_full_flush + event.is_flush_for_prune )) @@ -87,9 +84,9 @@ def main(bitcoind_path): b["flush"].open_perf_buffer(handle_flush) print("Logging utxocache flushes. Ctrl-C to end...") - print("%-15s %-10s %-15s %-15s %-8s %-8s" % ("Duration (µs)", "Mode", - "Coins Count", "Memory Usage", - "Prune", "Full Flush")) + print("%-15s %-10s %-15s %-15s %-8s" % ("Duration (µs)", "Mode", + "Coins Count", "Memory Usage", + "Flush for Prune")) while True: try: diff --git a/contrib/tracing/log_utxos.bt b/contrib/tracing/log_utxos.bt index cf2ba00a4d..8be17a0771 100755 --- a/contrib/tracing/log_utxos.bt +++ b/contrib/tracing/log_utxos.bt @@ -7,7 +7,7 @@ bpftrace contrib/tracing/log_utxos.bt This script requires a 'dashd' binary compiled with eBPF support and the - 'utxochache' tracepoints. By default, it's assumed that 'dashd' is + 'utxocache' tracepoints. By default, it's assumed that 'dashd' is located in './src/dashd'. This can be modified in the script below. NOTE: requires bpftrace v0.12.0 or above. diff --git a/doc/tracing.md b/doc/tracing.md index 5b25eb3816..c8cfa0a7d6 100644 --- a/doc/tracing.md +++ b/doc/tracing.md @@ -110,23 +110,31 @@ Arguments passed: ### Context `utxocache` +The following tracepoints cover the in-memory UTXO cache. UTXOs are, for example, +added to and removed (spent) from the cache when we connect a new block. +**Note**: Dash Core uses temporary clones of the _main_ UTXO cache +(`chainstate.CoinsTip()`). For example, the RPCs `generateblock` and +`getblocktemplate` call `TestBlockValidity()`, which applies the UTXO set +changes to a temporary cache. Similarly, mempool consistency checks, which are +frequent on regtest, also apply the the UTXO set changes to a temporary cache. +Changes to the _main_ UTXO cache and to temporary caches trigger the tracepoints. +We can't tell if a temporary cache or the _main_ cache was changed. + #### Tracepoint `utxocache:flush` -Is called *after* the caches and indexes are flushed depending on the mode -`CChainState::FlushStateToDisk` is called with. +Is called *after* the in-memory UTXO cache is flushed. Arguments passed: -1. 
Duration in microseconds as `int64` +1. Time it took to flush the cache microseconds as `int64` 2. Flush state mode as `uint32`. It's an enumerator class with values `0` (`NONE`), `1` (`IF_NEEDED`), `2` (`PERIODIC`), `3` (`ALWAYS`) -3. Number of coins flushed as `uint64` -4. Memory usage in bytes as `uint64` -5. If the flush was pruned as `bool` -6. If it was full flush as `bool` +3. Cache size (number of coins) before the flush as `uint64` +4. Cache memory usage in bytes as `uint64` +5. If pruning caused the flush as `bool` #### Tracepoint `utxocache:add` -It is called when a new coin is added to the UTXO cache. +Is called when a coin is added to a UTXO cache. This can be a temporary UTXO cache too. Arguments passed: 1. Transaction ID (hash) as `pointer to unsigned chars` (i.e. 32 bytes in little-endian) @@ -137,7 +145,7 @@ Arguments passed: #### Tracepoint `utxocache:spent` -It is called when a coin is spent from the UTXO cache. +Is called when a coin is spent from a UTXO cache. This can be a temporary UTXO cache too. Arguments passed: 1. Transaction ID (hash) as `pointer to unsigned chars` (i.e. 32 bytes in little-endian) @@ -148,14 +156,17 @@ Arguments passed: #### Tracepoint `utxocache:uncache` -It is called when the UTXO with the given outpoint is removed from the cache. +Is called when a coin is purposefully unloaded from a UTXO cache. This +happens, for example, when we load an UTXO into a cache when trying to accept +a transaction that turns out to be invalid. The loaded UTXO is uncached to avoid +filling our UTXO cache up with irrelevant UTXOs. Arguments passed: 1. Transaction ID (hash) as `pointer to unsigned chars` (i.e. 32 bytes in little-endian) 2. Output index as `uint32` 3. Block height the coin was uncached, as `uint32` 4. Value of the coin as `int64` -. If the coin is a coinbase as `bool` +5. If the coin is a coinbase as `bool` ## Adding tracepoints to Dash Core diff --git a/src/validation.cpp b/src/validation.cpp index 5ba14db708..bc2498ce5a 100644 --- a/src/validation.cpp +++ b/src/validation.cpp @@ -2471,14 +2471,13 @@ bool CChainState::FlushStateToDisk( } nLastFlush = nNow; full_flush_completed = true; + TRACE5(utxocache, flush, + (int64_t)(GetTimeMicros() - nNow.count()), // in microseconds (µs) + (u_int32_t)mode, + (u_int64_t)coins_count, + (u_int64_t)coins_mem_usage, + (bool)fFlushForPrune); } - TRACE6(utxocache, flush, - (int64_t)(GetTimeMicros() - nNow.count()), // in microseconds (µs) - (u_int32_t)mode, - (u_int64_t)coins_count, - (u_int64_t)coins_mem_usage, - (bool)fFlushForPrune, - (bool)fDoFullFlush); } if (full_flush_completed) { // Update best block in wallet (so we can detect restored wallets). 
From f3b219ad0d91c03d2050d679a3614119862f7678 Mon Sep 17 00:00:00 2001 From: Kittywhiskers Van Gogh <63189531+kwvg@users.noreply.github.com> Date: Sun, 25 Aug 2024 14:17:32 +0000 Subject: [PATCH 09/14] merge bitcoin#24358: USDT tracepoint interface tests --- configure.ac | 1 + test/config.ini.in | 1 + test/functional/interface_usdt_net.py | 171 ++++++++ test/functional/interface_usdt_utxocache.py | 408 ++++++++++++++++++ test/functional/interface_usdt_validation.py | 136 ++++++ .../test_framework/test_framework.py | 28 ++ test/functional/test_runner.py | 3 + 7 files changed, 748 insertions(+) create mode 100755 test/functional/interface_usdt_net.py create mode 100755 test/functional/interface_usdt_utxocache.py create mode 100755 test/functional/interface_usdt_validation.py diff --git a/configure.ac b/configure.ac index d0617acca8..2035e0ee53 100644 --- a/configure.ac +++ b/configure.ac @@ -1419,6 +1419,7 @@ if test x$use_usdt != xno; then [AC_MSG_RESULT(no); use_usdt=no;] ) fi +AM_CONDITIONAL([ENABLE_USDT_TRACEPOINTS], [test "$use_usdt" = "yes"]) dnl Check for libminiupnpc (optional) if test x$use_upnp != xno; then diff --git a/test/config.ini.in b/test/config.ini.in index 77c9a720c3..5601bd8e6a 100644 --- a/test/config.ini.in +++ b/test/config.ini.in @@ -23,3 +23,4 @@ RPCAUTH=@abs_top_srcdir@/share/rpcauth/rpcauth.py @BUILD_BITCOIND_TRUE@ENABLE_BITCOIND=true @ENABLE_FUZZ_TRUE@ENABLE_FUZZ=true @ENABLE_ZMQ_TRUE@ENABLE_ZMQ=true +@ENABLE_USDT_TRACEPOINTS_TRUE@ENABLE_USDT_TRACEPOINTS=true diff --git a/test/functional/interface_usdt_net.py b/test/functional/interface_usdt_net.py new file mode 100755 index 0000000000..0debc7bdb1 --- /dev/null +++ b/test/functional/interface_usdt_net.py @@ -0,0 +1,171 @@ +#!/usr/bin/env python3 +# Copyright (c) 2022 The Bitcoin Core developers +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. + +""" Tests the net:* tracepoint API interface. + See https://github.com/dashpay/dash/blob/develop/doc/tracing.md#context-net +""" + +import ctypes +from io import BytesIO +# Test will be skipped if we don't have bcc installed +try: + from bcc import BPF, USDT # type: ignore[import] +except ImportError: + pass +from test_framework.messages import msg_version +from test_framework.p2p import P2PInterface +from test_framework.test_framework import BitcoinTestFramework +from test_framework.util import assert_equal + +# Tor v3 addresses are 62 chars + 6 chars for the port (':12345'). +MAX_PEER_ADDR_LENGTH = 68 +MAX_PEER_CONN_TYPE_LENGTH = 20 +MAX_MSG_TYPE_LENGTH = 20 +# We won't process messages larger than 150 byte in this test. For reading +# larger messanges see contrib/tracing/log_raw_p2p_msgs.py +MAX_MSG_DATA_LENGTH = 150 + +net_tracepoints_program = """ +#include + +#define MAX_PEER_ADDR_LENGTH {} +#define MAX_PEER_CONN_TYPE_LENGTH {} +#define MAX_MSG_TYPE_LENGTH {} +#define MAX_MSG_DATA_LENGTH {} +""".format( + MAX_PEER_ADDR_LENGTH, + MAX_PEER_CONN_TYPE_LENGTH, + MAX_MSG_TYPE_LENGTH, + MAX_MSG_DATA_LENGTH +) + """ +#define MIN(a,b) ({ __typeof__ (a) _a = (a); __typeof__ (b) _b = (b); _a < _b ? 
_a : _b; }) + +struct p2p_message +{ + u64 peer_id; + char peer_addr[MAX_PEER_ADDR_LENGTH]; + char peer_conn_type[MAX_PEER_CONN_TYPE_LENGTH]; + char msg_type[MAX_MSG_TYPE_LENGTH]; + u64 msg_size; + u8 msg[MAX_MSG_DATA_LENGTH]; +}; + +BPF_PERF_OUTPUT(inbound_messages); +int trace_inbound_message(struct pt_regs *ctx) { + struct p2p_message msg = {}; + bpf_usdt_readarg(1, ctx, &msg.peer_id); + bpf_usdt_readarg_p(2, ctx, &msg.peer_addr, MAX_PEER_ADDR_LENGTH); + bpf_usdt_readarg_p(3, ctx, &msg.peer_conn_type, MAX_PEER_CONN_TYPE_LENGTH); + bpf_usdt_readarg_p(4, ctx, &msg.msg_type, MAX_MSG_TYPE_LENGTH); + bpf_usdt_readarg(5, ctx, &msg.msg_size); + bpf_usdt_readarg_p(6, ctx, &msg.msg, MIN(msg.msg_size, MAX_MSG_DATA_LENGTH)); + inbound_messages.perf_submit(ctx, &msg, sizeof(msg)); + return 0; +} + +BPF_PERF_OUTPUT(outbound_messages); +int trace_outbound_message(struct pt_regs *ctx) { + struct p2p_message msg = {}; + bpf_usdt_readarg(1, ctx, &msg.peer_id); + bpf_usdt_readarg_p(2, ctx, &msg.peer_addr, MAX_PEER_ADDR_LENGTH); + bpf_usdt_readarg_p(3, ctx, &msg.peer_conn_type, MAX_PEER_CONN_TYPE_LENGTH); + bpf_usdt_readarg_p(4, ctx, &msg.msg_type, MAX_MSG_TYPE_LENGTH); + bpf_usdt_readarg(5, ctx, &msg.msg_size); + bpf_usdt_readarg_p(6, ctx, &msg.msg, MIN(msg.msg_size, MAX_MSG_DATA_LENGTH)); + outbound_messages.perf_submit(ctx, &msg, sizeof(msg)); + return 0; +}; +""" + + +class NetTracepointTest(BitcoinTestFramework): + def set_test_params(self): + self.num_nodes = 1 + + def skip_test_if_missing_module(self): + self.skip_if_platform_not_linux() + self.skip_if_no_bitcoind_tracepoints() + self.skip_if_no_python_bcc() + self.skip_if_no_bpf_permissions() + + def run_test(self): + # Tests the net:inbound_message and net:outbound_message tracepoints + # See https://github.com/dashpay/dash/blob/develop/doc/tracing.md#context-net + + class P2PMessage(ctypes.Structure): + _fields_ = [ + ("peer_id", ctypes.c_uint64), + ("peer_addr", ctypes.c_char * MAX_PEER_ADDR_LENGTH), + ("peer_conn_type", ctypes.c_char * MAX_PEER_CONN_TYPE_LENGTH), + ("msg_type", ctypes.c_char * MAX_MSG_TYPE_LENGTH), + ("msg_size", ctypes.c_uint64), + ("msg", ctypes.c_ubyte * MAX_MSG_DATA_LENGTH), + ] + + def __repr__(self): + return f"P2PMessage(peer={self.peer_id}, addr={self.peer_addr.decode('utf-8')}, conn_type={self.peer_conn_type.decode('utf-8')}, msg_type={self.msg_type.decode('utf-8')}, msg_size={self.msg_size})" + + self.log.info( + "hook into the net:inbound_message and net:outbound_message tracepoints") + ctx = USDT(path=str(self.options.bitcoind)) + ctx.enable_probe(probe="net:inbound_message", + fn_name="trace_inbound_message") + ctx.enable_probe(probe="net:outbound_message", + fn_name="trace_outbound_message") + bpf = BPF(text=net_tracepoints_program, usdt_contexts=[ctx], debug=0) + + # The handle_* function is a ctypes callback function called from C. When + # we assert in the handle_* function, the AssertError doesn't propagate + # back to Python. The exception is ignored. We manually count and assert + # that the handle_* functions succeeded. 
+ EXPECTED_INOUTBOUND_VERSION_MSG = 1 + checked_inbound_version_msg = 0 + checked_outbound_version_msg = 0 + + def check_p2p_message(event, inbound): + nonlocal checked_inbound_version_msg, checked_outbound_version_msg + if event.msg_type.decode("utf-8") == "version": + self.log.info( + f"check_p2p_message(): {'inbound' if inbound else 'outbound'} {event}") + peer = self.nodes[0].getpeerinfo()[0] + msg = msg_version() + msg.deserialize(BytesIO(bytes(event.msg[:event.msg_size]))) + assert_equal(peer["id"], event.peer_id, peer["id"]) + assert_equal(peer["addr"], event.peer_addr.decode("utf-8")) + assert_equal(peer["connection_type"], + event.peer_conn_type.decode("utf-8")) + if inbound: + checked_inbound_version_msg += 1 + else: + checked_outbound_version_msg += 1 + + def handle_inbound(_, data, __): + event = ctypes.cast(data, ctypes.POINTER(P2PMessage)).contents + check_p2p_message(event, True) + + def handle_outbound(_, data, __): + event = ctypes.cast(data, ctypes.POINTER(P2PMessage)).contents + check_p2p_message(event, False) + + bpf["inbound_messages"].open_perf_buffer(handle_inbound) + bpf["outbound_messages"].open_perf_buffer(handle_outbound) + + self.log.info("connect a P2P test node to our dashd node") + test_node = P2PInterface() + self.nodes[0].add_p2p_connection(test_node) + bpf.perf_buffer_poll(timeout=200) + + self.log.info( + "check that we got both an inbound and outbound version message") + assert_equal(EXPECTED_INOUTBOUND_VERSION_MSG, + checked_inbound_version_msg) + assert_equal(EXPECTED_INOUTBOUND_VERSION_MSG, + checked_outbound_version_msg) + + bpf.cleanup() + + +if __name__ == '__main__': + NetTracepointTest().main() diff --git a/test/functional/interface_usdt_utxocache.py b/test/functional/interface_usdt_utxocache.py new file mode 100755 index 0000000000..e9f98de523 --- /dev/null +++ b/test/functional/interface_usdt_utxocache.py @@ -0,0 +1,408 @@ +#!/usr/bin/env python3 +# Copyright (c) 2022 The Bitcoin Core developers +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. + +""" Tests the utxocache:* tracepoint API interface. 
+ See https://github.com/dashpay/dash/blob/develop/doc/tracing.md#context-utxocache +""" + +import ctypes +# Test will be skipped if we don't have bcc installed +try: + from bcc import BPF, USDT # type: ignore[import] +except ImportError: + pass +from test_framework.messages import COIN +from test_framework.test_framework import BitcoinTestFramework +from test_framework.util import assert_equal +from test_framework.wallet import MiniWallet + +utxocache_changes_program = """ +#include + +typedef signed long long i64; + +struct utxocache_change +{ + char txid[32]; + u32 index; + u32 height; + i64 value; + bool is_coinbase; +}; + +BPF_PERF_OUTPUT(utxocache_add); +int trace_utxocache_add(struct pt_regs *ctx) { + struct utxocache_change add = {}; + bpf_usdt_readarg_p(1, ctx, &add.txid, 32); + bpf_usdt_readarg(2, ctx, &add.index); + bpf_usdt_readarg(3, ctx, &add.height); + bpf_usdt_readarg(4, ctx, &add.value); + bpf_usdt_readarg(5, ctx, &add.is_coinbase); + utxocache_add.perf_submit(ctx, &add, sizeof(add)); + return 0; +} + +BPF_PERF_OUTPUT(utxocache_spent); +int trace_utxocache_spent(struct pt_regs *ctx) { + struct utxocache_change spent = {}; + bpf_usdt_readarg_p(1, ctx, &spent.txid, 32); + bpf_usdt_readarg(2, ctx, &spent.index); + bpf_usdt_readarg(3, ctx, &spent.height); + bpf_usdt_readarg(4, ctx, &spent.value); + bpf_usdt_readarg(5, ctx, &spent.is_coinbase); + utxocache_spent.perf_submit(ctx, &spent, sizeof(spent)); + return 0; +} + +BPF_PERF_OUTPUT(utxocache_uncache); +int trace_utxocache_uncache(struct pt_regs *ctx) { + struct utxocache_change uncache = {}; + bpf_usdt_readarg_p(1, ctx, &uncache.txid, 32); + bpf_usdt_readarg(2, ctx, &uncache.index); + bpf_usdt_readarg(3, ctx, &uncache.height); + bpf_usdt_readarg(4, ctx, &uncache.value); + bpf_usdt_readarg(5, ctx, &uncache.is_coinbase); + utxocache_uncache.perf_submit(ctx, &uncache, sizeof(uncache)); + return 0; +} +""" + +utxocache_flushes_program = """ +#include + +typedef signed long long i64; + +struct utxocache_flush +{ + i64 duration; + u32 mode; + u64 size; + u64 memory; + bool for_prune; +}; + +BPF_PERF_OUTPUT(utxocache_flush); +int trace_utxocache_flush(struct pt_regs *ctx) { + struct utxocache_flush flush = {}; + bpf_usdt_readarg(1, ctx, &flush.duration); + bpf_usdt_readarg(2, ctx, &flush.mode); + bpf_usdt_readarg(3, ctx, &flush.size); + bpf_usdt_readarg(4, ctx, &flush.memory); + bpf_usdt_readarg(5, ctx, &flush.for_prune); + utxocache_flush.perf_submit(ctx, &flush, sizeof(flush)); + return 0; +} +""" + +FLUSHMODE_NAME = { + 0: "NONE", + 1: "IF_NEEDED", + 2: "PERIODIC", + 3: "ALWAYS", +} + + +class UTXOCacheChange(ctypes.Structure): + _fields_ = [ + ("txid", ctypes.c_ubyte * 32), + ("index", ctypes.c_uint32), + ("height", ctypes.c_uint32), + ("value", ctypes.c_uint64), + ("is_coinbase", ctypes.c_bool), + ] + + def __repr__(self): + return f"UTXOCacheChange(outpoint={bytes(self.txid[::-1]).hex()}:{self.index}, height={self.height}, value={self.value}sat, is_coinbase={self.is_coinbase})" + + +class UTXOCacheFlush(ctypes.Structure): + _fields_ = [ + ("duration", ctypes.c_int64), + ("mode", ctypes.c_uint32), + ("size", ctypes.c_uint64), + ("memory", ctypes.c_uint64), + ("for_prune", ctypes.c_bool), + ] + + def __repr__(self): + return f"UTXOCacheFlush(duration={self.duration}, mode={FLUSHMODE_NAME[self.mode]}, size={self.size}, memory={self.memory}, for_prune={self.for_prune})" + + +class UTXOCacheTracepointTest(BitcoinTestFramework): + def set_test_params(self): + self.setup_clean_chain = False + self.num_nodes = 1 + self.extra_args = 
[["-txindex"]] + + def skip_test_if_missing_module(self): + self.skip_if_platform_not_linux() + self.skip_if_no_bitcoind_tracepoints() + self.skip_if_no_python_bcc() + self.skip_if_no_bpf_permissions() + + def run_test(self): + self.wallet = MiniWallet(self.nodes[0]) + self.wallet.generate(101) + + self.test_uncache() + self.test_add_spent() + self.test_flush() + + def test_uncache(self): + """ Tests the utxocache:uncache tracepoint API. + https://github.com/dashpay/dash/blob/develop/doc/tracing.md#tracepoint-utxocacheuncache + """ + # To trigger an UTXO uncache from the cache, we create an invalid transaction + # spending a not-cached, but existing UTXO. During transaction validation, this + # the UTXO is added to the utxo cache, but as the transaction is invalid, it's + # uncached again. + self.log.info("testing the utxocache:uncache tracepoint API") + + # Retrieve the txid for the UTXO created in the first block. This UTXO is not + # in our UTXO cache. + EARLY_BLOCK_HEIGHT = 1 + block_1_hash = self.nodes[0].getblockhash(EARLY_BLOCK_HEIGHT) + block_1 = self.nodes[0].getblock(block_1_hash) + block_1_coinbase_txid = block_1["tx"][0] + + # Create a transaction and invalidate it by changing the txid of the previous + # output to the coinbase txid of the block at height 1. + invalid_tx = self.wallet.create_self_transfer( + from_node=self.nodes[0])["tx"] + invalid_tx.vin[0].prevout.hash = int(block_1_coinbase_txid, 16) + + self.log.info("hooking into the utxocache:uncache tracepoint") + ctx = USDT(path=str(self.options.bitcoind)) + ctx.enable_probe(probe="utxocache:uncache", + fn_name="trace_utxocache_uncache") + bpf = BPF(text=utxocache_changes_program, usdt_contexts=[ctx], debug=0) + + # The handle_* function is a ctypes callback function called from C. When + # we assert in the handle_* function, the AssertError doesn't propagate + # back to Python. The exception is ignored. We manually count and assert + # that the handle_* functions succeeded. 
+ EXPECTED_HANDLE_UNCACHE_SUCCESS = 1 + handle_uncache_succeeds = 0 + + def handle_utxocache_uncache(_, data, __): + nonlocal handle_uncache_succeeds + event = ctypes.cast(data, ctypes.POINTER(UTXOCacheChange)).contents + self.log.info(f"handle_utxocache_uncache(): {event}") + assert_equal(block_1_coinbase_txid, bytes(event.txid[::-1]).hex()) + assert_equal(0, event.index) # prevout index + assert_equal(EARLY_BLOCK_HEIGHT, event.height) + assert_equal(500 * COIN, event.value) + assert_equal(True, event.is_coinbase) + + handle_uncache_succeeds += 1 + + bpf["utxocache_uncache"].open_perf_buffer(handle_utxocache_uncache) + + self.log.info( + "testmempoolaccept the invalid transaction to trigger an UTXO-cache uncache") + result = self.nodes[0].testmempoolaccept( + [invalid_tx.serialize().hex()])[0] + assert_equal(result["allowed"], False) + + bpf.perf_buffer_poll(timeout=100) + bpf.cleanup() + + self.log.info( + f"check that we successfully traced {EXPECTED_HANDLE_UNCACHE_SUCCESS} uncaches") + assert_equal(EXPECTED_HANDLE_UNCACHE_SUCCESS, handle_uncache_succeeds) + + def test_add_spent(self): + """ Tests the utxocache:add utxocache:spent tracepoint API + See https://github.com/dashpay/dash/blob/develop/doc/tracing.md#tracepoint-utxocacheadd + and https://github.com/dashpay/dash/blob/develop/doc/tracing.md#tracepoint-utxocachespent + """ + + self.log.info( + "test the utxocache:add and utxocache:spent tracepoint API") + + self.log.info("create an unconfirmed transaction") + self.wallet.send_self_transfer(from_node=self.nodes[0]) + + # We mine a block to trace changes (add/spent) to the active in-memory cache + # of the UTXO set (see CoinsTip() of CCoinsViewCache). However, in some cases + # temporary clones of the active cache are made. For example, during mining with + # the generate RPC call, the block is first tested in TestBlockValidity(). There, + # a clone of the active cache is modified during a test ConnectBlock() call. + # These are implementation details we don't want to test here. Thus, after + # mining, we invalidate the block, start the tracing, and then trace the cache + # changes to the active utxo cache. + self.log.info("mine and invalidate a block that is later reconsidered") + block_hash = self.wallet.generate(1)[0] + self.nodes[0].invalidateblock(block_hash) + + self.log.info( + "hook into the utxocache:add and utxocache:spent tracepoints") + ctx = USDT(path=str(self.options.bitcoind)) + ctx.enable_probe(probe="utxocache:add", fn_name="trace_utxocache_add") + ctx.enable_probe(probe="utxocache:spent", + fn_name="trace_utxocache_spent") + bpf = BPF(text=utxocache_changes_program, usdt_contexts=[ctx], debug=0) + + # The handle_* function is a ctypes callback function called from C. When + # we assert in the handle_* function, the AssertError doesn't propagate + # back to Python. The exception is ignored. We manually count and assert + # that the handle_* functions succeeded. 
+ EXPECTED_HANDLE_ADD_SUCCESS = 2 + EXPECTED_HANDLE_SPENT_SUCCESS = 1 + handle_add_succeeds = 0 + handle_spent_succeeds = 0 + + expected_utxocache_spents = [] + expected_utxocache_adds = [] + + def handle_utxocache_add(_, data, __): + nonlocal handle_add_succeeds + event = ctypes.cast(data, ctypes.POINTER(UTXOCacheChange)).contents + self.log.info(f"handle_utxocache_add(): {event}") + add = expected_utxocache_adds.pop(0) + assert_equal(add["txid"], bytes(event.txid[::-1]).hex()) + assert_equal(add["index"], event.index) + assert_equal(add["height"], event.height) + assert_equal(add["value"], event.value) + assert_equal(add["is_coinbase"], event.is_coinbase) + handle_add_succeeds += 1 + + def handle_utxocache_spent(_, data, __): + nonlocal handle_spent_succeeds + event = ctypes.cast(data, ctypes.POINTER(UTXOCacheChange)).contents + self.log.info(f"handle_utxocache_spent(): {event}") + spent = expected_utxocache_spents.pop(0) + assert_equal(spent["txid"], bytes(event.txid[::-1]).hex()) + assert_equal(spent["index"], event.index) + assert_equal(spent["height"], event.height) + assert_equal(spent["value"], event.value) + assert_equal(spent["is_coinbase"], event.is_coinbase) + handle_spent_succeeds += 1 + + bpf["utxocache_add"].open_perf_buffer(handle_utxocache_add) + bpf["utxocache_spent"].open_perf_buffer(handle_utxocache_spent) + + # We trigger a block re-connection. This causes changes (add/spent) + # to the UTXO-cache which in turn triggers the tracepoints. + self.log.info("reconsider the previously invalidated block") + self.nodes[0].reconsiderblock(block_hash) + + block = self.nodes[0].getblock(block_hash, 2) + for (block_index, tx) in enumerate(block["tx"]): + for vin in tx["vin"]: + if "coinbase" not in vin: + prevout_tx = self.nodes[0].getrawtransaction( + vin["txid"], True) + prevout_tx_block = self.nodes[0].getblockheader( + prevout_tx["blockhash"]) + spends_coinbase = "coinbase" in prevout_tx["vin"][0] + expected_utxocache_spents.append({ + "txid": vin["txid"], + "index": vin["vout"], + "height": prevout_tx_block["height"], + "value": int(prevout_tx["vout"][vin["vout"]]["value"] * COIN), + "is_coinbase": spends_coinbase, + }) + for (i, vout) in enumerate(tx["vout"]): + if vout["scriptPubKey"]["type"] != "nulldata": + expected_utxocache_adds.append({ + "txid": tx["txid"], + "index": i, + "height": block["height"], + "value": int(vout["value"] * COIN), + "is_coinbase": block_index == 0, + }) + + assert_equal(EXPECTED_HANDLE_ADD_SUCCESS, len(expected_utxocache_adds)) + assert_equal(EXPECTED_HANDLE_SPENT_SUCCESS, + len(expected_utxocache_spents)) + + bpf.perf_buffer_poll(timeout=200) + bpf.cleanup() + + self.log.info( + f"check that we successfully traced {EXPECTED_HANDLE_ADD_SUCCESS} adds and {EXPECTED_HANDLE_SPENT_SUCCESS} spent") + assert_equal(0, len(expected_utxocache_adds)) + assert_equal(0, len(expected_utxocache_spents)) + assert_equal(EXPECTED_HANDLE_ADD_SUCCESS, handle_add_succeeds) + assert_equal(EXPECTED_HANDLE_SPENT_SUCCESS, handle_spent_succeeds) + + def test_flush(self): + """ Tests the utxocache:flush tracepoint API. 
+ See https://github.com/dashpay/dash/blob/develop/doc/tracing.md#tracepoint-utxocacheflush""" + + self.log.info("test the utxocache:flush tracepoint API") + self.log.info("hook into the utxocache:flush tracepoint") + ctx = USDT(path=str(self.options.bitcoind)) + ctx.enable_probe(probe="utxocache:flush", + fn_name="trace_utxocache_flush") + bpf = BPF(text=utxocache_flushes_program, usdt_contexts=[ctx], debug=0) + + # The handle_* function is a ctypes callback function called from C. When + # we assert in the handle_* function, the AssertError doesn't propagate + # back to Python. The exception is ignored. We manually count and assert + # that the handle_* functions succeeded. + EXPECTED_HANDLE_FLUSH_SUCCESS = 3 + handle_flush_succeeds = 0 + possible_cache_sizes = set() + expected_flushes = [] + + def handle_utxocache_flush(_, data, __): + nonlocal handle_flush_succeeds + event = ctypes.cast(data, ctypes.POINTER(UTXOCacheFlush)).contents + self.log.info(f"handle_utxocache_flush(): {event}") + expected = expected_flushes.pop(0) + assert_equal(expected["mode"], FLUSHMODE_NAME[event.mode]) + possible_cache_sizes.remove(event.size) # fails if size not in set + # sanity checks only + assert(event.memory > 0) + assert(event.duration > 0) + handle_flush_succeeds += 1 + + bpf["utxocache_flush"].open_perf_buffer(handle_utxocache_flush) + + self.log.info("stop the node to flush the UTXO cache") + UTXOS_IN_CACHE = 104 # might need to be changed if the eariler tests are modified + # A node shutdown causes two flushes. One that flushes UTXOS_IN_CACHE + # UTXOs and one that flushes 0 UTXOs. Normally the 0-UTXO-flush is the + # second flush, however it can happen that the order changes. + possible_cache_sizes = {UTXOS_IN_CACHE, 0} + flush_for_shutdown = {"mode": "ALWAYS", "for_prune": False} + expected_flushes.extend([flush_for_shutdown, flush_for_shutdown]) + self.stop_node(0) + + bpf.perf_buffer_poll(timeout=200) + + self.log.info("check that we don't expect additional flushes") + assert_equal(0, len(expected_flushes)) + assert_equal(0, len(possible_cache_sizes)) + + self.log.info("restart the node with -prune") + self.start_node(0, ["-fastprune=1", "-prune=1"]) + + BLOCKS_TO_MINE = 450 + self.log.info(f"mine {BLOCKS_TO_MINE} blocks to be able to prune") + self.wallet.generate(BLOCKS_TO_MINE) + # we added BLOCKS_TO_MINE coinbase UTXOs to the cache + possible_cache_sizes = {BLOCKS_TO_MINE} + expected_flushes.append( + {"mode": "NONE", "for_prune": True, "size_fn": lambda x: x == BLOCKS_TO_MINE}) + + self.log.info(f"prune blockchain to trigger a flush for pruning") + self.nodes[0].pruneblockchain(415) + + bpf.perf_buffer_poll(timeout=500) + bpf.cleanup() + + self.log.info( + f"check that we don't expect additional flushes and that the handle_* function succeeded") + assert_equal(0, len(expected_flushes)) + assert_equal(0, len(possible_cache_sizes)) + assert_equal(EXPECTED_HANDLE_FLUSH_SUCCESS, handle_flush_succeeds) + + self.stop_node(0, expected_stderr='Warning: You are starting with governance validation disabled. 
+
+if __name__ == '__main__':
+    UTXOCacheTracepointTest().main()
diff --git a/test/functional/interface_usdt_validation.py b/test/functional/interface_usdt_validation.py
new file mode 100755
index 0000000000..c684e3c422
--- /dev/null
+++ b/test/functional/interface_usdt_validation.py
@@ -0,0 +1,136 @@
+#!/usr/bin/env python3
+# Copyright (c) 2022 The Bitcoin Core developers
+# Distributed under the MIT software license, see the accompanying
+# file COPYING or http://www.opensource.org/licenses/mit-license.php.
+
+""" Tests the validation:* tracepoint API interface.
+    See https://github.com/dashpay/dash/blob/develop/doc/tracing.md#context-validation
+"""
+
+import ctypes
+
+# Test will be skipped if we don't have bcc installed
+try:
+    from bcc import BPF, USDT  # type: ignore[import]
+except ImportError:
+    pass
+
+from test_framework.address import ADDRESS_BCRT1_UNSPENDABLE
+from test_framework.test_framework import BitcoinTestFramework
+from test_framework.util import assert_equal
+
+
+validation_blockconnected_program = """
+#include <uapi/linux/ptrace.h>
+
+typedef signed long long i64;
+
+struct connected_block
+{
+    char hash[32];
+    int height;
+    i64 transactions;
+    int inputs;
+    i64 sigops;
+    u64 duration;
+};
+
+BPF_PERF_OUTPUT(block_connected);
+int trace_block_connected(struct pt_regs *ctx) {
+    struct connected_block block = {};
+    bpf_usdt_readarg_p(1, ctx, &block.hash, 32);
+    bpf_usdt_readarg(2, ctx, &block.height);
+    bpf_usdt_readarg(3, ctx, &block.transactions);
+    bpf_usdt_readarg(4, ctx, &block.inputs);
+    bpf_usdt_readarg(5, ctx, &block.sigops);
+    bpf_usdt_readarg(6, ctx, &block.duration);
+    block_connected.perf_submit(ctx, &block, sizeof(block));
+    return 0;
+}
+"""
+
+
+class ValidationTracepointTest(BitcoinTestFramework):
+    def set_test_params(self):
+        self.num_nodes = 1
+
+    def skip_test_if_missing_module(self):
+        self.skip_if_platform_not_linux()
+        self.skip_if_no_bitcoind_tracepoints()
+        self.skip_if_no_python_bcc()
+        self.skip_if_no_bpf_permissions()
+
+    def run_test(self):
+        # Tests the validation:block_connected tracepoint by generating blocks
+        # and comparing the values passed in the tracepoint arguments with the
+        # blocks.
+        # See https://github.com/dashpay/dash/blob/develop/doc/tracing.md#tracepoint-validationblock_connected
+
+        class Block(ctypes.Structure):
+            _fields_ = [
+                ("hash", ctypes.c_ubyte * 32),
+                ("height", ctypes.c_int),
+                ("transactions", ctypes.c_int64),
+                ("inputs", ctypes.c_int),
+                ("sigops", ctypes.c_int64),
+                ("duration", ctypes.c_uint64),
+            ]
+
+            def __repr__(self):
+                return "ConnectedBlock(hash=%s height=%d, transactions=%d, inputs=%d, sigops=%d, duration=%d)" % (
+                    bytes(self.hash[::-1]).hex(),
+                    self.height,
+                    self.transactions,
+                    self.inputs,
+                    self.sigops,
+                    self.duration)
+
+        # The handle_* function is a ctypes callback function called from C. When
+        # we assert in the handle_* function, the AssertError doesn't propagate
+        # back to Python. The exception is ignored. We manually count and assert
+        # that the handle_* functions succeeded.
+ BLOCKS_EXPECTED = 2 + blocks_checked = 0 + expected_blocks = list() + + self.log.info("hook into the validation:block_connected tracepoint") + ctx = USDT(path=str(self.options.bitcoind)) + ctx.enable_probe(probe="validation:block_connected", + fn_name="trace_block_connected") + bpf = BPF(text=validation_blockconnected_program, + usdt_contexts=[ctx], debug=0) + + def handle_blockconnected(_, data, __): + nonlocal expected_blocks, blocks_checked + event = ctypes.cast(data, ctypes.POINTER(Block)).contents + self.log.info(f"handle_blockconnected(): {event}") + block = expected_blocks.pop(0) + assert_equal(block["hash"], bytes(event.hash[::-1]).hex()) + assert_equal(block["height"], event.height) + assert_equal(len(block["tx"]), event.transactions) + assert_equal(len([tx["vin"] for tx in block["tx"]]), event.inputs) + assert_equal(0, event.sigops) # no sigops in coinbase tx + # only plausibility checks + assert(event.duration > 0) + + blocks_checked += 1 + + bpf["block_connected"].open_perf_buffer( + handle_blockconnected) + + self.log.info(f"mine {BLOCKS_EXPECTED} blocks") + block_hashes = self.nodes[0].generatetoaddress( + BLOCKS_EXPECTED, ADDRESS_BCRT1_UNSPENDABLE) + for block_hash in block_hashes: + expected_blocks.append(self.nodes[0].getblock(block_hash, 2)) + + bpf.perf_buffer_poll(timeout=200) + bpf.cleanup() + + self.log.info(f"check that we traced {BLOCKS_EXPECTED} blocks") + assert_equal(BLOCKS_EXPECTED, blocks_checked) + assert_equal(0, len(expected_blocks)) + + +if __name__ == '__main__': + ValidationTracepointTest().main() diff --git a/test/functional/test_framework/test_framework.py b/test/functional/test_framework/test_framework.py index c80a48ff8a..a8a7b48be1 100755 --- a/test/functional/test_framework/test_framework.py +++ b/test/functional/test_framework/test_framework.py @@ -12,6 +12,7 @@ from enum import Enum import argparse import logging import os +import platform import pdb import random import re @@ -955,6 +956,29 @@ class BitcoinTestFramework(metaclass=BitcoinTestMetaClass): except ImportError: raise SkipTest("python3-zmq module not available.") + def skip_if_no_python_bcc(self): + """Attempt to import the bcc package and skip the tests if the import fails.""" + try: + import bcc # type: ignore[import] # noqa: F401 + except ImportError: + raise SkipTest("bcc python module not available") + + def skip_if_no_bitcoind_tracepoints(self): + """Skip the running test if dashd has not been compiled with USDT tracepoint support.""" + if not self.is_usdt_compiled(): + raise SkipTest("dashd has not been built with USDT tracepoints enabled.") + + def skip_if_no_bpf_permissions(self): + """Skip the running test if we don't have permissions to do BPF syscalls and load BPF maps.""" + # check for 'root' permissions + if os.geteuid() != 0: + raise SkipTest("no permissions to use BPF (please review the tests carefully before running them with higher privileges)") + + def skip_if_platform_not_linux(self): + """Skip the running test if we are not on a Linux platform""" + if platform.system() != "Linux": + raise SkipTest("not on a Linux system") + def skip_if_no_bitcoind_zmq(self): """Skip the running test if dashd has not been compiled with zmq support.""" if not self.is_zmq_compiled(): @@ -1019,6 +1043,10 @@ class BitcoinTestFramework(metaclass=BitcoinTestMetaClass): """Checks whether the zmq module was compiled.""" return self.config["components"].getboolean("ENABLE_ZMQ") + def is_usdt_compiled(self): + """Checks whether the USDT tracepoints were compiled.""" + return 
self.config["components"].getboolean("ENABLE_USDT_TRACEPOINTS") + def is_sqlite_compiled(self): """Checks whether the wallet module was compiled with Sqlite support.""" return self.config["components"].getboolean("USE_SQLITE") diff --git a/test/functional/test_runner.py b/test/functional/test_runner.py index 4ecef85f37..6cd2368496 100755 --- a/test/functional/test_runner.py +++ b/test/functional/test_runner.py @@ -186,6 +186,9 @@ BASE_SCRIPTS = [ 'wallet_watchonly.py --usecli --legacy-wallet', 'interface_http.py', 'interface_rpc.py', + 'interface_usdt_net.py', + 'interface_usdt_utxocache.py', + 'interface_usdt_validation.py', 'rpc_psbt.py --legacy-wallet', 'rpc_psbt.py --descriptors', 'rpc_users.py', From 457bbd3f8b5121d5a54bf29d16b37588a7fe52f2 Mon Sep 17 00:00:00 2001 From: Kittywhiskers Van Gogh <63189531+kwvg@users.noreply.github.com> Date: Mon, 26 Aug 2024 13:54:18 +0000 Subject: [PATCH 10/14] merge bitcoin#25360: SystemTap 4.7 (RISC-V support) --- depends/packages/systemtap.mk | 4 ++-- ..._ASM_SECTION_AUTOGROUP_SUPPORT_check.patch | 22 ++++++++----------- doc/dependencies.md | 2 +- 3 files changed, 12 insertions(+), 16 deletions(-) diff --git a/depends/packages/systemtap.mk b/depends/packages/systemtap.mk index 498db4982f..ad74323d98 100644 --- a/depends/packages/systemtap.mk +++ b/depends/packages/systemtap.mk @@ -1,8 +1,8 @@ package=systemtap -$(package)_version=4.5 +$(package)_version=4.7 $(package)_download_path=https://sourceware.org/ftp/systemtap/releases/ $(package)_file_name=$(package)-$($(package)_version).tar.gz -$(package)_sha256_hash=75078ed37e0dd2a769c9d1f9394170b2d9f4d7daa425f43ca80c13bad6cfc925 +$(package)_sha256_hash=43a0a3db91aa4d41e28015b39a65e62059551f3cc7377ebf3a3a5ca7339e7b1f $(package)_patches=remove_SDT_ASM_SECTION_AUTOGROUP_SUPPORT_check.patch define $(package)_preprocess_cmds diff --git a/depends/patches/systemtap/remove_SDT_ASM_SECTION_AUTOGROUP_SUPPORT_check.patch b/depends/patches/systemtap/remove_SDT_ASM_SECTION_AUTOGROUP_SUPPORT_check.patch index eae0cf72d6..c115bc43e8 100644 --- a/depends/patches/systemtap/remove_SDT_ASM_SECTION_AUTOGROUP_SUPPORT_check.patch +++ b/depends/patches/systemtap/remove_SDT_ASM_SECTION_AUTOGROUP_SUPPORT_check.patch @@ -1,19 +1,15 @@ -commit b92d4c121486f3c6e8a2cea537c53eb09894479a -Author: 0xb10c <0xb10c@gmail.com> -Date: Tue Dec 7 11:02:07 2021 +0100 +Remove _SDT_ASM_SECTION_AUTOGROUP_SUPPORT check - Remove _SDT_ASM_SECTION_AUTOGROUP_SUPPORT check - - We assume that the assembler supports "?" in .pushsection directives. - This enables us to skip configure and make. - - See https://github.com/bitcoin/bitcoin/issues/23297. +We assume that the assembler supports "?" in .pushsection directives. +This enables us to skip configure and make. + +See https://github.com/bitcoin/bitcoin/issues/23297. diff --git a/includes/sys/sdt.h b/includes/sys/sdt.h -index 97766e710..352b4ee25 100644 +index ca0162b..f96e0ee 100644 --- a/includes/sys/sdt.h +++ b/includes/sys/sdt.h -@@ -230,12 +230,10 @@ __extension__ extern unsigned long long __sdt_unsp; +@@ -241,12 +241,10 @@ __extension__ extern unsigned long long __sdt_unsp; nice with code in COMDAT sections, which comes up in C++ code. Without that assembler support, some combinations of probe placements in certain kinds of C++ code may produce link-time errors. 
*/ @@ -27,5 +23,5 @@ index 97766e710..352b4ee25 100644 -# define _SDT_ASM_AUTOGROUP "" -#endif - #define _SDT_ASM_BODY(provider, name, pack_args, args) \ - _SDT_ASM_1(990: _SDT_NOP) \ + #define _SDT_DEF_MACROS \ + _SDT_ASM_1(.altmacro) \ diff --git a/doc/dependencies.md b/doc/dependencies.md index e50f8113a1..bca313c40e 100644 --- a/doc/dependencies.md +++ b/doc/dependencies.md @@ -24,7 +24,7 @@ These are the dependencies currently used by Dash Core. You can find instruction | Qt | [5.15.11](https://download.qt.io/official_releases/qt/) | [5.11.3](https://github.com/bitcoin/bitcoin/pull/24132) | No | | | | SQLite | [3.32.1](https://sqlite.org/download.html) | [3.7.17](https://github.com/bitcoin/bitcoin/pull/19077) | | | | | XCB | | | | | [Yes](https://github.com/dashpay/dash/blob/develop/depends/packages/qt.mk) (Linux only) | -| systemtap ([tracing](tracing.md))| [4.5](https://sourceware.org/ftp/systemtap/releases/) | | | | | +| systemtap ([tracing](tracing.md))| [4.7](https://sourceware.org/ftp/systemtap/releases/) | | | | | | xkbcommon | | | | | [Yes](https://github.com/dashpay/dash/blob/develop/depends/packages/qt.mk) (Linux only) | | ZeroMQ | [4.3.1](https://github.com/zeromq/libzmq/releases) | 4.0.0 | No | | | | zlib | | | | | [Yes](https://github.com/dashpay/dash/blob/develop/depends/packages/qt.mk) | From 88696129f33b3958c7b14e995c64d6a09f438503 Mon Sep 17 00:00:00 2001 From: Kittywhiskers Van Gogh <63189531+kwvg@users.noreply.github.com> Date: Sat, 6 Aug 2022 13:52:29 +0200 Subject: [PATCH 11/14] merge bitcoin#25794: don't rely on block_connected USDT event order in tests --- test/functional/interface_usdt_validation.py | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/test/functional/interface_usdt_validation.py b/test/functional/interface_usdt_validation.py index c684e3c422..6c7b4fc927 100755 --- a/test/functional/interface_usdt_validation.py +++ b/test/functional/interface_usdt_validation.py @@ -91,7 +91,7 @@ class ValidationTracepointTest(BitcoinTestFramework): # that the handle_* functions succeeded. 
BLOCKS_EXPECTED = 2 blocks_checked = 0 - expected_blocks = list() + expected_blocks = dict() self.log.info("hook into the validation:block_connected tracepoint") ctx = USDT(path=str(self.options.bitcoind)) @@ -104,15 +104,16 @@ class ValidationTracepointTest(BitcoinTestFramework): nonlocal expected_blocks, blocks_checked event = ctypes.cast(data, ctypes.POINTER(Block)).contents self.log.info(f"handle_blockconnected(): {event}") - block = expected_blocks.pop(0) - assert_equal(block["hash"], bytes(event.hash[::-1]).hex()) + block_hash = bytes(event.hash[::-1]).hex() + block = expected_blocks[block_hash] + assert_equal(block["hash"], block_hash) assert_equal(block["height"], event.height) assert_equal(len(block["tx"]), event.transactions) assert_equal(len([tx["vin"] for tx in block["tx"]]), event.inputs) assert_equal(0, event.sigops) # no sigops in coinbase tx # only plausibility checks assert(event.duration > 0) - + del expected_blocks[block_hash] blocks_checked += 1 bpf["block_connected"].open_perf_buffer( @@ -122,7 +123,7 @@ class ValidationTracepointTest(BitcoinTestFramework): block_hashes = self.nodes[0].generatetoaddress( BLOCKS_EXPECTED, ADDRESS_BCRT1_UNSPENDABLE) for block_hash in block_hashes: - expected_blocks.append(self.nodes[0].getblock(block_hash, 2)) + expected_blocks[block_hash] = self.nodes[0].getblock(block_hash, 2) bpf.perf_buffer_poll(timeout=200) bpf.cleanup() From f4a53ba8ce229dd5876b70084882cd6c4b7fd422 Mon Sep 17 00:00:00 2001 From: Kittywhiskers Van Gogh <63189531+kwvg@users.noreply.github.com> Date: Mon, 26 Aug 2024 13:54:31 +0000 Subject: [PATCH 12/14] merge bitcoin#26945: systemtap 4.8 --- depends/packages/systemtap.mk | 7 ++++--- .../patches/systemtap/fix_variadic_warning.patch | 16 ++++++++++++++++ doc/dependencies.md | 2 +- 3 files changed, 21 insertions(+), 4 deletions(-) create mode 100644 depends/patches/systemtap/fix_variadic_warning.patch diff --git a/depends/packages/systemtap.mk b/depends/packages/systemtap.mk index ad74323d98..541ebeee01 100644 --- a/depends/packages/systemtap.mk +++ b/depends/packages/systemtap.mk @@ -1,12 +1,13 @@ package=systemtap -$(package)_version=4.7 +$(package)_version=4.8 $(package)_download_path=https://sourceware.org/ftp/systemtap/releases/ $(package)_file_name=$(package)-$($(package)_version).tar.gz -$(package)_sha256_hash=43a0a3db91aa4d41e28015b39a65e62059551f3cc7377ebf3a3a5ca7339e7b1f -$(package)_patches=remove_SDT_ASM_SECTION_AUTOGROUP_SUPPORT_check.patch +$(package)_sha256_hash=cbd50a4eba5b261394dc454c12448ddec73e55e6742fda7f508f9fbc1331c223 +$(package)_patches=remove_SDT_ASM_SECTION_AUTOGROUP_SUPPORT_check.patch fix_variadic_warning.patch define $(package)_preprocess_cmds patch -p1 < $($(package)_patch_dir)/remove_SDT_ASM_SECTION_AUTOGROUP_SUPPORT_check.patch && \ + patch -p1 < $($(package)_patch_dir)/fix_variadic_warning.patch && \ mkdir -p $($(package)_staging_prefix_dir)/include/sys && \ cp includes/sys/sdt.h $($(package)_staging_prefix_dir)/include/sys/sdt.h endef diff --git a/depends/patches/systemtap/fix_variadic_warning.patch b/depends/patches/systemtap/fix_variadic_warning.patch new file mode 100644 index 0000000000..93cc2d6081 --- /dev/null +++ b/depends/patches/systemtap/fix_variadic_warning.patch @@ -0,0 +1,16 @@ +Could be dropped after a migration to C++20. +See: https://github.com/bitcoin/bitcoin/issues/26916. 
+
+diff --git a/includes/sys/sdt.h b/includes/sys/sdt.h
+index 4075a5f..7c6138c 100644
+--- a/includes/sys/sdt.h
++++ b/includes/sys/sdt.h
+@@ -276,7 +276,7 @@ __extension__ extern unsigned long long __sdt_unsp;
+   _SDT_ASM_1(.purgem _SDT_TYPE_) \
+   _SDT_ASM_1(.purgem _SDT_TYPE)
+ 
+-#define _SDT_ASM_BODY(provider, name, pack_args, args, ...) \
++#define _SDT_ASM_BODY(provider, name, pack_args, args) \
+   _SDT_DEF_MACROS \
+   _SDT_ASM_1(990: _SDT_NOP) \
+   _SDT_ASM_3( .pushsection .note.stapsdt,_SDT_ASM_AUTOGROUP,"note") \
diff --git a/doc/dependencies.md b/doc/dependencies.md
index bca313c40e..f27487d158 100644
--- a/doc/dependencies.md
+++ b/doc/dependencies.md
@@ -24,7 +24,7 @@ These are the dependencies currently used by Dash Core. You can find instruction
 | Qt | [5.15.11](https://download.qt.io/official_releases/qt/) | [5.11.3](https://github.com/bitcoin/bitcoin/pull/24132) | No | | |
 | SQLite | [3.32.1](https://sqlite.org/download.html) | [3.7.17](https://github.com/bitcoin/bitcoin/pull/19077) | | | |
 | XCB | | | | | [Yes](https://github.com/dashpay/dash/blob/develop/depends/packages/qt.mk) (Linux only) |
-| systemtap ([tracing](tracing.md))| [4.7](https://sourceware.org/ftp/systemtap/releases/) | | | | |
+| systemtap ([tracing](tracing.md))| [4.8](https://sourceware.org/ftp/systemtap/releases/) | | | | |
 | xkbcommon | | | | | [Yes](https://github.com/dashpay/dash/blob/develop/depends/packages/qt.mk) (Linux only) |
 | ZeroMQ | [4.3.1](https://github.com/zeromq/libzmq/releases) | 4.0.0 | No | | |
 | zlib | | | | | [Yes](https://github.com/dashpay/dash/blob/develop/depends/packages/qt.mk) |

From c92eb67c40244e21bc8f8005f8b3232db9e3ac7e Mon Sep 17 00:00:00 2001
From: Kittywhiskers Van Gogh <63189531+kwvg@users.noreply.github.com>
Date: Thu, 18 May 2023 14:59:14 +0100
Subject: [PATCH 13/14] merge bitcoin#27458: Detect USDT the same way how it is used in the code

---
 configure.ac | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/configure.ac b/configure.ac
index 2035e0ee53..7d0ebbc5bb 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1413,7 +1413,9 @@ if test x$use_usdt != xno; then
   AC_COMPILE_IFELSE([
     AC_LANG_PROGRAM(
       [#include <sys/sdt.h>],
-      [DTRACE_PROBE("context", "event");]
+      [DTRACE_PROBE(context, event);
+       int a, b, c, d, e, f, g;
+       DTRACE_PROBE7(context, event, a, b, c, d, e, f, g);]
     )],
    [AC_MSG_RESULT(yes); AC_DEFINE([ENABLE_TRACING], [1], [Define to 1 to enable tracepoints for Userspace, Statically Defined Tracing])],
    [AC_MSG_RESULT(no); use_usdt=no;]

From 0a7a234bd3a1dbeb34ab981485a42d32d9d35985 Mon Sep 17 00:00:00 2001
From: Kittywhiskers Van Gogh <63189531+kwvg@users.noreply.github.com>
Date: Thu, 20 Jul 2023 11:36:02 +0100
Subject: [PATCH 14/14] merge bitcoin#28110: correct Fedora systemtap dep

---
 doc/build-unix.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/build-unix.md b/doc/build-unix.md
index db31540a7b..4eb4aed885 100644
--- a/doc/build-unix.md
+++ b/doc/build-unix.md
@@ -206,7 +206,7 @@ sudo dnf install gmp-devel
 User-Space, Statically Defined Tracing (USDT) dependencies:
 
 ```sh
-sudo dnf install systemtap
+sudo dnf install systemtap-sdt-devel
 ```
 
 GUI dependencies:
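For a quick, interactive cross-check of the tracepoint exercised by the functional test above, a small bpftrace script can print each `validation:block_connected` event as blocks are connected. This is only an illustrative sketch and not part of the patch series: it assumes a self-compiled `./src/dashd` (adjust the path for your build) and relies on the argument order wired up in `interface_usdt_validation.py` (hash, height, transactions, inputs, sigops, duration). It needs to be run with root privileges while the node is running.

```
#!/usr/bin/env bpftrace
/* Illustrative sketch, not part of the patches above.
   Assumes a self-compiled ./src/dashd and the validation:block_connected
   argument order used by interface_usdt_validation.py:
   arg0 = hash pointer, arg1 = height, arg2 = transactions,
   arg3 = inputs, arg4 = sigops, arg5 = duration. */
usdt:./src/dashd:validation:block_connected
{
  printf("connected block: height=%ld, txs=%ld, inputs=%ld, sigops=%ld\n",
         arg1, arg2, arg3, arg4);
}
```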