dash/test/functional/test_framework/test_node.py
Andrew Chow f293c046f4
Merge #16528: Native Descriptor Wallets using DescriptorScriptPubKeyMan
223588b1bbc63dc57098bbd0baa48635e0cc0b82 Add a --descriptors option to various tests (Andrew Chow)
869f7ab30aeb4d7fbd563c535b55467a8a0430cf tests: Add RPCOverloadWrapper which overloads some disabled RPCs (Andrew Chow)
cf060628590fab87d73f278e744d70ef2d5d81db Correctly check for default wallet (Andrew Chow)
886e0d75f5fea2421190aa4812777d89f68962cc Implement CWallet::IsSpentKey for non-LegacySPKMans (Andrew Chow)
3c19fdd2a2fd5394fcfa75b2ba84ab2277cbdabf Return error when no ScriptPubKeyMan is available for specified type (Andrew Chow)
388ba94231f2f10a0be751c562cdd4650510a90a Change wallet_encryption.py to use signmessage instead of dumpprivkey (Andrew Chow)
1346e14831489f9c8f53a08f9dfed61d55d53c6f Functional tests for descriptor wallets (Andrew Chow)
f193ea889ddb53d9a5c47647966681d525e38368 add importdescriptors RPC and tests for native descriptor wallets (Hugo Nguyen)
ce24a944940019185efebcc5d85eac458ed26016 Add IsLegacy to CWallet so that the GUI knows whether to show watchonly (Andrew Chow)
1cb42b22b11c27e64462afc25a94b2fc50bfa113 Generate new descriptors when encrypting (Andrew Chow)
82ae02b1656819f4bd5023b8955447e1d4ea8692 Be able to create new wallets with DescriptorScriptPubKeyMans as backing (Andrew Chow)
b713baa75a62335ab9c0eed9ef76a95bfec30668 Implement GetMetadata in DescriptorScriptPubKeyMan (Andrew Chow)
8b9603bd0b443e2f7984eb72bf2e21cf02af0bcb Change GetMetadata to use unique_ptr<CKeyMetadata> (Andrew Chow)
72a9540df96ffdb94f039b9c14eaacdc7d961196 Implement FillPSBT in DescriptorScriptPubKeyMan (Andrew Chow)
84b4978c02102171775c77a45f6ec198930f0a88 Implement SignMessage for descriptor wallets (Andrew Chow)
bde7c9fa38775a81d53ac0484fa9c98076a0c7d1 Implement SignTransaction in DescriptorScriptPubKeyMan (Andrew Chow)
d50c8ddd4190f20bf0debd410348b73408ec3143 Implement GetSolvingProvider for DescriptorScriptPubKeyMan (Andrew Chow)
f1ca5feb4ad668a3e1ae543d0addd5f483f1a88f Implement GetKeypoolOldestTime and only display it if greater than 0 (Andrew Chow)
586b57a9a6b4b12a78f792785b63a5a1743bce0c Implement ReturnDestination in DescriptorScriptPubKeyMan (Andrew Chow)
f866957979c23cefd41efa9dae9e53b9177818dc Implement GetReservedDestination in DescriptorScriptPubKeyMan (Andrew Chow)
a775f7c7fd0b9094fcbeee6ba92206d5bbb19164 Implement Unlock and Encrypt in DescriptorScriptPubKeyMan (Andrew Chow)
bfdd0734869a22217c15858d7a76d0dacc2ebc86 Implement GetNewDestination for DescriptorScriptPubKeyMan (Andrew Chow)
58c7651821b0eeff0a99dc61d78d2e9e07986580 Implement TopUp in DescriptorScriptPubKeyMan (Andrew Chow)
e014886a342508f7c8d80323eee9a5f314eaf94c Implement SetupGeneration for DescriptorScriptPubKeyMan (Andrew Chow)
46dfb99768e7d03a3cf552812d5b41ceaebc06be Implement writing descriptorkeys, descriptorckeys, and descriptors to wallet file (Andrew Chow)
4cb9b69be031e1dc65d8964794781b347fd948f5 Implement several simple functions in DescriptorScriptPubKeyMan (Andrew Chow)
d1ec3e4f19487b4b100f80ad02eac063c571777d Add IsSingleType to Descriptors (Andrew Chow)
953feb3d2724f5398dd48990c4957a19313d2c8c Implement loading of keys for DescriptorScriptPubKeyMan (Andrew Chow)
2363e9fcaa41b68bf11153f591b95f2d41ff9a1a Load the descriptor cache from the wallet file (Andrew Chow)
46c46aebb7943e1e2e96755e94dc6c197920bf75 Implement GetID for DescriptorScriptPubKeyMan (Andrew Chow)
ec2f9e1178c8e38c0a5ca063fe81adac8f916348 Implement IsHDEnabled in DescriptorScriptPubKeyMan (Andrew Chow)
741122d4c1a62ced3e96d16d67f4eeb3a6522d99 Implement MarkUnusedAddresses in DescriptorScriptPubKeyMan (Andrew Chow)
2db7ca765c8fb2c71dd6f7c4f29ad70e68ff1720 Implement IsMine for DescriptorScriptPubKeyMan (Andrew Chow)
db7177af8c159abbcc209f2caafcd45d54c181c5 Add LoadDescriptorScriptPubKeyMan and SetActiveScriptPubKeyMan to CWallet (Andrew Chow)
78f8a92910d34247fa5d04368338c598d9908267 Implement SetType in DescriptorScriptPubKeyMan (Andrew Chow)
834de0300cde57ca3f662fb7aa5b1bdaed68bc8f Store WalletDescriptor in DescriptorScriptPubKeyMan (Andrew Chow)
d8132669e10c1db9ae0c2ea0d3f822d7d2f01345 Add a lock cs_desc_man for DescriptorScriptPubKeyMan (Andrew Chow)
3194a7f88ac1a32997b390b4f188c4b6a4af04a5 Introduce WalletDescriptor class (Andrew Chow)
6b13cd3fa854dfaeb9e269bff3d67cacc0e5b5dc Create LegacyScriptPubKeyMan when not a descriptor wallet (Andrew Chow)
aeac157c9dc141546b45e06ba9c2e641ad86083f Return nullptr from GetLegacyScriptPubKeyMan if descriptor wallet (Andrew Chow)
96accc73f067c7c95946e9932645dd821ef67f63 Add WALLET_FLAG_DESCRIPTORS (Andrew Chow)
6b8119af53ee2fdb4c4b5b24b4e650c0dc3bd27c Introduce DescriptorScriptPubKeyMan as a dummy class (Andrew Chow)
06620302c713cae65ee8e4ff9302e4c88e2a1285 Introduce SetType function to tell ScriptPubKeyMans the type and internal-ness of it (Andrew Chow)

Pull request description:

  Introducing the wallet of the glorious future (again): native descriptor wallets. With native descriptor wallets, addresses are generated from descriptors. Instead of generating keys and deriving addresses from keys, addresses come from the scriptPubKeys produced by a descriptor. Native descriptor wallets will be optional for now and can only be created by using `createwallet`.

  Descriptor wallets will store descriptors, master keys from the descriptor, and descriptor cache entries. Keys are derived from descriptors on the fly. In order to allow choosing different address types, 6 descriptors are needed for normal use. There is a pair of primary and change descriptors for each of the 3 address types. With the default keypool size of 1000, each descriptor has 1000 scriptPubKeys and descriptor cache entries pregenerated. This has a side effect of making wallets large since 6000 pubkeys are written to the wallet by default, instead of the current 2000. scriptPubKeys are kept only in memory and are generated every time a descriptor is loaded. By default, we use the standard BIP 44, 49, 84 derivation paths with an external and internal derivation chain for each.
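
For illustration, the six default descriptors look roughly like this (a sketch; `XPRV` is a placeholder for the wallet's master key, with the external (`/0/*`) and internal (`/1/*`) chains per BIP 44/49/84):

```py
# A sketch of the six default descriptors; XPRV is a placeholder for the
# wallet's master key. One external/internal pair per address type.
XPRV = "xprv..."  # placeholder
DEFAULT_DESCRIPTORS = [
    "pkh({}/44'/0'/0'/0/*)".format(XPRV),       # legacy, external
    "pkh({}/44'/0'/0'/1/*)".format(XPRV),       # legacy, internal (change)
    "sh(wpkh({}/49'/0'/0'/0/*))".format(XPRV),  # p2sh-segwit, external
    "sh(wpkh({}/49'/0'/0'/1/*))".format(XPRV),  # p2sh-segwit, internal
    "wpkh({}/84'/0'/0'/0/*)".format(XPRV),      # bech32, external
    "wpkh({}/84'/0'/0'/1/*)".format(XPRV),      # bech32, internal
]
```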

  Descriptors can also be imported with a new `importdescriptors` RPC.
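
For example, a watch-only range descriptor can be imported through the test framework like this (illustrative; `node` is a running test node, the descriptor string is a placeholder, and `descsum_create` appends the required checksum):

```py
from test_framework.descriptors import descsum_create

# Illustrative sketch -- the xpub inside the descriptor is a placeholder.
res = node.importdescriptors([{
    'desc': descsum_create("wpkh(xpub.../0/*)"),  # placeholder descriptor
    'timestamp': 'now',   # or 0 to rescan from genesis
    'active': True,       # use this descriptor to generate new addresses
    'internal': False,    # external (receiving) chain
}])
assert res[0]['success']
```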

  Native descriptor wallets use the `ScriptPubKeyMan` interface introduced in #16341 to add a `DescriptorScriptPubKeyMan`. This defines a different IsMine which uses the simpler model of "does this scriptPubKey exist in this wallet". Furthermore, `DescriptorScriptPubKeyMan` does not have watchonly, so with native descriptor wallets, it is not possible to have a wallet with both watchonly and non-watchonly things. Rather, a wallet with `disable_private_keys` needs to be used for watchonly things.

  A `--descriptor` option was added to some tests (`wallet_basic.py`, `wallet_encryption.py`, `wallet_keypool.py`, `wallet_keypool_topup.py`, and `wallet_labels.py`) to allow for these tests to use descriptor wallets. Additionally, several RPCs are disabled for descriptor wallets (`importprivkey`, `importpubkey`, `importaddress`, `importmulti`, `addmultisigaddress`, `dumpprivkey`, `dumpwallet`, `importwallet`, and `sethdseed`).

ACKs for top commit:
  Sjors:
    utACK 223588b1bbc63dc57098bbd0baa48635e0cc0b82 (rebased, nits addressed)
  jonatack:
    Code review re-ACK 223588b1bbc63dc57098bbd0baa48635e0cc0b82.
  fjahr:
    re-ACK 223588b1bbc63dc57098bbd0baa48635e0cc0b82
  instagibbs:
    light re-ACK 223588b
  meshcollider:
    Code review ACK 223588b1bbc63dc57098bbd0baa48635e0cc0b82

Tree-SHA512: 59bc52aeddbb769ed5f420d5d240d8137847ac821b588eb616b34461253510c1717d6a70bab8765631738747336ae06f45ba39603ccd17f483843e5ed9a90986

Introduce SetType function to tell ScriptPubKeyMans the type and internal-ness of it

Introduce DescriptorScriptPubKeyMan as a dummy class

Add WALLET_FLAG_DESCRIPTORS

Return nullptr from GetLegacyScriptPubKeyMan if descriptor wallet

Create LegacyScriptPubKeyMan when not a descriptor wallet

Introduce WalletDescriptor class

WalletDescriptor is a Descriptor with other wallet metadata

Add a lock cs_desc_man for DescriptorScriptPubKeyMan

Store WalletDescriptor in DescriptorScriptPubKeyMan

Implement SetType in DescriptorScriptPubKeyMan

Add LoadDescriptorScriptPubKeyMan and SetActiveScriptPubKeyMan to CWallet

Implement IsMine for DescriptorScriptPubKeyMan

Adds a set of scriptPubKeys that DescriptorScriptPubKeyMan tracks.
If the given script is in that set, it is considered ISMINE_SPENDABLE
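
In Python pseudocode, the model reduces to set membership (the real implementation is C++ in the wallet; this is only a sketch):

```py
# Pseudocode for the descriptor IsMine model (the real code is C++):
def is_mine(script_pub_key, tracked_scripts):
    # tracked_scripts: the set of scriptPubKeys expanded from the wallet's descriptors
    return "ISMINE_SPENDABLE" if script_pub_key in tracked_scripts else "ISMINE_NO"
```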

Implement MarkUnusedAddresses in DescriptorScriptPubKeyMan

Implement IsHDEnabled in DescriptorScriptPubKeyMan

Implement GetID for DescriptorScriptPubKeyMan

Load the descriptor cache from the wallet file

Implement loading of keys for DescriptorScriptPubKeyMan

Add IsSingleType to Descriptors

IsSingleType will return whether the descriptor will give one or multiple scriptPubKeys

Implement several simple functions in DescriptorScriptPubKeyMan

Implements a bunch of one liners: UpgradeKeyMetadata, IsFirstRun, HavePrivateKeys,
KeypoolCountExternalKeys, GetKeypoolSize, GetTimeFirstKey, CanGetAddresses,
RewriteDB

Implement writing descriptorkeys, descriptorckeys, and descriptors to wallet file

Implement SetupGeneration for DescriptorScriptPubKeyMan

Implement TopUp in DescriptorScriptPubKeyMan

Implement GetNewDestination for DescriptorScriptPubKeyMan

Implement Unlock and Encrypt in DescriptorScriptPubKeyMan

Implement GetReservedDestination in DescriptorScriptPubKeyMan

Implement ReturnDestination in DescriptorScriptPubKeyMan

Implement GetKeypoolOldestTime and only display it if greater than 0

Implement GetSolvingProvider for DescriptorScriptPubKeyMan

Internally, a GetSigningProvider function is introduced which allows for
some private keys to be optionally included. This can be called with a
script as the argument (i.e. a scriptPubKey from our wallet when we are
signing) or with a pubkey. In order to know what index to expand the
private keys for that pubkey, we need to also cache all of the pubkeys
involved when we expand the descriptor. So SetCache and TopUp are
updated to do this too.
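
Conceptually, the caching works like this (a Python sketch with illustrative helper names; the real logic is C++):

```py
# Sketch only; expand_pubkeys_at/expand_private_keys_at are illustrative helpers.
pubkey_to_index = {}

def top_up(descriptor, start, end):
    # Expanding the descriptor caches every pubkey it involves, keyed by
    # the derivation index that produced it.
    for i in range(start, end):
        for pubkey in expand_pubkeys_at(descriptor, i):
            pubkey_to_index[pubkey] = i

def signing_provider_for(descriptor, pubkey):
    # Given a pubkey (e.g. from a PSBT input), look up its index and expand
    # the corresponding private keys on demand.
    i = pubkey_to_index[pubkey]
    return expand_private_keys_at(descriptor, i)
```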

Implement SignTransaction in DescriptorScriptPubKeyMan

Implement SignMessage for descriptor wallets

Implement FillPSBT in DescriptorScriptPubKeyMan

FillPSBT will add our own scripts to the PSBT if those inputs are ours.
If an input also lists pubkeys that we happen to know the private keys
for, we will sign those inputs too.

Change GetMetadata to use unique_ptr<CKeyMetadata>

Implement GetMetadata in DescriptorScriptPubKeyMan

Be able to create new wallets with DescriptorScriptPubKeyMans as backing

Generate new descriptors when encrypting

Add IsLegacy to CWallet so that the GUI knows whether to show watchonly

add importdescriptors RPC and tests for native descriptor wallets

Co-authored-by: Andrew Chow <achow101-github@achow101.com>

Functional tests for descriptor wallets

Change wallet_encryption.py to use signmessage instead of dumpprivkey

Return error when no ScriptPubKeyMan is available for specified type

When a CWallet doesn't have a ScriptPubKeyMan for the requested type
in GetNewDestination, give a meaningful error. Also handle this in
Qt which did not do anything with errors.

Implement CWallet::IsSpentKey for non-LegacySPKMans

tests: Add RPCOverloadWrapper which overloads some disabled RPCs

RPCOverloadWrapper overloads some deprecated or disabled RPCs with
an implementation using other RPCs to avoid having a ton of code churn
around replacing those RPCs.
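
The effect, roughly (a sketch; `RPCOverloadWrapper` itself appears in the file below):

```py
# Tests keep calling the old RPC names; the wrapper reroutes them as needed.
w = node.get_wallet_rpc('w1')   # returns an RPCOverloadWrapper
w.importprivkey(privkey)        # legacy wallet: plain importprivkey;
                                # descriptor wallet: importdescriptors with
                                # descsum_create('combo(<privkey>)')
```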

Add a --descriptors option to various tests

Adds a --descriptors option globally to the test framework. This will
make the test create and use descriptor wallets. However some tests may
not work with this.

Some tests are modified to work with --descriptors and run with that
option in test_runner:
* wallet_basic.py
* wallet_encryption.py
* wallet_keypool.py <---- wallet_keypool_hd.py actually
* wallet_keypool_topup.py
* wallet_labels.py
* wallet_avoidreuse.py
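
The flag is registered on the shared argument parser, roughly as follows (a sketch of the argparse wiring assumed to live in test_framework.py):

```py
import argparse

# Sketch of the wiring assumed to live in test_framework.py:
parser = argparse.ArgumentParser()
parser.add_argument("--descriptors", default=False, action="store_true",
                    help="Run the test using a descriptor wallet")
options = parser.parse_args([])  # e.g. pass ["--descriptors"] to enable
# TestNode receives descriptors=options.descriptors and hands it on to
# RPCOverloadWrapper (see test_node.py below).
```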
2024-03-07 01:23:15 +07:00


#!/usr/bin/env python3
# Copyright (c) 2017-2020 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Class for dashd node under test"""
import contextlib
import decimal
import errno
from enum import Enum
import http.client
import json
import logging
import os.path
import re
import subprocess
import tempfile
import time
import urllib.parse
import shlex
import sys
import collections

from .authproxy import JSONRPCException
from .descriptors import descsum_create
from .messages import MY_SUBVERSION
from .util import (
    MAX_NODES,
    append_config,
    delete_cookie_file,
    get_auth_cookie,
    get_rpc_proxy,
    rpc_url,
    wait_until_helper,
    p2p_port,
    get_chain_folder,
    EncodeDecimal,
)

BITCOIND_PROC_WAIT_TIMEOUT = 60


class FailedToStartError(Exception):
    """Raised when a node fails to start correctly."""


class ErrorMatch(Enum):
    FULL_TEXT = 1
    FULL_REGEX = 2
    PARTIAL_REGEX = 3


class TestNode():
    """A class for representing a dashd node under test.

    This class contains:

    - state about the node (whether it's running, etc)
    - a Python subprocess.Popen object representing the running process
    - an RPC connection to the node
    - one or more P2P connections to the node

    To make things easier for the test writer, any unrecognised messages will
    be dispatched to the RPC connection."""

    def __init__(self, i, datadir, extra_args_from_options, *, chain, rpchost, timewait, timeout_factor, bitcoind, bitcoin_cli, mocktime, coverage_dir, cwd, extra_conf=None, extra_args=None, use_cli=False, start_perf=False, use_valgrind=False, version=None, descriptors=False):
        """
        Kwargs:
            start_perf (bool): If True, begin profiling the node with `perf` as soon as
                the node starts.
        """
        self.index = i
        self.datadir = datadir
        self.chain = chain
        self.bitcoinconf = os.path.join(self.datadir, "dash.conf")
        self.stdout_dir = os.path.join(self.datadir, "stdout")
        self.stderr_dir = os.path.join(self.datadir, "stderr")
        self.rpchost = rpchost
        self.rpc_timeout = timewait
        self.binary = bitcoind
        self.coverage_dir = coverage_dir
        self.cwd = cwd
        self.mocktime = mocktime
        self.descriptors = descriptors
        if extra_conf is not None:
            append_config(datadir, extra_conf)
        # Most callers will just need to add extra args to the standard list below.
        # For those callers that need more flexibility, they can just set the args property directly.
        # Note that common args are set in the config file (see initialize_datadir)
        self.extra_args = extra_args
        self.extra_args_from_options = extra_args_from_options
        self.version = version
        # Configuration for logging is set as command-line args rather than in the bitcoin.conf file.
        # This means that starting a bitcoind using the temp dir to debug a failed test won't
        # spam debug.log.
        self.args = [
            self.binary,
            "-datadir=" + self.datadir,
            "-logtimemicros",
            "-debug",
            "-debugexclude=libevent",
            "-debugexclude=leveldb",
            "-mocktime=" + str(mocktime),
            "-uacomment=testnode%d" % i
        ]
        if use_valgrind:
            default_suppressions_file = os.path.join(
                os.path.dirname(os.path.realpath(__file__)),
                "..", "..", "..", "contrib", "valgrind.supp")
            suppressions_file = os.getenv("VALGRIND_SUPPRESSIONS_FILE",
                                          default_suppressions_file)
            self.args = ["valgrind", "--suppressions={}".format(suppressions_file),
                         "--gen-suppressions=all", "--exit-on-first-error=yes",
                         "--error-exitcode=1", "--quiet"] + self.args
        if self.version_is_at_least(190000):
            self.args.append("-logthreadnames")

        self.cli = TestNodeCLI(bitcoin_cli, self.datadir)
        self.use_cli = use_cli
        self.start_perf = start_perf

        # Don't try auto backups (they fail a lot when running tests)
        self.args.append("-createwalletbackups=0")

        self.running = False
        self.process = None
        self.rpc_connected = False
        self.rpc = None
        self.url = None
        self.log = logging.getLogger('TestFramework.node%d' % i)
        self.cleanup_on_exit = True  # Whether to kill the node when this object goes away
        # Cache perf subprocesses here by their data output filename.
        self.perf_subprocesses = {}

        self.p2ps = []
        self.timeout_factor = timeout_factor

    AddressKeyPair = collections.namedtuple('AddressKeyPair', ['address', 'key'])
    PRIV_KEYS = [
        # address , privkey
        AddressKeyPair('yYdShjQSptFKitYLksFEUSwHe4hnbar5rf', 'cMfbiEsnG5b8Gwm6vEgfWvZLuXZNC4zsN2y7Es3An9xHRWRjmwgR'),
        AddressKeyPair('yfTFJgvq65UZsb9RBbpdYAAzsJoCGXqH2w', 'cStuFACUD1N6JjKQxNLUQ443qJUtSzLitKKEkA8x6utxTPZTLUtA'),
        AddressKeyPair('yU3w4VDjKhHiZpWszkUZVnFTS56AfgdfPV', 'cQb5yh2sTiG7dsxxbXHhWSBLMByYT7jY49A1kC7zKhgL9WNHysWW'),
        AddressKeyPair('yYhzix2R5LiYnDixsUnF8XwBYGYpyeTgB4', 'cW9Gu6uU4KoZJQcdyUvjULNRg4C8srPJw1adhgdTZMr9YQdKHtcn'),
        AddressKeyPair('yiQ3qLx5L1BW9XA6JAG7hC8UQDktcBCeYG', 'cSq7gHVC1QPsswyX2pE5C38UnWZXfCLr7XnkjnDwuZ68NkWp183T'),
        AddressKeyPair('yUL8h8mR7aNDRsU5zhcDbpp6YtA6ieUtK2', 'cTk7hiDKgxZX3JSb37vywdYYjjJows4DQjEaxBJDGF6LC6GXvPKo'),
        AddressKeyPair('yfy21e12jn3A3uDicNehCq486o9fMwJKMc', 'cMuko9rLDbtxCFWuBSrFgBDRSMxsLWKpJKScRGNuWKbhuQsnsjKT'),
        AddressKeyPair('yURgENB3b2YRMWnbhKF7iGs3KoaVRVXsJr', 'cQhdjTMh57MaHCDk9FsWGPtftRMBUuhaYAtouWnetcewmBuSrLSM'),
        AddressKeyPair('yYC9AxBEUs3ZZxfcQvj2LUF5PVxxtqaEs7', 'cQFueiiP13mfytV3Svoe4o4Ux79fRJvwuSgHapXsnBwrHod57EeL'),
        AddressKeyPair('yVs9jXGyLWLLFbpESnoppk7F8DtXcuCCTf', 'cN55daf1HotwBAgAKWVgDcoppmUNDtQSfb7XLutTLeAgVc3u8hik'),
        AddressKeyPair('yV3eqNNshZJ4Pv6NCyYsbdJb1ERFFygFqf', 'cT7qK7g1wkYEMvKowd2ZrX1E5f6JQ7TM246UfqbCiyF7kZhorpX3'),
        AddressKeyPair('yfE8gZCiFW9Uqu21v3JGibr3WVSPQWmY8n', 'cPiRWE8KMjTRxH1MWkPerhfoHFn5iHPWVK5aPqjW8NxmdwenFinJ'),
        AddressKeyPair('yLLVXzya7GzmVkjQzsCG4iDpqYJyJFDSEV', 'cVLCocFyWxzyCwEknkWvDeWneTBsh9Jf3u4yiJCYjcy3gt8Jw1cM'),
        AddressKeyPair('yLNNR3HeJxgR669oRePksYmCqHuPUG79mF', 'cQawC3oUgoToGDJBw1Ub2PpDmf44kVtcaVaTcHyzXMRKGwdn9UYW'),
        AddressKeyPair('yLPKVwRTXME7Q3JfKAPJ4FHEaGdWgJuhpj', 'cVcFaWTbkCUZPFTHfDs8iHurPWns5QXc5rqcfkPMHUdmv17o8UYB'),
        AddressKeyPair('yLPUundzTpvjU8KYVyM4Zmnr4REf3FFvhZ', 'cRVeRmRaYuEYP9HbCZFsf1ifYYZ4KQD9rttRoTNb9wjPzhvRwqMb'),
        AddressKeyPair('yLRhHqau58AS1ALtnaowv1Pyztxi1Q6fXG', 'cNYFW52pJswYbfPR9fpiRpWHEQygg5tyMih2ASPsgMgPy9SUSSEV'),
        AddressKeyPair('yLRwHeMkXwYrkDzC4q12vej243AyTeWiPm', 'cRqfZ3dAp8BJUcGhSv7ueCXNGbki1bpcXEKk5dEJN344H52GuHQY'),
        AddressKeyPair('yLTMCXJhG1mpaWhbHcsr7zUt9wDWuQSPSk', 'cVWGbeCT5QcVGVTL5NuiLs9JfL8HFDb9PN5Gq2xudw6ZsDFeDy1V'),
        AddressKeyPair('yLU9vxiAWUdiKKxn6EazLDFq9WXrK2T7RP', 'cVCzrzfxMhUMxV34UhTmdmntAqHvosAuNo2KUZsiHZSKLm73g35o'),
    ]

    def get_deterministic_priv_key(self):
        """Return a deterministic priv key in base58, that only depends on the node's index"""
        assert len(self.PRIV_KEYS) == MAX_NODES
        return self.PRIV_KEYS[self.index]

    def _node_msg(self, msg: str) -> str:
        """Return a modified msg that identifies this node by its index as a debugging aid."""
        return "[node %d] %s" % (self.index, msg)

    def _raise_assertion_error(self, msg: str):
        """Raise an AssertionError with msg modified to identify this node."""
        raise AssertionError(self._node_msg(msg))

    def __del__(self):
        # Ensure that we don't leave any dashd processes lying around after
        # the test ends
        if self.process and self.cleanup_on_exit:
            # Should only happen on test failure
            # Avoid using logger, as that may have already been shutdown when
            # this destructor is called.
            print(self._node_msg("Cleaning up leftover process"))
            self.process.kill()

    def __getattr__(self, name):
        """Dispatches any unrecognised messages to the RPC connection or a CLI instance."""
        if self.use_cli:
            return getattr(RPCOverloadWrapper(self.cli, True, self.descriptors), name)
        else:
            assert self.rpc_connected and self.rpc is not None, self._node_msg("Error: no RPC connection")
            return getattr(RPCOverloadWrapper(self.rpc, descriptors=self.descriptors), name)

    def start(self, extra_args=None, *, cwd=None, stdout=None, stderr=None, **kwargs):
        """Start the node."""
        if extra_args is None:
            extra_args = self.extra_args

        # Add a new stdout and stderr file each time dashd is started
        if stderr is None:
            stderr = tempfile.NamedTemporaryFile(dir=self.stderr_dir, delete=False)
        if stdout is None:
            stdout = tempfile.NamedTemporaryFile(dir=self.stdout_dir, delete=False)
        self.stderr = stderr
        self.stdout = stdout

        if cwd is None:
            cwd = self.cwd

        all_args = self.args + self.extra_args_from_options + extra_args
        if self.mocktime != 0:
            all_args = all_args + ["-mocktime=%d" % self.mocktime]

        # Delete any existing cookie file -- if such a file exists (eg due to
        # unclean shutdown), it will get overwritten anyway by dashd, and
        # potentially interfere with our attempt to authenticate
        delete_cookie_file(self.datadir, self.chain)

        # add environment variable LIBC_FATAL_STDERR_=1 so that libc errors are written to stderr and not the terminal
        subp_env = dict(os.environ, LIBC_FATAL_STDERR_="1")

        self.process = subprocess.Popen(all_args, env=subp_env, stdout=stdout, stderr=stderr, cwd=cwd, **kwargs)

        self.running = True
        self.log.debug("dashd started, waiting for RPC to come up")

        if self.start_perf:
            self._start_perf()

    def wait_for_rpc_connection(self):
        """Sets up an RPC connection to the dashd process. Raises an AssertionError if unable to connect."""
        # Poll at a rate of four times per second
        poll_per_s = 4
        for _ in range(poll_per_s * self.rpc_timeout):
            if self.process.poll() is not None:
                raise FailedToStartError(self._node_msg(
                    'dashd exited with status {} during initialization'.format(self.process.returncode)))
            try:
                rpc = get_rpc_proxy(
                    rpc_url(self.datadir, self.index, self.chain, self.rpchost),
                    self.index,
                    timeout=self.rpc_timeout // 2,  # Shorter timeout to allow for one retry in case of ETIMEDOUT
                    coveragedir=self.coverage_dir,
                )
                rpc.getblockcount()
                # If the call to getblockcount() succeeds then the RPC connection is up
                if self.version_is_at_least(180000):
                    # getmempoolinfo.loaded is available since commit
                    # 71e38b9ebcb78b3a264a4c25c7c4e373317f2a40 (version 0.18.0)
                    wait_until_helper(lambda: rpc.getmempoolinfo()['loaded'])
                    # Wait for the node to finish reindex, block import, and
                    # loading the mempool. Usually importing happens fast or
                    # even "immediate" when the node is started. However, there
                    # is no guarantee and sometimes ThreadImport might finish
                    # later. This is going to cause intermittent test failures,
                    # because generally the tests assume the node is fully
                    # ready after being started.
                    #
                    # For example, the node will reject block messages from p2p
                    # when it is still importing with the error "Unexpected
                    # block message received"
                    #
                    # The wait is done here to make tests as robust as possible
                    # and prevent racy tests and intermittent failures as much
                    # as possible. Some tests might not need this, but the
                    # overhead is trivial, and the added guarantees are worth
                    # the minimal performance cost.
                self.log.debug("RPC successfully started")
                if self.use_cli:
                    return
                self.rpc = rpc
                self.rpc_connected = True
                self.url = self.rpc.rpc_url
                return
            except JSONRPCException as e:  # Initialization phase
                # -28 RPC in warmup
                # -342 Service unavailable, RPC server started but is shutting down due to error
                if e.error['code'] != -28 and e.error['code'] != -342:
                    raise  # unknown JSON RPC exception
            except ConnectionResetError:
                # This might happen when the RPC server is in warmup, but shut down before the call to getblockcount
                # succeeds. Try again to properly raise the FailedToStartError
                pass
            except OSError as e:
                if e.errno == errno.ETIMEDOUT:
                    pass  # Treat identical to ConnectionResetError
                elif e.errno == errno.ECONNREFUSED:
                    pass  # Port not yet open?
                else:
                    raise  # unknown OS error
            except ValueError as e:  # cookie file not found and no rpcuser or rpcpassword; dashd is still starting
                if "No RPC credentials" not in str(e):
                    raise
            time.sleep(1.0 / poll_per_s)
        self._raise_assertion_error("Unable to connect to dashd after {}s".format(self.rpc_timeout))

    def wait_for_cookie_credentials(self):
        """Ensures auth cookie credentials can be read, e.g. for testing CLI with -rpcwait before RPC connection is up."""
        self.log.debug("Waiting for cookie credentials")
        # Poll at a rate of four times per second.
        poll_per_s = 4
        for _ in range(poll_per_s * self.rpc_timeout):
            try:
                get_auth_cookie(self.datadir, self.chain)
                self.log.debug("Cookie credentials successfully retrieved")
                return
            except ValueError:  # cookie file not found and no rpcuser or rpcpassword; bitcoind is still starting
                pass            # so we continue polling until RPC credentials are retrieved
            time.sleep(1.0 / poll_per_s)
        self._raise_assertion_error("Unable to retrieve cookie credentials after {}s".format(self.rpc_timeout))

    def generate(self, nblocks, maxtries=1000000):
        self.log.debug("TestNode.generate() dispatches `generate` call to `generatetoaddress`")
        return self.generatetoaddress(nblocks=nblocks, address=self.get_deterministic_priv_key().address, maxtries=maxtries)

    def get_wallet_rpc(self, wallet_name):
        if self.use_cli:
            return RPCOverloadWrapper(self.cli("-rpcwallet={}".format(wallet_name)), True, self.descriptors)
        else:
            assert self.rpc_connected and self.rpc, self._node_msg("RPC not connected")
            wallet_path = "wallet/{}".format(urllib.parse.quote(wallet_name))
            return RPCOverloadWrapper(self.rpc / wallet_path, descriptors=self.descriptors)

    def version_is_at_least(self, ver):
        return self.version is None or self.version >= ver

    def stop_node(self, expected_stderr='', *, wait=0, wait_until_stopped=True):
        """Stop the node."""
        if not self.running:
            return
        self.log.debug("Stopping node")
        try:
            # Do not use wait argument when testing older nodes, e.g. in feature_backwards_compatibility.py
            if self.version_is_at_least(180000):
                self.stop(wait=wait)
            else:
                self.stop()
        except http.client.CannotSendRequest:
            self.log.exception("Unable to stop node.")

        # If there are any running perf processes, stop them.
        for profile_name in tuple(self.perf_subprocesses.keys()):
            self._stop_perf(profile_name)

        # Check that stderr is as expected
        self.stderr.seek(0)
        stderr = self.stderr.read().decode('utf-8').strip()
        if stderr != expected_stderr:
            raise AssertionError("Unexpected stderr {} != {}".format(stderr, expected_stderr))

        self.stdout.close()
        self.stderr.close()

        del self.p2ps[:]

        if wait_until_stopped:
            self.wait_until_stopped()

    def is_node_stopped(self):
        """Checks whether the node has stopped.

        Returns True if the node has stopped. False otherwise.
        This method is responsible for freeing resources (self.process)."""
        if not self.running:
            return True
        return_code = self.process.poll()
        if return_code is None:
            return False

        # process has stopped. Assert that it didn't return an error code.
        assert return_code == 0, self._node_msg(
            "Node returned non-zero exit code (%d) when stopping" % return_code)
        self.running = False
        self.process = None
        self.rpc_connected = False
        self.rpc = None
        self.log.debug("Node stopped")
        return True

    def wait_until_stopped(self, timeout=BITCOIND_PROC_WAIT_TIMEOUT):
        wait_until_helper(self.is_node_stopped, timeout=timeout, timeout_factor=self.timeout_factor)

    @contextlib.contextmanager
    def assert_debug_log(self, expected_msgs, unexpected_msgs=None, timeout=2):
        if unexpected_msgs is None:
            unexpected_msgs = []
        time_end = time.time() + timeout * self.timeout_factor
        chain = get_chain_folder(self.datadir, self.chain)
        debug_log = os.path.join(self.datadir, chain, 'debug.log')
        with open(debug_log, encoding='utf-8') as dl:
            dl.seek(0, 2)
            prev_size = dl.tell()

        yield

        while True:
            found = True
            with open(debug_log, encoding='utf-8') as dl:
                dl.seek(prev_size)
                log = dl.read()
            print_log = " - " + "\n - ".join(log.splitlines())
            for unexpected_msg in unexpected_msgs:
                if re.search(re.escape(unexpected_msg), log, flags=re.MULTILINE):
                    self._raise_assertion_error('Unexpected message "{}" partially matches log:\n\n{}\n\n'.format(unexpected_msg, print_log))
            for expected_msg in expected_msgs:
                if re.search(re.escape(expected_msg), log, flags=re.MULTILINE) is None:
                    found = False
            if found:
                return
            if time.time() >= time_end:
                break
            time.sleep(0.05)
        self._raise_assertion_error('Expected messages "{}" does not partially match log:\n\n{}\n\n'.format(str(expected_msgs), print_log))
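
    # Illustrative usage (not part of the original file): wrap the action whose
    # logging you want to check; the assertion scans only the portion of
    # debug.log written inside the block. The message below is hypothetical.
    #
    #     with node.assert_debug_log(expected_msgs=["Flushed fee estimates"]):
    #         node.stop_node()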

    @contextlib.contextmanager
    def profile_with_perf(self, profile_name):
        """
        Context manager that allows easy profiling of node activity using `perf`.

        See `test/functional/README.md` for details on perf usage.

        Args:
            profile_name (str): This string will be appended to the
                profile data filename generated by perf.
        """
        subp = self._start_perf(profile_name)

        yield

        if subp:
            self._stop_perf(profile_name)
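
    # Illustrative usage: profile everything inside the block with perf; the
    # profile name is arbitrary and only affects the output filename.
    #
    #     with node.profile_with_perf("generate-blocks"):
    #         node.generate(100)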

    def _start_perf(self, profile_name=None):
        """Start a perf process to profile this node.

        Returns the subprocess running perf."""
        subp = None

        def test_success(cmd):
            return subprocess.call(
                # shell=True required for pipe use below
                cmd, shell=True,
                stderr=subprocess.DEVNULL, stdout=subprocess.DEVNULL) == 0

        if not sys.platform.startswith('linux'):
            self.log.warning("Can't profile with perf; only available on Linux platforms")
            return None

        if not test_success('which perf'):
            self.log.warning("Can't profile with perf; must install perf-tools")
            return None

        if not test_success('readelf -S {} | grep .debug_str'.format(shlex.quote(self.binary))):
            self.log.warning(
                "perf output won't be very useful without debug symbols compiled into bitcoind")

        output_path = tempfile.NamedTemporaryFile(
            dir=self.datadir,
            prefix="{}.perf.data.".format(profile_name or 'test'),
            delete=False,
        ).name

        cmd = [
            'perf', 'record',
            '-g',                     # Record the callgraph.
            '--call-graph', 'dwarf',  # Compatibility for gcc's --fomit-frame-pointer.
            '-F', '101',              # Sampling frequency in Hz.
            '-p', str(self.process.pid),
            '-o', output_path,
        ]
        subp = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        self.perf_subprocesses[profile_name] = subp

        return subp

    def _stop_perf(self, profile_name):
        """Stop (and pop) a perf subprocess."""
        subp = self.perf_subprocesses.pop(profile_name)
        output_path = subp.args[subp.args.index('-o') + 1]

        subp.terminate()
        subp.wait(timeout=10)

        stderr = subp.stderr.read().decode()
        if 'Consider tweaking /proc/sys/kernel/perf_event_paranoid' in stderr:
            self.log.warning(
                "perf couldn't collect data! Try "
                "'sudo sysctl -w kernel.perf_event_paranoid=-1'")
        else:
            report_cmd = "perf report -i {}".format(output_path)
            self.log.info("See perf output by running '{}'".format(report_cmd))

    def assert_start_raises_init_error(self, extra_args=None, expected_msg=None, match=ErrorMatch.FULL_TEXT, *args, **kwargs):
        """Attempt to start the node and expect it to raise an error.

        extra_args: extra arguments to pass through to dashd
        expected_msg: regex that stderr should match when dashd fails

        Will throw if dashd starts without an error.
        Will throw if an expected_msg is provided and it does not match dashd's stderr."""
        with tempfile.NamedTemporaryFile(dir=self.stderr_dir, delete=False) as log_stderr, \
             tempfile.NamedTemporaryFile(dir=self.stdout_dir, delete=False) as log_stdout:
            try:
                self.start(extra_args, stdout=log_stdout, stderr=log_stderr, *args, **kwargs)
                ret = self.process.wait(timeout=self.rpc_timeout)
                self.log.debug(self._node_msg(f'dashd exited with status {ret} during initialization'))
                self.running = False
                self.process = None
                # Check stderr for expected message
                if expected_msg is not None:
                    log_stderr.seek(0)
                    stderr = log_stderr.read().decode('utf-8').strip()
                    if match == ErrorMatch.PARTIAL_REGEX:
                        if re.search(expected_msg, stderr, flags=re.MULTILINE) is None:
                            self._raise_assertion_error(
                                'Expected message "{}" does not partially match stderr:\n"{}"'.format(expected_msg, stderr))
                    elif match == ErrorMatch.FULL_REGEX:
                        if re.fullmatch(expected_msg, stderr) is None:
                            self._raise_assertion_error(
                                'Expected message "{}" does not fully match stderr:\n"{}"'.format(expected_msg, stderr))
                    elif match == ErrorMatch.FULL_TEXT:
                        if expected_msg != stderr:
                            self._raise_assertion_error(
                                'Expected message "{}" does not fully match stderr:\n"{}"'.format(expected_msg, stderr))
            except subprocess.TimeoutExpired:
                self.process.kill()
                self.running = False
                self.process = None
                assert_msg = f'dashd should have exited within {self.rpc_timeout}s '
                if expected_msg is None:
                    assert_msg += "with an error"
                else:
                    assert_msg += "with expected error " + expected_msg
                self._raise_assertion_error(assert_msg)

    def add_p2p_connection(self, p2p_conn, *, wait_for_verack=True, **kwargs):
        """Add a p2p connection to the node.

        This method adds the p2p connection to the self.p2ps list and also
        returns the connection to the caller."""
        if 'dstport' not in kwargs:
            kwargs['dstport'] = p2p_port(self.index)
        if 'dstaddr' not in kwargs:
            kwargs['dstaddr'] = '127.0.0.1'

        p2p_conn.peer_connect(**kwargs, net=self.chain, timeout_factor=self.timeout_factor)()
        self.p2ps.append(p2p_conn)
        p2p_conn.wait_until(lambda: p2p_conn.is_connected, check_connected=False)
        if wait_for_verack:
            # Wait for the node to send us the version and verack
            p2p_conn.wait_for_verack()
            # At this point we have sent our version message and received the version and verack, however the full node
            # has not yet received the verack from us (in reply to their version). So, the connection is not yet fully
            # established (fSuccessfullyConnected).
            #
            # This shouldn't lead to any issues when sending messages, since the verack will be in-flight before the
            # message we send. However, it might lead to races where we are expecting to receive a message. E.g. a
            # transaction that will be added to the mempool as soon as we return here.
            #
            # So syncing here is redundant when we only want to send a message, but the cost is low (a few milliseconds)
            # in comparison to the upside of making tests less fragile and unexpected intermittent errors less likely.
            p2p_conn.sync_with_ping()

        return p2p_conn
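
    # Illustrative usage (assumes a P2PInterface subclass from the framework's
    # p2p/mininode module):
    #
    #     peer = node.add_p2p_connection(P2PInterface())
    #     peer.sync_with_ping()  # connection is live and registered in node.p2ps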

    def num_test_p2p_connections(self):
        """Return number of test framework p2p connections to the node."""
        return len([peer for peer in self.getpeerinfo() if peer['subver'] == MY_SUBVERSION.decode("utf-8")])

    def disconnect_p2ps(self):
        """Close all p2p connections to the node."""
        for p in self.p2ps:
            p.peer_disconnect()

        # wait for p2p connections to disappear from getpeerinfo()
        def check_peers():
            for p in self.getpeerinfo():
                for p2p in self.p2ps:
                    if p['subver'] == p2p.strSubVer.decode():
                        return False
            return True
        wait_until_helper(check_peers, timeout=5)

        del self.p2ps[:]

        wait_until_helper(lambda: self.num_test_p2p_connections() == 0, timeout_factor=self.timeout_factor)


class TestNodeCLIAttr:
    def __init__(self, cli, command):
        self.cli = cli
        self.command = command

    def __call__(self, *args, **kwargs):
        return self.cli.send_cli(self.command, *args, **kwargs)

    def get_request(self, *args, **kwargs):
        return lambda: self(*args, **kwargs)


def arg_to_cli(arg):
    if isinstance(arg, bool):
        return str(arg).lower()
    elif isinstance(arg, dict) or isinstance(arg, list):
        return json.dumps(arg, default=EncodeDecimal)
    else:
        return str(arg)
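
# For example (illustrative): arg_to_cli(True) -> "true", arg_to_cli({"a": 1})
# -> '{"a": 1}', and arg_to_cli(decimal.Decimal("0.1")) -> "0.1", so Python
# values can be passed straight through to the dash-cli command line.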


class TestNodeCLI():
    """Interface to dash-cli for an individual node"""

    def __init__(self, binary, datadir):
        self.options = []
        self.binary = binary
        self.datadir = datadir
        self.input = None
        self.log = logging.getLogger('TestFramework.dashcli')

    def __call__(self, *options, input=None):
        # TestNodeCLI is callable with dash-cli command-line options
        cli = TestNodeCLI(self.binary, self.datadir)
        cli.options = [str(o) for o in options]
        cli.input = input
        return cli

    def __getattr__(self, command):
        return TestNodeCLIAttr(self, command)

    def batch(self, requests):
        results = []
        for request in requests:
            try:
                results.append(dict(result=request()))
            except JSONRPCException as e:
                results.append(dict(error=e))
        return results
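
    # Illustrative usage: build deferred calls with get_request() and submit
    # them together; each entry in the result is {'result': ...} or {'error': ...}.
    #
    #     reqs = [node.cli.getblockcount.get_request(),
    #             node.cli.getbestblockhash.get_request()]
    #     results = node.cli.batch(reqs)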

    def send_cli(self, command=None, *args, **kwargs):
        """Run dash-cli command. Deserializes returned string as python object."""
        pos_args = [arg_to_cli(arg) for arg in args]
        named_args = [str(key) + "=" + arg_to_cli(value) for (key, value) in kwargs.items()]
        p_args = [self.binary, "-datadir=" + self.datadir] + self.options
        if named_args:
            p_args += ["-named"]
        if command is not None:
            p_args += [command]
        p_args += pos_args + named_args
        self.log.debug("Running dash-cli {}".format(p_args[2:]))
        process = subprocess.Popen(p_args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
        cli_stdout, cli_stderr = process.communicate(input=self.input)
        returncode = process.poll()
        if returncode:
            match = re.match(r'error code: ([-0-9]+)\nerror message:\n(.*)', cli_stderr)
            if match:
                code, message = match.groups()
                raise JSONRPCException(dict(code=int(code), message=message))
            # Ignore cli_stdout, raise with cli_stderr
            raise subprocess.CalledProcessError(returncode, self.binary, output=cli_stderr)
        try:
            return json.loads(cli_stdout, parse_float=decimal.Decimal)
        except (json.JSONDecodeError, decimal.InvalidOperation):
            return cli_stdout.rstrip("\n")


class RPCOverloadWrapper():
    def __init__(self, rpc, cli=False, descriptors=False):
        self.rpc = rpc
        self.is_cli = cli
        self.descriptors = descriptors

    def __getattr__(self, name):
        return getattr(self.rpc, name)

    def createwallet(self, wallet_name, disable_private_keys=None, blank=None, passphrase=None, avoid_reuse=None, descriptors=None, load_on_startup=None):
        if self.is_cli:
            if disable_private_keys is None:
                disable_private_keys = 'null'
            if blank is None:
                blank = 'null'
            if passphrase is None:
                passphrase = ''
            if avoid_reuse is None:
                avoid_reuse = 'null'
            if load_on_startup is None:
                load_on_startup = 'null'
        if descriptors is None:
            descriptors = self.descriptors
        return self.__getattr__('createwallet')(wallet_name, disable_private_keys, blank, passphrase, avoid_reuse, descriptors, load_on_startup)

    def importprivkey(self, privkey, label=None, rescan=None):
        wallet_info = self.getwalletinfo()
        if self.is_cli:
            if label is None:
                label = 'null'
            if rescan is None:
                rescan = 'null'
        if 'descriptors' not in wallet_info or ('descriptors' in wallet_info and not wallet_info['descriptors']):
            return self.__getattr__('importprivkey')(privkey, label, rescan)
        desc = descsum_create('combo(' + privkey + ')')
        req = [{
            'desc': desc,
            'timestamp': 0 if rescan else 'now',
            'label': label if label else ''
        }]
        import_res = self.importdescriptors(req)
        if not import_res[0]['success']:
            raise JSONRPCException(import_res[0]['error'])

    def addmultisigaddress(self, nrequired, keys, label=None):
        wallet_info = self.getwalletinfo()
        if self.is_cli:
            if label is None:
                label = 'null'
        if 'descriptors' not in wallet_info or ('descriptors' in wallet_info and not wallet_info['descriptors']):
            return self.__getattr__('addmultisigaddress')(nrequired, keys, label)
        cms = self.createmultisig(nrequired, keys)
        req = [{
            'desc': cms['descriptor'],
            'timestamp': 0,
            'label': label if label else ''
        }]
        import_res = self.importdescriptors(req)
        if not import_res[0]['success']:
            raise JSONRPCException(import_res[0]['error'])
        return cms

    def importpubkey(self, pubkey, label=None, rescan=None):
        wallet_info = self.getwalletinfo()
        if self.is_cli:
            if label is None:
                label = 'null'
            if rescan is None:
                rescan = 'null'
        if 'descriptors' not in wallet_info or ('descriptors' in wallet_info and not wallet_info['descriptors']):
            return self.__getattr__('importpubkey')(pubkey, label, rescan)
        desc = descsum_create('combo(' + pubkey + ')')
        req = [{
            'desc': desc,
            'timestamp': 0 if rescan else 'now',
            'label': label if label else ''
        }]
        import_res = self.importdescriptors(req)
        if not import_res[0]['success']:
            raise JSONRPCException(import_res[0]['error'])

    def importaddress(self, address, label=None, rescan=None, p2sh=None):
        wallet_info = self.getwalletinfo()
        if self.is_cli:
            if label is None:
                label = 'null'
            if rescan is None:
                rescan = 'null'
            if p2sh is None:
                p2sh = 'null'
        if 'descriptors' not in wallet_info or ('descriptors' in wallet_info and not wallet_info['descriptors']):
            return self.__getattr__('importaddress')(address, label, rescan, p2sh)
        is_hex = False
        try:
            int(address, 16)
            is_hex = True
            desc = descsum_create('raw(' + address + ')')
        except ValueError:
            desc = descsum_create('addr(' + address + ')')
        reqs = [{
            'desc': desc,
            'timestamp': 0 if rescan else 'now',
            'label': label if label else ''
        }]
        if is_hex and p2sh:
            reqs.append({
                'desc': descsum_create('p2sh(raw(' + address + '))'),
                'timestamp': 0 if rescan else 'now',
                'label': label if label else ''
            })
        import_res = self.importdescriptors(reqs)
        for res in import_res:
            if not res['success']:
                raise JSONRPCException(res['error'])