Merge #6137: backport: merge bitcoin#21732, #21762, #21754, #21953, #21850, #22633, #22738, #23154, #23721, #24002, #24197, merge bitcoin-core/gui#399 (auxiliary backports: part 14)

1c5ea38c68 merge bitcoin#24197: Replace lock with thread safety annotation in CBlockTreeDB::LoadBlockIndexGuts() (Kittywhiskers Van Gogh)
e5e37458bb merge bitcoin#24002: add thread safety lock assertion to WriteBlockIndexDB() (Kittywhiskers Van Gogh)
04a3f65032 merge bitcoin#23721: Move restorewallet() logic to the wallet section (Kittywhiskers Van Gogh)
e47d5ac81e merge bitcoin#23154: add assumeutxo notes (Kittywhiskers Van Gogh)
847d866ff5 merge bitcoin#22738: fix failure in feature_nulldummy.py on single-core machines (Kittywhiskers Van Gogh)
ad96ef2d25 merge bitcoin#22633: Replace remaining binascii method calls (Kittywhiskers Van Gogh)
b37f609fd0 merge bitcoin-core/gui#399: Fix "Load PSBT" functionality when no wallet loaded (Kittywhiskers Van Gogh)
94173f14dd merge bitcoin#21850: Remove `GetDataDir(net_specific)` function (Kittywhiskers Van Gogh)
6264c7b7c7 merge bitcoin#21953: fuzz: Add utxo_snapshot target (Konstantin Akimov)
8b7ea28e80 merge bitcoin#21754: Run feature_cltv with MiniWallet (Kittywhiskers Van Gogh)
bd750140be merge bitcoin#21762: Speed up mempool_spend_coinbase.py (Kittywhiskers Van Gogh)
72eeb9a0d6 merge bitcoin#21732: Move common init code to init/common (Kittywhiskers Van Gogh)
3944d4ed96 chore: resolve nit from dash#6085 (blockstorage backports) (Kittywhiskers Van Gogh)
92509e2eee fix: don't suppress `-logtimestamps` help if `HAVE_THREAD_LOCAL` undef (Kittywhiskers Van Gogh)

Pull request description:

  ## Additional Information

  * Dependency for https://github.com/dashpay/dash/pull/6138

  * In [bitcoin#21754](https://github.com/bitcoin/bitcoin/pull/21754), the `scriptSig` padding multiplier (`24`) differs from upstream (`35`) because the resulting `vsize` must match what is ordinarily generated (`85` here vs. `96` upstream) in order to satisfy an assertion ([source](d9835515cc/test/functional/test_framework/wallet.py (L107))).

  * In [bitcoin#21953](https://github.com/bitcoin/bitcoin/pull/21953), the hash associated with height `200` was generated as follows (the same method used in [dash#5236](https://github.com/dashpay/dash/pull/5236)); a rough code sketch follows this list:
    * Add the desired height to the `CRegTestParams::m_assumeutxo_data` map with a garbage hash value (such as `uint256::ONE`). This avoids the unrecognized-metadata failure ([source](5211886fb4/src/validation.cpp (L5755-L5761))) raised when the snapshot's base height is not found in the map.
    * Change the `LogPrintf(..)` used for the serialized hash check error message ([here](5211886fb4/src/validation.cpp (L5876-L5880))) to `std::cout << strprintf(..)` so the computed hash is printed to the terminal.
    * Edit the value of `mineBlocks` [here](5211886fb4/src/test/validation_chainstatemanager_tests.cpp (L248-L253)) to be 100 blocks _less_ than the desired height.
    * Compile Dash Core and run `./src/test/test_dash -t validation_chainstatemanager_tests`
    * Take the `got` value printed to your terminal window/`stdout` (the `expected` value should be our garbage value from earlier, ignore that). That's your good hash.
    * Update the `CRegTestParams::m_assumeutxo_data` map entry with the correct hash, then revert every change _except_ the map entry (for obvious reasons) and the `mineBlocks` change.
      * Remember to add/update the hash [here](5211886fb4/src/test/validation_tests.cpp (L29-L31)) in `validation_tests`; it simply compares the hardcoded chainparams value against its own hardcoded value, which is also why we can't use this test to generate the hash: it would just regurgitate whatever garbage value we give it.
    * Compile and re-run the test. If it passes, your hash is good. Revert the `mineBlocks` change.
    * Profit?
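
  For illustration, the sketch below shows roughly what the temporary edits from the first two steps look like. It is a hedged sketch, not the actual patch: the entry shape mirrors the `src/chainparams.cpp` hunk included further down in this commit, `MapAssumeutxo` follows the upstream type alias, and `expected_hash`/`computed_hash` in the second snippet are placeholder names rather than the real identifiers in `validation.cpp`.

  ```cpp
  // Hypothetical, temporary edit to CRegTestParams (step 1): register height 200 with a
  // throwaway hash so the snapshot metadata is recognised. uint256::ONE is the garbage
  // placeholder suggested above.
  m_assumeutxo_data = MapAssumeutxo{
      {
          110,
          {AssumeutxoHash{uint256S("0x9b2a277a3e3b979f1a539d57e949495d7f8247312dbc32bce6619128c192b44b")}, 110},
      },
      {
          200,
          {AssumeutxoHash{uint256::ONE}, 200}, // garbage value, replaced once the test prints the real hash
      },
  };

  // Hypothetical version of the step-2 tweak (illustrative variable names): print the
  // mismatch to stdout instead of the debug log so the "got" hash is visible when
  // running ./src/test/test_dash -t validation_chainstatemanager_tests.
  std::cout << strprintf("[snapshot] bad snapshot content hash: expected %s, got %s\n",
                         expected_hash.ToString(), computed_hash.ToString());
  ```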

  ## Breaking Changes

  None expected.

  ## Checklist:

  - [x] I have performed a self-review of my own code
  - [x] I have commented my code, particularly in hard-to-understand areas **(note: N/A)**
  - [x] I have added or updated relevant unit/integration/functional/e2e tests
  - [x] I have made corresponding changes to the documentation **(note: N/A)**
  - [x] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_

ACKs for top commit:
  UdjinM6:
    utACK 1c5ea38c68
  PastaPastaPasta:
    utACK 1c5ea38c68

Tree-SHA512: 1ce0d4f1cef68990412e2e7046b36db7425059ee41b39e3681fa05d59fe24a0a74ad8c5d833c0e4c0686f693af665ca749e504b88ad30e708fc163045160aa58
Committed by pasta on 2024-07-23 13:55:02 -05:00 in commit ef2ad7654f.

77 changed files with 844 additions and 400 deletions

@@ -17,7 +17,6 @@ import datetime
 import time
 import glob
 from collections import namedtuple
-from binascii import unhexlify
 settings = {}
@@ -324,7 +323,7 @@ if __name__ == '__main__':
     settings['max_out_sz'] = int(settings['max_out_sz'])
     settings['split_timestamp'] = int(settings['split_timestamp'])
     settings['file_timestamp'] = int(settings['file_timestamp'])
-    settings['netmagic'] = unhexlify(settings['netmagic'].encode('utf-8'))
+    settings['netmagic'] = bytes.fromhex(settings['netmagic'])
     settings['out_of_order_cache_sz'] = int(settings['out_of_order_cache_sz'])
     settings['debug_output'] = settings['debug_output'].lower()


@@ -68,6 +68,7 @@ The Dash Core repo's [root README](/README.md) contains relevant information on
 ### Miscellaneous
 - [Assets Attribution](assets-attribution.md)
+- [Assumeutxo design](assumeutxo.md)
 - [dash.conf Configuration File](dash-conf.md)
 - [CJDNS Support](cjdns.md)
 - [Files](files.md)

doc/assumeutxo.md (new file, 138 lines)

@@ -0,0 +1,138 @@
# assumeutxo
Assumeutxo is a feature that allows fast bootstrapping of a validating dashd
instance with a very similar security model to assumevalid.
The RPC commands `dumptxoutset` and `loadtxoutset` are used to respectively generate
and load UTXO snapshots. The utility script `./contrib/devtools/utxo_snapshot.sh` may
be of use.
## General background
- [assumeutxo proposal](https://github.com/jamesob/assumeutxo-docs/tree/2019-04-proposal/proposal)
- [Github issue](https://github.com/bitcoin/bitcoin/issues/15605) (Bitcoin)
- [draft PR](https://github.com/bitcoin/bitcoin/pull/15606) (Bitcoin)
## Design notes
- A new block index `nStatus` flag is introduced, `BLOCK_ASSUMED_VALID`, to mark block
index entries that are required to be assumed-valid by a chainstate created
from a UTXO snapshot. This flag is mostly used as a way to modify certain
CheckBlockIndex() logic to account for index entries that are pending validation by a
chainstate running asynchronously in the background. We also use this flag to control
which index entries are added to setBlockIndexCandidates during LoadBlockIndex().
- Indexing implementations via BaseIndex can no longer assume that indexation happens
sequentially, since background validation chainstates can submit BlockConnected
events out of order with the active chain.
- The concept of UTXO snapshots is treated as an implementation detail that lives
behind the ChainstateManager interface. The external presentation of the changes
required to facilitate the use of UTXO snapshots is the understanding that there are
now certain regions of the chain that can be temporarily assumed to be valid (using
the nStatus flag mentioned above). In certain cases, e.g. wallet rescanning, this is
very similar to dealing with a pruned chain.
Logic outside ChainstateManager should try not to know about snapshots, instead
preferring to work in terms of more general states like assumed-valid.
## Chainstate phases
Chainstate within the system goes through a number of phases when UTXO snapshots are
used, as managed by `ChainstateManager`. At various points there can be multiple
`CChainState` objects in existence to facilitate both maintaining the network tip and
performing historical validation of the assumed-valid chain.
It is worth noting that though there are multiple separate chainstates, those
chainstates share use of a common block index (i.e. they hold the same `BlockManager`
reference).
The subheadings below outline the phases and the corresponding changes to chainstate
data.
### "Normal" operation via initial block download
`ChainstateManager` manages a single CChainState object, for which
`m_snapshot_blockhash` is null. This chainstate is (maybe obviously)
considered active. This is the "traditional" mode of operation for dashd.
| | |
| ---------- | ----------- |
| number of chainstates | 1 |
| active chainstate | ibd |
### User loads a UTXO snapshot via `loadtxoutset` RPC
`ChainstateManager` initializes a new chainstate (see `ActivateSnapshot()`) to load the
snapshot contents into. During snapshot load and validation (see
`PopulateAndValidateSnapshot()`), the new chainstate is not considered active and the
original chainstate remains in use as active.
| | |
| ---------- | ----------- |
| number of chainstates | 2 |
| active chainstate | ibd |
Once the snapshot chainstate is loaded and validated, it is promoted to active
chainstate and a sync to tip begins. A new chainstate directory is created in the
datadir for the snapshot chainstate called
`chainstate_[SHA256 blockhash of snapshot base block]`.
| | |
| ---------- | ----------- |
| number of chainstates | 2 |
| active chainstate | snapshot |
The snapshot begins to sync to tip from its base block, technically in parallel with
the original chainstate, but it is given priority during block download and is
allocated most of the cache (see `MaybeRebalanceCaches()` and usages) as our chief
consideration is getting to network tip.
**Failure consideration:** if shutdown happens at any point during this phase, both
chainstates will be detected during the next init and the process will resume.
### Snapshot chainstate hits network tip
Once the snapshot chainstate leaves IBD, caches are rebalanced
(via `MaybeRebalanceCaches()` in `ActivateBestChain()`) and more cache is given
to the background chainstate, which is responsible for doing full validation of the
assumed-valid parts of the chain.
**Note:** at this point, ValidationInterface callbacks will be coming in from both
chainstates. Considerations here must be made for indexing, which may no longer be happening
sequentially.
### Background chainstate hits snapshot base block
Once the tip of the background chainstate hits the base block of the snapshot
chainstate, we stop use of the background chainstate by setting `m_stop_use` (not yet
committed - see bitcoin#15606), in `CompleteSnapshotValidation()`, which is checked in
`ActivateBestChain()`. We hash the background chainstate's UTXO set contents and
ensure it matches the compiled value in `CMainParams::m_assumeutxo_data`.
The background chainstate data lingers on disk until shutdown, when in
`ChainstateManager::Reset()`, the background chainstate is cleaned up with
`ValidatedSnapshotShutdownCleanup()`, which renames the `chainstate_[hash]` datadir as
`chainstate`.
| | |
| ---------- | ----------- |
| number of chainstates | 2 (ibd has `m_stop_use=true`) |
| active chainstate | snapshot |
**Failure consideration:** if dashd unexpectedly halts after `m_stop_use` is set on
the background chainstate but before `CompleteSnapshotValidation()` can finish, the
need to complete snapshot validation will be detected on subsequent init by
`ChainstateManager::CheckForUncleanShutdown()`.
### Dashd restarts sometime after snapshot validation has completed
When dashd initializes again, what began as the snapshot chainstate is now
indistinguishable from a chainstate that has been built from the traditional IBD
process, and will be initialized as such.
| | |
| ---------- | ----------- |
| number of chainstates | 1 |
| active chainstate | ibd |


@@ -5,7 +5,6 @@
 from argparse import ArgumentParser
 from base64 import urlsafe_b64encode
-from binascii import hexlify
 from getpass import getpass
 from os import urandom
@@ -13,7 +12,7 @@ import hmac
 def generate_salt(size):
     """Create size byte hex salt"""
-    return hexlify(urandom(size)).decode()
+    return urandom(size).hex()
 def generate_password():
     """Create 32 byte b64 password"""


@@ -216,6 +216,7 @@ BITCOIN_CORE_H = \
   index/txindex.h \
   indirectmap.h \
   init.h \
+  init/common.h \
   interfaces/chain.h \
   interfaces/coinjoin.h \
   interfaces/handler.h \
@@ -731,6 +732,7 @@ libbitcoin_common_a_SOURCES = \
   core_write.cpp \
   deploymentinfo.cpp \
   governance/common.cpp \
+  init/common.cpp \
   key.cpp \
   key_io.cpp \
   merkleblock.cpp \


@@ -331,6 +331,7 @@ test_fuzz_fuzz_SOURCES = \
   test/fuzz/tx_in.cpp \
   test/fuzz/tx_out.cpp \
   test/fuzz/tx_pool.cpp \
+  test/fuzz/utxo_snapshot.cpp \
   test/fuzz/validation_load_mempool.cpp \
   test/fuzz/versionbits.cpp
 endif # ENABLE_FUZZ_BINARY


@@ -52,7 +52,7 @@ bool SerializeFileDB(const std::string& prefix, const fs::path& path, const Data
     std::string tmpfn = strprintf("%s.%04x", prefix, randv);
     // open temp output file, and associate with CAutoFile
-    fs::path pathTmp = GetDataDir() / tmpfn;
+    fs::path pathTmp = gArgs.GetDataDirNet() / tmpfn;
     FILE *file = fsbridge::fopen(pathTmp, "wb");
     CAutoFile fileout(file, SER_DISK, version);
     if (fileout.IsNull()) {
@@ -172,7 +172,7 @@ bool CBanDB::Read(banmap_t& banSet)
 bool DumpPeerAddresses(const ArgsManager& args, const AddrMan& addr)
 {
-    const auto pathAddr = GetDataDir() / "peers.dat";
+    const auto pathAddr = gArgs.GetDataDirNet() / "peers.dat";
     return SerializeFileDB("peers", pathAddr, addr, CLIENT_VERSION);
 }
@@ -187,7 +187,7 @@ std::optional<bilingual_str> LoadAddrman(const std::vector<bool>& asmap, const A
     addrman = std::make_unique<AddrMan>(asmap, /* deterministic */ false, /* consistency_check_ratio */ check_addrman);
     int64_t nStart = GetTimeMillis();
-    const auto path_addr{GetDataDir() / "peers.dat"};
+    const auto path_addr{gArgs.GetDataDirNet() / "peers.dat"};
     try {
         DeserializeFileDB(path_addr, *addrman, CLIENT_VERSION);
         LogPrintf("Loaded %i addresses from peers.dat %dms\n", addrman->size(), GetTimeMillis() - nStart);


@@ -901,8 +901,8 @@ public:
                 {AssumeutxoHash{uint256S("0x9b2a277a3e3b979f1a539d57e949495d7f8247312dbc32bce6619128c192b44b")}, 110},
             },
             {
-                210,
-                {AssumeutxoHash{uint256S("0xd4c97d32882583b057efc3dce673e44204851435e6ffcef20346e69cddc7c91e")}, 210},
+                200,
+                {AssumeutxoHash{uint256S("0x8a5bdd92252fc6b24663244bbe958c947bb036dc1f94ccd15439f48d8d1cb4e3")}, 200},
             },
         };


@@ -137,7 +137,7 @@ void CCoinJoinClientManager::ProcessMessage(CNode& peer, CChainState& active_cha
     if (!CCoinJoinClientOptions::IsEnabled()) return;
     if (!m_mn_sync.IsBlockchainSynced()) return;
-    if (!CheckDiskSpace(GetDataDir())) {
+    if (!CheckDiskSpace(gArgs.GetDataDirNet())) {
         ResetPool();
         StopMixing();
         WalletCJLogPrint(m_wallet, "CCoinJoinClientManager::ProcessMessage -- Not enough disk space, disabling CoinJoin.\n");
@@ -460,7 +460,7 @@ bool CCoinJoinClientSession::SendDenominate(const std::vector<std::pair<CTxDSIn,
         return false;
     }
-    if (!CheckDiskSpace(GetDataDir())) {
+    if (!CheckDiskSpace(gArgs.GetDataDirNet())) {
         UnlockCoins();
         keyHolderStorage.ReturnAll();
         WITH_LOCK(cs_coinjoin, SetNull());


@@ -32,7 +32,7 @@ void CEvoDBScopedCommitter::Rollback()
 }
 CEvoDB::CEvoDB(size_t nCacheSize, bool fMemory, bool fWipe) :
-    db(fMemory ? "" : (GetDataDir() / "evodb"), nCacheSize, fMemory, fWipe),
+    db(fMemory ? "" : (gArgs.GetDataDirNet() / "evodb"), nCacheSize, fMemory, fWipe),
     rootBatch(db),
     rootDBTransaction(db, rootBatch),
     curDBTransaction(rootDBTransaction, rootDBTransaction)


@@ -180,7 +180,7 @@ private:
 public:
     CFlatDB(std::string strFilenameIn, std::string strMagicMessageIn)
     {
-        pathDB = GetDataDir() / strFilenameIn;
+        pathDB = gArgs.GetDataDirNet() / strFilenameIn;
         strFilename = strFilenameIn;
         strMagicMessage = strMagicMessageIn;
     }


@@ -103,7 +103,7 @@ BlockFilterIndex::BlockFilterIndex(BlockFilterType filter_type,
     const std::string& filter_name = BlockFilterTypeName(filter_type);
     if (filter_name.empty()) throw std::invalid_argument("unknown filter_type");
-    fs::path path = GetDataDir() / "indexes" / "blockfilter" / filter_name;
+    fs::path path = gArgs.GetDataDirNet() / "indexes" / "blockfilter" / filter_name;
     fs::create_directories(path);
     m_name = filter_name + " block filter index";


@@ -98,7 +98,7 @@ std::unique_ptr<CoinStatsIndex> g_coin_stats_index;
 CoinStatsIndex::CoinStatsIndex(size_t n_cache_size, bool f_memory, bool f_wipe)
 {
-    fs::path path{GetDataDir() / "indexes" / "coinstats"};
+    fs::path path{gArgs.GetDataDirNet() / "indexes" / "coinstats"};
     fs::create_directories(path);
     m_db = std::make_unique<CoinStatsIndex::DB>(path / "db", n_cache_size, f_memory, f_wipe);


@@ -37,7 +37,7 @@ public:
 };
 TxIndex::DB::DB(size_t n_cache_size, bool f_memory, bool f_wipe) :
-    BaseIndex::DB(GetDataDir() / "indexes" / "txindex", n_cache_size, f_memory, f_wipe)
+    BaseIndex::DB(gArgs.GetDataDirNet() / "indexes" / "txindex", n_cache_size, f_memory, f_wipe)
 {}
 bool TxIndex::DB::ReadTxPos(const uint256 &txid, CDiskTxPos& pos) const


@@ -24,12 +24,12 @@
 #include <hash.h>
 #include <httpserver.h>
 #include <httprpc.h>
+#include <init/common.h>
 #include <interfaces/chain.h>
 #include <index/blockfilterindex.h>
 #include <index/coinstatsindex.h>
 #include <index/txindex.h>
 #include <interfaces/node.h>
-#include <key.h>
 #include <mapport.h>
 #include <miner.h>
 #include <net.h>
@@ -398,7 +398,7 @@ void Shutdown(NodeContext& node)
         PrepareShutdown(node);
     }
     // Shutdown part 2: delete wallet instance
-    ECC_Stop();
+    init::UnsetGlobals();
     node.mempool.reset();
     node.fee_estimator.reset();
     node.chainman.reset();
@@ -489,6 +489,8 @@ void SetupServerArgs(NodeContext& node)
     SetupHelpOptions(argsman);
     argsman.AddArg("-help-debug", "Print help message with debugging options and exit", ArgsManager::ALLOW_ANY, OptionsCategory::DEBUG_TEST);
+    init::AddLoggingArgs(argsman);
     const auto defaultBaseParams = CreateBaseChainParams(CBaseChainParams::MAIN);
     const auto testnetBaseParams = CreateBaseChainParams(CBaseChainParams::TESTNET);
     const auto regtestBaseParams = CreateBaseChainParams(CBaseChainParams::REGTEST);
@@ -523,7 +525,6 @@ void SetupServerArgs(NodeContext& node)
     argsman.AddArg("-datadir=<dir>", "Specify data directory", ArgsManager::ALLOW_ANY, OptionsCategory::OPTIONS);
     argsman.AddArg("-dbbatchsize", strprintf("Maximum database write batch size in bytes (default: %u)", nDefaultDbBatchSize), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::OPTIONS);
     argsman.AddArg("-dbcache=<n>", strprintf("Maximum database cache size <n> MiB (%d to %d, default: %d). In addition, unused mempool memory is shared for this cache (see -maxmempool).", nMinDbCache, nMaxDbCache, nDefaultDbCache), ArgsManager::ALLOW_ANY, OptionsCategory::OPTIONS);
-    argsman.AddArg("-debuglogfile=<file>", strprintf("Specify location of debug log file. Relative paths will be prefixed by a net-specific datadir location. (-nodebuglogfile to disable; default: %s)", DEFAULT_DEBUGLOGFILE), ArgsManager::ALLOW_ANY, OptionsCategory::OPTIONS);
     argsman.AddArg("-includeconf=<file>", "Specify additional configuration file, relative to the -datadir path (only useable from configuration file, not command line)", ArgsManager::ALLOW_ANY, OptionsCategory::OPTIONS);
     argsman.AddArg("-loadblock=<file>", "Imports blocks from external file on startup", ArgsManager::ALLOW_ANY, OptionsCategory::OPTIONS);
     argsman.AddArg("-maxmempool=<n>", strprintf("Keep the transaction memory pool below <n> megabytes (default: %u)", DEFAULT_MAX_MEMPOOL_SIZE), ArgsManager::ALLOW_ANY, OptionsCategory::OPTIONS);
@@ -714,27 +715,13 @@ void SetupServerArgs(NodeContext& node)
     argsman.AddArg("-watchquorums=<n>", strprintf("Watch and validate quorum communication (default: %u)", llmq::DEFAULT_WATCH_QUORUMS), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
     argsman.AddArg("-addrmantest", "Allows to test address relay on localhost", ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
     argsman.AddArg("-capturemessages", "Capture all P2P messages to disk", ArgsManager::ALLOW_BOOL | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
-    argsman.AddArg("-debug=<category>", "Output debugging information (default: -nodebug, supplying <category> is optional). "
-        "If <category> is not supplied or if <category> = 1, output all debugging information. <category> can be: " + LogInstance().LogCategoriesString() + ". This option can be specified multiple times to output multiple categories.", ArgsManager::ALLOW_ANY, OptionsCategory::DEBUG_TEST);
-    argsman.AddArg("-debugexclude=<category>", strprintf("Exclude debugging information for a category. Can be used in conjunction with -debug=1 to output debug logs for all categories except the specified category. This option can be specified multiple times to exclude multiple categories."), ArgsManager::ALLOW_ANY, OptionsCategory::DEBUG_TEST);
     argsman.AddArg("-disablegovernance", strprintf("Disable governance validation (0-1, default: %u)", 0), ArgsManager::ALLOW_ANY, OptionsCategory::DEBUG_TEST);
-    argsman.AddArg("-logips", strprintf("Include IP addresses in debug output (default: %u)", DEFAULT_LOGIPS), ArgsManager::ALLOW_ANY, OptionsCategory::DEBUG_TEST);
-    argsman.AddArg("-logsourcelocations", strprintf("Prepend debug output with name of the originating source location (source file, line number and function name) (default: %u)", DEFAULT_LOGSOURCELOCATIONS), ArgsManager::ALLOW_ANY, OptionsCategory::DEBUG_TEST);
-    argsman.AddArg("-logtimemicros", strprintf("Add microsecond precision to debug timestamps (default: %u)", DEFAULT_LOGTIMEMICROS), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
-#ifdef HAVE_THREAD_LOCAL
-    argsman.AddArg("-logtimestamps", strprintf("Prepend debug output with timestamp (default: %u)", DEFAULT_LOGTIMESTAMPS), ArgsManager::ALLOW_ANY, OptionsCategory::DEBUG_TEST);
-#else
-    hidden_args.emplace_back("-logthreadnames");
-#endif
-    argsman.AddArg("-logthreadnames", strprintf("Prepend debug output with name of the originating thread (only available on platforms supporting thread_local) (default: %u)", DEFAULT_LOGTHREADNAMES), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
     argsman.AddArg("-maxsigcachesize=<n>", strprintf("Limit sum of signature cache and script execution cache sizes to <n> MiB (default: %u)", DEFAULT_MAX_SIG_CACHE_SIZE), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
     argsman.AddArg("-maxtipage=<n>", strprintf("Maximum tip age in seconds to consider node in initial block download (default: %u)", DEFAULT_MAX_TIP_AGE), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
     argsman.AddArg("-mocktime=<n>", "Replace actual time with " + UNIX_EPOCH_TIME + "(default: 0)", ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
     argsman.AddArg("-minsporkkeys=<n>", "Overrides minimum spork signers to change spork value. Only useful for regtest and devnet. Using this on mainnet or testnet will ban you.", ArgsManager::ALLOW_ANY, OptionsCategory::DEBUG_TEST);
     argsman.AddArg("-printpriority", strprintf("Log transaction fee per kB when mining blocks (default: %u)", DEFAULT_PRINTPRIORITY), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
-    argsman.AddArg("-printtoconsole", "Send trace/debug info to console (default: 1 when no -daemon. To disable logging to file, set -nodebuglogfile)", ArgsManager::ALLOW_ANY, OptionsCategory::DEBUG_TEST);
     argsman.AddArg("-pushversion", "Protocol version to report to other nodes", ArgsManager::ALLOW_ANY, OptionsCategory::DEBUG_TEST);
-    argsman.AddArg("-shrinkdebugfile", "Shrink debug.log file on client startup (default: 1 when no -debug)", ArgsManager::ALLOW_ANY, OptionsCategory::DEBUG_TEST);
     argsman.AddArg("-sporkaddr=<dashaddress>", "Override spork address. Only useful for regtest and devnet. Using this on mainnet or testnet will ban you.", ArgsManager::ALLOW_ANY, OptionsCategory::DEBUG_TEST);
     argsman.AddArg("-sporkkey=<privatekey>", "Set the private key to be used for signing spork messages.", ArgsManager::ALLOW_ANY | ArgsManager::SENSITIVE, OptionsCategory::DEBUG_TEST);
     argsman.AddArg("-uacomment=<cmt>", "Append comment to the user agent string", ArgsManager::ALLOW_ANY, OptionsCategory::DEBUG_TEST);
@@ -893,31 +880,6 @@ static void PeriodicStats(ArgsManager& args, ChainstateManager& chainman, const
     }
 }
-/** Sanity checks
- *  Ensure that Dash Core is running in a usable environment with all
- *  necessary library support.
- */
-static bool InitSanityCheck()
-{
-    if (!ECC_InitSanityCheck()) {
-        return InitError(Untranslated("Elliptic curve cryptography sanity check failure. Aborting."));
-    }
-    if (!BLSInit()) {
-        return false;
-    }
-    if (!Random_SanityCheck()) {
-        return InitError(Untranslated("OS cryptographic RNG sanity check failure. Aborting."));
-    }
-    if (!ChronoSanityCheck()) {
-        return InitError(Untranslated("Clock epoch mismatch. Aborting."));
-    }
-    return true;
-}
 static bool AppInitServers(NodeContext& node)
 {
     const ArgsManager& args = *Assert(node.args);
@@ -1040,25 +1002,8 @@
 */
 void InitLogging(const ArgsManager& args)
 {
-    LogInstance().m_print_to_file = !args.IsArgNegated("-debuglogfile");
-    LogInstance().m_file_path = AbsPathForConfigVal(args.GetArg("-debuglogfile", DEFAULT_DEBUGLOGFILE));
-    LogInstance().m_print_to_console = args.GetBoolArg("-printtoconsole", !args.GetBoolArg("-daemon", false));
-    LogInstance().m_log_timestamps = args.GetBoolArg("-logtimestamps", DEFAULT_LOGTIMESTAMPS);
-    LogInstance().m_log_time_micros = args.GetBoolArg("-logtimemicros", DEFAULT_LOGTIMEMICROS);
-#ifdef HAVE_THREAD_LOCAL
-    LogInstance().m_log_threadnames = args.GetBoolArg("-logthreadnames", DEFAULT_LOGTHREADNAMES);
-#endif
-    LogInstance().m_log_sourcelocations = args.GetBoolArg("-logsourcelocations", DEFAULT_LOGSOURCELOCATIONS);
-    fLogIPs = args.GetBoolArg("-logips", DEFAULT_LOGIPS);
-    std::string version_string = FormatFullVersion();
-#ifdef DEBUG_CORE
-    version_string += " (debug build)";
-#else
-    version_string += " (release build)";
-#endif
-    LogPrintf(PACKAGE_NAME " version %s\n", version_string);
+    init::SetLoggingOptions(args);
+    init::LogPackageVersion();
 }
 namespace { // Variables internal to initialization process only
@@ -1248,26 +1193,7 @@ bool AppInitParameterInteraction(const ArgsManager& args)
         InitWarning(strprintf(_("Reducing -maxconnections from %d to %d, because of system limitations."), nUserMaxConnections, nMaxConnections));
     // ********************************************************* Step 3: parameter-to-internal-flags
-    if (args.IsArgSet("-debug")) {
-        // Special-case: if -debug=0/-nodebug is set, turn off debugging messages
-        const std::vector<std::string> categories = args.GetArgs("-debug");
-        if (std::none_of(categories.begin(), categories.end(),
-                         [](std::string cat){return cat == "0" || cat == "none";})) {
-            for (const auto& cat : categories) {
-                if (!LogInstance().EnableCategory(cat)) {
-                    InitWarning(strprintf(_("Unsupported logging category %s=%s."), "-debug", cat));
-                }
-            }
-        }
-    }
-    // Now remove the logging categories which were explicitly excluded
-    for (const std::string& cat : args.GetArgs("-debugexclude")) {
-        if (!LogInstance().DisableCategory(cat)) {
-            InitWarning(strprintf(_("Unsupported logging category %s=%s."), "-debugexclude", cat));
-        }
-    }
+    init::SetLoggingCategories(args);
     fCheckBlockIndex = args.GetBoolArg("-checkblockindex", chainparams.DefaultConsistencyChecks());
     fCheckpointsEnabled = args.GetBoolArg("-checkpoints", DEFAULT_CHECKPOINTS_ENABLED);
@@ -1441,7 +1367,7 @@ bool AppInitParameterInteraction(const ArgsManager& args)
 static bool LockDataDirectory(bool probeOnly)
 {
     // Make sure only a single Dash Core process is using the data directory.
-    fs::path datadir = GetDataDir();
+    fs::path datadir = gArgs.GetDataDirNet();
     if (!DirIsWritable(datadir)) {
         return InitError(strprintf(_("Cannot write to data directory '%s'; check permissions."), datadir.string()));
     }
@@ -1455,15 +1381,11 @@ bool AppInitSanityChecks()
 {
     // ********************************************************* Step 4: sanity checks
-    // Initialize elliptic curve code
-    std::string sha256_algo = SHA256AutoDetect();
-    LogPrintf("Using the '%s' SHA256 implementation\n", sha256_algo);
-    RandomInit();
-    ECC_Start();
-    // Sanity check
-    if (!InitSanityCheck())
-        return InitError(strprintf(_("Initialization sanity check failed. %s is shutting down."), PACKAGE_NAME));
+    init::SetGlobals();
+    if (!init::SanityChecks()) {
+        return InitError(strprintf(_("Initialization sanity check failed. %s is shutting down."), PACKAGE_NAME));
+    }
     // Probe the data directory lock to give an early error message, if possible
     // We cannot hold the data directory lock here, as the forking for daemon() hasn't yet happened,
@@ -1503,37 +1425,10 @@ bool AppInitMain(NodeContext& node, interfaces::BlockAndHeaderTipInfo* tip_info)
         // Detailed error printed inside CreatePidFile().
         return false;
     }
-    if (LogInstance().m_print_to_file) {
-        if (args.GetBoolArg("-shrinkdebugfile", LogInstance().DefaultShrinkDebugFile())) {
-            // Do this first since it both loads a bunch of debug.log into memory,
-            // and because this needs to happen before any other debug.log printing
-            LogInstance().ShrinkDebugFile();
-        }
-    }
-    if (!LogInstance().StartLogging()) {
-        return InitError(strprintf(Untranslated("Could not open debug log file %s"),
-                                   LogInstance().m_file_path.string()));
-    }
-    if (!LogInstance().m_log_timestamps)
-        LogPrintf("Startup time: %s\n", FormatISO8601DateTime(GetTime()));
-    LogPrintf("Default data directory %s\n", GetDefaultDataDir().string());
-    LogPrintf("Using data directory %s\n", GetDataDir().string());
-    // Only log conf file usage message if conf file actually exists.
-    fs::path config_file_path = GetConfigFile(args.GetArg("-conf", BITCOIN_CONF_FILENAME));
-    if (fs::exists(config_file_path)) {
-        LogPrintf("Config file: %s\n", config_file_path.string());
-    } else if (args.IsArgSet("-conf")) {
-        // Warn if no conf file exists at path provided by user
-        InitWarning(strprintf(_("The specified config file %s does not exist"), config_file_path.string()));
-    } else {
-        // Not categorizing as "Warning" because it's the default behavior
-        LogPrintf("Config file: %s (not found, skipping)\n", config_file_path.string());
-    }
-    // Log the config arguments to debug.log
-    args.LogArgs();
+    if (!init::StartLogging(args)) {
+        // Detailed error printed inside StartLogging().
+        return false;
+    }
     LogPrintf("Using at most %i automatic connections (%i file descriptors available)\n", nMaxConnections, nFD);
@@ -1636,7 +1531,7 @@ bool AppInitMain(NodeContext& node, interfaces::BlockAndHeaderTipInfo* tip_info)
             asmap_path = DEFAULT_ASMAP_FILENAME;
         }
         if (!asmap_path.is_absolute()) {
-            asmap_path = GetDataDir() / asmap_path;
+            asmap_path = gArgs.GetDataDirNet() / asmap_path;
         }
         if (!fs::exists(asmap_path)) {
             InitError(strprintf(_("Could not find asmap file %s"), asmap_path));
@@ -1660,7 +1555,7 @@ bool AppInitMain(NodeContext& node, interfaces::BlockAndHeaderTipInfo* tip_info)
     }
     assert(!node.banman);
-    node.banman = std::make_unique<BanMan>(GetDataDir() / "banlist", &uiInterface, args.GetArg("-bantime", DEFAULT_MISBEHAVING_BANTIME));
+    node.banman = std::make_unique<BanMan>(gArgs.GetDataDirNet() / "banlist", &uiInterface, args.GetArg("-bantime", DEFAULT_MISBEHAVING_BANTIME));
     assert(!node.connman);
     node.connman = std::make_unique<CConnman>(GetRand(std::numeric_limits<uint64_t>::max()), GetRand(std::numeric_limits<uint64_t>::max()), *node.addrman, args.GetBoolArg("-networkactive", true));
@@ -1878,7 +1773,7 @@ bool AppInitMain(NodeContext& node, interfaces::BlockAndHeaderTipInfo* tip_info)
     // ********************************************************* Step 7a: Load sporks
     if (!node.sporkman->LoadCache()) {
-        auto file_path = (GetDataDir() / "sporks.dat").string();
+        auto file_path = (gArgs.GetDataDirNet() / "sporks.dat").string();
         return InitError(strprintf(_("Failed to load sporks cache from %s"), file_path));
     }
@@ -2233,7 +2128,7 @@ bool AppInitMain(NodeContext& node, interfaces::BlockAndHeaderTipInfo* tip_info)
     bool fLoadCacheFiles = !(fReindex || fReindexChainState) && (chainman.ActiveChain().Tip() != nullptr);
     if (!node.netfulfilledman->LoadCache(fLoadCacheFiles)) {
-        auto file_path = (GetDataDir() / "netfulfilled.dat").string();
+        auto file_path = (gArgs.GetDataDirNet() / "netfulfilled.dat").string();
         if (fLoadCacheFiles) {
             return InitError(strprintf(_("Failed to load fulfilled requests cache from %s"), file_path));
         }
@@ -2241,7 +2136,7 @@ bool AppInitMain(NodeContext& node, interfaces::BlockAndHeaderTipInfo* tip_info)
     }
     if (!node.mn_metaman->LoadCache(fLoadCacheFiles)) {
-        auto file_path = (GetDataDir() / "mncache.dat").string();
+        auto file_path = (gArgs.GetDataDirNet() / "mncache.dat").string();
         if (fLoadCacheFiles) {
            return InitError(strprintf(_("Failed to load masternode cache from %s"), file_path));
         }
@@ -2250,7 +2145,7 @@ bool AppInitMain(NodeContext& node, interfaces::BlockAndHeaderTipInfo* tip_info)
     if (is_governance_enabled) {
         if (!node.govman->LoadCache(fLoadCacheFiles)) {
-            auto file_path = (GetDataDir() / "governance.dat").string();
+            auto file_path = (gArgs.GetDataDirNet() / "governance.dat").string();
             if (fLoadCacheFiles) {
                 return InitError(strprintf(_("Failed to load governance cache from %s"), file_path));
             }
@@ -2347,8 +2242,8 @@ bool AppInitMain(NodeContext& node, interfaces::BlockAndHeaderTipInfo* tip_info)
     // ********************************************************* Step 11: import blocks
-    if (!CheckDiskSpace(GetDataDir())) {
-        InitError(strprintf(_("Error: Disk space is low for %s"), GetDataDir()));
+    if (!CheckDiskSpace(gArgs.GetDataDirNet())) {
+        InitError(strprintf(_("Error: Disk space is low for %s"), gArgs.GetDataDirNet()));
         return false;
     }
     if (!CheckDiskSpace(gArgs.GetBlocksDirPath())) {

src/init/common.cpp (new file, 164 lines)

@@ -0,0 +1,164 @@
// Copyright (c) 2021 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#if defined(HAVE_CONFIG_H)
#include <config/bitcoin-config.h>
#endif
#include <bls/bls.h>
#include <clientversion.h>
#include <crypto/sha256.h>
#include <key.h>
#include <logging.h>
#include <node/ui_interface.h>
#include <pubkey.h>
#include <random.h>
#include <util/system.h>
#include <util/time.h>
#include <util/translation.h>
#include <memory>
namespace init {
void SetGlobals()
{
std::string sha256_algo = SHA256AutoDetect();
LogPrintf("Using the '%s' SHA256 implementation\n", sha256_algo);
RandomInit();
ECC_Start();
}
void UnsetGlobals()
{
ECC_Stop();
}
bool SanityChecks()
{
if (!ECC_InitSanityCheck()) {
return InitError(Untranslated("Elliptic curve cryptography sanity check failure. Aborting."));
}
if (!BLSInit()) {
return false;
}
if (!Random_SanityCheck()) {
return InitError(Untranslated("OS cryptographic RNG sanity check failure. Aborting."));
}
if (!ChronoSanityCheck()) {
return InitError(Untranslated("Clock epoch mismatch. Aborting."));
}
return true;
}
void AddLoggingArgs(ArgsManager& argsman)
{
argsman.AddArg("-debuglogfile=<file>", strprintf("Specify location of debug log file. Relative paths will be prefixed by a net-specific datadir location. (-nodebuglogfile to disable; default: %s)", DEFAULT_DEBUGLOGFILE), ArgsManager::ALLOW_ANY, OptionsCategory::OPTIONS);
argsman.AddArg("-debug=<category>", "Output debugging information (default: -nodebug, supplying <category> is optional). "
"If <category> is not supplied or if <category> = 1, output all debugging information. <category> can be: " + LogInstance().LogCategoriesString() + ". This option can be specified multiple times to output multiple categories.",
ArgsManager::ALLOW_ANY, OptionsCategory::DEBUG_TEST);
argsman.AddArg("-debugexclude=<category>", strprintf("Exclude debugging information for a category. Can be used in conjunction with -debug=1 to output debug logs for all categories except the specified category. This option can be specified multiple times to exclude multiple categories."), ArgsManager::ALLOW_ANY, OptionsCategory::DEBUG_TEST);
argsman.AddArg("-logips", strprintf("Include IP addresses in debug output (default: %u)", DEFAULT_LOGIPS), ArgsManager::ALLOW_ANY, OptionsCategory::DEBUG_TEST);
argsman.AddArg("-logtimestamps", strprintf("Prepend debug output with timestamp (default: %u)", DEFAULT_LOGTIMESTAMPS), ArgsManager::ALLOW_ANY, OptionsCategory::DEBUG_TEST);
#ifdef HAVE_THREAD_LOCAL
argsman.AddArg("-logthreadnames", strprintf("Prepend debug output with name of the originating thread (only available on platforms supporting thread_local) (default: %u)", DEFAULT_LOGTHREADNAMES), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
#else
argsman.AddHiddenArgs({"-logthreadnames"});
#endif
argsman.AddArg("-logsourcelocations", strprintf("Prepend debug output with name of the originating source location (source file, line number and function name) (default: %u)", DEFAULT_LOGSOURCELOCATIONS), ArgsManager::ALLOW_ANY, OptionsCategory::DEBUG_TEST);
argsman.AddArg("-logtimemicros", strprintf("Add microsecond precision to debug timestamps (default: %u)", DEFAULT_LOGTIMEMICROS), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
argsman.AddArg("-printtoconsole", "Send trace/debug info to console (default: 1 when no -daemon. To disable logging to file, set -nodebuglogfile)", ArgsManager::ALLOW_ANY, OptionsCategory::DEBUG_TEST);
argsman.AddArg("-shrinkdebugfile", "Shrink debug.log file on client startup (default: 1 when no -debug)", ArgsManager::ALLOW_ANY, OptionsCategory::DEBUG_TEST);
}
void SetLoggingOptions(const ArgsManager& args)
{
LogInstance().m_print_to_file = !args.IsArgNegated("-debuglogfile");
LogInstance().m_file_path = AbsPathForConfigVal(args.GetArg("-debuglogfile", DEFAULT_DEBUGLOGFILE));
LogInstance().m_print_to_console = args.GetBoolArg("-printtoconsole", !args.GetBoolArg("-daemon", false));
LogInstance().m_log_timestamps = args.GetBoolArg("-logtimestamps", DEFAULT_LOGTIMESTAMPS);
LogInstance().m_log_time_micros = args.GetBoolArg("-logtimemicros", DEFAULT_LOGTIMEMICROS);
#ifdef HAVE_THREAD_LOCAL
LogInstance().m_log_threadnames = args.GetBoolArg("-logthreadnames", DEFAULT_LOGTHREADNAMES);
#endif
LogInstance().m_log_sourcelocations = args.GetBoolArg("-logsourcelocations", DEFAULT_LOGSOURCELOCATIONS);
fLogIPs = args.GetBoolArg("-logips", DEFAULT_LOGIPS);
}
void SetLoggingCategories(const ArgsManager& args)
{
if (args.IsArgSet("-debug")) {
// Special-case: if -debug=0/-nodebug is set, turn off debugging messages
const std::vector<std::string> categories = args.GetArgs("-debug");
if (std::none_of(categories.begin(), categories.end(),
[](std::string cat){return cat == "0" || cat == "none";})) {
for (const auto& cat : categories) {
if (!LogInstance().EnableCategory(cat)) {
InitWarning(strprintf(_("Unsupported logging category %s=%s."), "-debug", cat));
}
}
}
}
// Now remove the logging categories which were explicitly excluded
for (const std::string& cat : args.GetArgs("-debugexclude")) {
if (!LogInstance().DisableCategory(cat)) {
InitWarning(strprintf(_("Unsupported logging category %s=%s."), "-debugexclude", cat));
}
}
}
bool StartLogging(const ArgsManager& args)
{
if (LogInstance().m_print_to_file) {
if (args.GetBoolArg("-shrinkdebugfile", LogInstance().DefaultShrinkDebugFile())) {
// Do this first since it both loads a bunch of debug.log into memory,
// and because this needs to happen before any other debug.log printing
LogInstance().ShrinkDebugFile();
}
}
if (!LogInstance().StartLogging()) {
return InitError(strprintf(Untranslated("Could not open debug log file %s"),
LogInstance().m_file_path.string()));
}
if (!LogInstance().m_log_timestamps)
LogPrintf("Startup time: %s\n", FormatISO8601DateTime(GetTime()));
LogPrintf("Default data directory %s\n", GetDefaultDataDir().string());
LogPrintf("Using data directory %s\n", gArgs.GetDataDirNet().string());
// Only log conf file usage message if conf file actually exists.
fs::path config_file_path = GetConfigFile(args.GetArg("-conf", BITCOIN_CONF_FILENAME));
if (fs::exists(config_file_path)) {
LogPrintf("Config file: %s\n", config_file_path.string());
} else if (args.IsArgSet("-conf")) {
// Warn if no conf file exists at path provided by user
InitWarning(strprintf(_("The specified config file %s does not exist"), config_file_path.string()));
} else {
// Not categorizing as "Warning" because it's the default behavior
LogPrintf("Config file: %s (not found, skipping)\n", config_file_path.string());
}
// Log the config arguments to debug.log
args.LogArgs();
return true;
}
void LogPackageVersion()
{
std::string version_string = FormatFullVersion();
#ifdef DEBUG_CORE
version_string += " (debug build)";
#else
version_string += " (release build)";
#endif
LogPrintf(PACKAGE_NAME " version %s\n", version_string);
}
} // namespace init

src/init/common.h (new file, 28 lines)

@@ -0,0 +1,28 @@
// Copyright (c) 2021 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
//! @file
//! @brief Common init functions shared by bitcoin-node, bitcoin-wallet, etc.
#ifndef BITCOIN_INIT_COMMON_H
#define BITCOIN_INIT_COMMON_H
class ArgsManager;
namespace init {
void SetGlobals();
void UnsetGlobals();
/**
* Ensure a usable environment with all
* necessary library support.
*/
bool SanityChecks();
void AddLoggingArgs(ArgsManager& args);
void SetLoggingOptions(const ArgsManager& args);
void SetLoggingCategories(const ArgsManager& args);
bool StartLogging(const ArgsManager& args);
void LogPackageVersion();
} // namespace init
#endif // BITCOIN_INIT_COMMON_H


@@ -346,6 +346,9 @@ public:
     //! Return default wallet directory.
     virtual std::string getWalletDir() = 0;
+    //! Restore backup wallet
+    virtual std::unique_ptr<Wallet> restoreWallet(const std::string& backup_file, const std::string& wallet_name, bilingual_str& error, std::vector<bilingual_str>& warnings) = 0;
     //! Return available wallets in wallet directory.
     virtual std::vector<std::string> listWalletDir() = 0;


@@ -43,7 +43,7 @@ LLMQContext::LLMQContext(CChainState& chainstate, CConnman& connman, CDeterminis
 {
     // NOTE: we use this only to wipe the old db, do NOT use it for anything else
     // TODO: remove it in some future version
-    auto llmqDbTmp = std::make_unique<CDBWrapper>(unit_tests ? "" : (GetDataDir() / "llmq"), 1 << 20, unit_tests, true);
+    auto llmqDbTmp = std::make_unique<CDBWrapper>(unit_tests ? "" : (gArgs.GetDataDirNet() / "llmq"), 1 << 20, unit_tests, true);
 }
 LLMQContext::~LLMQContext() {


@@ -28,7 +28,7 @@ CDKGSessionManager::CDKGSessionManager(CBLSWorker& _blsWorker, CChainState& chai
                                        CDKGDebugManager& _dkgDebugManager, CMasternodeMetaMan& mn_metaman, CQuorumBlockProcessor& _quorumBlockProcessor,
                                        const CActiveMasternodeManager* const mn_activeman, const CSporkManager& sporkman,
                                        const std::unique_ptr<PeerManager>& peerman, bool unitTests, bool fWipe) :
-    db(std::make_unique<CDBWrapper>(unitTests ? "" : (GetDataDir() / "llmq/dkgdb"), 1 << 20, unitTests, fWipe)),
+    db(std::make_unique<CDBWrapper>(unitTests ? "" : (gArgs.GetDataDirNet() / "llmq/dkgdb"), 1 << 20, unitTests, fWipe)),
     blsWorker(_blsWorker),
     m_chainstate(chainstate),
     connman(_connman),
@@ -63,7 +63,7 @@ void CDKGSessionManager::MigrateDKG()
     LogPrint(BCLog::LLMQ, "CDKGSessionManager::%d -- start\n", __func__);
     CDBBatch batch(*db);
-    auto oldDb = std::make_unique<CDBWrapper>(GetDataDir() / "llmq", 8 << 20);
+    auto oldDb = std::make_unique<CDBWrapper>(gArgs.GetDataDirNet() / "llmq", 8 << 20);
     std::unique_ptr<CDBIterator> pcursor(oldDb->NewIterator());
     auto start_vvec = std::make_tuple(DB_VVEC, (Consensus::LLMQType)0, uint256(), uint256());


@@ -55,7 +55,7 @@ uint256 CInstantSendLock::GetRequestId() const
 CInstantSendDb::CInstantSendDb(bool unitTests, bool fWipe) :
-    db(std::make_unique<CDBWrapper>(unitTests ? "" : (GetDataDir() / "llmq/isdb"), 32 << 20, unitTests, fWipe))
+    db(std::make_unique<CDBWrapper>(unitTests ? "" : (gArgs.GetDataDirNet() / "llmq/isdb"), 32 << 20, unitTests, fWipe))
 {
 }


@@ -44,7 +44,7 @@ UniValue CRecoveredSig::ToJson() const
 CRecoveredSigsDb::CRecoveredSigsDb(bool fMemory, bool fWipe) :
-    db(std::make_unique<CDBWrapper>(fMemory ? "" : (GetDataDir() / "llmq/recsigdb"), 8 << 20, fMemory, fWipe))
+    db(std::make_unique<CDBWrapper>(fMemory ? "" : (gArgs.GetDataDirNet() / "llmq/recsigdb"), 8 << 20, fMemory, fWipe))
 {
     MigrateRecoveredSigs();
 }
@@ -58,7 +58,7 @@ void CRecoveredSigsDb::MigrateRecoveredSigs()
     LogPrint(BCLog::LLMQ, "CRecoveredSigsDb::%d -- start\n", __func__);
     CDBBatch batch(*db);
-    auto oldDb = std::make_unique<CDBWrapper>(GetDataDir() / "llmq", 8 << 20);
+    auto oldDb = std::make_unique<CDBWrapper>(gArgs.GetDataDirNet() / "llmq", 8 << 20);
     std::unique_ptr<CDBIterator> pcursor(oldDb->NewIterator());
     auto start_h = std::make_tuple(std::string("rs_h"), uint256());


@@ -3469,7 +3469,7 @@ bool CConnman::Start(CDeterministicMNManager& dmnman, CMasternodeMetaMan& mn_met
     Proxy i2p_sam;
     if (GetProxy(NET_I2P, i2p_sam) && connOptions.m_i2p_accept_incoming) {
-        m_i2p_sam_session = std::make_unique<i2p::sam::Session>(GetDataDir() / "i2p_private_key",
+        m_i2p_sam_session = std::make_unique<i2p::sam::Session>(gArgs.GetDataDirNet() / "i2p_private_key",
                                                                 i2p_sam.proxy, &interruptNet);
     }
@@ -3479,7 +3479,7 @@
     if (m_use_addrman_outgoing) {
         // Load addresses from anchors.dat
-        m_anchors = ReadAnchors(GetDataDir() / ANCHORS_DATABASE_FILENAME);
+        m_anchors = ReadAnchors(gArgs.GetDataDirNet() / ANCHORS_DATABASE_FILENAME);
         if (m_anchors.size() > MAX_BLOCK_RELAY_ONLY_ANCHORS) {
             m_anchors.resize(MAX_BLOCK_RELAY_ONLY_ANCHORS);
         }
@@ -3642,7 +3642,7 @@ void CConnman::StopNodes()
         if (anchors_to_dump.size() > MAX_BLOCK_RELAY_ONLY_ANCHORS) {
             anchors_to_dump.resize(MAX_BLOCK_RELAY_ONLY_ANCHORS);
         }
-        DumpAnchors(GetDataDir() / ANCHORS_DATABASE_FILENAME, anchors_to_dump);
+        DumpAnchors(gArgs.GetDataDirNet() / ANCHORS_DATABASE_FILENAME, anchors_to_dump);
     }
 }
@@ -4313,7 +4313,7 @@ void CaptureMessageToFile(const CAddress& addr,
     std::string clean_addr = addr.ToString();
     std::replace(clean_addr.begin(), clean_addr.end(), ':', '_');
-    fs::path base_path = GetDataDir() / "message_capture" / clean_addr;
+    fs::path base_path = gArgs.GetDataDirNet() / "message_capture" / clean_addr;
     fs::create_directories(base_path);
     fs::path path = base_path / (is_incoming ? "msgs_recv.dat" : "msgs_sent.dat");


@@ -331,6 +331,7 @@ void BlockManager::Unload()
 bool BlockManager::WriteBlockIndexDB()
 {
+AssertLockHeld(::cs_main);
 std::vector<std::pair<int, const CBlockFileInfo*>> vFiles;
 vFiles.reserve(m_dirty_fileinfo.size());
 for (std::set<int>::iterator it = m_dirty_fileinfo.begin(); it != m_dirty_fileinfo.end();) {
@@ -616,7 +617,6 @@ fs::path GetBlockPosFilename(const FlatFilePos& pos)
 return BlockFileSeq().FileName(pos);
 }
-// TODO move to blockstorage
 bool BlockManager::FindBlockPos(FlatFilePos& pos, unsigned int nAddSize, unsigned int nHeight, CChain& active_chain, uint64_t nTime, bool fKnown)
 {
 LOCK(cs_LastBlockFile);


@@ -530,7 +530,7 @@ CBlockPolicyEstimator::CBlockPolicyEstimator()
 longStats = std::make_unique<TxConfirmStats>(buckets, bucketMap, LONG_BLOCK_PERIODS, LONG_DECAY, LONG_SCALE);
 // If the fee estimation file is present, read recorded estimations
-fs::path est_filepath = GetDataDir() / FEE_ESTIMATES_FILENAME;
+fs::path est_filepath = gArgs.GetDataDirNet() / FEE_ESTIMATES_FILENAME;
 CAutoFile est_file(fsbridge::fopen(est_filepath, "rb"), SER_DISK, CLIENT_VERSION);
 if (est_file.IsNull() || !Read(est_file)) {
 LogPrintf("Failed to read fee estimates from %s. Continue anyway.\n", est_filepath.string());
@@ -890,7 +890,7 @@ CFeeRate CBlockPolicyEstimator::estimateSmartFee(int confTarget, FeeCalculation
 void CBlockPolicyEstimator::Flush() {
 FlushUnconfirmed();
-fs::path est_filepath = GetDataDir() / FEE_ESTIMATES_FILENAME;
+fs::path est_filepath = gArgs.GetDataDirNet() / FEE_ESTIMATES_FILENAME;
 CAutoFile est_file(fsbridge::fopen(est_filepath, "wb"), SER_DISK, CLIENT_VERSION);
 if (est_file.IsNull() || !Write(est_file)) {
 LogPrintf("Failed to write fee estimates to %s. Continue anyway.\n", est_filepath.string());


@@ -631,7 +631,7 @@ int GuiMain(int argc, char* argv[])
 if (!Intro::showIfNeeded(did_show_intro, prune_MiB)) return EXIT_SUCCESS;
 /// 6. Determine availability of data directory and parse dash.conf
-/// - Do not call GetDataDir(true) before this step finishes
+/// - Do not call gArgs.GetDataDirNet() before this step finishes
 if (!CheckDataDirOption()) {
 InitError(strprintf(Untranslated("Specified data directory \"%s\" does not exist.\n"), gArgs.GetArg("-datadir", "")));
 QMessageBox::critical(nullptr, PACKAGE_NAME,


@@ -111,6 +111,9 @@ BitcoinGUI::BitcoinGUI(interfaces::Node& node, const NetworkStyle* networkStyle,
 {
 /** Create wallet frame*/
 walletFrame = new WalletFrame(this);
+connect(walletFrame, &WalletFrame::message, [this](const QString& title, const QString& message, unsigned int style) {
+this->message(title, message, style);
+});
 } else
 #endif // ENABLE_WALLET
 {


@@ -246,7 +246,7 @@ QString ClientModel::formatClientStartupTime() const
 QString ClientModel::dataDir() const
 {
-return GUIUtil::PathToQString(GetDataDir());
+return GUIUtil::PathToQString(gArgs.GetDataDirNet());
 }
 QString ClientModel::blocksDir() const


@@ -634,7 +634,7 @@ void handleCloseWindowShortcut(QWidget* w)
 void openDebugLogfile()
 {
-fs::path pathDebug = GetDataDir() / "debug.log";
+fs::path pathDebug = gArgs.GetDataDirNet() / "debug.log";
 /* Open debug.log with the associated application */
 if (fs::exists(pathDebug))


@@ -45,7 +45,7 @@ public:
 * @returns true if a data directory was selected, false if the user cancelled the selection
 * dialog.
 *
-* @note do NOT call global GetDataDir() before calling this function, this
+* @note do NOT call global gArgs.GetDataDirNet() before calling this function, this
 * will cause the wrong path to be cached.
 */
 static bool showIfNeeded(bool& did_show_intro, int64_t& prune_MiB);


@@ -326,7 +326,7 @@ void OptionsModel::Reset()
 QSettings settings;
 // Backup old settings to chain-specific datadir for troubleshooting
-BackupSettings(GetDataDir(true) / "guisettings.ini.bak", settings);
+BackupSettings(gArgs.GetDataDirNet() / "guisettings.ini.bak", settings);
 // Save the strDataDir setting
 QString dataDir = GUIUtil::getDefaultDataDirectory();


@@ -50,9 +50,9 @@ static QString ipcServerName()
 QString name("DashQt");
 // Append a simple hash of the datadir
-// Note that GetDataDir(true) returns a different path
+// Note that gArgs.GetDataDirNet() returns a different path
 // for -testnet versus main net
-QString ddir(GUIUtil::PathToQString(GetDataDir(true)));
+QString ddir(GUIUtil::PathToQString(gArgs.GetDataDirNet()));
 name.append(QString::number(qHash(ddir)));
 return name;


@@ -47,18 +47,22 @@ void PSBTOperationsDialog::openWithPSBT(PartiallySignedTransaction psbtx)
 {
 m_transaction_data = psbtx;
-bool complete;
+bool complete = FinalizePSBT(psbtx); // Make sure all existing signatures are fully combined before checking for completeness.
+if (m_wallet_model) {
 size_t n_could_sign;
-FinalizePSBT(psbtx); // Make sure all existing signatures are fully combined before checking for completeness.
 TransactionError err = m_wallet_model->wallet().fillPSBT(SIGHASH_ALL, false /* sign */, true /* bip32derivs */, &n_could_sign, m_transaction_data, complete);
 if (err != TransactionError::OK) {
 showStatus(tr("Failed to load transaction: %1")
-.arg(QString::fromStdString(TransactionErrorString(err).translated)), StatusLevel::ERR);
+.arg(QString::fromStdString(TransactionErrorString(err).translated)),
+StatusLevel::ERR);
 return;
 }
+m_ui->signTransactionButton->setEnabled(!complete && !m_wallet_model->wallet().privateKeysDisabled() && n_could_sign > 0);
+} else {
+m_ui->signTransactionButton->setEnabled(false);
+}
 m_ui->broadcastTransactionButton->setEnabled(complete);
-m_ui->signTransactionButton->setEnabled(!complete && !m_wallet_model->wallet().privateKeysDisabled() && n_could_sign > 0);
 updateTransactionDisplay();
 }
@@ -133,7 +137,7 @@ void PSBTOperationsDialog::saveTransaction() {
 }
 CTxDestination address;
 ExtractDestination(out.scriptPubKey, address);
-QString amount = BitcoinUnits::format(m_wallet_model->getOptionsModel()->getDisplayUnit(), out.nValue);
+QString amount = BitcoinUnits::format(m_client_model->getOptionsModel()->getDisplayUnit(), out.nValue);
 QString address_str = QString::fromStdString(EncodeDestination(address));
 filename_suggestion.append(address_str + "-" + amount);
 first = false;
@@ -224,6 +228,10 @@ void PSBTOperationsDialog::showStatus(const QString &msg, StatusLevel level) {
 }
 size_t PSBTOperationsDialog::couldSignInputs(const PartiallySignedTransaction &psbtx) {
+if (!m_wallet_model) {
+return 0;
+}
 size_t n_signed;
 bool complete;
 TransactionError err = m_wallet_model->wallet().fillPSBT(SIGHASH_ALL, false /* sign */, false /* bip32derivs */, &n_signed, m_transaction_data, complete);
@@ -246,7 +254,10 @@ void PSBTOperationsDialog::showTransactionStatus(const PartiallySignedTransactio
 case PSBTRole::SIGNER: {
 QString need_sig_text = tr("Transaction still needs signature(s).");
 StatusLevel level = StatusLevel::INFO;
-if (m_wallet_model->wallet().privateKeysDisabled()) {
+if (!m_wallet_model) {
+need_sig_text += " " + tr("(But no wallet is loaded.)");
+level = StatusLevel::WARN;
+} else if (m_wallet_model->wallet().privateKeysDisabled()) {
 need_sig_text += " " + tr("(But this wallet cannot sign transactions.)");
 level = StatusLevel::WARN;
 } else if (n_could_sign < 1) {


@@ -65,7 +65,7 @@ void AppTests::appTests()
 fs::create_directories([] {
 BasicTestingSetup test{CBaseChainParams::REGTEST}; // Create a temp data directory to backup the gui settings to
-return GetDataDir() / "blocks";
+return gArgs.GetDataDirNet() / "blocks";
 }());
 qRegisterMetaType<interfaces::BlockAndHeaderTipInfo>("interfaces::BlockAndHeaderTipInfo");


@@ -4,11 +4,15 @@
 #include <qt/walletframe.h>
+#include <node/ui_interface.h>
+#include <psbt.h>
 #include <qt/bitcoingui.h>
 #include <qt/createwalletdialog.h>
 #include <qt/governancelist.h>
+#include <qt/guiutil.h>
 #include <qt/masternodelist.h>
 #include <qt/overviewpage.h>
+#include <qt/psbtoperationsdialog.h>
 #include <qt/walletcontroller.h>
 #include <qt/walletmodel.h>
 #include <qt/walletview.h>
@@ -16,6 +20,8 @@
 #include <cassert>
+#include <QApplication>
+#include <QClipboard>
 #include <QHBoxLayout>
 #include <QLabel>
 #include <QPushButton>
@@ -245,10 +251,40 @@ void WalletFrame::gotoVerifyMessageTab(QString addr)
 void WalletFrame::gotoLoadPSBT(bool from_clipboard)
 {
-WalletView *walletView = currentWalletView();
-if (walletView) {
-walletView->gotoLoadPSBT(from_clipboard);
-}
+std::string data;
+if (from_clipboard) {
+std::string raw = QApplication::clipboard()->text().toStdString();
+bool invalid;
+data = DecodeBase64(raw, &invalid);
+if (invalid) {
+Q_EMIT message(tr("Error"), tr("Unable to decode PSBT from clipboard (invalid base64)"), CClientUIInterface::MSG_ERROR);
+return;
+}
+} else {
+QString filename = GUIUtil::getOpenFileName(this,
+tr("Load Transaction Data"), QString(),
+tr("Partially Signed Transaction (*.psbt)"), nullptr);
+if (filename.isEmpty()) return;
+if (GetFileSize(filename.toLocal8Bit().data(), MAX_FILE_SIZE_PSBT) == MAX_FILE_SIZE_PSBT) {
+Q_EMIT message(tr("Error"), tr("PSBT file must be smaller than 100 MiB"), CClientUIInterface::MSG_ERROR);
+return;
+}
+std::ifstream in(filename.toLocal8Bit().data(), std::ios::binary);
+data = std::string(std::istreambuf_iterator<char>{in}, {});
+}
+std::string error;
+PartiallySignedTransaction psbtx;
+if (!DecodeRawPSBT(psbtx, data, error)) {
+Q_EMIT message(tr("Error"), tr("Unable to decode PSBT") + "\n" + QString::fromStdString(error), CClientUIInterface::MSG_ERROR);
+return;
+}
+PSBTOperationsDialog* dlg = new PSBTOperationsDialog(this, currentWalletModel(), clientModel);
+dlg->openWithPSBT(psbtx);
+dlg->setAttribute(Qt::WA_DeleteOnClose);
+dlg->exec();
 }
 void WalletFrame::encryptWallet()


@@ -50,6 +50,7 @@ public:
 QSize sizeHint() const override { return m_size_hint; }
 Q_SIGNALS:
+void message(const QString& title, const QString& message, unsigned int style);
 /** Notify that the user has requested more information about the out-of-sync warning */
 void requestedSyncWarningInfo();


@@ -11,7 +11,6 @@
 #include <qt/askpassphrasedialog.h>
 #include <qt/clientmodel.h>
 #include <qt/guiutil.h>
-#include <qt/psbtoperationsdialog.h>
 #include <qt/optionsmodel.h>
 #include <qt/overviewpage.h>
 #include <qt/receivecoinsdialog.h>
@@ -24,13 +23,10 @@
 #include <interfaces/node.h>
 #include <node/ui_interface.h>
-#include <psbt.h>
 #include <util/strencodings.h>
 #include <QAction>
 #include <QActionGroup>
-#include <QApplication>
-#include <QClipboard>
 #include <QFileDialog>
 #include <QHBoxLayout>
 #include <QLabel>
@@ -298,44 +294,6 @@ void WalletView::gotoVerifyMessageTab(QString addr)
 signVerifyMessageDialog->setAddress_VM(addr);
 }
-void WalletView::gotoLoadPSBT(bool from_clipboard)
-{
-std::string data;
-if (from_clipboard) {
-std::string raw = QApplication::clipboard()->text().toStdString();
-bool invalid;
-data = DecodeBase64(raw, &invalid);
-if (invalid) {
-Q_EMIT message(tr("Error"), tr("Unable to decode PSBT from clipboard (invalid base64)"), CClientUIInterface::MSG_ERROR);
-return;
-}
-} else {
-QString filename = GUIUtil::getOpenFileName(this,
-tr("Load Transaction Data"), QString(),
-tr("Partially Signed Transaction (*.psbt)"), nullptr);
-if (filename.isEmpty()) return;
-if (GetFileSize(filename.toLocal8Bit().data(), MAX_FILE_SIZE_PSBT) == MAX_FILE_SIZE_PSBT) {
-Q_EMIT message(tr("Error"), tr("PSBT file must be smaller than 100 MiB"), CClientUIInterface::MSG_ERROR);
-return;
-}
-std::ifstream in(filename.toLocal8Bit().data(), std::ios::binary);
-data = std::string(std::istreambuf_iterator<char>{in}, {});
-}
-std::string error;
-PartiallySignedTransaction psbtx;
-if (!DecodeRawPSBT(psbtx, data, error)) {
-Q_EMIT message(tr("Error"), tr("Unable to decode PSBT") + "\n" + QString::fromStdString(error), CClientUIInterface::MSG_ERROR);
-return;
-}
-PSBTOperationsDialog* dlg = new PSBTOperationsDialog(this, walletModel, clientModel);
-dlg->openWithPSBT(psbtx);
-dlg->setAttribute(Qt::WA_DeleteOnClose);
-dlg->exec();
-}
 bool WalletView::handlePaymentRequest(const SendCoinsRecipient& recipient)
 {
 return sendCoinsPage->handlePaymentRequest(recipient);


@@ -94,8 +94,6 @@ public Q_SLOTS:
 void gotoSignMessageTab(QString addr = "");
 /** Show Sign/Verify Message dialog and switch to verify message tab */
 void gotoVerifyMessageTab(QString addr = "");
-/** Load Partially Signed Bitcoin Transaction */
-void gotoLoadPSBT(bool from_clipboard = false);
 /** Show incoming transaction notification for new transactions.


@@ -2960,10 +2960,10 @@ static RPCHelpMan dumptxoutset()
 },
 [&](const RPCHelpMan& self, const JSONRPCRequest& request) -> UniValue
 {
-const fs::path path = fsbridge::AbsPathJoin(GetDataDir(), request.params[0].get_str());
+const fs::path path = fsbridge::AbsPathJoin(gArgs.GetDataDirNet(), request.params[0].get_str());
 // Write to a temporary path and then move into `path` on completion
 // to avoid confusion due to an interruption.
-const fs::path temppath = fsbridge::AbsPathJoin(GetDataDir(), request.params[0].get_str() + ".incomplete");
+const fs::path temppath = fsbridge::AbsPathJoin(gArgs.GetDataDirNet(), request.params[0].get_str() + ".incomplete");
 if (fs::exists(path)) {
 throw JSONRPCError(


@@ -81,6 +81,7 @@ enum RPCErrorCode
 RPC_WALLET_NOT_FOUND = -18, //!< Invalid wallet specified
 RPC_WALLET_NOT_SPECIFIED = -19, //!< No wallet specified (error when there are multiple wallets loaded)
 RPC_WALLET_ALREADY_LOADED = -35, //!< This same wallet is already loaded
+RPC_WALLET_ALREADY_EXISTS = -36, //!< There is already a wallet with the same name
 //! Backwards compatible aliases


@@ -26,7 +26,7 @@ BOOST_AUTO_TEST_CASE(dbwrapper)
 {
 // Perform tests both obfuscated and non-obfuscated.
 for (const bool obfuscate : {false, true}) {
-fs::path ph = m_args.GetDataDirPath() / (obfuscate ? "dbwrapper_obfuscate_true" : "dbwrapper_obfuscate_false");
+fs::path ph = m_args.GetDataDirBase() / (obfuscate ? "dbwrapper_obfuscate_true" : "dbwrapper_obfuscate_false");
 CDBWrapper dbw(ph, (1 << 20), true, false, obfuscate);
 uint8_t key{'k'};
 uint256 in = InsecureRand256();
@@ -45,7 +45,7 @@ BOOST_AUTO_TEST_CASE(dbwrapper_basic_data)
 {
 // Perform tests both obfuscated and non-obfuscated.
 for (bool obfuscate : {false, true}) {
-fs::path ph = m_args.GetDataDirPath() / (obfuscate ? "dbwrapper_1_obfuscate_true" : "dbwrapper_1_obfuscate_false");
+fs::path ph = m_args.GetDataDirBase() / (obfuscate ? "dbwrapper_1_obfuscate_true" : "dbwrapper_1_obfuscate_false");
 CDBWrapper dbw(ph, (1 << 20), false, true, obfuscate);
 uint256 res;
@@ -126,7 +126,7 @@ BOOST_AUTO_TEST_CASE(dbwrapper_batch)
 {
 // Perform tests both obfuscated and non-obfuscated.
 for (const bool obfuscate : {false, true}) {
-fs::path ph = m_args.GetDataDirPath() / (obfuscate ? "dbwrapper_batch_obfuscate_true" : "dbwrapper_batch_obfuscate_false");
+fs::path ph = m_args.GetDataDirBase() / (obfuscate ? "dbwrapper_batch_obfuscate_true" : "dbwrapper_batch_obfuscate_false");
 CDBWrapper dbw(ph, (1 << 20), true, false, obfuscate);
 uint8_t key{'i'};
@@ -162,7 +162,7 @@ BOOST_AUTO_TEST_CASE(dbwrapper_iterator)
 {
 // Perform tests both obfuscated and non-obfuscated.
 for (const bool obfuscate : {false, true}) {
-fs::path ph = m_args.GetDataDirPath() / (obfuscate ? "dbwrapper_iterator_obfuscate_true" : "dbwrapper_iterator_obfuscate_false");
+fs::path ph = m_args.GetDataDirBase() / (obfuscate ? "dbwrapper_iterator_obfuscate_true" : "dbwrapper_iterator_obfuscate_false");
 CDBWrapper dbw(ph, (1 << 20), true, false, obfuscate);
 // The two keys are intentionally chosen for ordering
@@ -202,7 +202,7 @@ BOOST_AUTO_TEST_CASE(dbwrapper_iterator)
 BOOST_AUTO_TEST_CASE(existing_data_no_obfuscate)
 {
 // We're going to share this fs::path between two wrappers
-fs::path ph = m_args.GetDataDirPath() / "existing_data_no_obfuscate";
+fs::path ph = m_args.GetDataDirBase() / "existing_data_no_obfuscate";
 create_directories(ph);
 // Set up a non-obfuscated wrapper to write some initial data.
@@ -243,7 +243,7 @@ BOOST_AUTO_TEST_CASE(existing_data_no_obfuscate)
 BOOST_AUTO_TEST_CASE(existing_data_reindex)
 {
 // We're going to share this fs::path between two wrappers
-fs::path ph = m_args.GetDataDirPath() / "existing_data_reindex";
+fs::path ph = m_args.GetDataDirBase() / "existing_data_reindex";
 create_directories(ph);
 // Set up a non-obfuscated wrapper to write some initial data.
@@ -278,7 +278,7 @@ BOOST_AUTO_TEST_CASE(existing_data_reindex)
 BOOST_AUTO_TEST_CASE(iterator_ordering)
 {
-fs::path ph = m_args.GetDataDirPath() / "iterator_ordering";
+fs::path ph = m_args.GetDataDirBase() / "iterator_ordering";
 CDBWrapper dbw(ph, (1 << 20), true, false, false);
 for (int x=0x00; x<256; ++x) {
 uint8_t key = x;
@@ -358,7 +358,7 @@ BOOST_AUTO_TEST_CASE(iterator_string_ordering)
 {
 char buf[10];
-fs::path ph = m_args.GetDataDirPath() / "iterator_string_ordering";
+fs::path ph = m_args.GetDataDirBase() / "iterator_string_ordering";
 CDBWrapper dbw(ph, (1 << 20), true, false, false);
 for (int x=0x00; x<10; ++x) {
 for (int y = 0; y < 10; y++) {
@@ -404,7 +404,7 @@ BOOST_AUTO_TEST_CASE(unicodepath)
 // On Windows this test will fail if the directory is created using
 // the ANSI CreateDirectoryA call and the code page isn't UTF8.
 // It will succeed if created with CreateDirectoryW.
-fs::path ph = m_args.GetDataDirPath() / "test_runner_₿_🏃_20191128_104644";
+fs::path ph = m_args.GetDataDirBase() / "test_runner_₿_🏃_20191128_104644";
 CDBWrapper dbw(ph, (1 << 20));
 fs::path lockPath = ph / "LOCK";


@@ -295,7 +295,7 @@ BOOST_AUTO_TEST_CASE(block_relay_only_eviction)
 BOOST_AUTO_TEST_CASE(peer_discouragement)
 {
 const CChainParams& chainparams = Params();
-auto banman = std::make_unique<BanMan>(m_args.GetDataDirPath() / "banlist", nullptr, DEFAULT_MISBEHAVING_BANTIME);
+auto banman = std::make_unique<BanMan>(m_args.GetDataDirBase() / "banlist", nullptr, DEFAULT_MISBEHAVING_BANTIME);
 auto connman = std::make_unique<ConnmanTestMsg>(0x1337, 0x1337, *m_node.addrman);
 auto peerLogic = PeerManager::make(chainparams, *connman, *m_node.addrman, banman.get(), *m_node.scheduler,
 *m_node.chainman, *m_node.mempool, *m_node.mn_metaman, *m_node.mn_sync,
@@ -412,7 +412,7 @@ BOOST_AUTO_TEST_CASE(peer_discouragement)
 BOOST_AUTO_TEST_CASE(DoS_bantime)
 {
 const CChainParams& chainparams = Params();
-auto banman = std::make_unique<BanMan>(m_args.GetDataDirPath() / "banlist", nullptr, DEFAULT_MISBEHAVING_BANTIME);
+auto banman = std::make_unique<BanMan>(m_args.GetDataDirBase() / "banlist", nullptr, DEFAULT_MISBEHAVING_BANTIME);
 auto connman = std::make_unique<CConnman>(0x1337, 0x1337, *m_node.addrman);
 auto peerLogic = PeerManager::make(chainparams, *connman, *m_node.addrman, banman.get(), *m_node.scheduler,
 *m_node.chainman, *m_node.mempool, *m_node.mn_metaman, *m_node.mn_sync,


@@ -14,7 +14,7 @@ BOOST_FIXTURE_TEST_SUITE(flatfile_tests, BasicTestingSetup)
 BOOST_AUTO_TEST_CASE(flatfile_filename)
 {
-const auto data_dir = m_args.GetDataDirPath();
+const auto data_dir = m_args.GetDataDirBase();
 FlatFilePos pos(456, 789);
@@ -27,7 +27,7 @@ BOOST_AUTO_TEST_CASE(flatfile_filename)
 BOOST_AUTO_TEST_CASE(flatfile_open)
 {
-const auto data_dir = m_args.GetDataDirPath();
+const auto data_dir = m_args.GetDataDirBase();
 FlatFileSeq seq(data_dir, "a", 16 * 1024);
 std::string line1("A purely peer-to-peer version of electronic cash would allow online "
@@ -88,7 +88,7 @@ BOOST_AUTO_TEST_CASE(flatfile_open)
 BOOST_AUTO_TEST_CASE(flatfile_allocate)
 {
-const auto data_dir = m_args.GetDataDirPath();
+const auto data_dir = m_args.GetDataDirBase();
 FlatFileSeq seq(data_dir, "a", 100);
 bool out_of_space;
@@ -108,7 +108,7 @@ BOOST_AUTO_TEST_CASE(flatfile_allocate)
 BOOST_AUTO_TEST_CASE(flatfile_flush)
 {
-const auto data_dir = m_args.GetDataDirPath();
+const auto data_dir = m_args.GetDataDirBase();
 FlatFileSeq seq(data_dir, "a", 100);
 bool out_of_space;


@@ -13,7 +13,7 @@ BOOST_FIXTURE_TEST_SUITE(fs_tests, BasicTestingSetup)
 BOOST_AUTO_TEST_CASE(fsbridge_fstream)
 {
-fs::path tmpfolder = m_args.GetDataDirPath();
+fs::path tmpfolder = m_args.GetDataDirBase();
 // tmpfile1 should be the same as tmpfile2
 fs::path tmpfile1 = tmpfolder / "fs_tests_₿_🏃";
 fs::path tmpfile2 = tmpfolder / "fs_tests_₿_🏃";


@@ -43,7 +43,7 @@ FUZZ_TARGET_INIT(banman, initialize_banman)
 {
 FuzzedDataProvider fuzzed_data_provider{buffer.data(), buffer.size()};
 SetMockTime(ConsumeTime(fuzzed_data_provider));
-fs::path banlist_file = GetDataDir() / "fuzzed_banlist";
+fs::path banlist_file = gArgs.GetDataDirNet() / "fuzzed_banlist";
 const bool start_with_corrupted_banlist{fuzzed_data_provider.ConsumeBool()};
 bool force_read_and_write_to_err{false};


@@ -30,7 +30,7 @@ FUZZ_TARGET_INIT(i2p, initialize_i2p)
 const CService sam_proxy;
 CThreadInterrupt interrupt;
-i2p::sam::Session sess{GetDataDir() / "fuzzed_i2p_private_key", sam_proxy, &interrupt};
+i2p::sam::Session sess{gArgs.GetDataDirNet() / "fuzzed_i2p_private_key", sam_proxy, &interrupt};
 i2p::Connection conn;


@@ -0,0 +1,88 @@
+// Copyright (c) 2021 The Bitcoin Core developers
+// Distributed under the MIT software license, see the accompanying
+// file COPYING or http://www.opensource.org/licenses/mit-license.php.
+#include <chainparams.h>
+#include <consensus/validation.h>
+#include <node/utxo_snapshot.h>
+#include <test/fuzz/FuzzedDataProvider.h>
+#include <test/fuzz/fuzz.h>
+#include <test/fuzz/util.h>
+#include <test/util/mining.h>
+#include <test/util/setup_common.h>
+#include <validation.h>
+#include <validationinterface.h>
+namespace {
+const std::vector<std::shared_ptr<CBlock>>* g_chain;
+void initialize_chain()
+{
+const auto params{CreateChainParams(ArgsManager{}, CBaseChainParams::REGTEST)};
+static const auto chain{CreateBlockChain(2 * COINBASE_MATURITY, *params)};
+g_chain = &chain;
+}
+FUZZ_TARGET_INIT(utxo_snapshot, initialize_chain)
+{
+FuzzedDataProvider fuzzed_data_provider(buffer.data(), buffer.size());
+std::unique_ptr<const TestingSetup> setup{MakeNoLogFileContext<const TestingSetup>()};
+const auto& node = setup->m_node;
+auto& chainman{*node.chainman};
+const auto snapshot_path = gArgs.GetDataDirNet() / "fuzzed_snapshot.dat";
+Assert(!chainman.SnapshotBlockhash());
+{
+CAutoFile outfile{fsbridge::fopen(snapshot_path, "wb"), SER_DISK, CLIENT_VERSION};
+const auto file_data{ConsumeRandomLengthByteVector(fuzzed_data_provider)};
+outfile << Span<const uint8_t>{file_data};
+}
+const auto ActivateFuzzedSnapshot{[&] {
+CAutoFile infile{fsbridge::fopen(snapshot_path, "rb"), SER_DISK, CLIENT_VERSION};
+SnapshotMetadata metadata;
+try {
+infile >> metadata;
+} catch (const std::ios_base::failure&) {
+return false;
+}
+return chainman.ActivateSnapshot(infile, metadata, /* in_memory */ true);
+}};
+if (fuzzed_data_provider.ConsumeBool()) {
+for (const auto& block : *g_chain) {
+BlockValidationState dummy;
+bool processed{chainman.ProcessNewBlockHeaders({*block}, dummy, ::Params())};
+Assert(processed);
+const auto* index{WITH_LOCK(::cs_main, return chainman.m_blockman.LookupBlockIndex(block->GetHash()))};
+Assert(index);
+}
+}
+if (ActivateFuzzedSnapshot()) {
+LOCK(::cs_main);
+Assert(!chainman.ActiveChainstate().m_from_snapshot_blockhash->IsNull());
+Assert(*chainman.ActiveChainstate().m_from_snapshot_blockhash ==
+*chainman.SnapshotBlockhash());
+const auto& coinscache{chainman.ActiveChainstate().CoinsTip()};
+int64_t chain_tx{};
+for (const auto& block : *g_chain) {
+Assert(coinscache.HaveCoin(COutPoint{block->vtx.at(0)->GetHash(), 0}));
+const auto* index{chainman.m_blockman.LookupBlockIndex(block->GetHash())};
+const auto num_tx{Assert(index)->nTx};
+Assert(num_tx == 1);
+chain_tx += num_tx;
+}
+Assert(g_chain->size() == coinscache.GetCacheSize());
+Assert(chain_tx == chainman.ActiveTip()->nChainTx);
+} else {
+Assert(!chainman.SnapshotBlockhash());
+Assert(!chainman.ActiveChainstate().m_from_snapshot_blockhash);
+}
+// Snapshot should refuse to load a second time regardless of validity
+Assert(!ActivateFuzzedSnapshot());
+}
+} // namespace


@@ -28,7 +28,7 @@ BOOST_AUTO_TEST_CASE(unlimited_recv)
 };
 CThreadInterrupt interrupt;
-i2p::sam::Session session(GetDataDir() / "test_i2p_private_key", CService{}, &interrupt);
+i2p::sam::Session session(gArgs.GetDataDirNet() / "test_i2p_private_key", CService{}, &interrupt);
 {
 ASSERT_DEBUG_LOG("Creating persistent SAM session");


@@ -102,11 +102,12 @@ BOOST_AUTO_TEST_CASE(double_serfloat_tests) {
 Python code to generate the below hashes:
 def reversed_hex(x):
-return binascii.hexlify(''.join(reversed(x)))
+return bytes(reversed(x)).hex()
 def dsha256(x):
 return hashlib.sha256(hashlib.sha256(x).digest()).digest()
-reversed_hex(dsha256(''.join(struct.pack('<d', x) for x in range(0,1000)))) == '43d0c82591953c4eafe114590d392676a01585d25b25d433557f0d7878b23f96'
+reversed_hex(dsha256(b''.join(struct.pack('<d', x) for x in range(0,1000)))) == '43d0c82591953c4eafe114590d392676a01585d25b25d433557f0d7878b23f96'
 */
 BOOST_AUTO_TEST_CASE(doubles)
 {


@@ -45,7 +45,7 @@ BOOST_FIXTURE_TEST_SUITE(settings_tests, BasicTestingSetup)
 BOOST_AUTO_TEST_CASE(ReadWrite)
 {
-fs::path path = m_args.GetDataDirPath() / "settings.json";
+fs::path path = m_args.GetDataDirBase() / "settings.json";
 WriteText(path, R"({
 "string": "string",


@@ -13,8 +13,10 @@
 #include <pow.h>
 #include <script/standard.h>
 #include <spork.h>
+#include <test/util/script.h>
 #include <util/check.h>
 #include <validation.h>
+#include <versionbits.h>
 CTxIn generatetoaddress(const NodeContext& node, const std::string& address)
 {
@@ -25,6 +27,37 @@ CTxIn generatetoaddress(const NodeContext& node, const std::string& address)
 return MineBlock(node, coinbase_script);
 }
+std::vector<std::shared_ptr<CBlock>> CreateBlockChain(size_t total_height, const CChainParams& params)
+{
+std::vector<std::shared_ptr<CBlock>> ret{total_height};
+auto time{params.GenesisBlock().nTime};
+for (size_t height{0}; height < total_height; ++height) {
+CBlock& block{*(ret.at(height) = std::make_shared<CBlock>())};
+CMutableTransaction coinbase_tx;
+coinbase_tx.vin.resize(1);
+coinbase_tx.vin[0].prevout.SetNull();
+coinbase_tx.vout.resize(1);
+coinbase_tx.vout[0].scriptPubKey = P2SH_OP_TRUE;
+coinbase_tx.vout[0].nValue = GetBlockSubsidyInner(params.GenesisBlock().nBits, height, params.GetConsensus(), false);
+coinbase_tx.vin[0].scriptSig = CScript() << (height + 1) << OP_0;
+block.vtx = {MakeTransactionRef(std::move(coinbase_tx))};
+block.nVersion = VERSIONBITS_LAST_OLD_BLOCK_VERSION;
+block.hashPrevBlock = (height >= 1 ? *ret.at(height - 1) : params.GenesisBlock()).GetHash();
+block.hashMerkleRoot = BlockMerkleRoot(block);
+block.nTime = ++time;
+block.nBits = params.GenesisBlock().nBits;
+block.nNonce = 0;
+while (!CheckProofOfWork(block.GetHash(), block.nBits, params.GetConsensus())) {
+++block.nNonce;
+assert(block.nNonce);
+}
+}
+return ret;
+}
 CTxIn MineBlock(const NodeContext& node, const CScript& coinbase_scriptPubKey)
 {
 auto block = PrepareBlock(node, coinbase_scriptPubKey);


@@ -7,12 +7,17 @@
 #include <memory>
 #include <string>
+#include <vector>
 class CBlock;
+class CChainParams;
 class CScript;
 class CTxIn;
 struct NodeContext;
+/** Create a blockchain, starting from genesis */
+std::vector<std::shared_ptr<CBlock>> CreateBlockChain(size_t total_height, const CChainParams& params);
 /** Returns the generated coin */
 CTxIn MineBlock(const NodeContext&, const CScript& coinbase_scriptPubKey);


@@ -279,7 +279,7 @@ TestingSetup::TestingSetup(const std::string& chainName, const std::vector<const
 throw std::runtime_error("LoadGenesisBlock failed.");
 }
-m_node.banman = std::make_unique<BanMan>(m_args.GetDataDirPath() / "banlist", nullptr, DEFAULT_MISBEHAVING_BANTIME);
+m_node.banman = std::make_unique<BanMan>(m_args.GetDataDirBase() / "banlist", nullptr, DEFAULT_MISBEHAVING_BANTIME);
 m_node.peerman = PeerManager::make(chainparams, *m_node.connman, *m_node.addrman, m_node.banman.get(),
 *m_node.scheduler, *m_node.chainman, *m_node.mempool, *m_node.mn_metaman, *m_node.mn_sync,
 *m_node.govman, *m_node.sporkman, /* mn_activeman = */ nullptr, m_node.dmnman,


@@ -58,23 +58,23 @@ BOOST_AUTO_TEST_CASE(util_datadir)
 ArgsManager args;
 args.ForceSetArg("-datadir", m_path_root.string());
-const fs::path dd_norm = args.GetDataDirPath();
+const fs::path dd_norm = args.GetDataDirBase();
 args.ForceSetArg("-datadir", dd_norm.string() + "/");
 args.ClearPathCache();
-BOOST_CHECK_EQUAL(dd_norm, args.GetDataDirPath());
+BOOST_CHECK_EQUAL(dd_norm, args.GetDataDirBase());
 args.ForceSetArg("-datadir", dd_norm.string() + "/.");
 args.ClearPathCache();
-BOOST_CHECK_EQUAL(dd_norm, args.GetDataDirPath());
+BOOST_CHECK_EQUAL(dd_norm, args.GetDataDirBase());
 args.ForceSetArg("-datadir", dd_norm.string() + "/./");
 args.ClearPathCache();
-BOOST_CHECK_EQUAL(dd_norm, args.GetDataDirPath());
+BOOST_CHECK_EQUAL(dd_norm, args.GetDataDirBase());
 args.ForceSetArg("-datadir", dd_norm.string() + "/.//");
 args.ClearPathCache();
-BOOST_CHECK_EQUAL(dd_norm, args.GetDataDirPath());
+BOOST_CHECK_EQUAL(dd_norm, args.GetDataDirBase());
 }
 namespace {
@@ -1272,10 +1272,10 @@ BOOST_AUTO_TEST_CASE(util_ReadWriteSettings)
 // Test error logging, and remove previously written setting.
 {
 ASSERT_DEBUG_LOG("Failed renaming settings file");
-fs::remove(args1.GetDataDirPath() / "settings.json");
-fs::create_directory(args1.GetDataDirPath() / "settings.json");
+fs::remove(args1.GetDataDirBase() / "settings.json");
+fs::create_directory(args1.GetDataDirBase() / "settings.json");
 args2.WriteSettingsFile();
-fs::remove(args1.GetDataDirPath() / "settings.json");
+fs::remove(args1.GetDataDirBase() / "settings.json");
 }
 }
@@ -2085,7 +2085,7 @@ static constexpr char ExitCommand = 'X';
 BOOST_AUTO_TEST_CASE(test_LockDirectory)
 {
-fs::path dirname = m_args.GetDataDirPath() / "lock_dir";
+fs::path dirname = m_args.GetDataDirBase() / "lock_dir";
 const std::string lockname = ".lock";
 #ifndef WIN32
 // Revert SIGCHLD to default, otherwise boost.test will catch and fail on
@@ -2174,7 +2174,7 @@ BOOST_AUTO_TEST_CASE(test_LockDirectory)
 BOOST_AUTO_TEST_CASE(test_DirIsWritable)
 {
 // Should be able to write to the data dir.
-fs::path tmpdirname = m_args.GetDataDirPath();
+fs::path tmpdirname = m_args.GetDataDirBase();
 BOOST_CHECK_EQUAL(DirIsWritable(tmpdirname), true);
 // Should not be able to write to a non-existent dir.


@@ -28,11 +28,11 @@ BOOST_AUTO_TEST_CASE(test_assumeutxo)
 const auto out110 = *ExpectedAssumeutxo(110, *params);
 BOOST_CHECK_EQUAL(out110.hash_serialized.ToString(), "9b2a277a3e3b979f1a539d57e949495d7f8247312dbc32bce6619128c192b44b");
-BOOST_CHECK_EQUAL(out110.nChainTx, (unsigned int)110);
-const auto out210 = *ExpectedAssumeutxo(210, *params);
-BOOST_CHECK_EQUAL(out210.hash_serialized.ToString(), "d4c97d32882583b057efc3dce673e44204851435e6ffcef20346e69cddc7c91e");
-BOOST_CHECK_EQUAL(out210.nChainTx, (unsigned int)210);
+BOOST_CHECK_EQUAL(out110.nChainTx, 110U);
+const auto out210 = *ExpectedAssumeutxo(200, *params);
+BOOST_CHECK_EQUAL(out210.hash_serialized.ToString(), "8a5bdd92252fc6b24663244bbe958c947bb036dc1f94ccd15439f48d8d1cb4e3");
+BOOST_CHECK_EQUAL(out210.nChainTx, 200U);
 }
 BOOST_AUTO_TEST_SUITE_END()


@@ -584,7 +584,7 @@ void TorController::Reconnect()
 fs::path TorController::GetPrivateKeyFile()
 {
-return GetDataDir() / "onion_v3_private_key";
+return gArgs.GetDataDirNet() / "onion_v3_private_key";
 }
 void TorController::reconnect_cb(evutil_socket_t fd, short what, void *arg)


@@ -150,7 +150,7 @@ size_t CCoinsViewDB::EstimateSize() const
 return m_db->EstimateSize(DB_COIN, uint8_t(DB_COIN + 1));
 }
-CBlockTreeDB::CBlockTreeDB(size_t nCacheSize, bool fMemory, bool fWipe) : CDBWrapper(GetDataDir() / "blocks" / "index", nCacheSize, fMemory, fWipe) {
+CBlockTreeDB::CBlockTreeDB(size_t nCacheSize, bool fMemory, bool fWipe) : CDBWrapper(gArgs.GetDataDirNet() / "blocks" / "index", nCacheSize, fMemory, fWipe) {
 }
 bool CBlockTreeDB::ReadBlockFileInfo(int nFile, CBlockFileInfo &info) {
@@ -408,8 +408,8 @@ bool CBlockTreeDB::ReadFlag(const std::string &name, bool &fValue) {
 bool CBlockTreeDB::LoadBlockIndexGuts(const Consensus::Params& consensusParams, std::function<CBlockIndex*(const uint256&)> insertBlockIndex)
 {
+AssertLockHeld(::cs_main);
 std::unique_ptr<CDBIterator> pcursor(NewIterator());
 pcursor->Seek(std::make_pair(DB_BLOCK_INDEX, uint256()));
 // Load m_block_index
@@ -423,19 +423,16 @@ bool CBlockTreeDB::LoadBlockIndexGuts(const Consensus::Params& consensusParams,
 CBlockIndex* pindexNew = insertBlockIndex(diskindex.GetBlockHash());
 pindexNew->pprev = insertBlockIndex(diskindex.hashPrev);
 pindexNew->nHeight = diskindex.nHeight;
+pindexNew->nFile = diskindex.nFile;
+pindexNew->nDataPos = diskindex.nDataPos;
+pindexNew->nUndoPos = diskindex.nUndoPos;
 pindexNew->nVersion = diskindex.nVersion;
 pindexNew->hashMerkleRoot = diskindex.hashMerkleRoot;
 pindexNew->nTime = diskindex.nTime;
 pindexNew->nBits = diskindex.nBits;
 pindexNew->nNonce = diskindex.nNonce;
-pindexNew->nTx = diskindex.nTx;
-{
-LOCK(::cs_main);
-pindexNew->nFile = diskindex.nFile;
-pindexNew->nDataPos = diskindex.nDataPos;
-pindexNew->nUndoPos = diskindex.nUndoPos;
 pindexNew->nStatus = diskindex.nStatus;
-}
+pindexNew->nTx = diskindex.nTx;
 if (!CheckProofOfWork(pindexNew->GetBlockHash(), pindexNew->nBits, consensusParams)) {
 return error("%s: CheckProofOfWork failed: %s", __func__, pindexNew->ToString());


@@ -104,7 +104,8 @@ public:
 bool WriteFlag(const std::string &name, bool fValue);
 bool ReadFlag(const std::string &name, bool &fValue);
-bool LoadBlockIndexGuts(const Consensus::Params& consensusParams, std::function<CBlockIndex*(const uint256&)> insertBlockIndex);
+bool LoadBlockIndexGuts(const Consensus::Params& consensusParams, std::function<CBlockIndex*(const uint256&)> insertBlockIndex)
+EXCLUSIVE_LOCKS_REQUIRED(::cs_main);
 };
 #endif // BITCOIN_TXDB_H


@@ -413,7 +413,7 @@ std::optional<unsigned int> ArgsManager::GetArgFlags(const std::string& name) co
 return std::nullopt;
 }
-const fs::path& ArgsManager::GetBlocksDirPath()
+const fs::path& ArgsManager::GetBlocksDirPath() const
 {
 LOCK(cs_args);
 fs::path& path = m_cached_blocks_path;
@@ -429,7 +429,7 @@ const fs::path& ArgsManager::GetBlocksDirPath()
 return path;
 }
 } else {
-path = GetDataDirPath(false);
+path = GetDataDirBase();
 }
 path /= BaseParams().DataDir();
@@ -439,7 +439,7 @@ const fs::path& ArgsManager::GetBlocksDirPath()
 return path;
 }
-const fs::path& ArgsManager::GetDataDirPath(bool net_specific) const
+const fs::path& ArgsManager::GetDataDir(bool net_specific) const
 {
 LOCK(cs_args);
 fs::path& path = net_specific ? m_cached_network_datadir_path : m_cached_datadir_path;
@@ -473,7 +473,7 @@ const fs::path& ArgsManager::GetDataDirPath(bool net_specific) const
 fs::path ArgsManager::GetBackupsDirPath()
 {
 if (!IsArgSet("-walletbackupsdir"))
-return GetDataDirPath() / "backups";
+return GetDataDirNet() / "backups";
 return fs::absolute(GetArg("-walletbackupsdir", ""));
 }
@@ -546,7 +546,7 @@ bool ArgsManager::GetSettingsPath(fs::path* filepath, bool temp) const
 }
 if (filepath) {
 std::string settings = GetArg("-settings", BITCOIN_SETTINGS_FILENAME);
-*filepath = fsbridge::AbsPathJoin(GetDataDirPath(/* net_specific= */ true), temp ? settings + ".tmp" : settings);
+*filepath = fsbridge::AbsPathJoin(GetDataDirNet(), temp ? settings + ".tmp" : settings);
 }
 return true;
 }
@@ -854,11 +854,6 @@ fs::path GetDefaultDataDir()
 #endif
 }
-const fs::path &GetDataDir(bool fNetSpecific)
-{
-return gArgs.GetDataDirPath(fNetSpecific);
-}
 fs::path GetBackupsDir()
 {
 return gArgs.GetBackupsDirPath();
@@ -1479,7 +1474,7 @@ fs::path AbsPathForConfigVal(const fs::path& path, bool net_specific)
 if (path.is_absolute()) {
 return path;
 }
-return fsbridge::AbsPathJoin(GetDataDir(net_specific), path);
+return fsbridge::AbsPathJoin(net_specific ? gArgs.GetDataDirNet() : gArgs.GetDataDirBase(), path);
 }
 void ScheduleBatchPriority()


@@ -96,7 +96,6 @@ void ReleaseDirectoryLocks();
 bool TryCreateDirectories(const fs::path& p);
 fs::path GetDefaultDataDir();
-const fs::path &GetDataDir(bool fNetSpecific = true);
 // Return true if -datadir option points to a valid directory or is not specified.
 bool CheckDataDirOption();
 fs::path GetConfigFile(const std::string& confPath);
@@ -125,7 +124,7 @@ UniValue RunCommandParseJSON(const std::string& str_command, const std::string&
 * the datadir if they are not absolute.
 *
 * @param path The path to be conditionally prefixed with datadir.
-* @param net_specific Forwarded to GetDataDir().
+* @param net_specific Use network specific datadir variant
 * @return The normalized path.
 */
 fs::path AbsPathForConfigVal(const fs::path& path, bool net_specific = true);
@@ -207,7 +206,7 @@ protected:
 std::map<OptionsCategory, std::map<std::string, Arg>> m_available_args GUARDED_BY(cs_args);
 bool m_accept_any_command GUARDED_BY(cs_args){true};
 std::list<SectionInfo> m_config_sections GUARDED_BY(cs_args);
-fs::path m_cached_blocks_path GUARDED_BY(cs_args);
+mutable fs::path m_cached_blocks_path GUARDED_BY(cs_args);
 mutable fs::path m_cached_datadir_path GUARDED_BY(cs_args);
 mutable fs::path m_cached_network_datadir_path GUARDED_BY(cs_args);
@@ -283,16 +282,23 @@ public:
 *
 * @return Blocks path which is network specific
 */
-const fs::path& GetBlocksDirPath();
+const fs::path& GetBlocksDirPath() const;
 /**
 * Get data directory path
 *
-* @param net_specific Append network identifier to the returned path
 * @return Absolute path on success, otherwise an empty path when a non-directory path would be returned
 * @post Returned directory path is created unless it is empty
 */
-const fs::path& GetDataDirPath(bool net_specific = true) const;
+const fs::path& GetDataDirBase() const { return GetDataDir(false); }
+/**
+* Get data directory path with appended network identifier
+*
+* @return Absolute path on success, otherwise an empty path when a non-directory path would be returned
+* @post Returned directory path is created unless it is empty
+*/
+const fs::path& GetDataDirNet() const { return GetDataDir(true); }
 fs::path GetBackupsDirPath();
@@ -464,6 +470,15 @@ public:
 void LogArgs() const;
 private:
+/**
+* Get data directory path
+*
+* @param net_specific Append network identifier to the returned path
+* @return Absolute path on success, otherwise an empty path when a non-directory path would be returned
+* @post Returned directory path is created unless it is empty
+*/
+const fs::path& GetDataDir(bool net_specific) const;
 // Helper function for LogArgs().
 void logArgsPrefix(
 const std::string& prefix,
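Taken together, the `util/system.h` hunks above replace one boolean-parameter accessor with two explicitly named ones and demote the parameterised version to a private helper, so call sites state their intent instead of passing a flag. A self-contained toy illustration of that accessor split (the class name and stub paths are made up for the example and are not the real `ArgsManager`):

```cpp
#include <filesystem>
#include <iostream>

class ArgsStub
{
public:
    // Callers name the directory they want instead of passing a boolean.
    const std::filesystem::path& GetDataDirBase() const { return GetDataDir(false); }
    const std::filesystem::path& GetDataDirNet() const { return GetDataDir(true); }

private:
    // The parameterised lookup survives only as an implementation detail.
    const std::filesystem::path& GetDataDir(bool net_specific) const
    {
        // The real ArgsManager resolves -datadir, caches the result and appends
        // the network subdirectory; fixed paths keep this sketch self-contained.
        static const std::filesystem::path base{"/tmp/argsstub"};
        static const std::filesystem::path net{"/tmp/argsstub/regtest"};
        return net_specific ? net : base;
    }
};

int main()
{
    ArgsStub args;
    std::cout << args.GetDataDirBase() << '\n'; // settings shared across networks
    std::cout << args.GetDataDirNet() << '\n';  // per-network files such as mempool.dat
    return 0;
}
```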


@@ -1175,7 +1175,7 @@ CoinsViews::CoinsViews(
 size_t cache_size_bytes,
 bool in_memory,
 bool should_wipe) : m_dbview(
-GetDataDir() / ldb_name, cache_size_bytes, in_memory, should_wipe),
+gArgs.GetDataDirNet() / ldb_name, cache_size_bytes, in_memory, should_wipe),
 m_catcherview(&m_dbview) {}
 void CoinsViews::InitCache()
@@ -2415,7 +2415,7 @@ bool CChainState::FlushStateToDisk(
 // twice (once in the log, and once in the tables). This is already
 // an overestimation, as most will delete an existing entry or
 // overwrite one. Still, use a conservative safety factor of 2.
-if (!CheckDiskSpace(GetDataDir(), 48 * 2 * 2 * CoinsTip().GetCacheSize())) {
+if (!CheckDiskSpace(gArgs.GetDataDirNet(), 48 * 2 * 2 * CoinsTip().GetCacheSize())) {
 return AbortNode(state, "Disk space is too low!", _("Disk space is too low!"));
 }
 // Flush the chainstate (which may refer to block index entries).
@@ -4889,7 +4889,7 @@ bool LoadMempool(CTxMemPool& pool, CChainState& active_chainstate, FopenFn mocka
 {
 const CChainParams& chainparams = Params();
 int64_t nExpiryTimeout = gArgs.GetArg("-mempoolexpiry", DEFAULT_MEMPOOL_EXPIRY) * 60 * 60;
-FILE* filestr{mockable_fopen_function(GetDataDir() / "mempool.dat", "rb")};
+FILE* filestr{mockable_fopen_function(gArgs.GetDataDirNet() / "mempool.dat", "rb")};
 CAutoFile file(filestr, SER_DISK, CLIENT_VERSION);
 if (file.IsNull()) {
 LogPrintf("Failed to open mempool file from disk. Continuing anyway.\n");
@@ -4994,7 +4994,7 @@ bool DumpMempool(const CTxMemPool& pool, FopenFn mockable_fopen_function, bool s
 int64_t mid = GetTimeMicros();
 try {
-FILE* filestr{mockable_fopen_function(GetDataDir() / "mempool.dat.new", "wb")};
+FILE* filestr{mockable_fopen_function(gArgs.GetDataDirNet() / "mempool.dat.new", "wb")};
 if (!filestr) {
 return false;
 }
@@ -5020,7 +5020,7 @@ bool DumpMempool(const CTxMemPool& pool, FopenFn mockable_fopen_function, bool s
 if (!skip_file_commit && !FileCommit(file.Get()))
 throw std::runtime_error("FileCommit failed");
 file.fclose();
-if (!RenameOver(GetDataDir() / "mempool.dat.new", GetDataDir() / "mempool.dat")) {
+if (!RenameOver(gArgs.GetDataDirNet() / "mempool.dat.new", gArgs.GetDataDirNet() / "mempool.dat")) {
 throw std::runtime_error("Rename failed");
 }
 int64_t last = GetTimeMicros();


@@ -222,6 +222,7 @@ enum class DatabaseStatus {
     FAILED_LOAD,
     FAILED_VERIFY,
     FAILED_ENCRYPT,
+    FAILED_INVALID_BACKUP_FILE,
 };
 
 /** Recursively list database paths in directory. */


@@ -607,6 +607,12 @@ public:
         assert(m_context.m_coinjoin_loader);
         return MakeWallet(LoadWallet(*m_context.chain, *m_context.m_coinjoin_loader, name, true /* load_on_start */, options, status, error, warnings));
     }
+    std::unique_ptr<Wallet> restoreWallet(const std::string& backup_file, const std::string& wallet_name, bilingual_str& error, std::vector<bilingual_str>& warnings) override
+    {
+        DatabaseStatus status;
+        assert(m_context.m_coinjoin_loader);
+        return MakeWallet(RestoreWallet(*m_context.chain, *m_context.m_coinjoin_loader, backup_file, wallet_name, /*load_on_start=*/true, status, error, warnings));
+    }
     std::string getWalletDir() override
     {
         return GetWalletDir().string();


@@ -2716,16 +2716,8 @@ static RPCHelpMan listwallets()
     };
 }
 
-static std::tuple<std::shared_ptr<CWallet>, std::vector<bilingual_str>> LoadWalletHelper(WalletContext& context, UniValue load_on_start_param, const std::string wallet_name)
+void HandleWalletError(const std::shared_ptr<CWallet> wallet, DatabaseStatus& status, bilingual_str& error)
 {
-    DatabaseOptions options;
-    DatabaseStatus status;
-    options.require_existing = true;
-    bilingual_str error;
-    std::vector<bilingual_str> warnings;
-    std::optional<bool> load_on_start = load_on_start_param.isNull() ? std::nullopt : std::optional<bool>(load_on_start_param.get_bool());
-    std::shared_ptr<CWallet> const wallet = LoadWallet(*context.chain, *context.m_coinjoin_loader, wallet_name, load_on_start, options, status, error, warnings);
     if (!wallet) {
         // Map bad format to not found, since bad format is returned when the
         // wallet directory exists, but doesn't contain a data file.
@@ -2738,13 +2730,17 @@ static std::tuple<std::shared_ptr<CWallet>, std::vector<bilingual_str>> LoadWall
             case DatabaseStatus::FAILED_ALREADY_LOADED:
                 code = RPC_WALLET_ALREADY_LOADED;
                 break;
+            case DatabaseStatus::FAILED_ALREADY_EXISTS:
+                code = RPC_WALLET_ALREADY_EXISTS;
+                break;
+            case DatabaseStatus::FAILED_INVALID_BACKUP_FILE:
+                code = RPC_INVALID_PARAMETER;
+                break;
             default: // RPC_WALLET_ERROR is returned for all other cases.
                 break;
         }
         throw JSONRPCError(code, error.original);
     }
-
-    return { wallet, warnings };
 }
 
 static RPCHelpMan upgradetohd()
@@ -2872,7 +2868,15 @@ static RPCHelpMan loadwallet()
     WalletContext& context = EnsureWalletContext(request.context);
     const std::string name(request.params[0].get_str());
 
-    auto [wallet, warnings] = LoadWalletHelper(context, request.params[1], name);
+    DatabaseOptions options;
+    DatabaseStatus status;
+    options.require_existing = true;
+    bilingual_str error;
+    std::vector<bilingual_str> warnings;
+    std::optional<bool> load_on_start = request.params[1].isNull() ? std::nullopt : std::optional<bool>(request.params[1].get_bool());
+    std::shared_ptr<CWallet> const wallet = LoadWallet(*context.chain, *context.m_coinjoin_loader, name, load_on_start, options, status, error, warnings);
+
+    HandleWalletError(wallet, status, error);
 
     UniValue obj(UniValue::VOBJ);
     obj.pushKV("name", wallet->GetName());
@@ -3072,27 +3076,17 @@ static RPCHelpMan restorewallet()
     std::string backup_file = request.params[1].get_str();
 
-    if (!fs::exists(backup_file)) {
-        throw JSONRPCError(RPC_INVALID_PARAMETER, "Backup file does not exist");
-    }
-
     std::string wallet_name = request.params[0].get_str();
 
-    const fs::path wallet_path = fsbridge::AbsPathJoin(GetWalletDir(), wallet_name);
-
-    if (fs::exists(wallet_path)) {
-        throw JSONRPCError(RPC_INVALID_PARAMETER, "Wallet name already exists.");
-    }
-
-    if (!TryCreateDirectories(wallet_path)) {
-        throw JSONRPCError(RPC_WALLET_ERROR, strprintf("Failed to create database path '%s'. Database already exists.", wallet_path.string()));
-    }
-
-    auto wallet_file = wallet_path / "wallet.dat";
-
-    fs::copy_file(backup_file, wallet_file, fs::copy_option::fail_if_exists);
-
-    auto [wallet, warnings] = LoadWalletHelper(context, request.params[2], wallet_name);
+    std::optional<bool> load_on_start = request.params[2].isNull() ? std::nullopt : std::optional<bool>(request.params[2].get_bool());
+
+    DatabaseStatus status;
+    bilingual_str error;
+    std::vector<bilingual_str> warnings;
+    const std::shared_ptr<CWallet> wallet = RestoreWallet(*context.chain, *context.m_coinjoin_loader, backup_file, wallet_name, load_on_start, status, error, warnings);
+
+    HandleWalletError(wallet, status, error);
 
     UniValue obj(UniValue::VOBJ);
     obj.pushKV("name", wallet->GetName());
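
For reference, a hedged sketch (Python, not code from this PR) of how the `DatabaseStatus` cases handled by the new `HandleWalletError` map onto the JSON-RPC error codes that callers and the functional tests observe; the dictionary and its name are invented for illustration, the numeric values are the standard RPC error codes.

```python
# Hypothetical illustration only: DatabaseStatus cases handled above and the
# JSON-RPC error code a caller ends up seeing for each of them.
EXPECTED_RPC_CODES = {
    "FAILED_ALREADY_LOADED": -35,       # RPC_WALLET_ALREADY_LOADED
    "FAILED_ALREADY_EXISTS": -36,       # RPC_WALLET_ALREADY_EXISTS
    "FAILED_INVALID_BACKUP_FILE": -8,   # RPC_INVALID_PARAMETER
    "default": -4,                      # RPC_WALLET_ERROR for everything else
}
```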


@@ -23,7 +23,7 @@ static std::shared_ptr<BerkeleyEnvironment> GetWalletEnv(const fs::path& path, s
 BOOST_AUTO_TEST_CASE(getwalletenv_file)
 {
     std::string test_name = "test_name.dat";
-    const fs::path datadir = GetDataDir();
+    const fs::path datadir = gArgs.GetDataDirNet();
     fs::path file_path = datadir / test_name;
 #if BOOST_VERSION >= 107700
     std::ofstream f(BOOST_FILESYSTEM_C_STR(file_path));
@@ -41,7 +41,7 @@ BOOST_AUTO_TEST_CASE(getwalletenv_file)
 BOOST_AUTO_TEST_CASE(getwalletenv_directory)
 {
     std::string expected_name = "wallet.dat";
-    const fs::path datadir = GetDataDir();
+    const fs::path datadir = gArgs.GetDataDirNet();
     std::string filename;
 
     std::shared_ptr<BerkeleyEnvironment> env = GetWalletEnv(datadir, filename);
@@ -51,8 +51,8 @@ BOOST_AUTO_TEST_CASE(getwalletenv_directory)
 BOOST_AUTO_TEST_CASE(getwalletenv_g_dbenvs_multiple)
 {
-    fs::path datadir = GetDataDir() / "1";
-    fs::path datadir_2 = GetDataDir() / "2";
+    fs::path datadir = gArgs.GetDataDirNet() / "1";
+    fs::path datadir_2 = gArgs.GetDataDirNet() / "2";
     std::string filename;
 
     std::shared_ptr<BerkeleyEnvironment> env_1 = GetWalletEnv(datadir, filename);
@@ -65,8 +65,8 @@ BOOST_AUTO_TEST_CASE(getwalletenv_g_dbenvs_multiple)
 BOOST_AUTO_TEST_CASE(getwalletenv_g_dbenvs_free_instance)
 {
-    fs::path datadir = GetDataDir() / "1";
-    fs::path datadir_2 = GetDataDir() / "2";
+    fs::path datadir = gArgs.GetDataDirNet() / "1";
+    fs::path datadir_2 = gArgs.GetDataDirNet() / "2";
     std::string filename;
 
     std::shared_ptr <BerkeleyEnvironment> env_1_a = GetWalletEnv(datadir, filename);


@@ -16,7 +16,7 @@ InitWalletDirTestingSetup::InitWalletDirTestingSetup(const std::string& chainNam
     std::string sep;
     sep += fs::path::preferred_separator;
 
-    m_datadir = GetDataDir();
+    m_datadir = gArgs.GetDataDirNet();
     m_cwd = fs::current_path();
 
     m_walletdir_path_cases["default"] = m_datadir / "wallets";


@@ -272,7 +272,7 @@ BOOST_FIXTURE_TEST_CASE(importwallet_rescan, TestChain100Setup)
     SetMockTime(KEY_TIME);
     m_coinbase_txns.emplace_back(CreateAndProcessBlock({}, GetScriptForRawPubKey(coinbaseKey.GetPubKey())).vtx[0]);
 
-    std::string backup_file = (GetDataDir() / "wallet.backup").string();
+    std::string backup_file = (gArgs.GetDataDirNet() / "wallet.backup").string();
 
     // Import key into wallet and call dumpwallet to create backup file.
     {


@@ -365,6 +365,38 @@ std::shared_ptr<CWallet> CreateWallet(interfaces::Chain& chain, interfaces::Coin
     return wallet;
 }
 
+std::shared_ptr<CWallet> RestoreWallet(interfaces::Chain& chain, interfaces::CoinJoin::Loader& coinjoin_loader, const std::string& backup_file, const std::string& wallet_name, std::optional<bool> load_on_start, DatabaseStatus& status, bilingual_str& error, std::vector<bilingual_str>& warnings)
+{
+    DatabaseOptions options;
+    options.require_existing = true;
+
+    if (!fs::exists(backup_file)) {
+        error = Untranslated("Backup file does not exist");
+        status = DatabaseStatus::FAILED_INVALID_BACKUP_FILE;
+        return nullptr;
+    }
+
+    const fs::path wallet_path = fsbridge::AbsPathJoin(GetWalletDir(), wallet_name);
+
+    if (fs::exists(wallet_path) || !TryCreateDirectories(wallet_path)) {
+        error = Untranslated(strprintf("Failed to create database path '%s'. Database already exists.", wallet_path.string()));
+        status = DatabaseStatus::FAILED_ALREADY_EXISTS;
+        return nullptr;
+    }
+
+    auto wallet_file = wallet_path / "wallet.dat";
+
+    fs::copy_file(backup_file, wallet_file, fs::copy_option::fail_if_exists);
+
+    auto wallet = LoadWallet(chain, coinjoin_loader, wallet_name, load_on_start, options, status, error, warnings);
+
+    if (!wallet) {
+        fs::remove(wallet_file);
+        fs::remove(wallet_path);
+    }
+
+    return wallet;
+}
+
 /** @defgroup mapWallet
  *
  * @{
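
At the RPC level, the new code path can be exercised roughly as sketched below; this is a minimal illustration assuming a functional-test regtest node `node` with a loaded wallet `"w1"` (both names are made up here), combining the existing `backupwallet` call with the new `restorewallet` one.

```python
import os

backup = os.path.join(node.datadir, "w1.bak")    # hypothetical backup location
node.get_wallet_rpc("w1").backupwallet(backup)   # write the backup with the existing RPC
node.restorewallet("w1_restored", backup)        # RestoreWallet(): copy to wallets/w1_restored/wallet.dat, then load
assert "w1_restored" in node.listwallets()
```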


@@ -62,6 +62,7 @@ std::vector<std::shared_ptr<CWallet>> GetWallets();
 std::shared_ptr<CWallet> GetWallet(const std::string& name);
 std::shared_ptr<CWallet> LoadWallet(interfaces::Chain& chain, interfaces::CoinJoin::Loader& coinjoin_loader, const std::string& name, std::optional<bool> load_on_start, const DatabaseOptions& options, DatabaseStatus& status, bilingual_str& error, std::vector<bilingual_str>& warnings);
 std::shared_ptr<CWallet> CreateWallet(interfaces::Chain& chain, interfaces::CoinJoin::Loader& coinjoin_loader, const std::string& name, std::optional<bool> load_on_start, DatabaseOptions& options, DatabaseStatus& status, bilingual_str& error, std::vector<bilingual_str>& warnings);
+std::shared_ptr<CWallet> RestoreWallet(interfaces::Chain& chain, interfaces::CoinJoin::Loader& coinjoin_loader, const std::string& backup_file, const std::string& wallet_name, std::optional<bool> load_on_start, DatabaseStatus& status, bilingual_str& error, std::vector<bilingual_str>& warnings);
 std::unique_ptr<interfaces::Handler> HandleLoadWallet(LoadWalletFn load_wallet);
 std::unique_ptr<WalletDatabase> MakeWalletDatabase(const std::string& name, const DatabaseOptions& options, DatabaseStatus& status, bilingual_str& error);


@@ -19,7 +19,7 @@ fs::path GetWalletDir()
             path = "";
         }
     } else {
-        path = GetDataDir();
+        path = gArgs.GetDataDirNet();
         // If a wallets directory exists, use that, otherwise default to GetDataDir
         if (fs::is_directory(path / "wallets")) {
             path /= "wallets";


@@ -11,7 +11,6 @@ Test that the CHECKLOCKTIMEVERIFY soft-fork activates at (regtest) block height
 from test_framework.blocktools import (
     create_block,
     create_coinbase,
-    create_transaction,
 )
 from test_framework.messages import (
     CTransaction,
@@ -29,10 +28,8 @@ from test_framework.test_framework import BitcoinTestFramework
 from test_framework.util import (
     assert_equal,
     assert_raises_rpc_error,
-    hex_str_to_bytes,
 )
+from test_framework.wallet import MiniWallet
 
-from io import BytesIO
 
 CLTV_HEIGHT = 1351
@@ -41,19 +38,14 @@ CLTV_HEIGHT = 1351
 # 1) prepending a given script to the scriptSig of vin 0 and
 # 2) (optionally) modify the nSequence of vin 0 and the tx's nLockTime
 def cltv_modify_tx(node, tx, prepend_scriptsig, nsequence=None, nlocktime=None):
+    assert_equal(len(tx.vin), 1)
     if nsequence is not None:
         tx.vin[0].nSequence = nsequence
         tx.nLockTime = nlocktime
 
-        # Need to re-sign, since nSequence and nLockTime changed
-        signed_result = node.signrawtransactionwithwallet(tx.serialize().hex())
-        new_tx = CTransaction()
-        new_tx.deserialize(BytesIO(hex_str_to_bytes(signed_result['hex'])))
-    else:
-        new_tx = tx
-
-    new_tx.vin[0].scriptSig = CScript(prepend_scriptsig + list(CScript(new_tx.vin[0].scriptSig)))
-    return new_tx
+    tx.vin[0].scriptSig = CScript(prepend_scriptsig + list(CScript(tx.vin[0].scriptSig)))
+    tx.rehash()
+    return tx
 
 
 def cltv_invalidate(node, tx, failure_reason):
@@ -108,27 +100,23 @@ class BIP65Test(BitcoinTestFramework):
             },
         )
 
-    def skip_test_if_missing_module(self):
-        self.skip_if_no_wallet()
-
     def run_test(self):
         peer = self.nodes[0].add_p2p_connection(P2PInterface())
+        wallet = MiniWallet(self.nodes[0], raw_script=True)
 
         self.test_cltv_info(is_active=False)
 
         self.log.info("Mining %d blocks", CLTV_HEIGHT - 2)
-        self.coinbase_txids = [self.nodes[0].getblock(b)['tx'][0] for b in self.nodes[0].generate(CLTV_HEIGHT - 2)]
-        self.nodeaddress = self.nodes[0].getnewaddress()
+        wallet.generate(10)
+        self.nodes[0].generate(CLTV_HEIGHT - 2 - 10)
 
         self.log.info("Test that invalid-according-to-CLTV transactions can still appear in a block")
 
         # create one invalid tx per CLTV failure reason (5 in total) and collect them
         invalid_ctlv_txs = []
         for i in range(5):
-            spendtx = create_transaction(self.nodes[0], self.coinbase_txids[i],
-                                         self.nodeaddress, amount=1.0)
+            spendtx = wallet.create_self_transfer(from_node=self.nodes[0])['tx']
             spendtx = cltv_invalidate(self.nodes[0], spendtx, i)
-            spendtx.rehash()
             invalid_ctlv_txs.append(spendtx)
 
         tip = self.nodes[0].getbestblockhash()
@@ -162,10 +150,8 @@ class BIP65Test(BitcoinTestFramework):
         # create and test one invalid tx per CLTV failure reason (5 in total)
         for i in range(5):
-            spendtx = create_transaction(self.nodes[0], self.coinbase_txids[10+i],
-                                         self.nodeaddress, amount=1.0)
+            spendtx = wallet.create_self_transfer(from_node=self.nodes[0])['tx']
             spendtx = cltv_invalidate(self.nodes[0], spendtx, i)
-            spendtx.rehash()
 
             expected_cltv_reject_reason = [
                 "non-mandatory-script-verify-flag (Operation not valid with the current stack size)",
@@ -191,7 +177,6 @@ class BIP65Test(BitcoinTestFramework):
         self.log.info("Test that a version 4 block with a valid-according-to-CLTV transaction is accepted")
         spendtx = cltv_validate(self.nodes[0], spendtx, CLTV_HEIGHT - 1)
-        spendtx.rehash()
 
         block.vtx.pop(1)
         block.vtx.append(spendtx)
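
A hedged usage sketch of the rewritten helper (not part of the diff): prepend a script to a MiniWallet transaction's scriptSig inside `run_test`. Prepending a lone `OP_CHECKLOCKTIMEVERIFY`, for example, should trip the "Operation not valid with the current stack size" reject reason listed in the hunk above, since CLTV then runs against an empty stack.

```python
from test_framework.script import OP_CHECKLOCKTIMEVERIFY

node = self.nodes[0]                                         # assumed test context
spendtx = wallet.create_self_transfer(from_node=node)['tx']  # MiniWallet(raw_script=True) tx
spendtx = cltv_modify_tx(node, spendtx, prepend_scriptsig=[OP_CHECKLOCKTIMEVERIFY])
```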


@@ -22,7 +22,10 @@ from test_framework.blocktools import (
 from test_framework.messages import CTransaction
 from test_framework.script import CScript
 from test_framework.test_framework import BitcoinTestFramework
-from test_framework.util import assert_equal, assert_raises_rpc_error
+from test_framework.util import (
+    assert_equal,
+    assert_raises_rpc_error,
+)
 
 NULLDUMMY_ERROR = "non-mandatory-script-verify-flag (Dummy CHECKMULTISIG argument must be zero)"
@@ -44,7 +47,12 @@ class NULLDUMMYTest(BitcoinTestFramework):
         # Need two nodes so GBT (getblocktemplate) doesn't complain that it's not connected.
         self.num_nodes = 2
         self.setup_clean_chain = True
-        self.extra_args = [['-whitelist=127.0.0.1', '-dip3params=105:105', '-bip147height=105']] * 2
+        self.extra_args = [[
+            '-whitelist=127.0.0.1',
+            '-dip3params=105:105',
+            '-bip147height=105',
+            '-par=1',  # Use only one script thread to get the exact reject reason for testing
+        ]] * 2
 
     def skip_test_if_missing_module(self):
         self.skip_if_no_wallet()
@@ -74,7 +82,7 @@ class NULLDUMMYTest(BitcoinTestFramework):
         txid1 = self.nodes[0].sendrawtransaction(test1txs[0].serialize().hex(), 0)
         test1txs.append(create_transaction(self.nodes[0], txid1, self.ms_address, amount=48))
         txid2 = self.nodes[0].sendrawtransaction(test1txs[1].serialize().hex(), 0)
-        self.block_submit(self.nodes[0], test1txs, True)
+        self.block_submit(self.nodes[0], test1txs, accept=True)
 
         self.log.info("Test 2: Non-NULLDUMMY base multisig transaction should not be accepted to mempool before activation")
         test2tx = create_transaction(self.nodes[0], txid2, self.ms_address, amount=47)
@@ -82,22 +90,22 @@ class NULLDUMMYTest(BitcoinTestFramework):
         assert_raises_rpc_error(-26, NULLDUMMY_ERROR, self.nodes[0].sendrawtransaction, test2tx.serialize().hex(), 0)
 
         self.log.info(f"Test 3: Non-NULLDUMMY base transactions should be accepted in a block before activation [{COINBASE_MATURITY + 4}]")
-        self.block_submit(self.nodes[0], [test2tx], True)
+        self.block_submit(self.nodes[0], [test2tx], accept=True)
 
         self.log.info("Test 4: Non-NULLDUMMY base multisig transaction is invalid after activation")
         test4tx = create_transaction(self.nodes[0], test2tx.hash, self.address, amount=46)
         test6txs=[CTransaction(test4tx)]
         trueDummy(test4tx)
         assert_raises_rpc_error(-26, NULLDUMMY_ERROR, self.nodes[0].sendrawtransaction, test4tx.serialize().hex(), 0)
-        self.block_submit(self.nodes[0], [test4tx])
+        self.block_submit(self.nodes[0], [test4tx], accept=False)
 
         self.log.info(f"Test 6: NULLDUMMY compliant base/witness transactions should be accepted to mempool and in block after activation [{COINBASE_MATURITY + 5}]")
         for i in test6txs:
             self.nodes[0].sendrawtransaction(i.serialize().hex(), 0)
-        self.block_submit(self.nodes[0], test6txs, True)
+        self.block_submit(self.nodes[0], test6txs, accept=True)
 
-    def block_submit(self, node, txs, accept = False):
+    def block_submit(self, node, txs, *, accept=False):
         dip4_activated = self.lastblockheight + 1 >= COINBASE_MATURITY + 5
         tmpl = node.getblocktemplate(NORMAL_GBT_REQUEST_PARAMS)
         assert_equal(tmpl['previousblockhash'], self.lastblockhash)
@@ -109,8 +117,8 @@ class NULLDUMMYTest(BitcoinTestFramework):
         block.hashMerkleRoot = block.calc_merkle_root()
         block.rehash()
         block.solve()
-        assert_equal(None if accept else 'block-validation-failed', node.submitblock(block.serialize().hex()))
-        if (accept):
+        assert_equal(None if accept else NULLDUMMY_ERROR, node.submitblock(block.serialize().hex()))
+        if accept:
             assert_equal(node.getbestblockhash(), block.hash)
             self.lastblockhash = block.hash
             self.lastblocktime += 1
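
As a small aside on the new `block_submit` signature, the bare `*` makes `accept` keyword-only, so call sites have to name the flag instead of passing an opaque positional boolean. A minimal standalone illustration:

```python
def block_submit(node, txs, *, accept=False):
    # Stand-in body; the real method builds and submits a block.
    return accept

block_submit("node", [], accept=True)   # OK: the flag is spelled out at the call site
# block_submit("node", [], True)        # TypeError: takes 2 positional arguments but 3 were given
```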


@@ -20,40 +20,41 @@ from test_framework.wallet import MiniWallet
 class MempoolSpendCoinbaseTest(BitcoinTestFramework):
     def set_test_params(self):
         self.num_nodes = 1
-        self.setup_clean_chain = True
 
     def run_test(self):
         wallet = MiniWallet(self.nodes[0])
 
-        wallet.generate(200)
-        chain_height = self.nodes[0].getblockcount()
-        assert_equal(chain_height, 200)
+        # Invalidate two blocks, so that miniwallet has access to a coin that will mature in the next block
+        chain_height = 198
+        self.nodes[0].invalidateblock(self.nodes[0].getblockhash(chain_height + 1))
+        assert_equal(chain_height, self.nodes[0].getblockcount())
 
         # Coinbase at height chain_height-100+1 ok in mempool, should
         # get mined. Coinbase at height chain_height-100+2 is
         # too immature to spend.
-        b = [self.nodes[0].getblockhash(n) for n in range(101, 103)]
-        coinbase_txids = [self.nodes[0].getblock(h)['tx'][0] for h in b]
-        utxo_101 = wallet.get_utxo(txid=coinbase_txids[0])
-        utxo_102 = wallet.get_utxo(txid=coinbase_txids[1])
+        wallet.scan_blocks(start=chain_height - 100 + 1, num=1)
+        utxo_mature = wallet.get_utxo()
+        wallet.scan_blocks(start=chain_height - 100 + 2, num=1)
+        utxo_immature = wallet.get_utxo()
 
-        spend_101_id = wallet.send_self_transfer(from_node=self.nodes[0], utxo_to_spend=utxo_101)["txid"]
+        spend_mature_id = wallet.send_self_transfer(from_node=self.nodes[0], utxo_to_spend=utxo_mature)["txid"]
 
-        # coinbase at height 102 should be too immature to spend
+        # other coinbase should be too immature to spend
+        immature_tx = wallet.create_self_transfer(from_node=self.nodes[0], utxo_to_spend=utxo_immature, mempool_valid=False)
         assert_raises_rpc_error(-26,
                                 "bad-txns-premature-spend-of-coinbase",
-                                lambda: wallet.send_self_transfer(from_node=self.nodes[0], utxo_to_spend=utxo_102))
+                                lambda: self.nodes[0].sendrawtransaction(immature_tx['hex']))
 
-        # mempool should have just spend_101:
-        assert_equal(self.nodes[0].getrawmempool(), [spend_101_id])
+        # mempool should have just the mature one
+        assert_equal(self.nodes[0].getrawmempool(), [spend_mature_id])
 
-        # mine a block, spend_101 should get confirmed
+        # mine a block, mature one should get confirmed
         self.nodes[0].generate(1)
         assert_equal(set(self.nodes[0].getrawmempool()), set())
 
-        # ... and now height 102 can be spent:
-        spend_102_id = wallet.send_self_transfer(from_node=self.nodes[0], utxo_to_spend=utxo_102)["txid"]
-        assert_equal(self.nodes[0].getrawmempool(), [spend_102_id])
+        # ... and now previously immature can be spent:
+        spend_new_id = self.nodes[0].sendrawtransaction(immature_tx['hex'])
+        assert_equal(self.nodes[0].getrawmempool(), [spend_new_id])
 
 
 if __name__ == '__main__':
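
The maturity arithmetic behind the rewritten test, spelled out as a standalone check (illustration only, not test code): a coinbase can be spent once the spending block is at least `COINBASE_MATURITY` blocks above it.

```python
COINBASE_MATURITY = 100
chain_height = 198                   # tip after invalidating two blocks
next_block = chain_height + 1        # height at which mempool transactions would confirm
assert next_block - (chain_height - 100 + 1) >= COINBASE_MATURITY  # coinbase from block 99: mature
assert next_block - (chain_height - 100 + 2) < COINBASE_MATURITY   # coinbase from block 100: one block short
```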


@@ -913,12 +913,13 @@ class BitcoinTestFramework(metaclass=BitcoinTestMetaClass):
             # This is needed so that we are out of IBD when the test starts,
             # see the tip age check in IsInitialBlockDownload().
             self.set_genesis_mocktime()
-            gen_addresses = [k.address for k in TestNode.PRIV_KEYS] + [ADDRESS_BCRT1_P2SH_OP_TRUE]
+            gen_addresses = [k.address for k in TestNode.PRIV_KEYS][:3] + [ADDRESS_BCRT1_P2SH_OP_TRUE]
+            assert_equal(len(gen_addresses), 4)
             for i in range(8):
                 self.bump_mocktime((25 if i != 7 else 24) * 156)
                 cache_node.generatetoaddress(
                     nblocks=25 if i != 7 else 24,
-                    address=gen_addresses[i % 4],
+                    address=gen_addresses[i % len(gen_addresses)],
                 )
 
             assert_equal(cache_node.getblockchaininfo()["blocks"], 199)
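
A quick standalone check of the new round-robin indexing (illustration only): with the private-key addresses trimmed to three entries plus the P2SH-OP_TRUE address, `i % len(gen_addresses)` cycles through exactly those four for the eight generation rounds.

```python
gen_addresses = ["key0", "key1", "key2", "p2sh_op_true"]  # stand-ins for the real addresses
picked = [gen_addresses[i % len(gen_addresses)] for i in range(8)]
assert picked == gen_addresses * 2
```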


@@ -17,6 +17,7 @@ from test_framework.messages import (
 from test_framework.script import (
     CScript,
     OP_TRUE,
+    OP_NOP,
 )
 from test_framework.util import (
     assert_equal,
@@ -26,9 +27,13 @@ from test_framework.util import (
 class MiniWallet:
-    def __init__(self, test_node):
+    def __init__(self, test_node, *, raw_script=False):
         self._test_node = test_node
         self._utxos = []
-        self._address = ADDRESS_BCRT1_P2SH_OP_TRUE
-        self._scriptPubKey = hex_str_to_bytes(self._test_node.validateaddress(self._address)['scriptPubKey'])
+        if raw_script:
+            self._address = None
+            self._scriptPubKey = bytes(CScript([OP_TRUE]))
+        else:
+            self._address = ADDRESS_BCRT1_P2SH_OP_TRUE
+            self._scriptPubKey = hex_str_to_bytes(self._test_node.validateaddress(self._address)['scriptPubKey'])
@@ -37,13 +42,17 @@ class MiniWallet:
         for i in range(start, start + num):
             block = self._test_node.getblock(blockhash=self._test_node.getblockhash(i), verbosity=2)
             for tx in block['tx']:
-                for out in tx['vout']:
-                    if out['scriptPubKey']['hex'] == self._scriptPubKey.hex():
-                        self._utxos.append({'txid': tx['txid'], 'vout': out['n'], 'value': out['value']})
+                self.scan_tx(tx)
+
+    def scan_tx(self, tx):
+        """Scan the tx for self._scriptPubKey outputs and add them to self._utxos"""
+        for out in tx['vout']:
+            if out['scriptPubKey']['hex'] == self._scriptPubKey.hex():
+                self._utxos.append({'txid': tx['txid'], 'vout': out['n'], 'value': out['value']})
 
     def generate(self, num_blocks):
         """Generate blocks with coinbase outputs to the internal address, and append the outputs to the internal list"""
-        blocks = self._test_node.generatetoaddress(num_blocks, self._address)
+        blocks = self._test_node.generatetodescriptor(num_blocks, f'raw({self._scriptPubKey.hex()})')
         for b in blocks:
             cb_tx = self._test_node.getblock(blockhash=b, verbosity=2)['tx'][0]
             self._utxos.append({'txid': cb_tx['txid'], 'vout': 0, 'value': cb_tx['vout'][0]['value']})
@@ -69,6 +78,12 @@ class MiniWallet:
     def send_self_transfer(self, *, fee_rate=Decimal("0.003"), from_node, utxo_to_spend=None):
         """Create and send a tx with the specified fee_rate. Fee may be exact or at most one satoshi higher than needed."""
+        tx = self.create_self_transfer(fee_rate=fee_rate, from_node=from_node, utxo_to_spend=utxo_to_spend)
+        self.sendrawtransaction(from_node=from_node, tx_hex=tx['hex'])
+        return tx
+
+    def create_self_transfer(self, *, fee_rate=Decimal("0.003"), from_node, utxo_to_spend=None, mempool_valid=True):
+        """Create and return a tx with the specified fee_rate. Fee may be exact or at most one satoshi higher than needed."""
         self._utxos = sorted(self._utxos, key=lambda k: k['value'])
         utxo_to_spend = utxo_to_spend or self._utxos.pop()  # Pick the largest utxo (if none provided) and hope it covers the fee
         vsize = Decimal(85)
@@ -79,12 +94,20 @@ class MiniWallet:
         tx = CTransaction()
         tx.vin = [CTxIn(COutPoint(int(utxo_to_spend['txid'], 16), utxo_to_spend['vout']))]
         tx.vout = [CTxOut(int(send_value * COIN), self._scriptPubKey)]
-        tx.vin[0].scriptSig = CScript([CScript([OP_TRUE])])
+        if not self._address:
+            # raw script
+            tx.vin[0].scriptSig = CScript([OP_NOP] * 24)  # pad to identical size
+        else:
+            tx.vin[0].scriptSig = CScript([CScript([OP_TRUE])])
         tx_hex = tx.serialize().hex()
 
         tx_info = from_node.testmempoolaccept([tx_hex])[0]
-        self._utxos.append({'txid': tx_info['txid'], 'vout': 0, 'value': send_value})
-        from_node.sendrawtransaction(tx_hex)
-        assert_equal(len(tx_hex) // 2, vsize)
-        assert_equal(tx_info['fees']['base'], fee)
-        return {'txid': tx_info['txid'], 'hex': tx_hex}
+        assert_equal(mempool_valid, tx_info['allowed'])
+        if mempool_valid:
+            assert_equal(len(tx_hex) // 2, vsize)
+            assert_equal(tx_info['fees']['base'], fee)
+        return {'txid': tx_info['txid'], 'hex': tx_hex, 'tx': tx}
+
+    def sendrawtransaction(self, *, from_node, tx_hex):
+        from_node.sendrawtransaction(tx_hex)
+        self.scan_tx(from_node.decoderawtransaction(tx_hex))
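
A minimal usage sketch of the split create/send API and the raw-script mode (it assumes a regtest node handle `node` from a functional test; everything else comes from the additions above):

```python
from test_framework.wallet import MiniWallet

wallet = MiniWallet(node, raw_script=True)       # plain OP_TRUE outputs, no address
wallet.generate(101)                             # mine enough blocks for one mature coinbase
tx = wallet.create_self_transfer(from_node=node)             # build + testmempoolaccept only
wallet.sendrawtransaction(from_node=node, tx_hex=tx['hex'])  # broadcast, then rescan for the new output
```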


@@ -107,17 +107,32 @@ class WalletBackupTest(BitcoinTestFramework):
         os.remove(os.path.join(self.nodes[1].datadir, self.chain, 'wallets', self.default_wallet_name, self.wallet_data_filename))
         os.remove(os.path.join(self.nodes[2].datadir, self.chain, 'wallets', self.default_wallet_name, self.wallet_data_filename))
 
+    def restore_invalid_wallet(self):
+        node = self.nodes[3]
+        invalid_wallet_file = os.path.join(self.nodes[0].datadir, 'invalid_wallet_file.bak')
+        open(invalid_wallet_file, 'a', encoding="utf8").write('invald wallet')
+        wallet_name = "res0"
+        not_created_wallet_file = os.path.join(node.datadir, self.chain, 'wallets', wallet_name)
+        error_message = "Wallet file verification failed. Failed to load database path '{}'. Data is not in recognized format.".format(not_created_wallet_file)
+        assert_raises_rpc_error(-18, error_message, node.restorewallet, wallet_name, invalid_wallet_file)
+        assert not os.path.exists(not_created_wallet_file)
+
     def restore_nonexistent_wallet(self):
         node = self.nodes[3]
         nonexistent_wallet_file = os.path.join(self.nodes[0].datadir, 'nonexistent_wallet.bak')
         wallet_name = "res0"
         assert_raises_rpc_error(-8, "Backup file does not exist", node.restorewallet, wallet_name, nonexistent_wallet_file)
+        not_created_wallet_file = os.path.join(node.datadir, self.chain, 'wallets', wallet_name)
+        assert not os.path.exists(not_created_wallet_file)
 
     def restore_wallet_existent_name(self):
         node = self.nodes[3]
-        wallet_file = os.path.join(self.nodes[0].datadir, 'wallet.bak')
+        backup_file = os.path.join(self.nodes[0].datadir, 'wallet.bak')
         wallet_name = "res0"
-        assert_raises_rpc_error(-8, "Wallet name already exists.", node.restorewallet, wallet_name, wallet_file)
+        wallet_file = os.path.join(node.datadir, self.chain, 'wallets', wallet_name)
+        error_message = "Failed to create database path '{}'. Database already exists.".format(wallet_file)
+        assert_raises_rpc_error(-36, error_message, node.restorewallet, wallet_name, backup_file)
+        assert os.path.exists(wallet_file)
 
     def init_three(self):
         self.init_wallet(0)
@@ -179,6 +194,7 @@ class WalletBackupTest(BitcoinTestFramework):
         ##
         self.log.info("Restoring wallets on node 3 using backup files")
 
+        self.restore_invalid_wallet()
         self.restore_nonexistent_wallet()
 
         backup_file_0 = os.path.join(self.nodes[0].datadir, 'wallet.bak')
@@ -189,6 +205,10 @@ class WalletBackupTest(BitcoinTestFramework):
         self.nodes[3].restorewallet("res1", backup_file_1)
         self.nodes[3].restorewallet("res2", backup_file_2)
 
+        assert os.path.exists(os.path.join(self.nodes[3].datadir, self.chain, 'wallets', "res0"))
+        assert os.path.exists(os.path.join(self.nodes[3].datadir, self.chain, 'wallets', "res1"))
+        assert os.path.exists(os.path.join(self.nodes[3].datadir, self.chain, 'wallets', "res2"))
+
         res0_rpc = self.nodes[3].get_wallet_rpc("res0")
         res1_rpc = self.nodes[3].get_wallet_rpc("res1")
         res2_rpc = self.nodes[3].get_wallet_rpc("res2")


@@ -10,7 +10,6 @@ Runs automatically during `make check`.
 Can also be run manually."""
 
 import argparse
-import binascii
 import configparser
 import difflib
 import json
@@ -167,7 +166,7 @@ def parse_output(a, fmt):
     if fmt == 'json':  # json: compare parsed data
         return json.loads(a)
     elif fmt == 'hex':  # hex: parse and compare binary data
-        return binascii.a2b_hex(a.strip())
+        return bytes.fromhex(a.strip())
     else:
         raise NotImplementedError("Don't know how to compare %s" % fmt)
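
The replacement is behavior-for-behavior: `bytes.fromhex()` is the stdlib equivalent of `binascii.a2b_hex()` for this call site, and `bytes.hex()` covers the opposite direction. A quick check:

```python
import binascii

assert bytes.fromhex("00ff") == binascii.a2b_hex("00ff") == b"\x00\xff"
assert b"\x00\xff".hex() == "00ff"
```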