Merge #6327: backport: merge bitcoin#23046, #24138, #24812, #24626, #21726, #25123, #25074, #24832, #26215, #24858, #26417, #16981 (index backports)

7d9ff96091 merge bitcoin#16981: Improve runtime performance of --reindex (Kittywhiskers Van Gogh)
e531dff5f7 merge bitcoin#26417: fix intermittent failure in feature_index_prune.py (Kittywhiskers Van Gogh)
b04b71a957 merge bitcoin#24858: incorrect blk file size calculation during reindex results in recoverable blk file corruption (Kittywhiskers Van Gogh)
9e75b99c53 merge bitcoin#26215: Improve BaseIndex::BlockUntilSyncedToCurrentChain reliability (Kittywhiskers Van Gogh)
3bd584c845 merge bitcoin#24832: Verify the block filter hash when reading the filter from disk (Kittywhiskers Van Gogh)
e507a51323 fix: avoid `mandatory-script-verify-flag-failed` crash in bench test (Kittywhiskers Van Gogh)
a86109a017 merge bitcoin#25074: During sync, commit best block after indexing (Kittywhiskers Van Gogh)
e6867a35ce merge bitcoin#25123: Fix race condition in index prune test (Kittywhiskers Van Gogh)
baf6e26eed merge bitcoin#21726: Improve Indices on pruned nodes via prune blockers (Kittywhiskers Van Gogh)
c65ec190c5 merge bitcoin#24626: disallow reindex-chainstate when pruning (Kittywhiskers Van Gogh)
bcd24a25e3 fix: push activation height for forks ahead, fix `feature_pruning.py` (Kittywhiskers Van Gogh)
10203560f5 merge bitcoin#24812: Add CHECK_NONFATAL identity function and NONFATAL_UNREACHABLE macro (Kittywhiskers Van Gogh)
1caaa85716 merge bitcoin#24138: Commit MuHash and best block together for coinstatsindex (Kittywhiskers Van Gogh)
b218f123b7 merge bitcoin#23046: Add txindex migration test (Kittywhiskers Van Gogh)
ebae59eedf fix: make sure we flush our committed best block in no-upgrade cases (Kittywhiskers Van Gogh)

Pull request description:

  ## Additional Information

  * When backporting [bitcoin#23046](https://github.com/bitcoin/bitcoin/pull/23046), it was discovered that there has been a longstanding bug in `CDeterministicMNManager::MigrateDBIfNeeded()` (and its `MigrateDBIfNeeded2()` variant) that incorrectly flagged a database taken from an older version as having failed a previous migration attempt, requiring the database to be fully rebuilt through a reindex.

    This occurred because the older database would be read as pre-DIP3 in `MigrateDBIfNeeded()`, which caused the migration logic to write the new best block ([source](3f0c2ff324/src/evo/deterministicmns.cpp (L1236-L1241))); the legacy best block is erased before the DIP3 condition is checked ([source](3f0c2ff324/src/evo/deterministicmns.cpp (L1233))). Critically, while the transaction was committed ([source](3f0c2ff324/src/evo/deterministicmns.cpp (L1240))), the best block was never written to disk (an example of writing to disk is [here](3f0c2ff324/src/evo/deterministicmns.cpp (L1288-L1292))).
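
    The commit-without-flush failure mode described above can be sketched with a toy two-level store (all names here are hypothetical, not the real `CEvoDb` API; committing a transaction only promotes writes into memory, and a separate root-level commit is required before anything reaches disk):

    ```cpp
    #include <cassert>
    #include <map>
    #include <string>

    // Toy model of a DB with an in-memory layer over a "disk" layer.
    struct ToyDb {
        std::map<std::string, std::string> disk;   // persisted state
        std::map<std::string, std::string> memory; // committed-but-unflushed state

        // Analogous to writing the best block inside a committed transaction.
        void WriteBestBlock(const std::string& hash) { memory["best_block"] = hash; }

        // Analogous to the root-level flush that was missing.
        void CommitRoot()
        {
            for (const auto& [k, v] : memory) disk[k] = v;
            memory.clear();
        }
    };

    int main()
    {
        ToyDb db;
        db.WriteBestBlock("abc");
        // Transaction committed to memory, but never flushed: the bug.
        assert(db.disk.count("best_block") == 0);
        db.CommitRoot(); // the fix: also commit the root transaction
        assert(db.disk.at("best_block") == "abc");
        return 0;
    }
    ```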

    This meant that when the database was read again by `MigrateDBIfNeeded2()`, it observed three things: a) there is no new best block (because it was never written to disk), b) there is no legacy best block (because it was erased before the new best block was written), and c) the chain height is greater than 1 (since this isn't a new datadir and the chain has already advanced). From this it concluded that the previous migration attempt was botched and failed ([source](3f0c2ff324/src/evo/deterministicmns.cpp (L1337-L1343))).
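
    The three-way check can be condensed into a standalone predicate (hypothetical names; the real check lives in `MigrateDBIfNeeded2()`):

    ```cpp
    #include <cassert>

    // Hypothetical condensation of the check: a migration is considered botched
    // when neither the new nor the legacy best-block key is present even though
    // the chain has already advanced past height 1.
    bool IsBotchedMigration(bool has_new_best_block, bool has_legacy_best_block, int chain_height)
    {
        if (has_new_best_block) return false;    // a) already migrated
        if (has_legacy_best_block) return false; // b) migration not yet attempted
        return chain_height > 1;                 // c) used datadir stuck mid-migration
    }

    int main()
    {
        assert(!IsBotchedMigration(false, false, 0));   // fresh datadir: fine
        assert(!IsBotchedMigration(true, false, 1000)); // migrated: fine
        assert(IsBotchedMigration(false, false, 1000)); // the buggy state: flagged
        return 0;
    }
    ```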

    This bug affects v19 through `develop` (`3f0c2ff3` as of this writing) and prevented `feature_txindex_compatibility.py` from working as expected: the test migrates legacy databases to newer versions to exercise the txindex migration code and would get stuck in the unhappy EvoDB migration logic. To allow the test to function properly when testing against the latest version of the client, this bug has been fixed as part of this PR.

  * In [bitcoin#23046](https://github.com/bitcoin/bitcoin/pull/23046), v0.17 was used as the last version to support the legacy txindex, as the updated txindex format was introduced in [dash#4178](https://github.com/dashpay/dash/pull/4178) (i.e. after v0.17). The version selected for having migration code in it was v18.2.2 (note: migration code was removed in [dash#6296](https://github.com/dashpay/dash/pull/6296), so far not included as part of any release), despite the eligible range being v18.x to v21.x, because a) the bug mentioned above affects v19.x onwards and b) v18.2.2 is the latest release in the v18.x lifecycle.

    * The specific version number used for v0.17 is `170003`, as the binaries corresponding to `170000` are not populated in `releases/`, which causes a CI failure ([source](https://gitlab.com/dashpay/dash/-/jobs/8073041955#L380)).

  * As of `develop` (`3f0c2ff3` as of this writing), `feature_pruning.py` was broken due to changes in Core that were not adjusted for, namely:
    * The enforcement of `MAX_STANDARD_TX_SIZE` ([source](3f0c2ff324/src/policy/policy.h (L23-L24))) from DIP1 onwards ([source](3f0c2ff324/src/validation.cpp (L299-L301))) results in `bad-txns-oversize` errors in blocks generated for the test, as the generated transactions are ~9.5x larger than the now-enforced limit ([source](3f0c2ff324/test/functional/feature_pruning.py (L48C51-L48C57))). This is resolved by pushing the DIP1 activation height up to `2000` (the same activation height used for DIP3 and DIP8).
    * A change in subsidy logic in v20 ([source](3f0c2ff324/src/validation.cpp (L1073-L1082))) results in `bad-cb-amount` errors. This is resolved by pushing the v20 activation height upwards.

    Additionally, an inopportune implicit post-`generate` sync ([source](3f0c2ff324/test/functional/feature_pruning.py (L215))) also caused the test to fail. Alongside the issues above, this has been resolved in this PR.

  * As of `develop` (`3f0c2ff3` as of this writing), `bench_dash` crashes when running the `AssembleBlock` benchmark. The regression was traced back to [bitcoin#21840](https://github.com/bitcoin/bitcoin/pull/21840) (5d10b41) in [dash#6152](https://github.com/dashpay/dash/pull/6152) due to the switch to `P2SH_OP_TRUE`.

    This has been resolved by reverting this particular change.

    <details>

    <summary>Pre-fix test failure:</summary>

    ```
    $ ./src/bench/bench_dash
    Warning, results might be unstable:
    * CPU governor is '' but should be 'performance'
    * Turbo is enabled, CPU frequency will fluctuate

    Recommendations
    * Use 'pyperf system tune' before benchmarking. See https://github.com/psf/pyperf

    |               ns/op |                op/s |    err% |          ins/op |         bra/op |   miss% |     total | benchmark
    |--------------------:|--------------------:|--------:|----------------:|---------------:|--------:|----------:|:----------
    |       17,647,657.00 |               56.66 |    0.1% |  231,718,349.00 |  42,246,265.00 |    0.1% |      0.20 | `AddrManAdd`
    |       42,201,861.00 |               23.70 |    0.1% |  544,366,811.00 | 102,569,244.00 |    0.0% |      0.46 | `AddrManAddThenGood`
    |          189,697.83 |            5,271.54 |    0.1% |    1,763,991.40 |     356,189.40 |    0.3% |      0.01 | `AddrManGetAddr`
    |              454.38 |        2,200,808.04 |    0.6% |        6,229.11 |       1,343.92 |    0.1% |      0.01 | `AddrManSelect`
    |        1,066,471.00 |              937.67 |   67.6% |   13,350,463.00 |   3,150,465.00 |    0.4% |      0.01 | 〰️ `AddrManSelectByNetwork` (Unstable with ~1.0 iters. Increase `minEpochIterations` to e.g. 10)
    |        1,181,774.50 |              846.19 |   49.0% |   18,358,489.50 |   4,224,642.50 |    0.0% |      0.02 | 〰️ `AddrManSelectFromAlmostEmpty` (Unstable with ~1.1 iters. Increase `minEpochIterations` to e.g. 11)
    bench_dash: bench/block_assemble.cpp:46: void AssembleBlock(benchmark::Bench &): Assertion `res.m_result_type == MempoolAcceptResult::ResultType::VALID' failed.
    [1]    2343746 IOT instruction (core dumped)  ./src/bench/bench_dash
    ```
    </details>

  ## Breaking changes

  None expected

  ## Checklist

  - [x] I have performed a self-review of my own code
  - [x] I have commented my code, particularly in hard-to-understand areas
  - [x] I have added or updated relevant unit/integration/functional/e2e tests
  - [x] I have made corresponding changes to the documentation **(note: N/A)**
  - [x] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_

ACKs for top commit:
  UdjinM6:
    utACK 7d9ff96091
  PastaPastaPasta:
    utACK 7d9ff96091

Tree-SHA512: e2f1e58abb0a0368c4f1d5e488520957e042e6207b7d2d68a15eb18662405a3cdac91c5ff8e93c8a94c0fdab9b1412bd608d055f196230506c1640439939c25d
This commit is contained in:
pasta 2024-10-21 11:19:30 -05:00
commit 0946eec692
No known key found for this signature in database
GPG Key ID: E2F3D7916E722D38
45 changed files with 962 additions and 344 deletions


@@ -35,6 +35,7 @@ bench_bench_dash_SOURCES = \
   bench/ccoins_caching.cpp \
   bench/gcs_filter.cpp \
   bench/hashpadding.cpp \
+  bench/load_external.cpp \
   bench/merkle_root.cpp \
   bench/mempool_eviction.cpp \
   bench/mempool_stress.cpp \


@@ -89,6 +89,7 @@ BITCOIN_TESTS =\
   test/blockencodings_tests.cpp \
   test/blockfilter_tests.cpp \
   test/blockfilter_index_tests.cpp \
+  test/blockmanager_tests.cpp \
   test/bloom_tests.cpp \
   test/bls_tests.cpp \
   test/bswap_tests.cpp \


@@ -32,7 +32,7 @@ static void AssembleBlock(benchmark::Bench& bench)
     std::array<CTransactionRef, NUM_BLOCKS - COINBASE_MATURITY + 1> txs;
     for (size_t b{0}; b < NUM_BLOCKS; ++b) {
         CMutableTransaction tx;
-        tx.vin.push_back(MineBlock(test_setup->m_node, P2SH_OP_TRUE));
+        tx.vin.push_back(MineBlock(test_setup->m_node, SCRIPT_PUB));
         tx.vin.back().scriptSig = scriptSig;
         tx.vout.emplace_back(1337, SCRIPT_PUB);
         if (NUM_BLOCKS - b >= COINBASE_MATURITY)
@@ -48,7 +48,7 @@ static void AssembleBlock(benchmark::Bench& bench)
     }

     bench.minEpochIterations(700).run([&] {
-        PrepareBlock(test_setup->m_node, P2SH_OP_TRUE);
+        PrepareBlock(test_setup->m_node, SCRIPT_PUB);
     });
 }


@@ -5,38 +5,84 @@
 #include <bench/bench.h>
 #include <blockfilter.h>

-static void ConstructGCSFilter(benchmark::Bench& bench)
+static const GCSFilter::ElementSet GenerateGCSTestElements()
 {
     GCSFilter::ElementSet elements;
-    for (int i = 0; i < 10000; ++i) {
+
+    // Testing the benchmarks with different number of elements show that a filter
+    // with at least 100,000 elements results in benchmarks that have the same
+    // ns/op. This makes it easy to reason about how long (in nanoseconds) a single
+    // filter element takes to process.
+    for (int i = 0; i < 100000; ++i) {
         GCSFilter::Element element(32);
         element[0] = static_cast<unsigned char>(i);
         element[1] = static_cast<unsigned char>(i >> 8);
         elements.insert(std::move(element));
     }
+    return elements;
+}
+
+static void GCSBlockFilterGetHash(benchmark::Bench& bench)
+{
+    auto elements = GenerateGCSTestElements();
+
+    GCSFilter filter({0, 0, BASIC_FILTER_P, BASIC_FILTER_M}, elements);
+    BlockFilter block_filter(BlockFilterType::BASIC_FILTER, {}, filter.GetEncoded(), /*skip_decode_check=*/false);
+
+    bench.run([&] {
+        block_filter.GetHash();
+    });
+}
+
+static void GCSFilterConstruct(benchmark::Bench& bench)
+{
+    auto elements = GenerateGCSTestElements();

     uint64_t siphash_k0 = 0;
-    bench.batch(elements.size()).unit("elem").run([&] {
-        GCSFilter filter({siphash_k0, 0, 20, 1 << 20}, elements);
+    bench.run([&]{
+        GCSFilter filter({siphash_k0, 0, BASIC_FILTER_P, BASIC_FILTER_M}, elements);

         siphash_k0++;
     });
 }

-static void MatchGCSFilter(benchmark::Bench& bench)
+static void GCSFilterDecode(benchmark::Bench& bench)
 {
-    GCSFilter::ElementSet elements;
-    for (int i = 0; i < 10000; ++i) {
-        GCSFilter::Element element(32);
-        element[0] = static_cast<unsigned char>(i);
-        element[1] = static_cast<unsigned char>(i >> 8);
-        elements.insert(std::move(element));
-    }
-    GCSFilter filter({0, 0, 20, 1 << 20}, elements);
+    auto elements = GenerateGCSTestElements();

-    bench.unit("elem").run([&] {
-        filter.Match(GCSFilter::Element());
+    GCSFilter filter({0, 0, BASIC_FILTER_P, BASIC_FILTER_M}, elements);
+    auto encoded = filter.GetEncoded();
+
+    bench.run([&] {
+        GCSFilter filter({0, 0, BASIC_FILTER_P, BASIC_FILTER_M}, encoded, /*skip_decode_check=*/false);
     });
 }

-BENCHMARK(ConstructGCSFilter);
-BENCHMARK(MatchGCSFilter);
+static void GCSFilterDecodeSkipCheck(benchmark::Bench& bench)
+{
+    auto elements = GenerateGCSTestElements();
+
+    GCSFilter filter({0, 0, BASIC_FILTER_P, BASIC_FILTER_M}, elements);
+    auto encoded = filter.GetEncoded();
+
+    bench.run([&] {
+        GCSFilter filter({0, 0, BASIC_FILTER_P, BASIC_FILTER_M}, encoded, /*skip_decode_check=*/true);
+    });
+}
+
+static void GCSFilterMatch(benchmark::Bench& bench)
+{
+    auto elements = GenerateGCSTestElements();
+
+    GCSFilter filter({0, 0, BASIC_FILTER_P, BASIC_FILTER_M}, elements);
+
+    bench.run([&] {
+        filter.Match(GCSFilter::Element());
+    });
+}
+
+BENCHMARK(GCSBlockFilterGetHash);
+BENCHMARK(GCSFilterConstruct);
+BENCHMARK(GCSFilterDecode);
+BENCHMARK(GCSFilterDecodeSkipCheck);
+BENCHMARK(GCSFilterMatch);


@@ -0,0 +1,63 @@
+// Copyright (c) 2022 The Bitcoin Core developers
+// Distributed under the MIT software license, see the accompanying
+// file COPYING or https://www.opensource.org/licenses/mit-license.php.
+
+#include <bench/bench.h>
+#include <bench/data.h>
+#include <chainparams.h>
+#include <test/util/setup_common.h>
+#include <validation.h>
+
+/**
+ * The LoadExternalBlockFile() function is used during -reindex and -loadblock.
+ *
+ * Create a test file that's similar to a datadir/blocks/blk?????.dat file.
+ * It contains around 134 copies of the same block (typical size of real block files).
+ * For each block in the file, LoadExternalBlockFile() won't find its parent,
+ * and so will skip the block. (In the real system, it will re-read the block
+ * from disk later when it encounters its parent.)
+ *
+ * This benchmark measures the performance of deserializing the block (or just
+ * its header, beginning with PR 16981).
+ */
+static void LoadExternalBlockFile(benchmark::Bench& bench)
+{
+    const auto testing_setup{MakeNoLogFileContext<const TestingSetup>(CBaseChainParams::MAIN)};
+
+    // Create a single block as in the blocks files (magic bytes, block size,
+    // block data) as a stream object.
+    const fs::path blkfile{testing_setup.get()->m_path_root / "blk.dat"};
+    CDataStream ss(SER_DISK, 0);
+    auto params{Params()};
+    ss << params.MessageStart();
+    ss << static_cast<uint32_t>(benchmark::data::block813851.size());
+    // We can't use the streaming serialization (ss << benchmark::data::block813851)
+    // because that first writes a compact size.
+    ss.write(MakeByteSpan(benchmark::data::block813851));
+
+    // Create the test file.
+    {
+        // "wb+" is "binary, O_RDWR | O_CREAT | O_TRUNC".
+        FILE* file{fsbridge::fopen(blkfile, "wb+")};
+        // Make the test block file about 128 MB in length.
+        for (size_t i = 0; i < MAX_BLOCKFILE_SIZE / ss.size(); ++i) {
+            if (fwrite(ss.data(), 1, ss.size(), file) != ss.size()) {
+                throw std::runtime_error("write to test file failed\n");
+            }
+        }
+        fclose(file);
+    }
+
+    CChainState& chainstate{testing_setup->m_node.chainman->ActiveChainstate()};
+    std::multimap<uint256, FlatFilePos> blocks_with_unknown_parent;
+    FlatFilePos pos;
+
+    bench.run([&] {
+        // "rb" is "binary, O_RDONLY", positioned to the start of the file.
+        // The file will be closed by LoadExternalBlockFile().
+        FILE* file{fsbridge::fopen(blkfile, "rb")};
+        chainstate.LoadExternalBlockFile(file, &pos, &blocks_with_unknown_parent);
+    });
+
+    fs::remove(blkfile);
+}
+
+BENCHMARK(LoadExternalBlockFile);


@@ -47,7 +47,7 @@ GCSFilter::GCSFilter(const Params& params)
     : m_params(params), m_N(0), m_F(0), m_encoded{0}
 {}

-GCSFilter::GCSFilter(const Params& params, std::vector<unsigned char> encoded_filter)
+GCSFilter::GCSFilter(const Params& params, std::vector<unsigned char> encoded_filter, bool skip_decode_check)
     : m_params(params), m_encoded(std::move(encoded_filter))
 {
     SpanReader stream{GCS_SER_TYPE, GCS_SER_VERSION, m_encoded, 0};
@@ -59,6 +59,8 @@ GCSFilter::GCSFilter(const Params& params, std::vector<unsigned char> encoded_fi
     }
     m_F = static_cast<uint64_t>(m_N) * static_cast<uint64_t>(m_params.m_M);

+    if (skip_decode_check) return;
+
     // Verify that the encoded filter contains exactly N elements. If it has too much or too little
     // data, a std::ios_base::failure exception will be raised.
     BitStreamReader<SpanReader> bitreader{stream};
@@ -219,14 +221,14 @@ static GCSFilter::ElementSet BasicFilterElements(const CBlock& block,
 }

 BlockFilter::BlockFilter(BlockFilterType filter_type, const uint256& block_hash,
-                         std::vector<unsigned char> filter)
+                         std::vector<unsigned char> filter, bool skip_decode_check)
     : m_filter_type(filter_type), m_block_hash(block_hash)
 {
     GCSFilter::Params params;
     if (!BuildParams(params)) {
         throw std::invalid_argument("unknown filter_type");
     }
-    m_filter = GCSFilter(params, std::move(filter));
+    m_filter = GCSFilter(params, std::move(filter), skip_decode_check);
 }

 BlockFilter::BlockFilter(BlockFilterType filter_type, const CBlock& block, const CBlockUndo& block_undo)


@@ -60,7 +60,7 @@ public:
     explicit GCSFilter(const Params& params = Params());

     /** Reconstructs an already-created filter from an encoding. */
-    GCSFilter(const Params& params, std::vector<unsigned char> encoded_filter);
+    GCSFilter(const Params& params, std::vector<unsigned char> encoded_filter, bool skip_decode_check);

     /** Builds a new filter from the params and set of elements. */
     GCSFilter(const Params& params, const ElementSet& elements);
@@ -123,7 +123,7 @@ public:
     //! Reconstruct a BlockFilter from parts.
     BlockFilter(BlockFilterType filter_type, const uint256& block_hash,
-                std::vector<unsigned char> filter);
+                std::vector<unsigned char> filter, bool skip_decode_check);

     //! Construct a new BlockFilter of the specified type from a block.
     BlockFilter(BlockFilterType filter_type, const CBlock& block, const CBlockUndo& block_undo);
@@ -165,7 +165,7 @@ public:
         if (!BuildParams(params)) {
             throw std::ios_base::failure("unknown filter_type");
         }
-        m_filter = GCSFilter(params, std::move(encoded_filter));
+        m_filter = GCSFilter(params, std::move(encoded_filter), /*skip_decode_check=*/false);
     }
 };


@@ -1240,6 +1240,10 @@ bool CDeterministicMNManager::MigrateDBIfNeeded()
     auto dbTx = m_evoDb.BeginTransaction();
     m_evoDb.WriteBestBlock(m_chainstate.m_chain.Tip()->GetBlockHash());
     dbTx->Commit();
+    if (!m_evoDb.CommitRootTransaction()) {
+        LogPrintf("CDeterministicMNManager::%s -- failed to commit to evoDB\n", __func__);
+        return false;
+    }
     return true;
 }

@@ -1351,6 +1355,10 @@ bool CDeterministicMNManager::MigrateDBIfNeeded2()
     auto dbTx = m_evoDb.BeginTransaction();
     m_evoDb.WriteBestBlock(m_chainstate.m_chain.Tip()->GetBlockHash());
     dbTx->Commit();
+    if (!m_evoDb.CommitRootTransaction()) {
+        LogPrintf("CDeterministicMNManager::%s -- failed to commit to evoDB\n", __func__);
+        return false;
+    }
     return true;
 }


@@ -62,9 +62,9 @@ bool BaseIndex::Init()
     LOCK(cs_main);
     CChain& active_chain = m_chainstate->m_chain;
     if (locator.IsNull()) {
-        m_best_block_index = nullptr;
+        SetBestBlockIndex(nullptr);
     } else {
-        m_best_block_index = m_chainstate->FindForkInGlobalIndex(locator);
+        SetBestBlockIndex(m_chainstate->FindForkInGlobalIndex(locator));
     }

     // Note: this will latch to true immediately if the user starts up with an empty
@@ -76,11 +76,7 @@ bool BaseIndex::Init()
         if (!m_best_block_index) {
             // index is not built yet
             // make sure we have all block data back to the genesis
-            const CBlockIndex* block = active_chain.Tip();
-            while (block->pprev && (block->pprev->nStatus & BLOCK_HAVE_DATA)) {
-                block = block->pprev;
-            }
-            prune_violation = block != active_chain.Genesis();
+            prune_violation = GetFirstStoredBlock(active_chain.Tip()) != active_chain.Genesis();
         }
         // in case the index has a best block set and is not fully synced
         // check if we have the required blocks to continue building the index
@@ -138,7 +134,7 @@ void BaseIndex::ThreadSync()
     std::chrono::steady_clock::time_point last_locator_write_time{0s};
     while (true) {
         if (m_interrupt) {
-            m_best_block_index = pindex;
+            SetBestBlockIndex(pindex);
             // No need to handle errors in Commit. If it fails, the error will be already be
             // logged. The best way to recover is to continue, as index cannot be corrupted by
             // a missed commit to disk for an advanced index state.
@@ -150,7 +146,7 @@ void BaseIndex::ThreadSync()
             LOCK(cs_main);
             const CBlockIndex* pindex_next = NextSyncBlock(pindex, m_chainstate->m_chain);
             if (!pindex_next) {
-                m_best_block_index = pindex;
+                SetBestBlockIndex(pindex);
                 m_synced = true;
                 // No need to handle errors in Commit. See rationale above.
                 Commit();
@@ -172,7 +168,7 @@ void BaseIndex::ThreadSync()
         }

         if (last_locator_write_time + SYNC_LOCATOR_WRITE_INTERVAL < current_time) {
-            m_best_block_index = pindex;
+            SetBestBlockIndex(pindex->pprev);
             last_locator_write_time = current_time;
             // No need to handle errors in Commit. See rationale above.
             Commit();
@@ -230,10 +226,10 @@ bool BaseIndex::Rewind(const CBlockIndex* current_tip, const CBlockIndex* new_ti
     // out of sync may be possible but a users fault.
     // In case we reorg beyond the pruned depth, ReadBlockFromDisk would
     // throw and lead to a graceful shutdown
-    m_best_block_index = new_tip;
+    SetBestBlockIndex(new_tip);
     if (!Commit()) {
         // If commit fails, revert the best block index to avoid corruption.
-        m_best_block_index = current_tip;
+        SetBestBlockIndex(current_tip);
         return false;
     }
@@ -274,7 +270,11 @@ void BaseIndex::BlockConnected(const std::shared_ptr<const CBlock>& block, const
     }

     if (WriteBlock(*block, pindex)) {
-        m_best_block_index = pindex;
+        // Setting the best block index is intentionally the last step of this
+        // function, so BlockUntilSyncedToCurrentChain callers waiting for the
+        // best block index to be updated can rely on the block being fully
+        // processed, and the index object being safe to delete.
+        SetBestBlockIndex(pindex);
     } else {
         FatalError("%s: Failed to write block %s to index",
                    __func__, pindex->GetBlockHash().ToString());
@@ -381,3 +381,21 @@ IndexSummary BaseIndex::GetSummary() const
     summary.best_block_height = m_best_block_index ? m_best_block_index.load()->nHeight : 0;
     return summary;
 }
+
+void BaseIndex::SetBestBlockIndex(const CBlockIndex* block) {
+    assert(!fPruneMode || AllowPrune());
+
+    if (AllowPrune() && block) {
+        PruneLockInfo prune_lock;
+        prune_lock.height_first = block->nHeight;
+        WITH_LOCK(::cs_main, m_chainstate->m_blockman.UpdatePruneLock(GetName(), prune_lock));
+    }
+
+    // Intentionally set m_best_block_index as the last step in this function,
+    // after updating prune locks above, and after making any other references
+    // to *this, so the BlockUntilSyncedToCurrentChain function (which checks
+    // m_best_block_index as an optimization) can be used to wait for the last
+    // BlockConnected notification and safely assume that prune locks are
+    // updated and that the index object is safe to delete.
+    m_best_block_index = block;
+}


@@ -81,6 +81,9 @@ private:
     /// to a chain reorganization), the index must halt until Commit succeeds or else it could end up
     /// getting corrupted.
     bool Commit();

+    virtual bool AllowPrune() const = 0;
+
 protected:
     CChainState* m_chainstate{nullptr};
@@ -109,6 +112,9 @@ protected:
     /// Get the name of the index for display in logs.
     virtual const char* GetName() const = 0;

+    /// Update the internal best block index as well as the prune lock.
+    void SetBestBlockIndex(const CBlockIndex* block);
+
 public:
     /// Destructor interrupts sync thread if running and blocks until it exits.
     virtual ~BaseIndex();


@@ -5,6 +5,7 @@
 #include <map>

 #include <dbwrapper.h>
+#include <hash.h>
 #include <index/blockfilterindex.h>
 #include <node/blockstorage.h>
 #include <serialize.h>
@@ -146,18 +147,22 @@ bool BlockFilterIndex::CommitInternal(CDBBatch& batch)
     return BaseIndex::CommitInternal(batch);
 }

-bool BlockFilterIndex::ReadFilterFromDisk(const FlatFilePos& pos, BlockFilter& filter) const
+bool BlockFilterIndex::ReadFilterFromDisk(const FlatFilePos& pos, const uint256& hash, BlockFilter& filter) const
 {
     CAutoFile filein(m_filter_fileseq->Open(pos, true), SER_DISK, CLIENT_VERSION);
     if (filein.IsNull()) {
         return false;
     }

+    // Check that the hash of the encoded_filter matches the one stored in the db.
     uint256 block_hash;
     std::vector<uint8_t> encoded_filter;
     try {
         filein >> block_hash >> encoded_filter;
-        filter = BlockFilter(GetFilterType(), block_hash, std::move(encoded_filter));
+        uint256 result;
+        CHash256().Write(encoded_filter).Finalize(result);
+        if (result != hash) return error("Checksum mismatch in filter decode.");
+        filter = BlockFilter(GetFilterType(), block_hash, std::move(encoded_filter), /*skip_decode_check=*/true);
     }
     catch (const std::exception& e) {
         return error("%s: Failed to deserialize block filter from disk: %s", __func__, e.what());
@@ -384,7 +389,7 @@ bool BlockFilterIndex::LookupFilter(const CBlockIndex* block_index, BlockFilter&
         return false;
     }

-    return ReadFilterFromDisk(entry.pos, filter_out);
+    return ReadFilterFromDisk(entry.pos, entry.hash, filter_out);
 }

 bool BlockFilterIndex::LookupFilterHeader(const CBlockIndex* block_index, uint256& header_out)
@@ -428,7 +433,7 @@ bool BlockFilterIndex::LookupFilterRange(int start_height, const CBlockIndex* st
     filters_out.resize(entries.size());
     auto filter_pos_it = filters_out.begin();
     for (const auto& entry : entries) {
-        if (!ReadFilterFromDisk(entry.pos, *filter_pos_it)) {
+        if (!ReadFilterFromDisk(entry.pos, entry.hash, *filter_pos_it)) {
             return false;
         }
         ++filter_pos_it;


@@ -32,13 +32,15 @@ private:
     FlatFilePos m_next_filter_pos;
     std::unique_ptr<FlatFileSeq> m_filter_fileseq;

-    bool ReadFilterFromDisk(const FlatFilePos& pos, BlockFilter& filter) const;
+    bool ReadFilterFromDisk(const FlatFilePos& pos, const uint256& hash, BlockFilter& filter) const;
     size_t WriteFilterToDisk(FlatFilePos& pos, const BlockFilter& filter);

     Mutex m_cs_headers_cache;
     /** cache of block hash to filter header, to avoid disk access when responding to getcfcheckpt. */
     std::unordered_map<uint256, uint256, FilterHeaderHasher> m_headers_cache GUARDED_BY(m_cs_headers_cache);

+    bool AllowPrune() const override { return true; }
+
 protected:
     bool Init() override;


@@ -36,6 +36,8 @@ private:
     bool ReverseBlock(const CBlock& block, const CBlockIndex* pindex);

+    bool AllowPrune() const override { return true; }
+
 protected:
     bool Init() override;


@@ -20,6 +20,8 @@ protected:
 private:
     const std::unique_ptr<DB> m_db;

+    bool AllowPrune() const override { return false; }
+
 protected:
     bool WriteBlock(const CBlock& block, const CBlockIndex* pindex) override;


@@ -538,7 +538,7 @@ void SetupServerArgs(ArgsManager& argsman)
         -GetNumCores(), MAX_SCRIPTCHECK_THREADS, DEFAULT_SCRIPTCHECK_THREADS), ArgsManager::ALLOW_ANY, OptionsCategory::OPTIONS);
     argsman.AddArg("-persistmempool", strprintf("Whether to save the mempool on shutdown and load on restart (default: %u)", DEFAULT_PERSIST_MEMPOOL), ArgsManager::ALLOW_ANY, OptionsCategory::OPTIONS);
     argsman.AddArg("-pid=<file>", strprintf("Specify pid file. Relative paths will be prefixed by a net-specific datadir location. (default: %s)", BITCOIN_PID_FILENAME), ArgsManager::ALLOW_ANY, OptionsCategory::OPTIONS);
-    argsman.AddArg("-prune=<n>", strprintf("Reduce storage requirements by enabling pruning (deleting) of old blocks. This allows the pruneblockchain RPC to be called to delete specific blocks, and enables automatic pruning of old blocks if a target size in MiB is provided. This mode is incompatible with -txindex, -coinstatsindex, -rescan and -disablegovernance=false. "
+    argsman.AddArg("-prune=<n>", strprintf("Reduce storage requirements by enabling pruning (deleting) of old blocks. This allows the pruneblockchain RPC to be called to delete specific blocks, and enables automatic pruning of old blocks if a target size in MiB is provided. This mode is incompatible with -txindex, -rescan and -disablegovernance=false. "
         "Warning: Reverting this setting requires re-downloading the entire blockchain. "
         "(default: 0 = disable pruning blocks, 1 = allow manual pruning via RPC, >%u = automatically prune block files to stay under the specified target size in MiB)", MIN_DISK_SPACE_FOR_BLOCK_FILES / 1024 / 1024), ArgsManager::ALLOW_ANY, OptionsCategory::OPTIONS);
     argsman.AddArg("-settings=<file>", strprintf("Specify path to dynamic settings data file. Can be disabled with -nosettings. File is written at runtime and not meant to be edited by users (use %s instead for custom settings). Relative paths will be prefixed by datadir location. (default: %s)", BITCOIN_CONF_FILENAME, BITCOIN_SETTINGS_FILENAME), ArgsManager::ALLOW_ANY, OptionsCategory::OPTIONS);
@@ -1162,12 +1162,12 @@ bool AppInitParameterInteraction(const ArgsManager& args)
         nLocalServices = ServiceFlags(nLocalServices | NODE_COMPACT_FILTERS);
     }
-    // if using block pruning, then disallow txindex, coinstatsindex and require disabling governance validation
     if (args.GetArg("-prune", 0)) {
         if (args.GetBoolArg("-txindex", DEFAULT_TXINDEX))
             return InitError(_("Prune mode is incompatible with -txindex."));
-        if (args.GetBoolArg("-coinstatsindex", DEFAULT_COINSTATSINDEX))
-            return InitError(_("Prune mode is incompatible with -coinstatsindex."));
+        if (args.GetBoolArg("-reindex-chainstate", false)) {
+            return InitError(_("Prune mode is incompatible with -reindex-chainstate. Use full -reindex instead."));
+        }
         if (!args.GetBoolArg("-disablegovernance", !DEFAULT_GOVERNANCE_ENABLE)) {
             return InitError(_("Prune mode is incompatible with -disablegovernance=false."));
         }


@@ -24,6 +24,7 @@
 #include <walletinitinterface.h>
 #include <map>
+#include <unordered_map>
 std::atomic_bool fImporting(false);
 std::atomic_bool fReindex(false);
@@ -249,6 +250,11 @@ void BlockManager::FindFilesToPrune(std::set<int>& setFilesToPrune, uint64_t nPr
              nLastBlockWeCanPrune, count);
 }
+void BlockManager::UpdatePruneLock(const std::string& name, const PruneLockInfo& lock_info) {
+    AssertLockHeld(::cs_main);
+    m_prune_locks[name] = lock_info;
+}
 CBlockIndex* BlockManager::InsertBlockIndex(const uint256& hash)
 {
     AssertLockHeld(cs_main);
@@ -421,6 +427,16 @@ bool BlockManager::IsBlockPruned(const CBlockIndex* pblockindex)
     return (m_have_pruned && !(pblockindex->nStatus & BLOCK_HAVE_DATA) && pblockindex->nTx > 0);
 }
+const CBlockIndex* GetFirstStoredBlock(const CBlockIndex* start_block) {
+    AssertLockHeld(::cs_main);
+    assert(start_block);
+    const CBlockIndex* last_block = start_block;
+    while (last_block->pprev && (last_block->pprev->nStatus & BLOCK_HAVE_DATA)) {
+        last_block = last_block->pprev;
+    }
+    return last_block;
+}
 // If we're using -prune with -reindex, then delete block files that will be ignored by the
 // reindex. Since reindexing works by starting at block file 0 and looping until a blockfile
 // is missing, do the same here to delete any later block files after a gap. Also delete all
@@ -774,19 +790,24 @@ bool ReadBlockFromDisk(CBlock& block, const CBlockIndex* pindex, const Consensus
     return true;
 }
-/** Store block on disk. If dbp is non-nullptr, the file is known to already reside on disk */
 FlatFilePos BlockManager::SaveBlockToDisk(const CBlock& block, int nHeight, CChain& active_chain, const CChainParams& chainparams, const FlatFilePos* dbp)
 {
     unsigned int nBlockSize = ::GetSerializeSize(block, CLIENT_VERSION);
     FlatFilePos blockPos;
-    if (dbp != nullptr) {
+    const auto position_known {dbp != nullptr};
+    if (position_known) {
         blockPos = *dbp;
+    } else {
+        // when known, blockPos.nPos points at the offset of the block data in the blk file. that already accounts for
+        // the serialization header present in the file (the 4 magic message start bytes + the 4 length bytes = 8 bytes = BLOCK_SERIALIZATION_HEADER_SIZE).
+        // we add BLOCK_SERIALIZATION_HEADER_SIZE only for new blocks since they will have the serialization header added when written to disk.
+        nBlockSize += static_cast<unsigned int>(BLOCK_SERIALIZATION_HEADER_SIZE);
     }
-    if (!FindBlockPos(blockPos, nBlockSize + 8, nHeight, active_chain, block.GetBlockTime(), dbp != nullptr)) {
+    if (!FindBlockPos(blockPos, nBlockSize, nHeight, active_chain, block.GetBlockTime(), position_known)) {
         error("%s: FindBlockPos failed", __func__);
         return FlatFilePos();
     }
-    if (dbp == nullptr) {
+    if (!position_known) {
         if (!WriteBlockToDisk(block, blockPos, chainparams.MessageStart())) {
             AbortNode("Failed to write block");
             return FlatFilePos();


@@ -12,6 +12,7 @@
 #include <txdb.h>
 #include <cstdint>
+#include <unordered_map>
 #include <vector>
 extern RecursiveMutex cs_main;
@@ -46,6 +47,9 @@ static const unsigned int UNDOFILE_CHUNK_SIZE = 0x100000; // 1 MiB
 /** The maximum size of a blk?????.dat file (since 0.8) */
 static const unsigned int MAX_BLOCKFILE_SIZE = 0x8000000; // 128 MiB
+/** Size of header written by WriteBlockToDisk before a serialized CBlock */
+static constexpr size_t BLOCK_SERIALIZATION_HEADER_SIZE = CMessageHeader::MESSAGE_START_SIZE + sizeof(unsigned int);
 extern std::atomic_bool fImporting;
 extern std::atomic_bool fReindex;
 /** Pruning-related variables and constants */
@@ -77,6 +81,10 @@ struct CBlockIndexHeightOnlyComparator {
     bool operator()(const CBlockIndex* pa, const CBlockIndex* pb) const;
 };
+struct PruneLockInfo {
+    int height_first{std::numeric_limits<int>::max()}; //! Height of earliest block that should be kept and not pruned
+};
 /**
  * Maintains a tree of blocks (stored in `m_block_index`) which is consulted
  * to determine where the most-work tip is.
@@ -137,6 +145,14 @@ private:
     /** Dirty block file entries. */
     std::set<int> m_dirty_fileinfo;
+    /**
+     * Map from external index name to oldest block that must not be pruned.
+     *
+     * @note Internally, only blocks at height (height_first - PRUNE_LOCK_BUFFER - 1) and
+     * below will be pruned, but callers should avoid assuming any particular buffer size.
+     */
+    std::unordered_map<std::string, PruneLockInfo> m_prune_locks GUARDED_BY(::cs_main);
 public:
     BlockMap m_block_index GUARDED_BY(cs_main);
     PrevBlockMap m_prev_block_index GUARDED_BY(cs_main);
@@ -172,6 +188,7 @@ public:
     bool WriteUndoDataForBlock(const CBlockUndo& blockundo, BlockValidationState& state, CBlockIndex* pindex, const CChainParams& chainparams)
         EXCLUSIVE_LOCKS_REQUIRED(::cs_main);
+    /** Store block on disk. If dbp is not nullptr, then it provides the known position of the block within a block file on disk. */
     FlatFilePos SaveBlockToDisk(const CBlock& block, int nHeight, CChain& active_chain, const CChainParams& chainparams, const FlatFilePos* dbp);
     /** Calculate the amount of disk space the block & undo files currently use */
@@ -185,8 +202,14 @@ public:
     //! Check whether the block associated with this index entry is pruned or not.
     bool IsBlockPruned(const CBlockIndex* pblockindex) EXCLUSIVE_LOCKS_REQUIRED(::cs_main);
+    //! Create or update a prune lock identified by its name
+    void UpdatePruneLock(const std::string& name, const PruneLockInfo& lock_info) EXCLUSIVE_LOCKS_REQUIRED(::cs_main);
 };
+//! Find the first block that is not pruned
+const CBlockIndex* GetFirstStoredBlock(const CBlockIndex* start_block) EXCLUSIVE_LOCKS_REQUIRED(::cs_main);
 void CleanupBlockRevFiles();
 /** Open a block file (blk?????.dat) */


@@ -39,6 +39,7 @@
 #include <sync.h>
 #include <txmempool.h>
 #include <undo.h>
+#include <util/check.h>
 #include <util/strencodings.h>
 #include <util/string.h>
 #include <util/system.h>
@@ -1359,12 +1360,10 @@ static RPCHelpMan pruneblockchain()
     }
     PruneBlockFilesManual(active_chainstate, height);
-    const CBlockIndex* block = active_chain.Tip();
-    CHECK_NONFATAL(block);
-    while (block->pprev && (block->pprev->nStatus & BLOCK_HAVE_DATA)) {
-        block = block->pprev;
-    }
-    return uint64_t(block->nHeight);
+    const CBlockIndex* block = CHECK_NONFATAL(active_chain.Tip());
+    const CBlockIndex* last_block = GetFirstStoredBlock(block);
+    return static_cast<uint64_t>(last_block->nHeight);
 },
     };
 }
@@ -1635,14 +1634,13 @@ static RPCHelpMan verifychain()
     const int check_depth{request.params[1].isNull() ? DEFAULT_CHECKBLOCKS : request.params[1].get_int()};
     const NodeContext& node = EnsureAnyNodeContext(request.context);
-    CHECK_NONFATAL(node.evodb);
     ChainstateManager& chainman = EnsureChainman(node);
     LOCK(cs_main);
     CChainState& active_chainstate = chainman.ActiveChainstate();
     return CVerifyDB().VerifyDB(
-        active_chainstate, Params(), active_chainstate.CoinsTip(), *node.evodb, check_level, check_depth);
+        active_chainstate, Params(), active_chainstate.CoinsTip(), *CHECK_NONFATAL(node.evodb), check_level, check_depth);
 },
     };
 }
@@ -1782,13 +1780,11 @@ RPCHelpMan getblockchaininfo()
     LOCK(cs_main);
     CChainState& active_chainstate = chainman.ActiveChainstate();
-    const CBlockIndex* tip = active_chainstate.m_chain.Tip();
-    CHECK_NONFATAL(tip);
+    const CBlockIndex* tip = CHECK_NONFATAL(active_chainstate.m_chain.Tip());
     const int height = tip->nHeight;
-    CHECK_NONFATAL(node.mnhf_manager);
-    const auto ehfSignals = node.mnhf_manager->GetSignalsStage(tip);
+    const auto ehfSignals = CHECK_NONFATAL(node.mnhf_manager)->GetSignalsStage(tip);
     UniValue obj(UniValue::VOBJ);
     if (args.IsArgSet("-devnet")) {
@@ -1808,13 +1804,8 @@ RPCHelpMan getblockchaininfo()
     obj.pushKV("size_on_disk", chainman.m_blockman.CalculateCurrentUsage());
     obj.pushKV("pruned", fPruneMode);
     if (fPruneMode) {
-        const CBlockIndex* block = tip;
-        CHECK_NONFATAL(block);
-        while (block->pprev && (block->pprev->nStatus & BLOCK_HAVE_DATA)) {
-            block = block->pprev;
-        }
-        obj.pushKV("pruneheight", block->nHeight);
+        const CBlockIndex* block = CHECK_NONFATAL(tip);
+        obj.pushKV("pruneheight", GetFirstStoredBlock(block)->nHeight);
         // if 0, execution bypasses the whole if block.
         bool automatic_pruning{args.GetArg("-prune", 0) != 1};
@@ -2864,10 +2855,8 @@ static RPCHelpMan scantxoutset()
         LOCK(cs_main);
         CChainState& active_chainstate = chainman.ActiveChainstate();
         active_chainstate.ForceFlushStateToDisk();
-        pcursor = active_chainstate.CoinsDB().Cursor();
-        CHECK_NONFATAL(pcursor);
-        tip = active_chainstate.m_chain.Tip();
-        CHECK_NONFATAL(tip);
+        pcursor = CHECK_NONFATAL(active_chainstate.CoinsDB().Cursor());
+        tip = CHECK_NONFATAL(active_chainstate.m_chain.Tip());
     }
     bool res = FindScriptPubKey(g_scan_progress, g_should_abort_scan, count, pcursor.get(), needles, coins, node.rpc_interruption_point);
     result.pushKV("success", res);
@@ -3062,8 +3051,7 @@ UniValue CreateUTXOSnapshot(NodeContext& node, CChainState& chainstate, CAutoFil
     }
     pcursor = chainstate.CoinsDB().Cursor();
-    tip = chainstate.m_blockman.LookupBlockIndex(stats.hashBlock);
-    CHECK_NONFATAL(tip);
+    tip = CHECK_NONFATAL(chainstate.m_blockman.LookupBlockIndex(stats.hashBlock));
 }
 SnapshotMetadata metadata{tip->GetBlockHash(), stats.coins_count, tip->nChainTx};


@@ -9,6 +9,7 @@
 #include <rpc/blockchain.h>
 #include <rpc/server.h>
 #include <rpc/server_util.h>
+#include <util/check.h>
 #include <rpc/util.h>
 #include <util/strencodings.h>
@@ -83,11 +84,8 @@ static RPCHelpMan coinjoin_reset()
     ValidateCoinJoinArguments();
-    CHECK_NONFATAL(node.coinjoin_loader);
-    auto cj_clientman = node.coinjoin_loader->walletman().Get(wallet->GetName());
-    CHECK_NONFATAL(cj_clientman);
-    cj_clientman->ResetPool();
+    auto cj_clientman = CHECK_NONFATAL(node.coinjoin_loader)->walletman().Get(wallet->GetName());
+    CHECK_NONFATAL(cj_clientman)->ResetPool();
     return "Mixing was reset";
 },
@@ -126,10 +124,7 @@ static RPCHelpMan coinjoin_start()
         throw JSONRPCError(RPC_WALLET_UNLOCK_NEEDED, "Error: Please unlock wallet for mixing with walletpassphrase first.");
     }
-    CHECK_NONFATAL(node.coinjoin_loader);
-    auto cj_clientman = node.coinjoin_loader->walletman().Get(wallet->GetName());
-    CHECK_NONFATAL(cj_clientman);
+    auto cj_clientman = CHECK_NONFATAL(CHECK_NONFATAL(node.coinjoin_loader)->walletman().Get(wallet->GetName()));
     if (!cj_clientman->StartMixing()) {
         throw JSONRPCError(RPC_INTERNAL_ERROR, "Mixing has been started already.");
     }
@@ -450,8 +445,7 @@ static RPCHelpMan getcoinjoininfo()
         return obj;
     }
-    auto manager = node.coinjoin_loader->walletman().Get(wallet->GetName());
-    CHECK_NONFATAL(manager != nullptr);
+    auto* manager = CHECK_NONFATAL(node.coinjoin_loader->walletman().Get(wallet->GetName()));
     manager->GetJsonInfo(obj);
     std::string warning_msg{""};


@@ -26,6 +26,7 @@
 #include <rpc/server.h>
 #include <rpc/server_util.h>
 #include <rpc/util.h>
+#include <util/check.h>
 #include <util/moneystr.h>
 #include <util/translation.h>
 #include <validation.h>
@@ -629,8 +630,7 @@ static UniValue protx_register_common_wrapper(const JSONRPCRequest& request,
     const NodeContext& node = EnsureAnyNodeContext(request.context);
     const ChainstateManager& chainman = EnsureChainman(node);
-    CHECK_NONFATAL(node.chain_helper);
-    CChainstateHelper& chain_helper = *node.chain_helper;
+    CChainstateHelper& chain_helper = *CHECK_NONFATAL(node.chain_helper);
     const bool isEvoRequested = mnType == MnType::Evo;
@@ -856,8 +856,7 @@ static RPCHelpMan protx_register_submit()
     const NodeContext& node = EnsureAnyNodeContext(request.context);
     const ChainstateManager& chainman = EnsureChainman(node);
-    CHECK_NONFATAL(node.chain_helper);
-    CChainstateHelper& chain_helper = *node.chain_helper;
+    CChainstateHelper& chain_helper = *CHECK_NONFATAL(node.chain_helper);
     std::shared_ptr<CWallet> const wallet = GetWalletForJSONRPCRequest(request);
     if (!wallet) return NullUniValue;
@@ -955,11 +954,8 @@ static UniValue protx_update_service_common_wrapper(const JSONRPCRequest& reques
     const NodeContext& node = EnsureAnyNodeContext(request.context);
     const ChainstateManager& chainman = EnsureChainman(node);
-    CHECK_NONFATAL(node.dmnman);
-    CDeterministicMNManager& dmnman = *node.dmnman;
-    CHECK_NONFATAL(node.chain_helper);
-    CChainstateHelper& chain_helper = *node.chain_helper;
+    CDeterministicMNManager& dmnman = *CHECK_NONFATAL(node.dmnman);
+    CChainstateHelper& chain_helper = *CHECK_NONFATAL(node.chain_helper);
     const bool isEvoRequested = mnType == MnType::Evo;
     std::shared_ptr<CWallet> const wallet = GetWalletForJSONRPCRequest(request);
@@ -1093,11 +1089,8 @@ static RPCHelpMan protx_update_registrar_wrapper(bool specific_legacy_bls_scheme
     const NodeContext& node = EnsureAnyNodeContext(request.context);
     const ChainstateManager& chainman = EnsureChainman(node);
-    CHECK_NONFATAL(node.dmnman);
-    CDeterministicMNManager& dmnman = *node.dmnman;
-    CHECK_NONFATAL(node.chain_helper);
-    CChainstateHelper& chain_helper = *node.chain_helper;
+    CDeterministicMNManager& dmnman = *CHECK_NONFATAL(node.dmnman);
+    CChainstateHelper& chain_helper = *CHECK_NONFATAL(node.chain_helper);
     std::shared_ptr<CWallet> const wallet = GetWalletForJSONRPCRequest(request);
     if (!wallet) return NullUniValue;
@@ -1208,12 +1201,8 @@ static RPCHelpMan protx_revoke()
     const NodeContext& node = EnsureAnyNodeContext(request.context);
     const ChainstateManager& chainman = EnsureChainman(node);
-    CHECK_NONFATAL(node.dmnman);
-    CDeterministicMNManager& dmnman = *node.dmnman;
-    CHECK_NONFATAL(node.chain_helper);
-    CChainstateHelper& chain_helper = *node.chain_helper;
+    CDeterministicMNManager& dmnman = *CHECK_NONFATAL(node.dmnman);
+    CChainstateHelper& chain_helper = *CHECK_NONFATAL(node.chain_helper);
     std::shared_ptr<CWallet> const pwallet = GetWalletForJSONRPCRequest(request);
     if (!pwallet) return NullUniValue;
@@ -1371,11 +1360,8 @@ static RPCHelpMan protx_list()
     const NodeContext& node = EnsureAnyNodeContext(request.context);
     const ChainstateManager& chainman = EnsureChainman(node);
-    CHECK_NONFATAL(node.dmnman);
-    CDeterministicMNManager& dmnman = *node.dmnman;
-    CHECK_NONFATAL(node.mn_metaman);
-    CMasternodeMetaMan& mn_metaman = *node.mn_metaman;
+    CDeterministicMNManager& dmnman = *CHECK_NONFATAL(node.dmnman);
+    CMasternodeMetaMan& mn_metaman = *CHECK_NONFATAL(node.mn_metaman);
     std::shared_ptr<CWallet> wallet{nullptr};
 #ifdef ENABLE_WALLET
@@ -1487,11 +1473,8 @@ static RPCHelpMan protx_info()
     const NodeContext& node = EnsureAnyNodeContext(request.context);
     const ChainstateManager& chainman = EnsureChainman(node);
-    CHECK_NONFATAL(node.dmnman);
-    CDeterministicMNManager& dmnman = *node.dmnman;
-    CHECK_NONFATAL(node.mn_metaman);
-    CMasternodeMetaMan& mn_metaman = *node.mn_metaman;
+    CDeterministicMNManager& dmnman = *CHECK_NONFATAL(node.dmnman);
+    CMasternodeMetaMan& mn_metaman = *CHECK_NONFATAL(node.mn_metaman);
     std::shared_ptr<CWallet> wallet{nullptr};
 #ifdef ENABLE_WALLET
@@ -1561,11 +1544,8 @@ static RPCHelpMan protx_diff()
     const NodeContext& node = EnsureAnyNodeContext(request.context);
     const ChainstateManager& chainman = EnsureChainman(node);
-    CHECK_NONFATAL(node.dmnman);
-    CDeterministicMNManager& dmnman = *node.dmnman;
-    CHECK_NONFATAL(node.llmq_ctx);
-    const LLMQContext& llmq_ctx = *node.llmq_ctx;
+    CDeterministicMNManager& dmnman = *CHECK_NONFATAL(node.dmnman);
+    const LLMQContext& llmq_ctx = *CHECK_NONFATAL(node.llmq_ctx);
     LOCK(cs_main);
     uint256 baseBlockHash = ParseBlock(request.params[0], chainman, "baseBlock");
@@ -1622,8 +1602,7 @@ static RPCHelpMan protx_listdiff()
     const NodeContext& node = EnsureAnyNodeContext(request.context);
     const ChainstateManager& chainman = EnsureChainman(node);
-    CHECK_NONFATAL(node.dmnman);
-    CDeterministicMNManager& dmnman = *node.dmnman;
+    CDeterministicMNManager& dmnman = *CHECK_NONFATAL(node.dmnman);
     LOCK(cs_main);
     UniValue ret(UniValue::VOBJ);


@@ -21,6 +21,7 @@
 #include <rpc/server_util.h>
 #include <rpc/util.h>
 #include <timedata.h>
+#include <util/check.h>
 #include <util/strencodings.h>
 #include <util/system.h>
 #include <validation.h>
@@ -194,12 +195,11 @@ static RPCHelpMan gobject_prepare()
     const NodeContext& node = EnsureAnyNodeContext(request.context);
     const ChainstateManager& chainman = EnsureChainman(node);
-    CHECK_NONFATAL(node.dmnman);
     {
         LOCK(cs_main);
         std::string strError = "";
-        if (!govobj.IsValidLocally(node.dmnman->GetListAtChainTip(), chainman, strError, false))
+        if (!govobj.IsValidLocally(CHECK_NONFATAL(node.dmnman)->GetListAtChainTip(), chainman, strError, false))
             throw JSONRPCError(RPC_INTERNAL_ERROR, "Governance object is not valid - " + govobj.GetHash().ToString() + " - " + strError);
     }
@@ -303,14 +303,12 @@ static RPCHelpMan gobject_submit()
 {
     const NodeContext& node = EnsureAnyNodeContext(request.context);
     const ChainstateManager& chainman = EnsureChainman(node);
-    CHECK_NONFATAL(node.dmnman);
-    CHECK_NONFATAL(node.govman);
     if(!node.mn_sync->IsBlockchainSynced()) {
         throw JSONRPCError(RPC_CLIENT_IN_INITIAL_DOWNLOAD, "Must wait for client to sync with masternode network. Try again in a minute or so.");
     }
-    auto mnList = node.dmnman->GetListAtChainTip();
+    auto mnList = CHECK_NONFATAL(node.dmnman)->GetListAtChainTip();
     if (node.mn_activeman) {
         const bool fMnFound = mnList.HasValidMNByCollateral(node.mn_activeman->GetOutPoint());
@@ -373,7 +371,6 @@ static RPCHelpMan gobject_submit()
     }
     LOCK2(cs_main, mempool.cs);
-    CHECK_NONFATAL(node.dmnman);
     std::string strError;
     if (!govobj.IsValidLocally(node.dmnman->GetListAtChainTip(), chainman, strError, fMissingConfirmations, true) && !fMissingConfirmations) {
@@ -384,7 +381,7 @@ static RPCHelpMan gobject_submit()
     // RELAY THIS OBJECT
     // Reject if rate check fails but don't update buffer
-    if (!node.govman->MasternodeRateCheck(govobj)) {
+    if (!CHECK_NONFATAL(node.govman)->MasternodeRateCheck(govobj)) {
         LogPrintf("gobject(submit) -- Object submission rejected because of rate check failure - hash = %s\n", strHash);
         throw JSONRPCError(RPC_INVALID_PARAMETER, "Object creation rate limit exceeded");
     }
@@ -393,9 +390,8 @@ static RPCHelpMan gobject_submit()
     PeerManager& peerman = EnsurePeerman(node);
     if (fMissingConfirmations) {
-        CHECK_NONFATAL(node.mn_sync);
         node.govman->AddPostponedObject(govobj);
-        govobj.Relay(peerman, *node.mn_sync);
+        govobj.Relay(peerman, *CHECK_NONFATAL(node.mn_sync));
     } else {
         node.govman->AddGovernanceObject(govobj, peerman);
     }
@@ -452,7 +448,7 @@ static UniValue VoteWithMasternodes(const JSONRPCRequest& request, const CWallet
     int nSuccessful = 0;
     int nFailed = 0;
-    auto mnList = node.dmnman->GetListAtChainTip();
+    auto mnList = CHECK_NONFATAL(node.dmnman)->GetListAtChainTip();
     UniValue resultsObj(UniValue::VOBJ);
@@ -529,7 +525,6 @@ static RPCHelpMan gobject_vote_many()
     if (!wallet) return NullUniValue;
     const NodeContext& node = EnsureAnyNodeContext(request.context);
-    CHECK_NONFATAL(node.dmnman);
     uint256 hash(ParseHashV(request.params[0], "Object hash"));
     std::string strVoteSignal = request.params[1].get_str();
@@ -551,7 +546,7 @@ static RPCHelpMan gobject_vote_many()
     std::map<uint256, CKeyID> votingKeys;
-    auto mnList = node.dmnman->GetListAtChainTip();
+    auto mnList = CHECK_NONFATAL(node.dmnman)->GetListAtChainTip();
     mnList.ForEachMN(true, [&](auto& dmn) {
         const bool is_mine = CheckWalletOwnsKey(*wallet, dmn.pdmnState->keyIDVoting);
         if (is_mine) {
@@ -583,7 +578,6 @@ static RPCHelpMan gobject_vote_alias()
     if (!wallet) return NullUniValue;
     const NodeContext& node = EnsureAnyNodeContext(request.context);
-    CHECK_NONFATAL(node.dmnman);
     uint256 hash(ParseHashV(request.params[0], "Object hash"));
     std::string strVoteSignal = request.params[1].get_str();
@@ -604,7 +598,7 @@ static RPCHelpMan gobject_vote_alias()
     EnsureWalletIsUnlocked(*wallet);
     uint256 proTxHash(ParseHashV(request.params[3], "protx-hash"));
-    auto dmn = node.dmnman->GetListAtChainTip().GetValidMN(proTxHash);
+    auto dmn = CHECK_NONFATAL(node.dmnman)->GetListAtChainTip().GetValidMN(proTxHash);
     if (!dmn) {
         throw JSONRPCError(RPC_INVALID_PARAMETER, "Invalid or unknown proTxHash");
     }
@@ -718,11 +712,10 @@ static RPCHelpMan gobject_list_helper(const bool make_a_diff)
     const NodeContext& node = EnsureAnyNodeContext(request.context);
     const ChainstateManager& chainman = EnsureChainman(node);
-    CHECK_NONFATAL(node.dmnman);
-    CHECK_NONFATAL(node.govman);
     const int64_t last_time = make_a_diff ? node.govman->GetLastDiffTime() : 0;
-    return ListObjects(*node.govman, node.dmnman->GetListAtChainTip(), chainman, strCachedSignal, strType, last_time);
+    return ListObjects(*CHECK_NONFATAL(node.govman), CHECK_NONFATAL(node.dmnman)->GetListAtChainTip(), chainman,
+                       strCachedSignal, strType, last_time);
 },
     };
 }
@@ -759,8 +752,8 @@ static RPCHelpMan gobject_get()
     // FIND THE GOVERNANCE OBJECT THE USER IS LOOKING FOR
     const NodeContext& node = EnsureAnyNodeContext(request.context);
const ChainstateManager& chainman = EnsureChainman(node); const ChainstateManager& chainman = EnsureChainman(node);
CHECK_NONFATAL(node.dmnman && node.govman);
CHECK_NONFATAL(node.govman);
LOCK2(cs_main, node.govman->cs); LOCK2(cs_main, node.govman->cs);
const CGovernanceObject* pGovObj = node.govman->FindConstGovernanceObject(hash); const CGovernanceObject* pGovObj = node.govman->FindConstGovernanceObject(hash);
@ -785,7 +778,7 @@ static RPCHelpMan gobject_get()
// SHOW (MUCH MORE) INFORMATION ABOUT VOTES FOR GOVERNANCE OBJECT (THAN LIST/DIFF ABOVE) // SHOW (MUCH MORE) INFORMATION ABOUT VOTES FOR GOVERNANCE OBJECT (THAN LIST/DIFF ABOVE)
// -- FUNDING VOTING RESULTS // -- FUNDING VOTING RESULTS
auto tip_mn_list = node.dmnman->GetListAtChainTip(); auto tip_mn_list = CHECK_NONFATAL(node.dmnman)->GetListAtChainTip();
UniValue objFundingResult(UniValue::VOBJ); UniValue objFundingResult(UniValue::VOBJ);
objFundingResult.pushKV("AbsoluteYesCount", pGovObj->GetAbsoluteYesCount(tip_mn_list, VOTE_SIGNAL_FUNDING)); objFundingResult.pushKV("AbsoluteYesCount", pGovObj->GetAbsoluteYesCount(tip_mn_list, VOTE_SIGNAL_FUNDING));
@ -858,8 +851,8 @@ static RPCHelpMan gobject_getcurrentvotes()
// FIND OBJECT USER IS LOOKING FOR // FIND OBJECT USER IS LOOKING FOR
const NodeContext& node = EnsureAnyNodeContext(request.context); const NodeContext& node = EnsureAnyNodeContext(request.context);
CHECK_NONFATAL(node.govman);
CHECK_NONFATAL(node.govman);
LOCK(node.govman->cs); LOCK(node.govman->cs);
const CGovernanceObject* pGovObj = node.govman->FindConstGovernanceObject(hash); const CGovernanceObject* pGovObj = node.govman->FindConstGovernanceObject(hash);
@ -872,10 +865,9 @@ static RPCHelpMan gobject_getcurrentvotes()
UniValue bResult(UniValue::VOBJ); UniValue bResult(UniValue::VOBJ);
// GET MATCHING VOTES BY HASH, THEN SHOW USERS VOTE INFORMATION // GET MATCHING VOTES BY HASH, THEN SHOW USERS VOTE INFORMATION
CHECK_NONFATAL(node.dmnman);
std::vector<CGovernanceVote> vecVotes = node.govman->GetCurrentVotes(hash, mnCollateralOutpoint); std::vector<CGovernanceVote> vecVotes = node.govman->GetCurrentVotes(hash, mnCollateralOutpoint);
for (const auto& vote : vecVotes) { for (const auto& vote : vecVotes) {
bResult.pushKV(vote.GetHash().ToString(), vote.ToString(node.dmnman->GetListAtChainTip())); bResult.pushKV(vote.GetHash().ToString(), vote.ToString(CHECK_NONFATAL(node.dmnman)->GetListAtChainTip()));
} }
return bResult; return bResult;
@ -975,8 +967,7 @@ static RPCHelpMan voteraw()
throw JSONRPCError(RPC_INVALID_ADDRESS_OR_KEY, "Malformed base64 encoding"); throw JSONRPCError(RPC_INVALID_ADDRESS_OR_KEY, "Malformed base64 encoding");
} }
CHECK_NONFATAL(node.dmnman); const auto tip_mn_list = CHECK_NONFATAL(node.dmnman)->GetListAtChainTip();
const auto tip_mn_list = node.dmnman->GetListAtChainTip();
auto dmn = tip_mn_list.GetValidMNByCollateral(outpoint); auto dmn = tip_mn_list.GetValidMNByCollateral(outpoint);
if (!dmn) { if (!dmn) {
@ -1034,7 +1025,6 @@ static RPCHelpMan getgovernanceinfo()
const NodeContext& node = EnsureAnyNodeContext(request.context); const NodeContext& node = EnsureAnyNodeContext(request.context);
const ChainstateManager& chainman = EnsureAnyChainman(request.context); const ChainstateManager& chainman = EnsureAnyChainman(request.context);
CHECK_NONFATAL(node.dmnman);
const auto* pindex = WITH_LOCK(cs_main, return chainman.ActiveChain().Tip()); const auto* pindex = WITH_LOCK(cs_main, return chainman.ActiveChain().Tip());
int nBlockHeight = pindex->nHeight; int nBlockHeight = pindex->nHeight;
@ -1048,7 +1038,7 @@ static RPCHelpMan getgovernanceinfo()
obj.pushKV("superblockmaturitywindow", Params().GetConsensus().nSuperblockMaturityWindow); obj.pushKV("superblockmaturitywindow", Params().GetConsensus().nSuperblockMaturityWindow);
obj.pushKV("lastsuperblock", nLastSuperblock); obj.pushKV("lastsuperblock", nLastSuperblock);
obj.pushKV("nextsuperblock", nNextSuperblock); obj.pushKV("nextsuperblock", nNextSuperblock);
obj.pushKV("fundingthreshold", int(node.dmnman->GetListAtChainTip().GetValidWeightedMNsCount() / 10)); obj.pushKV("fundingthreshold", int(CHECK_NONFATAL(node.dmnman)->GetListAtChainTip().GetValidWeightedMNsCount() / 10));
obj.pushKV("governancebudget", ValueFromAmount(CSuperblock::GetPaymentsLimit(chainman.ActiveChain(), nNextSuperblock))); obj.pushKV("governancebudget", ValueFromAmount(CSuperblock::GetPaymentsLimit(chainman.ActiveChain(), nNextSuperblock)));
return obj; return obj;

View File

@@ -19,6 +19,7 @@
 #include <rpc/server_util.h>
 #include <rpc/util.h>
 #include <univalue.h>
+#include <util/check.h>
 #include <util/strencodings.h>
 #include <validation.h>
 #include <wallet/coincontrol.h>
@@ -138,8 +139,7 @@ static RPCHelpMan masternode_winner()
     const NodeContext& node = EnsureAnyNodeContext(request.context);
     const ChainstateManager& chainman = EnsureChainman(node);
-    CHECK_NONFATAL(node.dmnman);
-    return GetNextMasternodeForPayment(chainman.ActiveChain(), *node.dmnman, 10);
+    return GetNextMasternodeForPayment(chainman.ActiveChain(), *CHECK_NONFATAL(node.dmnman), 10);
         },
     };
 }
@@ -159,8 +159,7 @@ static RPCHelpMan masternode_current()
     const NodeContext& node = EnsureAnyNodeContext(request.context);
     const ChainstateManager& chainman = EnsureChainman(node);
-    CHECK_NONFATAL(node.dmnman);
-    return GetNextMasternodeForPayment(chainman.ActiveChain(), *node.dmnman, 1);
+    return GetNextMasternodeForPayment(chainman.ActiveChain(), *CHECK_NONFATAL(node.dmnman), 1);
         },
     };
 }
@@ -212,7 +211,7 @@ static RPCHelpMan masternode_status()
         [&](const RPCHelpMan& self, const JSONRPCRequest& request) -> UniValue
 {
     const NodeContext& node = EnsureAnyNodeContext(request.context);
-    CHECK_NONFATAL(node.dmnman);
     if (!node.mn_activeman) {
         throw JSONRPCError(RPC_INTERNAL_ERROR, "This node does not run an active masternode.");
     }
@@ -221,7 +220,7 @@ static RPCHelpMan masternode_status()
     // keep compatibility with legacy status for now (might get deprecated/removed later)
     mnObj.pushKV("outpoint", node.mn_activeman->GetOutPoint().ToStringShort());
     mnObj.pushKV("service", node.mn_activeman->GetService().ToStringAddrPort());
-    CDeterministicMNCPtr dmn = node.dmnman->GetListAtChainTip().GetMN(node.mn_activeman->GetProTxHash());
+    CDeterministicMNCPtr dmn = CHECK_NONFATAL(node.dmnman)->GetListAtChainTip().GetMN(node.mn_activeman->GetProTxHash());
     if (dmn) {
         mnObj.pushKV("proTxHash", dmn->proTxHash.ToString());
         mnObj.pushKV("type", std::string(GetMnType(dmn->nType).description));
@@ -243,12 +242,12 @@ static std::string GetRequiredPaymentsString(CGovernanceManager& govman, const C
     if (payee) {
         CTxDestination dest;
         if (!ExtractDestination(payee->pdmnState->scriptPayout, dest)) {
-            CHECK_NONFATAL(false);
+            NONFATAL_UNREACHABLE();
         }
         strPayments = EncodeDestination(dest);
         if (payee->nOperatorReward != 0 && payee->pdmnState->scriptOperatorPayout != CScript()) {
             if (!ExtractDestination(payee->pdmnState->scriptOperatorPayout, dest)) {
-                CHECK_NONFATAL(false);
+                NONFATAL_UNREACHABLE();
             }
             strPayments += ", " + EncodeDestination(dest);
         }
@@ -310,14 +309,11 @@ static RPCHelpMan masternode_winners()
     int nChainTipHeight = pindexTip->nHeight;
     int nStartHeight = std::max(nChainTipHeight - nCount, 1);
-    CHECK_NONFATAL(node.dmnman);
-    CHECK_NONFATAL(node.govman);
-    const auto tip_mn_list = node.dmnman->GetListAtChainTip();
+    const auto tip_mn_list = CHECK_NONFATAL(node.dmnman)->GetListAtChainTip();
     for (int h = nStartHeight; h <= nChainTipHeight; h++) {
         const CBlockIndex* pIndex = pindexTip->GetAncestor(h - 1);
         auto payee = node.dmnman->GetListForBlock(pIndex).GetMNPayee(pIndex);
-        std::string strPayments = GetRequiredPaymentsString(*node.govman, tip_mn_list, h, payee);
+        std::string strPayments = GetRequiredPaymentsString(*CHECK_NONFATAL(node.govman), tip_mn_list, h, payee);
         if (strFilter != "" && strPayments.find(strFilter) == std::string::npos) continue;
         obj.pushKV(strprintf("%d", h), strPayments);
     }
@@ -558,11 +554,10 @@ static RPCHelpMan masternodelist_helper(bool is_composite)
     const NodeContext& node = EnsureAnyNodeContext(request.context);
     const ChainstateManager& chainman = EnsureChainman(node);
-    CHECK_NONFATAL(node.dmnman);
     UniValue obj(UniValue::VOBJ);
-    const auto mnList = node.dmnman->GetListAtChainTip();
+    const auto mnList = CHECK_NONFATAL(node.dmnman)->GetListAtChainTip();
     const auto dmnToStatus = [&](const auto& dmn) {
         if (mnList.IsMNValid(dmn)) {
             return "ENABLED";

View File

@@ -36,6 +36,7 @@
 #include <spork.h>
 #include <txmempool.h>
 #include <univalue.h>
+#include <util/check.h>
 #include <util/fees.h>
 #include <util/strencodings.h>
 #include <util/string.h>
@@ -391,12 +392,11 @@ static RPCHelpMan generateblock()
     block.vtx.insert(block.vtx.end(), txs.begin(), txs.end());
     {
-        CHECK_NONFATAL(node.evodb);
         LOCK(cs_main);
         BlockValidationState state;
-        if (!TestBlockValidity(state, *llmq_ctx.clhandler, *node.evodb, chainparams, active_chainstate, block, chainman.m_blockman.LookupBlockIndex(block.hashPrevBlock), false, false)) {
+        if (!TestBlockValidity(state, *llmq_ctx.clhandler, *CHECK_NONFATAL(node.evodb), chainparams, active_chainstate,
+                               block, chainman.m_blockman.LookupBlockIndex(block.hashPrevBlock), false, false)) {
             throw JSONRPCError(RPC_VERIFY_ERROR, strprintf("TestBlockValidity failed: %s", state.GetRejectReason()));
         }
     }
@@ -710,14 +710,14 @@ static RPCHelpMan getblocktemplate()
     }
     LLMQContext& llmq_ctx = EnsureLLMQContext(node);
-    CHECK_NONFATAL(node.evodb);
     CBlockIndex* const pindexPrev = active_chain.Tip();
     // TestBlockValidity only supports blocks built on the current Tip
     if (block.hashPrevBlock != pindexPrev->GetBlockHash())
         return "inconclusive-not-best-prevblk";
     BlockValidationState state;
-    TestBlockValidity(state, *llmq_ctx.clhandler, *node.evodb, Params(), active_chainstate, block, pindexPrev, false, true);
+    TestBlockValidity(state, *llmq_ctx.clhandler, *CHECK_NONFATAL(node.evodb), Params(), active_chainstate,
+                      block, pindexPrev, false, true);
     return BIP22ValidationResult(state);
 }

View File

@@ -1194,9 +1194,8 @@ static RPCHelpMan mockscheduler()
         throw std::runtime_error("delta_time must be between 1 and 3600 seconds (1 hr)");
     }
-    auto* node_context = GetContext<NodeContext>(request.context);
+    auto* node_context = CHECK_NONFATAL(GetContext<NodeContext>(request.context));
     // protect against null pointer dereference
-    CHECK_NONFATAL(node_context);
     CHECK_NONFATAL(node_context->scheduler);
     node_context->scheduler->MockForward(std::chrono::seconds(delta_seconds));

View File

@@ -11,6 +11,7 @@
 #include <rpc/server.h>
 #include <rpc/server_util.h>
 #include <rpc/util.h>
+#include <util/check.h>
 #include <validation.h>
 #include <masternode/node.h>
@@ -282,7 +283,6 @@ static RPCHelpMan quorum_dkgstatus()
     const ChainstateManager& chainman = EnsureChainman(node);
     const LLMQContext& llmq_ctx = EnsureLLMQContext(node);
     const CConnman& connman = EnsureConnman(node);
-    CHECK_NONFATAL(node.dmnman);
     CHECK_NONFATAL(node.sporkman);
     int detailLevel = 0;
@@ -296,7 +296,7 @@ static RPCHelpMan quorum_dkgstatus()
     llmq::CDKGDebugStatus status;
     llmq_ctx.dkg_debugman->GetLocalDebugStatus(status);
-    auto ret = status.ToJson(*node.dmnman, chainman, detailLevel);
+    auto ret = status.ToJson(*CHECK_NONFATAL(node.dmnman), chainman, detailLevel);
     CBlockIndex* pindexTip = WITH_LOCK(cs_main, return chainman.ActiveChain().Tip());
     int tipHeight = pindexTip->nHeight;
@@ -385,7 +385,6 @@ static RPCHelpMan quorum_memberof()
     const NodeContext& node = EnsureAnyNodeContext(request.context);
     const ChainstateManager& chainman = EnsureChainman(node);
     const LLMQContext& llmq_ctx = EnsureLLMQContext(node);
-    CHECK_NONFATAL(node.dmnman);
     uint256 protxHash(ParseHashV(request.params[0], "proTxHash"));
     int scanQuorumsCount = -1;
@@ -397,7 +396,7 @@ static RPCHelpMan quorum_memberof()
     }
     const CBlockIndex* pindexTip = WITH_LOCK(cs_main, return chainman.ActiveChain().Tip());
-    auto mnList = node.dmnman->GetListForBlock(pindexTip);
+    auto mnList = CHECK_NONFATAL(node.dmnman)->GetListForBlock(pindexTip);
     auto dmn = mnList.GetMN(protxHash);
     if (!dmn) {
         throw JSONRPCError(RPC_INVALID_PARAMETER, "masternode not found");
@@ -842,7 +841,7 @@ static RPCHelpMan quorum_rotationinfo()
     const NodeContext& node = EnsureAnyNodeContext(request.context);
     const ChainstateManager& chainman = EnsureChainman(node);
     const LLMQContext& llmq_ctx = EnsureLLMQContext(node);
-    CHECK_NONFATAL(node.dmnman);
     llmq::CGetQuorumRotationInfo cmd;
     llmq::CQuorumRotationInfo quorumRotationInfoRet;
@@ -859,7 +858,8 @@ static RPCHelpMan quorum_rotationinfo()
     LOCK(cs_main);
-    if (!BuildQuorumRotationInfo(*node.dmnman, chainman, *llmq_ctx.qman, *llmq_ctx.quorum_block_processor, cmd, quorumRotationInfoRet, strError)) {
+    if (!BuildQuorumRotationInfo(*CHECK_NONFATAL(node.dmnman), chainman, *llmq_ctx.qman, *llmq_ctx.quorum_block_processor,
+                                 cmd, quorumRotationInfoRet, strError)) {
         throw JSONRPCError(RPC_INVALID_REQUEST, strError);
     }
@@ -1073,7 +1073,8 @@ static RPCHelpMan verifyislock()
     auto llmqType = Params().GetConsensus().llmqTypeDIP0024InstantSend;
     const auto llmq_params_opt = Params().GetLLMQ(llmqType);
     CHECK_NONFATAL(llmq_params_opt.has_value());
-    return VerifyRecoveredSigLatestQuorums(*llmq_params_opt, chainman.ActiveChain(), *llmq_ctx.qman, signHeight, id, txid, sig);
+    return VerifyRecoveredSigLatestQuorums(*llmq_params_opt, chainman.ActiveChain(), *CHECK_NONFATAL(llmq_ctx.qman),
+                                           signHeight, id, txid, sig);
         },
     };
 }

View File

@@ -38,6 +38,7 @@
 #include <txmempool.h>
 #include <uint256.h>
 #include <util/bip32.h>
+#include <util/check.h>
 #include <util/moneystr.h>
 #include <util/strencodings.h>
 #include <util/string.h>
@@ -495,8 +496,6 @@ static RPCHelpMan getassetunlockstatuses()
         throw JSONRPCError(RPC_INTERNAL_ERROR, "No blocks in chain");
     }
-    CHECK_NONFATAL(node.cpoolman);
     std::optional<CCreditPool> poolCL{std::nullopt};
     std::optional<CCreditPool> poolOnTip{std::nullopt};
     std::optional<int> nSpecificCoreHeight{std::nullopt};
@@ -506,7 +505,7 @@ static RPCHelpMan getassetunlockstatuses()
         if (nSpecificCoreHeight.value() < 0 || nSpecificCoreHeight.value() > chainman.ActiveChain().Height()) {
             throw JSONRPCError(RPC_INVALID_PARAMETER, "Block height out of range");
         }
-        poolCL = std::make_optional(node.cpoolman->GetCreditPool(chainman.ActiveChain()[nSpecificCoreHeight.value()], Params().GetConsensus()));
+        poolCL = std::make_optional(CHECK_NONFATAL(node.cpoolman)->GetCreditPool(chainman.ActiveChain()[nSpecificCoreHeight.value()], Params().GetConsensus()));
     }
     else {
         const auto pBlockIndexBestCL = [&]() -> const CBlockIndex* {

View File

@@ -10,6 +10,7 @@
 #include <script/descriptor.h>
 #include <script/signingprovider.h>
 #include <tinyformat.h>
+#include <util/check.h>
 #include <util/system.h>
 #include <util/strencodings.h>
 #include <util/string.h>
@@ -494,7 +495,7 @@ RPCHelpMan::RPCHelpMan(std::string name, std::string description, std::vector<RP
             // Null values are accepted in all arguments
             break;
         default:
-            CHECK_NONFATAL(false);
+            NONFATAL_UNREACHABLE();
             break;
         }
     }
@@ -743,7 +744,7 @@ void RPCResult::ToSections(Sections& sections, const OuterType outer_type, const
         return;
     }
     case Type::ANY: {
-        CHECK_NONFATAL(false); // Only for testing
+        NONFATAL_UNREACHABLE(); // Only for testing
     }
     case Type::NONE: {
         sections.PushSection({indent + "null" + maybe_separator, Description("json null")});
@@ -807,7 +808,7 @@ void RPCResult::ToSections(Sections& sections, const OuterType outer_type, const
         return;
     }
     } // no default case, so the compiler can warn about missing cases
-    CHECK_NONFATAL(false);
+    NONFATAL_UNREACHABLE();
 }
 bool RPCResult::MatchesType(const UniValue& result) const
@@ -843,7 +844,7 @@ bool RPCResult::MatchesType(const UniValue& result) const
         return UniValue::VOBJ == result.getType();
     }
     } // no default case, so the compiler can warn about missing cases
-    CHECK_NONFATAL(false);
+    NONFATAL_UNREACHABLE();
 }
 std::string RPCArg::ToStringObj(const bool oneline) const
@@ -878,9 +879,9 @@ std::string RPCArg::ToStringObj(const bool oneline) const
     case Type::OBJ:
     case Type::OBJ_USER_KEYS:
         // Currently unused, so avoid writing dead code
-        CHECK_NONFATAL(false);
+        NONFATAL_UNREACHABLE();
     } // no default case, so the compiler can warn about missing cases
-    CHECK_NONFATAL(false);
+    NONFATAL_UNREACHABLE();
 }
 std::string RPCArg::ToString(const bool oneline) const
@@ -915,7 +916,7 @@ std::string RPCArg::ToString(const bool oneline) const
         return "[" + res + "...]";
     }
     } // no default case, so the compiler can warn about missing cases
-    CHECK_NONFATAL(false);
+    NONFATAL_UNREACHABLE();
 }
 static std::pair<int64_t, int64_t> ParseRange(const UniValue& value)

View File

@@ -641,7 +641,6 @@ private:
     uint64_t nRewind;              //!< how many bytes we guarantee to rewind
     std::vector<std::byte> vchBuf; //!< the buffer
-protected:
     //! read data from the source to fill the buffer
     bool Fill() {
         unsigned int pos = nSrcPos % vchBuf.size();
@@ -659,6 +658,28 @@
         return true;
     }
+    //! Advance the stream's read pointer (m_read_pos) by up to 'length' bytes,
+    //! filling the buffer from the file so that at least one byte is available.
+    //! Return a pointer to the available buffer data and the number of bytes
+    //! (which may be less than the requested length) that may be accessed
+    //! beginning at that pointer.
+    std::pair<std::byte*, size_t> AdvanceStream(size_t length)
+    {
+        assert(m_read_pos <= nSrcPos);
+        if (m_read_pos + length > nReadLimit) {
+            throw std::ios_base::failure("Attempt to position past buffer limit");
+        }
+        // If there are no bytes available, read from the file.
+        if (m_read_pos == nSrcPos && length > 0) Fill();
+
+        size_t buffer_offset{static_cast<size_t>(m_read_pos % vchBuf.size())};
+        size_t buffer_available{static_cast<size_t>(vchBuf.size() - buffer_offset)};
+        size_t bytes_until_source_pos{static_cast<size_t>(nSrcPos - m_read_pos)};
+        size_t advance{std::min({length, buffer_available, bytes_until_source_pos})};
+        m_read_pos += advance;
+        return std::make_pair(&vchBuf[buffer_offset], advance);
+    }
+
 public:
     CBufferedFile(FILE* fileIn, uint64_t nBufSize, uint64_t nRewindIn, int nTypeIn, int nVersionIn)
         : nType(nTypeIn), nVersion(nVersionIn), nSrcPos(0), m_read_pos(0), nReadLimit(std::numeric_limits<uint64_t>::max()), nRewind(nRewindIn), vchBuf(nBufSize, std::byte{0})
@@ -696,24 +717,21 @@ public:
     //! read a number of bytes
     void read(Span<std::byte> dst)
     {
-        if (dst.size() + m_read_pos > nReadLimit) {
-            throw std::ios_base::failure("Read attempted past buffer limit");
-        }
         while (dst.size() > 0) {
-            if (m_read_pos == nSrcPos)
-                Fill();
-            unsigned int pos = m_read_pos % vchBuf.size();
-            size_t nNow = dst.size();
-            if (nNow + pos > vchBuf.size())
-                nNow = vchBuf.size() - pos;
-            if (nNow + m_read_pos > nSrcPos)
-                nNow = nSrcPos - m_read_pos;
-            memcpy(dst.data(), &vchBuf[pos], nNow);
-            m_read_pos += nNow;
-            dst = dst.subspan(nNow);
+            auto [buffer_pointer, length]{AdvanceStream(dst.size())};
+            memcpy(dst.data(), buffer_pointer, length);
+            dst = dst.subspan(length);
         }
     }
+    //! Move the read position ahead in the stream to the given position.
+    //! Use SetPos() to back up in the stream, not SkipTo().
+    void SkipTo(const uint64_t file_pos)
+    {
+        assert(file_pos >= m_read_pos);
+        while (m_read_pos < file_pos) AdvanceStream(file_pos - m_read_pos);
+    }
+
     //! return the current reading position
     uint64_t GetPos() const {
         return m_read_pos;

View File

@@ -0,0 +1,39 @@
+// Copyright (c) 2022 The Bitcoin Core developers
+// Distributed under the MIT software license, see the accompanying
+// file COPYING or http://www.opensource.org/licenses/mit-license.php.
+
+#include <chainparams.h>
+#include <node/blockstorage.h>
+#include <node/context.h>
+#include <validation.h>
+
+#include <boost/test/unit_test.hpp>
+#include <test/util/setup_common.h>
+
+// use BasicTestingSetup here for the data directory configuration, setup, and cleanup
+BOOST_FIXTURE_TEST_SUITE(blockmanager_tests, BasicTestingSetup)
+
+BOOST_AUTO_TEST_CASE(blockmanager_find_block_pos)
+{
+    const auto params{CreateChainParams(ArgsManager{}, CBaseChainParams::MAIN)};
+    BlockManager blockman{};
+    CChain chain{};
+    // simulate adding a genesis block normally
+    BOOST_CHECK_EQUAL(blockman.SaveBlockToDisk(params->GenesisBlock(), 0, chain, *params, nullptr).nPos, BLOCK_SERIALIZATION_HEADER_SIZE);
+    // simulate what happens during reindex
+    // simulate a well-formed genesis block being found at offset 8 in the blk00000.dat file
+    // the block is found at offset 8 because there is an 8 byte serialization header
+    // consisting of 4 magic bytes + 4 length bytes before each block in a well-formed blk file.
+    FlatFilePos pos{0, BLOCK_SERIALIZATION_HEADER_SIZE};
+    BOOST_CHECK_EQUAL(blockman.SaveBlockToDisk(params->GenesisBlock(), 0, chain, *params, &pos).nPos, BLOCK_SERIALIZATION_HEADER_SIZE);
+    // now simulate what happens after reindex for the first new block processed
+    // the actual block contents don't matter, just that it's a block.
+    // verify that the write position is at offset 0x12d.
+    // this is a check to make sure that https://github.com/bitcoin/bitcoin/issues/21379 does not recur
+    // 8 bytes (for serialization header) + 285 (for serialized genesis block) = 293
+    // add another 8 bytes for the second block's serialization header and we get 293 + 8 = 301
+    FlatFilePos actual{blockman.SaveBlockToDisk(params->GenesisBlock(), 1, chain, *params, nullptr)};
+    BOOST_CHECK_EQUAL(actual.nPos, BLOCK_SERIALIZATION_HEADER_SIZE + ::GetSerializeSize(params->GenesisBlock(), CLIENT_VERSION) + BLOCK_SERIALIZATION_HEADER_SIZE);
+}
+
+BOOST_AUTO_TEST_SUITE_END()

View File

@@ -2,9 +2,11 @@
 // Distributed under the MIT software license, see the accompanying
 // file COPYING or http://www.opensource.org/licenses/mit-license.php.

+#include <chainparams.h>
 #include <index/coinstatsindex.h>
 #include <test/util/index.h>
 #include <test/util/setup_common.h>
+#include <test/util/validation.h>
 #include <util/time.h>
 #include <validation.h>
@@ -74,4 +76,44 @@ BOOST_FIXTURE_TEST_CASE(coinstatsindex_initial_sync, TestChain100Setup)
     coin_stats_index.Stop();
 }

+// Test shutdown between BlockConnected and ChainStateFlushed notifications,
+// make sure index is not corrupted and is able to reload.
+BOOST_FIXTURE_TEST_CASE(coinstatsindex_unclean_shutdown, TestChain100Setup)
+{
+    CChainState& chainstate = Assert(m_node.chainman)->ActiveChainstate();
+    const CChainParams& params = Params();
+    {
+        CoinStatsIndex index{1 << 20};
+        BOOST_REQUIRE(index.Start(chainstate));
+        IndexWaitSynced(index);
+        std::shared_ptr<const CBlock> new_block;
+        CBlockIndex* new_block_index = nullptr;
+        {
+            const CScript script_pub_key{CScript() << ToByteVector(coinbaseKey.GetPubKey()) << OP_CHECKSIG};
+            const CBlock block = this->CreateBlock({}, script_pub_key, chainstate);
+
+            new_block = std::make_shared<CBlock>(block);
+
+            LOCK(cs_main);
+            BlockValidationState state;
+            BOOST_CHECK(CheckBlock(block, state, params.GetConsensus()));
+            BOOST_CHECK(chainstate.AcceptBlock(new_block, state, &new_block_index, true, nullptr, nullptr));
+            CCoinsViewCache view(&chainstate.CoinsTip());
+            BOOST_CHECK(chainstate.ConnectBlock(block, state, new_block_index, view));
+        }
+
+        // Send block connected notification, then stop the index without
+        // sending a chainstate flushed notification. Prior to #24138, this
+        // would cause the index to be corrupted and fail to reload.
+        ValidationInterfaceTest::BlockConnected(index, new_block, new_block_index);
+        index.Stop();
+    }
+
+    {
+        CoinStatsIndex index{1 << 20};
+        // Make sure the index can be loaded.
+        BOOST_REQUIRE(index.Start(chainstate));
+        index.Stop();
+    }
+}
+
 BOOST_AUTO_TEST_SUITE_END()
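The unclean-shutdown test above exercises the core idea of bitcoin#24138: the MuHash value and the best-block record must be committed together in one atomic batch, so a crash between the BlockConnected and ChainStateFlushed notifications cannot leave them describing different blocks. A minimal Python sketch of that invariant (all names hypothetical, not the real index API):

```python
# Hypothetical sketch: commit the muhash and the best-block pointer in one
# atomic write batch, so they can never get out of sync with each other.
class IndexDB:
    def __init__(self):
        self.store = {}

    def write_batch(self, updates):
        # All-or-nothing: stage everything, then apply in a single step.
        staged = dict(self.store)
        staged.update(updates)
        self.store = staged

def commit_block(db, block_hash, muhash):
    # Single batch: both keys move together (the #24138 fix), instead of
    # writing the muhash first and the best block in a later, separate write.
    db.write_batch({"muhash": muhash, "best_block": block_hash})

db = IndexDB()
commit_block(db, "blockhash_1", "muhash_1")
# Whatever happens next, the two keys always describe the same block.
assert db.store["best_block"] == "blockhash_1"
assert db.store["muhash"] == "muhash_1"
```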


@@ -251,7 +251,7 @@ BOOST_AUTO_TEST_CASE(streams_buffered_file)
         BOOST_CHECK(false);
     } catch (const std::exception& e) {
         BOOST_CHECK(strstr(e.what(),
-            "Read attempted past buffer limit") != nullptr);
+            "Attempt to position past buffer limit") != nullptr);
     }
     // The default argument removes the limit completely.
     BOOST_CHECK(bf.SetLimit());
@@ -320,7 +320,7 @@ BOOST_AUTO_TEST_CASE(streams_buffered_file)
     BOOST_CHECK(!bf.SetPos(0));
     // But we should now be positioned at least as far back as allowed
     // by the rewind window (relative to our farthest read position, 40).
-    BOOST_CHECK(bf.GetPos() <= 30);
+    BOOST_CHECK(bf.GetPos() <= 30U);
     // We can explicitly close the file, or the destructor will do it.
     bf.fclose();
@@ -328,6 +328,55 @@ BOOST_AUTO_TEST_CASE(streams_buffered_file)
     fs::remove("streams_test_tmp");
 }

+BOOST_AUTO_TEST_CASE(streams_buffered_file_skip)
+{
+    fs::path streams_test_filename = m_args.GetDataDirBase() / "streams_test_tmp";
+    FILE* file = fsbridge::fopen(streams_test_filename, "w+b");
+    // The value at each offset is the byte offset (e.g. byte 1 in the file has the value 0x01).
+    for (uint8_t j = 0; j < 40; ++j) {
+        fwrite(&j, 1, 1, file);
+    }
+    rewind(file);
+
+    // The buffer is 25 bytes, allow rewinding 10 bytes.
+    CBufferedFile bf(file, 25, 10, 222, 333);
+
+    uint8_t i;
+    // This is like bf >> (7-byte-variable), in that it will cause data
+    // to be read from the file into memory, but it's not copied to us.
+    bf.SkipTo(7);
+    BOOST_CHECK_EQUAL(bf.GetPos(), 7U);
+    bf >> i;
+    BOOST_CHECK_EQUAL(i, 7);
+
+    // The bytes in the buffer up to offset 7 are valid and can be read.
+    BOOST_CHECK(bf.SetPos(0));
+    bf >> i;
+    BOOST_CHECK_EQUAL(i, 0);
+    bf >> i;
+    BOOST_CHECK_EQUAL(i, 1);
+
+    bf.SkipTo(11);
+    bf >> i;
+    BOOST_CHECK_EQUAL(i, 11);
+
+    // SkipTo() honors the transfer limit; we can't position beyond the limit.
+    bf.SetLimit(13);
+    try {
+        bf.SkipTo(14);
+        BOOST_CHECK(false);
+    } catch (const std::exception& e) {
+        BOOST_CHECK(strstr(e.what(), "Attempt to position past buffer limit") != nullptr);
+    }
+
+    // We can position exactly to the transfer limit.
+    bf.SkipTo(13);
+    BOOST_CHECK_EQUAL(bf.GetPos(), 13U);
+
+    bf.fclose();
+    fs::remove(streams_test_filename);
+}
+
 BOOST_AUTO_TEST_CASE(streams_buffered_file_rand)
 {
     // Make this test deterministic.
@@ -358,7 +407,7 @@ BOOST_AUTO_TEST_CASE(streams_buffered_file_rand)
             // sizes; the boundaries of the objects can interact arbitrarily
             // with the CBufferFile's internal buffer. These first three
             // cases simulate objects of various sizes (1, 2, 5 bytes).
-            switch (InsecureRandRange(5)) {
+            switch (InsecureRandRange(6)) {
             case 0: {
                 uint8_t a[1];
                 if (currentPos + 1 > fileSize)
@@ -396,6 +445,16 @@ BOOST_AUTO_TEST_CASE(streams_buffered_file_rand)
                 break;
             }
             case 3: {
+                // SkipTo is similar to the "read" cases above, except
+                // we don't receive the data.
+                size_t skip_length{static_cast<size_t>(InsecureRandRange(5))};
+                if (currentPos + skip_length > fileSize) continue;
+                bf.SetLimit(currentPos + skip_length);
+                bf.SkipTo(currentPos + skip_length);
+                currentPos += skip_length;
+                break;
+            }
+            case 4: {
                 // Find a byte value (that is at or ahead of the current position).
                 size_t find = currentPos + InsecureRandRange(8);
                 if (find >= fileSize)
@@ -412,7 +471,7 @@ BOOST_AUTO_TEST_CASE(streams_buffered_file_rand)
                 currentPos++;
                 break;
             }
-            case 4: {
+            case 5: {
                 size_t requestPos = InsecureRandRange(maxPos + 4);
                 bool okay = bf.SetPos(requestPos);
                 // The new position may differ from the requested position
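The SkipTo()/SetLimit() semantics exercised by the tests above can be modeled with a small Python sketch. This is an illustrative stand-in, not the real CBufferedFile API: skipping advances the read position without copying data out, positioning exactly at the limit is allowed, and positioning past it raises:

```python
# Hypothetical model of a buffered reader with a transfer limit, mimicking
# the SkipTo()/SetLimit() behavior tested in streams_buffered_file_skip.
class BufferedReader:
    def __init__(self, data):
        self.data = data
        self.pos = 0
        self.limit = None  # no limit by default

    def set_limit(self, limit=None):
        self.limit = limit

    def skip_to(self, target):
        # Advance the position without returning the skipped bytes.
        if self.limit is not None and target > self.limit:
            raise IOError("Attempt to position past buffer limit")
        self.pos = max(self.pos, target)

    def read_byte(self):
        b = self.data[self.pos]
        self.pos += 1
        return b

# File contents: byte at offset n has value n, as in the test above.
bf = BufferedReader(bytes(range(40)))
bf.skip_to(7)
assert bf.read_byte() == 7   # skipped data was consumed, not copied to us
bf.set_limit(13)
bf.skip_to(13)               # exactly at the limit is fine
assert bf.pos == 13
try:
    bf.skip_to(14)           # past the limit raises
    assert False
except IOError:
    pass
```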


@@ -7,6 +7,7 @@
 #include <util/check.h>
 #include <util/time.h>
 #include <validation.h>
+#include <validationinterface.h>

 void TestChainState::ResetIbd()
 {
@@ -20,3 +21,8 @@ void TestChainState::JumpOutOfIbd()
     m_cached_finished_ibd = true;
     Assert(!IsInitialBlockDownload());
 }
+
+void ValidationInterfaceTest::BlockConnected(CValidationInterface& obj, const std::shared_ptr<const CBlock>& block, const CBlockIndex* pindex)
+{
+    obj.BlockConnected(block, pindex);
+}


@@ -7,6 +7,8 @@
 #include <validation.h>

+class CValidationInterface;
+
 struct TestChainState : public CChainState {
     /** Reset the ibd cache to its initial state */
     void ResetIbd();
@@ -14,4 +16,10 @@ struct TestChainState : public CChainState {
     void JumpOutOfIbd();
 };

+class ValidationInterfaceTest
+{
+public:
+    static void BlockConnected(CValidationInterface& obj, const std::shared_ptr<const CBlock>& block, const CBlockIndex* pindex);
+};
+
 #endif // BITCOIN_TEST_UTIL_VALIDATION_H


@@ -18,8 +18,24 @@ class NonFatalCheckError : public std::runtime_error
     using std::runtime_error::runtime_error;
 };

+#define format_internal_error(msg, file, line, func, report) \
+    strprintf("Internal bug detected: \"%s\"\n%s:%d (%s)\nPlease report this issue here: %s\n", \
+              msg, file, line, func, report)
+
+/** Helper for CHECK_NONFATAL() */
+template <typename T>
+T&& inline_check_non_fatal(T&& val, const char* file, int line, const char* func, const char* assertion)
+{
+    if (!(val)) {
+        throw NonFatalCheckError(
+            format_internal_error(assertion, file, line, func, PACKAGE_BUGREPORT));
+    }
+    return std::forward<T>(val);
+}
+
 /**
- * Throw a NonFatalCheckError when the condition evaluates to false
+ * Identity function. Throw a NonFatalCheckError when the condition evaluates to false
  *
  * This should only be used
  * - where the condition is assumed to be true, not for error handling or validating user input
@@ -29,18 +45,8 @@ class NonFatalCheckError : public std::runtime_error
  * asserts or recoverable logic errors. A NonFatalCheckError in RPC code is caught and passed as a string to the RPC
  * caller, which can then report the issue to the developers.
  */
 #define CHECK_NONFATAL(condition) \
-    do { \
-        if (!(condition)) { \
-            throw NonFatalCheckError( \
-                strprintf("Internal bug detected: '%s'\n" \
-                          "%s:%d (%s)\n" \
-                          "You may report this issue here: %s\n", \
-                          (#condition), \
-                          __FILE__, __LINE__, __func__, \
-                          PACKAGE_BUGREPORT)); \
-        } \
-    } while (false)
+    inline_check_non_fatal(condition, __FILE__, __LINE__, __func__, #condition)

 #if defined(NDEBUG)
 #error "Cannot compile without assertions!"
@@ -80,4 +86,13 @@ T&& inline_assertion_check(T&& val, [[maybe_unused]] const char* file, [[maybe_u
  */
 #define Assume(val) inline_assertion_check<false>(val, __FILE__, __LINE__, __func__, #val)

+/**
+ * NONFATAL_UNREACHABLE() is a macro used to mark code paths that should be
+ * unreachable. It throws a NonFatalCheckError if such a path is hit at runtime.
+ */
+#define NONFATAL_UNREACHABLE()                                        \
+    throw NonFatalCheckError(                                         \
+        format_internal_error("Unreachable code reached (non-fatal)", \
+                              __FILE__, __LINE__, __func__, PACKAGE_BUGREPORT))
+
 #endif // BITCOIN_UTIL_CHECK_H
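The point of turning CHECK_NONFATAL into an identity function is that the checked value is returned unchanged, so the macro can wrap a value inline inside a larger expression instead of requiring a separate statement. A Python sketch of the same pattern (`check_nonfatal` is a hypothetical stand-in for the C++ `inline_check_non_fatal` helper):

```python
# Hypothetical identity-check helper: returns its argument when truthy,
# raises with context when the check fails, like CHECK_NONFATAL in C++.
def check_nonfatal(val, expr="<expr>"):
    if not val:
        raise RuntimeError(f'Internal bug detected: "{expr}"')
    return val

record = {"height": 100}
# Usable inline, analogous to CHECK_NONFATAL(ptr)->field in the C++ code:
h = check_nonfatal(record.get("height"), "height present") + 1
assert h == 101
```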


@@ -19,7 +19,6 @@
 #include <deploymentstatus.h>
 #include <flatfile.h>
 #include <hash.h>
-#include <index/blockfilterindex.h>
 #include <logging.h>
 #include <logging/timer.h>
 #include <node/blockstorage.h>
@@ -93,6 +92,12 @@ const std::vector<std::string> CHECKLEVEL_DOC {
     "level 4 tries to reconnect the blocks",
     "each level includes the checks of the previous levels",
 };

+/** The number of blocks to keep below the deepest prune lock.
+ * There is nothing special about this number. It is higher than what we
+ * expect to see in regular mainnet reorgs, but not so high that it would
+ * noticeably interfere with the pruning mechanism.
+ * */
+static constexpr int PRUNE_LOCK_BUFFER{10};

 /**
  * Mutex to guard access to validation specific variables, such as reading
@@ -2362,12 +2367,24 @@ bool CChainState::FlushStateToDisk(
         CoinsCacheSizeState cache_state = GetCoinsCacheSizeState();
         LOCK(m_blockman.cs_LastBlockFile);
         if (fPruneMode && (m_blockman.m_check_for_pruning || nManualPruneHeight > 0) && !fReindex) {
-            // make sure we don't prune above the blockfilterindexes bestblocks
+            // make sure we don't prune above any of the prune locks bestblocks
             // pruning is height-based
-            int last_prune = m_chain.Height(); // last height we can prune
-            ForEachBlockFilterIndex([&](BlockFilterIndex& index) {
-                last_prune = std::max(1, std::min(last_prune, index.GetSummary().best_block_height));
-            });
+            int last_prune{m_chain.Height()}; // last height we can prune
+            std::optional<std::string> limiting_lock; // prune lock that actually was the limiting factor, only used for logging
+
+            for (const auto& prune_lock : m_blockman.m_prune_locks) {
+                if (prune_lock.second.height_first == std::numeric_limits<int>::max()) continue;
+                // Remove the buffer and one additional block here to get actual height that is outside of the buffer
+                const int lock_height{prune_lock.second.height_first - PRUNE_LOCK_BUFFER - 1};
+                last_prune = std::max(1, std::min(last_prune, lock_height));
+                if (last_prune == lock_height) {
+                    limiting_lock = prune_lock.first;
+                }
+            }
+
+            if (limiting_lock) {
+                LogPrint(BCLog::PRUNE, "%s limited pruning to height %d\n", limiting_lock.value(), last_prune);
+            }

             if (nManualPruneHeight > 0) {
                 LOG_TIME_MILLIS_WITH_CATEGORY("find files to prune (manual)", BCLog::BENCHMARK);
@@ -2622,6 +2639,18 @@ bool CChainState::DisconnectTip(BlockValidationState& state, DisconnectedBlockTr
         dbTx->Commit();
     }
     LogPrint(BCLog::BENCHMARK, "- Disconnect block: %.2fms\n", (GetTimeMicros() - nStart) * MILLI);

+    {
+        // Prune locks that began at or after the tip should be moved backward so they get a chance to reorg
+        const int max_height_first{pindexDelete->nHeight - 1};
+        for (auto& prune_lock : m_blockman.m_prune_locks) {
+            if (prune_lock.second.height_first <= max_height_first) continue;
+
+            prune_lock.second.height_first = max_height_first;
+            LogPrint(BCLog::PRUNE, "%s prune lock moved back to %d\n", prune_lock.first, max_height_first);
+        }
+    }
+
     // Write the chain state to disk, if necessary.
     if (!FlushStateToDisk(state, FlushStateMode::IF_NEEDED)) {
         return false;
@@ -4569,6 +4598,8 @@ void CChainState::LoadExternalBlockFile(
         unsigned int nMaxBlockSize = MaxBlockSize();
         // This takes over fileIn and calls fclose() on it in the CBufferedFile destructor
         CBufferedFile blkdat(fileIn, 2*nMaxBlockSize, nMaxBlockSize+8, SER_DISK, CLIENT_VERSION);
+        // nRewind indicates where to resume scanning in case something goes wrong,
+        // such as a block fails to deserialize.
         uint64_t nRewind = blkdat.GetPos();
         while (!blkdat.eof()) {
             if (ShutdownRequested()) return;
@@ -4592,28 +4623,30 @@ void CChainState::LoadExternalBlockFile(
                 continue;
             } catch (const std::exception&) {
                 // no valid block header found; don't complain
+                // (this happens at the end of every blk.dat file)
                 break;
             }
             try {
-                // read block
-                uint64_t nBlockPos = blkdat.GetPos();
+                // read block header
+                const uint64_t nBlockPos{blkdat.GetPos()};
                 if (dbp)
                     dbp->nPos = nBlockPos;
                 blkdat.SetLimit(nBlockPos + nSize);
-                std::shared_ptr<CBlock> pblock = std::make_shared<CBlock>();
-                CBlock& block = *pblock;
-                blkdat >> block;
-                nRewind = blkdat.GetPos();
-
-                uint256 hash = block.GetHash();
+                CBlockHeader header;
+                blkdat >> header;
+                const uint256 hash{header.GetHash()};
+                // Skip the rest of this block (this may read from disk into memory); position to the marker before the
+                // next block, but it's still possible to rewind to the start of the current block (without a disk read).
+                nRewind = nBlockPos + nSize;
+                blkdat.SkipTo(nRewind);
                 {
                     LOCK(cs_main);
                     // detect out of order blocks, and store them for later
-                    if (hash != m_params.GetConsensus().hashGenesisBlock && !m_blockman.LookupBlockIndex(block.hashPrevBlock)) {
+                    if (hash != m_params.GetConsensus().hashGenesisBlock && !m_blockman.LookupBlockIndex(header.hashPrevBlock)) {
                         LogPrint(BCLog::REINDEX, "%s: Out of order block %s, parent %s not known\n", __func__, hash.ToString(),
-                                 block.hashPrevBlock.ToString());
+                                 header.hashPrevBlock.ToString());
                         if (dbp && blocks_with_unknown_parent) {
-                            blocks_with_unknown_parent->emplace(block.hashPrevBlock, *dbp);
+                            blocks_with_unknown_parent->emplace(header.hashPrevBlock, *dbp);
                         }
                         continue;
                     }
@@ -4621,13 +4654,19 @@ void CChainState::LoadExternalBlockFile(
                     // process in case the block isn't known yet
                     const CBlockIndex* pindex = m_blockman.LookupBlockIndex(hash);
                     if (!pindex || (pindex->nStatus & BLOCK_HAVE_DATA) == 0) {
-                        BlockValidationState state;
-                        if (AcceptBlock(pblock, state, nullptr, true, dbp, nullptr)) {
-                            nLoaded++;
-                        }
-                        if (state.IsError()) {
-                            break;
-                        }
+                        // This block can be processed immediately; rewind to its start, read and deserialize it.
+                        blkdat.SetPos(nBlockPos);
+                        std::shared_ptr<CBlock> pblock{std::make_shared<CBlock>()};
+                        blkdat >> *pblock;
+                        nRewind = blkdat.GetPos();
+
+                        BlockValidationState state;
+                        if (AcceptBlock(pblock, state, nullptr, true, dbp, nullptr)) {
+                            nLoaded++;
+                        }
+                        if (state.IsError()) {
+                            break;
+                        }
                     } else if (hash != m_params.GetConsensus().hashGenesisBlock && pindex->nHeight % 1000 == 0) {
                         LogPrint(BCLog::REINDEX, "Block Import: already had block %s at height %d\n", hash.ToString(), pindex->nHeight);
                     }
@@ -4671,7 +4710,18 @@ void CChainState::LoadExternalBlockFile(
                     }
                 }
             } catch (const std::exception& e) {
-                LogPrintf("%s: Deserialize or I/O error - %s\n", __func__, e.what());
+                // historical bugs added extra data to the block files that does not deserialize cleanly.
+                // commonly this data is between readable blocks, but it does not really matter. such data is not fatal to the import process.
+                // the code that reads the block files deals with invalid data by simply ignoring it.
+                // it continues to search for the next {4 byte magic message start bytes + 4 byte length + block} that does deserialize cleanly
+                // and passes all of the other block validation checks dealing with POW and the merkle root, etc...
+                // we merely note with this informational log message when unexpected data is encountered.
+                // we could also be experiencing a storage system read error, or a read of a previous bad write. these are possible, but
+                // less likely scenarios. we don't have enough information to tell a difference here.
+                // the reindex process is not the place to attempt to clean and/or compact the block files. if so desired, a studious node operator
+                // may use knowledge of the fact that the block files are not entirely pristine in order to prepare a set of pristine, and
+                // perhaps ordered, block files for later reindexing.
+                LogPrint(BCLog::REINDEX, "%s: unexpected data at file offset 0x%x - %s. continuing\n", __func__, (nRewind - 1), e.what());
             }
         }
     } catch (const std::runtime_error& e) {
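The prune-lock arithmetic in FlushStateToDisk() above reduces to a small calculation: each lock's starting height, minus PRUNE_LOCK_BUFFER and one extra block, caps the last height we may prune, and the result never drops below 1. A hedged Python sketch (the function name is hypothetical; it mirrors the loop in the diff):

```python
# Sketch of the prune-lock cap computed in FlushStateToDisk() above.
PRUNE_LOCK_BUFFER = 10  # blocks kept below the deepest prune lock

def last_prunable_height(tip_height, prune_locks):
    """prune_locks maps lock name -> height_first of that lock."""
    last_prune = tip_height
    for name, height_first in prune_locks.items():
        # Remove the buffer and one additional block to land outside it.
        lock_height = height_first - PRUNE_LOCK_BUFFER - 1
        last_prune = max(1, min(last_prune, lock_height))
    return last_prune

# With the basic block filter index synced to height 700, pruning stops at
# 689 -- matching the 'limited pruning to height 689' log asserted in the
# functional test below.
assert last_prunable_height(700, {"basic block filter index": 700}) == 689
```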


@@ -200,6 +200,7 @@ protected:
      * has been received and connected to the headers tree, though not validated yet */
     virtual void NewPoWValidBlock(const CBlockIndex *pindex, const std::shared_ptr<const CBlock>& block) {};
     friend class CMainSignals;
+    friend class ValidationInterfaceTest;
 };

 struct MainSignalsInstance;


@@ -1135,7 +1135,7 @@ static std::string RecurseImportData(const CScript& script, ImportData& import_d
     case TxoutType::NONSTANDARD:
         return "unrecognized script";
     } // no default case, so the compiler can warn about missing cases
-    CHECK_NONFATAL(false);
+    NONFATAL_UNREACHABLE();
 }

 static UniValue ProcessImportLegacy(ImportData& import_data, std::map<CKeyID, CPubKey>& pubkey_map, std::map<CKeyID, CKey>& privkey_map, std::set<CScript>& script_pub_keys, bool& have_solving_data, const UniValue& data, std::vector<CKeyID>& ordered_pubkeys)


@@ -1,80 +0,0 @@
#!/usr/bin/env python3
# Copyright (c) 2020 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test blockfilterindex in conjunction with prune."""
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import (
assert_equal,
assert_greater_than,
assert_raises_rpc_error,
)
from test_framework.governance import EXPECTED_STDERR_NO_GOV_PRUNE
class FeatureBlockfilterindexPruneTest(BitcoinTestFramework):
def set_test_params(self):
self.num_nodes = 1
self.extra_args = [["-fastprune", "-prune=1", "-blockfilterindex=1", "-testactivationheight=v20@2000"]]
def sync_index(self, height):
expected = {'basic block filter index': {'synced': True, 'best_block_height': height}}
self.wait_until(lambda: self.nodes[0].getindexinfo() == expected)
def run_test(self):
self.log.info("check if we can access a blockfilter when pruning is enabled but no blocks are actually pruned")
self.sync_index(height=200)
assert_greater_than(len(self.nodes[0].getblockfilter(self.nodes[0].getbestblockhash())['filter']), 0)
self.generate(self.nodes[0], 500)
self.sync_index(height=700)
self.log.info("prune some blocks")
pruneheight = self.nodes[0].pruneblockchain(400)
# the prune heights used here and below are magic numbers that are determined by the
# thresholds at which block files wrap, so they depend on disk serialization and default block file size.
assert_equal(pruneheight, 366)
self.log.info("check if we can access the tips blockfilter when we have pruned some blocks")
assert_greater_than(len(self.nodes[0].getblockfilter(self.nodes[0].getbestblockhash())['filter']), 0)
self.log.info("check if we can access the blockfilter of a pruned block")
assert_greater_than(len(self.nodes[0].getblockfilter(self.nodes[0].getblockhash(2))['filter']), 0)
# mine and sync index up to a height that will later be the pruneheight
self.generate(self.nodes[0], 298)
self.sync_index(height=998)
self.log.info("start node without blockfilterindex")
self.restart_node(0, extra_args=["-fastprune", "-prune=1", "-testactivationheight=v20@2000"], expected_stderr=EXPECTED_STDERR_NO_GOV_PRUNE)
self.log.info("make sure accessing the blockfilters throws an error")
assert_raises_rpc_error(-1, "Index is not enabled for filtertype basic", self.nodes[0].getblockfilter, self.nodes[0].getblockhash(2))
self.generate(self.nodes[0], 502)
self.log.info("prune exactly up to the blockfilterindexes best block while blockfilters are disabled")
pruneheight_2 = self.nodes[0].pruneblockchain(1000)
assert_equal(pruneheight_2, 932)
self.restart_node(0, extra_args=["-fastprune", "-prune=1", "-blockfilterindex=1", "-testactivationheight=v20@2000"], expected_stderr=EXPECTED_STDERR_NO_GOV_PRUNE)
self.log.info("make sure that we can continue with the partially synced index after having pruned up to the index height")
self.sync_index(height=1500)
self.log.info("prune below the blockfilterindexes best block while blockfilters are disabled")
self.restart_node(0, extra_args=["-fastprune", "-prune=1", "-testactivationheight=v20@2000"], expected_stderr=EXPECTED_STDERR_NO_GOV_PRUNE)
self.generate(self.nodes[0], 1000)
pruneheight_3 = self.nodes[0].pruneblockchain(2000)
assert_greater_than(pruneheight_3, pruneheight_2)
self.stop_node(0, expected_stderr=EXPECTED_STDERR_NO_GOV_PRUNE)
self.log.info("make sure we get an init error when starting the node again with block filters")
self.nodes[0].assert_start_raises_init_error(
extra_args=["-fastprune", "-prune=1", "-blockfilterindex=1"],
expected_msg=f"{EXPECTED_STDERR_NO_GOV_PRUNE}\nError: basic block filter index best block of the index goes beyond pruned data. Please disable the index or reindex (which will download the whole blockchain again)",
)
self.log.info("make sure the node starts again with the -reindex arg")
self.start_node(0, extra_args=["-fastprune", "-prune=1", "-blockfilterindex", "-reindex"])
self.stop_nodes(expected_stderr=EXPECTED_STDERR_NO_GOV_PRUNE)
if __name__ == '__main__':
FeatureBlockfilterindexPruneTest().main()


@@ -0,0 +1,161 @@
#!/usr/bin/env python3
# Copyright (c) 2020 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test indices in conjunction with prune."""
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import (
assert_equal,
assert_greater_than,
assert_raises_rpc_error,
)
from test_framework.governance import EXPECTED_STDERR_NO_GOV_PRUNE
DEPLOYMENT_ARG = "-testactivationheight=v20@3000"
class FeatureIndexPruneTest(BitcoinTestFramework):
def set_test_params(self):
self.num_nodes = 4
self.extra_args = [
["-fastprune", "-prune=1", "-blockfilterindex=1", DEPLOYMENT_ARG],
["-fastprune", "-prune=1", "-coinstatsindex=1", DEPLOYMENT_ARG],
["-fastprune", "-prune=1", "-blockfilterindex=1", "-coinstatsindex=1", DEPLOYMENT_ARG],
[DEPLOYMENT_ARG]
]
def sync_index(self, height):
expected_filter = {
'basic block filter index': {'synced': True, 'best_block_height': height},
}
self.wait_until(lambda: self.nodes[0].getindexinfo() == expected_filter)
expected_stats = {
'coinstatsindex': {'synced': True, 'best_block_height': height}
}
self.wait_until(lambda: self.nodes[1].getindexinfo() == expected_stats)
expected = {**expected_filter, **expected_stats}
self.wait_until(lambda: self.nodes[2].getindexinfo() == expected)
def reconnect_nodes(self):
self.connect_nodes(0,1)
self.connect_nodes(0,2)
self.connect_nodes(0,3)
def mine_batches(self, blocks):
n = blocks // 250
for _ in range(n):
self.generate(self.nodes[0], 250)
self.generate(self.nodes[0], blocks % 250)
self.sync_blocks()
def restart_without_indices(self):
for i in range(3):
self.restart_node(i, extra_args=["-fastprune", "-prune=1", DEPLOYMENT_ARG], expected_stderr=EXPECTED_STDERR_NO_GOV_PRUNE)
self.reconnect_nodes()
def run_test(self):
filter_nodes = [self.nodes[0], self.nodes[2]]
stats_nodes = [self.nodes[1], self.nodes[2]]
self.log.info("check if we can access blockfilters and coinstats when pruning is enabled but no blocks are actually pruned")
self.sync_index(height=200)
tip = self.nodes[0].getbestblockhash()
for node in filter_nodes:
assert_greater_than(len(node.getblockfilter(tip)['filter']), 0)
for node in stats_nodes:
assert(node.gettxoutsetinfo(hash_type="muhash", hash_or_height=tip)['muhash'])
self.mine_batches(500)
self.sync_index(height=700)
self.log.info("prune some blocks")
for node in self.nodes[:2]:
with node.assert_debug_log(['limited pruning to height 689']):
pruneheight_new = node.pruneblockchain(400)
# the prune heights used here and below are magic numbers that are determined by the
# thresholds at which block files wrap, so they depend on disk serialization and default block file size.
assert_equal(pruneheight_new, 366)
self.log.info("check if we can access the tips blockfilter and coinstats when we have pruned some blocks")
tip = self.nodes[0].getbestblockhash()
for node in filter_nodes:
assert_greater_than(len(node.getblockfilter(tip)['filter']), 0)
for node in stats_nodes:
assert(node.gettxoutsetinfo(hash_type="muhash", hash_or_height=tip)['muhash'])
self.log.info("check if we can access the blockfilter and coinstats of a pruned block")
height_hash = self.nodes[0].getblockhash(2)
for node in filter_nodes:
assert_greater_than(len(node.getblockfilter(height_hash)['filter']), 0)
for node in stats_nodes:
assert(node.gettxoutsetinfo(hash_type="muhash", hash_or_height=height_hash)['muhash'])
# mine and sync index up to a height that will later be the pruneheight
self.generate(self.nodes[0], 298)
self.sync_index(height=998)
self.restart_without_indices()
self.log.info("make sure trying to access the indices throws errors")
for node in filter_nodes:
msg = "Index is not enabled for filtertype basic"
assert_raises_rpc_error(-1, msg, node.getblockfilter, height_hash)
for node in stats_nodes:
msg = "Querying specific block heights requires coinstatsindex"
assert_raises_rpc_error(-8, msg, node.gettxoutsetinfo, "muhash", height_hash)
self.mine_batches(502)
self.log.info("prune exactly up to the indices best blocks while the indices are disabled")
for i in range(3):
pruneheight_2 = self.nodes[i].pruneblockchain(1000)
assert_equal(pruneheight_2, 932)
# Restart the nodes again with the indices activated
self.restart_node(i, extra_args=self.extra_args[i], expected_stderr=EXPECTED_STDERR_NO_GOV_PRUNE)
self.log.info("make sure that we can continue with the partially synced indices after having pruned up to the index height")
self.sync_index(height=1500)
self.log.info("prune further than the indices best blocks while the indices are disabled")
self.restart_without_indices()
self.mine_batches(1000)
for i in range(3):
pruneheight_3 = self.nodes[i].pruneblockchain(2000)
assert_greater_than(pruneheight_3, pruneheight_2)
self.stop_node(i, expected_stderr=EXPECTED_STDERR_NO_GOV_PRUNE)
self.log.info("make sure we get an init error when starting the nodes again with the indices")
filter_msg = "Error: basic block filter index best block of the index goes beyond pruned data. Please disable the index or reindex (which will download the whole blockchain again)"
stats_msg = "Error: coinstatsindex best block of the index goes beyond pruned data. Please disable the index or reindex (which will download the whole blockchain again)"
for i, msg in enumerate([filter_msg, stats_msg, filter_msg]):
self.nodes[i].assert_start_raises_init_error(extra_args=self.extra_args[i], expected_msg=f"{EXPECTED_STDERR_NO_GOV_PRUNE}\n{msg}")
self.log.info("make sure the nodes start again with the indices and an additional -reindex arg")
for i in range(3):
restart_args = self.extra_args[i]+["-reindex"]
self.restart_node(i, extra_args=restart_args, expected_stderr=EXPECTED_STDERR_NO_GOV_PRUNE)
# The nodes need to be reconnected to the non-pruning node upon restart, otherwise they will be stuck
self.connect_nodes(i, 3)
self.sync_blocks(timeout=300)
self.sync_index(height=2500)
for node in self.nodes[:2]:
with node.assert_debug_log(['limited pruning to height 2489']):
pruneheight_new = node.pruneblockchain(2500)
assert_equal(pruneheight_new, 2197)
self.log.info("ensure that prune locks don't prevent indices from failing in a reorg scenario")
with self.nodes[0].assert_debug_log(['basic block filter index prune lock moved back to 2480']):
self.nodes[3].invalidateblock(self.nodes[0].getblockhash(2480))
self.generate(self.nodes[3], 30)
self.sync_blocks()
for idx in range(self.num_nodes):
self.nodes[idx].stop_node(expected_stderr=EXPECTED_STDERR_NO_GOV_PRUNE if idx != 3 else "")
if __name__ == '__main__':
FeatureIndexPruneTest().main()


@ -33,6 +33,13 @@ from test_framework.util import (
# compatible with pruning based on key creation time. # compatible with pruning based on key creation time.
TIMESTAMP_WINDOW = 2 * 60 * 60 TIMESTAMP_WINDOW = 2 * 60 * 60
DEPLOYMENT_ARGS = [
"-dip3params=2000:2000",
"-testactivationheight=dip0001@2000",
"-testactivationheight=dip0008@2000",
"-testactivationheight=v20@2000",
]
def mine_large_blocks(node, n): def mine_large_blocks(node, n):
# Make a large scriptPubKey for the coinbase transaction. This is OP_RETURN # Make a large scriptPubKey for the coinbase transaction. This is OP_RETURN
# followed by 950k of OP_NOP. This would be non-standard in a non-coinbase # followed by 950k of OP_NOP. This would be non-standard in a non-coinbase
@@ -87,16 +94,16 @@ class PruneTest(BitcoinTestFramework):
         # Create nodes 0 and 1 to mine.
         # Create node 2 to test pruning.
-        self.full_node_default_args = ["-dip3params=2000:2000", "-testactivationheight=dip0008@2000", "-maxreceivebuffer=20000", "-blockmaxsize=999000", "-checkblocks=5"]
+        self.full_node_default_args = ["-maxreceivebuffer=20000", "-blockmaxsize=999000", "-checkblocks=5"] + DEPLOYMENT_ARGS
         # Create nodes 3 and 4 to test manual pruning (they will be re-started with manual pruning later)
         # Create nodes 5 to test wallet in prune mode, but do not connect
         self.extra_args = [
             self.full_node_default_args,
             self.full_node_default_args,
-            ["-dip3params=2000:2000", "-testactivationheight=dip0008@2000", "-disablegovernance","-txindex=0","-maxreceivebuffer=20000","-prune=550"],
-            ["-dip3params=2000:2000", "-testactivationheight=dip0008@2000", "-disablegovernance","-txindex=0","-maxreceivebuffer=20000","-blockmaxsize=999000"],
-            ["-dip3params=2000:2000", "-testactivationheight=dip0008@2000", "-disablegovernance","-txindex=0","-maxreceivebuffer=20000","-blockmaxsize=999000"],
-            ["-dip3params=2000:2000", "-testactivationheight=dip0008@2000", "-disablegovernance","-txindex=0","-prune=550"],
+            ["-disablegovernance","-txindex=0","-maxreceivebuffer=20000","-prune=550"] + DEPLOYMENT_ARGS,
+            ["-disablegovernance","-txindex=0","-maxreceivebuffer=20000","-blockmaxsize=999000"] + DEPLOYMENT_ARGS,
+            ["-disablegovernance","-txindex=0","-maxreceivebuffer=20000","-blockmaxsize=999000"] + DEPLOYMENT_ARGS,
+            ["-disablegovernance","-txindex=0","-prune=550"] + DEPLOYMENT_ARGS,
         ]
 
         self.rpc_timeout = 120
@@ -143,8 +150,8 @@ class PruneTest(BitcoinTestFramework):
             extra_args=['-prune=550', '-txindex'],
         )
         self.nodes[0].assert_start_raises_init_error(
-            expected_msg='Error: Prune mode is incompatible with -coinstatsindex.',
-            extra_args=['-prune=550', '-coinstatsindex'],
+            expected_msg='Error: Prune mode is incompatible with -reindex-chainstate. Use full -reindex instead.',
+            extra_args=['-prune=550', '-reindex-chainstate'],
         )
         self.nodes[0].assert_start_raises_init_error(
             expected_msg='Error: Prune mode is incompatible with -disablegovernance=false.',
@@ -213,7 +220,7 @@ class PruneTest(BitcoinTestFramework):
         self.log.info("New best height: %d" % self.nodes[1].getblockcount())
 
         # Mine one block to avoid automatic recovery from forks on restart
-        self.generate(self.nodes[1], 1)
+        self.generate(self.nodes[1], 1, sync_fun=self.no_op)
         # Disconnect node1 and generate the new chain
         self.disconnect_nodes(0, 1)
         self.disconnect_nodes(1, 2)
@@ -283,13 +290,13 @@ class PruneTest(BitcoinTestFramework):
     def manual_test(self, node_number, use_timestamp):
         # at this point, node has 995 blocks and has not yet run in prune mode
-        self.start_node(node_number, extra_args=["-dip3params=2000:2000", "-testactivationheight=dip0008@2000", "-disablegovernance", "-txindex=0"])
+        self.start_node(node_number, extra_args=["-disablegovernance", "-txindex=0"] + DEPLOYMENT_ARGS)
         node = self.nodes[node_number]
         assert_equal(node.getblockcount(), 995)
         assert_raises_rpc_error(-1, "Cannot prune blocks because node is not in prune mode", node.pruneblockchain, 500)
 
         # now re-start in manual pruning mode
-        self.restart_node(node_number, extra_args=["-dip3params=2000:2000", "-testactivationheight=dip0008@2000", "-disablegovernance", "-txindex=0", "-prune=1"], expected_stderr=EXPECTED_STDERR_NO_GOV)
+        self.restart_node(node_number, extra_args=["-disablegovernance", "-txindex=0", "-prune=1"] + DEPLOYMENT_ARGS, expected_stderr=EXPECTED_STDERR_NO_GOV)
         node = self.nodes[node_number]
         assert_equal(node.getblockcount(), 995)
@@ -358,14 +365,14 @@ class PruneTest(BitcoinTestFramework):
         assert not has_block(3), "blk00003.dat is still there, should be pruned by now"
 
         # stop node, start back up with auto-prune at 550 MiB, make sure still runs
-        self.restart_node(node_number, extra_args=["-dip3params=2000:2000", "-testactivationheight=dip0008@2000", "-disablegovernance", "-txindex=0", "-prune=550"], expected_stderr=EXPECTED_STDERR_NO_GOV_PRUNE)
+        self.restart_node(node_number, extra_args=["-disablegovernance", "-txindex=0", "-prune=550"] + DEPLOYMENT_ARGS, expected_stderr=EXPECTED_STDERR_NO_GOV_PRUNE)
         self.log.info("Success")
 
     def wallet_test(self):
         # check that the pruning node's wallet is still in good shape
         self.log.info("Stop and start pruning node to trigger wallet rescan")
-        self.restart_node(2, extra_args=["-dip3params=2000:2000", "-testactivationheight=dip0008@2000", "-disablegovernance", "-txindex=0", "-prune=550"], expected_stderr=EXPECTED_STDERR_NO_GOV_PRUNE)
+        self.restart_node(2, extra_args=["-disablegovernance", "-txindex=0", "-prune=550"] + DEPLOYMENT_ARGS, expected_stderr=EXPECTED_STDERR_NO_GOV_PRUNE)
         self.log.info("Success")
 
         # check that wallet loads successfully when restarting a pruned node after IBD.
@@ -374,7 +381,7 @@ class PruneTest(BitcoinTestFramework):
         self.connect_nodes(0, 5)
         nds = [self.nodes[0], self.nodes[5]]
         self.sync_blocks(nds, wait=5, timeout=300)
-        self.restart_node(5, extra_args=["-dip3params=2000:2000", "-testactivationheight=dip0008@2000", "-disablegovernance", "-txindex=0", "-prune=550"], expected_stderr=EXPECTED_STDERR_NO_GOV_PRUNE)  # restart to trigger rescan
+        self.restart_node(5, extra_args=["-disablegovernance", "-txindex=0", "-prune=550"] + DEPLOYMENT_ARGS, expected_stderr=EXPECTED_STDERR_NO_GOV_PRUNE)  # restart to trigger rescan
         self.log.info("Success")
 
     def run_test(self):
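The refactor above folds the repeated deployment flags into a single `DEPLOYMENT_ARGS` list, and the related commit pushes every activation height to 2000 so none of these forks activate within the heights this test mines. The flag format is `name@height`; a small illustrative parser (not part of the test framework) for sanity-checking such a list:

```python
DEPLOYMENT_ARGS = [
    "-dip3params=2000:2000",
    "-testactivationheight=dip0001@2000",
    "-testactivationheight=dip0008@2000",
    "-testactivationheight=v20@2000",
]


def activation_heights(args):
    """Collect -testactivationheight=name@height flags into a dict."""
    heights = {}
    for arg in args:
        if arg.startswith("-testactivationheight="):
            name, _, height = arg.partition("=")[2].partition("@")
            heights[name] = int(height)
    return heights


# Every fork is pushed past the ~1000-block chains this test builds.
assert all(h >= 2000 for h in activation_heights(DEPLOYMENT_ARGS).values())
```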


@@ -7,9 +7,12 @@
 - Start a single node and generate 3 blocks.
 - Stop the node and restart it with -reindex. Verify that the node has reindexed up to block 3.
 - Stop the node and restart it with -reindex-chainstate. Verify that the node has reindexed up to block 3.
+- Verify that out-of-order blocks are correctly processed, see LoadExternalBlockFile()
 """
 
+import os
 from test_framework.test_framework import BitcoinTestFramework
+from test_framework.p2p import MAGIC_BYTES
 from test_framework.util import assert_equal
@@ -27,6 +30,50 @@ class ReindexTest(BitcoinTestFramework):
         assert_equal(self.nodes[0].getblockcount(), blockcount)  # start_node is blocking on reindex
         self.log.info("Success")
 
+    # Check that blocks can be processed out of order
+    def out_of_order(self):
+        # The previous test created 24 blocks
+        assert_equal(self.nodes[0].getblockcount(), 24)
+        self.stop_nodes()
+
+        # In this test environment, blocks will always be in order (since
+        # we're generating them rather than getting them from peers), so to
+        # test out-of-order handling, swap blocks 1 and 2 on disk.
+        blk0 = os.path.join(self.nodes[0].datadir, self.nodes[0].chain, 'blocks', 'blk00000.dat')
+        with open(blk0, 'r+b') as bf:
+            # Read at least the first few blocks (including genesis)
+            b = bf.read(2000)
+
+            # Find the offsets of blocks 2, 3, and 4 (the first 3 blocks beyond genesis)
+            # by searching for the regtest marker bytes (see pchMessageStart).
+            def find_block(b, start):
+                return b.find(MAGIC_BYTES["regtest"], start)+4
+
+            genesis_start = find_block(b, 0)
+            assert_equal(genesis_start, 4)
+            b2_start = find_block(b, genesis_start)
+            b3_start = find_block(b, b2_start)
+            b4_start = find_block(b, b3_start)
+
+            # Blocks 2 and 3 should be the same size.
+            assert_equal(b3_start-b2_start, b4_start-b3_start)
+
+            # Swap the second and third blocks (don't disturb the genesis block).
+            bf.seek(b2_start)
+            bf.write(b[b3_start:b4_start])
+            bf.write(b[b2_start:b3_start])
+
+        # The reindexing code should detect and accommodate out of order blocks.
+        with self.nodes[0].assert_debug_log([
+            'LoadExternalBlockFile: Out of order block',
+            'LoadExternalBlockFile: Processing out of order child',
+        ]):
+            extra_args = [["-reindex"]]
+            self.start_nodes(extra_args)
+
+        # All blocks should be accepted and processed.
+        assert_equal(self.nodes[0].getblockcount(), 24)
+
     def run_test(self):
         for txindex in [0, 1]:
             self.reindex(False, txindex)
@@ -34,5 +81,8 @@ class ReindexTest(BitcoinTestFramework):
             self.reindex(False, txindex)
             self.reindex(True, txindex)
 
+        self.out_of_order()
+
 if __name__ == '__main__':
     ReindexTest().main()
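The swap above works because blk*.dat files are a flat sequence of records, each framed as 4 magic bytes (pchMessageStart) followed by a 4-byte little-endian payload length and the serialized block; that framing is what lets the test locate block boundaries by scanning for `MAGIC_BYTES`. A standalone sketch of walking that layout, demoed on synthetic data rather than a real block file (`REGTEST_MAGIC` here is Bitcoin's regtest value; the framework's `MAGIC_BYTES` carries the chain-appropriate constant):

```python
import struct
import tempfile

REGTEST_MAGIC = bytes.fromhex("fabfb5da")  # Bitcoin regtest pchMessageStart


def iter_block_records(path):
    """Yield (offset, length, payload) for each record in a blk*.dat-style file."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8 or header[:4] != REGTEST_MAGIC:
                return  # EOF, or the zero-padded preallocated tail
            length = struct.unpack("<I", header[4:])[0]
            yield f.tell() - 8, length, f.read(length)


# Demo: two fake "blocks" followed by padding, as a stand-in for blk00000.dat.
with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(REGTEST_MAGIC + struct.pack("<I", 3) + b"abc")
    tf.write(REGTEST_MAGIC + struct.pack("<I", 2) + b"xy")
    tf.write(b"\x00" * 8)
records = list(iter_block_records(tf.name))
```

Note the test's `find_block` heuristic can misfire if the magic sequence happens to occur inside a block's payload, which is why it only trusts offsets near the start of the file.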


@@ -0,0 +1,97 @@
#!/usr/bin/env python3
# Copyright (c) 2021 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test that legacy txindex will be disabled on upgrade.

Previous releases are required by this test, see test/README.md.
"""

import os
import shutil

from test_framework.test_framework import BitcoinTestFramework
from test_framework.wallet import MiniWallet


class MempoolCompatibilityTest(BitcoinTestFramework):
    def set_test_params(self):
        self.num_nodes = 3
        self.extra_args = [
            ["-reindex", "-txindex"],
            ["-txindex=0"],
            ["-txindex=0"],
        ]

    def skip_test_if_missing_module(self):
        self.skip_if_no_previous_releases()

    def setup_network(self):
        self.add_nodes(
            self.num_nodes,
            self.extra_args,
            versions=[
                170003,  # Last release with legacy txindex
                None,  # For MiniWallet, without migration code
                18020200,  # Any release with migration code (0.18.x - 21.x)
                # We are using v18.2.2 to avoid MigrateDBIfNeeded(2) routines as
                # they don't handle a no-upgrade case correctly
            ],
        )
        # Delete v18.2.2 cached datadir to avoid making a legacy version try to
        # make sense of our current database formats
        shutil.rmtree(os.path.join(self.nodes[2].datadir, self.chain))
        self.start_nodes()
        self.connect_nodes(0, 1)
        self.connect_nodes(1, 2)

    def run_test(self):
        mini_wallet = MiniWallet(self.nodes[1])
        mini_wallet.rescan_utxos()
        spend_utxo = mini_wallet.get_utxo()
        mini_wallet.send_self_transfer(from_node=self.nodes[1], utxo_to_spend=spend_utxo)
        self.generate(self.nodes[1], 1)

        self.log.info("Check legacy txindex")
        self.nodes[0].getrawtransaction(txid=spend_utxo["txid"])  # Requires -txindex

        self.stop_nodes()
        legacy_chain_dir = os.path.join(self.nodes[0].datadir, self.chain)

        self.log.info("Migrate legacy txindex")
        migrate_chain_dir = os.path.join(self.nodes[2].datadir, self.chain)
        shutil.rmtree(migrate_chain_dir)
        shutil.copytree(legacy_chain_dir, migrate_chain_dir)
        with self.nodes[2].assert_debug_log([
            "Upgrading txindex database...",
            "txindex is enabled at height 200",
        ]):
            self.start_node(2, extra_args=["-txindex"])
        self.nodes[2].getrawtransaction(txid=spend_utxo["txid"])  # Requires -txindex

        self.log.info("Drop legacy txindex")
        drop_index_chain_dir = os.path.join(self.nodes[1].datadir, self.chain)
        shutil.rmtree(drop_index_chain_dir)
        shutil.copytree(legacy_chain_dir, drop_index_chain_dir)
        self.nodes[1].assert_start_raises_init_error(
            extra_args=["-txindex"],
            expected_msg="Error: The block index db contains a legacy 'txindex'. To clear the occupied disk space, run a full -reindex, otherwise ignore this error. This error message will not be displayed again.",
        )
        # Build txindex from scratch and check there is no error this time
        self.start_node(1, extra_args=["-txindex"])
        self.nodes[2].getrawtransaction(txid=spend_utxo["txid"])  # Requires -txindex

        self.stop_nodes()

        self.log.info("Check migrated txindex can not be read by legacy node")
        err_msg = f": You need to rebuild the database using -reindex to change -txindex.{os.linesep}Please restart with -reindex or -reindex-chainstate to recover."
        shutil.rmtree(legacy_chain_dir)
        shutil.copytree(migrate_chain_dir, legacy_chain_dir)
        self.nodes[0].assert_start_raises_init_error(extra_args=["-txindex"], expected_msg=err_msg)
        shutil.rmtree(legacy_chain_dir)
        shutil.copytree(drop_index_chain_dir, legacy_chain_dir)
        self.nodes[0].assert_start_raises_init_error(extra_args=["-txindex"], expected_msg=err_msg)


if __name__ == "__main__":
    MempoolCompatibilityTest().main()


@@ -27,7 +27,7 @@ class RpcMiscTest(BitcoinTestFramework):
         self.log.info("test CHECK_NONFATAL")
         assert_raises_rpc_error(
             -1,
-            'Internal bug detected: \'request.params[9].get_str() != "trigger_internal_bug"\'',
+            'Internal bug detected: "request.params[9].get_str() != "trigger_internal_bug""',
             lambda: node.echo(arg9='trigger_internal_bug'),
         )
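The new expected message comes from bitcoin#24812, which turns `CHECK_NONFATAL` into an identity function: it stringifies the failing condition and, on success, returns the checked value unchanged so checks can be used inline. A rough Python analogue of that contract (hypothetical names, purely illustrative — the real macro is C++ and stringifies via the preprocessor):

```python
class NonFatalCheckError(RuntimeError):
    """Stand-in for the exception CHECK_NONFATAL converts into an RPC error."""


def check_nonfatal(value, expr):
    """Illustrative analogue of CHECK_NONFATAL: pass the value through on
    success; on failure, report the stringified condition."""
    if not value:
        raise NonFatalCheckError(f'Internal bug detected: "{expr}"')
    return value


# Identity on success means the check composes inline in expressions:
params = check_nonfatal([1, 2, 3], "params is not empty")
```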


@@ -87,6 +87,7 @@ EXTENDED_SCRIPTS = [
     # Longest test should go first, to favor running tests in parallel
     'feature_pruning.py', # NOTE: Prune mode is incompatible with -txindex, should work with governance validation disabled though.
     'feature_dbcrash.py',
+    'feature_index_prune.py',
 ]
 
 BASE_SCRIPTS = [
@@ -357,6 +358,7 @@ BASE_SCRIPTS = [
     'rpc_deriveaddresses.py --usecli',
     'p2p_ping.py',
     'rpc_scantxoutset.py',
+    'feature_txindex_compatibility.py',
     'feature_logging.py',
     'feature_anchors.py',
     'feature_coinstatsindex.py',
@@ -376,7 +378,6 @@ BASE_SCRIPTS = [
     'rpc_help.py',
     'feature_dirsymlinks.py',
     'feature_help.py',
-    'feature_blockfilterindex_prune.py'
     # Don't append tests at the end to avoid merge conflicts
     # Put them in a random line within the section that fits their approximate run-time
 ]


@@ -11,8 +11,6 @@ export LC_ALL=C
 EXPECTED_CIRCULAR_DEPENDENCIES=(
     "chainparamsbase -> util/system -> chainparamsbase"
     "node/blockstorage -> validation -> node/blockstorage"
-    "index/blockfilterindex -> node/blockstorage -> validation -> index/blockfilterindex"
-    "index/base -> validation -> index/blockfilterindex -> index/base"
     "index/coinstatsindex -> node/coinstats -> index/coinstatsindex"
     "policy/fees -> txmempool -> policy/fees"
     "qt/addresstablemodel -> qt/walletmodel -> qt/addresstablemodel"