Merge #6067: backport: merge bitcoin#21148, #21327, #23970, #24021, #24543, #26844, #25325, #28165, partial bitcoin#20524, #26036, #27981 (networking backports: part 7)

76a458e5f9 fmt: apply formatting suggestions from `clang-format-diff.py` (Kittywhiskers Van Gogh)
63962ec475 merge bitcoin#28165: transport abstraction (Kittywhiskers Van Gogh)
c6b9186e69 merge bitcoin#25325: Add pool based memory resource (Kittywhiskers Van Gogh)
8c986d6b08 partial bitcoin#27981: Fix potential network stalling bug (Kittywhiskers Van Gogh)
13f6dc1b27 merge bitcoin#26844: Pass MSG_MORE flag when sending non-final network messages (Kittywhiskers Van Gogh)
caaa0fda01 net: use `std::deque` for `vSendMsg` instead of `std::list` (Kittywhiskers Van Gogh)
2ecba6ba5f partial bitcoin#26036: add NetEventsInterface::g_msgproc_mutex (Kittywhiskers Van Gogh)
f6c943922f merge bitcoin#24543: Move remaining globals into PeerManagerImpl (Kittywhiskers Van Gogh)
dbe41ea141 refactor: move object request logic to `PeerManagerImpl` (Kittywhiskers Van Gogh)
112c4e0a16 merge bitcoin#24021: Rename and move PoissonNextSend functions (Kittywhiskers Van Gogh)
6d690ede82 merge bitcoin#23970: Remove pointless and confusing shift in RelayAddress (Kittywhiskers Van Gogh)
87205f26b5 merge bitcoin#21327: ignore transactions while in IBD (Kittywhiskers Van Gogh)
51ad8e4dde merge bitcoin#21148: Split orphan handling from net_processing into txorphanage (Kittywhiskers Van Gogh)
cbff29a630 partial bitcoin#20524: Move MIN_VERSION_SUPPORTED to p2p.py (Kittywhiskers Van Gogh)

Pull request description:

  ## Additional Information

  * Dependent on https://github.com/dashpay/dash/pull/6098

  * Dependent on https://github.com/dashpay/dash/pull/6233

  * `p2p_ibd_txrelay.py` was first introduced in [bitcoin#19423](https://github.com/bitcoin/bitcoin/pull/19423) to test feefilter logic. As Dash lacks feefilter capabilities, that backport was skipped, but because the tests introduced in [bitcoin#21327](https://github.com/bitcoin/bitcoin/pull/21327) exercise capabilities that are present in Dash, a minimal version of `p2p_ibd_txrelay.py` has been committed in.

  * `vSendMsg` was originally a `std::deque` and, as an optimization, was changed to a `std::list` in 027a852a ([dash#3398](https://github.com/dashpay/dash/pull/3398)). This change prevents us from backporting [bitcoin#26844](https://github.com/bitcoin/bitcoin/pull/26844), as doing so introduces build failures. The optimization has been reverted to make way for the backport.

    <details>

    <summary>Compile failure:</summary>

    ```
    net.cpp:959:20: error: invalid operands to binary expression ('iterator' (aka '_List_iterator<std::vector<unsigned char, std::allocator<unsigned char>>>') and 'int')
                if (it + 1 != node.vSendMsg.end()) {
                    ~~ ^ ~
    /usr/bin/../lib/gcc/x86_64-linux-gnu/9/../../../../include/c++/9/bits/stl_bvector.h:303:3: note: candidate function not viable: no known conversion from 'iterator' (aka '_List_iterator<std::vector<unsigned char, std::allocator<unsigned char>>>') to 'ptrdiff_t' (aka 'long') for 1st argument
      operator+(ptrdiff_t __n, const _Bit_iterator& __x)
    [...]
    1 error generated.
    make[2]: *** [Makefile:11296: libbitcoin_server_a-net.o] Error 1
    make[2]: *** Waiting for unfinished jobs....
    make[2]: Leaving directory '/src/dash/src'
    make[1]: *** [Makefile:19171: all-recursive] Error 1
    make[1]: Leaving directory '/src/dash/src'
    make: *** [Makefile:799: all-recursive] Error 1
    ```

    </details>
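
    The failure above comes down to iterator categories: `std::deque` provides random-access iterators, so `it + 1` compiles, while `std::list` only provides bidirectional iterators, for which `std::next(it)` must be used instead. A minimal standalone sketch of the mismatch (illustrative, not code from this PR):

    ```cpp
    #include <cassert>
    #include <deque>
    #include <iterator>
    #include <list>

    int main()
    {
        std::deque<int> dq{1, 2, 3};
        // Random-access iterators support pointer-style arithmetic.
        assert(*(dq.begin() + 1) == 2);

        std::list<int> ls{1, 2, 3};
        // `ls.begin() + 1` would not compile: list iterators are only
        // bidirectional. std::next works for both iterator categories.
        assert(*std::next(ls.begin()) == 2);
        return 0;
    }
    ```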

  * The collection of `CNode` pointers in `CConnman::SocketHandlerConnected` has been changed to a `std::set` so that we can erase elements from `vReceivableNodes` when a node is _also_ in the set of sendable nodes and its send hasn't entirely succeeded, avoiding a deadlock (i.e. backport of [bitcoin#27981](https://github.com/bitcoin/bitcoin/pull/27981))
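
    For illustration, a rough sketch (not the actual `CConnman` code; the node ids and the stall condition are made up) of why a `std::set` is convenient here: it supports erasing a specific element by value when its counterpart in the sendable set stalls:

    ```cpp
    #include <cassert>
    #include <set>

    int main()
    {
        std::set<int> receivable{1, 2, 3};
        const std::set<int> sendable{2, 3};

        // For every sendable node whose send did not fully complete, drop it
        // from the receivable set so its receive side isn't serviced as well.
        for (const int node : sendable) {
            const bool send_fully_succeeded = (node != 2); // pretend node 2 stalled
            if (!send_fully_succeeded) receivable.erase(node);
        }

        assert(receivable == (std::set<int>{1, 3}));
        return 0;
    }
    ```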

  * When backporting [bitcoin#28165](https://github.com/bitcoin/bitcoin/pull/28165), `denialofservice_tests` has been modified to keep checking `vSendMsg` instead of `Transport::GetBytesToSend()`, as changes made to the networking code to support LT-SEM (level-triggered socket events mode) mean that the message doesn't get shifted from `vSendMsg` to `m_message_to_send` as the test expects.
    * Specifically, the changes made for LT-SEM support result in the function responsible for making that shift (`Transport::SetMessageToSend()`, called through `CConnman::SocketSendData()`) not being called during the test runtime.

  * As checking `vSendMsg` (directly or through `nSendMsgSize`) isn't enough to determine if the queue is empty, we now also check with `to_send` from `Transport::GetBytesToSend()` to help us make that determination. This mirrors the change present in the upstream backport ([source](https://github.com/bitcoin/bitcoin/pull/28165/files#diff-00021eed586a482abdb09d6cdada1d90115abe988a91421851960e26658bed02R1324-R1327)).
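
    A minimal sketch of that emptiness check, with a hypothetical `MockTransport` standing in for the real `Transport` (names and signatures are illustrative, not the actual interface):

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Hypothetical stand-in for the transport: it may hold bytes that have
    // already been moved out of vSendMsg but are not yet written to the socket.
    struct MockTransport {
        std::vector<unsigned char> pending;
        const std::vector<unsigned char>& GetBytesToSend() const { return pending; }
    };

    // The send queue is only truly empty when both the message queue
    // (tracked via nSendMsgSize) and the transport's buffer are drained.
    bool SendQueueEmpty(std::size_t send_msg_size, const MockTransport& transport)
    {
        return send_msg_size == 0 && transport.GetBytesToSend().empty();
    }

    int main()
    {
        MockTransport t;
        t.pending = {0x01};
        assert(!SendQueueEmpty(0, t)); // vSendMsg drained, transport still holds bytes
        t.pending.clear();
        assert(SendQueueEmpty(0, t));
        return 0;
    }
    ```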

  ## Breaking Changes

  * `bandwidth.message.*.bytesSent` will no longer include overhead and will now report only the message size, as the specifics that would let us calculate the overhead have been abstracted away.

  ## Checklist:

  - [x] I have performed a self-review of my own code
  - [x] I have commented my code, particularly in hard-to-understand areas **(note: N/A)**
  - [x] I have added or updated relevant unit/integration/functional/e2e tests
  - [x] I have made corresponding changes to the documentation **(note: N/A)**
  - [x] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_

ACKs for top commit:
  PastaPastaPasta:
    utACK 76a458e5f9

Tree-SHA512: 2e47c207c1f854cfbd5b28c07dd78e12765ddb919abcd7710325df5d253cd0ba4bc30aa21545d88519e8acfe65638a57db4ca66853aca82fc355542210f4b394
This commit is contained in:
pasta 2024-09-04 12:07:30 -05:00
commit ddc53d7afd
No known key found for this signature in database
GPG Key ID: 52527BEDABE87984
54 changed files with 2786 additions and 1167 deletions


@@ -314,6 +314,7 @@ BITCOIN_CORE_H = \
streams.h \
statsd_client.h \
support/allocators/mt_pooled_secure.h \
support/allocators/pool.h \
support/allocators/pooled_secure.h \
support/allocators/secure.h \
support/allocators/zeroafterfree.h \
@@ -328,6 +329,7 @@ BITCOIN_CORE_H = \
torcontrol.h \
txdb.h \
txmempool.h \
txorphanage.h \
undo.h \
unordered_lru_cache.h \
util/bip32.h \
@@ -527,6 +529,7 @@ libbitcoin_server_a_SOURCES = \
torcontrol.cpp \
txdb.cpp \
txmempool.cpp \
txorphanage.cpp \
validation.cpp \
validationinterface.cpp \
versionbits.cpp \


@@ -41,6 +41,7 @@ bench_bench_dash_SOURCES = \
bench/nanobench.h \
bench/nanobench.cpp \
bench/peer_eviction.cpp \
bench/pool.cpp \
bench/rpc_blockchain.cpp \
bench/rpc_mempool.cpp \
bench/util_time.cpp \


@@ -136,6 +136,7 @@ BITCOIN_TESTS =\
test/netbase_tests.cpp \
test/pmt_tests.cpp \
test/policyestimator_tests.cpp \
test/pool_tests.cpp \
test/pow_tests.cpp \
test/prevector_tests.cpp \
test/raii_event_tests.cpp \
@@ -298,6 +299,7 @@ test_fuzz_fuzz_SOURCES = \
test/fuzz/parse_univalue.cpp \
test/fuzz/policy_estimator.cpp \
test/fuzz/policy_estimator_io.cpp \
test/fuzz/poolresource.cpp \
test/fuzz/pow.cpp \
test/fuzz/prevector.cpp \
test/fuzz/primitives_transaction.cpp \


@@ -14,6 +14,7 @@ TEST_UTIL_H = \
test/util/logging.h \
test/util/mining.h \
test/util/net.h \
test/util/poolresourcetester.h \
test/util/script.h \
test/util/setup_common.h \
test/util/str.h \

src/bench/pool.cpp Normal file

@@ -0,0 +1,50 @@
// Copyright (c) 2022 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#include <bench/bench.h>
#include <support/allocators/pool.h>
#include <unordered_map>
template <typename Map>
void BenchFillClearMap(benchmark::Bench& bench, Map& map)
{
size_t batch_size = 5000;
// make sure each iteration of the benchmark contains exactly 5000 inserts and one clear.
// do this at least 10 times so we get reasonable accurate results
bench.batch(batch_size).minEpochIterations(10).run([&] {
auto rng = ankerl::nanobench::Rng(1234);
for (size_t i = 0; i < batch_size; ++i) {
map[rng()];
}
map.clear();
});
}
static void PoolAllocator_StdUnorderedMap(benchmark::Bench& bench)
{
auto map = std::unordered_map<uint64_t, uint64_t>();
BenchFillClearMap(bench, map);
}
static void PoolAllocator_StdUnorderedMapWithPoolResource(benchmark::Bench& bench)
{
using Map = std::unordered_map<uint64_t,
uint64_t,
std::hash<uint64_t>,
std::equal_to<uint64_t>,
PoolAllocator<std::pair<const uint64_t, uint64_t>,
sizeof(std::pair<const uint64_t, uint64_t>) + 4 * sizeof(void*),
alignof(void*)>>;
// make sure the resource supports large enough pools to hold the node. We do this by adding the size of a few pointers to it.
auto pool_resource = Map::allocator_type::ResourceType();
auto map = Map{0, std::hash<uint64_t>{}, std::equal_to<uint64_t>{}, &pool_resource};
BenchFillClearMap(bench, map);
}
BENCHMARK(PoolAllocator_StdUnorderedMap);
BENCHMARK(PoolAllocator_StdUnorderedMapWithPoolResource);


@@ -33,7 +33,7 @@ size_t CCoinsViewBacked::EstimateSize() const { return base->EstimateSize(); }
CCoinsViewCache::CCoinsViewCache(CCoinsView* baseIn, bool deterministic) :
CCoinsViewBacked(baseIn), m_deterministic(deterministic),
cacheCoins(0, SaltedOutpointHasher(/*deterministic=*/deterministic))
cacheCoins(0, SaltedOutpointHasher(/*deterministic=*/deterministic), CCoinsMap::key_equal{}, &m_cache_coins_memory_resource)
{}
size_t CCoinsViewCache::DynamicMemoryUsage() const {
@@ -240,9 +240,12 @@ bool CCoinsViewCache::BatchWrite(CCoinsMap &mapCoins, const uint256 &hashBlockIn
bool CCoinsViewCache::Flush() {
bool fOk = base->BatchWrite(cacheCoins, hashBlock, /*erase=*/true);
if (fOk && !cacheCoins.empty()) {
/* BatchWrite must erase all cacheCoins elements when erase=true. */
throw std::logic_error("Not all cached coins were erased");
if (fOk) {
if (!cacheCoins.empty()) {
/* BatchWrite must erase all cacheCoins elements when erase=true. */
throw std::logic_error("Not all cached coins were erased");
}
ReallocateCache();
}
cachedCoinsUsage = 0;
return fOk;
@@ -295,7 +298,9 @@ void CCoinsViewCache::ReallocateCache()
// Cache should be empty when we're calling this.
assert(cacheCoins.size() == 0);
cacheCoins.~CCoinsMap();
::new (&cacheCoins) CCoinsMap(0, SaltedOutpointHasher(/*deterministic=*/m_deterministic));
m_cache_coins_memory_resource.~CCoinsMapMemoryResource();
::new (&m_cache_coins_memory_resource) CCoinsMapMemoryResource{};
::new (&cacheCoins) CCoinsMap{0, SaltedOutpointHasher{/*deterministic=*/m_deterministic}, CCoinsMap::key_equal{}, &m_cache_coins_memory_resource};
}
void CCoinsViewCache::SanityCheck() const


@@ -11,6 +11,7 @@
#include <memusage.h>
#include <primitives/transaction.h>
#include <serialize.h>
#include <support/allocators/pool.h>
#include <uint256.h>
#include <util/hasher.h>
@@ -131,7 +132,23 @@ struct CCoinsCacheEntry
CCoinsCacheEntry(Coin&& coin_, unsigned char flag) : coin(std::move(coin_)), flags(flag) {}
};
typedef std::unordered_map<COutPoint, CCoinsCacheEntry, SaltedOutpointHasher> CCoinsMap;
/**
* PoolAllocator's MAX_BLOCK_SIZE_BYTES parameter here uses sizeof the data, and adds the size
* of 4 pointers. We do not know the exact node size used in the std::unordered_node implementation
* because it is implementation defined. Most implementations have an overhead of 1 or 2 pointers,
* so nodes can be connected in a linked list, and in some cases the hash value is stored as well.
* Using an additional sizeof(void*)*4 for MAX_BLOCK_SIZE_BYTES should thus be sufficient so that
* all implementations can allocate the nodes from the PoolAllocator.
*/
using CCoinsMap = std::unordered_map<COutPoint,
CCoinsCacheEntry,
SaltedOutpointHasher,
std::equal_to<COutPoint>,
PoolAllocator<std::pair<const COutPoint, CCoinsCacheEntry>,
sizeof(std::pair<const COutPoint, CCoinsCacheEntry>) + sizeof(void*) * 4,
alignof(void*)>>;
using CCoinsMapMemoryResource = CCoinsMap::allocator_type::ResourceType;
/** Cursor for iterating over CoinsView state */
class CCoinsViewCursor
@@ -221,6 +238,7 @@ protected:
* declared as "const".
*/
mutable uint256 hashBlock;
mutable CCoinsMapMemoryResource m_cache_coins_memory_resource{};
mutable CCoinsMap cacheCoins;
/* Cached dynamic memory usage for the inner Coin objects. */


@@ -157,10 +157,7 @@ PeerMsgRet CGovernanceManager::ProcessMessage(CNode& peer, CConnman& connman, Pe
uint256 nHash = govobj.GetHash();
{
LOCK(cs_main);
EraseObjectRequest(peer.GetId(), CInv(MSG_GOVERNANCE_OBJECT, nHash));
}
WITH_LOCK(::cs_main, peerman.EraseObjectRequest(peer.GetId(), CInv(MSG_GOVERNANCE_OBJECT, nHash)));
if (!m_mn_sync->IsBlockchainSynced()) {
LogPrint(BCLog::GOBJECT, "MNGOVERNANCEOBJECT -- masternode list not synced\n");
@@ -223,11 +220,7 @@ PeerMsgRet CGovernanceManager::ProcessMessage(CNode& peer, CConnman& connman, Pe
vRecv >> vote;
uint256 nHash = vote.GetHash();
{
LOCK(cs_main);
EraseObjectRequest(peer.GetId(), CInv(MSG_GOVERNANCE_OBJECT_VOTE, nHash));
}
WITH_LOCK(::cs_main, peerman.EraseObjectRequest(peer.GetId(), CInv(MSG_GOVERNANCE_OBJECT_VOTE, nHash)));
// Ignore such messages until masternode list is synced
if (!m_mn_sync->IsBlockchainSynced()) {
@@ -1222,13 +1215,14 @@ void CGovernanceManager::RequestGovernanceObject(CNode* pfrom, const uint256& nH
connman.PushMessage(pfrom, msgMaker.Make(NetMsgType::MNGOVERNANCESYNC, nHash, filter));
}
int CGovernanceManager::RequestGovernanceObjectVotes(CNode& peer, CConnman& connman) const
int CGovernanceManager::RequestGovernanceObjectVotes(CNode& peer, CConnman& connman, const PeerManager& peerman) const
{
const std::vector<CNode*> vNodeCopy{&peer};
return RequestGovernanceObjectVotes(vNodeCopy, connman);
return RequestGovernanceObjectVotes(vNodeCopy, connman, peerman);
}
int CGovernanceManager::RequestGovernanceObjectVotes(const std::vector<CNode*>& vNodesCopy, CConnman& connman) const
int CGovernanceManager::RequestGovernanceObjectVotes(const std::vector<CNode*>& vNodesCopy, CConnman& connman,
const PeerManager& peerman) const
{
static std::map<uint256, std::map<CService, int64_t> > mapAskedRecently;
@@ -1304,7 +1298,7 @@ int CGovernanceManager::RequestGovernanceObjectVotes(const std::vector<CNode*>&
// stop early to prevent setAskFor overflow
{
LOCK(cs_main);
size_t nProjectedSize = GetRequestedObjectCount(pnode->GetId()) + nProjectedVotes;
size_t nProjectedSize = peerman.GetRequestedObjectCount(pnode->GetId()) + nProjectedVotes;
if (nProjectedSize > MAX_INV_SZ) continue;
// to early to ask the same node
if (mapAskedRecently[nHashGovobj].count(pnode->addr)) continue;


@@ -357,8 +357,9 @@ public:
void InitOnLoad();
int RequestGovernanceObjectVotes(CNode& peer, CConnman& connman) const;
int RequestGovernanceObjectVotes(const std::vector<CNode*>& vNodesCopy, CConnman& connman) const;
int RequestGovernanceObjectVotes(CNode& peer, CConnman& connman, const PeerManager& peerman) const;
int RequestGovernanceObjectVotes(const std::vector<CNode*>& vNodesCopy, CConnman& connman,
const PeerManager& peerman) const;
/*
* Trigger Management (formerly CGovernanceTriggerManager)


@@ -56,6 +56,7 @@
#include <torcontrol.h>
#include <txdb.h>
#include <txmempool.h>
#include <txorphanage.h>
#include <util/asmap.h>
#include <util/error.h>
#include <util/moneystr.h>
@@ -578,7 +579,7 @@ void SetupServerArgs(ArgsManager& argsman)
argsman.AddArg("-listenonion", strprintf("Automatically create Tor onion service (default: %d)", DEFAULT_LISTEN_ONION), ArgsManager::ALLOW_ANY, OptionsCategory::CONNECTION);
argsman.AddArg("-maxconnections=<n>", strprintf("Maintain at most <n> connections to peers (temporary service connections excluded) (default: %u). This limit does not apply to connections manually added via -addnode or the addnode RPC, which have a separate limit of %u.", DEFAULT_MAX_PEER_CONNECTIONS, MAX_ADDNODE_CONNECTIONS), ArgsManager::ALLOW_ANY, OptionsCategory::CONNECTION);
argsman.AddArg("-maxreceivebuffer=<n>", strprintf("Maximum per-connection receive buffer, <n>*1000 bytes (default: %u)", DEFAULT_MAXRECEIVEBUFFER), ArgsManager::ALLOW_ANY, OptionsCategory::CONNECTION);
argsman.AddArg("-maxsendbuffer=<n>", strprintf("Maximum per-connection send buffer, <n>*1000 bytes (default: %u)", DEFAULT_MAXSENDBUFFER), ArgsManager::ALLOW_ANY, OptionsCategory::CONNECTION);
argsman.AddArg("-maxsendbuffer=<n>", strprintf("Maximum per-connection memory usage for the send buffer, <n>*1000 bytes (default: %u)", DEFAULT_MAXSENDBUFFER), ArgsManager::ALLOW_ANY, OptionsCategory::CONNECTION);
argsman.AddArg("-maxtimeadjustment", strprintf("Maximum allowed median peer time offset adjustment. Local perspective of time may be influenced by peers forward or backward by this amount. (default: %u seconds)", DEFAULT_MAX_TIME_ADJUSTMENT), ArgsManager::ALLOW_ANY, OptionsCategory::CONNECTION);
argsman.AddArg("-maxuploadtarget=<n>", strprintf("Tries to keep outbound traffic under the given target (in MiB per 24h). Limit does not apply to peers with 'download' permission. 0 = no limit (default: %d)", DEFAULT_MAX_UPLOAD_TARGET), ArgsManager::ALLOW_ANY, OptionsCategory::CONNECTION);
argsman.AddArg("-onion=<ip:port>", "Use separate SOCKS5 proxy to reach peers via Tor onion services, set -noonion to disable (default: -proxy)", ArgsManager::ALLOW_ANY, OptionsCategory::CONNECTION);
@@ -2219,7 +2220,7 @@ bool AppInitMain(NodeContext& node, interfaces::BlockAndHeaderTipInfo* tip_info)
// ********************************************************* Step 10a: schedule Dash-specific tasks
node.scheduler->scheduleEvery(std::bind(&CNetFulfilledRequestManager::DoMaintenance, std::ref(*node.netfulfilledman)), std::chrono::minutes{1});
node.scheduler->scheduleEvery(std::bind(&CMasternodeSync::DoMaintenance, std::ref(*node.mn_sync)), std::chrono::seconds{1});
node.scheduler->scheduleEvery(std::bind(&CMasternodeSync::DoMaintenance, std::ref(*node.mn_sync), std::cref(*node.peerman)), std::chrono::seconds{1});
node.scheduler->scheduleEvery(std::bind(&CMasternodeUtils::DoMaintenance, std::ref(*node.connman), std::ref(*node.dmnman), std::ref(*node.mn_sync), std::ref(*node.cj_ctx)), std::chrono::minutes{1});
node.scheduler->scheduleEvery(std::bind(&CDeterministicMNManager::DoMaintenance, std::ref(*node.dmnman)), std::chrono::seconds{10});


@@ -60,7 +60,8 @@ PeerMsgRet CQuorumBlockProcessor::ProcessMessage(const CNode& peer, std::string_
CFinalCommitment qc;
vRecv >> qc;
WITH_LOCK(cs_main, EraseObjectRequest(peer.GetId(), CInv(MSG_QUORUM_FINAL_COMMITMENT, ::SerializeHash(qc))));
WITH_LOCK(::cs_main, Assert(m_peerman)->EraseObjectRequest(peer.GetId(),
CInv(MSG_QUORUM_FINAL_COMMITMENT, ::SerializeHash(qc))));
if (qc.IsNull()) {
LogPrint(BCLog::LLMQ, "CQuorumBlockProcessor::%s -- null commitment from peer=%d\n", __func__, peer.GetId());


@@ -115,8 +115,7 @@ PeerMsgRet CChainLocksHandler::ProcessNewChainLock(const NodeId from, const llmq
CInv clsigInv(MSG_CLSIG, hash);
if (from != -1) {
LOCK(cs_main);
EraseObjectRequest(from, clsigInv);
WITH_LOCK(::cs_main, Assert(m_peerman)->EraseObjectRequest(from, clsigInv));
}
{


@@ -72,8 +72,7 @@ void CDKGPendingMessages::PushPendingMessage(NodeId from, PeerManager* peerman,
uint256 hash = hw.GetHash();
if (from != -1) {
LOCK(cs_main);
EraseObjectRequest(from, CInv(invType, hash));
WITH_LOCK(::cs_main, Assert(m_peerman.load())->EraseObjectRequest(from, CInv(invType, hash)));
}
LOCK(cs_messages);


@@ -762,7 +762,7 @@ PeerMsgRet CInstantSendManager::ProcessMessageInstantSendLock(const CNode& pfrom
{
auto hash = ::SerializeHash(*islock);
WITH_LOCK(cs_main, EraseObjectRequest(pfrom.GetId(), CInv(MSG_ISDLOCK, hash)));
WITH_LOCK(::cs_main, Assert(m_peerman)->EraseObjectRequest(pfrom.GetId(), CInv(MSG_ISDLOCK, hash)));
if (!islock->TriviallyValid()) {
return tl::unexpected{100};
@@ -1446,7 +1446,8 @@ void CInstantSendManager::RemoveConflictingLock(const uint256& islockHash, const
}
}
void CInstantSendManager::AskNodesForLockedTx(const uint256& txid, const CConnman& connman, const PeerManager& peerman, bool is_masternode)
void CInstantSendManager::AskNodesForLockedTx(const uint256& txid, const CConnman& connman, PeerManager& peerman,
bool is_masternode)
{
std::vector<CNode*> nodesToAskFor;
nodesToAskFor.reserve(4);
@@ -1476,7 +1477,8 @@ void CInstantSendManager::AskNodesForLockedTx(const uint256& txid, const CConnma
txid.ToString(), pnode->GetId());
CInv inv(MSG_TX, txid);
RequestObject(pnode->GetId(), inv, GetTime<std::chrono::microseconds>(), is_masternode, /* fForce = */ true);
peerman.RequestObject(pnode->GetId(), inv, GetTime<std::chrono::microseconds>(), is_masternode,
/* fForce = */ true);
}
}
for (CNode* pnode : nodesToAskFor) {


@@ -315,8 +315,7 @@ private:
EXCLUSIVE_LOCKS_REQUIRED(!cs_inputReqests, !cs_nonLocked, !cs_pendingRetry);
void ResolveBlockConflicts(const uint256& islockHash, const CInstantSendLock& islock)
EXCLUSIVE_LOCKS_REQUIRED(!cs_inputReqests, !cs_nonLocked, !cs_pendingLocks, !cs_pendingRetry);
static void AskNodesForLockedTx(const uint256& txid, const CConnman& connman, const PeerManager& peerman,
bool is_masternode);
static void AskNodesForLockedTx(const uint256& txid, const CConnman& connman, PeerManager& peerman, bool is_masternode);
void ProcessPendingRetryLockTxs()
EXCLUSIVE_LOCKS_REQUIRED(!cs_creating, !cs_inputReqests, !cs_nonLocked, !cs_pendingRetry);


@@ -604,10 +604,8 @@ static bool PreVerifyRecoveredSig(const CQuorumManager& quorum_manager, const CR
PeerMsgRet CSigningManager::ProcessMessageRecoveredSig(const CNode& pfrom, const std::shared_ptr<const CRecoveredSig>& recoveredSig)
{
{
LOCK(cs_main);
EraseObjectRequest(pfrom.GetId(), CInv(MSG_QUORUM_RECOVERED_SIG, recoveredSig->GetHash()));
}
WITH_LOCK(::cs_main, Assert(m_peerman)->EraseObjectRequest(pfrom.GetId(),
CInv(MSG_QUORUM_RECOVERED_SIG, recoveredSig->GetHash())));
bool ban = false;
if (!PreVerifyRecoveredSig(qman, *recoveredSig, ban)) {


@@ -115,7 +115,7 @@ void CMasternodeSync::ProcessMessage(const CNode& peer, std::string_view msg_typ
LogPrint(BCLog::MNSYNC, "SYNCSTATUSCOUNT -- got inventory count: nItemID=%d nCount=%d peer=%d\n", nItemID, nCount, peer.GetId());
}
void CMasternodeSync::ProcessTick()
void CMasternodeSync::ProcessTick(const PeerManager& peerman)
{
assert(m_netfulfilledman.IsValid());
@@ -144,7 +144,7 @@ void CMasternodeSync::ProcessTick()
// gradually request the rest of the votes after sync finished
if(IsSynced()) {
m_govman.RequestGovernanceObjectVotes(snap.Nodes(), connman);
m_govman.RequestGovernanceObjectVotes(snap.Nodes(), connman, peerman);
return;
}
@@ -264,7 +264,7 @@ void CMasternodeSync::ProcessTick()
if(!m_netfulfilledman.HasFulfilledRequest(pnode->addr, "governance-sync")) {
continue; // to early for this node
}
int nObjsLeftToAsk = m_govman.RequestGovernanceObjectVotes(*pnode, connman);
int nObjsLeftToAsk = m_govman.RequestGovernanceObjectVotes(*pnode, connman, peerman);
// check for data
if(nObjsLeftToAsk == 0) {
static int64_t nTimeNoObjectsLeft = 0;
@@ -368,9 +368,9 @@ void CMasternodeSync::UpdatedBlockTip(const CBlockIndex *pindexTip, const CBlock
pindexNew->nHeight, pindexTip->nHeight, fInitialDownload, fReachedBestHeader);
}
void CMasternodeSync::DoMaintenance()
void CMasternodeSync::DoMaintenance(const PeerManager& peerman)
{
if (ShutdownRequested()) return;
ProcessTick();
ProcessTick(peerman);
}


@@ -15,6 +15,7 @@ class CGovernanceManager;
class CMasternodeSync;
class CNetFulfilledRequestManager;
class CNode;
class PeerManager;
static constexpr int MASTERNODE_SYNC_BLOCKCHAIN = 1;
static constexpr int MASTERNODE_SYNC_GOVERNANCE = 4;
@@ -71,13 +72,13 @@ public:
void SwitchToNextAsset();
void ProcessMessage(const CNode& peer, std::string_view msg_type, CDataStream& vRecv) const;
void ProcessTick();
void ProcessTick(const PeerManager& peerman);
void AcceptedBlockHeader(const CBlockIndex *pindexNew);
void NotifyHeaderTip(const CBlockIndex *pindexNew, bool fInitialDownload);
void UpdatedBlockTip(const CBlockIndex *pindexTip, const CBlockIndex *pindexNew, bool fInitialDownload);
void DoMaintenance();
void DoMaintenance(const PeerManager& peerman);
};
#endif // BITCOIN_MASTERNODE_SYNC_H


@@ -7,6 +7,7 @@
#include <indirectmap.h>
#include <prevector.h>
#include <support/allocators/pool.h>
#include <stdlib.h>
@@ -167,6 +168,25 @@ static inline size_t DynamicUsage(const std::unordered_map<X, Y, Z>& m)
return MallocUsage(sizeof(unordered_node<std::pair<const X, Y> >)) * m.size() + MallocUsage(sizeof(void*) * m.bucket_count());
}
template <class Key, class T, class Hash, class Pred, std::size_t MAX_BLOCK_SIZE_BYTES, std::size_t ALIGN_BYTES>
static inline size_t DynamicUsage(const std::unordered_map<Key,
T,
Hash,
Pred,
PoolAllocator<std::pair<const Key, T>,
MAX_BLOCK_SIZE_BYTES,
ALIGN_BYTES>>& m)
{
auto* pool_resource = m.get_allocator().resource();
// The allocated chunks are stored in a std::list. Size per node should
// therefore be 3 pointers: next, previous, and a pointer to the chunk.
size_t estimated_list_node_size = MallocUsage(sizeof(void*) * 3);
size_t usage_resource = estimated_list_node_size * pool_resource->NumAllocatedChunks();
size_t usage_chunks = MallocUsage(pool_resource->ChunkSizeBytes()) * pool_resource->NumAllocatedChunks();
return usage_resource + usage_chunks + MallocUsage(sizeof(void*) * m.bucket_count());
}
} // namespace memusage
#endif // BITCOIN_MEMUSAGE_H


@@ -19,6 +19,7 @@
#include <crypto/sha256.h>
#include <fs.h>
#include <i2p.h>
#include <memusage.h>
#include <net_permissions.h>
#include <netaddress.h>
#include <netbase.h>
@@ -142,6 +143,14 @@ std::map<CNetAddr, LocalServiceInfo> mapLocalHost GUARDED_BY(g_maplocalhost_mute
static bool vfLimited[NET_MAX] GUARDED_BY(g_maplocalhost_mutex) = {};
std::string strSubVersion;
size_t CSerializedNetMsg::GetMemoryUsage() const noexcept
{
// Don't count the dynamic memory used for the m_type string, by assuming it fits in the
// "small string" optimization area (which stores data inside the object itself, up to some
// size; 15 bytes in modern libstdc++).
return sizeof(*this) + memusage::DynamicUsage(data);
}
void CConnman::AddAddrFetch(const std::string& strDest)
{
LOCK(m_addr_fetches_mutex);
@@ -783,16 +792,15 @@ bool CNode::ReceiveMsgBytes(Span<const uint8_t> msg_bytes, bool& complete)
nRecvBytes += msg_bytes.size();
while (msg_bytes.size() > 0) {
// absorb network data
int handled = m_deserializer->Read(msg_bytes);
if (handled < 0) {
// Serious header problem, disconnect from the peer.
if (!m_transport->ReceivedBytes(msg_bytes)) {
// Serious transport problem, disconnect from the peer.
return false;
}
if (m_deserializer->Complete()) {
if (m_transport->ReceivedMessageComplete()) {
// decompose a transport agnostic CNetMessage from the deserializer
bool reject_message{false};
CNetMessage msg = m_deserializer->GetMessage(time, reject_message);
CNetMessage msg = m_transport->GetReceivedMessage(time, reject_message);
if (reject_message) {
// Message deserialization failed. Drop the message but don't disconnect the peer.
// store the size of the corrupt message
@@ -820,8 +828,18 @@ bool CNode::ReceiveMsgBytes(Span<const uint8_t> msg_bytes, bool& complete)
return true;
}
int V1TransportDeserializer::readHeader(Span<const uint8_t> msg_bytes)
V1Transport::V1Transport(const NodeId node_id, int nTypeIn, int nVersionIn) noexcept :
m_node_id(node_id), hdrbuf(nTypeIn, nVersionIn), vRecv(nTypeIn, nVersionIn)
{
assert(std::size(Params().MessageStart()) == std::size(m_magic_bytes));
std::copy(std::begin(Params().MessageStart()), std::end(Params().MessageStart()), m_magic_bytes);
LOCK(m_recv_mutex);
Reset();
}
int V1Transport::readHeader(Span<const uint8_t> msg_bytes)
{
AssertLockHeld(m_recv_mutex);
// copy data to temporary parsing buffer
unsigned int nRemaining = CMessageHeader::HEADER_SIZE - nHdrPos;
unsigned int nCopy = std::min<unsigned int>(nRemaining, msg_bytes.size());
@@ -843,7 +861,7 @@ int V1TransportDeserializer::readHeader(Span<const uint8_t> msg_bytes)
}
// Check start string, network magic
if (memcmp(hdr.pchMessageStart, m_chain_params.MessageStart(), CMessageHeader::MESSAGE_START_SIZE) != 0) {
if (memcmp(hdr.pchMessageStart, m_magic_bytes, CMessageHeader::MESSAGE_START_SIZE) != 0) {
LogPrint(BCLog::NET, "Header error: Wrong MessageStart %s received, peer=%d\n", HexStr(hdr.pchMessageStart), m_node_id);
return -1;
}
@@ -860,8 +878,9 @@ int V1TransportDeserializer::readHeader(Span<const uint8_t> msg_bytes)
return nCopy;
}
int V1TransportDeserializer::readData(Span<const uint8_t> msg_bytes)
int V1Transport::readData(Span<const uint8_t> msg_bytes)
{
AssertLockHeld(m_recv_mutex);
unsigned int nRemaining = hdr.nMessageSize - nDataPos;
unsigned int nCopy = std::min<unsigned int>(nRemaining, msg_bytes.size());
@@ -877,19 +896,22 @@ int V1TransportDeserializer::readData(Span<const uint8_t> msg_bytes)
return nCopy;
}
const uint256& V1TransportDeserializer::GetMessageHash() const
const uint256& V1Transport::GetMessageHash() const
{
assert(Complete());
AssertLockHeld(m_recv_mutex);
assert(CompleteInternal());
if (data_hash.IsNull())
hasher.Finalize(data_hash);
return data_hash;
}
CNetMessage V1TransportDeserializer::GetMessage(const std::chrono::microseconds time, bool& reject_message)
CNetMessage V1Transport::GetReceivedMessage(const std::chrono::microseconds time, bool& reject_message)
{
AssertLockNotHeld(m_recv_mutex);
// Initialize out parameter
reject_message = false;
// decompose a single CNetMessage from the TransportDeserializer
LOCK(m_recv_mutex);
CNetMessage msg(std::move(vRecv));
// store message type string, time, and sizes
@@ -922,47 +944,122 @@ CNetMessage V1TransportDeserializer::GetMessage(const std::chrono::microseconds
return msg;
}
void V1TransportSerializer::prepareForTransport(CSerializedNetMsg& msg, std::vector<unsigned char>& header) const
bool V1Transport::SetMessageToSend(CSerializedNetMsg& msg) noexcept
{
AssertLockNotHeld(m_send_mutex);
// Determine whether a new message can be set.
LOCK(m_send_mutex);
if (m_sending_header || m_bytes_sent < m_message_to_send.data.size()) return false;
// create dbl-sha256 checksum
uint256 hash = Hash(msg.data);
// create header
CMessageHeader hdr(Params().MessageStart(), msg.m_type.c_str(), msg.data.size());
CMessageHeader hdr(m_magic_bytes, msg.m_type.c_str(), msg.data.size());
memcpy(hdr.pchChecksum, hash.begin(), CMessageHeader::CHECKSUM_SIZE);
// serialize header
header.reserve(CMessageHeader::HEADER_SIZE);
CVectorWriter{SER_NETWORK, INIT_PROTO_VERSION, header, 0, hdr};
m_header_to_send.clear();
CVectorWriter{SER_NETWORK, INIT_PROTO_VERSION, m_header_to_send, 0, hdr};
// update state
m_message_to_send = std::move(msg);
m_sending_header = true;
m_bytes_sent = 0;
return true;
}
size_t CConnman::SocketSendData(CNode& node)
Transport::BytesToSend V1Transport::GetBytesToSend() const noexcept
{
AssertLockNotHeld(m_send_mutex);
LOCK(m_send_mutex);
if (m_sending_header) {
return {Span{m_header_to_send}.subspan(m_bytes_sent),
// We have more to send after the header if the message has payload.
!m_message_to_send.data.empty(),
m_message_to_send.m_type
};
} else {
return {Span{m_message_to_send.data}.subspan(m_bytes_sent),
// We never have more to send after this message's payload.
false,
m_message_to_send.m_type
};
}
}
void V1Transport::MarkBytesSent(size_t bytes_sent) noexcept
{
AssertLockNotHeld(m_send_mutex);
LOCK(m_send_mutex);
m_bytes_sent += bytes_sent;
if (m_sending_header && m_bytes_sent == m_header_to_send.size()) {
// We're done sending a message's header. Switch to sending its data bytes.
m_sending_header = false;
m_bytes_sent = 0;
} else if (!m_sending_header && m_bytes_sent == m_message_to_send.data.size()) {
// We're done sending a message's data. Wipe the data vector to reduce memory consumption.
m_message_to_send.data.clear();
m_message_to_send.data.shrink_to_fit();
m_bytes_sent = 0;
}
}
size_t V1Transport::GetSendMemoryUsage() const noexcept
{
AssertLockNotHeld(m_send_mutex);
LOCK(m_send_mutex);
// Don't count sending-side fields besides m_message_to_send, as they're all small and bounded.
return m_message_to_send.GetMemoryUsage();
}
std::pair<size_t, bool> CConnman::SocketSendData(CNode& node) const
{
auto it = node.vSendMsg.begin();
size_t nSentSize = 0;
bool data_left{false}; //!< second return value (whether unsent data remains)
while (true) {
if (it != node.vSendMsg.end()) {
// If possible, move one message from the send queue to the transport. This fails when
// there is an existing message still being sent.
size_t memusage = it->GetMemoryUsage();
if (node.m_transport->SetMessageToSend(*it)) {
// Update memory usage of send buffer (as *it will be deleted).
node.m_send_memusage -= memusage;
++it;
}
}
const auto& [data, more, msg_type] = node.m_transport->GetBytesToSend();
data_left = !data.empty(); // will be overwritten on next loop if all of data gets sent
int nBytes = 0;
{
if (!data.empty()) {
LOCK(node.m_sock_mutex);
// There is no socket in case we've already disconnected, or in test cases without
// real connections. In these cases, we bail out immediately and just leave things
// in the send queue and transport.
if (!node.m_sock) {
break;
}
int flags = MSG_NOSIGNAL | MSG_DONTWAIT;
#ifdef MSG_MORE
// We have more to send if either the transport itself has more, or if we have more
// messages to send.
if (more || it != node.vSendMsg.end()) {
flags |= MSG_MORE;
}
#endif
nBytes = node.m_sock->Send(reinterpret_cast<const char*>(data.data()), data.size(), flags);
}
if (nBytes > 0) {
node.m_last_send = GetTime<std::chrono::seconds>();
node.nSendBytes += nBytes;
// Notify transport that bytes have been processed.
node.m_transport->MarkBytesSent(nBytes);
// Update statistics per message type.
node.mapSendBytesPerMsgType[msg_type] += nBytes;
nSentSize += nBytes;
if ((size_t)nBytes != data.size()) {
// could not send full message; stop sending more
node.fCanSendData = false;
break;
@@ -976,19 +1073,18 @@ size_t CConnman::SocketSendData(CNode& node)
node.fDisconnect = true;
}
}
// couldn't send anything at all
node.fCanSendData = false;
break;
}
}
node.fPauseSend = node.m_send_memusage + node.m_transport->GetSendMemoryUsage() > nSendBufferMaxSize;
if (it == node.vSendMsg.end()) {
assert(node.nSendOffset == 0);
assert(node.nSendSize == 0);
assert(node.m_send_memusage == 0);
}
node.vSendMsg.erase(node.vSendMsg.begin(), it);
node.nSendMsgSize = node.vSendMsg.size();
return {nSentSize, data_left};
}
static bool ReverseCompareNodeMinPingTime(const NodeEvictionCandidate& a, const NodeEvictionCandidate& b)
@@ -1513,7 +1609,9 @@ void CConnman::DisconnectNodes()
}
if (GetTimeMillis() < pnode->nDisconnectLingerTime) {
// everything flushed to the kernel?
const auto& [to_send, _more, _msg_type] = pnode->m_transport->GetBytesToSend();
const bool queue_is_empty{to_send.empty() && pnode->nSendMsgSize == 0};
if (!pnode->fSocketShutdown && queue_is_empty) {
LOCK(pnode->m_sock_mutex);
if (pnode->m_sock) {
// Give the other side a chance to detect the disconnect as early as possible (recv() will return 0)
@@ -1705,8 +1803,7 @@ bool CConnman::GenerateSelectSet(const std::vector<CNode*>& nodes,
recv_set.insert(hListenSocket.sock->Get());
}
for (CNode* pnode : nodes) {
bool select_recv = !pnode->fHasRecvData;
bool select_send = !pnode->fCanSendData;
@@ -2021,9 +2118,9 @@ void CConnman::SocketHandlerConnected(const std::set<SOCKET>& recv_set,
if (interruptNet) return;
std::set<CNode*> vErrorNodes;
std::set<CNode*> vReceivableNodes;
std::set<CNode*> vSendableNodes;
{
LOCK(cs_mapSocketToNode);
for (auto hSocket : error_set) {
@@ -2032,7 +2129,7 @@ void CConnman::SocketHandlerConnected(const std::set<SOCKET>& recv_set,
continue;
}
it->second->AddRef();
vErrorNodes.emplace(it->second);
}
for (auto hSocket : recv_set) {
if (error_set.count(hSocket)) {
@@ -2067,7 +2164,6 @@ void CConnman::SocketHandlerConnected(const std::set<SOCKET>& recv_set,
{
LOCK(cs_sendable_receivable_nodes);
for (auto it = mapReceivableNodes.begin(); it != mapReceivableNodes.end(); ) {
if (!it->second->fHasRecvData) {
it = mapReceivableNodes.erase(it);
@@ -2080,9 +2176,11 @@ void CConnman::SocketHandlerConnected(const std::set<SOCKET>& recv_set,
// receiving data. This means properly utilizing TCP flow control signalling.
// * Otherwise, if there is space left in the receive buffer (!fPauseRecv), try
// receiving data (which should succeed as the socket signalled as receivable).
const auto& [to_send, _more, _msg_type] = it->second->m_transport->GetBytesToSend();
const bool queue_is_empty{to_send.empty() && it->second->nSendMsgSize == 0};
if (!it->second->fPauseRecv && !it->second->fDisconnect && queue_is_empty) {
it->second->AddRef();
vReceivableNodes.emplace(it->second);
}
++it;
}
@@ -2093,22 +2191,45 @@ void CConnman::SocketHandlerConnected(const std::set<SOCKET>& recv_set,
// also clean up mapNodesWithDataToSend from nodes that had messages to send in the last iteration
// but don't have any in this iteration
LOCK(cs_mapNodesWithDataToSend);
for (auto it = mapNodesWithDataToSend.begin(); it != mapNodesWithDataToSend.end(); ) {
const auto& [to_send, _more, _msg_type] = it->second->m_transport->GetBytesToSend();
if (to_send.empty() && it->second->nSendMsgSize == 0) {
// See comment in PushMessage
it->second->Release();
it = mapNodesWithDataToSend.erase(it);
} else {
if (it->second->fCanSendData) {
it->second->AddRef();
vSendableNodes.emplace(it->second);
}
++it;
}
}
}
for (CNode* pnode : vSendableNodes) {
if (interruptNet) {
break;
}
// Send data
auto [bytes_sent, data_left] = WITH_LOCK(pnode->cs_vSend, return SocketSendData(*pnode));
if (bytes_sent) {
RecordBytesSent(bytes_sent);
// If both receiving and (non-optimistic) sending were possible, we first attempt
// sending. If that succeeds, but does not fully drain the send queue, do not
// attempt to receive. This avoids needlessly queueing data if the remote peer
// is slow at receiving data, by means of TCP flow control. We only do this when
// sending actually succeeded to make sure progress is always made; otherwise a
// deadlock would be possible when both sides have data to send, but neither is
// receiving.
if (data_left && vReceivableNodes.erase(pnode)) {
pnode->Release();
}
}
}
for (CNode* pnode : vErrorNodes)
{
if (interruptNet) {
@@ -2130,16 +2251,6 @@ void CConnman::SocketHandlerConnected(const std::set<SOCKET>& recv_set,
SocketRecvData(pnode);
}
for (auto& node : vErrorNodes) {
node->Release();
}
@@ -2497,8 +2608,8 @@ void CConnman::ThreadOpenConnections(const std::vector<std::string> connect, CDe
auto start = GetTime<std::chrono::microseconds>();
// Minimum time before next feeler connection (in microseconds).
auto next_feeler = GetExponentialRand(start, FEELER_INTERVAL);
auto next_extra_block_relay = GetExponentialRand(start, EXTRA_BLOCK_RELAY_ONLY_PEER_INTERVAL);
const bool dnsseed = gArgs.GetBoolArg("-dnsseed", DEFAULT_DNSSEED);
bool add_fixed_seeds = gArgs.GetBoolArg("-fixedseeds", DEFAULT_FIXEDSEEDS);
@@ -2632,7 +2743,7 @@ void CConnman::ThreadOpenConnections(const std::vector<std::string> connect, CDe
//
// This is similar to the logic for trying extra outbound (full-relay)
// peers, except:
// - we do this all the time on an exponential timer, rather than just when
// our tip is stale
// - we potentially disconnect our next-youngest block-relay-only peer, if our
// newest block-relay-only peer delivers a block more recently.
@@ -2641,10 +2752,10 @@ void CConnman::ThreadOpenConnections(const std::vector<std::string> connect, CDe
// Because we can promote these connections to block-relay-only
// connections, they do not get their own ConnectionType enum
// (similar to how we deal with extra outbound peers).
next_extra_block_relay = GetExponentialRand(now, EXTRA_BLOCK_RELAY_ONLY_PEER_INTERVAL);
conn_type = ConnectionType::BLOCK_RELAY;
} else if (now > next_feeler) {
next_feeler = GetExponentialRand(now, FEELER_INTERVAL);
conn_type = ConnectionType::FEELER;
fFeeler = true;
} else if (nOutboundOnionRelay < m_max_outbound_onion && IsReachable(Network::NET_ONION)) {
@@ -3142,8 +3253,12 @@ void CConnman::OpenMasternodeConnection(const CAddress &addrConnect, MasternodeP
OpenNetworkConnection(addrConnect, false, nullptr, nullptr, ConnectionType::OUTBOUND_FULL_RELAY, MasternodeConn::IsConnection, probe);
}
Mutex NetEventsInterface::g_msgproc_mutex;
void CConnman::ThreadMessageHandler()
{
LOCK(NetEventsInterface::g_msgproc_mutex);
int64_t nLastSendMessagesTimeMasternodes = 0;
FastRandomContext rng;
@@ -3173,7 +3288,6 @@ void CConnman::ThreadMessageHandler()
return;
// Send messages
if (!fSkipSendMessagesForMasternodes || !pnode->m_masternode_connection) {
m_msgproc->SendMessages(pnode);
}
@@ -4123,8 +4237,7 @@ CNode::CNode(NodeId idIn,
ConnectionType conn_type_in,
bool inbound_onion,
std::unique_ptr<i2p::sam::Session>&& i2p_sam_session)
: m_transport{std::make_unique<V1Transport>(idIn, SER_NETWORK, INIT_PROTO_VERSION)},
m_sock{sock},
m_connected{GetTime<std::chrono::seconds>()},
addr{addrIn},
@@ -4163,26 +4276,19 @@ void CConnman::PushMessage(CNode* pnode, CSerializedNetMsg&& msg)
if (gArgs.GetBoolArg("-capturemessages", false)) {
CaptureMessage(pnode->addr, msg.m_type, msg.data, /* incoming */ false);
}
statsClient.count(strprintf("bandwidth.message.%s.bytesSent", msg.m_type), nMessageSize, 1.0f);
statsClient.inc(strprintf("message.sent.%s", msg.m_type), 1.0f);
{
LOCK(pnode->cs_vSend);
const auto& [to_send, _more, _msg_type] = pnode->m_transport->GetBytesToSend();
const bool queue_was_empty{to_send.empty() && pnode->vSendMsg.empty()};
// Update memory usage of send buffer.
pnode->m_send_memusage += msg.GetMemoryUsage();
if (pnode->m_send_memusage + pnode->m_transport->GetSendMemoryUsage() > nSendBufferMaxSize) pnode->fPauseSend = true;
// Move message to vSendMsg queue.
pnode->vSendMsg.push_back(std::move(msg));
pnode->nSendMsgSize = pnode->vSendMsg.size();
{
@@ -4196,9 +4302,13 @@ void CConnman::PushMessage(CNode* pnode, CSerializedNetMsg&& msg)
}
}
// Wake up select() call in case there was no pending data before (so it was not selecting
// this socket for sending)
if (queue_was_empty) {
if (m_wakeup_pipe && m_wakeup_pipe->m_need_wakeup.load()) {
m_wakeup_pipe->Write();
}
}
}
}
@@ -4234,23 +4344,6 @@ bool CConnman::IsMasternodeOrDisconnectRequested(const CService& addr) {
});
}
CConnman::NodesSnapshot::NodesSnapshot(const CConnman& connman, std::function<bool(const CNode* pnode)> filter,
bool shuffle)
{

src/net.h

@@ -151,6 +151,9 @@ struct CSerializedNetMsg {
std::vector<unsigned char> data;
std::string m_type;
/** Compute total memory usage of this object (own memory + any dynamic memory). */
size_t GetMemoryUsage() const noexcept;
};
/** Different types of connections to a peer. This enum encapsulates the
@@ -350,42 +353,105 @@ public:
}
};
/** The Transport converts one connection's sent messages to wire bytes, and received bytes back. */
class Transport {
public:
virtual ~Transport() {}
// 1. Receiver side functions, for decoding bytes received on the wire into transport protocol
// agnostic CNetMessage (message type & payload) objects.
/** Returns true if the current message is complete (so GetReceivedMessage can be called). */
virtual bool ReceivedMessageComplete() const = 0;
/** Set the deserialization context version for objects returned by GetReceivedMessage. */
virtual void SetReceiveVersion(int version) = 0;
/** Feed wire bytes to the transport.
*
* @return false if some bytes were invalid, in which case the transport can't be used anymore.
*
* Consumed bytes are chopped off the front of msg_bytes.
*/
virtual bool ReceivedBytes(Span<const uint8_t>& msg_bytes) = 0;
/** Retrieve a completed message from transport.
*
* This can only be called when ReceivedMessageComplete() is true.
*
* If reject_message=true is returned the message itself is invalid, but (other than false
* returned by ReceivedBytes) the transport is not in an inconsistent state.
*/
virtual CNetMessage GetReceivedMessage(std::chrono::microseconds time, bool& reject_message) = 0;
// 2. Sending side functions, for converting messages into bytes to be sent over the wire.
/** Set the next message to send.
*
* If no message can currently be set (perhaps because the previous one is not yet done being
* sent), returns false, and msg will be unmodified. Otherwise msg is enqueued (and
* possibly moved-from) and true is returned.
*/
virtual bool SetMessageToSend(CSerializedNetMsg& msg) noexcept = 0;
/** Return type for GetBytesToSend, consisting of:
* - Span<const uint8_t> to_send: span of bytes to be sent over the wire (possibly empty).
* - bool more: whether there will be more bytes to be sent after the ones in to_send are
* all sent (as signaled by MarkBytesSent()).
* - const std::string& m_type: message type on behalf of which this is being sent.
*/
using BytesToSend = std::tuple<
Span<const uint8_t> /*to_send*/,
bool /*more*/,
const std::string& /*m_type*/
>;
/** Get bytes to send on the wire.
*
* As a const function, it does not modify the transport's observable state, and is thus safe
* to be called multiple times.
*
* The bytes returned by this function act as a stream which can only be appended to. This
* means that with the exception of MarkBytesSent, operations on the transport can only append
* to what is being returned.
*
* Note that m_type and to_send refer to data that is internal to the transport, and calling
* any non-const function on this object may invalidate them.
*/
virtual BytesToSend GetBytesToSend() const noexcept = 0;
/** Report how many bytes returned by the last GetBytesToSend() have been sent.
*
* bytes_sent cannot exceed to_send.size() of the last GetBytesToSend() result.
*
* If bytes_sent=0, this call has no effect.
*/
virtual void MarkBytesSent(size_t bytes_sent) noexcept = 0;
/** Return the memory usage of this transport attributable to buffered data to send. */
virtual size_t GetSendMemoryUsage() const noexcept = 0;
};
class V1Transport final : public Transport
{
private:
CMessageHeader::MessageStartChars m_magic_bytes;
const NodeId m_node_id; // Only for logging
mutable Mutex m_recv_mutex; //!< Lock for receive state
mutable CHash256 hasher GUARDED_BY(m_recv_mutex);
mutable uint256 data_hash GUARDED_BY(m_recv_mutex);
bool in_data GUARDED_BY(m_recv_mutex); // parsing header (false) or data (true)
CDataStream hdrbuf GUARDED_BY(m_recv_mutex); // partially received header
CMessageHeader hdr GUARDED_BY(m_recv_mutex); // complete header
CDataStream vRecv GUARDED_BY(m_recv_mutex); // received message data
unsigned int nHdrPos GUARDED_BY(m_recv_mutex);
unsigned int nDataPos GUARDED_BY(m_recv_mutex);
const uint256& GetMessageHash() const EXCLUSIVE_LOCKS_REQUIRED(m_recv_mutex);
int readHeader(Span<const uint8_t> msg_bytes) EXCLUSIVE_LOCKS_REQUIRED(m_recv_mutex);
int readData(Span<const uint8_t> msg_bytes) EXCLUSIVE_LOCKS_REQUIRED(m_recv_mutex);
void Reset() EXCLUSIVE_LOCKS_REQUIRED(m_recv_mutex) {
AssertLockHeld(m_recv_mutex);
vRecv.clear();
hdrbuf.clear();
hdrbuf.resize(24);
@@ -396,52 +462,60 @@ private:
hasher.Reset();
}
bool CompleteInternal() const noexcept EXCLUSIVE_LOCKS_REQUIRED(m_recv_mutex)
{
AssertLockHeld(m_recv_mutex);
if (!in_data) return false;
return hdr.nMessageSize == nDataPos;
}
/** Lock for sending state. */
mutable Mutex m_send_mutex;
/** The header of the message currently being sent. */
std::vector<uint8_t> m_header_to_send GUARDED_BY(m_send_mutex);
/** The data of the message currently being sent. */
CSerializedNetMsg m_message_to_send GUARDED_BY(m_send_mutex);
/** Whether we're currently sending header bytes or message bytes. */
bool m_sending_header GUARDED_BY(m_send_mutex) {false};
/** How many bytes have been sent so far (from m_header_to_send, or from m_message_to_send.data). */
size_t m_bytes_sent GUARDED_BY(m_send_mutex) {0};
public:
V1Transport(const NodeId node_id, int nTypeIn, int nVersionIn) noexcept;
bool ReceivedMessageComplete() const override EXCLUSIVE_LOCKS_REQUIRED(!m_recv_mutex)
{
AssertLockNotHeld(m_recv_mutex);
return WITH_LOCK(m_recv_mutex, return CompleteInternal());
}
void SetReceiveVersion(int nVersionIn) override EXCLUSIVE_LOCKS_REQUIRED(!m_recv_mutex)
{
AssertLockNotHeld(m_recv_mutex);
LOCK(m_recv_mutex);
hdrbuf.SetVersion(nVersionIn);
vRecv.SetVersion(nVersionIn);
}
bool ReceivedBytes(Span<const uint8_t>& msg_bytes) override EXCLUSIVE_LOCKS_REQUIRED(!m_recv_mutex)
{
AssertLockNotHeld(m_recv_mutex);
LOCK(m_recv_mutex);
int ret = in_data ? readData(msg_bytes) : readHeader(msg_bytes);
if (ret < 0) {
Reset();
} else {
msg_bytes = msg_bytes.subspan(ret);
}
return ret >= 0;
}
CNetMessage GetReceivedMessage(std::chrono::microseconds time, bool& reject_message) override EXCLUSIVE_LOCKS_REQUIRED(!m_recv_mutex);
bool SetMessageToSend(CSerializedNetMsg& msg) noexcept override EXCLUSIVE_LOCKS_REQUIRED(!m_send_mutex);
BytesToSend GetBytesToSend() const noexcept override EXCLUSIVE_LOCKS_REQUIRED(!m_send_mutex);
void MarkBytesSent(size_t bytes_sent) noexcept override EXCLUSIVE_LOCKS_REQUIRED(!m_send_mutex);
size_t GetSendMemoryUsage() const noexcept override EXCLUSIVE_LOCKS_REQUIRED(!m_send_mutex);
};
/** Information about a peer */
@@ -451,8 +525,9 @@ class CNode
friend struct ConnmanTestMsg;
public:
/** Transport serializer/deserializer. The receive side functions are only called under cs_vRecv, while
* the sending side functions are only called under cs_vSend. */
const std::unique_ptr<Transport> m_transport;
NetPermissionFlags m_permissionFlags{NetPermissionFlags::None}; // treated as const outside of fuzz tester
@@ -466,12 +541,12 @@ public:
*/
std::shared_ptr<Sock> m_sock GUARDED_BY(m_sock_mutex);
/** Sum of GetMemoryUsage of all vSendMsg entries. */
size_t m_send_memusage GUARDED_BY(cs_vSend){0};
/** Total number of bytes sent on the wire to this peer. */
uint64_t nSendBytes GUARDED_BY(cs_vSend){0};
/** Messages still to be fed to m_transport->SetMessageToSend. */
std::deque<CSerializedNetMsg> vSendMsg GUARDED_BY(cs_vSend);
std::atomic<size_t> nSendMsgSize{0};
Mutex cs_vSend;
Mutex m_sock_mutex;
@@ -481,8 +556,6 @@ public:
std::list<CNetMessage> vProcessMsg GUARDED_BY(cs_vProcessMsg);
size_t nProcessQueueSize GUARDED_BY(cs_vProcessMsg){0};
uint64_t nRecvBytes GUARDED_BY(cs_vRecv){0};
std::atomic<std::chrono::seconds> m_last_send{0s};
@@ -816,6 +889,9 @@ private:
class NetEventsInterface
{
public:
/** Mutex for anything that is only accessed via the msg processing thread */
static Mutex g_msgproc_mutex;
/** Initialize a peer (setup state, queue any initial messages) */
virtual void InitializeNode(CNode& node, ServiceFlags our_services) = 0;
@@ -829,7 +905,7 @@ public:
* @param[in] interrupt Interrupt condition for processing threads
* @return True if there is more work to be done
*/
virtual bool ProcessMessages(CNode* pnode, std::atomic<bool>& interrupt) EXCLUSIVE_LOCKS_REQUIRED(g_msgproc_mutex) = 0;
/**
* Send queued protocol messages to a given node.
@@ -837,7 +913,7 @@
* @param[in] pnode The node which we are sending messages to.
* @return True if there is more work to be done
*/
virtual bool SendMessages(CNode* pnode) EXCLUSIVE_LOCKS_REQUIRED(g_msgproc_mutex) = 0;
protected:
@@ -1205,12 +1281,6 @@ public:
void WakeMessageHandler() EXCLUSIVE_LOCKS_REQUIRED(!mutexMsgProc);
/** Return true if we should disconnect the peer for failing an inactivity check. */
bool ShouldRunInactivityChecks(const CNode& node, std::chrono::seconds now) const;
@@ -1392,8 +1462,11 @@ private:
NodeId GetNewNodeId();
/** (Try to) send data from node's vSendMsg. Returns (bytes_sent, data_left). */
std::pair<size_t, bool> SocketSendData(CNode& node) const EXCLUSIVE_LOCKS_REQUIRED(node.cs_vSend);
size_t SocketRecvData(CNode* pnode) EXCLUSIVE_LOCKS_REQUIRED(!mutexMsgProc);
void DumpAddresses();
// Network stats
@@ -1584,8 +1657,6 @@ private:
*/
std::atomic_bool m_start_extra_block_relay_peers{false};
/**
* A vector of -bind=<address>:<port>=onion arguments each of which is
* an address and port that are designated for incoming Tor connections.
@@ -1616,9 +1687,6 @@ private:
friend struct ConnmanTestMsg;
};
/** Dump binary message to file, with timestamp */
void CaptureMessageToFile(const CAddress& addr,
const std::string& msg_type,
@@ -1665,10 +1733,6 @@ public:
extern RecursiveMutex cs_main;
/** Protect desirable or disadvantaged inbound peers from eviction by ratio.
*
* This function protects half of the peers which have been connected the


@@ -29,7 +29,6 @@ struct CJContext;
struct LLMQContext;
extern RecursiveMutex cs_main;
/** Default for -maxorphantxsize, maximum size in megabytes the orphan map can grow before entries are removed */
static const unsigned int DEFAULT_MAX_ORPHAN_TRANSACTIONS_SIZE = 10; // this allows around 100 TXs of max size (and many more of normal size)
@@ -126,9 +125,17 @@ public:
/** Process a single message from a peer. Public for fuzz testing */
virtual void ProcessMessage(CNode& pfrom, const std::string& msg_type, CDataStream& vRecv,
const std::chrono::microseconds time_received, const std::atomic<bool>& interruptMsgProc) EXCLUSIVE_LOCKS_REQUIRED(g_msgproc_mutex) = 0;
/** This function is used for testing the stale tip eviction logic, see denialofservice_tests.cpp */
virtual void UpdateLastBlockAnnounceTime(NodeId node, int64_t time_in_seconds) = 0;
virtual bool IsBanned(NodeId pnode) = 0;
virtual void EraseObjectRequest(NodeId nodeid, const CInv& inv) = 0;
virtual void RequestObject(NodeId nodeid, const CInv& inv, std::chrono::microseconds current_time,
bool is_masternode, bool fForce = false) = 0;
virtual size_t GetRequestedObjectCount(NodeId nodeid) const = 0;
};
#endif // BITCOIN_NET_PROCESSING_H


@@ -22,6 +22,7 @@
#include <util/time.h> // for GetTimeMicros()
#include <array>
#include <cmath>
#include <stdlib.h>
#include <thread>
@@ -724,3 +725,9 @@ void RandomInit()
ReportHardwareRand();
}
std::chrono::microseconds GetExponentialRand(std::chrono::microseconds now, std::chrono::seconds average_interval)
{
double unscaled = -std::log1p(GetRand(uint64_t{1} << 48) * -0.0000000000000035527136788 /* -1/2^48 */);
return now + std::chrono::duration_cast<std::chrono::microseconds>(unscaled * average_interval + 0.5us);
}


@@ -85,6 +85,18 @@ D GetRandomDuration(typename std::common_type<D>::type max) noexcept
};
constexpr auto GetRandMicros = GetRandomDuration<std::chrono::microseconds>;
constexpr auto GetRandMillis = GetRandomDuration<std::chrono::milliseconds>;
/**
* Return a timestamp in the future sampled from an exponential distribution
* (https://en.wikipedia.org/wiki/Exponential_distribution). This distribution
* is memoryless and should be used for repeated network events (e.g. sending a
* certain type of message) to minimize leaking information to observers.
*
* The probability of an event occurring before time x is 1 - e^-(x/a) where a
* is the average interval between events.
*/
std::chrono::microseconds GetExponentialRand(std::chrono::microseconds now, std::chrono::seconds average_interval);
int GetRandInt(int nMax) noexcept;
uint256 GetRandHash() noexcept;


@@ -145,12 +145,9 @@ PeerMsgRet CSporkManager::ProcessSpork(const CNode& peer, PeerManager& peerman,
uint256 hash = spork.GetHash();
std::string strLogMsg;
WITH_LOCK(::cs_main, peerman.EraseObjectRequest(peer.GetId(), CInv(MSG_SPORK, hash)));
std::string strLogMsg{strprintf("SPORK -- hash: %s id: %d value: %10d peer=%d", hash.ToString(), spork.nSporkID,
spork.nValue, peer.GetId())};
if (spork.nTimeSigned > GetAdjustedTime() + 2 * 60 * 60) {
LogPrint(BCLog::SPORK, "CSporkManager::ProcessSpork -- ERROR: too far into the future\n");


@@ -0,0 +1,349 @@
// Copyright (c) 2022 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#ifndef BITCOIN_SUPPORT_ALLOCATORS_POOL_H
#define BITCOIN_SUPPORT_ALLOCATORS_POOL_H
#include <array>
#include <cassert>
#include <cstddef>
#include <list>
#include <memory>
#include <new>
#include <type_traits>
#include <utility>
/**
* A memory resource similar to std::pmr::unsynchronized_pool_resource, but
* optimized for node-based containers. It has the following properties:
*
* * Owns the allocated memory and frees it on destruction, even when deallocate
* has not been called on the allocated blocks.
*
* * Consists of a number of pools, each one for a different block size.
* Each pool holds blocks of uniform size in a freelist.
*
* * Exhausting memory in a freelist causes a new allocation of a fixed size chunk.
* This chunk is used to carve out blocks.
*
* * Block sizes or alignments that can not be served by the pools are allocated
* and deallocated by operator new().
*
* PoolResource is not thread-safe. It is intended to be used by PoolAllocator.
*
* @tparam MAX_BLOCK_SIZE_BYTES Maximum size to allocate with the pool. If larger
* sizes are requested, allocation falls back to new().
*
* @tparam ALIGN_BYTES Required alignment for the allocations.
*
* An example: If you create a PoolResource<128, 8>(262144) and perform a bunch of
* allocations and deallocate 2 blocks with size 8 bytes, and 3 blocks with size 16,
* the members will look like this:
*
*    m_free_lists                          m_allocated_chunks
*      [1] -> 8 B -> 8 B                     262144 B chunk
*      [2] -> 16 B -> 16 B -> 16 B           262144 B chunk (last; the unused tail
*                                            between m_available_memory_it and
*                                            m_available_memory_end is still free)
*
* Here m_free_lists[1] holds the 2 blocks of size 8 bytes, and m_free_lists[2]
* holds the 3 blocks of size 16. The blocks came from the data stored in the
* m_allocated_chunks list. Each chunk has bytes 262144. The last chunk has still
* some memory available for the blocks, and when m_available_memory_it is at the
* end, a new chunk will be allocated and added to the list.
*/
template <std::size_t MAX_BLOCK_SIZE_BYTES, std::size_t ALIGN_BYTES>
class PoolResource final
{
static_assert(ALIGN_BYTES > 0, "ALIGN_BYTES must be nonzero");
static_assert((ALIGN_BYTES & (ALIGN_BYTES - 1)) == 0, "ALIGN_BYTES must be a power of two");
/**
* In-place linked list of the allocations, used for the freelist.
*/
struct ListNode {
ListNode* m_next;
explicit ListNode(ListNode* next) : m_next(next) {}
};
static_assert(std::is_trivially_destructible_v<ListNode>, "Make sure we don't need to manually call a destructor");
/**
* Internal alignment value. The larger of the requested ALIGN_BYTES and alignof(ListNode).
*/
static constexpr std::size_t ELEM_ALIGN_BYTES = std::max(alignof(ListNode), ALIGN_BYTES);
static_assert((ELEM_ALIGN_BYTES & (ELEM_ALIGN_BYTES - 1)) == 0, "ELEM_ALIGN_BYTES must be a power of two");
static_assert(sizeof(ListNode) <= ELEM_ALIGN_BYTES, "Units of size ELEM_ALIGN_BYTES need to be able to store a ListNode");
static_assert((MAX_BLOCK_SIZE_BYTES & (ELEM_ALIGN_BYTES - 1)) == 0, "MAX_BLOCK_SIZE_BYTES needs to be a multiple of the alignment.");
/**
* Size in bytes to allocate per chunk
*/
const size_t m_chunk_size_bytes;
/**
* Contains all allocated pools of memory, used to free the data in the destructor.
*/
std::list<std::byte*> m_allocated_chunks{};
/**
* Single linked lists of all data that came from deallocating.
* m_free_lists[n] will serve blocks of size n*ELEM_ALIGN_BYTES.
*/
std::array<ListNode*, MAX_BLOCK_SIZE_BYTES / ELEM_ALIGN_BYTES + 1> m_free_lists{};
/**
* Points to the beginning of available memory for carving out allocations.
*/
std::byte* m_available_memory_it = nullptr;
/**
* Points to the end of available memory for carving out allocations.
*
* This member is redundant: whenever it is accessed it equals
* `m_allocated_chunks.back() + m_chunk_size_bytes`, but caching it here improves clarity and efficiency.
*/
std::byte* m_available_memory_end = nullptr;
/**
* How many multiples of ELEM_ALIGN_BYTES are necessary to fit `bytes`. The result is used directly as an
* index into m_free_lists. Rounds up for the special case when bytes==0.
*/
[[nodiscard]] static constexpr std::size_t NumElemAlignBytes(std::size_t bytes)
{
return (bytes + ELEM_ALIGN_BYTES - 1) / ELEM_ALIGN_BYTES + (bytes == 0);
}
/**
* True when it is possible to make use of the freelist
*/
[[nodiscard]] static constexpr bool IsFreeListUsable(std::size_t bytes, std::size_t alignment)
{
return alignment <= ELEM_ALIGN_BYTES && bytes <= MAX_BLOCK_SIZE_BYTES;
}
/**
* Replaces node with placement constructed ListNode that points to the previous node
*/
void PlacementAddToList(void* p, ListNode*& node)
{
node = new (p) ListNode{node};
}
/**
* Allocate one full memory chunk which will be used to carve out allocations.
* Also puts any leftover bytes into the freelist.
*
* Precondition: leftover bytes are either 0 or few enough to fit into a place in the freelist
*/
void AllocateChunk()
{
// if there is still any available memory left, put it into the freelist.
size_t remaining_available_bytes = std::distance(m_available_memory_it, m_available_memory_end);
if (0 != remaining_available_bytes) {
PlacementAddToList(m_available_memory_it, m_free_lists[remaining_available_bytes / ELEM_ALIGN_BYTES]);
}
void* storage = ::operator new (m_chunk_size_bytes, std::align_val_t{ELEM_ALIGN_BYTES});
m_available_memory_it = new (storage) std::byte[m_chunk_size_bytes];
m_available_memory_end = m_available_memory_it + m_chunk_size_bytes;
m_allocated_chunks.emplace_back(m_available_memory_it);
}
/**
* Access to internals for testing purposes only
*/
friend class PoolResourceTester;
public:
/**
* Construct a new PoolResource object which allocates the first chunk.
* chunk_size_bytes will be rounded up to next multiple of ELEM_ALIGN_BYTES.
*/
explicit PoolResource(std::size_t chunk_size_bytes)
: m_chunk_size_bytes(NumElemAlignBytes(chunk_size_bytes) * ELEM_ALIGN_BYTES)
{
assert(m_chunk_size_bytes >= MAX_BLOCK_SIZE_BYTES);
AllocateChunk();
}
/**
* Construct a new PoolResource object, defaulting to a chunk size of 2^18 = 262144 bytes.
*/
PoolResource() : PoolResource(262144) {}
/**
* Disable copy & move semantics, these are not supported for the resource.
*/
PoolResource(const PoolResource&) = delete;
PoolResource& operator=(const PoolResource&) = delete;
PoolResource(PoolResource&&) = delete;
PoolResource& operator=(PoolResource&&) = delete;
/**
* Deallocates all memory associated with the memory resource.
*/
~PoolResource()
{
for (std::byte* chunk : m_allocated_chunks) {
std::destroy(chunk, chunk + m_chunk_size_bytes);
::operator delete ((void*)chunk, std::align_val_t{ELEM_ALIGN_BYTES});
}
}
/**
* Allocates a block of bytes. If possible the freelist is used, otherwise allocation
* is forwarded to ::operator new().
*/
void* Allocate(std::size_t bytes, std::size_t alignment)
{
if (IsFreeListUsable(bytes, alignment)) {
const std::size_t num_alignments = NumElemAlignBytes(bytes);
if (nullptr != m_free_lists[num_alignments]) {
// we've already got data in the pool's freelist, unlink one element and return the pointer
// to the unlinked memory. Since ListNode is trivially destructible we can just treat it as
// uninitialized memory.
return std::exchange(m_free_lists[num_alignments], m_free_lists[num_alignments]->m_next);
}
// freelist is empty: get one allocation from allocated chunk memory.
const std::ptrdiff_t round_bytes = static_cast<std::ptrdiff_t>(num_alignments * ELEM_ALIGN_BYTES);
if (round_bytes > m_available_memory_end - m_available_memory_it) {
// slow path, only happens when a new chunk needs to be allocated
AllocateChunk();
}
// Make sure we use the right amount of bytes for that freelist (might be rounded up),
return std::exchange(m_available_memory_it, m_available_memory_it + round_bytes);
}
// Can't use the pool => use operator new()
return ::operator new (bytes, std::align_val_t{alignment});
}
/**
* Returns a block to the freelists, or deletes the block when it did not come from the chunks.
*/
void Deallocate(void* p, std::size_t bytes, std::size_t alignment) noexcept
{
if (IsFreeListUsable(bytes, alignment)) {
const std::size_t num_alignments = NumElemAlignBytes(bytes);
// put the memory block into the linked list. We can placement construct the ListNode
// into the memory since we can be sure the alignment is correct.
PlacementAddToList(p, m_free_lists[num_alignments]);
} else {
// Can't use the pool => forward deallocation to ::operator delete().
::operator delete (p, std::align_val_t{alignment});
}
}
/**
* Number of allocated chunks
*/
[[nodiscard]] std::size_t NumAllocatedChunks() const
{
return m_allocated_chunks.size();
}
/**
* Size in bytes allocated per chunk, fixed at construction time.
*/
[[nodiscard]] size_t ChunkSizeBytes() const
{
return m_chunk_size_bytes;
}
};
/**
* Forwards all allocations/deallocations to the PoolResource.
*/
template <class T, std::size_t MAX_BLOCK_SIZE_BYTES, std::size_t ALIGN_BYTES>
class PoolAllocator
{
PoolResource<MAX_BLOCK_SIZE_BYTES, ALIGN_BYTES>* m_resource;
template <typename U, std::size_t M, std::size_t A>
friend class PoolAllocator;
public:
using value_type = T;
using ResourceType = PoolResource<MAX_BLOCK_SIZE_BYTES, ALIGN_BYTES>;
/**
* Not explicit so we can easily construct it with the correct resource
*/
PoolAllocator(ResourceType* resource) noexcept
: m_resource(resource)
{
}
PoolAllocator(const PoolAllocator& other) noexcept = default;
PoolAllocator& operator=(const PoolAllocator& other) noexcept = default;
template <class U>
PoolAllocator(const PoolAllocator<U, MAX_BLOCK_SIZE_BYTES, ALIGN_BYTES>& other) noexcept
: m_resource(other.resource())
{
}
/**
* The rebind struct here is mandatory because we use non-type template arguments for
* PoolAllocator. See https://en.cppreference.com/w/cpp/named_req/Allocator#cite_note-2
*/
template <typename U>
struct rebind {
using other = PoolAllocator<U, MAX_BLOCK_SIZE_BYTES, ALIGN_BYTES>;
};
/**
* Forwards each call to the resource.
*/
T* allocate(size_t n)
{
return static_cast<T*>(m_resource->Allocate(n * sizeof(T), alignof(T)));
}
/**
* Forwards each call to the resource.
*/
void deallocate(T* p, size_t n) noexcept
{
m_resource->Deallocate(p, n * sizeof(T), alignof(T));
}
ResourceType* resource() const noexcept
{
return m_resource;
}
};
template <class T1, class T2, std::size_t MAX_BLOCK_SIZE_BYTES, std::size_t ALIGN_BYTES>
bool operator==(const PoolAllocator<T1, MAX_BLOCK_SIZE_BYTES, ALIGN_BYTES>& a,
const PoolAllocator<T2, MAX_BLOCK_SIZE_BYTES, ALIGN_BYTES>& b) noexcept
{
return a.resource() == b.resource();
}
template <class T1, class T2, std::size_t MAX_BLOCK_SIZE_BYTES, std::size_t ALIGN_BYTES>
bool operator!=(const PoolAllocator<T1, MAX_BLOCK_SIZE_BYTES, ALIGN_BYTES>& a,
const PoolAllocator<T2, MAX_BLOCK_SIZE_BYTES, ALIGN_BYTES>& b) noexcept
{
return !(a == b);
}
#endif // BITCOIN_SUPPORT_ALLOCATORS_POOL_H
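The carve-and-recycle mechanics of `PoolResource` can be condensed into a standalone sketch. This is a deliberately simplified stand-in (one block size, one chunk, no `operator new()` fallback), not the real class:

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// TinyPool: carve fixed-size blocks out of one big chunk and recycle freed
// blocks through an in-place singly linked freelist, like PoolResource does
// per block size.
class TinyPool
{
    struct Node { Node* next; };
    static constexpr std::size_t BLOCK = 32; // single uniform block size

    std::byte* m_chunk;
    std::byte* m_it;   // next carve position, cf. m_available_memory_it
    std::byte* m_end;  // end of the chunk, cf. m_available_memory_end
    Node* m_free = nullptr; // freelist head

public:
    explicit TinyPool(std::size_t chunk_bytes)
        : m_chunk(new std::byte[chunk_bytes]), m_it(m_chunk), m_end(m_chunk + chunk_bytes) {}
    ~TinyPool() { delete[] m_chunk; }
    TinyPool(const TinyPool&) = delete;
    TinyPool& operator=(const TinyPool&) = delete;

    void* Allocate()
    {
        if (m_free) {
            // Reuse a freed block: unlink the head of the freelist.
            Node* n = m_free;
            m_free = n->next;
            return n;
        }
        // Freelist empty: carve a fresh block from the chunk.
        assert(m_it + BLOCK <= m_end); // the real PoolResource allocates a new chunk here
        std::byte* p = m_it;
        m_it += BLOCK;
        return p;
    }

    void Deallocate(void* p)
    {
        // Placement-construct the list node inside the returned block itself.
        m_free = new (p) Node{m_free};
    }
};
```

Because freed blocks store the freelist node in their own bytes, deallocation costs no extra memory, which is the property the `ListNode`/`ELEM_ALIGN_BYTES` static_asserts above protect.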

View File

@ -6,6 +6,7 @@
#include <coins.h>
#include <script/standard.h>
#include <streams.h>
#include <test/util/poolresourcetester.h>
#include <test/util/setup_common.h>
#include <txdb.h>
#include <uint256.h>
@ -625,7 +626,8 @@ void GetCoinsMapEntry(const CCoinsMap& map, CAmount& value, char& flags, const C
void WriteCoinsViewEntry(CCoinsView& view, CAmount value, char flags)
{
CCoinsMap map;
CCoinsMapMemoryResource resource;
CCoinsMap map{0, CCoinsMap::hasher{}, CCoinsMap::key_equal{}, &resource};
InsertCoinsMapEntry(map, value, flags);
BOOST_CHECK(view.BatchWrite(map, {}));
}
@ -924,6 +926,7 @@ void TestFlushBehavior(
CAmount value;
char flags;
size_t cache_usage;
size_t cache_size;
auto flush_all = [&all_caches](bool erase) {
// Flush in reverse order to ensure that flushes happen from children up.
@ -948,6 +951,8 @@ void TestFlushBehavior(
view->AddCoin(outp, Coin(coin), false);
cache_usage = view->DynamicMemoryUsage();
cache_size = view->map().size();
// `base` shouldn't have coin (no flush yet) but `view` should have cached it.
BOOST_CHECK(!base.HaveCoin(outp));
BOOST_CHECK(view->HaveCoin(outp));
@ -962,6 +967,7 @@ void TestFlushBehavior(
// CoinsMap usage should be unchanged since we didn't erase anything.
BOOST_CHECK_EQUAL(cache_usage, view->DynamicMemoryUsage());
BOOST_CHECK_EQUAL(cache_size, view->map().size());
// --- 3. Ensuring the entry still exists in the cache and has been written to parent
//
@ -978,8 +984,10 @@ void TestFlushBehavior(
//
flush_all(/*erase=*/ true);
// Memory usage should have gone down.
BOOST_CHECK(view->DynamicMemoryUsage() < cache_usage);
// Memory does not necessarily go down due to the map using a memory pool
BOOST_TEST(view->DynamicMemoryUsage() <= cache_usage);
// Size of the cache must go down though
BOOST_TEST(view->map().size() < cache_size);
// --- 5. Ensuring the entry is no longer in the cache
//
@ -1095,4 +1103,29 @@ BOOST_AUTO_TEST_CASE(ccoins_flush_behavior)
}
}
BOOST_AUTO_TEST_CASE(coins_resource_is_used)
{
CCoinsMapMemoryResource resource;
PoolResourceTester::CheckAllDataAccountedFor(resource);
{
CCoinsMap map{0, CCoinsMap::hasher{}, CCoinsMap::key_equal{}, &resource};
BOOST_TEST(memusage::DynamicUsage(map) >= resource.ChunkSizeBytes());
map.reserve(1000);
// The resource has preallocated a chunk, so we should have space for several nodes without the need to allocate anything else.
const auto usage_before = memusage::DynamicUsage(map);
COutPoint out_point{};
for (size_t i = 0; i < 1000; ++i) {
out_point.n = i;
map[out_point];
}
BOOST_TEST(usage_before == memusage::DynamicUsage(map));
}
PoolResourceTester::CheckAllDataAccountedFor(resource);
}
BOOST_AUTO_TEST_SUITE_END()

View File

@ -15,6 +15,7 @@
#include <test/util/net.h>
#include <test/util/setup_common.h>
#include <timedata.h>
#include <txorphanage.h>
#include <util/string.h>
#include <util/system.h>
#include <util/time.h>
@ -26,18 +27,6 @@
#include <boost/test/unit_test.hpp>
// Tests these internal-to-net_processing.cpp methods:
extern bool AddOrphanTx(const CTransactionRef& tx, NodeId peer);
extern void EraseOrphansFor(NodeId peer);
extern unsigned int LimitOrphanTxSize(unsigned int nMaxOrphans);
struct COrphanTx {
CTransactionRef tx;
NodeId fromPeer;
int64_t nTimeExpire;
};
extern std::map<uint256, COrphanTx> mapOrphanTransactions GUARDED_BY(g_cs_orphans);
static CService ip(uint32_t i)
{
struct in_addr s;
@ -45,8 +34,6 @@ static CService ip(uint32_t i)
return CService(CNetAddr(s), Params().GetDefaultPort());
}
void UpdateLastBlockAnnounceTime(NodeId node, int64_t time_in_seconds);
BOOST_FIXTURE_TEST_SUITE(denialofservice_tests, TestingSetup)
// Test eviction of an outbound peer whose chain never advances
@ -59,6 +46,8 @@ BOOST_FIXTURE_TEST_SUITE(denialofservice_tests, TestingSetup)
// work.
BOOST_AUTO_TEST_CASE(outbound_slow_chain_eviction)
{
LOCK(NetEventsInterface::g_msgproc_mutex);
ConnmanTestMsg& connman = static_cast<ConnmanTestMsg&>(*m_node.connman);
// Disable inactivity checks for this test to avoid interference
connman.SetPeerConnectTimeout(99999s);
@ -95,34 +84,29 @@ BOOST_AUTO_TEST_CASE(outbound_slow_chain_eviction)
}
// Test starts here
{
LOCK(dummyNode1.cs_sendProcessing);
BOOST_CHECK(peerman.SendMessages(&dummyNode1)); // should result in getheaders
}
BOOST_CHECK(peerman.SendMessages(&dummyNode1)); // should result in getheaders
{
LOCK(dummyNode1.cs_vSend);
BOOST_CHECK(dummyNode1.vSendMsg.size() > 0);
dummyNode1.vSendMsg.clear();
dummyNode1.nSendMsgSize = 0;
}
connman.FlushSendBuffer(dummyNode1);
{
LOCK(dummyNode1.cs_vSend);
BOOST_CHECK(dummyNode1.vSendMsg.empty());
}
int64_t nStartTime = GetTime();
// Wait 21 minutes
SetMockTime(nStartTime+21*60);
{
LOCK(dummyNode1.cs_sendProcessing);
BOOST_CHECK(peerman.SendMessages(&dummyNode1)); // should result in getheaders
}
BOOST_CHECK(peerman.SendMessages(&dummyNode1)); // should result in getheaders
{
LOCK(dummyNode1.cs_vSend);
BOOST_CHECK(dummyNode1.vSendMsg.size() > 0);
}
// Wait 3 more minutes
SetMockTime(nStartTime+24*60);
{
LOCK(dummyNode1.cs_sendProcessing);
BOOST_CHECK(peerman.SendMessages(&dummyNode1)); // should result in disconnect
}
BOOST_CHECK(peerman.SendMessages(&dummyNode1)); // should result in disconnect
BOOST_CHECK(dummyNode1.fDisconnect == true);
peerman.FinalizeNode(dummyNode1);
@ -213,7 +197,7 @@ BOOST_AUTO_TEST_CASE(stale_tip_peer_management)
// Update the last announced block time for the last
// peer, and check that the next newest node gets evicted.
UpdateLastBlockAnnounceTime(vNodes.back()->GetId(), GetTime());
peerLogic->UpdateLastBlockAnnounceTime(vNodes.back()->GetId(), GetTime());
peerLogic->CheckForStaleTipAndEvictPeers();
for (int i = 0; i < max_outbound_full_relay - 1; ++i) {
@ -296,6 +280,8 @@ BOOST_AUTO_TEST_CASE(block_relay_only_eviction)
BOOST_AUTO_TEST_CASE(peer_discouragement)
{
LOCK(NetEventsInterface::g_msgproc_mutex);
const CChainParams& chainparams = Params();
auto banman = std::make_unique<BanMan>(m_args.GetDataDirBase() / "banlist", nullptr, DEFAULT_MISBEHAVING_BANTIME);
auto connman = std::make_unique<ConnmanTestMsg>(0x1337, 0x1337, *m_node.addrman);
@ -333,10 +319,7 @@ BOOST_AUTO_TEST_CASE(peer_discouragement)
nodes[0]->fSuccessfullyConnected = true;
connman->AddTestNode(*nodes[0]);
peerLogic->Misbehaving(nodes[0]->GetId(), DISCOURAGEMENT_THRESHOLD); // Should be discouraged
{
LOCK(nodes[0]->cs_sendProcessing);
BOOST_CHECK(peerLogic->SendMessages(nodes[0]));
}
BOOST_CHECK(peerLogic->SendMessages(nodes[0]));
BOOST_CHECK(banman->IsDiscouraged(addr[0]));
BOOST_CHECK(nodes[0]->fDisconnect);
BOOST_CHECK(!banman->IsDiscouraged(other_addr)); // Different address, not discouraged
@ -355,10 +338,7 @@ BOOST_AUTO_TEST_CASE(peer_discouragement)
nodes[1]->fSuccessfullyConnected = true;
connman->AddTestNode(*nodes[1]);
peerLogic->Misbehaving(nodes[1]->GetId(), DISCOURAGEMENT_THRESHOLD - 1);
{
LOCK(nodes[1]->cs_sendProcessing);
BOOST_CHECK(peerLogic->SendMessages(nodes[1]));
}
BOOST_CHECK(peerLogic->SendMessages(nodes[1]));
// [0] is still discouraged/disconnected.
BOOST_CHECK(banman->IsDiscouraged(addr[0]));
BOOST_CHECK(nodes[0]->fDisconnect);
@ -366,10 +346,7 @@ BOOST_AUTO_TEST_CASE(peer_discouragement)
BOOST_CHECK(!banman->IsDiscouraged(addr[1]));
BOOST_CHECK(!nodes[1]->fDisconnect);
peerLogic->Misbehaving(nodes[1]->GetId(), 1); // [1] reaches discouragement threshold
{
LOCK(nodes[1]->cs_sendProcessing);
BOOST_CHECK(peerLogic->SendMessages(nodes[1]));
}
BOOST_CHECK(peerLogic->SendMessages(nodes[1]));
// Expect both [0] and [1] to be discouraged/disconnected now.
BOOST_CHECK(banman->IsDiscouraged(addr[0]));
BOOST_CHECK(nodes[0]->fDisconnect);
@ -392,10 +369,7 @@ BOOST_AUTO_TEST_CASE(peer_discouragement)
nodes[2]->fSuccessfullyConnected = true;
connman->AddTestNode(*nodes[2]);
peerLogic->Misbehaving(nodes[2]->GetId(), DISCOURAGEMENT_THRESHOLD, /* message */ "");
{
LOCK(nodes[2]->cs_sendProcessing);
BOOST_CHECK(peerLogic->SendMessages(nodes[2]));
}
BOOST_CHECK(peerLogic->SendMessages(nodes[2]));
BOOST_CHECK(banman->IsDiscouraged(addr[0]));
BOOST_CHECK(banman->IsDiscouraged(addr[1]));
BOOST_CHECK(banman->IsDiscouraged(addr[2]));
@ -411,6 +385,8 @@ BOOST_AUTO_TEST_CASE(peer_discouragement)
BOOST_AUTO_TEST_CASE(DoS_bantime)
{
LOCK(NetEventsInterface::g_msgproc_mutex);
const CChainParams& chainparams = Params();
auto banman = std::make_unique<BanMan>(m_args.GetDataDirBase() / "banlist", nullptr, DEFAULT_MISBEHAVING_BANTIME);
auto connman = std::make_unique<CConnman>(0x1337, 0x1337, *m_node.addrman);
@ -439,24 +415,29 @@ BOOST_AUTO_TEST_CASE(DoS_bantime)
dummyNode.fSuccessfullyConnected = true;
peerLogic->Misbehaving(dummyNode.GetId(), DISCOURAGEMENT_THRESHOLD);
{
LOCK(dummyNode.cs_sendProcessing);
BOOST_CHECK(peerLogic->SendMessages(&dummyNode));
}
BOOST_CHECK(peerLogic->SendMessages(&dummyNode));
BOOST_CHECK(banman->IsDiscouraged(addr));
peerLogic->FinalizeNode(dummyNode);
}
static CTransactionRef RandomOrphan()
class TxOrphanageTest : public TxOrphanage
{
std::map<uint256, COrphanTx>::iterator it;
LOCK2(cs_main, g_cs_orphans);
it = mapOrphanTransactions.lower_bound(InsecureRand256());
if (it == mapOrphanTransactions.end())
it = mapOrphanTransactions.begin();
return it->second.tx;
}
public:
inline size_t CountOrphans() const EXCLUSIVE_LOCKS_REQUIRED(g_cs_orphans)
{
return m_orphans.size();
}
CTransactionRef RandomOrphan() EXCLUSIVE_LOCKS_REQUIRED(g_cs_orphans)
{
std::map<uint256, OrphanTx>::iterator it;
it = m_orphans.lower_bound(InsecureRand256());
if (it == m_orphans.end())
it = m_orphans.begin();
return it->second.tx;
}
};
static void MakeNewKeyWithFastRandomContext(CKey& key)
{
@ -476,11 +457,14 @@ BOOST_AUTO_TEST_CASE(DoS_mapOrphans)
// signature's R and S values have leading zeros.
g_insecure_rand_ctx = FastRandomContext{uint256{33}};
TxOrphanageTest orphanage;
CKey key;
MakeNewKeyWithFastRandomContext(key);
FillableSigningProvider keystore;
BOOST_CHECK(keystore.AddKey(key));
LOCK(g_cs_orphans);
// 50 orphan transactions:
for (int i = 0; i < 50; i++)
{
@ -493,13 +477,13 @@ BOOST_AUTO_TEST_CASE(DoS_mapOrphans)
tx.vout[0].nValue = 1*CENT;
tx.vout[0].scriptPubKey = GetScriptForDestination(PKHash(key.GetPubKey()));
AddOrphanTx(MakeTransactionRef(tx), i);
orphanage.AddTx(MakeTransactionRef(tx), i);
}
// ... and 50 that depend on other orphans:
for (int i = 0; i < 50; i++)
{
CTransactionRef txPrev = RandomOrphan();
CTransactionRef txPrev = orphanage.RandomOrphan();
CMutableTransaction tx;
tx.vin.resize(1);
@ -510,13 +494,13 @@ BOOST_AUTO_TEST_CASE(DoS_mapOrphans)
tx.vout[0].scriptPubKey = GetScriptForDestination(PKHash(key.GetPubKey()));
BOOST_CHECK(SignSignature(keystore, *txPrev, tx, 0, SIGHASH_ALL));
AddOrphanTx(MakeTransactionRef(tx), i);
orphanage.AddTx(MakeTransactionRef(tx), i);
}
// This really-big orphan should be ignored:
for (int i = 0; i < 10; i++)
{
CTransactionRef txPrev = RandomOrphan();
CTransactionRef txPrev = orphanage.RandomOrphan();
CMutableTransaction tx;
tx.vout.resize(1);
@ -534,25 +518,24 @@ BOOST_AUTO_TEST_CASE(DoS_mapOrphans)
for (unsigned int j = 1; j < tx.vin.size(); j++)
tx.vin[j].scriptSig = tx.vin[0].scriptSig;
BOOST_CHECK(!AddOrphanTx(MakeTransactionRef(tx), i));
BOOST_CHECK(!orphanage.AddTx(MakeTransactionRef(tx), i));
}
LOCK2(cs_main, g_cs_orphans);
// Test EraseOrphansFor:
for (NodeId i = 0; i < 3; i++)
{
size_t sizeBefore = mapOrphanTransactions.size();
EraseOrphansFor(i);
BOOST_CHECK(mapOrphanTransactions.size() < sizeBefore);
size_t sizeBefore = orphanage.CountOrphans();
orphanage.EraseForPeer(i);
BOOST_CHECK(orphanage.CountOrphans() < sizeBefore);
}
// Test LimitOrphanTxSize() function:
LimitOrphanTxSize(40);
BOOST_CHECK(mapOrphanTransactions.size() <= 40);
LimitOrphanTxSize(10);
BOOST_CHECK(mapOrphanTransactions.size() <= 10);
LimitOrphanTxSize(0);
BOOST_CHECK(mapOrphanTransactions.empty());
orphanage.LimitOrphans(40);
BOOST_CHECK(orphanage.CountOrphans() <= 40);
orphanage.LimitOrphans(10);
BOOST_CHECK(orphanage.CountOrphans() <= 10);
orphanage.LimitOrphans(0);
BOOST_CHECK(orphanage.CountOrphans() == 0);
}
BOOST_AUTO_TEST_SUITE_END()
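The `TxOrphanage` surface the rewritten test exercises (`AddTx` / `EraseForPeer` / `LimitOrphans` / `CountOrphans`) can be sketched standalone. The types below are simplified hypothetical stand-ins, not the real class, which also tracks expiry times and outpoint back-references:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <iterator>
#include <map>
#include <random>

using NodeId = int64_t;
using TxId = uint64_t;

class MiniOrphanage
{
    struct OrphanTx { NodeId fromPeer; };
    std::map<TxId, OrphanTx> m_orphans;

public:
    // Returns false if the orphan was already known (the real AddTx also
    // rejects oversized transactions, as the "really-big orphan" test checks).
    bool AddTx(TxId txid, NodeId peer)
    {
        return m_orphans.emplace(txid, OrphanTx{peer}).second;
    }

    // Drop all orphans announced by a disconnecting peer.
    void EraseForPeer(NodeId peer)
    {
        for (auto it = m_orphans.begin(); it != m_orphans.end();) {
            it = (it->second.fromPeer == peer) ? m_orphans.erase(it) : std::next(it);
        }
    }

    // Evict random entries until the map is under the cap, like LimitOrphans().
    void LimitOrphans(std::size_t max, std::mt19937_64& rng)
    {
        while (m_orphans.size() > max) {
            auto it = m_orphans.lower_bound(rng());
            if (it == m_orphans.end()) it = m_orphans.begin();
            m_orphans.erase(it);
        }
    }

    std::size_t CountOrphans() const { return m_orphans.size(); }
};
```

Random eviction in `LimitOrphans` means no peer can predict which orphans survive the cap, the same reasoning the backported class keeps from the old `LimitOrphanTxSize`.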

View File

@ -117,7 +117,8 @@ FUZZ_TARGET_INIT(coins_view, initialize_coins_view)
random_mutable_transaction = *opt_mutable_transaction;
},
[&] {
CCoinsMap coins_map;
CCoinsMapMemoryResource resource;
CCoinsMap coins_map{0, SaltedOutpointHasher{/*deterministic=*/true}, CCoinsMap::key_equal{}, &resource};
while (fuzzed_data_provider.ConsumeBool()) {
CCoinsCacheEntry coins_cache_entry;
coins_cache_entry.flags = fuzzed_data_provider.ConsumeIntegral<unsigned char>();

View File

@ -91,12 +91,6 @@ FUZZ_TARGET_INIT(connman, initialize_connman)
[&] {
(void)connman.OutboundTargetReached(fuzzed_data_provider.ConsumeBool());
},
[&] {
// Limit now to int32_t to avoid signed integer overflow
(void)connman.PoissonNextSendInbound(
std::chrono::microseconds{fuzzed_data_provider.ConsumeIntegral<int32_t>()},
std::chrono::seconds{fuzzed_data_provider.ConsumeIntegral<int>()});
},
[&] {
CSerializedNetMsg serialized_net_msg;
serialized_net_msg.m_type = fuzzed_data_provider.ConsumeRandomLengthString(CMessageHeader::COMMAND_SIZE);

View File

@ -9,6 +9,8 @@
#include <protocol.h>
#include <test/fuzz/FuzzedDataProvider.h>
#include <test/fuzz/fuzz.h>
#include <test/fuzz/util.h>
#include <test/util/xoroshiro128plusplus.h>
#include <cassert>
#include <cstdint>
@ -16,16 +18,21 @@
#include <optional>
#include <vector>
std::vector<std::string> g_all_messages;
void initialize_p2p_transport_serialization()
{
SelectParams(CBaseChainParams::REGTEST);
g_all_messages = getAllNetMessageTypes();
std::sort(g_all_messages.begin(), g_all_messages.end());
}
FUZZ_TARGET_INIT(p2p_transport_serialization, initialize_p2p_transport_serialization)
{
// Construct deserializer, with a dummy NodeId
V1TransportDeserializer deserializer{Params(), (NodeId)0, SER_NETWORK, INIT_PROTO_VERSION};
V1TransportSerializer serializer{};
// Construct transports for both sides, with dummy NodeIds.
V1Transport recv_transport{NodeId{0}, SER_NETWORK, INIT_PROTO_VERSION};
V1Transport send_transport{NodeId{1}, SER_NETWORK, INIT_PROTO_VERSION};
FuzzedDataProvider fuzzed_data_provider{buffer.data(), buffer.size()};
auto checksum_assist = fuzzed_data_provider.ConsumeBool();
@ -62,14 +69,13 @@ FUZZ_TARGET_INIT(p2p_transport_serialization, initialize_p2p_transport_serializa
mutable_msg_bytes.insert(mutable_msg_bytes.end(), payload_bytes.begin(), payload_bytes.end());
Span<const uint8_t> msg_bytes{mutable_msg_bytes};
while (msg_bytes.size() > 0) {
const int handled = deserializer.Read(msg_bytes);
if (handled < 0) {
if (!recv_transport.ReceivedBytes(msg_bytes)) {
break;
}
if (deserializer.Complete()) {
if (recv_transport.ReceivedMessageComplete()) {
const std::chrono::microseconds m_time{std::numeric_limits<int64_t>::max()};
bool reject_message{false};
CNetMessage msg = deserializer.GetMessage(m_time, reject_message);
CNetMessage msg = recv_transport.GetReceivedMessage(m_time, reject_message);
assert(msg.m_type.size() <= CMessageHeader::COMMAND_SIZE);
assert(msg.m_raw_message_size <= mutable_msg_bytes.size());
assert(msg.m_raw_message_size == CMessageHeader::HEADER_SIZE + msg.m_message_size);
@ -77,7 +83,247 @@ FUZZ_TARGET_INIT(p2p_transport_serialization, initialize_p2p_transport_serializa
std::vector<unsigned char> header;
auto msg2 = CNetMsgMaker{msg.m_recv.GetVersion()}.Make(msg.m_type, MakeUCharSpan(msg.m_recv));
serializer.prepareForTransport(msg2, header);
bool queued = send_transport.SetMessageToSend(msg2);
assert(queued);
std::optional<bool> known_more;
while (true) {
const auto& [to_send, more, _msg_type] = send_transport.GetBytesToSend();
if (known_more) assert(!to_send.empty() == *known_more);
if (to_send.empty()) break;
send_transport.MarkBytesSent(to_send.size());
known_more = more;
}
}
}
}
namespace {
template<typename R>
void SimulationTest(Transport& initiator, Transport& responder, R& rng, FuzzedDataProvider& provider)
{
// Simulation test with two Transport objects, which send messages to each other, with
// sending and receiving fragmented into multiple pieces that may be interleaved. It primarily
// verifies that the sending and receiving side are compatible with each other, plus a few
// sanity checks. It does not attempt to introduce errors in the communicated data.
// Put the transports in an array for by-index access.
const std::array<Transport*, 2> transports = {&initiator, &responder};
// Two vectors representing in-flight bytes. inflight[i] is from transport[i] to transport[!i].
std::array<std::vector<uint8_t>, 2> in_flight;
// Two queues with expected messages. expected[i] is expected to arrive in transport[!i].
std::array<std::deque<CSerializedNetMsg>, 2> expected;
// Vectors with bytes last returned by GetBytesToSend() on transport[i].
std::array<std::vector<uint8_t>, 2> to_send;
// Last returned 'more' values (if still relevant) by transport[i]->GetBytesToSend().
std::array<std::optional<bool>, 2> last_more;
// Whether more bytes to be sent are expected on transport[i].
std::array<std::optional<bool>, 2> expect_more;
// Function to consume a message type.
auto msg_type_fn = [&]() {
uint8_t v = provider.ConsumeIntegral<uint8_t>();
if (v == 0xFF) {
// If v is 0xFF, construct a valid (but possibly unknown) message type from the fuzz
// data.
std::string ret;
while (ret.size() < CMessageHeader::COMMAND_SIZE) {
char c = provider.ConsumeIntegral<char>();
// Match the allowed characters in CMessageHeader::IsCommandValid(). Any other
// character is interpreted as end.
if (c < ' ' || c > 0x7E) break;
ret += c;
}
return ret;
} else {
// Otherwise, use it as index into the list of known messages.
return g_all_messages[v % g_all_messages.size()];
}
};
// Function to construct a CSerializedNetMsg to send.
auto make_msg_fn = [&](bool first) {
CSerializedNetMsg msg;
if (first) {
// Always send a "version" message as first one.
msg.m_type = "version";
} else {
msg.m_type = msg_type_fn();
}
// Determine size of message to send (limited to 75 kB for performance reasons).
size_t size = provider.ConsumeIntegralInRange<uint32_t>(0, 75000);
// Get payload of message from RNG.
msg.data.resize(size);
for (auto& v : msg.data) v = uint8_t(rng());
// Return.
return msg;
};
// The next message to be sent (initially version messages, but will be replaced once sent).
std::array<CSerializedNetMsg, 2> next_msg = {
make_msg_fn(/*first=*/true),
make_msg_fn(/*first=*/true)
};
// Wrapper around transport[i]->GetBytesToSend() that performs sanity checks.
auto bytes_to_send_fn = [&](int side) -> Transport::BytesToSend {
const auto& [bytes, more, msg_type] = transports[side]->GetBytesToSend();
// Compare with expected more.
if (expect_more[side].has_value()) assert(!bytes.empty() == *expect_more[side]);
// Compare with previously reported output.
assert(to_send[side].size() <= bytes.size());
assert(to_send[side] == Span{bytes}.first(to_send[side].size()));
to_send[side].resize(bytes.size());
std::copy(bytes.begin(), bytes.end(), to_send[side].begin());
// Remember 'more' result.
last_more[side] = {more};
// Return.
return {bytes, more, msg_type};
};
// Function to make side send a new message.
auto new_msg_fn = [&](int side) {
// Don't do anything if there are too many unreceived messages already.
if (expected[side].size() >= 16) return;
// Try to send (a copy of) the message in next_msg[side].
CSerializedNetMsg msg = next_msg[side].Copy();
bool queued = transports[side]->SetMessageToSend(msg);
// Update expected more data.
expect_more[side] = std::nullopt;
// Verify consistency of GetBytesToSend after SetMessageToSend
bytes_to_send_fn(/*side=*/side);
if (queued) {
// Remember that this message is now expected by the receiver.
expected[side].emplace_back(std::move(next_msg[side]));
// Construct a new next message to send.
next_msg[side] = make_msg_fn(/*first=*/false);
}
};
// Function to make side send out bytes (if any).
auto send_fn = [&](int side, bool everything = false) {
const auto& [bytes, more, msg_type] = bytes_to_send_fn(/*side=*/side);
// Don't do anything if no bytes to send.
if (bytes.empty()) return false;
size_t send_now = everything ? bytes.size() : provider.ConsumeIntegralInRange<size_t>(0, bytes.size());
if (send_now == 0) return false;
// Add bytes to the in-flight queue, and mark those bytes as consumed.
in_flight[side].insert(in_flight[side].end(), bytes.begin(), bytes.begin() + send_now);
transports[side]->MarkBytesSent(send_now);
// If all to-be-sent bytes were sent, move last_more data to expect_more data.
if (send_now == bytes.size()) {
expect_more[side] = last_more[side];
}
// Remove the bytes from the last reported to-be-sent vector.
assert(to_send[side].size() >= send_now);
to_send[side].erase(to_send[side].begin(), to_send[side].begin() + send_now);
// Verify that GetBytesToSend gives a result consistent with earlier.
bytes_to_send_fn(/*side=*/side);
// Return whether anything was sent.
return send_now > 0;
};
// Function to make !side receive bytes (if any).
auto recv_fn = [&](int side, bool everything = false) {
// Don't do anything if no bytes in flight.
if (in_flight[side].empty()) return false;
// Decide span to receive
size_t to_recv_len = in_flight[side].size();
if (!everything) to_recv_len = provider.ConsumeIntegralInRange<size_t>(0, to_recv_len);
Span<const uint8_t> to_recv = Span{in_flight[side]}.first(to_recv_len);
// Process those bytes
while (!to_recv.empty()) {
size_t old_len = to_recv.size();
bool ret = transports[!side]->ReceivedBytes(to_recv);
// Bytes must always be accepted, as this test does not introduce any errors in
// communication.
assert(ret);
// Clear cached expected 'more' information: if certainly no more data was to be sent
// before, receiving bytes makes this uncertain.
if (expect_more[!side] == false) expect_more[!side] = std::nullopt;
// Verify consistency of GetBytesToSend after ReceivedBytes
bytes_to_send_fn(/*side=*/!side);
bool progress = to_recv.size() < old_len;
if (transports[!side]->ReceivedMessageComplete()) {
bool reject{false};
auto received = transports[!side]->GetReceivedMessage({}, reject);
// Receiving must succeed.
assert(!reject);
// There must be a corresponding expected message.
assert(!expected[side].empty());
// The m_message_size field must be correct.
assert(received.m_message_size == received.m_recv.size());
// The m_type must match what is expected.
assert(received.m_type == expected[side].front().m_type);
// The data must match what is expected.
assert(MakeByteSpan(received.m_recv) == MakeByteSpan(expected[side].front().data));
expected[side].pop_front();
progress = true;
}
// Progress must be made (by processing incoming bytes and/or returning complete
// messages) until all received bytes are processed.
assert(progress);
}
// Remove the processed bytes from the in_flight buffer.
in_flight[side].erase(in_flight[side].begin(), in_flight[side].begin() + to_recv_len);
// Return whether anything was received.
return to_recv_len > 0;
};
// Main loop, interleaving new messages, sends, and receives.
LIMITED_WHILE(provider.remaining_bytes(), 1000) {
CallOneOf(provider,
// (Try to) give the next message to the transport.
[&] { new_msg_fn(/*side=*/0); },
[&] { new_msg_fn(/*side=*/1); },
// (Try to) send some bytes from the transport to the network.
[&] { send_fn(/*side=*/0); },
[&] { send_fn(/*side=*/1); },
// (Try to) receive bytes from the network, converting to messages.
[&] { recv_fn(/*side=*/0); },
[&] { recv_fn(/*side=*/1); }
);
}
// When we're done, perform sends and receives of existing messages to flush anything already
// in flight.
while (true) {
bool any = false;
if (send_fn(/*side=*/0, /*everything=*/true)) any = true;
if (send_fn(/*side=*/1, /*everything=*/true)) any = true;
if (recv_fn(/*side=*/0, /*everything=*/true)) any = true;
if (recv_fn(/*side=*/1, /*everything=*/true)) any = true;
if (!any) break;
}
// Make sure nothing is left in flight.
assert(in_flight[0].empty());
assert(in_flight[1].empty());
// Make sure all expected messages were received.
assert(expected[0].empty());
assert(expected[1].empty());
}
std::unique_ptr<Transport> MakeV1Transport(NodeId nodeid) noexcept
{
return std::make_unique<V1Transport>(nodeid, SER_NETWORK, INIT_PROTO_VERSION);
}
} // namespace
FUZZ_TARGET_INIT(p2p_transport_bidirectional, initialize_p2p_transport_serialization)
{
// Test with two V1 transports talking to each other.
FuzzedDataProvider provider{buffer.data(), buffer.size()};
XoRoShiRo128PlusPlus rng(provider.ConsumeIntegral<uint64_t>());
auto t1 = MakeV1Transport(NodeId{0});
auto t2 = MakeV1Transport(NodeId{1});
if (!t1 || !t2) return;
SimulationTest(*t1, *t2, rng, provider);
}


@@ -0,0 +1,174 @@
// Copyright (c) 2022 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#include <span.h>
#include <support/allocators/pool.h>
#include <test/fuzz/FuzzedDataProvider.h>
#include <test/fuzz/fuzz.h>
#include <test/fuzz/util.h>
#include <test/util/poolresourcetester.h>
#include <test/util/xoroshiro128plusplus.h>
#include <cstdint>
#include <tuple>
#include <vector>
namespace {
template <std::size_t MAX_BLOCK_SIZE_BYTES, std::size_t ALIGN_BYTES>
class PoolResourceFuzzer
{
FuzzedDataProvider& m_provider;
PoolResource<MAX_BLOCK_SIZE_BYTES, ALIGN_BYTES> m_test_resource;
uint64_t m_sequence{0};
size_t m_total_allocated{};
struct Entry {
Span<std::byte> span;
size_t alignment;
uint64_t seed;
Entry(Span<std::byte> s, size_t a, uint64_t se) : span(s), alignment(a), seed(se) {}
};
std::vector<Entry> m_entries;
public:
PoolResourceFuzzer(FuzzedDataProvider& provider)
: m_provider{provider},
m_test_resource{provider.ConsumeIntegralInRange<size_t>(MAX_BLOCK_SIZE_BYTES, 262144)}
{
}
void Allocate(size_t size, size_t alignment)
{
assert(size > 0); // Must allocate at least 1 byte.
assert(alignment > 0); // Alignment must be at least 1.
assert((alignment & (alignment - 1)) == 0); // Alignment must be power of 2.
assert((size & (alignment - 1)) == 0); // Size must be a multiple of alignment.
auto span = Span(static_cast<std::byte*>(m_test_resource.Allocate(size, alignment)), size);
m_total_allocated += size;
auto ptr_val = reinterpret_cast<std::uintptr_t>(span.data());
assert((ptr_val & (alignment - 1)) == 0);
uint64_t seed = m_sequence++;
RandomContentFill(m_entries.emplace_back(span, alignment, seed));
}
void Allocate()
{
if (m_total_allocated > 0x1000000) return;
size_t alignment_bits = m_provider.ConsumeIntegralInRange<size_t>(0, 7);
size_t alignment = 1 << alignment_bits;
size_t size_bits = m_provider.ConsumeIntegralInRange<size_t>(0, 16 - alignment_bits);
size_t size = m_provider.ConsumeIntegralInRange<size_t>(1U << size_bits, (1U << (size_bits + 1)) - 1U) << alignment_bits;
Allocate(size, alignment);
}
void RandomContentFill(Entry& entry)
{
XoRoShiRo128PlusPlus rng(entry.seed);
auto ptr = entry.span.data();
auto size = entry.span.size();
while (size >= 8) {
auto r = rng();
std::memcpy(ptr, &r, 8);
size -= 8;
ptr += 8;
}
if (size > 0) {
auto r = rng();
std::memcpy(ptr, &r, size);
}
}
void RandomContentCheck(const Entry& entry)
{
XoRoShiRo128PlusPlus rng(entry.seed);
auto ptr = entry.span.data();
auto size = entry.span.size();
std::byte buf[8];
while (size >= 8) {
auto r = rng();
std::memcpy(buf, &r, 8);
assert(std::memcmp(buf, ptr, 8) == 0);
size -= 8;
ptr += 8;
}
if (size > 0) {
auto r = rng();
std::memcpy(buf, &r, size);
assert(std::memcmp(buf, ptr, size) == 0);
}
}
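The fill-and-check pair above stores only a 64-bit seed per allocation and regenerates the expected byte stream on deallocation, so corruption anywhere in the pool is detected without keeping a copy of the data. A minimal sketch of the same pattern, using `std::mt19937_64` as a stand-in for `XoRoShiRo128PlusPlus` (the helper names are illustrative, not the fuzzer's API):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <random>
#include <vector>

// Fill a buffer with a deterministic byte stream derived from `seed`.
void SeededFill(std::byte* ptr, std::size_t size, uint64_t seed)
{
    std::mt19937_64 rng(seed); // stand-in for XoRoShiRo128PlusPlus
    while (size >= 8) {
        uint64_t r = rng();
        std::memcpy(ptr, &r, 8);
        ptr += 8;
        size -= 8;
    }
    if (size > 0) {
        uint64_t r = rng();
        std::memcpy(ptr, &r, size); // tail: copy only the bytes that fit
    }
}

// Regenerate the same stream and compare; returns true if the buffer is untouched.
bool SeededCheck(const std::byte* ptr, std::size_t size, uint64_t seed)
{
    std::mt19937_64 rng(seed);
    std::byte buf[8];
    while (size >= 8) {
        uint64_t r = rng();
        std::memcpy(buf, &r, 8);
        if (std::memcmp(buf, ptr, 8) != 0) return false;
        ptr += 8;
        size -= 8;
    }
    if (size > 0) {
        uint64_t r = rng();
        std::memcpy(buf, &r, size);
        if (std::memcmp(buf, ptr, size) != 0) return false;
    }
    return true;
}
```

Any RNG works here as long as the same seed reproduces the same stream on fill and on check.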
void Deallocate(const Entry& entry)
{
auto ptr_val = reinterpret_cast<std::uintptr_t>(entry.span.data());
assert((ptr_val & (entry.alignment - 1)) == 0);
RandomContentCheck(entry);
m_total_allocated -= entry.span.size();
m_test_resource.Deallocate(entry.span.data(), entry.span.size(), entry.alignment);
}
void Deallocate()
{
if (m_entries.empty()) {
return;
}
size_t idx = m_provider.ConsumeIntegralInRange<size_t>(0, m_entries.size() - 1);
Deallocate(m_entries[idx]);
if (idx != m_entries.size() - 1) {
m_entries[idx] = std::move(m_entries.back());
}
m_entries.pop_back();
}
void Clear()
{
while (!m_entries.empty()) {
Deallocate();
}
PoolResourceTester::CheckAllDataAccountedFor(m_test_resource);
}
void Fuzz()
{
LIMITED_WHILE(m_provider.ConsumeBool(), 10000)
{
CallOneOf(
m_provider,
[&] { Allocate(); },
[&] { Deallocate(); });
}
Clear();
}
};
} // namespace
FUZZ_TARGET(pool_resource)
{
FuzzedDataProvider provider(buffer.data(), buffer.size());
CallOneOf(
provider,
[&] { PoolResourceFuzzer<128, 1>{provider}.Fuzz(); },
[&] { PoolResourceFuzzer<128, 2>{provider}.Fuzz(); },
[&] { PoolResourceFuzzer<128, 4>{provider}.Fuzz(); },
[&] { PoolResourceFuzzer<128, 8>{provider}.Fuzz(); },
[&] { PoolResourceFuzzer<8, 8>{provider}.Fuzz(); },
[&] { PoolResourceFuzzer<16, 16>{provider}.Fuzz(); },
[&] { PoolResourceFuzzer<256, alignof(max_align_t)>{provider}.Fuzz(); },
[&] { PoolResourceFuzzer<256, 64>{provider}.Fuzz(); });
}
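The fuzzer derives its requests so that the assertions in `Allocate(size, alignment)` hold by construction: the alignment is `1 << alignment_bits`, and any raw value shifted left by `alignment_bits` is automatically a non-zero multiple of that alignment. A small sketch of those invariants (the helper names are illustrative, not part of the fuzzer):

```cpp
#include <cassert>
#include <cstddef>

// Mirrors the invariants asserted by Allocate(size, alignment) above:
// alignment is a power of two and size is a non-zero multiple of it.
bool ValidRequest(std::size_t size, std::size_t alignment)
{
    if (size == 0 || alignment == 0) return false;
    if ((alignment & (alignment - 1)) != 0) return false; // power of two
    return (size & (alignment - 1)) == 0;                 // multiple of alignment
}

// Mirrors how the fuzzer derives a size: any raw value shifted left by
// alignment_bits is a multiple of (1 << alignment_bits) by construction.
std::size_t DeriveSize(unsigned alignment_bits, std::size_t raw)
{
    return raw << alignment_bits;
}
```

This is why the fuzzer never has to reject a generated request: the encoding cannot produce an invalid (size, alignment) pair.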


@@ -18,6 +18,7 @@
#include <test/util/net.h>
#include <test/util/setup_common.h>
#include <test/util/validation.h>
#include <txorphanage.h>
#include <validationinterface.h>
#include <version.h>
@@ -80,6 +81,8 @@ void fuzz_target(FuzzBufferType buffer, const std::string& LIMIT_TO_MESSAGE_TYPE
SetMockTime(1610000000); // any time to successfully reset ibd
chainstate.ResetIbd();
LOCK(NetEventsInterface::g_msgproc_mutex);
const std::string random_message_type{fuzzed_data_provider.ConsumeBytesAsString(CMessageHeader::COMMAND_SIZE).c_str()};
if (!LIMIT_TO_MESSAGE_TYPE.empty() && random_message_type != LIMIT_TO_MESSAGE_TYPE) {
return;
@@ -98,10 +101,7 @@ void fuzz_target(FuzzBufferType buffer, const std::string& LIMIT_TO_MESSAGE_TYPE
g_setup->m_node.peerman->ProcessMessage(p2p_node, random_message_type, random_bytes_data_stream, GetTime<std::chrono::microseconds>(), std::atomic<bool>{false});
} catch (const std::ios_base::failure& e) {
}
{
LOCK(p2p_node.cs_sendProcessing);
g_setup->m_node.peerman->SendMessages(&p2p_node);
}
g_setup->m_node.peerman->SendMessages(&p2p_node);
SyncWithValidationInterfaceQueue();
LOCK2(::cs_main, g_cs_orphans); // See init.cpp for rationale for implicit locking order requirement
g_setup->m_node.connman->StopNodes();


@@ -13,6 +13,7 @@
#include <test/util/net.h>
#include <test/util/setup_common.h>
#include <test/util/validation.h>
#include <txorphanage.h>
#include <validation.h>
#include <validationinterface.h>
@@ -39,6 +40,8 @@ FUZZ_TARGET_INIT(process_messages, initialize_process_messages)
SetMockTime(1610000000); // any time to successfully reset ibd
chainstate.ResetIbd();
LOCK(NetEventsInterface::g_msgproc_mutex);
std::vector<CNode*> peers;
const auto num_peers_to_add = fuzzed_data_provider.ConsumeIntegralInRange(1, 3);
for (int i = 0; i < num_peers_to_add; ++i) {
@@ -62,17 +65,15 @@ FUZZ_TARGET_INIT(process_messages, initialize_process_messages)
CNode& random_node = *PickValue(fuzzed_data_provider, peers);
(void)connman.ReceiveMsgFrom(random_node, net_msg);
connman.FlushSendBuffer(random_node);
(void)connman.ReceiveMsgFrom(random_node, std::move(net_msg));
random_node.fPauseSend = false;
try {
connman.ProcessMessagesOnce(random_node);
} catch (const std::ios_base::failure&) {
}
{
LOCK(random_node.cs_sendProcessing);
g_setup->m_node.peerman->SendMessages(&random_node);
}
g_setup->m_node.peerman->SendMessages(&random_node);
}
SyncWithValidationInterfaceQueue();
LOCK2(::cs_main, g_cs_orphans); // See init.cpp for rationale for implicit locking order requirement


@@ -390,7 +390,7 @@ auto ConsumeNode(FuzzedDataProvider& fuzzed_data_provider, const std::optional<N
}
inline std::unique_ptr<CNode> ConsumeNodeAsUniquePtr(FuzzedDataProvider& fdp, const std::optional<NodeId>& node_id_in = std::nullopt) { return ConsumeNode<true>(fdp, node_id_in); }
void FillNode(FuzzedDataProvider& fuzzed_data_provider, ConnmanTestMsg& connman, CNode& node) noexcept;
void FillNode(FuzzedDataProvider& fuzzed_data_provider, ConnmanTestMsg& connman, CNode& node) noexcept EXCLUSIVE_LOCKS_REQUIRED(NetEventsInterface::g_msgproc_mutex);
class FuzzedFileProvider
{


@@ -809,6 +809,8 @@ BOOST_AUTO_TEST_CASE(LocalAddress_BasicLifecycle)
BOOST_AUTO_TEST_CASE(initial_advertise_from_version_message)
{
LOCK(NetEventsInterface::g_msgproc_mutex);
// Tests the following scenario:
// * -bind=3.4.5.6:20001 is specified
// * we make an outbound connection to a peer
@@ -893,10 +895,7 @@ BOOST_AUTO_TEST_CASE(initial_advertise_from_version_message)
}
};
{
LOCK(peer.cs_sendProcessing);
m_node.peerman->SendMessages(&peer);
}
m_node.peerman->SendMessages(&peer);
BOOST_CHECK(sent);

src/test/pool_tests.cpp (new file, 189 lines)

@@ -0,0 +1,189 @@
// Copyright (c) 2022 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#include <memusage.h>
#include <support/allocators/pool.h>
#include <test/util/poolresourcetester.h>
#include <test/util/setup_common.h>
#include <boost/test/unit_test.hpp>
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>
BOOST_FIXTURE_TEST_SUITE(pool_tests, BasicTestingSetup)
BOOST_AUTO_TEST_CASE(basic_allocating)
{
auto resource = PoolResource<8, 8>();
PoolResourceTester::CheckAllDataAccountedFor(resource);
// first chunk is already allocated
size_t expected_bytes_available = resource.ChunkSizeBytes();
BOOST_TEST(expected_bytes_available == PoolResourceTester::AvailableMemoryFromChunk(resource));
// chunk is used, no more allocation
void* block = resource.Allocate(8, 8);
expected_bytes_available -= 8;
BOOST_TEST(expected_bytes_available == PoolResourceTester::AvailableMemoryFromChunk(resource));
BOOST_TEST(0 == PoolResourceTester::FreeListSizes(resource)[1]);
resource.Deallocate(block, 8, 8);
PoolResourceTester::CheckAllDataAccountedFor(resource);
BOOST_TEST(1 == PoolResourceTester::FreeListSizes(resource)[1]);
// alignment is too small, but the best fitting freelist is used. Nothing is allocated.
void* b = resource.Allocate(8, 1);
BOOST_TEST(b == block); // we got the same block of memory as before
BOOST_TEST(0 == PoolResourceTester::FreeListSizes(resource)[1]);
BOOST_TEST(expected_bytes_available == PoolResourceTester::AvailableMemoryFromChunk(resource));
resource.Deallocate(block, 8, 1);
PoolResourceTester::CheckAllDataAccountedFor(resource);
BOOST_TEST(1 == PoolResourceTester::FreeListSizes(resource)[1]);
BOOST_TEST(expected_bytes_available == PoolResourceTester::AvailableMemoryFromChunk(resource));
// can't use resource because alignment is too big, allocate system memory
b = resource.Allocate(8, 16);
BOOST_TEST(b != block);
block = b;
PoolResourceTester::CheckAllDataAccountedFor(resource);
BOOST_TEST(1 == PoolResourceTester::FreeListSizes(resource)[1]);
BOOST_TEST(expected_bytes_available == PoolResourceTester::AvailableMemoryFromChunk(resource));
resource.Deallocate(block, 8, 16);
PoolResourceTester::CheckAllDataAccountedFor(resource);
BOOST_TEST(1 == PoolResourceTester::FreeListSizes(resource)[1]);
BOOST_TEST(expected_bytes_available == PoolResourceTester::AvailableMemoryFromChunk(resource));
// can't use chunk because size is too big
block = resource.Allocate(16, 8);
PoolResourceTester::CheckAllDataAccountedFor(resource);
BOOST_TEST(1 == PoolResourceTester::FreeListSizes(resource)[1]);
BOOST_TEST(expected_bytes_available == PoolResourceTester::AvailableMemoryFromChunk(resource));
resource.Deallocate(block, 16, 8);
PoolResourceTester::CheckAllDataAccountedFor(resource);
BOOST_TEST(1 == PoolResourceTester::FreeListSizes(resource)[1]);
BOOST_TEST(expected_bytes_available == PoolResourceTester::AvailableMemoryFromChunk(resource));
// it's possible that 0 bytes are allocated, make sure this works. In that case the call is forwarded to operator new
// 0 bytes takes one entry from the first freelist
void* p = resource.Allocate(0, 1);
BOOST_TEST(0 == PoolResourceTester::FreeListSizes(resource)[1]);
BOOST_TEST(expected_bytes_available == PoolResourceTester::AvailableMemoryFromChunk(resource));
resource.Deallocate(p, 0, 1);
PoolResourceTester::CheckAllDataAccountedFor(resource);
BOOST_TEST(1 == PoolResourceTester::FreeListSizes(resource)[1]);
BOOST_TEST(expected_bytes_available == PoolResourceTester::AvailableMemoryFromChunk(resource));
}
// Allocates from 0 to n bytes, where n > the PoolResource's maximum block size, and each should work
BOOST_AUTO_TEST_CASE(allocate_any_byte)
{
auto resource = PoolResource<128, 8>(1024);
uint8_t num_allocs = 200;
auto data = std::vector<Span<uint8_t>>();
// allocate an increasing number of bytes
for (uint8_t num_bytes = 0; num_bytes < num_allocs; ++num_bytes) {
uint8_t* bytes = new (resource.Allocate(num_bytes, 1)) uint8_t[num_bytes];
BOOST_TEST(bytes != nullptr);
data.emplace_back(bytes, num_bytes);
// set each byte to num_bytes
std::fill(bytes, bytes + num_bytes, num_bytes);
}
// now that everything is allocated, check that each span still holds the correct values, and give everything back to the allocator
uint8_t val = 0;
for (auto const& span : data) {
for (auto x : span) {
BOOST_TEST(val == x);
}
std::destroy(span.data(), span.data() + span.size());
resource.Deallocate(span.data(), span.size(), 1);
++val;
}
PoolResourceTester::CheckAllDataAccountedFor(resource);
}
BOOST_AUTO_TEST_CASE(random_allocations)
{
struct PtrSizeAlignment {
void* ptr;
size_t bytes;
size_t alignment;
};
// makes a bunch of random allocations and gives all of them back in random order.
auto resource = PoolResource<128, 8>(65536);
std::vector<PtrSizeAlignment> ptr_size_alignment{};
for (size_t i = 0; i < 1000; ++i) {
// make it a bit more likely to allocate than deallocate
if (ptr_size_alignment.empty() || 0 != InsecureRandRange(4)) {
// allocate a random item
std::size_t alignment = std::size_t{1} << InsecureRandRange(8); // 1, 2, ..., 128
std::size_t size = (InsecureRandRange(200) / alignment + 1) * alignment; // multiple of alignment
void* ptr = resource.Allocate(size, alignment);
BOOST_TEST(ptr != nullptr);
BOOST_TEST((reinterpret_cast<uintptr_t>(ptr) & (alignment - 1)) == 0);
ptr_size_alignment.push_back({ptr, size, alignment});
} else {
// deallocate a random item
auto& x = ptr_size_alignment[InsecureRandRange(ptr_size_alignment.size())];
resource.Deallocate(x.ptr, x.bytes, x.alignment);
x = ptr_size_alignment.back();
ptr_size_alignment.pop_back();
}
}
// deallocate all the rest
for (auto const& x : ptr_size_alignment) {
resource.Deallocate(x.ptr, x.bytes, x.alignment);
}
PoolResourceTester::CheckAllDataAccountedFor(resource);
}
BOOST_AUTO_TEST_CASE(memusage_test)
{
auto std_map = std::unordered_map<int, int>{};
using Map = std::unordered_map<int,
int,
std::hash<int>,
std::equal_to<int>,
PoolAllocator<std::pair<const int, int>,
sizeof(std::pair<const int, int>) + sizeof(void*) * 4,
alignof(void*)>>;
auto resource = Map::allocator_type::ResourceType(1024);
PoolResourceTester::CheckAllDataAccountedFor(resource);
{
auto resource_map = Map{0, std::hash<int>{}, std::equal_to<int>{}, &resource};
// can't have the same resource usage
BOOST_TEST(memusage::DynamicUsage(std_map) != memusage::DynamicUsage(resource_map));
for (size_t i = 0; i < 10000; ++i) {
std_map[i];
resource_map[i];
}
// Eventually the resource_map should have a much lower memory usage because it has less malloc overhead
BOOST_TEST(memusage::DynamicUsage(resource_map) <= memusage::DynamicUsage(std_map) * 90 / 100);
}
PoolResourceTester::CheckAllDataAccountedFor(resource);
}
BOOST_AUTO_TEST_SUITE_END()
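The `basic_allocating` test above hinges on freelist reuse: a deallocated block is pushed onto its size class's freelist and the next fitting allocation returns the very same pointer. A toy single-size-class sketch of that behavior (deliberately much simpler than `PoolResource`, which keeps one freelist per size class inside pre-allocated chunks):

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Toy single-size-class pool: freed blocks are chained into a freelist and
// reused LIFO. Illustrative only -- not Bitcoin's PoolResource.
class TinyPool
{
    struct FreeNode { FreeNode* next; };
    FreeNode* m_free{nullptr};
    static constexpr std::size_t BLOCK = sizeof(FreeNode) < 16 ? 16 : sizeof(FreeNode);

public:
    ~TinyPool()
    {
        // Release any blocks still parked on the freelist.
        while (m_free) {
            FreeNode* next = m_free->next;
            ::operator delete(m_free);
            m_free = next;
        }
    }
    void* Allocate()
    {
        if (m_free) { // reuse from the freelist first
            void* p = m_free;
            m_free = m_free->next;
            return p;
        }
        return ::operator new(BLOCK);
    }
    void Deallocate(void* p)
    {
        auto* node = static_cast<FreeNode*>(p);
        node->next = m_free; // push onto the freelist; memory is retained, not freed
        m_free = node;
    }
    std::size_t FreeListSize() const
    {
        std::size_t n = 0;
        for (auto* p = m_free; p; p = p->next) ++n;
        return n;
    }
};
```

The LIFO reuse is what makes the `BOOST_TEST(b == block)` check in `basic_allocating` hold: the most recently freed block is the first one handed back out.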


@@ -25,6 +25,7 @@ void ConnmanTestMsg::Handshake(CNode& node,
const CNetMsgMaker mm{0};
peerman.InitializeNode(node, local_services);
FlushSendBuffer(node); // Drop the version message added by InitializeNode.
CSerializedNetMsg msg_version{
mm.Make(NetMsgType::VERSION,
@@ -41,13 +42,11 @@
relay_txs),
};
(void)connman.ReceiveMsgFrom(node, msg_version);
(void)connman.ReceiveMsgFrom(node, std::move(msg_version));
node.fPauseSend = false;
connman.ProcessMessagesOnce(node);
{
LOCK(node.cs_sendProcessing);
peerman.SendMessages(&node);
}
peerman.SendMessages(&node);
FlushSendBuffer(node); // Drop the verack message added by SendMessages.
if (node.fDisconnect) return;
assert(node.nVersion == version);
assert(node.GetCommonVersion() == std::min(version, PROTOCOL_VERSION));
@@ -58,13 +57,10 @@ void ConnmanTestMsg::Handshake(CNode& node,
node.m_permissionFlags = permission_flags;
if (successfully_connected) {
CSerializedNetMsg msg_verack{mm.Make(NetMsgType::VERACK)};
(void)connman.ReceiveMsgFrom(node, msg_verack);
(void)connman.ReceiveMsgFrom(node, std::move(msg_verack));
node.fPauseSend = false;
connman.ProcessMessagesOnce(node);
{
LOCK(node.cs_sendProcessing);
peerman.SendMessages(&node);
}
peerman.SendMessages(&node);
assert(node.fSuccessfullyConnected == true);
}
}
@@ -89,14 +85,29 @@ void ConnmanTestMsg::NodeReceiveMsgBytes(CNode& node, Span<const uint8_t> msg_by
}
}
bool ConnmanTestMsg::ReceiveMsgFrom(CNode& node, CSerializedNetMsg& ser_msg) const
void ConnmanTestMsg::FlushSendBuffer(CNode& node) const
{
std::vector<uint8_t> ser_msg_header;
node.m_serializer->prepareForTransport(ser_msg, ser_msg_header);
LOCK(node.cs_vSend);
node.vSendMsg.clear();
node.m_send_memusage = 0;
while (true) {
const auto& [to_send, _more, _msg_type] = node.m_transport->GetBytesToSend();
if (to_send.empty()) break;
node.m_transport->MarkBytesSent(to_send.size());
}
}
bool complete;
NodeReceiveMsgBytes(node, ser_msg_header, complete);
NodeReceiveMsgBytes(node, ser_msg.data, complete);
bool ConnmanTestMsg::ReceiveMsgFrom(CNode& node, CSerializedNetMsg&& ser_msg) const
{
bool queued = node.m_transport->SetMessageToSend(ser_msg);
assert(queued);
bool complete{false};
while (true) {
const auto& [to_send, _more, _msg_type] = node.m_transport->GetBytesToSend();
if (to_send.empty()) break;
NodeReceiveMsgBytes(node, to_send, complete);
node.m_transport->MarkBytesSent(to_send.size());
}
return complete;
}
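Both `FlushSendBuffer` and the rewritten `ReceiveMsgFrom` use the same pull loop against the bitcoin#28165 transport abstraction: ask for the pending bytes, consume some, and report them as sent, repeating until nothing remains. A hypothetical minimal transport illustrating the loop (`ToyTransport` and `Drain` are inventions for this sketch, not the real `Transport` API):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical minimal sender-side transport: GetBytesToSend() exposes the
// pending wire bytes, MarkBytesSent() consumes them.
class ToyTransport
{
    std::vector<unsigned char> m_pending;

public:
    bool SetMessageToSend(const std::string& payload)
    {
        if (!m_pending.empty()) return false; // one message at a time
        m_pending.assign(payload.begin(), payload.end());
        return true;
    }
    std::vector<unsigned char> GetBytesToSend() const { return m_pending; }
    void MarkBytesSent(std::size_t n)
    {
        m_pending.erase(m_pending.begin(), m_pending.begin() + n);
    }
};

// The drain loop used by FlushSendBuffer/ReceiveMsgFrom above: pull bytes
// until the transport reports nothing left to send.
std::vector<unsigned char> Drain(ToyTransport& t)
{
    std::vector<unsigned char> wire;
    while (true) {
        const auto to_send = t.GetBytesToSend();
        if (to_send.empty()) break;
        wire.insert(wire.end(), to_send.begin(), to_send.end());
        t.MarkBytesSent(to_send.size());
    }
    return wire;
}
```

The point of the abstraction is that callers never serialize messages themselves; they only move whatever bytes the transport says are pending.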


@@ -44,13 +44,15 @@ struct ConnmanTestMsg : public CConnman {
ServiceFlags local_services,
NetPermissionFlags permission_flags,
int32_t version,
bool relay_txs);
bool relay_txs)
EXCLUSIVE_LOCKS_REQUIRED(NetEventsInterface::g_msgproc_mutex);
void ProcessMessagesOnce(CNode& node) { m_msgproc->ProcessMessages(&node, flagInterruptMsgProc); }
void ProcessMessagesOnce(CNode& node) EXCLUSIVE_LOCKS_REQUIRED(NetEventsInterface::g_msgproc_mutex) { m_msgproc->ProcessMessages(&node, flagInterruptMsgProc); }
void NodeReceiveMsgBytes(CNode& node, Span<const uint8_t> msg_bytes, bool& complete) const;
bool ReceiveMsgFrom(CNode& node, CSerializedNetMsg& ser_msg) const;
bool ReceiveMsgFrom(CNode& node, CSerializedNetMsg&& ser_msg) const;
void FlushSendBuffer(CNode& node) const;
};
constexpr ServiceFlags ALL_SERVICE_FLAGS[]{


@@ -0,0 +1,129 @@
// Copyright (c) 2022 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#ifndef BITCOIN_TEST_UTIL_POOLRESOURCETESTER_H
#define BITCOIN_TEST_UTIL_POOLRESOURCETESTER_H
#include <support/allocators/pool.h>
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>
/**
* Helper to get access to private parts of PoolResource. Used in unit tests and in the fuzzer
*/
class PoolResourceTester
{
struct PtrAndBytes {
uintptr_t ptr;
std::size_t size;
PtrAndBytes(const void* p, std::size_t s)
: ptr(reinterpret_cast<uintptr_t>(p)), size(s)
{
}
/**
* defines a sort ordering by the pointer value
*/
friend bool operator<(PtrAndBytes const& a, PtrAndBytes const& b)
{
return a.ptr < b.ptr;
}
};
public:
/**
* Extracts the number of elements per freelist
*/
template <std::size_t MAX_BLOCK_SIZE_BYTES, std::size_t ALIGN_BYTES>
static std::vector<std::size_t> FreeListSizes(const PoolResource<MAX_BLOCK_SIZE_BYTES, ALIGN_BYTES>& resource)
{
auto sizes = std::vector<std::size_t>();
for (const auto* ptr : resource.m_free_lists) {
size_t size = 0;
while (ptr != nullptr) {
++size;
ptr = ptr->m_next;
}
sizes.push_back(size);
}
return sizes;
}
/**
* How many bytes are still available from the last allocated chunk
*/
template <std::size_t MAX_BLOCK_SIZE_BYTES, std::size_t ALIGN_BYTES>
static std::size_t AvailableMemoryFromChunk(const PoolResource<MAX_BLOCK_SIZE_BYTES, ALIGN_BYTES>& resource)
{
return resource.m_available_memory_end - resource.m_available_memory_it;
}
/**
* Once all blocks are given back to the resource, tests that the freelists are consistent:
*
* * All data in the freelists must come from the chunks
* * Memory doesn't overlap
* * Each byte in the chunks can be accounted for in either the freelist or as available bytes.
*/
template <std::size_t MAX_BLOCK_SIZE_BYTES, std::size_t ALIGN_BYTES>
static void CheckAllDataAccountedFor(const PoolResource<MAX_BLOCK_SIZE_BYTES, ALIGN_BYTES>& resource)
{
// collect all free blocks by iterating all freelists
std::vector<PtrAndBytes> free_blocks;
for (std::size_t freelist_idx = 0; freelist_idx < resource.m_free_lists.size(); ++freelist_idx) {
std::size_t bytes = freelist_idx * resource.ELEM_ALIGN_BYTES;
auto* ptr = resource.m_free_lists[freelist_idx];
while (ptr != nullptr) {
free_blocks.emplace_back(ptr, bytes);
ptr = ptr->m_next;
}
}
// also add whatever has not yet been used for blocks
auto num_available_bytes = resource.m_available_memory_end - resource.m_available_memory_it;
if (num_available_bytes > 0) {
free_blocks.emplace_back(resource.m_available_memory_it, num_available_bytes);
}
// collect all chunks
std::vector<PtrAndBytes> chunks;
for (const std::byte* ptr : resource.m_allocated_chunks) {
chunks.emplace_back(ptr, resource.ChunkSizeBytes());
}
// now we have all the data from all freelists on one side, and all chunks on the other.
// To check that they match, sort both by address and walk them together.
std::sort(free_blocks.begin(), free_blocks.end());
std::sort(chunks.begin(), chunks.end());
auto chunk_it = chunks.begin();
auto chunk_ptr_remaining = chunk_it->ptr;
auto chunk_size_remaining = chunk_it->size;
for (const auto& free_block : free_blocks) {
if (chunk_size_remaining == 0) {
assert(chunk_it != chunks.end());
++chunk_it;
assert(chunk_it != chunks.end());
chunk_ptr_remaining = chunk_it->ptr;
chunk_size_remaining = chunk_it->size;
}
assert(free_block.ptr == chunk_ptr_remaining); // ensure addresses match
assert(free_block.size <= chunk_size_remaining); // ensure no overflow
assert((free_block.ptr & (resource.ELEM_ALIGN_BYTES - 1)) == 0); // ensure correct alignment
chunk_ptr_remaining += free_block.size;
chunk_size_remaining -= free_block.size;
}
// ensure we are at the end of the chunks
assert(chunk_ptr_remaining == chunk_it->ptr + chunk_it->size);
++chunk_it;
assert(chunk_it == chunks.end());
assert(chunk_size_remaining == 0);
}
};
#endif // BITCOIN_TEST_UTIL_POOLRESOURCETESTER_H
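`CheckAllDataAccountedFor` verifies that, once everything is freed, the sorted free blocks exactly tile the sorted chunks: no gaps, no overlaps, no bytes unaccounted for. A simplified standalone version of that walk over plain (pointer, size) records (the real code additionally checks alignment and uses asserts instead of returning a bool):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Block {
    std::uintptr_t ptr;
    std::size_t size;
};

// Returns true when the free blocks, sorted by address, exactly cover the
// sorted chunks with no gap or overlap.
bool BlocksTileChunks(std::vector<Block> free_blocks, std::vector<Block> chunks)
{
    auto by_ptr = [](const Block& a, const Block& b) { return a.ptr < b.ptr; };
    std::sort(free_blocks.begin(), free_blocks.end(), by_ptr);
    std::sort(chunks.begin(), chunks.end(), by_ptr);

    auto chunk_it = chunks.begin();
    if (chunk_it == chunks.end()) return free_blocks.empty();
    std::uintptr_t pos = chunk_it->ptr;     // next address we expect a block at
    std::size_t remaining = chunk_it->size; // bytes of the current chunk not yet covered

    for (const auto& b : free_blocks) {
        if (remaining == 0) { // current chunk fully covered: advance to the next
            if (++chunk_it == chunks.end()) return false;
            pos = chunk_it->ptr;
            remaining = chunk_it->size;
        }
        if (b.ptr != pos || b.size > remaining) return false; // gap or overflow
        pos += b.size;
        remaining -= b.size;
    }
    return remaining == 0 && ++chunk_it == chunks.end();
}
```

Sorting first is what makes the check linear: each free block either continues the current chunk or opens the next one.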


@@ -56,12 +56,12 @@ BOOST_AUTO_TEST_CASE(getcoinscachesizestate)
BOOST_TEST_MESSAGE("CCoinsViewCache memory usage: " << view.DynamicMemoryUsage());
};
constexpr size_t MAX_COINS_CACHE_BYTES = 1024;
// PoolResource defaults to 256 KiB that will be allocated, so we'll take that and make it a bit larger.
constexpr size_t MAX_COINS_CACHE_BYTES = 262144 + 512;
// Without any coins in the cache, we shouldn't need to flush.
BOOST_CHECK_EQUAL(
chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, /*max_mempool_size_bytes*/ 0),
CoinsCacheSizeState::OK);
BOOST_TEST(
chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, /*max_mempool_size_bytes=*/ 0) != CoinsCacheSizeState::CRITICAL);
// If the initial memory allocations of cacheCoins don't match these common
// cases, we can't really continue to make assertions about memory usage.
@@ -91,13 +91,21 @@ BOOST_AUTO_TEST_CASE(getcoinscachesizestate)
// cacheCoins (unordered_map) preallocates.
constexpr int COINS_UNTIL_CRITICAL{3};
// no coin added, so we have plenty of space left.
BOOST_CHECK_EQUAL(
chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, /*max_mempool_size_bytes*/ 0),
CoinsCacheSizeState::OK);
for (int i{0}; i < COINS_UNTIL_CRITICAL; ++i) {
COutPoint res = add_coin(view);
print_view_mem_usage(view);
BOOST_CHECK_EQUAL(view.AccessCoin(res).DynamicMemoryUsage(), COIN_SIZE);
// adding first coin causes the MemoryResource to allocate one 256 KiB chunk of memory,
// pushing us immediately over to LARGE
BOOST_CHECK_EQUAL(
chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, /*max_mempool_size_bytes*/ 0),
CoinsCacheSizeState::OK);
chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, /*max_mempool_size_bytes=*/ 0),
CoinsCacheSizeState::LARGE);
}
// Adding some additional coins will push us over the edge to CRITICAL.
@@ -114,16 +122,16 @@ BOOST_AUTO_TEST_CASE(getcoinscachesizestate)
chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, /*max_mempool_size_bytes*/ 0),
CoinsCacheSizeState::CRITICAL);
// Passing non-zero max mempool usage should allow us more headroom.
// Passing non-zero max mempool usage (512 KiB) should allow us more headroom.
BOOST_CHECK_EQUAL(
chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, /*max_mempool_size_bytes*/ 1 << 10),
chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, /*max_mempool_size_bytes=*/ 1 << 19),
CoinsCacheSizeState::OK);
for (int i{0}; i < 3; ++i) {
add_coin(view);
print_view_mem_usage(view);
BOOST_CHECK_EQUAL(
chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, /*max_mempool_size_bytes*/ 1 << 10),
chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, /*max_mempool_size_bytes=*/ 1 << 19),
CoinsCacheSizeState::OK);
}
@@ -139,7 +147,7 @@ BOOST_AUTO_TEST_CASE(getcoinscachesizestate)
BOOST_CHECK(usage_percentage >= 0.9);
BOOST_CHECK(usage_percentage < 1);
BOOST_CHECK_EQUAL(
chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, 1 << 10),
chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, /*max_mempool_size_bytes*/ 1 << 10), // 1024
CoinsCacheSizeState::LARGE);
}
@@ -151,8 +159,7 @@ BOOST_AUTO_TEST_CASE(getcoinscachesizestate)
CoinsCacheSizeState::OK);
}
// Flushing the view doesn't take us back to OK because cacheCoins has
// preallocated memory that doesn't get reclaimed even after flush.
// Flushing the view does take us back to OK because ReallocateCache() is called
BOOST_CHECK_EQUAL(
chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, 0),
@@ -164,7 +171,7 @@ BOOST_AUTO_TEST_CASE(getcoinscachesizestate)
BOOST_CHECK_EQUAL(
chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, 0),
CoinsCacheSizeState::CRITICAL);
CoinsCacheSizeState::OK);
}
BOOST_AUTO_TEST_SUITE_END()

src/txorphanage.cpp (new file, 227 lines)

@@ -0,0 +1,227 @@
// Copyright (c) 2021 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#include <txorphanage.h>
#include <consensus/validation.h>
#include <logging.h>
#include <policy/policy.h>
#include <statsd_client.h>
#include <cassert>
/** Expiration time for orphan transactions in seconds */
static constexpr int64_t ORPHAN_TX_EXPIRE_TIME = 20 * 60;
/** Minimum time between orphan transactions expire time checks in seconds */
static constexpr int64_t ORPHAN_TX_EXPIRE_INTERVAL = 5 * 60;
RecursiveMutex g_cs_orphans;
bool TxOrphanage::AddTx(const CTransactionRef& tx, NodeId peer)
{
AssertLockHeld(g_cs_orphans);
const uint256& hash = tx->GetHash();
if (m_orphans.count(hash))
return false;
// Ignore big transactions, to avoid a
// send-big-orphans memory exhaustion attack. If a peer has a legitimate
// large transaction with a missing parent then we assume
// it will rebroadcast it later, after the parent transaction(s)
// have been mined or received.
// 100 orphans, each of which is at most 99,999 bytes big is
// at most 10 megabytes of orphans and somewhat more for the by-prev index (in the worst case):
unsigned int sz = GetSerializeSize(*tx, CTransaction::CURRENT_VERSION);
if (sz > MAX_STANDARD_TX_SIZE)
{
LogPrint(BCLog::MEMPOOL, "ignoring large orphan tx (size: %u, hash: %s)\n", sz, hash.ToString());
return false;
}
auto ret = m_orphans.emplace(hash, OrphanTx{tx, peer, GetTime() + ORPHAN_TX_EXPIRE_TIME, m_orphan_list.size(), sz});
assert(ret.second);
m_orphan_list.push_back(ret.first);
for (const CTxIn& txin : tx->vin) {
m_outpoint_to_orphan_it[txin.prevout].insert(ret.first);
}
m_orphan_tx_size += sz;
LogPrint(BCLog::MEMPOOL, "stored orphan tx %s (mapsz %u outsz %u)\n", hash.ToString(),
m_orphans.size(), m_outpoint_to_orphan_it.size());
statsClient.inc("transactions.orphans.add", 1.0f);
statsClient.gauge("transactions.orphans", m_orphans.size());
return true;
}

int TxOrphanage::EraseTx(const uint256& txid)
{
    AssertLockHeld(g_cs_orphans);
    std::map<uint256, OrphanTx>::iterator it = m_orphans.find(txid);
    if (it == m_orphans.end())
        return 0;
    for (const CTxIn& txin : it->second.tx->vin)
    {
        auto itPrev = m_outpoint_to_orphan_it.find(txin.prevout);
        if (itPrev == m_outpoint_to_orphan_it.end())
            continue;
        itPrev->second.erase(it);
        if (itPrev->second.empty())
            m_outpoint_to_orphan_it.erase(itPrev);
    }

    size_t old_pos = it->second.list_pos;
    assert(m_orphan_list[old_pos] == it);
    if (old_pos + 1 != m_orphan_list.size()) {
        // Unless we're deleting the last entry in m_orphan_list, move the last
        // entry to the position we're deleting.
        auto it_last = m_orphan_list.back();
        m_orphan_list[old_pos] = it_last;
        it_last->second.list_pos = old_pos;
    }
    m_orphan_list.pop_back();

    assert(m_orphan_tx_size >= it->second.nTxSize);
    m_orphan_tx_size -= it->second.nTxSize;
    m_orphans.erase(it);

    statsClient.inc("transactions.orphans.remove", 1.0f);
    statsClient.gauge("transactions.orphans", m_orphans.size());
    return 1;
}

void TxOrphanage::EraseForPeer(NodeId peer)
{
    AssertLockHeld(g_cs_orphans);

    int nErased = 0;
    std::map<uint256, OrphanTx>::iterator iter = m_orphans.begin();
    while (iter != m_orphans.end())
    {
        std::map<uint256, OrphanTx>::iterator maybeErase = iter++; // increment to avoid iterator becoming invalid
        if (maybeErase->second.fromPeer == peer)
        {
            nErased += EraseTx(maybeErase->second.tx->GetHash());
        }
    }
    if (nErased > 0) LogPrint(BCLog::MEMPOOL, "Erased %d orphan tx from peer=%d\n", nErased, peer);
}

unsigned int TxOrphanage::LimitOrphans(unsigned int max_orphans_size)
{
    AssertLockHeld(g_cs_orphans);

    unsigned int nEvicted = 0;
    static int64_t nNextSweep;
    int64_t nNow = GetTime();
    if (nNextSweep <= nNow) {
        // Sweep out expired orphan pool entries:
        int nErased = 0;
        int64_t nMinExpTime = nNow + ORPHAN_TX_EXPIRE_TIME - ORPHAN_TX_EXPIRE_INTERVAL;
        std::map<uint256, OrphanTx>::iterator iter = m_orphans.begin();
        while (iter != m_orphans.end())
        {
            std::map<uint256, OrphanTx>::iterator maybeErase = iter++;
            if (maybeErase->second.nTimeExpire <= nNow) {
                nErased += EraseTx(maybeErase->second.tx->GetHash());
            } else {
                nMinExpTime = std::min(maybeErase->second.nTimeExpire, nMinExpTime);
            }
        }
        // Sweep again 5 minutes after the next entry that expires in order to batch the linear scan.
        nNextSweep = nMinExpTime + ORPHAN_TX_EXPIRE_INTERVAL;
        if (nErased > 0) LogPrint(BCLog::MEMPOOL, "Erased %d orphan tx due to expiration\n", nErased);
    }
    FastRandomContext rng;
    while (!m_orphans.empty() && m_orphan_tx_size > max_orphans_size)
    {
        // Evict a random orphan:
        size_t randompos = rng.randrange(m_orphan_list.size());
        EraseTx(m_orphan_list[randompos]->first);
        ++nEvicted;
    }
    return nEvicted;
}

void TxOrphanage::AddChildrenToWorkSet(const CTransaction& tx, std::set<uint256>& orphan_work_set) const
{
    AssertLockHeld(g_cs_orphans);
    for (unsigned int i = 0; i < tx.vout.size(); i++) {
        const auto it_by_prev = m_outpoint_to_orphan_it.find(COutPoint(tx.GetHash(), i));
        if (it_by_prev != m_outpoint_to_orphan_it.end()) {
            for (const auto& elem : it_by_prev->second) {
                orphan_work_set.insert(elem->first);
            }
        }
    }
}

bool TxOrphanage::HaveTx(const uint256& txid) const
{
    LOCK(g_cs_orphans);
    return m_orphans.count(txid);
}

std::pair<CTransactionRef, NodeId> TxOrphanage::GetTx(const uint256& txid) const
{
    AssertLockHeld(g_cs_orphans);

    const auto it = m_orphans.find(txid);
    if (it == m_orphans.end()) return {nullptr, -1};
    return {it->second.tx, it->second.fromPeer};
}

std::set<uint256> TxOrphanage::GetCandidatesForBlock(const CBlock& block)
{
    AssertLockHeld(g_cs_orphans);

    std::set<uint256> orphanWorkSet;
    for (const CTransactionRef& ptx : block.vtx) {
        const CTransaction& tx = *ptx;
        // Which orphan pool entries should we reprocess and potentially try to accept into the mempool again?
        for (size_t i = 0; i < tx.vin.size(); i++) {
            auto itByPrev = m_outpoint_to_orphan_it.find(COutPoint(tx.GetHash(), (uint32_t)i));
            if (itByPrev == m_outpoint_to_orphan_it.end()) continue;
            for (const auto& elem : itByPrev->second) {
                orphanWorkSet.insert(elem->first);
            }
        }
    }
    return orphanWorkSet;
}

void TxOrphanage::EraseForBlock(const CBlock& block)
{
    AssertLockHeld(g_cs_orphans);

    std::vector<uint256> vOrphanErase;

    for (const CTransactionRef& ptx : block.vtx) {
        const CTransaction& tx = *ptx;

        // Which orphan pool entries must we evict?
        for (const auto& txin : tx.vin) {
            auto itByPrev = m_outpoint_to_orphan_it.find(txin.prevout);
            if (itByPrev == m_outpoint_to_orphan_it.end()) continue;
            for (auto mi = itByPrev->second.begin(); mi != itByPrev->second.end(); ++mi) {
                const CTransaction& orphanTx = *(*mi)->second.tx;
                const uint256& orphanHash = orphanTx.GetHash();
                vOrphanErase.push_back(orphanHash);
            }
        }
    }

    // Erase orphan transactions included or precluded by this block
    if (vOrphanErase.size()) {
        int nErased = 0;
        for (const uint256& orphanHash : vOrphanErase) {
            nErased += EraseTx(orphanHash);
        }
        LogPrint(BCLog::MEMPOOL, "Erased %d orphan tx included or conflicted by block\n", nErased);
    }
}
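`GetCandidatesForBlock` and `EraseForBlock` both lean on `m_outpoint_to_orphan_it`, the reverse index from a spent outpoint to every orphan spending it. A toy Python version of that index (illustrative names; outpoints modeled as `(txid, n)` tuples):

```python
def build_outpoint_index(orphans: dict) -> dict:
    """Reverse index from spent outpoint -> set of orphan txids.

    `orphans` maps an orphan txid to the list of outpoints it spends,
    mirroring how AddTx populates m_outpoint_to_orphan_it from tx->vin.
    """
    index = {}
    for txid, inputs in orphans.items():
        for outpoint in inputs:
            index.setdefault(outpoint, set()).add(txid)
    return index

def orphans_spending(index: dict, block_txids: list, outputs_per_tx: int) -> set:
    # Which orphans spend an output created by one of these transactions?
    hits = set()
    for txid in block_txids:
        for vout in range(outputs_per_tx):
            hits |= index.get((txid, vout), set())
    return hits

orphans = {"child1": [("parent", 0)], "child2": [("parent", 1), ("other", 0)]}
index = build_outpoint_index(orphans)
assert orphans_spending(index, ["parent"], 2) == {"child1", "child2"}
```

The index makes both block-driven walks proportional to the block's inputs/outputs rather than to the whole orphan pool.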

src/txorphanage.h (new file)
@@ -0,0 +1,88 @@
// Copyright (c) 2021 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.

#ifndef BITCOIN_TXORPHANAGE_H
#define BITCOIN_TXORPHANAGE_H

#include <net.h>
#include <primitives/block.h>
#include <primitives/transaction.h>
#include <sync.h>

/** Guards orphan transactions and extra txs for compact blocks */
extern RecursiveMutex g_cs_orphans;

/** A class to track orphan transactions (failed on TX_MISSING_INPUTS)
 * Since we cannot distinguish orphans from bad transactions with
 * non-existent inputs, we heavily limit the number of orphans
 * we keep and the duration we keep them for.
 */
class TxOrphanage {
public:
    /** Add a new orphan transaction */
    bool AddTx(const CTransactionRef& tx, NodeId peer) EXCLUSIVE_LOCKS_REQUIRED(g_cs_orphans);

    /** Check if we already have an orphan transaction */
    bool HaveTx(const uint256& txid) const LOCKS_EXCLUDED(::g_cs_orphans);

    /** Get an orphan transaction and its originating peer
     * (Transaction ref will be nullptr if not found)
     */
    std::pair<CTransactionRef, NodeId> GetTx(const uint256& txid) const EXCLUSIVE_LOCKS_REQUIRED(g_cs_orphans);

    /** Get a set of orphan transactions that can be candidates for reconsideration into the mempool */
    std::set<uint256> GetCandidatesForBlock(const CBlock& block) EXCLUSIVE_LOCKS_REQUIRED(g_cs_orphans);

    /** Erase an orphan by txid */
    int EraseTx(const uint256& txid) EXCLUSIVE_LOCKS_REQUIRED(g_cs_orphans);

    /** Erase all orphans announced by a peer (eg, after that peer disconnects) */
    void EraseForPeer(NodeId peer) EXCLUSIVE_LOCKS_REQUIRED(g_cs_orphans);

    /** Erase all orphans included in or invalidated by a new block */
    void EraseForBlock(const CBlock& block) EXCLUSIVE_LOCKS_REQUIRED(g_cs_orphans);

    /** Limit the orphanage to the given maximum */
    unsigned int LimitOrphans(unsigned int max_orphans_size) EXCLUSIVE_LOCKS_REQUIRED(g_cs_orphans);

    /** Add any orphans that list a particular tx as a parent into a peer's work set
     * (ie orphans that may have found their final missing parent, and so should be reconsidered for the mempool) */
    void AddChildrenToWorkSet(const CTransaction& tx, std::set<uint256>& orphan_work_set) const EXCLUSIVE_LOCKS_REQUIRED(g_cs_orphans);

protected:
    struct OrphanTx {
        CTransactionRef tx;
        NodeId fromPeer;
        int64_t nTimeExpire;
        size_t list_pos;
        size_t nTxSize;
    };

    /** Map from txid to orphan transaction record. Limited by
     * -maxorphantx/DEFAULT_MAX_ORPHAN_TRANSACTIONS */
    std::map<uint256, OrphanTx> m_orphans GUARDED_BY(g_cs_orphans);

    using OrphanMap = decltype(m_orphans);

    struct IteratorComparator
    {
        template<typename I>
        bool operator()(const I& a, const I& b) const
        {
            return &(*a) < &(*b);
        }
    };

    /** Index from the parents' COutPoint into the m_orphans. Used
     * to remove orphan transactions from the m_orphans */
    std::map<COutPoint, std::set<OrphanMap::iterator, IteratorComparator>> m_outpoint_to_orphan_it GUARDED_BY(g_cs_orphans);

    /** Orphan transactions in vector for quick random eviction */
    std::vector<OrphanMap::iterator> m_orphan_list GUARDED_BY(g_cs_orphans);

    /** Cumulative size of all transactions in the orphan map */
    size_t m_orphan_tx_size{0};
};

#endif // BITCOIN_TXORPHANAGE_H

@@ -4923,7 +4923,6 @@ bool CChainState::ResizeCoinsCaches(size_t coinstip_size, size_t coinsdb_size)
     } else {
         // Otherwise, flush state to disk and deallocate the in-memory coins map.
         ret = FlushStateToDisk(state, FlushStateMode::ALWAYS);
-        CoinsTip().ReallocateCache();
     }
     return ret;
 }

@@ -19,7 +19,13 @@ from test_framework.messages import (
     msg_mempool,
     msg_version,
 )
-from test_framework.p2p import P2PInterface, p2p_lock
+from test_framework.p2p import (
+    P2PInterface,
+    P2P_SERVICES,
+    P2P_SUBVERSION,
+    P2P_VERSION,
+    p2p_lock,
+)
 from test_framework.script import MAX_SCRIPT_ELEMENT_SIZE
 from test_framework.test_framework import BitcoinTestFramework
@@ -215,9 +221,12 @@ class FilterTest(BitcoinTestFramework):
         self.log.info('Test BIP 37 for a node with fRelay = False')
         # Add peer but do not send version yet
         filter_peer_without_nrelay = self.nodes[0].add_p2p_connection(P2PBloomFilter(), send_version=False, wait_for_verack=False)
-        # Send version with fRelay=False
+        # Send version with relay=False
         version_without_fRelay = msg_version()
-        version_without_fRelay.nRelay = 0
+        version_without_fRelay.nVersion = P2P_VERSION
+        version_without_fRelay.strSubVer = P2P_SUBVERSION
+        version_without_fRelay.nServices = P2P_SERVICES
+        version_without_fRelay.relay = 0
         filter_peer_without_nrelay.send_message(version_without_fRelay)
         filter_peer_without_nrelay.wait_for_verack()
         assert not self.nodes[0].getpeerinfo()[0]['relaytxes']

p2p_ibd_txrelay.py (new file)
@@ -0,0 +1,78 @@
#!/usr/bin/env python3
# Copyright (c) 2020 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test transaction relay behavior during IBD:
- Don't request transactions
- Ignore all transaction messages
"""

from decimal import Decimal
import time

from test_framework.messages import (
    CInv,
    COIN,
    CTransaction,
    from_hex,
    msg_inv,
    msg_tx,
    MSG_TX,
)
from test_framework.p2p import (
    NONPREF_PEER_TX_DELAY,
    P2PDataStore,
    P2PInterface,
    p2p_lock
)
from test_framework.test_framework import BitcoinTestFramework

NORMAL_FEE_FILTER = Decimal(100) / COIN


class P2PIBDTxRelayTest(BitcoinTestFramework):
    def set_test_params(self):
        self.setup_clean_chain = True
        self.disable_mocktime = True
        self.num_nodes = 2
        self.extra_args = [
            ["-minrelaytxfee={}".format(NORMAL_FEE_FILTER)],
            ["-minrelaytxfee={}".format(NORMAL_FEE_FILTER)],
        ]

    def run_test(self):
        self.log.info("Check that nodes don't send getdatas for transactions while still in IBD")
        peer_inver = self.nodes[0].add_p2p_connection(P2PDataStore())
        txid = 0xdeadbeef
        peer_inver.send_and_ping(msg_inv([CInv(t=MSG_TX, h=txid)]))
        # The node should not send a getdata, but if it did, it would first delay 2 seconds
        self.nodes[0].setmocktime(int(time.time() + NONPREF_PEER_TX_DELAY))
        peer_inver.sync_send_with_ping()
        with p2p_lock:
            assert txid not in peer_inver.getdata_requests
        self.nodes[0].disconnect_p2ps()

        self.log.info("Check that nodes don't process unsolicited transactions while still in IBD")
        # A transaction hex pulled from tx_valid.json. There are no valid transactions since no UTXOs
        # exist yet, but it should be a well-formed transaction.
        rawhex = "0100000001b14bdcbc3e01bdaad36cc08e81e69c82e1060bc14e518db2b49aa43ad90ba260000000004a01ff473" + \
                 "04402203f16c6f40162ab686621ef3000b04e75418a0c0cb2d8aebeac894ae360ac1e780220ddc15ecdfc3507ac48e168" + \
                 "1a33eb60996631bf6bf5bc0a0682c4db743ce7ca2b01ffffffff0140420f00000000001976a914660d4ef3a743e3e696a" + \
                 "d990364e555c271ad504b88ac00000000"
        assert self.nodes[1].decoderawtransaction(rawhex)  # returns a dict, should not throw
        tx = from_hex(CTransaction(), rawhex)
        peer_txer = self.nodes[0].add_p2p_connection(P2PInterface())
        with self.nodes[0].assert_debug_log(expected_msgs=["received: tx"], unexpected_msgs=["was not accepted"]):
            peer_txer.send_and_ping(msg_tx(tx))
        self.nodes[0].disconnect_p2ps()

        # Come out of IBD by generating a block
        self.nodes[0].generate(1)
        self.sync_all()

        self.log.info("Check that nodes process the same transaction, even when unsolicited, when no longer in IBD")
        peer_txer = self.nodes[0].add_p2p_connection(P2PInterface())
        with self.nodes[0].assert_debug_log(expected_msgs=["was not accepted"]):
            peer_txer.send_and_ping(msg_tx(tx))


if __name__ == '__main__':
    P2PIBDTxRelayTest().main()
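The behaviour this test pins down reduces to a simple gate: while the node is in IBD, tx invs never become getdata requests and unsolicited transactions are dropped without validation. A toy model of that gate (hypothetical names, not framework code):

```python
def handle_tx_inv(in_ibd: bool, requested: set, txid: int) -> None:
    # During initial block download, ignore transaction announcements
    # entirely instead of scheduling a getdata.
    if in_ibd:
        return
    requested.add(txid)

reqs = set()
handle_tx_inv(True, reqs, 0xdeadbeef)
assert not reqs                        # no getdata while in IBD
handle_tx_inv(False, reqs, 0xdeadbeef)
assert reqs == {0xdeadbeef}            # requested once IBD is over
```

The mocktime bump past `NONPREF_PEER_TX_DELAY` in the test exists precisely to prove the negative: even after the request delay for non-preferred peers elapses, no getdata ever appears.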

@@ -194,7 +194,7 @@ class InvalidTxRequestTest(BitcoinTestFramework):
             for j in range(110):
                 orphan_tx_pool[i].vout.append(CTxOut(nValue=COIN // 10, scriptPubKey=SCRIPT_PUB_KEY_OP_TRUE))

-        with node.assert_debug_log(['mapOrphan overflow, removed 1 tx']):
+        with node.assert_debug_log(['orphanage overflow, removed 1 tx']):
             node.p2ps[0].send_txs_and_test(orphan_tx_pool, node, success=False)

         rejected_parent = CTransaction()

@@ -18,7 +18,12 @@ from test_framework.messages import (
     msg_ping,
     msg_version,
 )
-from test_framework.p2p import P2PInterface
+from test_framework.p2p import (
+    P2PInterface,
+    P2P_SUBVERSION,
+    P2P_SERVICES,
+    P2P_VERSION_RELAY,
+)
 from test_framework.test_framework import BitcoinTestFramework
 from test_framework.util import (
     assert_equal,
@@ -126,12 +131,15 @@ class P2PLeakTest(BitcoinTestFramework):
         assert_equal(ver.addrFrom.port, 0)
         assert_equal(ver.addrFrom.ip, '0.0.0.0')
         assert_equal(ver.nStartingHeight, 201)
-        assert_equal(ver.nRelay, 1)
+        assert_equal(ver.relay, 1)

         self.log.info('Check that old peers are disconnected')
         p2p_old_peer = self.nodes[0].add_p2p_connection(P2PInterface(), send_version=False, wait_for_verack=False)
         old_version_msg = msg_version()
         old_version_msg.nVersion = 31799
+        old_version_msg.strSubVer = P2P_SUBVERSION
+        old_version_msg.nServices = P2P_SERVICES
+        old_version_msg.relay = P2P_VERSION_RELAY
         with self.nodes[0].assert_debug_log(['peer=3 using obsolete version 31799; disconnecting']):
             p2p_old_peer.send_message(old_version_msg)
             p2p_old_peer.wait_for_disconnect()

@@ -31,11 +31,6 @@ from test_framework.util import assert_equal

 import dash_hash

-MIN_VERSION_SUPPORTED = 60001
-MY_VERSION = 70231  # NO_LEGACY_ISLOCK_PROTO_VERSION
-MY_SUBVERSION = "/python-p2p-tester:0.0.3%s/"
-MY_RELAY = 1  # from version 70001 onwards, fRelay should be appended to version messages (BIP37)
-
 MAX_LOCATOR_SZ = 101
 MAX_BLOCK_SIZE = 2000000
 MAX_BLOOM_FILTER_SIZE = 36000
@@ -383,22 +378,20 @@ class CBlockLocator:
     __slots__ = ("nVersion", "vHave")

     def __init__(self):
-        self.nVersion = MY_VERSION
         self.vHave = []

     def deserialize(self, f):
-        self.nVersion = struct.unpack("<i", f.read(4))[0]
+        struct.unpack("<i", f.read(4))[0]  # Ignore version field.
         self.vHave = deser_uint256_vector(f)

     def serialize(self):
         r = b""
-        r += struct.pack("<i", self.nVersion)
+        r += struct.pack("<i", 0)  # Bitcoin Core ignores version field. Set it to 0.
         r += ser_uint256_vector(self.vHave)
         return r

     def __repr__(self):
-        return "CBlockLocator(nVersion=%i vHave=%s)" \
-            % (self.nVersion, repr(self.vHave))
+        return "CBlockLocator(vHave=%s)" % (repr(self.vHave))


 class COutPoint:
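The locator change above boils down to wire behaviour: write a zero placeholder for the version field, and read-and-discard it on the way in. A self-contained sketch of that round trip (simplified to a single-byte count instead of Bitcoin's CompactSize encoding):

```python
import struct

def ser_locator(hashes):
    # The version field is ignored by peers; serialize a 0 placeholder.
    r = struct.pack("<i", 0)
    r += struct.pack("<B", len(hashes))  # simplified count, assumes < 253 entries
    for h in hashes:
        r += h.to_bytes(32, "little")
    return r

def deser_locator(data):
    struct.unpack_from("<i", data, 0)  # read and discard the version field
    count = data[4]
    return [int.from_bytes(data[5 + 32 * i:5 + 32 * (i + 1)], "little")
            for i in range(count)]

assert deser_locator(ser_locator([1, 2, 3])) == [1, 2, 3]
```

Keeping the 4 bytes on the wire but ignoring their value preserves compatibility with peers that still serialize a client version there.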
@@ -1534,20 +1527,20 @@ class CBLSIESEncryptedSecretKey:
 # Objects that correspond to messages on the wire

 class msg_version:
-    __slots__ = ("addrFrom", "addrTo", "nNonce", "nRelay", "nServices",
+    __slots__ = ("addrFrom", "addrTo", "nNonce", "relay", "nServices",
                  "nStartingHeight", "nTime", "nVersion", "strSubVer")
     msgtype = b"version"

     def __init__(self):
-        self.nVersion = MY_VERSION
-        self.nServices = 1
+        self.nVersion = 0
+        self.nServices = 0
         self.nTime = int(time.time())
         self.addrTo = CAddress()
         self.addrFrom = CAddress()
         self.nNonce = random.getrandbits(64)
-        self.strSubVer = MY_SUBVERSION % ""
+        self.strSubVer = ''
         self.nStartingHeight = -1
-        self.nRelay = MY_RELAY
+        self.relay = 0

     def deserialize(self, f):
         self.nVersion = struct.unpack("<i", f.read(4))[0]
@@ -1566,9 +1559,9 @@ class msg_version:
         # Relay field is optional for version 70001 onwards
         # But, unconditionally check it to match behaviour in bitcoind
         try:
-            self.nRelay = struct.unpack("<b", f.read(1))[0]
+            self.relay = struct.unpack("<b", f.read(1))[0]
         except struct.error:
-            self.nRelay = 0
+            self.relay = 0

     def serialize(self):
         r = b""
@@ -1580,14 +1573,14 @@ class msg_version:
         r += struct.pack("<Q", self.nNonce)
         r += ser_string(self.strSubVer.encode('utf-8'))
         r += struct.pack("<i", self.nStartingHeight)
-        r += struct.pack("<b", self.nRelay)
+        r += struct.pack("<b", self.relay)
         return r

     def __repr__(self):
-        return 'msg_version(nVersion=%i nServices=%i nTime=%s addrTo=%s addrFrom=%s nNonce=0x%016X strSubVer=%s nStartingHeight=%i nRelay=%i)' \
+        return 'msg_version(nVersion=%i nServices=%i nTime=%s addrTo=%s addrFrom=%s nNonce=0x%016X strSubVer=%s nStartingHeight=%i relay=%i)' \
             % (self.nVersion, self.nServices, time.ctime(self.nTime),
                repr(self.addrTo), repr(self.addrFrom), self.nNonce,
-               self.strSubVer, self.nStartingHeight, self.nRelay)
+               self.strSubVer, self.nStartingHeight, self.relay)


 class msg_verack:

@@ -32,7 +32,6 @@ from test_framework.messages import (
     CBlockHeader,
     CompressibleBlockHeader,
     MAX_HEADERS_RESULTS,
-    MIN_VERSION_SUPPORTED,
     NODE_HEADERS_COMPRESSED,
     msg_addr,
     msg_addrv2,
@@ -75,7 +74,6 @@ from test_framework.messages import (
     msg_tx,
     msg_verack,
     msg_version,
-    MY_SUBVERSION,
     MSG_BLOCK,
     MSG_TX,
     MSG_TYPE_MASK,
@@ -90,6 +88,20 @@ from test_framework.util import (

 logger = logging.getLogger("TestFramework.p2p")

+# The minimum P2P version that this test framework supports
+MIN_P2P_VERSION_SUPPORTED = 60001
+# The P2P version that this test framework implements and sends in its `version` message
+# Version 70231 drops support for legacy InstantSend locks
+P2P_VERSION = 70231
+# The services that this test framework offers in its `version` message
+P2P_SERVICES = NODE_NETWORK | NODE_HEADERS_COMPRESSED
+# The P2P user agent string that this test framework sends in its `version` message
+P2P_SUBVERSION = "/python-p2p-tester:0.0.3%s/"
+# Value for relay that this test framework sends in its `version` message
+P2P_VERSION_RELAY = 1
+# Delay after receiving a tx inv before requesting transactions from non-preferred peers, in seconds
+NONPREF_PEER_TX_DELAY = 2
+
 MESSAGEMAP = {
     b"addr": msg_addr,
     b"addrv2": msg_addrv2,
@@ -186,13 +198,13 @@ class P2PConnection(asyncio.Protocol):
         if net == "devnet":
             devnet_name = "devnet1"  # see initialize_datadir()
             if self.uacomment is None:
-                self.strSubVer = MY_SUBVERSION % ("(devnet.devnet-%s)" % devnet_name)
+                self.strSubVer = P2P_SUBVERSION % ("(devnet.devnet-%s)" % devnet_name)
             else:
-                self.strSubVer = MY_SUBVERSION % ("(devnet.devnet-%s,%s)" % (devnet_name, self.uacomment))
+                self.strSubVer = P2P_SUBVERSION % ("(devnet.devnet-%s,%s)" % (devnet_name, self.uacomment))
         elif self.uacomment is not None:
-            self.strSubVer = MY_SUBVERSION % ("(%s)" % self.uacomment)
+            self.strSubVer = P2P_SUBVERSION % ("(%s)" % self.uacomment)
         else:
-            self.strSubVer = MY_SUBVERSION % ""
+            self.strSubVer = P2P_SUBVERSION % ""

     def peer_connect(self, dstaddr, dstport, *, net, timeout_factor, uacomment=None):
         self.peer_connect_helper(dstaddr, dstport, net, timeout_factor, uacomment)
@@ -368,6 +380,9 @@ class P2PInterface(P2PConnection):
     def peer_connect_send_version(self, services):
         # Send a version msg
         vt = msg_version()
+        vt.nVersion = P2P_VERSION
+        vt.strSubVer = P2P_SUBVERSION
+        vt.relay = P2P_VERSION_RELAY
         vt.nServices = services
         vt.addrTo.ip = self.dstaddr
         vt.addrTo.port = self.dstport
@ -376,7 +391,7 @@ class P2PInterface(P2PConnection):
vt.strSubVer = self.strSubVer
self.on_connection_send_msg = vt # Will be sent soon after connection_made
def peer_connect(self, *args, services=NODE_NETWORK | NODE_HEADERS_COMPRESSED, send_version=True, **kwargs):
def peer_connect(self, *args, services=P2P_SERVICES, send_version=True, **kwargs):
create_conn = super().peer_connect(*args, **kwargs)
if send_version:
@@ -469,7 +484,7 @@ class P2PInterface(P2PConnection):
     def on_verack(self, message): pass

     def on_version(self, message):
-        assert message.nVersion >= MIN_VERSION_SUPPORTED, "Version {} received. Test framework only supports versions greater than {}".format(message.nVersion, MIN_VERSION_SUPPORTED)
+        assert message.nVersion >= MIN_P2P_VERSION_SUPPORTED, "Version {} received. Test framework only supports versions greater than {}".format(message.nVersion, MIN_P2P_VERSION_SUPPORTED)
         if self.support_addrv2:
             self.send_message(msg_sendaddrv2())
         self.send_message(msg_verack())

@@ -23,7 +23,7 @@ import collections

 from .authproxy import JSONRPCException
 from .descriptors import descsum_create
-from .messages import MY_SUBVERSION
+from .p2p import P2P_SUBVERSION
 from .util import (
     MAX_NODES,
     append_config,
@@ -596,7 +596,7 @@ class TestNode():

     def num_test_p2p_connections(self):
         """Return number of test framework p2p connections to the node."""
-        return len([peer for peer in self.getpeerinfo() if peer['subver'] == MY_SUBVERSION])
+        return len([peer for peer in self.getpeerinfo() if P2P_SUBVERSION % "" in peer['subver']])

     def disconnect_p2ps(self):
         """Close all p2p connections to the node."""

@@ -306,6 +306,7 @@ BASE_SCRIPTS = [
     'rpc_estimatefee.py',
     'p2p_unrequested_blocks.py', # NOTE: needs dash_hash to pass
     'feature_shutdown.py',
+    'p2p_ibd_txrelay.py',
     'rpc_coinjoin.py',
     'rpc_masternode.py',
     'rpc_mnauth.py',

@@ -101,7 +101,6 @@ shift-base:arith_uint256.cpp
 shift-base:crypto/
 shift-base:hash.cpp
 shift-base:leveldb/
-shift-base:net_processing.cpp
 shift-base:streams.h
 shift-base:test/fuzz/crypto_diff_fuzz_chacha20.cpp
 shift-base:util/bip32.cpp