* Prepare Dash-related stuff before starting ThreadImport
* Ensure activeMasternodeManager is not null in ThreadImport when DIP3 is active and we are running in masternode mode
* Pass CNode* to IsMasternodeQuorumNode and let it also check verifiedProRegTxHash
This makes IsMasternodeQuorumNode return true for incoming peer connections
as well.
* Let GetMasternodeQuorumNodes also take verifiedProRegTxHash into account
This makes it return NodeIds for incoming peer connections as well.
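A minimal sketch of the idea with simplified stand-in types (the real code uses Dash's uint256 and LLMQ enums, and also keeps returning true for outgoing intra-quorum connections we added ourselves; only the MNAUTH path is shown):
```cpp
#include <map>
#include <set>
#include <string>
#include <utility>

using ProRegTxHash = std::string;                  // stand-in for uint256
using QuorumKey    = std::pair<int, std::string>;  // (llmqType, quorumHash)

struct Node {
    ProRegTxHash verifiedProRegTxHash; // empty until MNAUTH succeeded
};

struct Connman {
    // quorum -> proRegTxHashes of its members
    std::map<QuorumKey, std::set<ProRegTxHash>> masternodeQuorumNodes;

    // An authenticated incoming peer counts as a quorum node if its
    // verified proRegTxHash is a member of any tracked quorum.
    bool IsMasternodeQuorumNode(const Node* pnode) const
    {
        if (pnode->verifiedProRegTxHash.empty()) return false;
        for (const auto& entry : masternodeQuorumNodes) {
            if (entry.second.count(pnode->verifiedProRegTxHash)) return true;
        }
        return false;
    }
};
```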
* Remove AddParticipatingNode and the need for it
This was needed in the past when we were unable to identify incoming
connections from other quorum members. Now that we have MNAUTH, we can
easily identify all connected members.
* Don't track interestedIn quorums in CSigSharesNodeState anymore
Same as with the previous commit, we're now able to easily identify which
nodes to announce sig shares to.
* Remove unused CConnman::GetMasternodeQuorumAddresses
* Sort evo/* source files in Makefile.am
* Keep track of proRegTxHash in CConnman::masternodeQuorumNodes map
We will need the proRegTxHash later.
* Fix serialization of std::tuple with const rvalue elements
Having serialization and deserialization in the same specialized template
results in compilation failures due to the "if(for_read)" branch.
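A sketch of the shape of the fix, simplified to a 2-tuple (the real change covers the general case): splitting the read and write paths into separate templates means the deserialization path is never instantiated for tuples holding const elements.
```cpp
#include <tuple>

template<typename Stream, typename T0, typename T1>
void Serialize(Stream& s, const std::tuple<T0, T1>& item)
{
    Serialize(s, std::get<0>(item));
    Serialize(s, std::get<1>(item));
}

template<typename Stream, typename T0, typename T1>
void Unserialize(Stream& s, std::tuple<T0, T1>& item)
{
    Unserialize(s, std::get<0>(item));
    Unserialize(s, std::get<1>(item));
}
```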
* Implement MNAUTH message
This allows masternodes to authenticate themselves.
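The rough shape of the payload, as a hedged sketch with stand-in types (verification details simplified):
```cpp
#include <array>
#include <vector>

using uint256      = std::array<unsigned char, 32>; // stand-in
using BLSSignature = std::vector<unsigned char>;    // stand-in

// MNAUTH payload: the masternode names itself via proRegTxHash and proves
// control of its operator key by signing a per-connection challenge that
// both sides derived during the VERSION/VERACK handshake. The receiver
// verifies the signature against the operator pubkey from the deterministic
// masternode list and, on success, stores the peer's proRegTxHash as
// "verified".
struct MNAuth {
    uint256      proRegTxHash;
    BLSSignature sig;
};
```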
* Protect fresh incoming connections for a second from eviction
Give fresh connections some time to do the VERSION/VERACK handshake and
an optional MNAUTH when the peer is a masternode. Once MNAUTH has happened,
the incoming connection is forever protected against eviction.
If a timeout of 1 second occurs, or the first message after VERACK is not
MNAUTH, the node loses this protection and becomes eligible for eviction.
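A sketch of the resulting eviction rule (field names illustrative, not the exact CConnman logic):
```cpp
#include <chrono>

struct PeerState {
    std::chrono::steady_clock::time_point connectedAt;
    bool fMNAuthed = false; // set once a valid MNAUTH was received
};

bool IsProtectedFromEviction(const PeerState& peer)
{
    using namespace std::chrono;
    if (peer.fMNAuthed) return true; // authenticated MNs: protected forever
    // fresh peers get 1 second to finish VERSION/VERACK (+ optional MNAUTH)
    return steady_clock::now() - peer.connectedAt < seconds(1);
}
```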
* Avoid connecting to masternodes if an incoming connection is from the same one
Now that incoming connections from MNs authenticate themselves, we can avoid
connecting to the same MNs through intra-quorum connections.
* Apply review suggestions
* Manually pull builder base image and let travis retry it on failure
* Split base package installation in Dockerfile.builder into multiple RUN lines
This allows better local cache usage on failure and retry.
* Use travis_retry for docker build
* Fix warning about size_t to int conversion
* Fix loop in CLLMQUtils::GetQuorumConnections to add at least 2 connections
When reaching very small quorum sizes, the current algorithm results in
only a single connection being added. This would usually be fine, but it
becomes an issue when that one connection fails. We should always have at
least one backup connection.
This fixes simple PoSe test failures where the quorum size gets down to 4
with one of the 4 members being down. If other nodes are unlucky to connect
to this node, they fail as well even though 3 members in a quorum should
work fine.
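A hedged sketch of the loop (offsets illustrative; the real code in CLLMQUtils::GetQuorumConnections operates on the deterministically sorted member list):
```cpp
#include <cstddef>
#include <set>

std::set<size_t> GetQuorumConnections(size_t myIdx, size_t quorumSize)
{
    std::set<size_t> result;
    if (quorumSize < 2) return result; // nobody else to connect to
    // connect to members at power-of-two offsets: +1, +2, +4, ...
    for (size_t gap = 1; gap < quorumSize; gap *= 2) {
        result.insert((myIdx + gap) % quorumSize);
    }
    // the fix: for very small quorums the loop above can yield a single
    // peer; top up to at least 2 so one failing connection is not fatal
    for (size_t offset = 1; result.size() < 2 && offset < quorumSize; offset++) {
        result.insert((myIdx + offset) % quorumSize);
    }
    return result;
}
```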
* Update src/llmq/quorums_utils.cpp
Co-Authored-By: codablock <ablock84@gmail.com>
* Bump MAX_OUTBOUND_MASTERNODE_CONNECTIONS to 250 on masternodes
Masternodes now need to connect to many more MNs due to the intra-quorum
communication.
250 is a very conservative value loosely based on the absolute worst-case
number of outgoing connections required, assuming that a MN manages to
become part of all 24 active LLMQs.
* Fix infinite loop in CConnman::Interrupt
* Move out conditional calc into its own variable
Co-Authored-By: codablock <ablock84@gmail.com>
* check matches for special transactions' additional data
* additional method to check matches for CKeyID
* remove code duplication
* unit tests for bloom filters for DIP2 txes
* automatically update filters if special transaction matches
* unit tests for filter updates
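Roughly how the matching might look (hedged; uses Dash's GetTxPayload/CProRegTx but simplifies the dispatch, and only the ProRegTx case is shown):
```cpp
// For a DIP2 special transaction, also match the filter against fields of
// the extra payload; on a match, insert the tx hash so later transactions
// referencing this one keep matching (when the filter is updatable).
bool MatchSpecialTxPayload(CBloomFilter& filter, const CTransaction& tx)
{
    if (tx.nVersion < 3 || tx.nType == TRANSACTION_NORMAL) {
        return false; // not a special transaction
    }
    switch (tx.nType) {
    case TRANSACTION_PROVIDER_REGISTER: {
        CProRegTx proTx;
        if (!GetTxPayload(tx, proTx)) return false;
        if (filter.contains(proTx.keyIDOwner) ||
            filter.contains(proTx.keyIDVoting)) {
            filter.insert(tx.GetHash());
            return true;
        }
        break;
    }
    // ... the other special tx types are handled analogously
    }
    return false;
}
```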
* Error in comment
Co-Authored-By: gladcow <sergey@dash.org>
* use switch instead of if-chain
* fix version check
* remove code duplication
* add negative tests in unit tests
* Introduce "qsendrecsigs" to indicate that plain recovered sigs should be sent
Full nodes, including masternodes, will send this message automatically.
Other node implementations (e.g. SPV) are usually not interested and would
not send this message.
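A hedged sketch of both sides (message name per this change; call sites simplified and the peer flag name illustrative):
```cpp
// Sender side: a full node announces interest right after VERACK.
void AnnounceRecSigInterest(CConnman& connman, CNode* pnode)
{
    CNetMsgMaker msgMaker(pnode->GetSendVersion());
    connman.PushMessage(pnode, msgMaker.Make("qsendrecsigs", true));
}

// Receiver side, inside the message dispatcher: remember the flag and
// later relay plain recovered sigs only to peers that set it.
//   if (strCommand == "qsendrecsigs") {
//       bool b;
//       vRecv >> b;
//       pnode->fSendRecSigs = b;  // flag name illustrative
//   }
```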
* Use std::atomic<bool> instead of std::atomic_bool
Not related to this PR, but a small enough change to include it here as
well.
* Add support for log category to CBatchedLogger
* Use "llmq" logging category in LLMQ code
* Use "chainlocks" logging category in ChainLocks code
* Log errors without logging category
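A hedged usage sketch (constructor signature illustrative): the batch is emitted as one block on flush, and with a category attached the whole batch obeys the usual -debug filtering.
```cpp
void ProcessPendingRecoveredSigs()
{
    // category "llmq" plus a common prefix for every batched line
    CBatchedLogger logger("llmq", "CSigningManager::ProcessPendingRecoveredSigs");
    logger.Batch("got %d pending sigs", 5);
    logger.Batch("verified batch in %d ms", 12);
    // going out of scope flushes the batch in one piece, or drops it
    // when the "llmq" category is not enabled
}
```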
* Don't rely on UTXO set in CheckCanLock
The UTXO set only works for TXs in the mempool and won't work when we try
to retroactively lock unlocked TXs from blocks.
This is safe as ProcessTx is only called when a TX was accepted into the
mempool or connected in a block, which means that all input checks were
good.
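A hedged sketch of the lookup change (GetTransaction here stands for the mempool/txindex lookup; the real CheckCanLock does more):
```cpp
// Instead of querying the UTXO set (which no longer contains outputs that
// a mined parent already spent), locate the parent tx itself; mempool
// acceptance or block connection has already validated the inputs.
bool CheckCanLockInput(const COutPoint& outpoint)
{
    CTransactionRef parentTx;
    uint256 hashBlock;
    if (!GetTransaction(outpoint.hash, parentTx, Params().GetConsensus(), hashBlock)) {
        return false; // parent unknown: cannot decide yet
    }
    return true; // parent known via mempool or a connected block
}
```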
* Rename RetryLockMempoolTxs to RetryLockTxs and let it retry connected TXs
* Instead of manually calling ProcessTx, let SyncTransaction handle all cases
SyncTransaction is called from AcceptToMemoryPool and when transactions got
connected in a block. So this is the time we want to run TXs through
ProcessTx. This also enables retroactive signing of TXs that were unknown
before a new block appeared.
* Test retroactive signing and safe TXs in LLMQ ChainLocks tests
* Also test for retroactive signing of chained TXs
* Honor lockedParentTx when looking for TXs to retry signing
* Stop scanning for TXs to retry after a depth of 6
* Generate 6 blocks to avoid retroactive signing overloading Travis
* Avoid retroactive signing
* Don't rely on NewPoWValidBlock and use SyncTransaction to build blockTxs
NewPoWValidBlock is not guaranteed to be called when blocks come in fast.
When a block is accepted in AcceptBlock, NewPoWValidBlock is only called
when the new block is a successor of the currently active tip. This is not
the case when a second block is accepted immediately after the first one,
as the first block is not connected yet.
This might actually be a bug in the handling of NewPoWValidBlock, so we
might need to check/fix it later, but for now I prefer not to touch that
part.
Instead, we now use SyncTransaction to gather TXs for blockTxs. This works
because SyncTransaction is called for all transactions in a freshly
connected block in one go. The call also happens before UpdatedBlockTip is
called, so it's fine with the existing logic.
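A hedged sketch of the gathering step (simplified container; the hook signature matches the validation interface of that time):
```cpp
void CChainLocksHandler::SyncTransaction(const CTransaction& tx,
                                         const CBlockIndex* pindex,
                                         int posInBlock)
{
    // posInBlock == -1 means the tx was removed/conflicted rather than
    // connected; pindex == nullptr means a plain mempool acceptance
    if (pindex == nullptr || posInBlock == -1) return;
    LOCK(cs);
    blockTxs[pindex->GetBlockHash()].insert(tx.GetHash());
}
```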
* Use tx.IsCoinBase() instead of checking index 0
Also check for empty vin.
* Remove unused parameters from CInstantSendManager::ProcessTx
* Pass txHash in CheckCanLock by reference instead of pointer
* Don't allow locking of TXs without inputs
* Remove unused local variable nInstantSendConfirmationsRequired
* Don't subtract 1 from nInstantSendConfirmationsRequired
This was necessary in the old system but is not necessary in the new system.
It also prevented proper retroactive signing of chained TXs in regtest, as
it caused child TXs to return true immediately from CheckCanLock when they
should actually have waited for the parent TX to become locked first.
* Access chainActive.Height() while cs_main is locked
* Properly read and write lastChainLockBlock
"pindex" is NOT the chainlocked block after the while loop finishes. We
must use the pindex (renamed to pindexChainLock now) given on method entry.
Also, the GetLastChainLockBlock() result was not assigned to
lastChainLockBlock, which caused the while loop to run unnecessarily long.
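A hedged sketch of the corrected flow (names simplified):
```cpp
// pindexChainLock is the block the new chainlock points at; keep it
// around, because the walk below mutates the cursor.
const CBlockIndex* pindexChainLock = pindex;
uint256 lastChainLockBlock = GetLastChainLockBlock(); // assignment was missing
while (pindex != nullptr && pindex->GetBlockHash() != lastChainLockBlock) {
    // ... mark this block's TXs as chainlocked ...
    pindex = pindex->pprev;
}
// write the block given on entry, NOT the loop cursor
WriteLastChainLockBlock(pindexChainLock->GetBlockHash());
```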
* Generalize filtering in NewPoWValidBlock and SyncTransaction
We're actually interested in all TXs that have inputs, so no need to
explicitly check for tx types.
* Use tx.IsCoinBase() instead of checking for index 0
* Handle cases where a TX is not received yet in wait_for_instantlock
* Wait on all nodes for the locks
Otherwise we end up with the sender having it locked but other nodes
not yet, failing the test.
* Fix LogPrintf call in CChainLocksHandler::DoInvalidateBlock
* Unify autoIS/send_smth functions
* Rename autoix-mempool.py -> autois-mempool.py
* Make sure create_raw_trx produces expected results
* Make sure sender has enough inputs and nodes are synced before starting the actual test
* Mine one block to clean mempool up
* 2 blocks is enough for IS on regtest
This also unifies it across different IS tests
* Allow wait_for_instantlock to be called on any node, not only on the one that has the tx in the wallet
* No need to query for tx this often in wait_for_instantlock
* Rename create_raw_trx -> create_raw_tx
* Fund sender with a single TX instead of 30
* Require only 3 out of 5 signatures for old InstantSend in regtest mode
* Use LLMQs of size 5 with threshold of 3 for regtest
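A minimal sketch, assuming a params struct along the lines of Consensus::LLMQParams (field names illustrative):
```cpp
struct LLMQParams {
    int size;      // nominal number of members per quorum
    int minSize;   // minimum valid members for the quorum to be usable
    int threshold; // signatures needed to recover a quorum signature
};

// regtest: quorums of 5, usable and signable with 3
static const LLMQParams REGTEST_LLMQ_PARAMS{5, 3, 3};
```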
* Fix wrong check for out-of-range bits in CFixedBitSet
* Reduce number of masternodes in masternode/LLMQ tests
* Add missing \n to LogPrintf call
* Use correct indexes for isolated/receiver/sender nodes
The way it was before left nodes 1-3 unused and used nodes 6-8 for these
3 special nodes, even though those are masternodes.
* Avoid stopping/starting isolated node in p2p-instantsend.py
It's enough to disable networking for this node.
* Print which DKG type aborted
* Don't directly call EnforceBestChainLock and instead schedule the call
Calling EnforceBestChainLock might result in switching chains, which in
turn might end up calling signals, so we get into a recursive call chain.
Better to call EnforceBestChainLock from the scheduler.
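A hedged sketch of the deferral (CScheduler API as in Bitcoin; the capture is illustrative):
```cpp
// instead of calling EnforceBestChainLock() directly from a signal
// handler, hand it to the scheduler thread to break the recursion
scheduler->scheduleFromNow([this]() {
    EnforceBestChainLock();
}, 0); // run as soon as the scheduler gets to it
```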
* Regularly call EnforceBestChainLock and reset error flags on locked chain
* Don't invalidate blocks from CChainLocksHandler::TrySignChainTip
As the name of this method implies, it's trying to sign something and not
enforce/invalidate chains. Invalidating blocks is the job of
EnforceBestChainLock.
* Only call ActivateBestChain when tip != best CL tip
* Fix unprotected access of bestChainLockBlockIndex and bail out if it's null
* Fix ChainLocks tests after changes in enforcement handling
* Only invoke NotifyChainLock signal from EnforceBestChainLock
This ensures that NotifyChainLock is not prematurely called before the
block is fully connected.
* Use a mutex to ensure that only one thread executes ActivateBestChain
It might happen that 2 threads enter ActivateBestChain at the same time and
start processing block by block, randomly switching between threads so that
sometimes one thread processes a block and then the other one processes the
next. A mutex now protects ActivateBestChain against this race.
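A minimal sketch of the guard (mutex name illustrative; the real change lives in validation.cpp):
```cpp
#include <mutex>

static std::mutex g_activate_mutex; // serializes ActivateBestChain

bool ActivateBestChain(/* CValidationState& state, ... */)
{
    // with the lock held, a second thread cannot interleave the
    // block-by-block connect loop below
    std::lock_guard<std::mutex> lock(g_activate_mutex);
    // ... connect blocks one by one towards the most-work chain ...
    return true;
}
```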
* Rename local copy of bestChainLockBlockIndex to currentBestChainLockBlockIndex
* Don't call ActivateBestChain when best CL is part of the main chain
This requires the removal of some very liberal (incorrect) cs_mains
sprinkled in some tests. It adds some chainActive.Tip() races, but
the tests are all single-threaded anyway.
725b79a [test] Verify node doesn't send headers that haven't been fully validated (Russell Yanofsky)
3788a84 Do not send (potentially) invalid headers in response to getheaders (Matt Corallo)
Pull request description:
Nowhere else in the protocol do we send headers which are for
blocks we have not fully validated except in response to getheaders
messages with a null locator. On my public node I have not seen any
such request (whether for an invalid block or not) in at least two
years of debug.log output, indicating that this should have minimal
impact.
Tree-SHA512: c1f6e0cdcdfb78ea577d555f9b3ceb1b4b60eff4f6cf313bfd8b576c9562d797bea73abc23f7011f249ae36dd539c715f3d20487ac03ace60e84e1b77c0c1e1a
eff4bd8 [test] P2P functional test for certain fingerprinting protections (Jim Posen)
a2be3b6 [net] Ignore getheaders requests for very old side blocks (Jim Posen)
Pull request description:
Sending a getheaders message with an empty locator and a stop hash is a request for a single header by hash. The node will respond with headers for blocks not in the main chain as well as those in the main chain. To avoid fingerprinting, the node should, however, ignore requests for headers on side branches that are too old. This replicates the logic that currently exists for `getdata` requests for blocks.
Tree-SHA512: e04ef61e2b73945be6ec5977b3c5680b6dc3667246f8bfb67afae1ecaba900c0b49b18bbbb74869f7a37ef70b6ed99e78ebe0ea0a1569369fad9e447d720ffc4
b49ad44 Add comment about cs_most_recent_block coverage (Matt Corallo)
c47f5b7 Cache witness-enabled state with recent-compact-block-cache (Matt Corallo)
efc135f Use cached [compact] blocks to respond to getdata messages (Matt Corallo)
Tree-SHA512: ffc478bddbf14b8ed304a3041f47746520ce545bdeffa9652eff2ccb25c8b0d5194abe72568c10f9c1b246ee361176ba217767af834752a2ca7263d292005e87
This seems to have been backported wrongly. In the Bitcoin code there is a
condition on requested witness data, and we took the branch which recreates
the compact block. We should have taken the other branch, because we always
send with witness data (there is no SegWit in Dash).
It's actually not true that these should always be the same. In case a
quorum is built while the total number of masternodes in the network is
below the quorum size, we might still end up with a valid quorum as long as
the total number of masternodes is >= minSize.