This fixes a bug in ScanQuorums which made it return quorums which were not
mined at the time of pindexStart. This was due to quorumHashes being based
on older blocks (the phase=0 block) which are ancestors of pindexStart even
if the commitment was actually mined in a later block.
GetMinedAndActiveCommitmentsUntilBlock is also going to be used for quorum
commitment merkle roots in CCbTx.
This also removes GetFirstMinedQuorumHash as it's not needed anymore.
* Pass CNode* to IsMasternodeQuorumNode and let it also check verifiedProRegTxHash
This makes IsMasternodeQuorumNode return true on incoming peer connections
as well.
* Let GetMasternodeQuorumNodes also take verifiedProRegTxHash into account
This makes it return NodeIds for incoming peer connections as well.
* Remove AddParticipatingNode and the need for it
This was needed in the past when we were unable to identify incoming
connections from other quorum members. Now that we have MNAUTH, we can
easily identify all connected members.
* Don't track interestedIn quorums in CSigSharesNodeState anymore
Same as with the previous commit, we're now able to easily identify which
nodes to announce sig shares to.
* Remove unused CConnman::GetMasternodeQuorumAddresses
* Sort evo/* source files in Makefile.am
* Keep track of proRegTxHash in CConnman::masternodeQuorumNodes map
We will later need the proRegTxHash
* Fix serialization of std::tuple with const rvalue elements
Having serialization and deserialization in the same specialized template
results in compilation failures due to the "if(for_read)" branch.
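A minimal sketch of the split, not the actual Dash serializer (the Stream type and its operators are assumed): keeping reading and writing in two separate function templates means a tuple holding const elements only ever instantiates the write path, instead of failing to compile the unreachable read branch of a single "if (for_read)" template.
```cpp
#include <tuple>

// Hypothetical serializer split; assumes Stream provides operator<< / operator>>
// for the element types.
template <typename Stream, typename A, typename B>
void Serialize(Stream& s, const std::tuple<A, B>& t)
{
    // Write path only: works even when A/B are const, as we never assign to them.
    s << std::get<0>(t);
    s << std::get<1>(t);
}

template <typename Stream, typename A, typename B>
void Unserialize(Stream& s, std::tuple<A, B>& t)
{
    // Read path only: requires mutable elements, but is never instantiated
    // for the const-element tuples used on the write side.
    s >> std::get<0>(t);
    s >> std::get<1>(t);
}
```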
* Implement MNAUTH message
This allows masternodes to authenticate themselves.
* Protect fresh incoming connections for a second from eviction
Give fresh connections some time to do the VERSION/VERACK handshake and
an optional MNAUTH when it's a masternode. When an MNAUTH happened, the
incoming connection is then forever protected against eviction.
If a timeout of 1 second occurs or the first message after VERACK is not
MNAUTH, the node is not protected anymore and becomes eligible for
eviction.
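A simplified standalone sketch of that rule (fields and names are illustrative, not the actual CConnman code); the real logic also drops the protection as soon as the first post-VERACK message is something other than MNAUTH.
```cpp
#include <cstdint>

struct InboundPeerSketch
{
    int64_t nTimeConnected = 0;   // when the inbound connection was accepted (seconds)
    bool fMNAuthReceived = false; // set once a valid MNAUTH was processed
    bool fFirstMessageAfterVerackWasMNAuth = true; // cleared if another message came first
};

bool IsProtectedFromEviction(const InboundPeerSketch& peer, int64_t now)
{
    if (peer.fMNAuthReceived) {
        return true; // authenticated masternode: never evict
    }
    // 1 second grace period for VERSION/VERACK (+ optional MNAUTH),
    // lost as soon as a non-MNAUTH message follows VERACK.
    return peer.fFirstMessageAfterVerackWasMNAuth && now - peer.nTimeConnected < 1;
}
```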
* Avoid connecting to masternodes if an incoming connection is from the same one
Now that incoming connections from MNs authenticate themselves, we can avoid
connecting to the same MNs through intra-quorum connections.
* Apply review suggestions
* Fix warning about size_t to int conversion
* Fix loop in CLLMQUtils::GetQuorumConnections to add at least 2 connections
When reaching very small quorum sizes, the current algorithm results in
only a single connection being added. This would usually be fine, but is an
issue when this connection fails. We should always have at least one backup
connection.
This fixes simple PoSe test failures where the quorum size gets down to 4
with one of the 4 members being down. If other nodes are unlucky enough to
connect to this node, they fail as well even though 3 members in a quorum
should work fine.
* Update src/llmq/quorums_utils.cpp
Co-Authored-By: codablock <ablock84@gmail.com>
* Introduce "qsendrecsigs" to indicate that plain recovered sigs should be sent
Full nodes, including masternodes, will send this message automatically.
Other node implementations (e.g. SPV) are usually not interested and would
not send this message.
* Use std::atomic<bool> instead of std::atomic_bool
Not related to this PR, but a small enough change to include it here as
well.
* Add support for log category to CBatchedLogger
* Use "llmq" logging category in LLMQ code
* Use "chainlocks" logging category in ChainLocks code
* Log errors without logging category
* Don't rely on UTXO set in CheckCanLock
The UTXO set only works for TXs in the mempool and won't work when we try
to retroactively lock unlocked TXs from blocks.
This is safe as ProcessTx is only called when a TX was accepted into the
mempool or connected in a block, which means that all input checks were
good.
* Rename RetryLockMempoolTxs to RetryLockTxs and let it retry connected TXs
* Instead of manually calling ProcessTx, let SyncTransaction handle all cases
SyncTransaction is called from AcceptToMemoryPool and when transactions got
connected in a block. So this is the time we want to run TXs through
ProcessTx. This also enables retroactive signing of TXs that were unknown
before a new block appeared.
* Test retroactive signing and safe TXs in LLMQ ChainLocks tests
* Also test for retroactive signing of chained TXs
* Honor lockedParentTx when looking for TXs to retry signing
* Stop scanning for TXs to retry after a depth of 6
* Generate 6 blocks to avoid retroactive signing overloading Travis
* Avoid retroactive signing
* Don't rely on NewPoWValidBlock and use SyncTransaction to build blockTxs
NewPoWValidBlock is not guaranteed to be called when blocks come in fast.
When a block is accepted in AcceptBlock, NewPoWValidBlock is only called
when the new block is a successor of the currently active tip. This is not
the case when a second block is accepted immediately after the first one,
as the first block is not yet connected.
This might actually be a bug in the handling of NewPoWValidBlock, so we
might need to check/fix this later, but currently I prefer to not touch
that part.
Instead, we now use SyncTransaction to gather TXs for blockTxs. This works
because SyncTransaction is called for all transactions in a freshly
connected block in one go. The call also happens before UpdatedBlockTip is
called, so it's fine with the existing logic.
* Use tx.IsCoinBase() instead of checking index 0
Also check for empty vin.
* Remove unused parameters from CInstantSendManager::ProcessTx
* Pass txHash in CheckCanLock by reference instead of pointer
* Don't allow locking of TXs without inputs
* Remove unused local variable nInstantSendConfirmationsRequired
* Don't subtract 1 from nInstantSendConfirmationsRequired
This was necessary in the old system but is not necessary in the new system.
It also prevented proper retroactive signing of chained TXs in regtest as
it resulted in child TXs returning true immediately from CheckCanLock when
they should actually have waited for the parent TX to become locked first.
* Access chainActive.Height() while cs_main is locked
* Properly read and write lastChainLockBlock
"pindex" is NOT the chainlocked block after the while loop finishes. We
must use the pindex (renamed to pindexChainLock now) given on method entry.
Also, the GetLastChainLockBlock() result was not assigned to
lastChainLockBlock, which resulted in the while loop running unnecessarily
long.
* Generalize filtering in NewPoWValidBlock and SyncTransaction
We're actually interested in all TXs that have inputs, so no need to
explicitly check for tx types.
* Use tx.IsCoinBase() instead of checking for index 0
* Handle cases where a TX is not received yet in wait_for_instantlock
* Wait on all nodes for the locks
Otherwise we end up with the sender having it locked but other nodes
not yet, failing the test.
* Fix LogPrintf call in CChainLocksHandler::DoInvalidateBlock
* Require only 3 out of 5 signatures for old InstantSend in regtest mode
* Use LLMQs of size 5 with threshold of 3 for regtest
* Fix wrong check for out-of-range bits in CFixedBitSet
* Reduce number of masternodes in masternode/LLMQ tests
* Add missing \n to LogPrintf call
* Use correct indexes for isolated/receiver/sender nodes
The way it was before resulted in nodes 1-3 being unused and nodes 6-8
being used for these 3 special nodes, even though those are masternodes.
* Avoid stopping/starting isolated node in p2p-instantsend.py
It's enough to disable networking for this node.
* Print which DKG type aborted
* Don't directly call EnforceBestChainLock and instead schedule the call
Calling EnforceBestChainLock might result in switching chains, which in
turn might end up calling signals, so we get into a recursive call chain.
Better to call EnforceBestChainLock from the scheduler.
* Regularly call EnforceBestChainLock and reset error flags on locked chain
* Don't invalidate blocks from CChainLocksHandler::TrySignChainTip
As the name of this method implies, it's trying to sign something and not
enforce/invalidate chains. Invalidating blocks is the job of
EnforceBestChainLock.
* Only call ActivateBestChain when tip != best CL tip
* Fix unprotected access of bestChainLockBlockIndex and bail out if its null
* Fix ChainLocks tests after changes in enforcement handling
* Only invoke NotifyChainLock signal from EnforceBestChainLock
This ensures that NotifyChainLock is not prematurely called before the
block is fully connected.
* Use a mutex to ensure that only one thread executes ActivateBestChain
It might happen that 2 threads enter ActivateBestChain at the same time and
start processing block by block, randomly switching between threads so that
sometimes one thread processes a block and then the other processes the next.
A mutex now protects ActivateBestChain against this race.
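A minimal standalone sketch of the guard (names are illustrative): one dedicated mutex serializes entry into the activation loop so blocks are connected strictly one after another.
```cpp
#include <mutex>

std::mutex g_cs_activateBestChain; // hypothetical guard, held for the whole call

void ActivateBestChainSketch()
{
    // Only one thread at a time may walk the chain towards the best tip;
    // a second caller blocks here instead of interleaving block connection.
    std::lock_guard<std::mutex> lock(g_cs_activateBestChain);
    // ... connect blocks one by one towards the best known tip ...
}
```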
* Rename local copy of bestChainLockBlockIndex to currentBestChainLockBlockIndex
* Don't call ActivateBestChain when best CL is part of the main chain
It's actually not true that these should always be the same. In case a
quorum is built and the total number of masternodes in the network is below
the quorum size, we might still end up having a valid quorum as long as
the total number of masternodes is >= minSize.
* Fix remaining `print`s in tests
* use AssertLockHeld(cs) instead of relying on comments
* actually use `clsig` in `EnforceBestChainLock()`
* fix log output in `EnforceBestChainLock()`
* drop comments
* Fix deadlock in CSigSharesManager::SendMessages
Locking "cs" at this location caused a (potential) deadlock due to a changed
order of cs and cs_vNodes locking. This changes the method to not require
the session object anymore which removes the need for locking.
* Pass size of LLMQ instead of llmqType into CSigSharesInv::Init
This allows use of sizes which are not supported in chainparams.
Later commits will introduce checks for "safe TXs" which might abort the
signing on first try, but succeed a few seconds later, so we periodically
retry to sign the tip.
The local node might actually be the bad one, as it might not have caught
up with the chain. In that case, LLMQs might be different for the sending
and receiving node.
When ProcessMessageBatchedSigShares returns false, it's interpreted as
if an invalid/malicious message was received, causing a ban. So, we should
return "!ban" instead of just "ban".
* Ignore sig share inv messages when we don't have the quorum vvec
* Update src/llmq/quorums_signing_shares.cpp
Co-Authored-By: codablock <ablock84@gmail.com>
* On timeout, print the proTxHashes of members which did not send a share
* Move inactive quorums check above timeout checks
This allows reusing things in the next commit
* Avoid locking cs_main through GetQuorum by using a pre-filled map
* Use find() instead of [] to access quorums map
* Return bool in ProcessMessageXXX methods to indicate misbehaviour
* Send/Receive multiple messages as part of one P2P message in CSigSharesManager
Many messages, especially QSIGSHARESINV and QGETSIGSHARES, are very small
by nature (5-14 bytes for a 50-member LLMQ). The message headers are
24 bytes, meaning that we produce a lot of overhead for these small messages.
This sums up quite a bit when thousands of signing sessions are happening
in parallel.
This commit changes all related P2P messages to send a vector of messages
instead of a single message.
* Remove bogus lines
Included these by accident
* Unify handling of BanNode in ProcessMessageXXX methods
* Remove bogus check for fMasternodeMode
* Properly use == instead of misleading >= in SendMessages
* Put "didSend = true" near PushMessage
Stop relying on the information previously found in the CSigSharesInv
and CBatchedSigShares messages and instead use the information found in
the session referenced by the session id.
This also updates a few LogPrintf calls. Previously, CSigSharesInv::ToString
also included the signHash in the returned string, which is not the case
anymore, so we have to add it manually.
We must be careful not to blindly use externally provided keys in unordered
sets/maps, as attackers might find ways to cause unbalanced hash buckets
and thus degrade performance.
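A minimal sketch of the mitigation, not Dash's actual salted hasher: mixing a per-process random salt into the hash means an attacker cannot predict which bucket an externally chosen key will land in. A real implementation would use a keyed hash such as SipHash rather than XOR with std::hash.
```cpp
#include <cstdint>
#include <functional>
#include <random>
#include <unordered_map>

struct SaltedHasherSketch
{
    // Random salt chosen once per hasher (i.e. per map); unknown to peers.
    uint64_t salt = std::mt19937_64(std::random_device{}())();

    size_t operator()(uint64_t key) const
    {
        // Illustrative only: std::hash is not cryptographic, a real
        // implementation would use SipHash keyed with the salt.
        return std::hash<uint64_t>{}(key ^ salt);
    }
};

// Keys that peers can freely choose go into a salted map.
using UntrustedKeyMap = std::unordered_map<uint64_t, int, SaltedHasherSketch>;
```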
* Indicate success when signing was unnecessary
* Fix typo in name of LLMQ_400_60
* Move RemoveAskFor call for CLSIGs into ProcessNewChainLock
In case we got INV items for the same CLSIG that we recreated through
HandleNewRecoveredSig, (re-)requesting of the CLSIG from other peers
becomes unnecessary.
* Move Cleanup() call in CChainLocksHandler::UpdatedBlockTip up
We bail out of this method early in a few situations, so a Cleanup() call
at the bottom might never be reached.
* Bail out from CChainLocksHandler::UpdatedBlockTip if we already got the CLSIG
* Call RemoveAskFor when QFCOMMITMENT was received
Otherwise we might end up re-requesting it for a very long time when the
commitment INV was received shortly before it got mined.
* Call RemoveSigSharesForSession when a recovered sig is received
Otherwise we end up with session data in node states lingering around until
a fake "timeout" occurs (can be seen in the logs).
* Better handling of false-positive conflicts in CSigningManager
The old code was emitting a lot of messages in logs as it treated sigs
for exactly the same session as a conflict. This commit fixes this by
looking at the signHash before logging.
Also handle a corner-case where a recovered sig might be deleted between
the HasRecoveredSigForId and GetRecoveredSigById call.
* Don't run into session timeout when sig shares come in slow
Instead of just tracking when the first share was received, we now also
track when the last (non-duplicate) share was received. Sessions will now
time out 5 minutes after the first share arrives, or 1 minute after the last
one arrived.
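A sketch of that rule (constant and variable names are illustrative): a session expires 5 minutes after the first share, or once no new share has arrived for 1 minute, whichever comes first.
```cpp
#include <cstdint>

bool IsSessionTimedOut(int64_t firstSeenTime, int64_t lastSeenTime, int64_t now)
{
    const int64_t TOTAL_TIMEOUT = 5 * 60;  // seconds since the first share
    const int64_t NEW_SHARES_TIMEOUT = 60; // seconds since the last new share
    return now - firstSeenTime >= TOTAL_TIMEOUT ||
           now - lastSeenTime >= NEW_SHARES_TIMEOUT;
}
```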
Instead of trying to manually figure out params for different quorum/ring
sizes, connect to nodes at indexes (i+2^k)%n, where k: 0..floor(log2(n-1))-1
and n: size of the quorum/ring.
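A small sketch of that formula (a hypothetical helper, not the actual CLLMQUtils code): for ring size n, member i connects to the members at indexes (i+2^k)%n for k = 0..floor(log2(n-1))-1.
```cpp
#include <cmath>
#include <cstddef>
#include <set>

std::set<size_t> CalcRingConnectionsSketch(size_t n, size_t i)
{
    std::set<size_t> result;
    if (n < 2) return result;
    int kMax = (int)std::floor(std::log2(n - 1)) - 1; // inclusive upper bound for k
    size_t gap = 1;                                   // 2^k, starting at k = 0
    for (int k = 0; k <= kMax; k++) {
        result.insert((i + gap) % n);
        gap <<= 1;
    }
    return result;
}
```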
* Implement and use SigShareMap instead of ordered map with helper methods
The old implementation was relying on the maps being ordered, which allowed
us to grab all sig shares for the same signHash by doing range queries on
the map. This has the disadvantage of being unnecessarily slow when the
maps get larger. Using an unordered map would be the naive solution, but
then it's not possible to query by range anymore.
The solution now is to have a specialized map "SigShareMap" which is
indexed by "SigShareKey". Internally it's just an unordered map indexed by
the sign hash, with another unordered map as the value, indexed by the
quorum member index.
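A simplified sketch of that structure (types reduced for illustration, not the real SigShareMap): an unordered map keyed by the sign hash whose value is another unordered map keyed by the quorum member index, so both single lookups and "all shares for this signHash" queries stay cheap without relying on ordered ranges.
```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

template <typename T>
class SigShareMapSketch
{
    // signHash -> (quorum member index -> share)
    std::unordered_map<std::string, std::unordered_map<uint16_t, T>> internalMap;

public:
    bool Add(const std::string& signHash, uint16_t member, const T& share)
    {
        // Returns false if this member's share was already present.
        return internalMap[signHash].emplace(member, share).second;
    }

    const std::unordered_map<uint16_t, T>* GetAllForSignHash(const std::string& signHash) const
    {
        auto it = internalMap.find(signHash);
        return it != internalMap.end() ? &it->second : nullptr;
    }

    size_t EraseAllForSignHash(const std::string& signHash)
    {
        return internalMap.erase(signHash);
    }
};
```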
* Only use unordered maps/sets in CSigSharesManager
These are faster when maps/sets get larger.
* Use unordered sets/maps in CSigningManager
* Don't sleep in WorkThreadMain when CPU intensive work was done
When the current iteration resulted in CPU intensive work, it's likely that
the next iteration will result in work as well. Do not sleep in that case,
as we're otherwise wasting (unused) CPU resources.
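A sketch of the loop behaviour (the callback stands in for the actual work processing): skip the sleep whenever the previous iteration did real work, since more work is likely queued.
```cpp
#include <atomic>
#include <chrono>
#include <functional>
#include <thread>

void WorkThreadMainSketch(std::atomic<bool>& stopRequested,
                          const std::function<bool()>& processPendingWork)
{
    while (!stopRequested) {
        // processPendingWork() returns true when CPU intensive work was done.
        bool didWork = processPendingWork();
        if (!didWork) {
            // Nothing to do: avoid busy-looping.
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
        // If work was done, immediately start the next iteration.
    }
}
```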
* No matter how fast we process sig shares, always force 100ms between sending
* Apply review suggestions
This removes the burden on the message handler thread when many sig batches
arrive. The expensive part of deserialization is now performed in the sig
shares worker thread.
This also removes the need for the specialized deserialization of the sig
shares which tried to avoid the malleability check, as CBLSLazySignature does
not perform malleability checks at all.
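A simplified sketch of the "lazy" idea, not the real CBLSLazySignature (the Sig type and its SetBuf method are assumed): the message handler thread only copies the raw wire bytes, and the expensive deserialization happens on first access from the worker thread.
```cpp
#include <array>
#include <cstdint>

template <typename Sig, size_t RAW_SIZE>
class LazySignatureSketch
{
    std::array<uint8_t, RAW_SIZE> raw{}; // bytes exactly as received from the wire
    mutable bool decoded = false;
    mutable Sig sig;

public:
    void SetRawBytes(const std::array<uint8_t, RAW_SIZE>& bytes)
    {
        raw = bytes;
        decoded = false;
    }

    const Sig& Get() const
    {
        if (!decoded) {
            // Assumed API of the signature type: parse from the raw buffer.
            sig.SetBuf(raw.data(), raw.size());
            decoded = true;
        }
        return sig;
    }
};
```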
* Implement secure verification in bls_batchverifier
* Rename CBLSInsecureBatchVerifier to CBLSBatchVerifier
* Add unit tests for simple BLS verification and CBLSBatchVerifier
* Store quorumHash of first mined commitment in evoDb
This allows skipping the scan for quorums below this block.
* Speed up CQuorumManager::ScanQuorums
This does 2 things:
1. Only call HasQuorum for blocks that are potentially a quorumBlockHash.
   These are only blocks which are at index 0 of each DKG interval.
2. Stop scanning for quorums when we get below the first block that
   contained a commitment. If no commitment was ever mined, we bail out
   immediately.
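A small sketch of the resulting scan (helper and parameter names are illustrative, not the actual CQuorumManager code): only heights at index 0 of a DKG interval are candidates, and the walk stops once it passes the quorum block of the first mined commitment.
```cpp
#include <vector>

// startHeight: height of pindexStart; dkgInterval: blocks per DKG cycle;
// firstQuorumHeight: height of the quorum (phase 0) block referenced by the
// first commitment ever mined, or -1 if no commitment was mined yet.
std::vector<int> CandidateQuorumHeightsSketch(int startHeight, int dkgInterval, int firstQuorumHeight)
{
    std::vector<int> result;
    if (firstQuorumHeight < 0) {
        return result; // no commitment ever mined: bail out immediately
    }
    // Round down to index 0 of the DKG interval containing startHeight.
    int h = startHeight - (startHeight % dkgInterval);
    for (; h >= firstQuorumHeight; h -= dkgInterval) {
        result.push_back(h); // only these heights can be a quorumBlockHash
    }
    return result;
}
```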
* Return result instead of {}
* Remove HasQuorum() call as GetQuorum already does this
* Remove unnecessary "if (!qc.IsNull())"
It's already checked at the top of the loop
* When necessary, remove DB_FIRST_MINED_COMMITMENT from evoDb in UndoBlock
* Check aggPubKey for IsValid() instead of aggSig
aggSig is not reliable here as it might already be initialized by the
previous message.
* Significantly reduce sleep time for each DKG phase
Turns out the DKG is much faster than expected, and waiting multiple
minutes for each phase in a devnet is not much fun.
* Correctly use SIGN_HEIGHT_OFFSET when checking for out of bound height
* Introduce startBlockHeight to make things more explicit
* Allow sub-batch verification in CBLSInsecureBatchVerifier
* Implement batch verification of CDKGDebugStatus messages
* Use uint8_t for statusBitset in CDKGDebugMemberStatus and CDKGDebugSessionStatus
No need to waste one byte per member and per LLMQ type.
* Reserve 4k of buffer for CSerializedNetMsg buffer
Profiling has shown that a lot of time is spent in resizing the data
vector when large messages are involved.
* Remove nHeight from CDKGDebugStatus
This field changes every block and causes all masternodes to propagate
their status for every block, even if nothing DKG related has changed.
* Leave out session statuses when we're not a member of that session
Otherwise MNs which are not members of DKG sessions will spam the network
* Remove receivedFinalCommitment from CDKGDebugSessionStatus
This is not bound to a session and thus is prone to spam the network when
final commitments are propagated in the finalization phase.
* Add "minableCommitments" to "quorum dkgstatus"
* Hold cs_main while calling GetMinableCommitment
* Abort processing of pending debug messages when spork18 gets disabled
* Don't ask for debug messages when we've already seen them
"statuses" only contains the current messages but none of the old messages,
so nodes kept re-requesting old messages.