Stop relying on the information previously found in the CSigSharesInv
and CBatchedSigShares messages and instead use the information found in
the session referenced by the session id.
This also updates a few LogPrintf calls. Previously, CSigSharesInv::ToString
also included the signHash in the returned string, which is not the case
anymore, so we have to add it manually.
We must be careful not to blindly use externally provided keys in unordered
sets/maps, as attackers might find ways to create unbalanced hash buckets
and degrade performance.
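For illustration only (the project uses its own salted hashers; this simple XOR mix is not the real implementation), keys coming from untrusted sources can be salted with per-process randomness before hashing so that bucket placement cannot be predicted:

```cpp
#include <cstdint>
#include <functional>
#include <random>
#include <unordered_map>

// Hypothetical sketch of a salted hasher: the salt is drawn once per process,
// so an attacker cannot precompute keys that land in the same bucket.
struct SaltedUint64HasherSketch {
    uint64_t salt;
    SaltedUint64HasherSketch() : salt(std::random_device{}()) {}
    size_t operator()(uint64_t key) const
    {
        // Simple illustration only; real code would use a keyed hash function.
        return std::hash<uint64_t>{}(key ^ salt);
    }
};

// Example: an unordered map keyed by an externally provided 64-bit id.
using SaltedMapSketch = std::unordered_map<uint64_t, int, SaltedUint64HasherSketch>;
```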
* Indicate success when signing was unnecessary
* Fix typo in name of LLMQ_400_60
* Move RemoveAskFor call for CLSIGs into ProcessNewChainLock
In case we got INV items for the same CLSIG that we recreated through
HandleNewRecoveredSig, (re-)requesting the CLSIG from other peers
becomes unnecessary.
* Move Cleanup() call in CChainLocksHandler::UpdatedBlockTip up
We bail out early from this method in a few situations, so Cleanup() might
not be called while it's at the bottom.
* Bail out from CChainLocksHandler::UpdatedBlockTip if we already got the CLSIG
* Call RemoveAskFor when QFCOMMITMENT was received
Otherwise we might end up re-requesting it for a very long time when the
commitment INV was received shortly before it got mined.
* Call RemoveSigSharesForSession when a recovered sig is received
Otherwise we end up with session data in node states lingering around until
a fake "timeout" occurs (can be seen in the logs).
* Better handling of false-positive conflicts in CSigningManager
The old code was emitting a lot of messages in logs as it treated sigs
for exactly the same session as a conflict. This commit fixes this by
looking at the signHash before logging.
Also handle a corner-case where a recovered sig might be deleted between
the HasRecoveredSigForId and GetRecoveredSigById call.
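A minimal sketch of both points, with simplified stand-in types (names and the compared field are assumptions, not the real CSigningManager API):

```cpp
#include <string>

// Minimal stand-in interface; the real CSigningManager has richer types.
struct RecoveredSigSketch {
    std::string msgHash;
};
struct SigningManagerSketch {
    bool HasRecoveredSigForId(const std::string& id) const;
    bool GetRecoveredSigById(const std::string& id, RecoveredSigSketch& ret) const;
};

// The recovered sig may be deleted between the existence check and the lookup,
// so the lookup result must be checked instead of being assumed to succeed.
// Only a real mismatch is treated (and logged) as a conflict.
bool IsConflictingSketch(const SigningManagerSketch& sigman,
                         const std::string& id, const std::string& msgHash)
{
    if (!sigman.HasRecoveredSigForId(id)) {
        return false;
    }
    RecoveredSigSketch recSig;
    if (!sigman.GetRecoveredSigById(id, recSig)) {
        // Deleted in between (e.g. by cleanup); not a conflict, just bail out.
        return false;
    }
    return recSig.msgHash != msgHash; // same hash means same session, not a conflict
}
```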
* Don't run into session timeout when sig shares come in slow
Instead of just tracking when the first share was received, we now also
track when the last (non-duplicate) share was received. Sessions will now
time out 5 minutes after the first share arrives, or 1 minute after the last
one arrived.
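A rough sketch of the new timeout rule (constants and field names are illustrative, not the actual implementation):

```cpp
#include <cstdint>

// Illustrative only: a session is considered timed out 5 minutes after the
// first share arrived, or 1 minute after the last (non-duplicate) share
// arrived, whichever happens first.
struct SessionTimesSketch {
    int64_t firstSeenTime{0}; // set when the first share for the session arrives
    int64_t lastSeenTime{0};  // updated for every new (non-duplicate) share
};

bool IsSessionTimedOutSketch(const SessionTimesSketch& s, int64_t now)
{
    const int64_t TOTAL_TIMEOUT = 5 * 60;  // 5 minutes from the first share
    const int64_t INACTIVITY_TIMEOUT = 60; // 1 minute without new shares
    return (now - s.firstSeenTime) > TOTAL_TIMEOUT ||
           (now - s.lastSeenTime) > INACTIVITY_TIMEOUT;
}
```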
Instead of trying to manually figure out params for different quorum/ring sizes, connect to nodes at indexes (i+2^k) % n, where k = 0..floor(log2(n-1))-1 and n is the size of the quorum/ring.
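For illustration, the resulting connection set can be computed as in this standalone sketch (not the project's actual helper):

```cpp
#include <cmath>
#include <cstddef>
#include <set>

// Sketch: compute the quorum-relative indexes a member at index i connects to,
// i.e. (i + 2^k) % n for k = 0..floor(log2(n-1))-1.
std::set<size_t> CalcQuorumConnectionsSketch(size_t n, size_t i)
{
    std::set<size_t> result;
    if (n <= 1) return result;
    const size_t kMax = (size_t)std::floor(std::log2(n - 1));
    for (size_t k = 0; k < kMax; ++k) {
        result.insert((i + ((size_t)1 << k)) % n);
    }
    return result;
}
// Example: for n = 10, i = 0 this yields {1, 2, 4} (offsets 2^0, 2^1, 2^2).
```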
* Implement and use SigShareMap instead of ordered map with helper methods
The old implementation was relying on the maps being ordered, which allowed
us to grab all sig shares for the same signHash by doing range queries on
the map. This has the disadvantage of being unnecessarily slow when the
maps get larger. Using an unordered map would be the naive solution, but
then it's not possible to query by range anymore.
The solution now is to have a specialized map "SigShareMap" which is
indexed by "SigShareKey". Internally it is just an unordered map indexed by
the sign hash, whose values are in turn unordered maps indexed by the
quorum member index.
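The described layout roughly corresponds to this simplified, self-contained sketch (the real class has more helpers and uses the project's hash types):

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

// Rough sketch: an outer unordered map keyed by the sign hash, with an inner
// unordered map keyed by the quorum member index. Lookups stay O(1) on
// average, and all shares for one signHash can still be fetched as a single
// inner map (replacing the old ordered-map range queries).
template <typename T>
class SigShareMapSketch
{
    // signHash -> (quorum member index -> value)
    std::unordered_map<std::string, std::unordered_map<uint16_t, T>> internalMap;

public:
    bool Add(const std::string& signHash, uint16_t member, const T& value)
    {
        return internalMap[signHash].emplace(member, value).second;
    }
    const std::unordered_map<uint16_t, T>* GetAllForSignHash(const std::string& signHash) const
    {
        auto it = internalMap.find(signHash);
        return it == internalMap.end() ? nullptr : &it->second;
    }
};
```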
* Only use unordered maps/sets in CSigSharesManager
These are faster when maps/sets get larger.
* Use unordered sets/maps in CSigningManager
* Don't sleep in WorkThreadMain when CPU intensive work was done
When the current iteration resulted in CPU intensive work, it's likely that
the next iteration will result in work as well. Do not sleep in that case,
as we would otherwise leave available CPU resources unused.
* No matter how fast we process sig shares, always force 100ms between sending
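Taken together, the two points above (skip the sleep after CPU intensive iterations, and keep at least 100ms between send rounds) could look roughly like this; the helper names are hypothetical, not the actual thread function:

```cpp
#include <chrono>
#include <thread>

// Hypothetical helpers standing in for the real processing/sending steps.
bool ProcessPendingSigSharesSketch(); // returns true if CPU intensive work was done
void SendMessagesSketch();

void WorkThreadMainSketch(const bool& stopRequested)
{
    auto lastSendTime = std::chrono::steady_clock::now();
    while (!stopRequested) {
        const bool didWork = ProcessPendingSigSharesSketch();

        const auto now = std::chrono::steady_clock::now();
        if (now - lastSendTime >= std::chrono::milliseconds(100)) {
            // No matter how fast shares are processed, send at most every 100ms.
            SendMessagesSketch();
            lastSendTime = now;
        }

        if (!didWork) {
            // Only sleep when the iteration was idle; when work was done, the
            // next iteration will likely find more, so loop again immediately.
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
    }
}
```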
* Apply review suggestions
This takes the burden off the message handler thread when many sig batches
arrive. The expensive part of deserialization is now performed in the sig
shares worker thread.
This also removes the need for the specialized deserialization of the sig
shares which tried to avoid the malleability check, as CBLSLazySignature does
not perform malleability checks at all.
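The lazy idea can be sketched as follows (a generic stand-in, not the real CBLSLazySignature; sizes and types are assumptions):

```cpp
#include <array>
#include <cstdint>
#include <functional>

// Simplified sketch of "lazy" deserialization: the message handler thread only
// stores the raw bytes; the expensive parsing (and any validity checks) is
// deferred to the worker thread that first accesses the parsed object.
template <typename Parsed>
class LazyWireObjectSketch
{
    std::array<uint8_t, 96> buf{}; // raw serialized bytes (96 is illustrative)
    mutable bool parsed{false};
    mutable Parsed obj{};

public:
    void SetBuf(const std::array<uint8_t, 96>& b) { buf = b; parsed = false; }

    const Parsed& Get(const std::function<Parsed(const std::array<uint8_t, 96>&)>& parse) const
    {
        if (!parsed) {
            obj = parse(buf); // deferred, happens off the message handler thread
            parsed = true;
        }
        return obj;
    }
};
```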
* Implement secure verification in bls_batchverifier
* Rename CBLSInsecureBatchVerifier to CBLSBatchVerifier
* Add unit tests for simple BLS verification and CBLSBatchVerifier
* Store quorumHash of first mined commitment in evoDb
This allows us to skip scanning for quorums below this block.
* Speed up CQuorumManager::ScanQuorums
This does 2 things:
1. Only call HasQuorum for blocks that are potentially a quorumBlockHash
These are only blocks which are at index 0 of each DKG interval
2. Stop scanning for quorums when we get below the first block that
contained a commitment. If no commitment was ever mined, we bail out
immediately.
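The two points above could be sketched like this (helper names and signature are assumptions, not the exact CQuorumManager API):

```cpp
#include <vector>

// Illustrative sketch: walk backwards from the start height, probing only
// heights that can be a quorum start (index 0 of each DKG interval), and stop
// once we get below the height of the first mined commitment.
std::vector<int> ScanQuorumStartHeightsSketch(int startHeight, int dkgInterval,
                                              int firstMinedCommitmentHeight,
                                              size_t maxCount)
{
    std::vector<int> result;
    if (firstMinedCommitmentHeight < 0) {
        return result; // no commitment was ever mined, bail out immediately
    }
    // Round down to the start of the DKG interval containing startHeight.
    int h = startHeight - (startHeight % dkgInterval);
    for (; h >= firstMinedCommitmentHeight && result.size() < maxCount; h -= dkgInterval) {
        result.push_back(h); // candidate quorumBlockHash heights to check
    }
    return result;
}
```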
* Return result instead of {}
* Remove HasQuorum() call as GetQuorum already does this
* Remove unnecessary "if (!qc.IsNull()))"
It's already checked at the top of the loop
* When necessary, remove DB_FIRST_MINED_COMMITMENT from evoDb in UndoBlock
* Check aggPubKey for IsValid() instead of aggSig
aggSig is not reliable here as it might already be initialized by the
previous message.
* Significantly reduce sleep time for each DKG phase
Turns out the DKG is much faster than expected, and waiting multiple
minutes for each phase in a devnet is not much fun.
* Correctly use SIGN_HEIGHT_OFFSET when checking for out of bound height
* Introduce startBlockHeight to make things more explicit
* Allow sub-batch verification in CBLSInsecureBatchVerifier
* Implement batch verification of CDKGDebugStatus messages
* Use uint8_t for statusBitset in CDKGDebugMemberStatus and CDKGDebugSessionStatus
No need to waste one byte per member and per LLMQ type.
* Reserve 4k of buffer for CSerializedNetMsg buffer
Profiling has shown that a lot of time is spent in resizing the data
vector when large messages are involved.
* Remove nHeight from CDKGDebugStatus
This field changes every block and causes all masternodes to propagate
their status for every block, even if nothing DKG-related has changed.
* Leave out session statuses when we're not a member of that session
Otherwise MNs which are not members of DKG sessions will spam the network
* Remove receivedFinalCommitment from CDKGDebugSessionStatus
This is not bound to a session and thus is prone to spam the network when
final commitments are propagated in the finalization phase.
* Add "minableCommitments" to "quorum dkgstatus"
* Hold cs_main while calling GetMinableCommitment
* Abort processing of pending debug messages when spork18 gets disabled
* Don't ask for debug messages when we've already seen them
"statuses" only contains the current messages but none of the old messages,
so nodes kept re-requesting old messages.
* Use fast_dip3_enforcement instead of fast_dip3_activation
DashTestFramework was refactored before ChainLocks got merged, causing tests
to fail now.
* Move updating of DKG debug status into WaitForNextPhase
Otherwise callers of the RPCs might believe that the next phase has already
started and start producing more blocks, which would then cancel the
current session if it happens faster than the phase handler thread can
progress to the next phase.
* Fix off-by-1 in phase calculations
* Fix wait_for_quorum_phase, should look for check_received_messages
* Fix wait_for_quorum_phase for complain phase
* Bump default timeout in wait_for_quorum_phase/wait_for_quorum_commitment to 15
* Fix cleanup of old recovered sigs
When iterating the db, we should also include entries that exactly match
the end time.
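A hedged sketch of the boundary condition, with a std::map standing in for the time-indexed db entries (names are illustrative):

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Simplified sketch: when collecting entries to delete, keys equal to the end
// time must be included, i.e. the effective comparison is <= rather than <.
std::vector<uint64_t> CollectExpiredSketch(const std::map<uint64_t, uint64_t>& timeIndex,
                                           uint64_t endTime)
{
    std::vector<uint64_t> toDelete;
    for (const auto& [writeTime, sigId] : timeIndex) {
        if (writeTime > endTime) {
            break; // stop only when strictly past endTime; entries at endTime are included
        }
        toDelete.push_back(sigId);
    }
    return toDelete;
}
```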
* Fix key not found error
* Raise AssertionError in case wait_for_quorum_phase/wait_for_quorum_commitment time out
* Fix confusion: `quorumHash` is both a class member and an argument of a function
Rename `height` too while at it
* Make sure height and hash we pass to InitNewQuorum are related
* Don't update expectedQuorumHash, make it const
This also streamlines logic a bit
* Compact phase calculation
* Decouple invCs and cs_vPendingMasternodes
Not an issue atm but we'd better avoid any potential interlocking if possible
* wrap `%` in `()`
Co-Authored-By: UdjinM6 <UdjinM6@users.noreply.github.com>
* Switch GetQuorumBlockHash from CBlockIndex* to nHeight
* `pindexPrev -> pindex` for ProcessCommitment
* Switch IsCommitmentRequired from CBlockIndex* to block height
* Switch GetMinableCommitment/Tx from CBlockIndex* to block height
* Add `AssertLockHeld(cs_main);`
Co-Authored-By: UdjinM6 <UdjinM6@users.noreply.github.com>
* Implement creation and propagation of dummy contributions
These act as a ping which is broadcast a few blocks before the dummy
commitments are created. They are meant to determine online/offline members.
* Use information about received dummy contributions to determine validMembers
* Fix PoSe tests
* Fix dummy DKG phase progress in PoSe tests and give tests more time
Mine one block at a time until we reach the mining phase.
* Deserialize CFinalCommitmentTxPayload instead of CFinalCommitment in TxToJSON
* Implement ToJson for CFinalCommitmentTxPayload and use it in TxToJSON
Otherwise its nVersion and nHeight members are not shown.
* Allow to skip sig verification for CFinalCommitment::Verify
* Add CFinalCommitmentTxPayload and CheckLLMQCommitment and use it
As described in https://github.com/dashpay/dips/pull/31 (see discussion).
* Properly ban nodes for invalid commitments
* Add SPORK_17_QUORUM_DKG_ENABLED spork
* Implement CDummyDKG and CDummyCommitment until we have the real DKG merged
This is only used on testnet/devnet/regtest and will NEVER be used on
mainnet. It is NOT SECURE AT ALL!
See comment in quorums_dummydkg.h for more details.
* Test simple PoSe in DIP3 tests
* Generate 2 instead of 4 blocks per iteration in PoSe tests
4 was based on old chainparams where I used larger phases.
* Only sleep when necessary in PoSe tests
* Fix typo in comment
* Give PoSe tests more time and sync after fast-forward
* Add LLMQ parameters to consensus params
* Add DIP6 quorum commitment special TX
* Implement CQuorumBlockProcessor which validates and handles commitments
* Add quorum commitments to new blocks
* Propagate QFCOMMITMENT messages to all nodes
* Allow special transactions in blocks which have no inputs/outputs
But only for TRANSACTION_QUORUM_COMMITMENT for now.
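A sketch of the relaxed rule (the constant value and naming are assumptions made for illustration):

```cpp
#include <cstdint>

// Illustrative constant; the real value lives in the special-tx definitions.
static const uint16_t TRANSACTION_QUORUM_COMMITMENT_SKETCH = 6;

// Sketch: a transaction with neither inputs nor outputs is only acceptable
// when it is a quorum commitment special transaction.
bool IsEmptyTxAllowedSketch(bool hasInputs, bool hasOutputs, uint16_t nType)
{
    if (hasInputs || hasOutputs) return true; // normal transactions unaffected
    return nType == TRANSACTION_QUORUM_COMMITMENT_SKETCH;
}
```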
* Add quorum commitments to self-crafted blocks in DIP3 tests
* Add simple fork logic for current testnet
This should avoid a fork on the current testnet. It only applies to the
current chain which activated DIP3 at height 264000 and block
00000048e6e71d4bd90e7c456dcb94683ae832fcad13e1760d8283f7e89f332f.
When we revert the chain to retest the DIP3 deployment, this fork logic
can be removed again.
* Use quorumVvecHash instead of quorumHash to make null commitments unique
Implementation of https://github.com/dashpay/dips/pull/31
* Re-add quorum commitments after pruning mempool selected blocks
* Refactor CQuorumBlockProcessor::ProcessBlock to have less nested if/else statements
Also add BEGIN/END markers for temporary code.
* Add comments/documentation to LLMQParams
* Move code which determines if a commitment is required into IsCommitmentRequired
This should make the code easier to read and also removes some duplication.
This also reduces the possible error types from 3 to 2: instead of having
"bad-qc-already-mined" and "bad-qc-not-mining-phase", there is only
"bad-qc-not-allowed" now.
* Use new parameter from consensus params for the temporary fork