* Allow modifying simulated DKG error rates via RPC
* Don't lie to yourself :)
* Add some missing new-lines in LogPrintf calls
* More fine grained control over which messages to expect in mine_quorum
* Implement llmq-dkgerrors.py integration tests
These test DKG errors and malicious behavior.
* Move code to write archived ISLOCKs into its own method
We'll need this from another method as well later.
* Return ISLOCK instead of conflicting txid in GetConflictingTx/GetConflictingLock
* Implement GetInstantSendLocksByParent and RemoveChainedInstantSendLocks
These allow us to easily delete multiple chains (actually trees) of ISLOCKs
in one go.
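A minimal sketch of the chained removal, assuming an illustrative parent-txid index; the container layout and all names (ISLockStore, byParent, lockTxids) are stand-ins for this example, not the actual Dash Core data structures:

```cpp
// Illustrative only: walk the ISLOCK "tree" rooted at a txid and remove every
// descendant lock in one pass. Container layout and names are assumptions,
// not the actual Dash Core implementation.
#include <string>
#include <unordered_map>
#include <vector>

using TxId = std::string;      // stand-in for uint256
using LockHash = std::string;  // stand-in for uint256

struct ISLockStore {
    // parent txid -> hashes of ISLOCKs whose TX spends an output of that parent
    std::unordered_multimap<TxId, LockHash> byParent;
    // ISLOCK hash -> txid that the lock covers
    std::unordered_map<LockHash, TxId> lockTxids;

    std::vector<LockHash> GetInstantSendLocksByParent(const TxId& parent) const
    {
        std::vector<LockHash> result;
        auto range = byParent.equal_range(parent);
        for (auto it = range.first; it != range.second; ++it) {
            result.push_back(it->second);
        }
        return result;
    }

    void RemoveChainedInstantSendLocks(const TxId& rootTxid)
    {
        std::vector<TxId> stack{rootTxid};
        while (!stack.empty()) {
            TxId cur = stack.back();
            stack.pop_back();
            for (const LockHash& h : GetInstantSendLocksByParent(cur)) {
                auto it = lockTxids.find(h);
                if (it == lockTxids.end()) continue;
                stack.push_back(it->second); // descend into this lock's children next
                lockTxids.erase(it);         // drop the lock itself
            }
            byParent.erase(cur); // no more children hang off this txid
        }
    }
};
```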
* Implement RemoveConflictedTx and call it from RemoveMempoolConflictsForLock
Also add "retryChildren" parameter to RemoveNonLockedTx so that we can
skip retrying of non-locked children TXs.
* Properly handle/remove conflicted TXs (between mempool and new blocks)
* Track non-locked TXs by inputs
* Implement and call ResolveBlockConflicts
* Also call ResolveBlockConflicts from ConnectBlock
But only when a block is known to have a conflict and at the same time is
ChainLocked, which causes the ISLOCK to be pruned.
* Split out RemoveChainLockConflictingLock from ResolveBlockConflicts
* Implement "quorum getrecsig" RPC
* Include decoded TX data in result of create_raw_tx
* Implement support for CLSIG in mininode.py
* Fix condition for update of nonLockedTxs.pindexMined
* Only add entries to nonLockedTxsByInputs when AddNonLockedTx is called for the first time
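A rough sketch of the by-input tracking described in the items above; the names (nonLockedTxs, nonLockedTxsByInputs, NonLockedTxInfo) and the simplified types are assumptions for illustration only:

```cpp
// Illustrative sketch of indexing not-yet-locked TXs by the outpoints they
// spend so conflicting spends can be found quickly, and of only filling that
// index on the first AddNonLockedTx call. All names and simplified types are
// assumptions for this example.
#include <cstdint>
#include <map>
#include <string>
#include <utility>
#include <vector>

using TxId = std::string;                    // stand-in for uint256
using OutPoint = std::pair<TxId, uint32_t>;  // stand-in for COutPoint

struct NonLockedTxInfo {
    std::vector<OutPoint> inputs;
    // the real code also keeps the tx itself, pindexMined, children, ...
};

std::map<TxId, NonLockedTxInfo> nonLockedTxs;
std::multimap<OutPoint, TxId> nonLockedTxsByInputs;

void AddNonLockedTx(const TxId& txid, const std::vector<OutPoint>& inputs)
{
    auto res = nonLockedTxs.emplace(txid, NonLockedTxInfo{inputs});
    if (!res.second) {
        return; // already known: don't add duplicate by-input entries
    }
    // Only populate the input index the first time we see this TX
    for (const auto& in : inputs) {
        nonLockedTxsByInputs.emplace(in, txid);
    }
}

// Find non-locked TXs spending the same outpoint, i.e. conflicting with `in`
std::vector<TxId> FindNonLockedConflicts(const OutPoint& in)
{
    std::vector<TxId> result;
    auto range = nonLockedTxsByInputs.equal_range(in);
    for (auto it = range.first; it != range.second; ++it) {
        result.push_back(it->second);
    }
    return result;
}
```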
* Implement support for ISLOCK in mininode.py
* Implement tests for ChainLock vs InstantSend lock conflict resolution
* Handle review comment
Bail out (continue) early
e722777 fix logging in nulldummy and proxy_test (John Newbery)
1f70653 Use log.info() instead of print() in importmulti.py (John Newbery)
Tree-SHA512: 0e58f0a970cd93bc1e9d73c6f53ca0671b0c5135cbf92e97d8563bd8a063679bf04f8bde511c275d5f84036aed32f70d3d03679a92688952b46dc97929e0405c
7759aa2 Save watch only key timestamps when reimporting keys (Russell Yanofsky)
Tree-SHA512: 433b5a78e5626fb2f3166e6c84c22eabd5239d451dc82694da95af237e034612a24f1a8bc959b7d2f2e576ce0b679be1fa4af929ebfae758c7e832056ab67061
9576b01 Enable xvfb in travis to allow running test_bitcoin-qt (Russell Yanofsky)
9e6817e Add new test_bitcoin-qt static library dependencies (Russell Yanofsky)
2754ef1 Add simple qt wallet test sending a transaction (Russell Yanofsky)
b61b34c Add braces to if statements in Qt test_main (Russell Yanofsky)
cc9503c Make qt test compatible with TestChain100Setup framework (Russell Yanofsky)
91e3035 Make test_bitcoin.cpp compatible with Qt Test framework (Russell Yanofsky)
Tree-SHA512: da491181848b8c39138e997ae5ff2df0b16eef2d9cdd0a965229b1a28d4fa862d5f1ef314a1736e5050e88858f329124d15c689659fc6e50fefde769ba24e523
remove line, testing
bitcoin -> dash, testing
bitcoin -> dash, testing
resolve name conflict, testing
bitcoin -> dash
re-add test fixture line
code review, fix tests
Signed-off-by: Pasta <Pasta@dash.org>
move ExceptionInitializer into test_dash_main.cpp
remove witness from nulldummy.py
Signed-off-by: Pasta <Pasta@dash.org>
change error text to match expected
Signed-off-by: Pasta <Pasta@dash.org>
* Also test conflicts in mempool instead of only in blocks
* Ask for locked TXs after removing conflicting TXs
When we remove a conflicting TX from the mempool, the correct/locked TX
is not available locally, as the first-seen rule would have filtered it out
before. We need to re-request this TX if any other node announced it earlier.
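A conceptual sketch of that re-request step, assuming a hypothetical per-TX announcer map; none of these names come from the actual networking code:

```cpp
// Conceptual sketch only: remember which peers announced a TX so that, after a
// conflicting mempool TX has been removed, the now-wanted (locked) TX can be
// re-requested from one of them. Names and the logging stand in for the real
// networking code and are assumptions.
#include <cstdint>
#include <cstdio>
#include <map>
#include <set>
#include <string>

using TxId = std::string;  // stand-in for uint256
using NodeId = int64_t;

std::map<TxId, std::set<NodeId>> txAnnouncers;

void RememberAnnouncement(const TxId& txid, NodeId from)
{
    txAnnouncers[txid].insert(from);
}

// Hypothetical hook, called after the conflicting TX was evicted from the mempool
void ReRequestLockedTx(const TxId& lockedTxid)
{
    auto it = txAnnouncers.find(lockedTxid);
    if (it == txAnnouncers.end() || it->second.empty()) {
        return; // nobody announced it yet; wait for a future INV
    }
    NodeId peer = *it->second.begin();
    // The real code would queue a request (e.g. GETDATA) to `peer` here
    std::printf("re-requesting %s from peer %lld\n", lockedTxid.c_str(), (long long)peer);
}
```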
* Apply suggestions from code review
Co-Authored-By: codablock <ablock84@gmail.com>
* Trivial: vout->txout
* Re-use SetHexStr in few more places
* Tweak log output
* fix v13 release notes links
* Drop no longer used stuff
* Few more trivial fixes
* Adjust few rpc help strings
* Apply review suggestions
* Harden DIP3 activation height
Also drop all related but no longer used parts.
* Pass current block index to GetCommitmentsFromBlock
* Allow to change dip3 activation height for tests
And fix them.
generate() will push INV messages to all nodes, including our test_node.
The original check_last_announcement() call done here will then
sporadically return False when the INV message is received shortly after
clear_last_announcement().
The solution is to check for the INV announcement first and then continue
with the test.
We'll later need this method to calculate merkle roots outside of CBlock.
I'd like to avoid moving this code outside of CBlock as it might later
conflict with Bitcoin backports.
* Don't rely on UTXO set in CheckCanLock
The UTXO set only works for TXs in the mempool and won't work when we try
to retroactively lock unlocked TXs from blocks.
This is safe as ProcessTx is only called when a TX was accepted into the
mempool or connected in a block, which means that all input checks were
good.
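A simplified sketch of what such a check could look like when it relies on ISLOCK status and mined depth of the parents instead of the UTXO set; the placeholder state and the confirmation constant are assumptions, not the real CInstantSendManager::CheckCanLock:

```cpp
// Simplified sketch: a TX is lockable if every parent is either ISLOCKed or
// mined deep enough; the UTXO set is not consulted at all. The helper maps and
// the constant are assumptions for this example.
#include <map>
#include <string>
#include <vector>

using TxId = std::string;

static const int INSTANTSEND_CONFIRMATIONS_REQUIRED = 6; // assumed value for the sketch

std::map<TxId, bool> islockedTxs; // txid -> do we have an ISLOCK for it?
std::map<TxId, int> txDepth;      // txid -> 0 = mempool only, N = mined N blocks deep

bool CheckCanLock(const std::vector<TxId>& parentTxids)
{
    for (const auto& parent : parentTxids) {
        auto lit = islockedTxs.find(parent);
        if (lit != islockedTxs.end() && lit->second) {
            continue; // locked parents are always acceptable
        }
        auto dit = txDepth.find(parent);
        if (dit == txDepth.end() || dit->second < INSTANTSEND_CONFIRMATIONS_REQUIRED) {
            return false; // parent neither locked nor confirmed deep enough
        }
    }
    // Input validity itself is not re-checked here: ProcessTx only runs for TXs
    // that were already accepted into the mempool or connected in a block.
    return true;
}
```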
* Rename RetryLockMempoolTxs to RetryLockTxs and let it retry connected TXs
* Instead of manually calling ProcessTx, let SyncTransaction handle all cases
SyncTransaction is called from AcceptToMemoryPool and when transactions get
connected in a block, so this is the time we want to run TXs through
ProcessTx. This also enables retroactive signing of TXs that were unknown
before a new block appeared.
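A minimal sketch of the idea, with a heavily simplified SyncTransaction-style hook; the types and the ProcessTx stub are assumptions for illustration:

```cpp
// Minimal sketch: one SyncTransaction-style hook covers both mempool
// acceptance (pindex == nullptr) and block connection, so ProcessTx no longer
// has to be called manually from each site. Types and the ProcessTx stub are
// simplified assumptions.
#include <cstdio>
#include <string>

struct Tx {
    std::string hash;
    bool isCoinBase = false;
};
struct BlockIndex {
    int height = 0;
};

// assumed stub: tries to (retroactively) sign/lock the TX
void ProcessTx(const Tx& tx)
{
    std::printf("ProcessTx(%s)\n", tx.hash.c_str());
}

void SyncTransaction(const Tx& tx, const BlockIndex* pindex)
{
    if (tx.isCoinBase) {
        return; // coinbases can't be InstantSend-locked
    }
    // Same path for mempool-accepted and block-connected TXs; this is what
    // enables retroactive signing of TXs first seen inside a new block.
    ProcessTx(tx);
    (void)pindex; // the real code also records where the TX was mined
}
```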
* Test retroactive signing and safe TXs in LLMQ ChainLocks tests
* Also test for retroactive signing of chained TXs
* Honor lockedParentTx when looking for TXs to retry signing
* Stop scanning for TXs to retry after a depth of 6
* Generate 6 blocks to avoid retroactive signing overloading Travis
* Avoid retroactive signing
* Don't rely on NewPoWValidBlock and use SyncTransaction to build blockTxs
NewPoWValidBlock is not guaranteed to be called when blocks come in fast.
When a block is accepted in AcceptBlock, NewPoWValidBlock is only called
when the new block is a successor of the currently active tip. This is not
the case when a second block is accepted immediately after the first one,
as the first block is not connected yet.
This might actually be a bug in the handling of NewPoWValidBlock, so we
might need to check/fix it later, but for now I prefer not to touch
that part.
Instead, we now use SyncTransaction to gather TXs for blockTxs. This works
because SyncTransaction is called for all transactions in a freshly
connected block in one go. The call also happens before UpdatedBlockTip is
called, so it's fine with the existing logic.
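A sketch of how the per-block TX list could be gathered from SyncTransaction, under the assumption that it fires for every TX of a freshly connected block before UpdatedBlockTip; names like blockTxs and the simplified types are illustrative only:

```cpp
// Sketch of gathering the per-block TX list from SyncTransaction instead of
// NewPoWValidBlock, assuming SyncTransaction fires for every TX of a freshly
// connected block before UpdatedBlockTip. Names and types are illustrative.
#include <map>
#include <memory>
#include <set>
#include <string>

using TxId = std::string;
using BlockHash = std::string;

struct Tx {
    TxId hash;
    bool hasInputs = true;
};
struct BlockIndex {
    BlockHash blockHash;
};

// block hash -> TXs seen so far for that block
std::map<BlockHash, std::shared_ptr<std::set<TxId>>> blockTxs;

void SyncTransaction(const Tx& tx, const BlockIndex* pindex)
{
    if (pindex == nullptr || !tx.hasInputs) {
        return; // only interested in TXs with inputs that were mined in a block
    }
    auto& txids = blockTxs[pindex->blockHash];
    if (!txids) {
        txids = std::make_shared<std::set<TxId>>();
    }
    txids->insert(tx.hash);
}

// UpdatedBlockTip can later look up blockTxs[tipHash] and check that every
// relevant TX of the new tip is ISLOCKed before trying to sign a ChainLock.
```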
* Use tx.IsCoinBase() instead of checking index 0
Also check for empty vin.
* Remove unused parameters from CInstantSendManager::ProcessTx
* Pass txHash in CheckCanLock by reference instead of pointer
* Don't allow locking of TXs without inputs
* Remove unused local variable nInstantSendConfirmationsRequired
* Don't subtract 1 from nInstantSendConfirmationsRequired
This was necessary in the old system but is not necessary in the new system.
It also prevented proper retroactive signing of chained TXs in regtest, as
it resulted in child TXs returning true immediately from CheckCanLock when
they should actually have waited for the parent TX to become locked first.
* Access chainActive.Height() while cs_main is locked
* Properly read and write lastChainLockBlock
"pindex" is NOT the chainlocked block after the while loop finishes. We
must use the pindex (renamed to pindexChainLock now) given on method entry.
Also, the GetLastChainLockBlock() result was not assigned to,
lastChainLockBlock which resulted in the while loop to run unnecessarily
long.
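A simplified sketch of the corrected bookkeeping (walking pointer as a local copy, early stop at the previously stored block, and writing back the block given on entry); the structure and helper names are assumptions, not the real ChainLocks code:

```cpp
// Simplified sketch of the corrected bookkeeping: walk a local copy, stop at
// the previously stored chainlocked block, and store the block given on entry.
// BlockIndex and the helpers are stand-ins, not the real CChainLocksHandler.
#include <set>
#include <string>

struct BlockIndex {
    std::string hash;
    const BlockIndex* pprev = nullptr;
};

static const BlockIndex* storedLastChainLockBlock = nullptr;

const BlockIndex* GetLastChainLockBlock() { return storedLastChainLockBlock; }
void WriteLastChainLockBlock(const BlockIndex* pindex) { storedLastChainLockBlock = pindex; }

void MarkChainLocked(const BlockIndex* pindexChainLock, std::set<std::string>& lockedBlocks)
{
    // Assign the stored value so the loop stops as soon as it reaches the
    // previously chainlocked block (the missing assignment made it walk too far).
    const BlockIndex* lastChainLockBlock = GetLastChainLockBlock();

    const BlockIndex* pindex = pindexChainLock; // walking copy, not the result
    while (pindex != nullptr && pindex != lastChainLockBlock) {
        lockedBlocks.insert(pindex->hash);
        pindex = pindex->pprev;
    }

    // Persist the block passed in on entry; "pindex" is no longer the
    // chainlocked block once the loop above has finished.
    WriteLastChainLockBlock(pindexChainLock);
}
```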
* Generalize filtering in NewPoWValidBlock and SyncTransaction
We're actually interested in all TXs that have inputs, so no need to
explicitly check for tx types.
* Use tx.IsCoinBase() instead of checking for index 0
* Handle cases where a TX is not received yet in wait_for_instantlock
* Wait on all nodes for the locks
Otherwise we end up with the sender having it locked while other nodes
don't yet, failing the test.
* Fix LogPrintf call in CChainLocksHandler::DoInvalidateBlock
* Unify autoIS/send_smth functions
* Rename autoix-mempool.py -> autois-mempool.py
* Make sure create_raw_trx produces expected results
* Make sure sender has enough inputs and nodes are synced before starting the actual test
* Mine one block to clean mempool up
* 2 blocks is enough for IS on regtest
This also unifies it across different IS tests
* Allow wait_for_instantlock to be called on any node, not only on the one that has the tx in the wallet
* No need to query for tx this often in wait_for_instantlock
* Rename create_raw_trx -> create_raw_tx
* Fund sender with a single TX instead of 30
* Require only 3 out of 5 signatures for old InstantSend in regtest mode
* Use LLMQs of size 5 with threshold of 3 for regtest
* Fix wrong check for out-of-range bits in CFixedBitSet
* Reduce number of masternodes in masternode/LLMQ tests
* Add missing \n to LogPrintf call
* Use correct indexes for isolated/receiver/sender nodes
The way it was before resulted in nodes 1-3 being unused and 6-8 being used
for these 3 special nodes even though these are masternodes.
* Avoid stopping/starting isolated node in p2p-instantsend.py
It's enough to disable networking for this node.
* Print which DKG type aborted
* Don't directly call EnforceBestChainLock and instead schedule the call
Calling EnforceBestChainLock might result in switching chains, which in
turn might end up calling signals, so we get into a recursive call chain.
Better to call EnforceBestChainLock from the scheduler.
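A tiny sketch of the deferred call, using a stand-in scheduler instead of the node's real one (which in Dash Core would be something like CScheduler); all names here are illustrative:

```cpp
// Tiny sketch of deferring the enforcement call instead of invoking it from a
// signal handler. MiniScheduler stands in for the node's real scheduler.
#include <condition_variable>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>

class MiniScheduler {
public:
    void Post(std::function<void()> f)
    {
        {
            std::lock_guard<std::mutex> lock(m);
            q.push(std::move(f));
        }
        cv.notify_one();
    }
    void Run() // run this from a dedicated scheduler thread
    {
        while (true) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [this] { return !q.empty(); });
            auto f = std::move(q.front());
            q.pop();
            lock.unlock();
            f(); // runs outside the caller's stack, so no recursive signal chain
        }
    }
private:
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::function<void()>> q;
};

MiniScheduler scheduler;

void EnforceBestChainLock() { std::printf("enforcing best chainlock\n"); }

// Signal handler: don't call EnforceBestChainLock directly, since it may switch
// chains and fire further signals that recurse back into this handler.
void OnNewChainLock()
{
    scheduler.Post([] { EnforceBestChainLock(); });
}
```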
* Regularly call EnforceBestChainLock and reset error flags on locked chain
* Don't invalidate blocks from CChainLocksHandler::TrySignChainTip
As the name of this method implies, it's trying to sign something and not
enforce/invalidate chains. Invalidating blocks is the job of
EnforceBestChainLock.
* Only call ActivateBestChain when tip != best CL tip
* Fix unprotected access of bestChainLockBlockIndex and bail out if its null
* Fix ChainLocks tests after changes in enforcement handling
* Only invoke NotifyChainLock signal from EnforceBestChainLock
This ensures that NotifyChainLock is not prematurely called before the
block is fully connected.
* Use a mutex to ensure that only one thread executes ActivateBestChain
It might happen that 2 threads enter ActivateBestChain at the same time and
start processing block by block, randomly switching between threads so that
sometimes one thread processes a block and then another one processes the
next. A mutex now protects ActivateBestChain against this race.
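A sketch of the locking pattern, with the activation loop reduced to a placeholder; the mutex name and helper are assumptions:

```cpp
// Sketch of the locking pattern only: a dedicated mutex serializes callers of
// ActivateBestChain so two threads can't interleave while connecting blocks
// one by one. The mutex name and the placeholder helper are assumptions.
#include <mutex>

static std::mutex g_best_chain_mutex; // held for the whole activation run

// placeholder: connect one block towards the best tip, false when done
bool ConnectNextBlockTowardsTip() { return false; }

bool ActivateBestChain()
{
    // Without this lock, two callers could alternate connecting blocks, each
    // observing a half-updated tip between their steps.
    std::lock_guard<std::mutex> lock(g_best_chain_mutex);
    while (ConnectNextBlockTowardsTip()) {
        // In the real code, cs_main is still taken and released per block
        // inside this loop; the outer mutex only prevents caller interleaving.
    }
    return true;
}
```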
* Rename local copy of bestChainLockBlockIndex to currentBestChainLockBlockIndex
* Don't call ActivateBestChain when best CL is part of the main chain
725b79a [test] Verify node doesn't send headers that haven't been fully validated (Russell Yanofsky)
3788a84 Do not send (potentially) invalid headers in response to getheaders (Matt Corallo)
Pull request description:
Nowhere else in the protocol do we send headers which are for
blocks we have not fully validated except in response to getheaders
messages with a null locator. On my public node I have not seen any
such request (whether for an invalid block or not) in at least two
years of debug.log output, indicating that this should have minimal
impact.
Tree-SHA512: c1f6e0cdcdfb78ea577d555f9b3ceb1b4b60eff4f6cf313bfd8b576c9562d797bea73abc23f7011f249ae36dd539c715f3d20487ac03ace60e84e1b77c0c1e1a
eff4bd8 [test] P2P functional test for certain fingerprinting protections (Jim Posen)
a2be3b6 [net] Ignore getheaders requests for very old side blocks (Jim Posen)
Pull request description:
Sending a getheaders message with an empty locator and a stop hash is a request for a single header by hash. The node will respond with headers for blocks not in the main chain as well as those in the main chain. To avoid fingerprinting, the node should, however, ignore requests for headers on side branches that are too old. This replicates the logic that currently exists for `getdata` requests for blocks.
Tree-SHA512: e04ef61e2b73945be6ec5977b3c5680b6dc3667246f8bfb67afae1ecaba900c0b49b18bbbb74869f7a37ef70b6ed99e78ebe0ea0a1569369fad9e447d720ffc4
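A conceptual sketch covering both getheaders protections above (don't serve headers that aren't fully validated, and ignore single-header requests for old side-branch blocks); the types, helper, and the one-month constant are simplified stand-ins, not the actual net_processing code:

```cpp
// Conceptual sketch: never serve headers for blocks that aren't fully
// validated, and ignore single-header requests (empty locator + stop hash)
// for old blocks outside the main chain. All fields and names are stand-ins.
#include <cstdint>

struct BlockIndexInfo {
    bool fullyValidated = false;  // stand-in for IsValid(BLOCK_VALID_SCRIPTS)
    bool inMainChain = false;     // stand-in for chainActive.Contains(pindex)
    int64_t blockTime = 0;        // header timestamp
};

static const int64_t MAX_OLD_SIDE_BLOCK_AGE = 30 * 24 * 60 * 60; // ~1 month, mirroring the getdata rule

// Returns true if a headers reply may be sent for this block
bool MayAnswerSingleHeaderRequest(const BlockIndexInfo& block, int64_t bestHeaderTime)
{
    if (!block.fullyValidated) {
        return false; // never hand out headers we haven't fully validated
    }
    if (!block.inMainChain && bestHeaderTime - block.blockTime > MAX_OLD_SIDE_BLOCK_AGE) {
        return false; // old side-branch block: ignore to avoid fingerprinting
    }
    return true;
}
```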
These tend to fail quite often on Travis for multiple reasons. One
reason is that establishing intra-quorum connections takes some time and
the tests in dip3-deterministicmns.py did not sleep long enough. Another
reason is that the individual stages were not really checked for completion;
instead, just a hardcoded sleep was used. And another reason was that, with
a total of 13 MNs, it's not guaranteed that every DKG results in one MN
being punished.
* Fix remaining `print`s in tests
* use AssertLockHeld(cs) instead of relying on comments
* actually use `clsig` in `EnforceBestChainLock()`
* fix log output in `EnforceBestChainLock()`
* drop comments