#!/usr/bin/env python3
# Copyright (c) 2015-2018 The Dash Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
import time
from test_framework.mininode import *
from test_framework.test_framework import DashTestFramework
from test_framework.util import *
'''
llmq-chainlocks.py
Checks LLMQ-based ChainLocks
'''
class LLMQChainLocksTest(DashTestFramework):
    def set_test_params(self):
        self.set_dash_test_params(6, 5, [], fast_dip3_enforcement=True)
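
    # End-to-end ChainLocks walkthrough: activate dip0008, form quorums,
    # verify that mined blocks get chainlocked, then exercise network splits,
    # reorg attempts and the interaction with InstantSend "safe" transactions.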
    def run_test(self):
        self.log.info("Wait for dip0008 activation")
        while self.nodes[0].getblockchaininfo()["bip9_softforks"]["dip0008"]["status"] != "active":
            self.nodes[0].generate(10)
        sync_blocks(self.nodes, timeout=60*5)
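
        # A spork value of 0 means "active since genesis", i.e. enabled
        # immediately; a far-future timestamp (used further down) disables one.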
self.nodes[0].spork("SPORK_17_QUORUM_DKG_ENABLED", 0)
self.wait_for_sporks_same()
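
        # ChainLocks need active LLMQs to sign with, so form a few quorums
        # before enabling SPORK_19.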
self.log.info("Mining 4 quorums")
for i in range(4):
self.mine_quorum()
self.nodes[0].spork("SPORK_19_CHAINLOCKS_ENABLED", 0)
self.wait_for_sporks_same()
self.log.info("Mine single block, wait for chainlock")
self.nodes[0].generate(1)
self.wait_for_chainlocked_block_all_nodes(self.nodes[0].getbestblockhash())
self.log.info("Mine many blocks, wait for chainlock")
self.nodes[0].generate(20)
# We need more time here due to 20 blocks being generated at once
self.wait_for_chainlocked_block_all_nodes(self.nodes[0].getbestblockhash(), timeout=30)
self.log.info("Assert that all blocks up until the tip are chainlocked")
for h in range(1, self.nodes[0].getblockcount()):
block = self.nodes[0].getblock(self.nodes[0].getblockhash(h))
assert(block['chainlock'])
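
        # While node 0 is cut off, the rest of the network keeps producing
        # chainlocked blocks; after reconnecting, node 0 should simply follow.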
self.log.info("Isolate node, mine on another, and reconnect")
isolate_node(self.nodes[0])
node0_mining_addr = self.nodes[0].getnewaddress()
node0_tip = self.nodes[0].getbestblockhash()
self.nodes[1].generatetoaddress(5, node0_mining_addr)
self.wait_for_chainlocked_block(self.nodes[1], self.nodes[1].getbestblockhash())
assert(self.nodes[0].getbestblockhash() == node0_tip)
reconnect_isolated_node(self.nodes[0], 1)
self.nodes[1].generatetoaddress(1, node0_mining_addr)
self.wait_for_chainlocked_block(self.nodes[0], self.nodes[1].getbestblockhash())
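
        # Now create a real fork: node 0 mines its own, longer chain while
        # isolated. On reconnect, the chainlocked (shorter) chain must win.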
self.log.info("Isolate node, mine on both parts of the network, and reconnect")
isolate_node(self.nodes[0])
self.nodes[0].generate(5)
self.nodes[1].generatetoaddress(1, node0_mining_addr)
good_tip = self.nodes[1].getbestblockhash()
self.wait_for_chainlocked_block(self.nodes[1], good_tip)
assert(not self.nodes[0].getblock(self.nodes[0].getbestblockhash())["chainlock"])
reconnect_isolated_node(self.nodes[0], 1)
self.nodes[1].generatetoaddress(1, node0_mining_addr)
self.wait_for_chainlocked_block(self.nodes[0], self.nodes[1].getbestblockhash())
assert(self.nodes[0].getblock(self.nodes[0].getbestblockhash())["previousblockhash"] == good_tip)
assert(self.nodes[1].getblock(self.nodes[1].getbestblockhash())["previousblockhash"] == good_tip)
self.log.info("Keep node connected and let it try to reorg the chain")
good_tip = self.nodes[0].getbestblockhash()
self.log.info("Restart it so that it forgets all the chainlocks from the past")
        self.stop_node(0)
        self.start_node(0)
        connect_nodes(self.nodes[0], 1)
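
        # After the restart, node 0 no longer remembers past chainlocks, so it
        # can be made to build on a competing chain via invalidateblock.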
        self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash())

        self.log.info("Now try to reorg the chain")
        self.nodes[0].generate(2)
        time.sleep(6)
        assert(self.nodes[1].getbestblockhash() == good_tip)
        self.nodes[0].generate(2)
        time.sleep(6)
        assert(self.nodes[1].getbestblockhash() == good_tip)
self.log.info("Now let the node which is on the wrong chain reorg back to the locked chain")
self.nodes[0].reconsiderblock(good_tip)
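        # node 0 still prefers its own longer fork at this point; only the
        # next chainlocked block on the good chain forces it back.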
        assert(self.nodes[0].getbestblockhash() != good_tip)
        self.nodes[1].generatetoaddress(1, node0_mining_addr)
        self.wait_for_chainlocked_block(self.nodes[0], self.nodes[1].getbestblockhash())
        assert(self.nodes[0].getbestblockhash() == self.nodes[1].getbestblockhash())
self.log.info("Enable LLMQ bases InstantSend, which also enables checks for \"safe\" transactions")
self.nodes[0].spork("SPORK_2_INSTANTSEND_ENABLED", 0)
self.nodes[0].spork("SPORK_3_INSTANTSEND_BLOCK_FILTERING", 0)
        self.wait_for_sporks_same()

        self.log.info("Isolate a node and let it create some transactions which won't get IS locked")
        isolate_node(self.nodes[0])
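
        # With block filtering on, mined blocks may only contain "safe" TXs,
        # i.e. ones that got an IS lock. The isolated node cannot obtain
        # locks, so its TXs should be left out of its own blocks.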
        txs = []
        for i in range(3):
            txs.append(self.nodes[0].sendtoaddress(self.nodes[0].getnewaddress(), 1))
        txs += self.create_chained_txs(self.nodes[0], 1)
self.log.info("Assert that after block generation these TXs are NOT included (as they are \"unsafe\")")
        self.nodes[0].generate(1)
        for txid in txs:
            tx = self.nodes[0].getrawtransaction(txid, 1)
            assert("confirmations" not in tx)
        time.sleep(1)
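
        # The isolated node can't reach a quorum either, so its new block must
        # not get chainlocked.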
        assert(not self.nodes[0].getblock(self.nodes[0].getbestblockhash())["chainlock"])
self.log.info("Disable LLMQ based InstantSend for a very short time (this never gets propagated to other nodes)")
self.nodes[0].spork("SPORK_2_INSTANTSEND_ENABLED", 4070908800)
self.log.info("Now the TXs should be included")
        self.nodes[0].generate(1)
self.nodes[0].spork("SPORK_2_INSTANTSEND_ENABLED", 0)
self.log.info("Assert that TXs got included now")
        for txid in txs:
            tx = self.nodes[0].getrawtransaction(txid, 1)
            assert("confirmations" in tx and tx["confirmations"] > 0)

        # Enable network on first node again, which will cause the blocks to propagate and IS locks to happen
        # retroactively for the mined TXs, which will then allow the network to create a CLSIG
        self.log.info("Reenable network on first node and wait for chainlock")
        reconnect_isolated_node(self.nodes[0], 1)
        self.wait_for_chainlocked_block(self.nodes[0], self.nodes[0].getbestblockhash(), timeout=30)
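
    # Builds a parent TX plus a child that spends all of the parent's still
    # unconfirmed outputs (minus a 0.0001 fee), so the child can only be
    # IS-locked after the parent is; returns both txids.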
    def create_chained_txs(self, node, amount):
        txid = node.sendtoaddress(node.getnewaddress(), amount)
        tx = node.getrawtransaction(txid, 1)
        inputs = []
        valueIn = 0
        for txout in tx["vout"]:
            inputs.append({"txid": txid, "vout": txout["n"]})
            valueIn += txout["value"]
        outputs = {
            node.getnewaddress(): round(float(valueIn) - 0.0001, 6)
        }
        rawtx = node.createrawtransaction(inputs, outputs)
        rawtx = node.signrawtransaction(rawtx)
        rawtxid = node.sendrawtransaction(rawtx["hex"])
        return [txid, rawtxid]

if __name__ == '__main__':
    LLMQChainLocksTest().main()