Tag 0.12.1 final

-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEbBAABCgAGBQJXD/i3AAoJEHSBCwEjRsmmOHsH+L5eRpiPeLhrDYyBFbp9RFKU
 TztyoeKAM4llEPmk6vAawgSL8HNY4va6lbY84sDfvCdLJqCxVR7MyiuQ4AQPXG4R
 Ke5DJ/G/K4ngyqruCBsSh2RJdVDrbE3zCmjN5gxPxrNKpi+mXs//A6gjvfxn4U1F
 WZepN3FzNFcqFG/ndKxptMYZoIuiK9JIhK7V/ksFKRPlUhipa1jh5sIWvCeFjiLT
 Wt8wGlHPHDFsPJW1o7EWMTHRhNCVqYhMDU7GT6FixIJFRGANIGlwfIUuqqUt0sil
 7YWIwD/+ai3dfeODazauqJAOEBXjoWCkuXn9IN/VhtvHOFR6AZO2aljS9ks6Cw==
 =6vRi
 -----END PGP SIGNATURE-----

Merge bitcoin tag 'v0.12.1' into dash v0.12.1.x

Merging Bitcoin 0.12.1 into Dash 0.12.1.x
Holger Schinzel 2016-07-04 07:42:50 +02:00
commit f4e4dd65e7
56 changed files with 3054 additions and 220 deletions

View File

@ -26,7 +26,7 @@ files: []
script: |
WRAP_DIR=$HOME/wrapped
HOSTS="i686-pc-linux-gnu x86_64-unknown-linux-gnu"
CONFIGFLAGS="--enable-glibc-back-compat --enable-reduce-exports --disable-bench --disable-gui-tests"
CONFIGFLAGS="--enable-glibc-back-compat --enable-reduce-exports --disable-bench --disable-gui-tests LDFLAGS=-static-libstdc++"
FAKETIME_HOST_PROGS=""
FAKETIME_PROGS="date ar ranlib nm strip objcopy"
HOST_CFLAGS="-O2 -g"

View File

@ -1,6 +1,6 @@
### Verify SF Binaries ###
### Verify Binaries ###
This script attempts to download the signature file `SHA256SUMS.asc` from https://bitcoin.org.
It first checks if the signature passes, and then downloads the files specified in the file, and checks if the hashes of these files match those that are specified in the signature file.
The script returns 0 if everything passes the checks. It returns 1 if either the signature check or the hash check doesn't pass. If an error occurs the return value is 2.
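In outline, the flow looks roughly like this (a simplified Python illustration of the checks described above, not the actual script; it assumes the listed files are already present next to `SHA256SUMS.asc`):

```python
import hashlib
import subprocess
import sys

def verify(sums_file="SHA256SUMS.asc"):
    # 1. Verify the PGP signature on the (clearsigned) sums file.
    if subprocess.call(["gpg", "--verify", sums_file]) != 0:
        return 1                      # signature check failed
    # 2. Re-hash every listed file and compare against the signed hashes.
    with open(sums_file) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 2 or len(parts[0]) != 64:
                continue              # skip PGP armor / header lines
            expected_hash, filename = parts
            try:
                with open(filename, "rb") as binary:
                    actual_hash = hashlib.sha256(binary.read()).hexdigest()
            except IOError:
                return 2              # missing file or other error
            if actual_hash != expected_hash:
                return 1              # hash check failed
    return 0                          # everything passed

if __name__ == "__main__":
    sys.exit(verify())
```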

View File

@ -34,7 +34,7 @@ PROJECT_NAME = Dash
# This could be handy for archiving the generated documentation or
# if some version control system is used.
PROJECT_NUMBER = 0.12.0
PROJECT_NUMBER = 0.12.1
# Using the PROJECT_BRIEF tag one can provide an optional one line description
# for a project that appears at the top of each page and should give viewer

View File

@ -1,4 +1,4 @@
Dash Core 0.12.0
Dash Core 0.12.1
=====================
This is the official reference wallet for Dash digital currency and comprises the backbone of the Dash peer-to-peer network. You can [download Dash Core](https://www.dash.org/downloads/) or [build it yourself](#building) using the guides below.
@ -47,7 +47,7 @@ The following are developer notes on how to build Dash on your native platform.
Development
---------------------
The Dash repo's [root README](https://github.com/dashpay/dash/blob/master/README.md) contains relevant information on the development process and automated testing.
The Dash repo's [root README](/README.md) contains relevant information on the development process and automated testing.
- [Developer Notes](developer-notes.md)
- [Multiwallet Qt Development](multiwallet-qt.md)

View File

@ -1,4 +1,4 @@
Dash Core 0.12.0
Dash Core 0.12.1
=====================
Intro

View File

@ -236,3 +236,30 @@ In this case there is no dependency on Berkeley DB 4.8.
Mining is also possible in disable-wallet mode, but only using the `getblocktemplate` RPC
call not `getwork`.
Additional Configure Flags
--------------------------
A list of additional configure flags can be displayed with:
./configure --help
ARM Cross-compilation
-------------------
These steps can be performed on, for example, an Ubuntu VM. The depends system
will also work on other Linux distributions, however the commands for
installing the toolchain will be different.
First install the toolchain:
sudo apt-get install g++-arm-linux-gnueabihf
To build executables for ARM:
cd depends
make HOST=arm-linux-gnueabihf NO_QT=1
cd ..
./configure --prefix=$PWD/depends/arm-linux-gnueabihf --enable-glibc-back-compat --enable-reduce-exports LDFLAGS=-static-libstdc++
make
For further documentation on the depends system see [README.md](../depends/README.md) in the depends directory.

View File

@ -1,10 +1,9 @@
Dash Core 0.12
==================
Dash Core version 0.12.1 is now available from:
Dash Core tree 0.12.1.x release notes can be found here:
- [v0.12.1](release-notes/dash/release-notes-0.12.1.md)
<https://www.dash.org/downloads/>
Dash Core tree 0.12.1.x is a fork of Bitcoin Core tree 0.12
This is a new minor version release, including the BIP9, BIP68 and BIP112
softfork, various bugfixes and updated translations.
@ -29,10 +28,109 @@ earlier.
Notable changes
===============
Example item
---------------------------------------
First version bits BIP9 softfork deployment
-------------------------------------------
Example text.
This release includes a soft fork deployment to enforce [BIP68][],
[BIP112][] and [BIP113][] using the [BIP9][] deployment mechanism.
The deployment sets the block version number to 0x20000001 between
midnight 1st May 2016 and midnight 1st May 2017 to signal readiness for
deployment. The version number consists of 0x20000000 to indicate version
bits together with setting bit 0 to indicate support for this combined
deployment, shown as "csv" in the `getblockchaininfo` RPC call.
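As a worked example of how that version value is composed (the constant names here are illustrative; `VB_TOP_BITS` also appears in the new qa tests below):

```python
VB_TOP_BITS = 0x20000000  # marks a block version as using BIP9 version bits
CSV_BIT = 0               # bit 0 signals the combined BIP68/112/113 ("csv") deployment

nVersion = VB_TOP_BITS | (1 << CSV_BIT)
assert nVersion == 0x20000001 == 536870913  # the value miners set while signalling
```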
For more information about the soft forking change, please see
<https://github.com/bitcoin/bitcoin/pull/7648>
This specific backport pull-request can be viewed at
<https://github.com/bitcoin/bitcoin/pull/7543>
[BIP9]: https://github.com/bitcoin/bips/blob/master/bip-0009.mediawiki
[BIP68]: https://github.com/bitcoin/bips/blob/master/bip-0068.mediawiki
[BIP112]: https://github.com/bitcoin/bips/blob/master/bip-0112.mediawiki
[BIP113]: https://github.com/bitcoin/bips/blob/master/bip-0113.mediawiki
BIP68 soft fork to enforce sequence locks for relative locktime
---------------------------------------------------------------
[BIP68][] introduces relative lock-time consensus-enforced semantics of
the sequence number field to enable a signed transaction input to remain
invalid for a defined period of time after confirmation of its corresponding
outpoint.
For more information about the implementation, see
<https://github.com/bitcoin/bitcoin/pull/7184>
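A rough sketch of how a BIP68 nSequence value is interpreted (only consensus-enforced for version 2+ transactions); the flag and mask constants are the ones used by the `bip68-sequence.py` test added in this merge, while the `describe_sequence` helper is purely illustrative:

```python
SEQUENCE_LOCKTIME_DISABLE_FLAG = 1 << 31  # bit 31 set: relative lock-time not enforced
SEQUENCE_LOCKTIME_TYPE_FLAG = 1 << 22     # bit 22 set: lock is time-based, else height-based
SEQUENCE_LOCKTIME_GRANULARITY = 9         # time locks count units of 2**9 = 512 seconds
SEQUENCE_LOCKTIME_MASK = 0x0000ffff       # low 16 bits carry the lock value

def describe_sequence(n_sequence):
    """Illustrative decoder for a BIP68-style nSequence field."""
    if n_sequence & SEQUENCE_LOCKTIME_DISABLE_FLAG:
        return "relative lock-time disabled"
    value = n_sequence & SEQUENCE_LOCKTIME_MASK
    if n_sequence & SEQUENCE_LOCKTIME_TYPE_FLAG:
        return "spendable ~%d seconds after the outpoint confirms" % (value << SEQUENCE_LOCKTIME_GRANULARITY)
    return "spendable %d blocks after the outpoint confirms" % value

print(describe_sequence(10))                                # 10-block relative lock
print(describe_sequence(SEQUENCE_LOCKTIME_TYPE_FLAG | 10))  # ~5120-second relative lock
```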
BIP112 soft fork to enforce OP_CHECKSEQUENCEVERIFY
--------------------------------------------------
[BIP112][] redefines the existing OP_NOP3 as OP_CHECKSEQUENCEVERIFY (CSV)
for a new opcode in the Bitcoin scripting system that in combination with
[BIP68][] allows execution pathways of a script to be restricted based
on the age of the output being spent.
For more information about the implementation, see
<https://github.com/bitcoin/bitcoin/pull/7524>
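The new `bip68-112-113-p2p.py` test in this merge exercises the opcode by prepending the check to a spending scriptSig; a minimal sketch of that pattern:

```python
# OP_NOP3 is the opcode that BIP112 redefines as OP_CHECKSEQUENCEVERIFY.
from test_framework.script import CScript, OP_NOP3, OP_DROP

# Require the input's relative lock-time to be at least 10 before the
# rest of the script runs; the value is dropped afterwards.
csv_check = CScript([10, OP_NOP3, OP_DROP])
```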
BIP113 locktime enforcement soft fork
-------------------------------------
Bitcoin Core 0.11.2 previously introduced mempool-only locktime
enforcement using GetMedianTimePast(). This release seeks to
enforce the rule by consensus.
Bitcoin transactions currently may specify a locktime indicating when
they may be added to a valid block. Current consensus rules require
that blocks have a block header time greater than the locktime specified
in any transaction in that block.
Miners get to choose what time they use for their header time, with the
consensus rule being that no node will accept a block whose time is more
than two hours in the future. This creates an incentive for miners to
set their header times to future values in order to include locktimed
transactions which weren't supposed to be included for up to two more
hours.
The consensus rules also require that valid blocks have a header
time greater than the median time of the 11 previous blocks. This
GetMedianTimePast() time has a key feature we generally associate with
time: it can't go backwards.
[BIP113][] specifies a soft fork enforced in this release that
weakens this perverse incentive for individual miners to use a future
time by requiring that valid blocks have a computed GetMedianTimePast()
greater than the locktime specified in any transaction in that block.
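GetMedianTimePast() is simply the median of the previous 11 block header times; a minimal sketch of the computation:

```python
def median_time_past(prev_11_block_times):
    # Median of the 11 previous block header times; note that this value
    # cannot go backwards as the chain advances.
    times = sorted(prev_11_block_times)
    return times[len(times) // 2]

# Eleven blocks spaced 600 seconds apart: the MTP is the 6th (middle) timestamp.
print(median_time_past([1461100000 + 600 * i for i in range(11)]))  # 1461103000
```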
Mempool inclusion rules currently require transactions to be valid for
immediate inclusion in a block in order to be accepted into the mempool.
This release begins applying the BIP113 rule to received transactions,
so transactions whose locktime is greater than GetMedianTimePast() will
no longer be accepted into the mempool.
**Implication for miners:** you will begin rejecting transactions that
would not be valid under BIP113, which will prevent you from producing
invalid blocks when BIP113 is enforced on the network. Any
transactions which are valid under the current rules but not yet valid
under the BIP113 rules will either be mined by other miners or delayed
until they are valid under BIP113. Note, however, that time-based
locktime transactions are more or less unseen on the network currently.
**Implication for users:** GetMedianTimePast() always trails behind the
current time, so a transaction locktime set to the present time will be
rejected by nodes running this release until the median time moves
forward. To compensate, subtract one hour (3,600 seconds) from your
locktimes to allow those transactions to be included in mempools at
approximately the expected time.
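For example:

```python
import time

desired_time = int(time.time())  # when you actually want the transaction to become valid
nLockTime = desired_time - 3600  # subtract one hour so GetMedianTimePast() has already passed it
```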
For more information about the implementation, see
<https://github.com/bitcoin/bitcoin/pull/6566>
Miscellaneous
-------------
The p2p alert system is off by default. To turn it on, use the `-alert`
option in your startup configuration.
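For example (the exact boolean syntax below is an assumption; check the node's help output):

    dashd -alert=1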
0.12.1 Change log
=================
@ -42,31 +140,57 @@ behavior, not code moves, refactors and string updates. For convenience in locat
the code changes and accompanying discussion, both the pull request and
git merge commit are mentioned.
### RPC and REST
### RPC and other APIs
- #7739 `7ffc2bd` Add abandoned status to listtransactions (jonasschnelli)
### Configuration and command-line options
### Block and transaction handling
- #7543 `834aaef` Backport BIP9, BIP68 and BIP112 with softfork (btcdrak)
### P2P protocol and network code
- #7804 `90f1d24` Track block download times per individual block (sipa)
- #7832 `4c3a00d` Reduce block timeout to 10 minutes (laanwj)
### Validation
- #7821 `4226aac` init: allow shutdown during 'Activating best chain...' (laanwj)
- #7835 `46898e7` Version 2 transactions remain non-standard until CSV activates (sdaftuar)
### Build system
- #7487 `00d57b4` Workaround Travis-side CI issues (luke-jr)
- #7606 `a10da9a` No need to set -L and --location for curl (MarcoFalke)
- #7614 `ca8f160` Add curl to packages (now needed for depends) (luke-jr)
- #7776 `a784675` Remove unnecessary executables from gitian release (laanwj)
### Wallet
- #7715 `19866c1` Fix calculation of balances and available coins. (morcos)
### GUI
### Tests and QA
### Miscellaneous
- #7617 `f04f4fd` Fix markdown syntax and line terminate LogPrint (MarcoFalke)
- #7747 `4d035bc` added depends cross compile info (accraze)
- #7741 `a0cea89` Mark p2p alert system as deprecated (btcdrak)
- #7780 `c5f94f6` Disable bad-chain alert (btcdrak)
Credits
=======
Thanks to everyone who directly contributed to this release:
- accraze
- Alex Morcos
- BtcDrak
- Jonas Schnelli
- Luke Dashjr
- MarcoFalke
- Mark Friedenbach
- NicolasDorier
- Pieter Wuille
- Suhas Daftuar
- Wladimir J. van der Laan
As well as everyone that helped translating on [Transifex](https://www.transifex.com/projects/p/bitcoin/).

View File

@ -196,7 +196,7 @@ Note: check that SHA256SUMS itself doesn't end up in SHA256SUMS, which is a spur
- Optionally reddit /r/Dashpay, ... but this will usually sort itself out
- Notify Flare (?) ***TODO*** so that he can start building [https://launchpad.net/~dashpay/+archive/ubuntu/dash](the PPAs) ***TODO***
- Notify flare so that he can start building [the PPAs](https://launchpad.net/~dash.org/+archive/ubuntu/dash)
- Add release notes for the new version to the directory `doc/release-notes` in git master

View File

@ -74,6 +74,7 @@ if EXEEXT == ".exe" and "-win" not in opts:
#Tests
testScripts = [
'bip68-112-113-p2p.py',
'wallet.py',
'listtransactions.py',
'receivedby.py',
@ -106,10 +107,13 @@ testScripts = [
'invalidblockrequest.py', # TODO: works, needs dash_hash
'invalidtxrequest.py', # TODO: works, needs dash_hash
'abandonconflict.py',
'p2p-versionbits-warning.py',
]
testScriptsExt = [
'bip9-softforks.py',
'bip65-cltv.py',
'bip65-cltv-p2p.py', # TODO: works, needs dash_hash
'bip68-sequence.py',
'bipdersig-p2p.py', # TODO: works, needs dash_hash
'bipdersig.py',
'getblocktemplate_longpoll.py', # FIXME: "socket.error: [Errno 54] Connection reset by peer" on my Mac, same as https://github.com/bitcoin/bitcoin/issues/6651

View File

@ -83,6 +83,12 @@ class AbandonConflictTest(BitcoinTestFramework):
# inputs are still spent, but change not received
newbalance = self.nodes[0].getbalance()
assert(newbalance == balance - Decimal("24.9996"))
# Unconfirmed received funds that are not in mempool, also shouldn't show
# up in unconfirmed balance
unconfbalance = self.nodes[0].getunconfirmedbalance() + self.nodes[0].getbalance()
assert(unconfbalance == newbalance)
# Also shouldn't show up in listunspent
assert(not txABC2 in [utxo["txid"] for utxo in self.nodes[0].listunspent(0)])
balance = newbalance
# Abandon original transaction and verify inputs are available again

qa/rpc-tests/bip68-112-113-p2p.py (Executable file, 540 lines added)
View File

@ -0,0 +1,540 @@
#!/usr/bin/env python2
# Copyright (c) 2015 The Bitcoin Core developers
# Distributed under the MIT/X11 software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
#
from test_framework.test_framework import ComparisonTestFramework
from test_framework.util import *
from test_framework.mininode import ToHex, CTransaction, NetworkThread
from test_framework.blocktools import create_coinbase, create_block
from test_framework.comptool import TestInstance, TestManager
from test_framework.script import *
from binascii import unhexlify
import cStringIO
import time
'''
This test is meant to exercise activation of the first version bits soft fork
This soft fork will activate the following BIPS:
BIP 68 - nSequence relative lock times
BIP 112 - CHECKSEQUENCEVERIFY
BIP 113 - MedianTimePast semantics for nLockTime
regtest lock-in with 108/144 block signalling
activation after a further 144 blocks
mine 82 blocks whose coinbases will be used to generate inputs for our tests
mine 61 blocks to transition from DEFINED to STARTED
mine 144 blocks only 100 of which are signaling readiness in order to fail to change state this period
mine 144 blocks with 108 signaling and verify STARTED->LOCKED_IN
mine 140 blocks and seed the block chain with the 82 inputs we will use for our tests at height 572
mine 3 blocks and verify still at LOCKED_IN and test that enforcement has not triggered
mine 1 block and test that enforcement has triggered (which triggers ACTIVE)
Test BIP 113 is enforced
Mine 4 blocks so next height is 580 and test BIP 68 is enforced for time and height
Mine 1 block so next height is 581 and test BIP 68 now passes time but not height
Mine 1 block so next height is 582 and test BIP 68 now passes time and height
Test that BIP 112 is enforced
Various transactions will be used to test that the BIPs rules are not enforced before the soft fork activates
And that after the soft fork activates transactions pass and fail as they should according to the rules.
For each BIP, transactions of versions 1 and 2 will be tested.
----------------
BIP 113:
bip113tx - modify the nLocktime variable
BIP 68:
bip68txs - 16 txs with nSequence relative locktime of 10 with various bits set as per the relative_locktimes below
BIP 112:
bip112txs_vary_nSequence - 16 txs with nSequence relative_locktimes of 10 evaluated against 10 OP_CSV OP_DROP
bip112txs_vary_nSequence_9 - 16 txs with nSequence relative_locktimes of 9 evaluated against 10 OP_CSV OP_DROP
bip112txs_vary_OP_CSV - 16 txs with nSequence = 10 evaluated against varying {relative_locktimes of 10} OP_CSV OP_DROP
bip112txs_vary_OP_CSV_9 - 16 txs with nSequence = 9 evaluated against varying {relative_locktimes of 10} OP_CSV OP_DROP
bip112tx_special - test negative argument to OP_CSV
'''
base_relative_locktime = 10
seq_disable_flag = 1<<31
seq_random_high_bit = 1<<25
seq_type_flag = 1<<22
seq_random_low_bit = 1<<18
# b31,b25,b22,b18 represent the 31st, 25th, 22nd and 18th bits respectively in the nSequence field
# relative_locktimes[b31][b25][b22][b18] is a base_relative_locktime with the indicated bits set if their indices are 1
relative_locktimes = []
for b31 in xrange(2):
b25times = []
for b25 in xrange(2):
b22times = []
for b22 in xrange(2):
b18times = []
for b18 in xrange(2):
rlt = base_relative_locktime
if (b31):
rlt = rlt | seq_disable_flag
if (b25):
rlt = rlt | seq_random_high_bit
if (b22):
rlt = rlt | seq_type_flag
if (b18):
rlt = rlt | seq_random_low_bit
b18times.append(rlt)
b22times.append(b18times)
b25times.append(b22times)
relative_locktimes.append(b25times)
def all_rlt_txs(txarray):
txs = []
for b31 in xrange(2):
for b25 in xrange(2):
for b22 in xrange(2):
for b18 in xrange(2):
txs.append(txarray[b31][b25][b22][b18])
return txs
class BIP68_112_113Test(ComparisonTestFramework):
def __init__(self):
self.num_nodes = 1
def setup_network(self):
# Must set the blockversion for this test
self.nodes = start_nodes(1, self.options.tmpdir,
extra_args=[['-debug', '-whitelist=127.0.0.1', '-blockversion=4']],
binary=[self.options.testbinary])
def run_test(self):
test = TestManager(self, self.options.tmpdir)
test.add_all_connections(self.nodes)
NetworkThread().start() # Start up network handling in another thread
test.run()
def send_generic_input_tx(self, node, coinbases):
amount = Decimal("499.99")
return node.sendrawtransaction(ToHex(self.sign_transaction(node, self.create_transaction(node, node.getblock(coinbases.pop())['tx'][0], self.nodeaddress, amount))))
def create_transaction(self, node, txid, to_address, amount):
inputs = [{ "txid" : txid, "vout" : 0}]
outputs = { to_address : amount }
rawtx = node.createrawtransaction(inputs, outputs)
tx = CTransaction()
f = cStringIO.StringIO(unhexlify(rawtx))
tx.deserialize(f)
return tx
def sign_transaction(self, node, unsignedtx):
rawtx = ToHex(unsignedtx)
signresult = node.signrawtransaction(rawtx)
tx = CTransaction()
f = cStringIO.StringIO(unhexlify(signresult['hex']))
tx.deserialize(f)
return tx
def generate_blocks(self, number, version, test_blocks = []):
for i in xrange(number):
block = self.create_test_block([], version)
test_blocks.append([block, True])
self.last_block_time += 600
self.tip = block.sha256
self.tipheight += 1
return test_blocks
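# Note: the default version 536870912 used below is 0x20000000, i.e. the bare
# version-bits base value with no deployment bits set.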
def create_test_block(self, txs, version = 536870912):
block = create_block(self.tip, create_coinbase(self.tipheight + 1), self.last_block_time + 600)
block.nVersion = version
block.vtx.extend(txs)
block.hashMerkleRoot = block.calc_merkle_root()
block.rehash()
block.solve()
return block
def create_bip68txs(self, bip68inputs, txversion, locktime_delta = 0):
txs = []
assert(len(bip68inputs) >= 16)
i = 0
for b31 in xrange(2):
b25txs = []
for b25 in xrange(2):
b22txs = []
for b22 in xrange(2):
b18txs = []
for b18 in xrange(2):
tx = self.create_transaction(self.nodes[0], bip68inputs[i], self.nodeaddress, Decimal("499.98"))
i += 1
tx.nVersion = txversion
tx.vin[0].nSequence = relative_locktimes[b31][b25][b22][b18] + locktime_delta
b18txs.append(self.sign_transaction(self.nodes[0], tx))
b22txs.append(b18txs)
b25txs.append(b22txs)
txs.append(b25txs)
return txs
def create_bip112special(self, input, txversion):
tx = self.create_transaction(self.nodes[0], input, self.nodeaddress, Decimal("499.98"))
tx.nVersion = txversion
signtx = self.sign_transaction(self.nodes[0], tx)
signtx.vin[0].scriptSig = CScript([-1, OP_NOP3, OP_DROP] + list(CScript(signtx.vin[0].scriptSig)))
return signtx
def create_bip112txs(self, bip112inputs, varyOP_CSV, txversion, locktime_delta = 0):
txs = []
assert(len(bip112inputs) >= 16)
i = 0
for b31 in xrange(2):
b25txs = []
for b25 in xrange(2):
b22txs = []
for b22 in xrange(2):
b18txs = []
for b18 in xrange(2):
tx = self.create_transaction(self.nodes[0], bip112inputs[i], self.nodeaddress, Decimal("499.98"))
i += 1
if (varyOP_CSV): # if varying OP_CSV, nSequence is fixed
tx.vin[0].nSequence = base_relative_locktime + locktime_delta
else: # vary nSequence instead, OP_CSV is fixed
tx.vin[0].nSequence = relative_locktimes[b31][b25][b22][b18] + locktime_delta
tx.nVersion = txversion
signtx = self.sign_transaction(self.nodes[0], tx)
if (varyOP_CSV):
signtx.vin[0].scriptSig = CScript([relative_locktimes[b31][b25][b22][b18], OP_NOP3, OP_DROP] + list(CScript(signtx.vin[0].scriptSig)))
else:
signtx.vin[0].scriptSig = CScript([base_relative_locktime, OP_NOP3, OP_DROP] + list(CScript(signtx.vin[0].scriptSig)))
b18txs.append(signtx)
b22txs.append(b18txs)
b25txs.append(b22txs)
txs.append(b25txs)
return txs
def get_tests(self):
long_past_time = int(time.time()) - 600 * 1000 # enough to build up to 1000 blocks 10 minutes apart without worrying about getting into the future
self.nodes[0].setmocktime(long_past_time - 100) # enough so that the generated blocks will still all be before long_past_time
self.coinbase_blocks = self.nodes[0].generate(1 + 16 + 2*32 + 1) # 82 blocks generated for inputs
self.nodes[0].setmocktime(0) # set time back to present so yielded blocks aren't in the future as we advance last_block_time
self.tipheight = 82 # height of the next block to build
self.last_block_time = long_past_time
self.tip = int ("0x" + self.nodes[0].getbestblockhash() + "L", 0)
self.nodeaddress = self.nodes[0].getnewaddress()
assert_equal(get_bip9_status(self.nodes[0], 'csv')['status'], 'defined')
test_blocks = self.generate_blocks(61, 4)
yield TestInstance(test_blocks, sync_every_block=False) # 1
# Advanced from DEFINED to STARTED, height = 143
assert_equal(get_bip9_status(self.nodes[0], 'csv')['status'], 'started')
# Fail to achieve LOCKED_IN 100 out of 144 signal bit 0
# using a variety of bits to simulate multiple parallel softforks
test_blocks = self.generate_blocks(50, 536870913) # 0x20000001 (signalling ready)
test_blocks = self.generate_blocks(20, 4, test_blocks) # 0x00000004 (signalling not)
test_blocks = self.generate_blocks(50, 536871169, test_blocks) # 0x20000101 (signalling ready)
test_blocks = self.generate_blocks(24, 536936448, test_blocks) # 0x20010000 (signalling not)
yield TestInstance(test_blocks, sync_every_block=False) # 2
# Failed to advance past STARTED, height = 287
assert_equal(get_bip9_status(self.nodes[0], 'csv')['status'], 'started')
# 108 out of 144 signal bit 0 to achieve lock-in
# using a variety of bits to simulate multiple parallel softforks
test_blocks = self.generate_blocks(58, 536870913) # 0x20000001 (signalling ready)
test_blocks = self.generate_blocks(26, 4, test_blocks) # 0x00000004 (signalling not)
test_blocks = self.generate_blocks(50, 536871169, test_blocks) # 0x20000101 (signalling ready)
test_blocks = self.generate_blocks(10, 536936448, test_blocks) # 0x20010000 (signalling not)
yield TestInstance(test_blocks, sync_every_block=False) # 3
# Advanced from STARTED to LOCKED_IN, height = 431
assert_equal(get_bip9_status(self.nodes[0], 'csv')['status'], 'locked_in')
# 140 more version 4 blocks
test_blocks = self.generate_blocks(140, 4)
yield TestInstance(test_blocks, sync_every_block=False) # 4
### Inputs at height = 572
# Put inputs for all tests in the chain at height 572 (tip now = 571) (time increases by 600s per block)
# Note we reuse inputs for v1 and v2 txs so must test these separately
# 16 normal inputs
bip68inputs = []
for i in xrange(16):
bip68inputs.append(self.send_generic_input_tx(self.nodes[0], self.coinbase_blocks))
# 2 sets of 16 inputs with 10 OP_CSV OP_DROP (actually will be prepended to spending scriptSig)
bip112basicinputs = []
for j in xrange(2):
inputs = []
for i in xrange(16):
inputs.append(self.send_generic_input_tx(self.nodes[0], self.coinbase_blocks))
bip112basicinputs.append(inputs)
# 2 sets of 16 varied inputs with (relative_lock_time) OP_CSV OP_DROP (actually will be prepended to spending scriptSig)
bip112diverseinputs = []
for j in xrange(2):
inputs = []
for i in xrange(16):
inputs.append(self.send_generic_input_tx(self.nodes[0], self.coinbase_blocks))
bip112diverseinputs.append(inputs)
# 1 special input with -1 OP_CSV OP_DROP (actually will be prepended to spending scriptSig)
bip112specialinput = self.send_generic_input_tx(self.nodes[0], self.coinbase_blocks)
# 1 normal input
bip113input = self.send_generic_input_tx(self.nodes[0], self.coinbase_blocks)
self.nodes[0].setmocktime(self.last_block_time + 600)
inputblockhash = self.nodes[0].generate(1)[0] # 1 block generated for inputs to be in chain at height 572
self.nodes[0].setmocktime(0)
self.tip = int("0x" + inputblockhash + "L", 0)
self.tipheight += 1
self.last_block_time += 600
assert_equal(len(self.nodes[0].getblock(inputblockhash,True)["tx"]), 82+1)
# 2 more version 4 blocks
test_blocks = self.generate_blocks(2, 4)
yield TestInstance(test_blocks, sync_every_block=False) # 5
# Not yet advanced to ACTIVE, height = 574 (will activate for block 576, not 575)
assert_equal(get_bip9_status(self.nodes[0], 'csv')['status'], 'locked_in')
# Test both version 1 and version 2 transactions for all tests
# BIP113 test transaction will be modified before each use to put in appropriate block time
bip113tx_v1 = self.create_transaction(self.nodes[0], bip113input, self.nodeaddress, Decimal("499.98"))
bip113tx_v1.vin[0].nSequence = 0xFFFFFFFE
bip113tx_v2 = self.create_transaction(self.nodes[0], bip113input, self.nodeaddress, Decimal("499.98"))
bip113tx_v2.vin[0].nSequence = 0xFFFFFFFE
bip113tx_v2.nVersion = 2
# For BIP68 test all 16 relative sequence locktimes
bip68txs_v1 = self.create_bip68txs(bip68inputs, 1)
bip68txs_v2 = self.create_bip68txs(bip68inputs, 2)
# For BIP112 test:
# 16 relative sequence locktimes of 10 against 10 OP_CSV OP_DROP inputs
bip112txs_vary_nSequence_v1 = self.create_bip112txs(bip112basicinputs[0], False, 1)
bip112txs_vary_nSequence_v2 = self.create_bip112txs(bip112basicinputs[0], False, 2)
# 16 relative sequence locktimes of 9 against 10 OP_CSV OP_DROP inputs
bip112txs_vary_nSequence_9_v1 = self.create_bip112txs(bip112basicinputs[1], False, 1, -1)
bip112txs_vary_nSequence_9_v2 = self.create_bip112txs(bip112basicinputs[1], False, 2, -1)
# sequence lock time of 10 against 16 (relative_lock_time) OP_CSV OP_DROP inputs
bip112txs_vary_OP_CSV_v1 = self.create_bip112txs(bip112diverseinputs[0], True, 1)
bip112txs_vary_OP_CSV_v2 = self.create_bip112txs(bip112diverseinputs[0], True, 2)
# sequence lock time of 9 against 16 (relative_lock_time) OP_CSV OP_DROP inputs
bip112txs_vary_OP_CSV_9_v1 = self.create_bip112txs(bip112diverseinputs[1], True, 1, -1)
bip112txs_vary_OP_CSV_9_v2 = self.create_bip112txs(bip112diverseinputs[1], True, 2, -1)
# -1 OP_CSV OP_DROP input
bip112tx_special_v1 = self.create_bip112special(bip112specialinput, 1)
bip112tx_special_v2 = self.create_bip112special(bip112specialinput, 2)
### TESTING ###
##################################
### Before Soft Forks Activate ###
##################################
# All txs should pass
### Version 1 txs ###
success_txs = []
# add BIP113 tx and -1 CSV tx
bip113tx_v1.nLockTime = self.last_block_time - 600 * 5 # = MTP of prior block (not <) but < time put on current block
bip113signed1 = self.sign_transaction(self.nodes[0], bip113tx_v1)
success_txs.append(bip113signed1)
success_txs.append(bip112tx_special_v1)
# add BIP 68 txs
success_txs.extend(all_rlt_txs(bip68txs_v1))
# add BIP 112 with seq=10 txs
success_txs.extend(all_rlt_txs(bip112txs_vary_nSequence_v1))
success_txs.extend(all_rlt_txs(bip112txs_vary_OP_CSV_v1))
# try BIP 112 with seq=9 txs
success_txs.extend(all_rlt_txs(bip112txs_vary_nSequence_9_v1))
success_txs.extend(all_rlt_txs(bip112txs_vary_OP_CSV_9_v1))
yield TestInstance([[self.create_test_block(success_txs), True]]) # 6
self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash())
### Version 2 txs ###
success_txs = []
# add BIP113 tx and -1 CSV tx
bip113tx_v2.nLockTime = self.last_block_time - 600 * 5 # = MTP of prior block (not <) but < time put on current block
bip113signed2 = self.sign_transaction(self.nodes[0], bip113tx_v2)
success_txs.append(bip113signed2)
success_txs.append(bip112tx_special_v2)
# add BIP 68 txs
success_txs.extend(all_rlt_txs(bip68txs_v2))
# add BIP 112 with seq=10 txs
success_txs.extend(all_rlt_txs(bip112txs_vary_nSequence_v2))
success_txs.extend(all_rlt_txs(bip112txs_vary_OP_CSV_v2))
# try BIP 112 with seq=9 txs
success_txs.extend(all_rlt_txs(bip112txs_vary_nSequence_9_v2))
success_txs.extend(all_rlt_txs(bip112txs_vary_OP_CSV_9_v2))
yield TestInstance([[self.create_test_block(success_txs), True]]) # 7
self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash())
# 1 more version 4 block to get us to height 575 so the fork should now be active for the next block
test_blocks = self.generate_blocks(1, 4)
yield TestInstance(test_blocks, sync_every_block=False) # 8
assert_equal(get_bip9_status(self.nodes[0], 'csv')['status'], 'active')
#################################
### After Soft Forks Activate ###
#################################
### BIP 113 ###
# BIP 113 tests should now fail regardless of version number if nLockTime isn't satisfied by new rules
bip113tx_v1.nLockTime = self.last_block_time - 600 * 5 # = MTP of prior block (not <) but < time put on current block
bip113signed1 = self.sign_transaction(self.nodes[0], bip113tx_v1)
bip113tx_v2.nLockTime = self.last_block_time - 600 * 5 # = MTP of prior block (not <) but < time put on current block
bip113signed2 = self.sign_transaction(self.nodes[0], bip113tx_v2)
for bip113tx in [bip113signed1, bip113signed2]:
yield TestInstance([[self.create_test_block([bip113tx]), False]]) # 9,10
# BIP 113 tests should now pass if the locktime is < MTP
bip113tx_v1.nLockTime = self.last_block_time - 600 * 5 - 1 # < MTP of prior block
bip113signed1 = self.sign_transaction(self.nodes[0], bip113tx_v1)
bip113tx_v2.nLockTime = self.last_block_time - 600 * 5 - 1 # < MTP of prior block
bip113signed2 = self.sign_transaction(self.nodes[0], bip113tx_v2)
for bip113tx in [bip113signed1, bip113signed2]:
yield TestInstance([[self.create_test_block([bip113tx]), True]]) # 11,12
self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash())
# Next block height = 580 after 4 blocks of random version
test_blocks = self.generate_blocks(4, 1234)
yield TestInstance(test_blocks, sync_every_block=False) # 13
### BIP 68 ###
### Version 1 txs ###
# All still pass
success_txs = []
success_txs.extend(all_rlt_txs(bip68txs_v1))
yield TestInstance([[self.create_test_block(success_txs), True]]) # 14
self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash())
### Version 2 txs ###
bip68success_txs = []
# All txs with SEQUENCE_LOCKTIME_DISABLE_FLAG set pass
for b25 in xrange(2):
for b22 in xrange(2):
for b18 in xrange(2):
bip68success_txs.append(bip68txs_v2[1][b25][b22][b18])
yield TestInstance([[self.create_test_block(bip68success_txs), True]]) # 15
self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash())
# All txs without flag fail as we are at delta height = 8 < 10 and delta time = 8 * 600 < 10 * 512
bip68timetxs = []
for b25 in xrange(2):
for b18 in xrange(2):
bip68timetxs.append(bip68txs_v2[0][b25][1][b18])
for tx in bip68timetxs:
yield TestInstance([[self.create_test_block([tx]), False]]) # 16 - 19
bip68heighttxs = []
for b25 in xrange(2):
for b18 in xrange(2):
bip68heighttxs.append(bip68txs_v2[0][b25][0][b18])
for tx in bip68heighttxs:
yield TestInstance([[self.create_test_block([tx]), False]]) # 20 - 23
# Advance one block to 581
test_blocks = self.generate_blocks(1, 1234)
yield TestInstance(test_blocks, sync_every_block=False) # 24
# Height txs should fail and time txs should now pass 9 * 600 > 10 * 512
bip68success_txs.extend(bip68timetxs)
yield TestInstance([[self.create_test_block(bip68success_txs), True]]) # 25
self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash())
for tx in bip68heighttxs:
yield TestInstance([[self.create_test_block([tx]), False]]) # 26 - 29
# Advance one block to 582
test_blocks = self.generate_blocks(1, 1234)
yield TestInstance(test_blocks, sync_every_block=False) # 30
# All BIP 68 txs should pass
bip68success_txs.extend(bip68heighttxs)
yield TestInstance([[self.create_test_block(bip68success_txs), True]]) # 31
self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash())
### BIP 112 ###
### Version 1 txs ###
# -1 OP_CSV tx should fail
yield TestInstance([[self.create_test_block([bip112tx_special_v1]), False]]) #32
# If SEQUENCE_LOCKTIME_DISABLE_FLAG is set in argument to OP_CSV, version 1 txs should still pass
success_txs = []
for b25 in xrange(2):
for b22 in xrange(2):
for b18 in xrange(2):
success_txs.append(bip112txs_vary_OP_CSV_v1[1][b25][b22][b18])
success_txs.append(bip112txs_vary_OP_CSV_9_v1[1][b25][b22][b18])
yield TestInstance([[self.create_test_block(success_txs), True]]) # 33
self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash())
# If SEQUENCE_LOCKTIME_DISABLE_FLAG is unset in argument to OP_CSV, version 1 txs should now fail
fail_txs = []
fail_txs.extend(all_rlt_txs(bip112txs_vary_nSequence_v1))
fail_txs.extend(all_rlt_txs(bip112txs_vary_nSequence_9_v1))
for b25 in xrange(2):
for b22 in xrange(2):
for b18 in xrange(2):
fail_txs.append(bip112txs_vary_OP_CSV_v1[0][b25][b22][b18])
fail_txs.append(bip112txs_vary_OP_CSV_9_v1[0][b25][b22][b18])
for tx in fail_txs:
yield TestInstance([[self.create_test_block([tx]), False]]) # 34 - 81
### Version 2 txs ###
# -1 OP_CSV tx should fail
yield TestInstance([[self.create_test_block([bip112tx_special_v2]), False]]) #82
# If SEQUENCE_LOCKTIME_DISABLE_FLAG is set in argument to OP_CSV, version 2 txs should pass (all sequence locks are met)
success_txs = []
for b25 in xrange(2):
for b22 in xrange(2):
for b18 in xrange(2):
success_txs.append(bip112txs_vary_OP_CSV_v2[1][b25][b22][b18]) # 8/16 of vary_OP_CSV
success_txs.append(bip112txs_vary_OP_CSV_9_v2[1][b25][b22][b18]) # 8/16 of vary_OP_CSV_9
yield TestInstance([[self.create_test_block(success_txs), True]]) # 83
self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash())
## SEQUENCE_LOCKTIME_DISABLE_FLAG is unset in argument to OP_CSV for all remaining txs ##
# All txs with nSequence 9 should fail either due to earlier mismatch or failing the CSV check
fail_txs = []
fail_txs.extend(all_rlt_txs(bip112txs_vary_nSequence_9_v2)) # 16/16 of vary_nSequence_9
for b25 in xrange(2):
for b22 in xrange(2):
for b18 in xrange(2):
fail_txs.append(bip112txs_vary_OP_CSV_9_v2[0][b25][b22][b18]) # 16/16 of vary_OP_CSV_9
for tx in fail_txs:
yield TestInstance([[self.create_test_block([tx]), False]]) # 84 - 107
# If SEQUENCE_LOCKTIME_DISABLE_FLAG is set in nSequence, tx should fail
fail_txs = []
for b25 in xrange(2):
for b22 in xrange(2):
for b18 in xrange(2):
fail_txs.append(bip112txs_vary_nSequence_v2[1][b25][b22][b18]) # 8/16 of vary_nSequence
for tx in fail_txs:
yield TestInstance([[self.create_test_block([tx]), False]]) # 108-115
# If sequencelock types mismatch, tx should fail
fail_txs = []
for b25 in xrange(2):
for b18 in xrange(2):
fail_txs.append(bip112txs_vary_nSequence_v2[0][b25][1][b18]) # 12/16 of vary_nSequence
fail_txs.append(bip112txs_vary_OP_CSV_v2[0][b25][1][b18]) # 12/16 of vary_OP_CSV
for tx in fail_txs:
yield TestInstance([[self.create_test_block([tx]), False]]) # 116-123
# Remaining txs should pass, just test masking works properly
success_txs = []
for b25 in xrange(2):
for b18 in xrange(2):
success_txs.append(bip112txs_vary_nSequence_v2[0][b25][0][b18]) # 16/16 of vary_nSequence
success_txs.append(bip112txs_vary_OP_CSV_v2[0][b25][0][b18]) # 16/16 of vary_OP_CSV
yield TestInstance([[self.create_test_block(success_txs), True]]) # 124
self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash())
# Additional test, of checking that comparison of two time types works properly
time_txs = []
for b25 in xrange(2):
for b18 in xrange(2):
tx = bip112txs_vary_OP_CSV_v2[0][b25][1][b18]
tx.vin[0].nSequence = base_relative_locktime | seq_type_flag
signtx = self.sign_transaction(self.nodes[0], tx)
time_txs.append(signtx)
yield TestInstance([[self.create_test_block(time_txs), True]]) # 125
self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash())
### Missing aspects of test
## Testing empty stack fails
if __name__ == '__main__':
BIP68_112_113Test().main()

qa/rpc-tests/bip68-sequence.py (Executable file, 425 lines added)
View File

@ -0,0 +1,425 @@
#!/usr/bin/env python2
# Copyright (c) 2014-2015 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
#
# Test BIP68 implementation
#
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import *
from test_framework.script import *
from test_framework.mininode import *
from test_framework.blocktools import *
COIN = 100000000
SEQUENCE_LOCKTIME_DISABLE_FLAG = (1<<31)
SEQUENCE_LOCKTIME_TYPE_FLAG = (1<<22) # this means use time (0 means height)
SEQUENCE_LOCKTIME_GRANULARITY = 9 # this is a bit-shift
SEQUENCE_LOCKTIME_MASK = 0x0000ffff
# RPC error for non-BIP68 final transactions
NOT_FINAL_ERROR = "64: non-BIP68-final"
class BIP68Test(BitcoinTestFramework):
def setup_network(self):
self.nodes = []
self.nodes.append(start_node(0, self.options.tmpdir, ["-debug", "-blockprioritysize=0"]))
self.nodes.append(start_node(1, self.options.tmpdir, ["-debug", "-blockprioritysize=0", "-acceptnonstdtxn=0"]))
self.is_network_split = False
self.relayfee = self.nodes[0].getnetworkinfo()["relayfee"]
connect_nodes(self.nodes[0], 1)
def run_test(self):
# Generate some coins
self.nodes[0].generate(110)
print "Running test disable flag"
self.test_disable_flag()
print "Running test sequence-lock-confirmed-inputs"
self.test_sequence_lock_confirmed_inputs()
print "Running test sequence-lock-unconfirmed-inputs"
self.test_sequence_lock_unconfirmed_inputs()
print "Running test BIP68 not consensus before versionbits activation"
self.test_bip68_not_consensus()
print "Verifying nVersion=2 transactions aren't standard"
self.test_version2_relay(before_activation=True)
print "Activating BIP68 (and 112/113)"
self.activateCSV()
print "Verifying nVersion=2 transactions are now standard"
self.test_version2_relay(before_activation=False)
print "Passed\n"
# Test that BIP68 is not in effect if tx version is 1, or if
# the first sequence bit is set.
def test_disable_flag(self):
# Create some unconfirmed inputs
new_addr = self.nodes[0].getnewaddress()
self.nodes[0].sendtoaddress(new_addr, 2) # send 2 BTC
utxos = self.nodes[0].listunspent(0, 0)
assert(len(utxos) > 0)
utxo = utxos[0]
tx1 = CTransaction()
value = int(satoshi_round(utxo["amount"] - self.relayfee)*COIN)
# Check that the disable flag disables relative locktime.
# If sequence locks were used, this would require 1 block for the
# input to mature.
sequence_value = SEQUENCE_LOCKTIME_DISABLE_FLAG | 1
tx1.vin = [CTxIn(COutPoint(int(utxo["txid"], 16), utxo["vout"]), nSequence=sequence_value)]
tx1.vout = [CTxOut(value, CScript([b'a']))]
tx1_signed = self.nodes[0].signrawtransaction(ToHex(tx1))["hex"]
tx1_id = self.nodes[0].sendrawtransaction(tx1_signed)
tx1_id = int(tx1_id, 16)
# This transaction will enable sequence-locks, so this transaction should
# fail
tx2 = CTransaction()
tx2.nVersion = 2
sequence_value = sequence_value & 0x7fffffff
tx2.vin = [CTxIn(COutPoint(tx1_id, 0), nSequence=sequence_value)]
tx2.vout = [CTxOut(int(value-self.relayfee*COIN), CScript([b'a']))]
tx2.rehash()
try:
self.nodes[0].sendrawtransaction(ToHex(tx2))
except JSONRPCException as exp:
assert_equal(exp.error["message"], NOT_FINAL_ERROR)
else:
assert(False)
# Setting the version back down to 1 should disable the sequence lock,
# so this should be accepted.
tx2.nVersion = 1
self.nodes[0].sendrawtransaction(ToHex(tx2))
# Calculate the median time past of a prior block ("confirmations" before
# the current tip).
def get_median_time_past(self, confirmations):
block_hash = self.nodes[0].getblockhash(self.nodes[0].getblockcount()-confirmations)
return self.nodes[0].getblockheader(block_hash)["mediantime"]
# Test that sequence locks are respected for transactions spending confirmed inputs.
def test_sequence_lock_confirmed_inputs(self):
# Create lots of confirmed utxos, and use them to generate lots of random
# transactions.
max_outputs = 50
addresses = []
while len(addresses) < max_outputs:
addresses.append(self.nodes[0].getnewaddress())
while len(self.nodes[0].listunspent()) < 200:
import random
random.shuffle(addresses)
num_outputs = random.randint(1, max_outputs)
outputs = {}
for i in xrange(num_outputs):
outputs[addresses[i]] = random.randint(1, 20)*0.01
self.nodes[0].sendmany("", outputs)
self.nodes[0].generate(1)
utxos = self.nodes[0].listunspent()
# Try creating a lot of random transactions.
# Each time, choose a random number of inputs, and randomly set
# some of those inputs to be sequence locked (and randomly choose
# between height/time locking). Small random chance of making the locks
# all pass.
for i in xrange(400):
# Randomly choose up to 10 inputs
num_inputs = random.randint(1, 10)
random.shuffle(utxos)
# Track whether any sequence locks used should fail
should_pass = True
# Track whether this transaction was built with sequence locks
using_sequence_locks = False
tx = CTransaction()
tx.nVersion = 2
value = 0
for j in xrange(num_inputs):
sequence_value = 0xfffffffe # this disables sequence locks
# 50% chance we enable sequence locks
if random.randint(0,1):
using_sequence_locks = True
# 10% of the time, make the input sequence value pass
input_will_pass = (random.randint(1,10) == 1)
sequence_value = utxos[j]["confirmations"]
if not input_will_pass:
sequence_value += 1
should_pass = False
# Figure out what the median-time-past was for the confirmed input
# Note that if an input has N confirmations, we're going back N blocks
# from the tip so that we're looking up MTP of the block
# PRIOR to the one the input appears in, as per the BIP68 spec.
orig_time = self.get_median_time_past(utxos[j]["confirmations"])
cur_time = self.get_median_time_past(0) # MTP of the tip
# can only timelock this input if it's not too old -- otherwise use height
can_time_lock = True
if ((cur_time - orig_time) >> SEQUENCE_LOCKTIME_GRANULARITY) >= SEQUENCE_LOCKTIME_MASK:
can_time_lock = False
# if time-lockable, then 50% chance we make this a time lock
if random.randint(0,1) and can_time_lock:
# Find first time-lock value that fails, or latest one that succeeds
time_delta = sequence_value << SEQUENCE_LOCKTIME_GRANULARITY
if input_will_pass and time_delta > cur_time - orig_time:
sequence_value = ((cur_time - orig_time) >> SEQUENCE_LOCKTIME_GRANULARITY)
elif (not input_will_pass and time_delta <= cur_time - orig_time):
sequence_value = ((cur_time - orig_time) >> SEQUENCE_LOCKTIME_GRANULARITY)+1
sequence_value |= SEQUENCE_LOCKTIME_TYPE_FLAG
tx.vin.append(CTxIn(COutPoint(int(utxos[j]["txid"], 16), utxos[j]["vout"]), nSequence=sequence_value))
value += utxos[j]["amount"]*COIN
# Overestimate the size of the tx - signatures should be less than 120 bytes, and leave 50 for the output
tx_size = len(ToHex(tx))//2 + 120*num_inputs + 50
tx.vout.append(CTxOut(int(value-self.relayfee*tx_size*COIN/1000), CScript([b'a'])))
rawtx = self.nodes[0].signrawtransaction(ToHex(tx))["hex"]
try:
self.nodes[0].sendrawtransaction(rawtx)
except JSONRPCException as exp:
assert(not should_pass and using_sequence_locks)
assert_equal(exp.error["message"], NOT_FINAL_ERROR)
else:
assert(should_pass or not using_sequence_locks)
# Recalculate utxos if we successfully sent the transaction
utxos = self.nodes[0].listunspent()
# Test that sequence locks on unconfirmed inputs must have nSequence
# height or time of 0 to be accepted.
# Then test that BIP68-invalid transactions are removed from the mempool
# after a reorg.
def test_sequence_lock_unconfirmed_inputs(self):
# Store height so we can easily reset the chain at the end of the test
cur_height = self.nodes[0].getblockcount()
# Create a mempool tx.
txid = self.nodes[0].sendtoaddress(self.nodes[0].getnewaddress(), 2)
tx1 = FromHex(CTransaction(), self.nodes[0].getrawtransaction(txid))
tx1.rehash()
# Anyone-can-spend mempool tx.
# Sequence lock of 0 should pass.
tx2 = CTransaction()
tx2.nVersion = 2
tx2.vin = [CTxIn(COutPoint(tx1.sha256, 0), nSequence=0)]
tx2.vout = [CTxOut(int(tx1.vout[0].nValue - self.relayfee*COIN), CScript([b'a']))]
tx2_raw = self.nodes[0].signrawtransaction(ToHex(tx2))["hex"]
tx2 = FromHex(tx2, tx2_raw)
tx2.rehash()
self.nodes[0].sendrawtransaction(tx2_raw)
# Create a spend of the 0th output of orig_tx with a sequence lock
# of 1, and test what happens when submitting.
# orig_tx.vout[0] must be an anyone-can-spend output
def test_nonzero_locks(orig_tx, node, relayfee, use_height_lock):
sequence_value = 1
if not use_height_lock:
sequence_value |= SEQUENCE_LOCKTIME_TYPE_FLAG
tx = CTransaction()
tx.nVersion = 2
tx.vin = [CTxIn(COutPoint(orig_tx.sha256, 0), nSequence=sequence_value)]
tx.vout = [CTxOut(int(orig_tx.vout[0].nValue - relayfee*COIN), CScript([b'a']))]
tx.rehash()
try:
node.sendrawtransaction(ToHex(tx))
except JSONRPCException as exp:
assert_equal(exp.error["message"], NOT_FINAL_ERROR)
assert(orig_tx.hash in node.getrawmempool())
else:
# orig_tx must not be in mempool
assert(orig_tx.hash not in node.getrawmempool())
return tx
test_nonzero_locks(tx2, self.nodes[0], self.relayfee, use_height_lock=True)
test_nonzero_locks(tx2, self.nodes[0], self.relayfee, use_height_lock=False)
# Now mine some blocks, but make sure tx2 doesn't get mined.
# Use prioritisetransaction to lower the effective feerate to 0
self.nodes[0].prioritisetransaction(tx2.hash, -1e15, int(-self.relayfee*COIN))
cur_time = int(time.time())
for i in xrange(10):
self.nodes[0].setmocktime(cur_time + 600)
self.nodes[0].generate(1)
cur_time += 600
assert(tx2.hash in self.nodes[0].getrawmempool())
test_nonzero_locks(tx2, self.nodes[0], self.relayfee, use_height_lock=True)
test_nonzero_locks(tx2, self.nodes[0], self.relayfee, use_height_lock=False)
# Mine tx2, and then try again
self.nodes[0].prioritisetransaction(tx2.hash, 1e15, int(self.relayfee*COIN))
# Advance the time on the node so that we can test timelocks
self.nodes[0].setmocktime(cur_time+600)
self.nodes[0].generate(1)
assert(tx2.hash not in self.nodes[0].getrawmempool())
# Now that tx2 is not in the mempool, a sequence locked spend should
# succeed
tx3 = test_nonzero_locks(tx2, self.nodes[0], self.relayfee, use_height_lock=False)
assert(tx3.hash in self.nodes[0].getrawmempool())
self.nodes[0].generate(1)
assert(tx3.hash not in self.nodes[0].getrawmempool())
# One more test, this time using height locks
tx4 = test_nonzero_locks(tx3, self.nodes[0], self.relayfee, use_height_lock=True)
assert(tx4.hash in self.nodes[0].getrawmempool())
# Now try combining confirmed and unconfirmed inputs
tx5 = test_nonzero_locks(tx4, self.nodes[0], self.relayfee, use_height_lock=True)
assert(tx5.hash not in self.nodes[0].getrawmempool())
utxos = self.nodes[0].listunspent()
tx5.vin.append(CTxIn(COutPoint(int(utxos[0]["txid"], 16), utxos[0]["vout"]), nSequence=1))
tx5.vout[0].nValue += int(utxos[0]["amount"]*COIN)
raw_tx5 = self.nodes[0].signrawtransaction(ToHex(tx5))["hex"]
try:
self.nodes[0].sendrawtransaction(raw_tx5)
except JSONRPCException as exp:
assert_equal(exp.error["message"], NOT_FINAL_ERROR)
else:
assert(False)
# Test mempool-BIP68 consistency after reorg
#
# State of the transactions in the last blocks:
# ... -> [ tx2 ] -> [ tx3 ]
# tip-1 tip
# And currently tx4 is in the mempool.
#
# If we invalidate the tip, tx3 should get added to the mempool, causing
# tx4 to be removed (fails sequence-lock).
self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash())
assert(tx4.hash not in self.nodes[0].getrawmempool())
assert(tx3.hash in self.nodes[0].getrawmempool())
# Now mine 2 empty blocks to reorg out the current tip (labeled tip-1 in
# diagram above).
# This would cause tx2 to be added back to the mempool, which in turn causes
# tx3 to be removed.
tip = int(self.nodes[0].getblockhash(self.nodes[0].getblockcount()-1), 16)
height = self.nodes[0].getblockcount()
for i in xrange(2):
block = create_block(tip, create_coinbase(height), cur_time)
block.nVersion = 3
block.rehash()
block.solve()
tip = block.sha256
height += 1
self.nodes[0].submitblock(ToHex(block))
cur_time += 1
mempool = self.nodes[0].getrawmempool()
assert(tx3.hash not in mempool)
assert(tx2.hash in mempool)
# Reset the chain and get rid of the mocktimed-blocks
self.nodes[0].setmocktime(0)
self.nodes[0].invalidateblock(self.nodes[0].getblockhash(cur_height+1))
self.nodes[0].generate(10)
# Make sure that BIP68 isn't being used to validate blocks, prior to
# versionbits activation. If more blocks are mined prior to this test
# being run, then it's possible the test has activated the soft fork, and
# this test should be moved to run earlier, or deleted.
def test_bip68_not_consensus(self):
assert(get_bip9_status(self.nodes[0], 'csv')['status'] != 'active')
txid = self.nodes[0].sendtoaddress(self.nodes[0].getnewaddress(), 2)
tx1 = FromHex(CTransaction(), self.nodes[0].getrawtransaction(txid))
tx1.rehash()
# Make an anyone-can-spend transaction
tx2 = CTransaction()
tx2.nVersion = 1
tx2.vin = [CTxIn(COutPoint(tx1.sha256, 0), nSequence=0)]
tx2.vout = [CTxOut(int(tx1.vout[0].nValue - self.relayfee*COIN), CScript([b'a']))]
# sign tx2
tx2_raw = self.nodes[0].signrawtransaction(ToHex(tx2))["hex"]
tx2 = FromHex(tx2, tx2_raw)
tx2.rehash()
self.nodes[0].sendrawtransaction(ToHex(tx2))
# Now make an invalid spend of tx2 according to BIP68
sequence_value = 100 # 100 block relative locktime
tx3 = CTransaction()
tx3.nVersion = 2
tx3.vin = [CTxIn(COutPoint(tx2.sha256, 0), nSequence=sequence_value)]
tx3.vout = [CTxOut(int(tx2.vout[0].nValue - self.relayfee*COIN), CScript([b'a']))]
tx3.rehash()
try:
self.nodes[0].sendrawtransaction(ToHex(tx3))
except JSONRPCException as exp:
assert_equal(exp.error["message"], NOT_FINAL_ERROR)
else:
assert(False)
# make a block that violates bip68; ensure that the tip updates
tip = int(self.nodes[0].getbestblockhash(), 16)
block = create_block(tip, create_coinbase(self.nodes[0].getblockcount()+1))
block.nVersion = 3
block.vtx.extend([tx1, tx2, tx3])
block.hashMerkleRoot = block.calc_merkle_root()
block.rehash()
block.solve()
self.nodes[0].submitblock(ToHex(block))
assert_equal(self.nodes[0].getbestblockhash(), block.hash)
def activateCSV(self):
# activation should happen at block height 432 (3 periods)
min_activation_height = 432
height = self.nodes[0].getblockcount()
assert(height < 432)
self.nodes[0].generate(432-height)
assert(get_bip9_status(self.nodes[0], 'csv')['status'] == 'active')
sync_blocks(self.nodes)
# Use self.nodes[1] to test standardness relay policy
def test_version2_relay(self, before_activation):
inputs = [ ]
outputs = { self.nodes[1].getnewaddress() : 1.0 }
rawtx = self.nodes[1].createrawtransaction(inputs, outputs)
rawtxfund = self.nodes[1].fundrawtransaction(rawtx)['hex']
tx = FromHex(CTransaction(), rawtxfund)
tx.nVersion = 2
tx_signed = self.nodes[1].signrawtransaction(ToHex(tx))["hex"]
try:
tx_id = self.nodes[1].sendrawtransaction(tx_signed)
assert(before_activation == False)
except:
assert(before_activation)
if __name__ == '__main__':
BIP68Test().main()

qa/rpc-tests/bip9-softforks.py (Executable file, 220 lines added)
View File

@ -0,0 +1,220 @@
#!/usr/bin/env python2
# Copyright (c) 2015 The Bitcoin Core developers
# Distributed under the MIT/X11 software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
#
from test_framework.test_framework import ComparisonTestFramework
from test_framework.util import *
from test_framework.mininode import CTransaction, NetworkThread
from test_framework.blocktools import create_coinbase, create_block
from test_framework.comptool import TestInstance, TestManager
from test_framework.script import CScript, OP_1NEGATE, OP_NOP3, OP_DROP
from binascii import hexlify, unhexlify
import cStringIO
import time
import itertools
'''
This test is meant to exercise BIP forks
Connect to a single node.
regtest lock-in with 108/144 block signalling
activation after a further 144 blocks
mine 2 block and save coinbases for later use
mine 141 blocks to transition from DEFINED to STARTED
mine 100 blocks signalling readiness and 44 not in order to fail to change state this period
mine 108 blocks signalling readiness and 36 blocks not signalling readiness (STARTED->LOCKED_IN)
mine a further 143 blocks (LOCKED_IN)
test that enforcement has not triggered (which triggers ACTIVE)
test that enforcement has triggered
'''
class BIP9SoftForksTest(ComparisonTestFramework):
def __init__(self):
self.num_nodes = 1
def setup_network(self):
self.nodes = start_nodes(1, self.options.tmpdir,
extra_args=[['-debug', '-whitelist=127.0.0.1']],
binary=[self.options.testbinary])
def run_test(self):
self.test = TestManager(self, self.options.tmpdir)
self.test.add_all_connections(self.nodes)
NetworkThread().start() # Start up network handling in another thread
self.test.run()
def create_transaction(self, node, coinbase, to_address, amount):
from_txid = node.getblock(coinbase)['tx'][0]
inputs = [{ "txid" : from_txid, "vout" : 0}]
outputs = { to_address : amount }
rawtx = node.createrawtransaction(inputs, outputs)
tx = CTransaction()
f = cStringIO.StringIO(unhexlify(rawtx))
tx.deserialize(f)
tx.nVersion = 2
return tx
def sign_transaction(self, node, tx):
signresult = node.signrawtransaction(hexlify(tx.serialize()))
tx = CTransaction()
f = cStringIO.StringIO(unhexlify(signresult['hex']))
tx.deserialize(f)
return tx
def generate_blocks(self, number, version, test_blocks = []):
for i in xrange(number):
block = create_block(self.tip, create_coinbase(self.height), self.last_block_time + 1)
block.nVersion = version
block.rehash()
block.solve()
test_blocks.append([block, True])
self.last_block_time += 1
self.tip = block.sha256
self.height += 1
return test_blocks
def get_bip9_status(self, key):
info = self.nodes[0].getblockchaininfo()
for row in info['bip9_softforks']:
if row['id'] == key:
return row
raise IndexError ('key:"%s" not found' % key)
def test_BIP(self, bipName, activated_version, invalidate, invalidatePostSignature):
# generate some coins for later
self.coinbase_blocks = self.nodes[0].generate(2)
self.height = 3 # height of the next block to build
self.tip = int ("0x" + self.nodes[0].getbestblockhash() + "L", 0)
self.nodeaddress = self.nodes[0].getnewaddress()
self.last_block_time = int(time.time())
assert_equal(self.get_bip9_status(bipName)['status'], 'defined')
# Test 1
# Advance from DEFINED to STARTED
test_blocks = self.generate_blocks(141, 4)
yield TestInstance(test_blocks, sync_every_block=False)
assert_equal(self.get_bip9_status(bipName)['status'], 'started')
# Test 2
# Fail to achieve LOCKED_IN 100 out of 144 signal bit 1
# using a variety of bits to simulate multiple parallel softforks
test_blocks = self.generate_blocks(50, activated_version) # 0x20000001 (signalling ready)
test_blocks = self.generate_blocks(20, 4, test_blocks) # 0x00000004 (signalling not)
test_blocks = self.generate_blocks(50, activated_version, test_blocks) # 0x20000101 (signalling ready)
test_blocks = self.generate_blocks(24, 4, test_blocks) # 0x20010000 (signalling not)
yield TestInstance(test_blocks, sync_every_block=False)
assert_equal(self.get_bip9_status(bipName)['status'], 'started')
# Test 3
# 108 out of 144 signal bit 1 to achieve LOCKED_IN
# using a variety of bits to simulate multiple parallel softforks
test_blocks = self.generate_blocks(58, activated_version) # 0x20000001 (signalling ready)
test_blocks = self.generate_blocks(26, 4, test_blocks) # 0x00000004 (signalling not)
test_blocks = self.generate_blocks(50, activated_version, test_blocks) # 0x20000101 (signalling ready)
test_blocks = self.generate_blocks(10, 4, test_blocks) # 0x20010000 (signalling not)
yield TestInstance(test_blocks, sync_every_block=False)
assert_equal(self.get_bip9_status(bipName)['status'], 'locked_in')
# Test 4
# 143 more version 536870913 blocks (waiting period-1)
test_blocks = self.generate_blocks(143, 4)
yield TestInstance(test_blocks, sync_every_block=False)
assert_equal(self.get_bip9_status(bipName)['status'], 'locked_in')
# Test 5
# Check that the new rule is enforced
spendtx = self.create_transaction(self.nodes[0],
self.coinbase_blocks[0], self.nodeaddress, 1.0)
invalidate(spendtx)
spendtx = self.sign_transaction(self.nodes[0], spendtx)
spendtx.rehash()
invalidatePostSignature(spendtx)
spendtx.rehash()
block = create_block(self.tip, create_coinbase(self.height), self.last_block_time + 1)
block.nVersion = activated_version
block.vtx.append(spendtx)
block.hashMerkleRoot = block.calc_merkle_root()
block.rehash()
block.solve()
self.last_block_time += 1
self.tip = block.sha256
self.height += 1
yield TestInstance([[block, True]])
assert_equal(self.get_bip9_status(bipName)['status'], 'active')
# Test 6
# Check that the new sequence lock rules are enforced
spendtx = self.create_transaction(self.nodes[0],
self.coinbase_blocks[1], self.nodeaddress, 1.0)
invalidate(spendtx)
spendtx = self.sign_transaction(self.nodes[0], spendtx)
spendtx.rehash()
invalidatePostSignature(spendtx)
spendtx.rehash()
block = create_block(self.tip, create_coinbase(self.height), self.last_block_time + 1)
block.nVersion = 5
block.vtx.append(spendtx)
block.hashMerkleRoot = block.calc_merkle_root()
block.rehash()
block.solve()
self.last_block_time += 1
yield TestInstance([[block, False]])
# Restart all
stop_nodes(self.nodes)
wait_bitcoinds()
shutil.rmtree(self.options.tmpdir)
self.setup_chain()
self.setup_network()
self.test.clear_all_connections()
self.test.add_all_connections(self.nodes)
NetworkThread().start() # Start up network handling in another thread
def get_tests(self):
for test in itertools.chain(
self.test_BIP('csv', 536870913, self.sequence_lock_invalidate, self.donothing),
self.test_BIP('csv', 536870913, self.mtp_invalidate, self.donothing),
self.test_BIP('csv', 536870913, self.donothing, self.csv_invalidate)
):
yield test
def donothing(self, tx):
return
def csv_invalidate(self, tx):
'''Modify the signature in vin 0 of the tx to fail CSV
Prepends -1 CSV DROP to the scriptSig itself.
'''
tx.vin[0].scriptSig = CScript([OP_1NEGATE, OP_NOP3, OP_DROP] +
list(CScript(tx.vin[0].scriptSig)))
def sequence_lock_invalidate(self, tx):
'''Modify the nSequence to make it fail once the sequence lock rule is activated (high timespan)
'''
tx.vin[0].nSequence = 0x00FFFFFF
tx.nLockTime = 0
def mtp_invalidate(self, tx):
'''Modify the nLockTime to make it fail once the MTP rule is activated
'''
# Disable Sequence lock, Activate nLockTime
tx.vin[0].nSequence = 0x90FFFFFF
tx.nLockTime = self.last_block_time
if __name__ == '__main__':
BIP9SoftForksTest().main()
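The block versions used throughout this test follow BIP9 signalling: the top three bits must equal 001 (0x20000000) and the deployment's assigned bit must be set, bit 0 for CSV per the chainparams.cpp changes further down. A minimal standalone Python sketch of that interpretation, not part of the test suite (the top-bits mask is an assumption matching the versionbits constants):

VB_TOP_BITS = 0x20000000   # same constant the tests use
VB_TOP_MASK = 0xE0000000   # assumed mask covering the top three bits
CSV_BIT = 0                # DEPLOYMENT_CSV bit per chainparams.cpp below

def signals(block_version, bit):
    # True if the block uses BIP9 semantics and sets the given deployment bit.
    return (block_version & VB_TOP_MASK) == VB_TOP_BITS and ((block_version >> bit) & 1) == 1

assert signals(536870913, CSV_BIT)   # 0x20000001, the activated_version used above
assert not signals(4, CSV_BIT)       # plain version-4 blocks signal nothing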

View File

@ -44,7 +44,7 @@ class HTTPBasicsTest (BitcoinTestFramework):
#Old authpair
authpair = url.username + ':' + url.password
#New authpair generated via contrib/rpcuser tool
#New authpair generated via share/rpcuser tool
rpcauth = "rpcauth=rt:93648e835a54c573682c2eb19f882535$7681e9c5b74bdd85e78166031d2058e1069b3ed7ed967c93fc63abba06f31144"
password = "cA773lm788buwYe4g4WT+05pKyNruVKjQ25x3n0DQcM="

View File

@ -0,0 +1,160 @@
#!/usr/bin/env python2
# Copyright (c) 2016 The Bitcoin Core developers
# Distributed under the MIT/X11 software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
#
from test_framework.mininode import *
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import *
import time
from test_framework.blocktools import create_block, create_coinbase
'''
Test version bits' warning system.
Generate chains with block versions that appear to be signalling unknown
soft-forks, and test that warning alerts are generated.
'''
VB_PERIOD = 144 # versionbits period length for regtest
VB_THRESHOLD = 108 # versionbits activation threshold for regtest
VB_TOP_BITS = 0x20000000
VB_UNKNOWN_BIT = 27 # Choose a bit unassigned to any deployment
# TestNode: bare-bones "peer". Used mostly as a conduit for a test to send
# p2p messages to a node, generating the messages in the main testing logic.
class TestNode(NodeConnCB):
def __init__(self):
NodeConnCB.__init__(self)
self.connection = None
self.ping_counter = 1
self.last_pong = msg_pong()
def add_connection(self, conn):
self.connection = conn
def on_inv(self, conn, message):
pass
# Wrapper for the NodeConn's send_message function
def send_message(self, message):
self.connection.send_message(message)
def on_pong(self, conn, message):
self.last_pong = message
# Sync up with the node after delivery of a block
def sync_with_ping(self, timeout=30):
self.connection.send_message(msg_ping(nonce=self.ping_counter))
received_pong = False
sleep_time = 0.05
while not received_pong and timeout > 0:
time.sleep(sleep_time)
timeout -= sleep_time
with mininode_lock:
if self.last_pong.nonce == self.ping_counter:
received_pong = True
self.ping_counter += 1
return received_pong
class VersionBitsWarningTest(BitcoinTestFramework):
def setup_chain(self):
initialize_chain_clean(self.options.tmpdir, 1)
def setup_network(self):
self.nodes = []
self.alert_filename = os.path.join(self.options.tmpdir, "alert.txt")
# Open and close to create zero-length file
with open(self.alert_filename, 'w') as f:
pass
self.node_options = ["-debug", "-logtimemicros=1", "-alertnotify=echo %s >> \"" + self.alert_filename + "\""]
self.nodes.append(start_node(0, self.options.tmpdir, self.node_options))
import re
self.vb_pattern = re.compile("^Warning.*versionbit")
# Send numblocks blocks via peer with nVersionToUse set.
def send_blocks_with_version(self, peer, numblocks, nVersionToUse):
tip = self.nodes[0].getbestblockhash()
height = self.nodes[0].getblockcount()
block_time = self.nodes[0].getblockheader(tip)["time"]+1
tip = int(tip, 16)
for i in xrange(numblocks):
block = create_block(tip, create_coinbase(height+1), block_time)
block.nVersion = nVersionToUse
block.solve()
peer.send_message(msg_block(block))
block_time += 1
height += 1
tip = block.sha256
peer.sync_with_ping()
def test_versionbits_in_alert_file(self):
with open(self.alert_filename, 'r') as f:
alert_text = f.read()
assert(self.vb_pattern.match(alert_text))
def run_test(self):
# Setup the p2p connection and start up the network thread.
test_node = TestNode()
connections = []
connections.append(NodeConn('127.0.0.1', p2p_port(0), self.nodes[0], test_node))
test_node.add_connection(connections[0])
NetworkThread().start() # Start up network handling in another thread
# Test logic begins here
test_node.wait_for_verack()
# 1. Have the node mine one period worth of blocks
self.nodes[0].generate(VB_PERIOD)
# 2. Now build one period of blocks on the tip, with < VB_THRESHOLD
# blocks signaling some unknown bit.
nVersion = VB_TOP_BITS | (1<<VB_UNKNOWN_BIT)
self.send_blocks_with_version(test_node, VB_THRESHOLD-1, nVersion)
# Fill rest of period with regular version blocks
self.nodes[0].generate(VB_PERIOD - VB_THRESHOLD + 1)
# Check that we're not getting any versionbit-related errors in
# getinfo()
assert(not self.vb_pattern.match(self.nodes[0].getinfo()["errors"]))
# 3. Now build one period of blocks with >= VB_THRESHOLD blocks signaling
# some unknown bit
self.send_blocks_with_version(test_node, VB_THRESHOLD, nVersion)
self.nodes[0].generate(VB_PERIOD - VB_THRESHOLD)
# Might not get a versionbits-related alert yet, as we should
# have gotten a different alert due to more than 51/100 blocks
# being of unexpected version.
# Check that getinfo() shows some kind of error.
assert(len(self.nodes[0].getinfo()["errors"]) != 0)
# Mine a period worth of expected blocks so the generic block-version warning
# is cleared, and restart the node. This should move the versionbit state
# to ACTIVE.
self.nodes[0].generate(VB_PERIOD)
stop_node(self.nodes[0], 0)
wait_bitcoinds()
# Empty out the alert file
with open(self.alert_filename, 'w') as f:
pass
self.nodes[0] = start_node(0, self.options.tmpdir, ["-debug", "-logtimemicros=1", "-alertnotify=echo %s >> \"" + self.alert_filename + "\""])
# Connecting one block should be enough to generate an error.
self.nodes[0].generate(1)
assert(len(self.nodes[0].getinfo()["errors"]) != 0)
stop_node(self.nodes[0], 0)
wait_bitcoinds()
self.test_versionbits_in_alert_file()
# Test framework expects the node to still be running...
self.nodes[0] = start_node(0, self.options.tmpdir, ["-debug", "-logtimemicros=1", "-alertnotify=echo %s >> \"" + self.alert_filename + "\""])
if __name__ == '__main__':
VersionBitsWarningTest().main()
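The "more than 51/100 blocks" trigger referenced in the comments above is the generic unexpected-version warning implemented in the UpdateTip() changes to main.cpp later in this merge. A rough standalone sketch of that check (the version-4 cutoff is an assumption matching VERSIONBITS_LAST_OLD_BLOCK_VERSION):

def unexpected_version_warning(last_100_versions, expected_version):
    # Count blocks whose version sets bits our own ComputeBlockVersion() would not set.
    upgraded = sum(1 for v in last_100_versions
                   if v > 4 and (v & ~expected_version) != 0)
    return upgraded > 100 // 2   # warn once more than half are unexpected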

View File

@ -193,6 +193,10 @@ class TestManager(object):
# associated NodeConn
test_node.add_connection(self.connections[-1])
def clear_all_connections(self):
self.connections = []
self.test_nodes = []
def wait_for_disconnections(self):
def disconnected():
return all(node.closed for node in self.test_nodes)

View File

@ -235,6 +235,14 @@ def ser_int_vector(l):
r += struct.pack("<i", i)
return r
# Deserialize from a hex string representation (eg from RPC)
def FromHex(obj, hex_string):
obj.deserialize(cStringIO.StringIO(binascii.unhexlify(hex_string)))
return obj
# Convert a binary-serializable object to hex (eg for submission via RPC)
def ToHex(obj):
return binascii.hexlify(obj.serialize()).decode('utf-8')
# Objects that map to dashd objects, which can be serialized/deserialized

View File

@ -491,3 +491,10 @@ def create_lots_of_big_transactions(node, txouts, utxos, fee):
txid = node.sendrawtransaction(signresult["hex"], True)
txids.append(txid)
return txids
def get_bip9_status(node, key):
info = node.getblockchaininfo()
for row in info['bip9_softforks']:
if row['id'] == key:
return row
raise IndexError ('key:"%s" not found' % key)

View File

@ -7,5 +7,4 @@ Create an RPC user login credential.
Usage:
./rpcuser.py <username>
./rpcuser.py <username>

View File

@ -178,6 +178,7 @@ BITCOIN_CORE_H = \
utiltime.h \
validationinterface.h \
version.h \
versionbits.h \
wallet/crypter.h \
wallet/db.h \
wallet/wallet.h \
@ -233,6 +234,7 @@ libbitcoin_server_a_SOURCES = \
txdb.cpp \
txmempool.cpp \
validationinterface.cpp \
versionbits.cpp \
$(BITCOIN_CORE_H)
if ENABLE_ZMQ

View File

@ -82,6 +82,7 @@ BITCOIN_TESTS =\
test/timedata_tests.cpp \
test/transaction_tests.cpp \
test/txvalidationcache_tests.cpp \
test/versionbits_tests.cpp \
test/uint256_tests.cpp \
test/univalue_tests.cpp \
test/util_tests.cpp

View File

@ -14,8 +14,6 @@
#include <vector>
#include <boost/foreach.hpp>
struct CDiskBlockPos
{
int nFile;

View File

@ -91,6 +91,17 @@ public:
consensus.nPowTargetSpacing = 2.5 * 60; // Dash: 2.5 minutes
consensus.fPowAllowMinDifficultyBlocks = false;
consensus.fPowNoRetargeting = false;
consensus.nRuleChangeActivationThreshold = 1916; // 95% of 2016
consensus.nMinerConfirmationWindow = 2016; // nPowTargetTimespan / nPowTargetSpacing
consensus.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].bit = 28;
consensus.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].nStartTime = 1199145601; // January 1, 2008
consensus.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].nTimeout = 1230767999; // December 31, 2008
// Deployment of BIP68, BIP112, and BIP113.
consensus.vDeployments[Consensus::DEPLOYMENT_CSV].bit = 0;
consensus.vDeployments[Consensus::DEPLOYMENT_CSV].nStartTime = 1462060800; // May 1st, 2016
consensus.vDeployments[Consensus::DEPLOYMENT_CSV].nTimeout = 1493596800; // May 1st, 2017
/**
* The message start string is designed to be unlikely to occur in normal data.
* The characters are rarely used upper ASCII, not valid as UTF-8, and produce
@ -195,6 +206,16 @@ public:
consensus.nPowTargetSpacing = 2.5 * 60; // Dash: 2.5 minutes
consensus.fPowAllowMinDifficultyBlocks = true;
consensus.fPowNoRetargeting = false;
consensus.nRuleChangeActivationThreshold = 1512; // 75% for testchains
consensus.nMinerConfirmationWindow = 2016; // nPowTargetTimespan / nPowTargetSpacing
consensus.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].bit = 28;
consensus.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].nStartTime = 1199145601; // January 1, 2008
consensus.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].nTimeout = 1230767999; // December 31, 2008
// Deployment of BIP68, BIP112, and BIP113.
consensus.vDeployments[Consensus::DEPLOYMENT_CSV].bit = 0;
consensus.vDeployments[Consensus::DEPLOYMENT_CSV].nStartTime = 1456790400; // March 1st, 2016
consensus.vDeployments[Consensus::DEPLOYMENT_CSV].nTimeout = 1493596800; // May 1st, 2017
pchMessageStart[0] = 0xce;
pchMessageStart[1] = 0xe2;
@ -282,6 +303,14 @@ public:
consensus.nPowTargetSpacing = 2.5 * 60; // Dash: 2.5 minutes
consensus.fPowAllowMinDifficultyBlocks = true;
consensus.fPowNoRetargeting = true;
consensus.nRuleChangeActivationThreshold = 108; // 75% for testchains
consensus.nMinerConfirmationWindow = 144; // Faster than normal for regtest (144 instead of 2016)
consensus.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].bit = 28;
consensus.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].nStartTime = 0;
consensus.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].nTimeout = 999999999999ULL;
consensus.vDeployments[Consensus::DEPLOYMENT_CSV].bit = 0;
consensus.vDeployments[Consensus::DEPLOYMENT_CSV].nStartTime = 0;
consensus.vDeployments[Consensus::DEPLOYMENT_CSV].nTimeout = 999999999999ULL;
pchMessageStart[0] = 0xfc;
pchMessageStart[1] = 0xc1;
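The nStartTime/nTimeout values above are plain Unix timestamps compared against median-time-past. A quick standalone check of the dates given in the comments (illustrative only):

from datetime import datetime

for label, ts in [("mainnet CSV start", 1462060800),    # May 1st, 2016
                  ("testnet CSV start", 1456790400),    # March 1st, 2016
                  ("CSV timeout",       1493596800)]:   # May 1st, 2017
    print(label, datetime.utcfromtimestamp(ts).strftime("%Y-%m-%d"))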

View File

@ -17,7 +17,7 @@
#define CLIENT_VERSION_MAJOR 0
#define CLIENT_VERSION_MINOR 12
#define CLIENT_VERSION_REVISION 1
#define CLIENT_VERSION_BUILD 1
#define CLIENT_VERSION_BUILD 0
//! Set to true for release, false for prerelease or test build
#define CLIENT_VERSION_IS_RELEASE true

View File

@ -13,8 +13,11 @@ static const unsigned int MAX_BLOCK_SIGOPS = MAX_BLOCK_SIZE/50;
/** Coinbase transaction outputs can only be spent after this number of new blocks (network rule) */
static const int COINBASE_MATURITY = 100;
/** Flags for LockTime() */
/** Flags for nSequence and nLockTime locks */
enum {
/* Interpret sequence numbers as relative lock-time constraints. */
LOCKTIME_VERIFY_SEQUENCE = (1 << 0),
/* Use GetMedianTimePast() instead of nTime for end point timestamp. */
LOCKTIME_MEDIAN_TIME_PAST = (1 << 1),
};

View File

@ -7,8 +7,30 @@
#define BITCOIN_CONSENSUS_PARAMS_H
#include "uint256.h"
#include <map>
#include <string>
namespace Consensus {
enum DeploymentPos
{
DEPLOYMENT_TESTDUMMY,
DEPLOYMENT_CSV, // Deployment of BIP68, BIP112, and BIP113.
MAX_VERSION_BITS_DEPLOYMENTS
};
/**
* Struct for each individual consensus rule change using BIP9.
*/
struct BIP9Deployment {
/** Bit position to select the particular bit in nVersion. */
int bit;
/** Start MedianTime for version bits miner confirmation. Can be a date in the past */
int64_t nStartTime;
/** Timeout/expiry MedianTime for the deployment attempt. */
int64_t nTimeout;
};
/**
* Parameters that influence chain consensus.
*/
@ -30,6 +52,14 @@ struct Params {
/** Block height and hash at which BIP34 becomes active */
int BIP34Height;
uint256 BIP34Hash;
/**
* Minimum number of blocks, including miner confirmation, out of the 2016-block retargeting period
* (nPowTargetTimespan / nPowTargetSpacing), which is also used for BIP9 deployments.
* Examples: 1916 for 95%, 1512 for testchains.
*/
uint32_t nRuleChangeActivationThreshold;
uint32_t nMinerConfirmationWindow;
BIP9Deployment vDeployments[MAX_VERSION_BITS_DEPLOYMENTS];
/** Proof of work parameters */
uint256 powLimit;
bool fPowAllowMinDifficultyBlocks;

View File

@ -579,7 +579,7 @@ std::string HelpMessage(HelpMessageMode mode)
strUsage += HelpMessageOpt("-blockmaxsize=<n>", strprintf(_("Set maximum block size in bytes (default: %d)"), DEFAULT_BLOCK_MAX_SIZE));
strUsage += HelpMessageOpt("-blockprioritysize=<n>", strprintf(_("Set maximum size of high-priority/low-fee transactions in bytes (default: %d)"), DEFAULT_BLOCK_PRIORITY_SIZE));
if (showDebug)
strUsage += HelpMessageOpt("-blockversion=<n>", strprintf("Override block version to test forking scenarios (default: %d)", (int)CBlock::CURRENT_VERSION));
strUsage += HelpMessageOpt("-blockversion=<n>", "Override block version to test forking scenarios");
strUsage += HelpMessageGroup(_("RPC server options:"));
strUsage += HelpMessageOpt("-server", _("Accept command line and JSON-RPC commands"));
@ -1180,12 +1180,6 @@ bool AppInit2(boost::thread_group& threadGroup, CScheduler& scheduler)
if (fPrintToDebugLog)
OpenDebugLog();
#if (OPENSSL_VERSION_NUMBER < 0x10100000L)
LogPrintf("Using OpenSSL version %s\n", SSLeay_version(SSLEAY_VERSION));
#else
LogPrintf("Using OpenSSL version %s\n", OpenSSL_version(OPENSSL_VERSION));
#endif
#ifdef ENABLE_WALLET
LogPrintf("Using BerkeleyDB version %s\n", DbEnv::version(0, 0, 0));
#endif
@ -1915,10 +1909,17 @@ bool AppInit2(boost::thread_group& threadGroup, CScheduler& scheduler)
StartNode(threadGroup, scheduler);
// Monitor the chain, and alert if we get blocks much quicker or slower than expected
int64_t nPowTargetSpacing = Params().GetConsensus().nPowTargetSpacing;
CScheduler::Function f = boost::bind(&PartitionCheck, &IsInitialBlockDownload,
boost::ref(cs_main), boost::cref(pindexBestHeader), nPowTargetSpacing);
scheduler.scheduleEvery(f, nPowTargetSpacing);
// The "bad chain alert" scheduler has been disabled because the current system gives far
// too many false positives, such that users are starting to ignore them.
// This code will be disabled for 0.12.1 while a fix is deliberated in #7568,
// as discussed in the IRC meeting on 2016-03-31.
//
// --- disabled ---
//int64_t nPowTargetSpacing = Params().GetConsensus().nPowTargetSpacing;
//CScheduler::Function f = boost::bind(&PartitionCheck, &IsInitialBlockDownload,
// boost::ref(cs_main), boost::cref(pindexBestHeader), nPowTargetSpacing);
//scheduler.scheduleEvery(f, nPowTargetSpacing);
// --- end disabled ---
// Generate coins in the background
GenerateBitcoins(GetBoolArg("-gen", DEFAULT_GENERATE), GetArg("-genproclimit", DEFAULT_GENERATE_THREADS), chainparams);

View File

@ -42,6 +42,7 @@
#include "utilmoneystr.h"
#include "utilstrencodings.h"
#include "validationinterface.h"
#include "versionbits.h"
#include <sstream>
@ -201,16 +202,11 @@ namespace {
/** Blocks that are in flight, and that are in the queue to be downloaded. Protected by cs_main. */
struct QueuedBlock {
uint256 hash;
CBlockIndex *pindex; //! Optional.
int64_t nTime; //! Time of "getdata" request in microseconds.
bool fValidatedHeaders; //! Whether this block has validated headers at the time of request.
int64_t nTimeDisconnect; //! The timeout for this block request (for disconnecting a slow peer)
CBlockIndex* pindex; //!< Optional.
bool fValidatedHeaders; //!< Whether this block has validated headers at the time of request.
};
map<uint256, pair<NodeId, list<QueuedBlock>::iterator> > mapBlocksInFlight;
/** Number of blocks in flight with validated headers. */
int nQueuedValidatedHeaders = 0;
/** Number of preferable block download peers. */
int nPreferredDownload = 0;
@ -219,6 +215,9 @@ namespace {
/** Dirty block file entries. */
set<int> setDirtyFileInfo;
/** Number of peers from which we're downloading blocks. */
int nPeersWithValidatedDownloads = 0;
} // anon namespace
//////////////////////////////////////////////////////////////////////////////
@ -266,6 +265,8 @@ struct CNodeState {
//! Since when we're stalling block download progress (in microseconds), or 0.
int64_t nStallingSince;
list<QueuedBlock> vBlocksInFlight;
//! When the first entry in vBlocksInFlight started downloading. Don't care when vBlocksInFlight is empty.
int64_t nDownloadingSince;
int nBlocksInFlight;
int nBlocksInFlightValidHeaders;
//! Whether we consider this a preferred download peer.
@ -283,6 +284,7 @@ struct CNodeState {
pindexBestHeaderSent = NULL;
fSyncStarted = false;
nStallingSince = 0;
nDownloadingSince = 0;
nBlocksInFlight = 0;
nBlocksInFlightValidHeaders = 0;
fPreferredDownload = false;
@ -317,12 +319,6 @@ void UpdatePreferredDownload(CNode* node, CNodeState* state)
nPreferredDownload += state->fPreferredDownload;
}
// Returns time at which to timeout block request (nTime in microseconds)
int64_t GetBlockTimeout(int64_t nTime, int nValidatedQueuedBefore, const Consensus::Params &consensusParams)
{
return nTime + 500000 * consensusParams.nPowTargetSpacing * (4 + nValidatedQueuedBefore);
}
void InitializeNode(NodeId nodeid, const CNode *pnode) {
LOCK(cs_main);
CNodeState &state = mapNodeState.insert(std::make_pair(nodeid, CNodeState())).first->second;
@ -342,13 +338,21 @@ void FinalizeNode(NodeId nodeid) {
}
BOOST_FOREACH(const QueuedBlock& entry, state->vBlocksInFlight) {
nQueuedValidatedHeaders -= entry.fValidatedHeaders;
mapBlocksInFlight.erase(entry.hash);
}
EraseOrphansFor(nodeid);
nPreferredDownload -= state->fPreferredDownload;
nPeersWithValidatedDownloads -= (state->nBlocksInFlightValidHeaders != 0);
assert(nPeersWithValidatedDownloads >= 0);
mapNodeState.erase(nodeid);
if (mapNodeState.empty()) {
// Do a consistency check after the last peer is removed.
assert(mapBlocksInFlight.empty());
assert(nPreferredDownload == 0);
assert(nPeersWithValidatedDownloads == 0);
}
}
// Requires cs_main.
@ -357,8 +361,15 @@ bool MarkBlockAsReceived(const uint256& hash) {
map<uint256, pair<NodeId, list<QueuedBlock>::iterator> >::iterator itInFlight = mapBlocksInFlight.find(hash);
if (itInFlight != mapBlocksInFlight.end()) {
CNodeState *state = State(itInFlight->second.first);
nQueuedValidatedHeaders -= itInFlight->second.second->fValidatedHeaders;
state->nBlocksInFlightValidHeaders -= itInFlight->second.second->fValidatedHeaders;
if (state->nBlocksInFlightValidHeaders == 0 && itInFlight->second.second->fValidatedHeaders) {
// Last validated block on the queue was received.
nPeersWithValidatedDownloads--;
}
if (state->vBlocksInFlight.begin() == itInFlight->second.second) {
// First block on the queue was received, update the start download time for the next one
state->nDownloadingSince = std::max(state->nDownloadingSince, GetTimeMicros());
}
state->vBlocksInFlight.erase(itInFlight->second.second);
state->nBlocksInFlight--;
state->nStallingSince = 0;
@ -376,12 +387,17 @@ void MarkBlockAsInFlight(NodeId nodeid, const uint256& hash, const Consensus::Pa
// Make sure it's not listed somewhere already.
MarkBlockAsReceived(hash);
int64_t nNow = GetTimeMicros();
QueuedBlock newentry = {hash, pindex, nNow, pindex != NULL, GetBlockTimeout(nNow, nQueuedValidatedHeaders, consensusParams)};
nQueuedValidatedHeaders += newentry.fValidatedHeaders;
QueuedBlock newentry = {hash, pindex, pindex != NULL};
list<QueuedBlock>::iterator it = state->vBlocksInFlight.insert(state->vBlocksInFlight.end(), newentry);
state->nBlocksInFlight++;
state->nBlocksInFlightValidHeaders += newentry.fValidatedHeaders;
if (state->nBlocksInFlight == 1) {
// We're starting a block download (batch) from this peer.
state->nDownloadingSince = GetTimeMicros();
}
if (state->nBlocksInFlightValidHeaders == 1 && pindex != NULL) {
nPeersWithValidatedDownloads++;
}
mapBlocksInFlight[hash] = std::make_pair(nodeid, it);
}
@ -681,9 +697,10 @@ bool IsFinalTx(const CTransaction &tx, int nBlockHeight, int64_t nBlockTime)
return true;
if ((int64_t)tx.nLockTime < ((int64_t)tx.nLockTime < LOCKTIME_THRESHOLD ? (int64_t)nBlockHeight : nBlockTime))
return true;
BOOST_FOREACH(const CTxIn& txin, tx.vin)
if (!txin.IsFinal())
BOOST_FOREACH(const CTxIn& txin, tx.vin) {
if (!(txin.nSequence == CTxIn::SEQUENCE_FINAL))
return false;
}
return true;
}
@ -719,6 +736,178 @@ bool CheckFinalTx(const CTransaction &tx, int flags)
return IsFinalTx(tx, nBlockHeight, nBlockTime);
}
/**
* Calculates the block height and previous block's median time past at
* which the transaction will be considered final in the context of BIP 68.
* Also removes from the vector of input heights any entries which did not
* correspond to sequence locked inputs as they do not affect the calculation.
*/
static std::pair<int, int64_t> CalculateSequenceLocks(const CTransaction &tx, int flags, std::vector<int>* prevHeights, const CBlockIndex& block)
{
assert(prevHeights->size() == tx.vin.size());
// Will be set to the equivalent height- and time-based nLockTime
// values that would be necessary to satisfy all relative lock-
// time constraints given our view of block chain history.
// The semantics of nLockTime are the last invalid height/time, so
// use -1 to have the effect of any height or time being valid.
int nMinHeight = -1;
int64_t nMinTime = -1;
// tx.nVersion is a signed integer, so it requires a cast to unsigned; otherwise
// we would be doing a signed comparison and half the range of nVersion
// wouldn't support BIP 68.
bool fEnforceBIP68 = static_cast<uint32_t>(tx.nVersion) >= 2
&& flags & LOCKTIME_VERIFY_SEQUENCE;
// Do not enforce sequence numbers as a relative lock time
// unless we have been instructed to
if (!fEnforceBIP68) {
return std::make_pair(nMinHeight, nMinTime);
}
for (size_t txinIndex = 0; txinIndex < tx.vin.size(); txinIndex++) {
const CTxIn& txin = tx.vin[txinIndex];
// Sequence numbers with the most significant bit set are not
// treated as relative lock-times, nor are they given any
// consensus-enforced meaning at this point.
if (txin.nSequence & CTxIn::SEQUENCE_LOCKTIME_DISABLE_FLAG) {
// The height of this input is not relevant for sequence locks
(*prevHeights)[txinIndex] = 0;
continue;
}
int nCoinHeight = (*prevHeights)[txinIndex];
if (txin.nSequence & CTxIn::SEQUENCE_LOCKTIME_TYPE_FLAG) {
int64_t nCoinTime = block.GetAncestor(std::max(nCoinHeight-1, 0))->GetMedianTimePast();
// NOTE: Subtract 1 to maintain nLockTime semantics
// BIP 68 relative lock times have the semantics of calculating
// the first block or time at which the transaction would be
// valid. When calculating the effective block time or height
// for the entire transaction, we switch to using the
// semantics of nLockTime which is the last invalid block
// time or height. Thus we subtract 1 from the calculated
// time or height.
// Time-based relative lock-times are measured from the
// smallest allowed timestamp of the block containing the
// txout being spent, which is the median time past of the
// block prior.
nMinTime = std::max(nMinTime, nCoinTime + (int64_t)((txin.nSequence & CTxIn::SEQUENCE_LOCKTIME_MASK) << CTxIn::SEQUENCE_LOCKTIME_GRANULARITY) - 1);
} else {
nMinHeight = std::max(nMinHeight, nCoinHeight + (int)(txin.nSequence & CTxIn::SEQUENCE_LOCKTIME_MASK) - 1);
}
}
return std::make_pair(nMinHeight, nMinTime);
}
static bool EvaluateSequenceLocks(const CBlockIndex& block, std::pair<int, int64_t> lockPair)
{
assert(block.pprev);
int64_t nBlockTime = block.pprev->GetMedianTimePast();
if (lockPair.first >= block.nHeight || lockPair.second >= nBlockTime)
return false;
return true;
}
bool SequenceLocks(const CTransaction &tx, int flags, std::vector<int>* prevHeights, const CBlockIndex& block)
{
return EvaluateSequenceLocks(block, CalculateSequenceLocks(tx, flags, prevHeights, block));
}
bool TestLockPointValidity(const LockPoints* lp)
{
AssertLockHeld(cs_main);
assert(lp);
// If there are relative lock times then the maxInputBlock will be set
// If there are no relative lock times, the LockPoints don't depend on the chain
if (lp->maxInputBlock) {
// Check whether chainActive is an extension of the block at which the LockPoints
// calculation was valid. If not LockPoints are no longer valid
if (!chainActive.Contains(lp->maxInputBlock)) {
return false;
}
}
// LockPoints still valid
return true;
}
bool CheckSequenceLocks(const CTransaction &tx, int flags, LockPoints* lp, bool useExistingLockPoints)
{
AssertLockHeld(cs_main);
AssertLockHeld(mempool.cs);
CBlockIndex* tip = chainActive.Tip();
CBlockIndex index;
index.pprev = tip;
// CheckSequenceLocks() uses chainActive.Height()+1 to evaluate
// height based locks because when SequenceLocks() is called within
// ConnectBlock(), the height of the block *being*
// evaluated is what is used.
// Thus if we want to know if a transaction can be part of the
// *next* block, we need to use one more than chainActive.Height()
index.nHeight = tip->nHeight + 1;
std::pair<int, int64_t> lockPair;
if (useExistingLockPoints) {
assert(lp);
lockPair.first = lp->height;
lockPair.second = lp->time;
}
else {
// pcoinsTip contains the UTXO set for chainActive.Tip()
CCoinsViewMemPool viewMemPool(pcoinsTip, mempool);
std::vector<int> prevheights;
prevheights.resize(tx.vin.size());
for (size_t txinIndex = 0; txinIndex < tx.vin.size(); txinIndex++) {
const CTxIn& txin = tx.vin[txinIndex];
CCoins coins;
if (!viewMemPool.GetCoins(txin.prevout.hash, coins)) {
return error("%s: Missing input", __func__);
}
if (coins.nHeight == MEMPOOL_HEIGHT) {
// Assume all mempool transactions confirm in the next block
prevheights[txinIndex] = tip->nHeight + 1;
} else {
prevheights[txinIndex] = coins.nHeight;
}
}
lockPair = CalculateSequenceLocks(tx, flags, &prevheights, index);
if (lp) {
lp->height = lockPair.first;
lp->time = lockPair.second;
// Also store the hash of the block with the highest height of
// all the blocks which have sequence locked prevouts.
// This hash needs to still be on the chain
// for these LockPoint calculations to be valid
// Note: It is impossible to correctly calculate a maxInputBlock
// if any of the sequence locked inputs depend on unconfirmed txs,
// except in the special case where the relative lock time/height
// is 0, which is equivalent to no sequence lock. Since we assume
// input height of tip+1 for mempool txs and test the resulting
// lockPair from CalculateSequenceLocks against tip+1, we know
// EvaluateSequenceLocks will fail if there was a non-zero sequence
// lock on a mempool input, so we can use the return value of
// CheckSequenceLocks to indicate the LockPoints validity
int maxInputHeight = 0;
BOOST_FOREACH(int height, prevheights) {
// Can ignore mempool inputs since we'll fail if they had non-zero locks
if (height != tip->nHeight+1) {
maxInputHeight = std::max(maxInputHeight, height);
}
}
lp->maxInputBlock = tip->GetAncestor(maxInputHeight);
}
}
return EvaluateSequenceLocks(index, lockPair);
}
unsigned int GetLegacySigOpCount(const CTransaction& tx)
{
unsigned int nSigOps = 0;
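The per-input arithmetic in CalculateSequenceLocks() above reduces to a couple of mask-and-shift operations. A standalone Python sketch of the same logic for a single input (constant values mirror the CTxIn additions in transaction.h later in this merge; the helper name is made up for illustration):

SEQUENCE_LOCKTIME_DISABLE_FLAG = 1 << 31
SEQUENCE_LOCKTIME_TYPE_FLAG = 1 << 22
SEQUENCE_LOCKTIME_MASK = 0x0000ffff
SEQUENCE_LOCKTIME_GRANULARITY = 9   # time-based locks count 512-second units

def input_lock(nSequence, coin_height, coin_mtp):
    # Returns (min_height, min_time) implied by one input, or (-1, -1) if the
    # disable flag is set. coin_mtp is the median-time-past of the block before
    # the one containing the spent output, as in GetAncestor(h-1)->GetMedianTimePast().
    if nSequence & SEQUENCE_LOCKTIME_DISABLE_FLAG:
        return -1, -1
    value = nSequence & SEQUENCE_LOCKTIME_MASK
    if nSequence & SEQUENCE_LOCKTIME_TYPE_FLAG:
        return -1, coin_mtp + (value << SEQUENCE_LOCKTIME_GRANULARITY) - 1
    return coin_height + value - 1, -1

# A 10-block relative lock on a coin confirmed at height 1000 yields 1009,
# i.e. the spend is valid only in blocks of height 1010 or later.
assert input_lock(10, 1000, 0) == (1009, -1)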
@ -876,6 +1065,14 @@ bool AcceptToMemoryPoolWorker(CTxMemPool& pool, CValidationState &state, const C
if (fRequireStandard && !IsStandardTx(tx, reason))
return state.DoS(0, false, REJECT_NONSTANDARD, reason);
// Don't relay version 2 transactions until CSV is active, and we can be
// sure that such transactions will be mined (unless we're on
// -testnet/-regtest).
const CChainParams& chainparams = Params();
if (fRequireStandard && tx.nVersion >= 2 && VersionBitsTipState(chainparams.GetConsensus(), Consensus::DEPLOYMENT_CSV) != THRESHOLD_ACTIVE) {
return state.DoS(0, false, REJECT_NONSTANDARD, "premature-version2-tx");
}
// Only accept nLockTime-using transactions that can be mined in the next
// block; we don't want our mempool filled up with transactions that can't
// be mined yet.
@ -948,6 +1145,7 @@ bool AcceptToMemoryPoolWorker(CTxMemPool& pool, CValidationState &state, const C
CCoinsViewCache view(&dummy);
CAmount nValueIn = 0;
LockPoints lp;
{
LOCK(pool.cs);
CCoinsViewMemPool viewMemPool(pcoinsTip, pool);
@ -985,6 +1183,14 @@ bool AcceptToMemoryPoolWorker(CTxMemPool& pool, CValidationState &state, const C
// we have all inputs cached now, so switch back to dummy, so we don't need to keep lock on mempool
view.SetBackend(dummy);
// Only accept BIP68 sequence locked transactions that can be mined in the next
// block; we don't want our mempool filled up with transactions that can't
// be mined yet.
// Must keep pool.cs for this unless we change CheckSequenceLocks to take a
// CoinsViewCache instead of creating its own
if (!CheckSequenceLocks(tx, STANDARD_LOCKTIME_VERIFY_FLAGS, &lp))
return state.DoS(0, false, REJECT_NONSTANDARD, "non-BIP68-final");
}
// Check for non-standard pay-to-script-hash in inputs
@ -1018,7 +1224,7 @@ bool AcceptToMemoryPoolWorker(CTxMemPool& pool, CValidationState &state, const C
}
}
CTxMemPoolEntry entry(tx, nFees, GetTime(), dPriority, chainActive.Height(), pool.HasNoInputsOf(tx), inChainInputValue, fSpendsCoinbase, nSigOps);
CTxMemPoolEntry entry(tx, nFees, GetTime(), dPriority, chainActive.Height(), pool.HasNoInputsOf(tx), inChainInputValue, fSpendsCoinbase, nSigOps, lp);
unsigned int nSize = entry.GetTxSize();
// Check that the transaction doesn't have an excessive number of
@ -2115,6 +2321,51 @@ void PartitionCheck(bool (*initialDownloadCheck)(), CCriticalSection& cs, const
}
}
// Protected by cs_main
static VersionBitsCache versionbitscache;
int32_t ComputeBlockVersion(const CBlockIndex* pindexPrev, const Consensus::Params& params)
{
LOCK(cs_main);
int32_t nVersion = VERSIONBITS_TOP_BITS;
for (int i = 0; i < (int)Consensus::MAX_VERSION_BITS_DEPLOYMENTS; i++) {
ThresholdState state = VersionBitsState(pindexPrev, params, (Consensus::DeploymentPos)i, versionbitscache);
if (state == THRESHOLD_LOCKED_IN || state == THRESHOLD_STARTED) {
nVersion |= VersionBitsMask(params, (Consensus::DeploymentPos)i);
}
}
return nVersion;
}
/**
* Threshold condition checker that triggers when unknown versionbits are seen on the network.
*/
class WarningBitsConditionChecker : public AbstractThresholdConditionChecker
{
private:
int bit;
public:
WarningBitsConditionChecker(int bitIn) : bit(bitIn) {}
int64_t BeginTime(const Consensus::Params& params) const { return 0; }
int64_t EndTime(const Consensus::Params& params) const { return std::numeric_limits<int64_t>::max(); }
int Period(const Consensus::Params& params) const { return params.nMinerConfirmationWindow; }
int Threshold(const Consensus::Params& params) const { return params.nRuleChangeActivationThreshold; }
bool Condition(const CBlockIndex* pindex, const Consensus::Params& params) const
{
return ((pindex->nVersion & VERSIONBITS_TOP_MASK) == VERSIONBITS_TOP_BITS) &&
((pindex->nVersion >> bit) & 1) != 0 &&
((ComputeBlockVersion(pindex->pprev, params) >> bit) & 1) == 0;
}
};
// Protected by cs_main
static ThresholdConditionCache warningcache[VERSIONBITS_NUM_BITS];
static int64_t nTimeCheck = 0;
static int64_t nTimeForks = 0;
static int64_t nTimeVerify = 0;
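Condition() above fires when a block signals a versionbit that our own ComputeBlockVersion() would not set. A compact Python sketch of that predicate (the top-bits mask value is assumed from versionbits.h):

VERSIONBITS_TOP_BITS = 0x20000000
VERSIONBITS_TOP_MASK = 0xE0000000   # assumed mask covering the top three bits

def warning_condition(block_version, expected_version, bit):
    # True when the block uses BIP9 semantics, sets `bit`, and we would not.
    return bool((block_version & VERSIONBITS_TOP_MASK) == VERSIONBITS_TOP_BITS
                and (block_version >> bit) & 1
                and not (expected_version >> bit) & 1)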
@ -2211,6 +2462,13 @@ bool ConnectBlock(const CBlock& block, CValidationState& state, CBlockIndex* pin
flags |= SCRIPT_VERIFY_CHECKLOCKTIMEVERIFY;
}
// Start enforcing BIP68 (sequence locks) and BIP112 (CHECKSEQUENCEVERIFY) using versionbits logic.
int nLockTimeFlags = 0;
if (VersionBitsState(pindex->pprev, chainparams.GetConsensus(), Consensus::DEPLOYMENT_CSV, versionbitscache) == THRESHOLD_ACTIVE) {
flags |= SCRIPT_VERIFY_CHECKSEQUENCEVERIFY;
nLockTimeFlags |= LOCKTIME_VERIFY_SEQUENCE;
}
int64_t nTime2 = GetTimeMicros(); nTimeForks += nTime2 - nTime1;
LogPrint("bench", " - Fork checks: %.2fms [%.2fs]\n", 0.001 * (nTime2 - nTime1), nTimeForks * 0.000001);
@ -2218,6 +2476,7 @@ bool ConnectBlock(const CBlock& block, CValidationState& state, CBlockIndex* pin
CCheckQueueControl<CScriptCheck> control(fScriptChecks && nScriptCheckThreads ? &scriptcheckqueue : NULL);
std::vector<int> prevheights;
CAmount nFees = 0;
int nInputs = 0;
unsigned int nSigOps = 0;
@ -2241,6 +2500,19 @@ bool ConnectBlock(const CBlock& block, CValidationState& state, CBlockIndex* pin
return state.DoS(100, error("ConnectBlock(): inputs missing/spent"),
REJECT_INVALID, "bad-txns-inputs-missingorspent");
// Check that transaction is BIP68 final
// BIP68 lock checks (as opposed to nLockTime checks) must
// be in ConnectBlock because they require the UTXO set
prevheights.resize(tx.vin.size());
for (size_t j = 0; j < tx.vin.size(); j++) {
prevheights[j] = view.AccessCoins(tx.vin[j].prevout.hash)->nHeight;
}
if (!SequenceLocks(tx, nLockTimeFlags, &prevheights, *pindex)) {
return state.DoS(100, error("%s: contains a non-BIP68-final transaction", __func__),
REJECT_INVALID, "bad-txns-nonfinal");
}
if (fStrictPayToScriptHash)
{
// Add in sigops done by pay-to-script-hash inputs;
@ -2468,24 +2740,42 @@ void static UpdateTip(CBlockIndex *pindexNew) {
// Check the version of the last 100 blocks to see if we need to upgrade:
static bool fWarned = false;
if (!IsInitialBlockDownload() && !fWarned)
if (!IsInitialBlockDownload())
{
int nUpgraded = 0;
const CBlockIndex* pindex = chainActive.Tip();
for (int bit = 0; bit < VERSIONBITS_NUM_BITS; bit++) {
WarningBitsConditionChecker checker(bit);
ThresholdState state = checker.GetStateFor(pindex, chainParams.GetConsensus(), warningcache[bit]);
if (state == THRESHOLD_ACTIVE || state == THRESHOLD_LOCKED_IN) {
if (state == THRESHOLD_ACTIVE) {
strMiscWarning = strprintf(_("Warning: unknown new rules activated (versionbit %i)"), bit);
if (!fWarned) {
CAlert::Notify(strMiscWarning, true);
fWarned = true;
}
} else {
LogPrintf("%s: unknown new rules are about to activate (versionbit %i)\n", __func__, bit);
}
}
}
for (int i = 0; i < 100 && pindex != NULL; i++)
{
if (pindex->nVersion > CBlock::CURRENT_VERSION)
int32_t nExpectedVersion = ComputeBlockVersion(pindex->pprev, chainParams.GetConsensus());
if (pindex->nVersion > VERSIONBITS_LAST_OLD_BLOCK_VERSION && (pindex->nVersion & ~nExpectedVersion) != 0)
++nUpgraded;
pindex = pindex->pprev;
}
if (nUpgraded > 0)
LogPrintf("%s: %d of last 100 blocks above version %d\n", __func__, nUpgraded, (int)CBlock::CURRENT_VERSION);
LogPrintf("%s: %d of last 100 blocks have unexpected version\n", __func__, nUpgraded);
if (nUpgraded > 100/2)
{
// strMiscWarning is read by GetWarnings(), called by Qt and the JSON-RPC code to warn the user:
strMiscWarning = _("Warning: This version is obsolete; upgrade required!");
CAlert::Notify(strMiscWarning, true);
fWarned = true;
strMiscWarning = _("Warning: Unknown block versions being mined! It's possible unknown rules are in effect");
if (!fWarned) {
CAlert::Notify(strMiscWarning, true);
fWarned = true;
}
}
}
}
@ -2776,6 +3066,8 @@ bool ActivateBestChain(CValidationState &state, const CChainParams& chainparams,
CBlockIndex *pindexMostWork = NULL;
do {
boost::this_thread::interruption_point();
if (ShutdownRequested())
break;
CBlockIndex *pindexNewTip = NULL;
const CBlockIndex *pindexFork;
@ -3284,12 +3576,18 @@ bool ContextualCheckBlock(const CBlock& block, CValidationState& state, CBlockIn
const int nHeight = pindexPrev == NULL ? 0 : pindexPrev->nHeight + 1;
const Consensus::Params& consensusParams = Params().GetConsensus();
// Start enforcing BIP113 (Median Time Past) using versionbits logic.
int nLockTimeFlags = 0;
if (VersionBitsState(pindexPrev, consensusParams, Consensus::DEPLOYMENT_CSV, versionbitscache) == THRESHOLD_ACTIVE) {
nLockTimeFlags |= LOCKTIME_MEDIAN_TIME_PAST;
}
int64_t nLockTimeCutoff = (nLockTimeFlags & LOCKTIME_MEDIAN_TIME_PAST)
? pindexPrev->GetMedianTimePast()
: block.GetBlockTime();
// Check that all transactions are finalized
BOOST_FOREACH(const CTransaction& tx, block.vtx) {
int nLockTimeFlags = 0;
int64_t nLockTimeCutoff = (nLockTimeFlags & LOCKTIME_MEDIAN_TIME_PAST)
? pindexPrev->GetMedianTimePast()
: block.GetBlockTime();
if (!IsFinalTx(tx, nHeight, nLockTimeCutoff)) {
return state.DoS(10, error("%s: contains a non-final transaction", __func__), REJECT_INVALID, "bad-txns-nonfinal");
}
@ -3879,12 +4177,15 @@ void UnloadBlockIndex()
nBlockSequenceId = 1;
mapBlockSource.clear();
mapBlocksInFlight.clear();
nQueuedValidatedHeaders = 0;
nPreferredDownload = 0;
setDirtyBlockIndex.clear();
setDirtyFileInfo.clear();
mapNodeState.clear();
recentRejects.reset(NULL);
versionbitscache.Clear();
for (int b = 0; b < VERSIONBITS_NUM_BITS; b++) {
warningcache[b].clear();
}
BOOST_FOREACH(BlockMap::value_type& entry, mapBlockIndex) {
delete entry.second;
@ -6032,24 +6333,15 @@ bool SendMessages(CNode* pto)
LogPrintf("Peer=%d is stalling block download, disconnecting\n", pto->id);
pto->fDisconnect = true;
}
// In case there is a block that has been in flight from this peer for (2 + 0.5 * N) times the block interval
// (with N the number of validated blocks that were in flight at the time it was requested), disconnect due to
// timeout. We compensate for in-flight blocks to prevent killing off peers due to our own downstream link
// In case there is a block that has been in flight from this peer for 2 + 0.5 * N times the block interval
// (with N the number of peers from which we're downloading validated blocks), disconnect due to timeout.
// We compensate for other peers to prevent killing off peers due to our own downstream link
// being saturated. We only count validated in-flight blocks so peers can't advertise non-existing block hashes
// to unreasonably increase our timeout.
// We also compare the block download timeout originally calculated against the time at which we'd disconnect
// if we assumed the block were being requested now (ignoring blocks we've requested from this peer, since we're
// only looking at this peer's oldest request). This way a large queue in the past doesn't result in a
// permanently large window for this block to be delivered (ie if the number of blocks in flight is decreasing
// more quickly than once every 5 minutes, then we'll shorten the download window for this block).
if (!pto->fDisconnect && state.vBlocksInFlight.size() > 0) {
QueuedBlock &queuedBlock = state.vBlocksInFlight.front();
int64_t nTimeoutIfRequestedNow = GetBlockTimeout(nNow, nQueuedValidatedHeaders - state.nBlocksInFlightValidHeaders, consensusParams);
if (queuedBlock.nTimeDisconnect > nTimeoutIfRequestedNow) {
LogPrint("net", "Reducing block download timeout for peer=%d block=%s, orig=%d new=%d\n", pto->id, queuedBlock.hash.ToString(), queuedBlock.nTimeDisconnect, nTimeoutIfRequestedNow);
queuedBlock.nTimeDisconnect = nTimeoutIfRequestedNow;
}
if (queuedBlock.nTimeDisconnect < nNow) {
int nOtherPeersWithValidatedDownloads = nPeersWithValidatedDownloads - (state.nBlocksInFlightValidHeaders > 0);
if (nNow > state.nDownloadingSince + consensusParams.nPowTargetSpacing * (BLOCK_DOWNLOAD_TIMEOUT_BASE + BLOCK_DOWNLOAD_TIMEOUT_PER_PEER * nOtherPeersWithValidatedDownloads)) {
LogPrintf("Timeout downloading block %s from peer=%d, disconnecting\n", queuedBlock.hash.ToString(), pto->id);
pto->fDisconnect = true;
}
@ -6110,7 +6402,11 @@ bool SendMessages(CNode* pto)
return strprintf("CBlockFileInfo(blocks=%u, size=%u, heights=%u...%u, time=%s...%s)", nBlocks, nSize, nHeightFirst, nHeightLast, DateTimeStrFormat("%Y-%m-%d", nTimeFirst), DateTimeStrFormat("%Y-%m-%d", nTimeLast));
}
ThresholdState VersionBitsTipState(const Consensus::Params& params, Consensus::DeploymentPos pos)
{
LOCK(cs_main);
return VersionBitsState(chainActive.Tip(), params, pos, versionbitscache);
}
class CMainCleanup
{

View File

@ -17,6 +17,7 @@
#include "net.h"
#include "script/script_error.h"
#include "sync.h"
#include "versionbits.h"
#include <algorithm>
#include <exception>
@ -40,9 +41,10 @@ class CValidationInterface;
class CValidationState;
struct CNodeStateStats;
struct LockPoints;
/** Default for accepting alerts from the P2P network. */
static const bool DEFAULT_ALERTS = true;
static const bool DEFAULT_ALERTS = false;
/** Default for DEFAULT_WHITELISTRELAY. */
static const bool DEFAULT_WHITELISTRELAY = true;
/** Default for DEFAULT_WHITELISTFORCERELAY. */
@ -100,6 +102,10 @@ static const unsigned int AVG_ADDRESS_BROADCAST_INTERVAL = 30;
/** Average delay between trickled inventory broadcasts in seconds.
* Blocks, whitelisted receivers, and a random 25% of transactions bypass this. */
static const unsigned int AVG_INVENTORY_BROADCAST_INTERVAL = 5;
/** Block download timeout base, expressed in millionths of the block interval (i.e. 10 min) */
static const int64_t BLOCK_DOWNLOAD_TIMEOUT_BASE = 1000000;
/** Additional block download timeout per parallel downloading peer (i.e. 5 min) */
static const int64_t BLOCK_DOWNLOAD_TIMEOUT_PER_PEER = 500000;
static const unsigned int DEFAULT_LIMITFREERELAY = 15;
static const bool DEFAULT_RELAYPRIORITY = true;
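Since the two timeout constants above are expressed in millionths of the block interval, the disconnect check in SendMessages() works out to one block interval plus half an interval per other downloading peer. A worked sketch, assuming Dash's 2.5-minute (150 s) target spacing from chainparams.cpp:

BLOCK_DOWNLOAD_TIMEOUT_BASE = 1000000       # 1.0 block interval, in millionths
BLOCK_DOWNLOAD_TIMEOUT_PER_PEER = 500000    # +0.5 block interval per other peer

def download_timeout_us(pow_target_spacing_sec, other_validated_peers):
    # Microseconds allowed before the oldest in-flight block from a peer times out.
    return pow_target_spacing_sec * (BLOCK_DOWNLOAD_TIMEOUT_BASE +
                                     BLOCK_DOWNLOAD_TIMEOUT_PER_PEER * other_validated_peers)

# 150 s spacing with two other validated-download peers: 150 * 2,000,000 us = 300 s.
assert download_timeout_us(150, 2) == 300 * 1000000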
@ -294,6 +300,9 @@ int GetIXConfirmations(uint256 nTXHash);
/** Convert CValidationState to a human-readable message for logging */
std::string FormatStateMessage(const CValidationState &state);
/** Get the BIP9 state for a given deployment at the current tip. */
ThresholdState VersionBitsTipState(const Consensus::Params& params, Consensus::DeploymentPos pos);
struct CNodeStateStats {
int nMisbehavior;
int nSyncHeight;
@ -373,7 +382,31 @@ bool IsFinalTx(const CTransaction &tx, int nBlockHeight, int64_t nBlockTime);
*/
bool CheckFinalTx(const CTransaction &tx, int flags = -1);
/**
/**
* Test whether the LockPoints height and time are still valid on the current chain
*/
bool TestLockPointValidity(const LockPoints* lp);
/**
* Check if transaction is final per BIP 68 sequence numbers and can be included in a block.
* Consensus critical. Takes as input a list of heights at which tx's inputs (in order) confirmed.
*/
bool SequenceLocks(const CTransaction &tx, int flags, std::vector<int>* prevHeights, const CBlockIndex& block);
/**
* Check if transaction will be BIP 68 final in the next block to be created.
*
* Simulates calling SequenceLocks() with data from the tip of the current active chain.
* Optionally stores in LockPoints the resulting height and time calculated and the hash
* of the block needed for the calculation, or skips the calculation and uses the LockPoints
* passed in for evaluation.
* The LockPoints should not be considered valid if CheckSequenceLocks returns false.
*
* See consensus/consensus.h for flag definitions.
*/
bool CheckSequenceLocks(const CTransaction &tx, int flags, LockPoints* lp = NULL, bool useExistingLockPoints = false);
/**
* Closure representing one script verification
* Note that this stores references to the spending transaction
*/
@ -526,6 +559,11 @@ extern CBlockTreeDB *pblocktree;
*/
int GetSpendHeight(const CCoinsViewCache& inputs);
/**
* Determine what nVersion a new block should use.
*/
int32_t ComputeBlockVersion(const CBlockIndex* pindexPrev, const Consensus::Params& params);
/** Reject codes greater or equal to this can be returned by AcceptToMemPool
* for transactions, to signal internal conditions. They cannot and should not
* be sent over the P2P network.

View File

@ -81,11 +81,6 @@ CBlockTemplate* CreateNewBlock(const CChainParams& chainparams, const CScript& s
return NULL;
CBlock *pblock = &pblocktemplate->block; // pointer for convenience
// -regtest only: allow overriding block.nVersion with
// -blockversion=N to test forking scenarios
if (chainparams.MineBlocksOnDemand())
pblock->nVersion = GetArg("-blockversion", pblock->nVersion);
// Create coinbase tx
CMutableTransaction txNew;
txNew.vin.resize(1);
@ -138,6 +133,12 @@ CBlockTemplate* CreateNewBlock(const CChainParams& chainparams, const CScript& s
pblock->vtx.push_back(txNew);
pblocktemplate->vTxFees.push_back(-1); // updated at end
pblocktemplate->vTxSigOps.push_back(-1); // updated at end
pblock->nVersion = ComputeBlockVersion(pindexPrev, chainparams.GetConsensus());
// -regtest only: allow overriding block.nVersion with
// -blockversion=N to test forking scenarios
if (chainparams.MineBlocksOnDemand())
pblock->nVersion = GetArg("-blockversion", pblock->nVersion);
int64_t nLockTimeCutoff = (STANDARD_LOCKTIME_VERIFY_FLAGS & LOCKTIME_MEDIAN_TIME_PAST)
? nMedianTimePast
: pblock->GetBlockTime();

View File

@ -58,7 +58,7 @@ bool IsStandard(const CScript& scriptPubKey, txnouttype& whichType)
bool IsStandardTx(const CTransaction& tx, std::string& reason)
{
if (tx.nVersion > CTransaction::CURRENT_VERSION || tx.nVersion < 1) {
if (tx.nVersion > CTransaction::MAX_STANDARD_VERSION || tx.nVersion < 1) {
reason = "version";
return false;
}

View File

@ -40,13 +40,15 @@ static const unsigned int STANDARD_SCRIPT_VERIFY_FLAGS = MANDATORY_SCRIPT_VERIFY
SCRIPT_VERIFY_DISCOURAGE_UPGRADABLE_NOPS |
SCRIPT_VERIFY_CLEANSTACK |
SCRIPT_VERIFY_CHECKLOCKTIMEVERIFY |
SCRIPT_VERIFY_CHECKSEQUENCEVERIFY |
SCRIPT_VERIFY_LOW_S;
/** For convenience, standard but not mandatory verify flags. */
static const unsigned int STANDARD_NOT_MANDATORY_VERIFY_FLAGS = STANDARD_SCRIPT_VERIFY_FLAGS & ~MANDATORY_SCRIPT_VERIFY_FLAGS;
/** Used as the flags parameter to CheckFinalTx() in non-consensus code */
static const unsigned int STANDARD_LOCKTIME_VERIFY_FLAGS = LOCKTIME_MEDIAN_TIME_PAST;
/** Used as the flags parameter to sequence and nLocktime checks in non-consensus code. */
static const unsigned int STANDARD_LOCKTIME_VERIFY_FLAGS = LOCKTIME_VERIFY_SEQUENCE |
LOCKTIME_MEDIAN_TIME_PAST;
bool IsStandard(const CScript& scriptPubKey, txnouttype& whichType);
/**

View File

@ -21,7 +21,6 @@ class CBlockHeader
{
public:
// header
static const int32_t CURRENT_VERSION=4;
int32_t nVersion;
uint256 hashPrevBlock;
uint256 hashMerkleRoot;
@ -49,7 +48,7 @@ public:
void SetNull()
{
nVersion = CBlockHeader::CURRENT_VERSION;
nVersion = 0;
hashPrevBlock.SetNull();
hashMerkleRoot.SetNull();
nTime = 0;

View File

@ -47,7 +47,7 @@ std::string CTxIn::ToString() const
str += strprintf(", coinbase %s", HexStr(scriptSig));
else
str += strprintf(", scriptSig=%s", HexStr(scriptSig).substr(0, 24));
if (nSequence != std::numeric_limits<unsigned int>::max())
if (nSequence != SEQUENCE_FINAL)
str += strprintf(", nSequence=%u", nSequence);
str += ")";
return str;

View File

@ -66,13 +66,40 @@ public:
uint32_t nSequence;
CScript prevPubKey;
/* Setting nSequence to this value for every input in a transaction
* disables nLockTime. */
static const uint32_t SEQUENCE_FINAL = 0xffffffff;
/* The flags below apply in the context of BIP 68. */
/* If this flag is set, CTxIn::nSequence is NOT interpreted as a
* relative lock-time. */
static const uint32_t SEQUENCE_LOCKTIME_DISABLE_FLAG = (1 << 31);
/* If CTxIn::nSequence encodes a relative lock-time and this flag
* is set, the relative lock-time has units of 512 seconds,
* otherwise it specifies blocks with a granularity of 1. */
static const uint32_t SEQUENCE_LOCKTIME_TYPE_FLAG = (1 << 22);
/* If CTxIn::nSequence encodes a relative lock-time, this mask is
* applied to extract that lock-time from the sequence field. */
static const uint32_t SEQUENCE_LOCKTIME_MASK = 0x0000ffff;
/* In order to use the same number of bits to encode roughly the
* same wall-clock duration, and because blocks are naturally
* limited to occur every 600s on average, the minimum granularity
* for time-based relative lock-time is fixed at 512 seconds.
* Converting from CTxIn::nSequence to seconds is performed by
* multiplying by 512 = 2^9, or equivalently shifting up by
* 9 bits. */
static const int SEQUENCE_LOCKTIME_GRANULARITY = 9;
CTxIn()
{
nSequence = std::numeric_limits<unsigned int>::max();
nSequence = SEQUENCE_FINAL;
}
explicit CTxIn(COutPoint prevoutIn, CScript scriptSigIn=CScript(), uint32_t nSequenceIn=std::numeric_limits<unsigned int>::max());
CTxIn(uint256 hashPrevTx, uint32_t nOut, CScript scriptSigIn=CScript(), uint32_t nSequenceIn=std::numeric_limits<uint32_t>::max());
explicit CTxIn(COutPoint prevoutIn, CScript scriptSigIn=CScript(), uint32_t nSequenceIn=SEQUENCE_FINAL);
CTxIn(uint256 hashPrevTx, uint32_t nOut, CScript scriptSigIn=CScript(), uint32_t nSequenceIn=SEQUENCE_FINAL);
ADD_SERIALIZE_METHODS;
@ -83,11 +110,6 @@ public:
READWRITE(nSequence);
}
bool IsFinal() const
{
return (nSequence == std::numeric_limits<uint32_t>::max());
}
friend bool operator==(const CTxIn& a, const CTxIn& b)
{
return (a.prevout == b.prevout &&
@ -201,8 +223,15 @@ private:
void UpdateHash() const;
public:
// Default transaction version.
static const int32_t CURRENT_VERSION=1;
// Changing the default transaction version requires a two-step process: first
// adapting relay policy by bumping MAX_STANDARD_VERSION, and then, at a later date,
// bumping the default CURRENT_VERSION, at which point both CURRENT_VERSION and
// MAX_STANDARD_VERSION will be equal.
static const int32_t MAX_STANDARD_VERSION=2;
// The local variables are made const to prevent unintended modification
// without updating the cached hash value. However, CTransaction is not
// actually immutable; deserialization and assignment are implemented,
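Going the other way from the decoding done in main.cpp, a desired relative lock can be packed into nSequence with the constants above. A small illustrative sketch (the helper names are made up for this example):

SEQUENCE_LOCKTIME_TYPE_FLAG = 1 << 22
SEQUENCE_LOCKTIME_MASK = 0x0000ffff
SEQUENCE_LOCKTIME_GRANULARITY = 9

def relative_lock_blocks(n_blocks):
    # nSequence requiring the spent output to be at least n_blocks deep.
    return n_blocks & SEQUENCE_LOCKTIME_MASK

def relative_lock_seconds(n_seconds):
    # nSequence requiring the spent output to be roughly n_seconds old (512 s units).
    return SEQUENCE_LOCKTIME_TYPE_FLAG | ((n_seconds >> SEQUENCE_LOCKTIME_GRANULARITY) & SEQUENCE_LOCKTIME_MASK)

assert relative_lock_blocks(100) == 100
assert relative_lock_seconds(1024) == SEQUENCE_LOCKTIME_TYPE_FLAG | 2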

View File

@ -113,32 +113,6 @@
</widget>
</item>
<item row="4" column="0">
<widget class="QLabel" name="label_14">
<property name="text">
<string>Using OpenSSL version</string>
</property>
<property name="indent">
<number>10</number>
</property>
</widget>
</item>
<item row="4" column="1" colspan="2">
<widget class="QLabel" name="openSSLVersion">
<property name="cursor">
<cursorShape>IBeamCursor</cursorShape>
</property>
<property name="text">
<string>N/A</string>
</property>
<property name="textFormat">
<enum>Qt::PlainText</enum>
</property>
<property name="textInteractionFlags">
<set>Qt::LinksAccessibleByMouse|Qt::TextSelectableByKeyboard|Qt::TextSelectableByMouse</set>
</property>
</widget>
</item>
<item row="5" column="0">
<widget class="QLabel" name="label_berkeleyDBVersion">
<property name="text">
<string>Using BerkeleyDB version</string>
@ -148,7 +122,7 @@
</property>
</widget>
</item>
<item row="5" column="1" colspan="2">
<item row="4" column="1" colspan="2">
<widget class="QLabel" name="berkeleyDBVersion">
<property name="cursor">
<cursorShape>IBeamCursor</cursorShape>
@ -164,14 +138,14 @@
</property>
</widget>
</item>
<item row="6" column="0">
<item row="5" column="0">
<widget class="QLabel" name="label_12">
<property name="text">
<string>Build date</string>
</property>
</widget>
</item>
<item row="6" column="1" colspan="2">
<item row="5" column="1" colspan="2">
<widget class="QLabel" name="buildDate">
<property name="cursor">
<cursorShape>IBeamCursor</cursorShape>
@ -187,14 +161,14 @@
</property>
</widget>
</item>
<item row="7" column="0">
<item row="6" column="0">
<widget class="QLabel" name="label_13">
<property name="text">
<string>Startup time</string>
</property>
</widget>
</item>
<item row="7" column="1" colspan="2">
<item row="6" column="1" colspan="2">
<widget class="QLabel" name="startupTime">
<property name="cursor">
<cursorShape>IBeamCursor</cursorShape>
@ -210,14 +184,27 @@
</property>
</widget>
</item>
<item row="9" column="0">
<item row="7" column="0">
<widget class="QLabel" name="labelNetwork">
<property name="font">
<font>
<weight>75</weight>
<bold>true</bold>
</font>
</property>
<property name="text">
<string>Network</string>
</property>
</widget>
</item>
<item row="8" column="0">
<widget class="QLabel" name="label_8">
<property name="text">
<string>Name</string>
</property>
</widget>
</item>
<item row="9" column="1" colspan="2">
<item row="8" column="1" colspan="2">
<widget class="QLabel" name="networkName">
<property name="cursor">
<cursorShape>IBeamCursor</cursorShape>
@ -233,14 +220,14 @@
</property>
</widget>
</item>
<item row="10" column="0">
<item row="9" column="0">
<widget class="QLabel" name="label_7">
<property name="text">
<string>Number of connections</string>
</property>
</widget>
</item>
<item row="10" column="1" colspan="2">
<item row="9" column="1" colspan="2">
<widget class="QLabel" name="numberOfConnections">
<property name="cursor">
<cursorShape>IBeamCursor</cursorShape>
@ -256,7 +243,7 @@
</property>
</widget>
</item>
<item row="11" column="0">
<item row="10" column="0">
<widget class="QLabel" name="masternodeCountLabel">
<property name="text">
<string>Number of Masternodes</string>
@ -283,14 +270,14 @@
</property>
</widget>
</item>
<item row="13" column="0">
<item row="12" column="0">
<widget class="QLabel" name="label_3">
<property name="text">
<string>Current number of blocks</string>
</property>
</widget>
</item>
<item row="13" column="1" colspan="2">
<item row="12" column="1" colspan="2">
<widget class="QLabel" name="numberOfBlocks">
<property name="cursor">
<cursorShape>IBeamCursor</cursorShape>
@ -306,14 +293,14 @@
</property>
</widget>
</item>
<item row="14" column="0">
<item row="13" column="0">
<widget class="QLabel" name="labelLastBlockTime">
<property name="text">
<string>Last block time</string>
</property>
</widget>
</item>
<item row="14" column="1" colspan="2">
<item row="13" column="1" colspan="2">
<widget class="QLabel" name="lastBlockTime">
<property name="cursor">
<cursorShape>IBeamCursor</cursorShape>
@ -329,7 +316,7 @@
</property>
</widget>
</item>
<item row="15" column="0">
<item row="14" column="0">
<widget class="QLabel" name="labelMempoolTitle">
<property name="font">
<font>
@ -342,14 +329,14 @@
</property>
</widget>
</item>
<item row="16" column="0">
<item row="15" column="0">
<widget class="QLabel" name="labelNumberOfTransactions">
<property name="text">
<string>Current number of transactions</string>
</property>
</widget>
</item>
<item row="16" column="1">
<item row="15" column="1">
<widget class="QLabel" name="mempoolNumberTxs">
<property name="cursor">
<cursorShape>IBeamCursor</cursorShape>
@ -365,27 +352,14 @@
</property>
</widget>
</item>
<item row="8" column="0">
<widget class="QLabel" name="labelNetwork">
<property name="font">
<font>
<weight>75</weight>
<bold>true</bold>
</font>
</property>
<property name="text">
<string>Network</string>
</property>
</widget>
</item>
<item row="17" column="0">
<item row="16" column="0">
<widget class="QLabel" name="labelMemoryUsage">
<property name="text">
<string>Memory usage</string>
</property>
</widget>
</item>
<item row="17" column="1">
<item row="16" column="1">
<widget class="QLabel" name="mempoolSize">
<property name="cursor">
<cursorShape>IBeamCursor</cursorShape>
@ -401,7 +375,7 @@
</property>
</widget>
</item>
<item row="15" column="2" rowspan="3">
<item row="14" column="2" rowspan="3">
<layout class="QVBoxLayout" name="verticalLayoutDebugButton">
<property name="spacing">
<number>3</number>
@ -441,7 +415,7 @@
</item>
</layout>
</item>
<item row="19" column="0">
<item row="18" column="0">
<spacer name="verticalSpacer">
<property name="orientation">
<enum>Qt::Vertical</enum>

View File

@ -283,13 +283,6 @@ RPCConsole::RPCConsole(const PlatformStyle *platformStyle, QWidget *parent) :
connect(ui->btn_reindex, SIGNAL(clicked()), this, SLOT(walletReindex()));
// set library version labels
#if (OPENSSL_VERSION_NUMBER < 0x10100000L)
ui->openSSLVersion->setText(SSLeay_version(SSLEAY_VERSION));
#else
ui->openSSLVersion->setText(OpenSSL_version(OPENSSL_VERSION));
#endif
#ifdef ENABLE_WALLET
ui->berkeleyDBVersion->setText(DbEnv::version(0, 0, 0));
std::string walletPath = GetDataDir().string();

View File

@ -694,6 +694,20 @@ static UniValue SoftForkDesc(const std::string &name, int version, CBlockIndex*
return rv;
}
static UniValue BIP9SoftForkDesc(const std::string& name, const Consensus::Params& consensusParams, Consensus::DeploymentPos id)
{
UniValue rv(UniValue::VOBJ);
rv.push_back(Pair("id", name));
switch (VersionBitsTipState(consensusParams, id)) {
case THRESHOLD_DEFINED: rv.push_back(Pair("status", "defined")); break;
case THRESHOLD_STARTED: rv.push_back(Pair("status", "started")); break;
case THRESHOLD_LOCKED_IN: rv.push_back(Pair("status", "locked_in")); break;
case THRESHOLD_ACTIVE: rv.push_back(Pair("status", "active")); break;
case THRESHOLD_FAILED: rv.push_back(Pair("status", "failed")); break;
}
return rv;
}
UniValue getblockchaininfo(const UniValue& params, bool fHelp)
{
if (fHelp || params.size() != 0)
@ -724,6 +738,12 @@ UniValue getblockchaininfo(const UniValue& params, bool fHelp)
" },\n"
" \"reject\": { ... } (object) progress toward rejecting pre-softfork blocks (same fields as \"enforce\")\n"
" }, ...\n"
" ],\n"
" \"bip9_softforks\": [ (array) status of BIP9 softforks in progress\n"
" {\n"
" \"id\": \"xxxx\", (string) name of the softfork\n"
" \"status\": \"xxxx\", (string) one of \"defined\", \"started\", \"lockedin\", \"active\", \"failed\"\n"
" }\n"
" ]\n"
"}\n"
"\nExamples:\n"
@ -747,10 +767,13 @@ UniValue getblockchaininfo(const UniValue& params, bool fHelp)
const Consensus::Params& consensusParams = Params().GetConsensus();
CBlockIndex* tip = chainActive.Tip();
UniValue softforks(UniValue::VARR);
UniValue bip9_softforks(UniValue::VARR);
softforks.push_back(SoftForkDesc("bip34", 2, tip, consensusParams));
softforks.push_back(SoftForkDesc("bip66", 3, tip, consensusParams));
softforks.push_back(SoftForkDesc("bip65", 4, tip, consensusParams));
bip9_softforks.push_back(BIP9SoftForkDesc("csv", consensusParams, Consensus::DEPLOYMENT_CSV));
obj.push_back(Pair("softforks", softforks));
obj.push_back(Pair("bip9_softforks", bip9_softforks));
if (fPruneMode)
{

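Editor's sketch (not part of this diff): the same deployment-state helper used by BIP9SoftForkDesc above can gate other behaviour on the CSV soft fork. Here 'flags' is only an assumed script-verification bitmask assembled elsewhere; the call and constants mirror the code above.
if (VersionBitsTipState(Params().GetConsensus(), Consensus::DEPLOYMENT_CSV) == THRESHOLD_ACTIVE) {
    // Assumed context: 'flags' is a script-verification bitmask under construction.
    flags |= SCRIPT_VERIFY_CHECKSEQUENCEVERIFY;
}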
View File

@ -373,7 +373,44 @@ bool EvalScript(vector<vector<unsigned char> >& stack, const CScript& script, un
break;
}
case OP_NOP1: case OP_NOP3: case OP_NOP4: case OP_NOP5:
case OP_CHECKSEQUENCEVERIFY:
{
if (!(flags & SCRIPT_VERIFY_CHECKSEQUENCEVERIFY)) {
// not enabled; treat as a NOP3
if (flags & SCRIPT_VERIFY_DISCOURAGE_UPGRADABLE_NOPS) {
return set_error(serror, SCRIPT_ERR_DISCOURAGE_UPGRADABLE_NOPS);
}
break;
}
if (stack.size() < 1)
return set_error(serror, SCRIPT_ERR_INVALID_STACK_OPERATION);
// nSequence, like nLockTime, is a 32-bit unsigned integer
// field. See the comment in CHECKLOCKTIMEVERIFY regarding
// 5-byte numeric operands.
const CScriptNum nSequence(stacktop(-1), fRequireMinimal, 5);
// In the rare event that the argument may be < 0 due to
// some arithmetic being done first, you can always use
// 0 MAX CHECKSEQUENCEVERIFY.
if (nSequence < 0)
return set_error(serror, SCRIPT_ERR_NEGATIVE_LOCKTIME);
// To provide for future soft-fork extensibility, if the
// operand has the disabled lock-time flag set,
// CHECKSEQUENCEVERIFY behaves as a NOP.
if ((nSequence & CTxIn::SEQUENCE_LOCKTIME_DISABLE_FLAG) != 0)
break;
// Compare the specified sequence number with the input.
if (!checker.CheckSequence(nSequence))
return set_error(serror, SCRIPT_ERR_UNSATISFIED_LOCKTIME);
break;
}
case OP_NOP1: case OP_NOP4: case OP_NOP5:
case OP_NOP6: case OP_NOP7: case OP_NOP8: case OP_NOP9: case OP_NOP10:
{
if (flags & SCRIPT_VERIFY_DISCOURAGE_UPGRADABLE_NOPS)
@ -1150,12 +1187,57 @@ bool TransactionSignatureChecker::CheckLockTime(const CScriptNum& nLockTime) con
// prevent this condition. Alternatively we could test all
// inputs, but testing just this input minimizes the data
// required to prove correct CHECKLOCKTIMEVERIFY execution.
if (txTo->vin[nIn].IsFinal())
if (CTxIn::SEQUENCE_FINAL == txTo->vin[nIn].nSequence)
return false;
return true;
}
bool TransactionSignatureChecker::CheckSequence(const CScriptNum& nSequence) const
{
// Relative lock times are supported by comparing the passed
// in operand to the sequence number of the input.
const int64_t txToSequence = (int64_t)txTo->vin[nIn].nSequence;
// Fail if the transaction's version number is not set high
// enough to trigger BIP 68 rules.
if (static_cast<uint32_t>(txTo->nVersion) < 2)
return false;
// Sequence numbers with their most significant bit set are not
// consensus constrained. Testing that the transaction's sequence
// number does not have this bit set prevents using this property
// to get around a CHECKSEQUENCEVERIFY check.
if (txToSequence & CTxIn::SEQUENCE_LOCKTIME_DISABLE_FLAG)
return false;
// Mask off any bits that do not have consensus-enforced meaning
// before doing the integer comparisons
const uint32_t nLockTimeMask = CTxIn::SEQUENCE_LOCKTIME_TYPE_FLAG | CTxIn::SEQUENCE_LOCKTIME_MASK;
const int64_t txToSequenceMasked = txToSequence & nLockTimeMask;
const CScriptNum nSequenceMasked = nSequence & nLockTimeMask;
// There are two kinds of nSequence: lock-by-blockheight
// and lock-by-blocktime, distinguished by whether
// nSequenceMasked < CTxIn::SEQUENCE_LOCKTIME_TYPE_FLAG.
//
// We want to compare apples to apples, so fail the script
// unless the type of nSequenceMasked being tested is the same as
// the nSequenceMasked in the transaction.
if (!(
(txToSequenceMasked < CTxIn::SEQUENCE_LOCKTIME_TYPE_FLAG && nSequenceMasked < CTxIn::SEQUENCE_LOCKTIME_TYPE_FLAG) ||
(txToSequenceMasked >= CTxIn::SEQUENCE_LOCKTIME_TYPE_FLAG && nSequenceMasked >= CTxIn::SEQUENCE_LOCKTIME_TYPE_FLAG)
)) {
return false;
}
// Now that we know we're comparing apples-to-apples, the
// comparison is a simple numeric one.
if (nSequenceMasked > txToSequenceMasked)
return false;
return true;
}
bool VerifyScript(const CScript& scriptSig, const CScript& scriptPubKey, unsigned int flags, const BaseSignatureChecker& checker, ScriptError* serror)
{

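Minimal, self-contained sketch of the masking arithmetic CheckSequence performs above, with concrete values. The constants repeat CTxIn::SEQUENCE_LOCKTIME_TYPE_FLAG and CTxIn::SEQUENCE_LOCKTIME_MASK locally so the example compiles on its own; it is illustrative only.
#include <cassert>
#include <cstdint>

int main()
{
    const uint32_t TYPE_FLAG = 1u << 22;          // distinguishes time-based from height-based locks
    const uint32_t LOCKTIME_MASK = 0x0000ffff;    // low 16 bits carry the lock value
    const uint32_t nLockTimeMask = TYPE_FLAG | LOCKTIME_MASK;

    uint32_t byHeight = 10;                       // relative lock of 10 blocks
    uint32_t byTime = TYPE_FLAG | 2;              // relative lock of 2 * 512 = 1024 seconds

    // After masking, height-based and time-based values fall on opposite
    // sides of TYPE_FLAG, which is exactly the apples-to-apples check above.
    assert((byHeight & nLockTimeMask) < TYPE_FLAG);
    assert((byTime & nLockTimeMask) >= TYPE_FLAG);
    return 0;
}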
View File

@ -81,6 +81,11 @@ enum
//
// See BIP65 for details.
SCRIPT_VERIFY_CHECKLOCKTIMEVERIFY = (1U << 9),
// support CHECKSEQUENCEVERIFY opcode
//
// See BIP112 for details
SCRIPT_VERIFY_CHECKSEQUENCEVERIFY = (1U << 10),
};
bool CheckSignatureEncoding(const std::vector<unsigned char> &vchSig, unsigned int flags, ScriptError* serror);
@ -100,6 +105,11 @@ public:
return false;
}
virtual bool CheckSequence(const CScriptNum& nSequence) const
{
return false;
}
virtual ~BaseSignatureChecker() {}
};
@ -116,6 +126,7 @@ public:
TransactionSignatureChecker(const CTransaction* txToIn, unsigned int nInIn) : txTo(txToIn), nIn(nInIn) {}
bool CheckSig(const std::vector<unsigned char>& scriptSig, const std::vector<unsigned char>& vchPubKey, const CScript& scriptCode) const;
bool CheckLockTime(const CScriptNum& nLockTime) const;
bool CheckSequence(const CScriptNum& nSequence) const;
};
class MutableTransactionSignatureChecker : public TransactionSignatureChecker

View File

@ -165,6 +165,7 @@ enum opcodetype
OP_CHECKLOCKTIMEVERIFY = 0xb1,
OP_NOP2 = OP_CHECKLOCKTIMEVERIFY,
OP_NOP3 = 0xb2,
OP_CHECKSEQUENCEVERIFY = OP_NOP3,
OP_NOP4 = 0xb3,
OP_NOP5 = 0xb4,
OP_NOP6 = 0xb5,
@ -259,6 +260,11 @@ public:
inline CScriptNum& operator+=( const CScriptNum& rhs) { return operator+=(rhs.m_value); }
inline CScriptNum& operator-=( const CScriptNum& rhs) { return operator-=(rhs.m_value); }
inline CScriptNum operator&( const int64_t& rhs) const { return CScriptNum(m_value & rhs);}
inline CScriptNum operator&( const CScriptNum& rhs) const { return operator&(rhs.m_value); }
inline CScriptNum& operator&=( const CScriptNum& rhs) { return operator&=(rhs.m_value); }
inline CScriptNum operator-() const
{
assert(m_value != std::numeric_limits<int64_t>::min());
@ -287,6 +293,12 @@ public:
return *this;
}
inline CScriptNum& operator&=( const int64_t& rhs)
{
m_value &= rhs;
return *this;
}
int getint() const
{
if (m_value > std::numeric_limits<int>::max())

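Short usage sketch for the new CScriptNum bitwise-AND overloads (illustrative only; assumes script.h and the CTxIn sequence constants from primitives/transaction.h are in scope). CheckSequence relies on this pattern to strip bits without consensus meaning before comparing operands.
const uint32_t nLockTimeMask = CTxIn::SEQUENCE_LOCKTIME_TYPE_FLAG | CTxIn::SEQUENCE_LOCKTIME_MASK;
CScriptNum nSequence(0x00400002);                                   // time-based relative lock, 2 * 512 seconds
CScriptNum nSequenceMasked = nSequence & (int64_t)nLockTimeMask;    // CScriptNum & int64_t -> CScriptNum
nSequence &= (int64_t)nLockTimeMask;                                // in-place form via the new operator&=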
View File

@ -35,7 +35,7 @@ typedef enum ScriptError_t
SCRIPT_ERR_INVALID_ALTSTACK_OPERATION,
SCRIPT_ERR_UNBALANCED_CONDITIONAL,
/* OP_CHECKLOCKTIMEVERIFY */
/* CHECKLOCKTIMEVERIFY and CHECKSEQUENCEVERIFY */
SCRIPT_ERR_NEGATIVE_LOCKTIME,
SCRIPT_ERR_UNSATISFIED_LOCKTIME,

View File

@ -201,5 +201,59 @@
[[["b1dbc81696c8a9c0fccd0693ab66d7c368dbc38c0def4e800685560ddd1b2132", 0, "DUP HASH160 0x14 0x4b3bd7eba3bc0284fd3007be7f3be275e94f5826 EQUALVERIFY CHECKSIG"]],
"010000000132211bdd0d568506804eef0d8cc3db68c3d766ab9306cdfcc0a9c89616c8dbb1000000006c493045022100c7bb0faea0522e74ff220c20c022d2cb6033f8d167fb89e75a50e237a35fd6d202203064713491b1f8ad5f79e623d0219ad32510bfaa1009ab30cbee77b59317d6e30001210237af13eb2d84e4545af287b919c2282019c9691cc509e78e196a9d8274ed1be0ffffffff0100000000000000001976a914f1b3ed2eda9a2ebe5a9374f692877cdf87c0f95b88ac00000000", "P2SH,DERSIG"],
["CHECKSEQUENCEVERIFY tests"],
["By-height locks, with argument just beyond txin.nSequence"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "1 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "4259839 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000feff40000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
["By-time locks, with argument just beyond txin.nSequence (but within numerical boundaries)"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "4194305 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000000040000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "4259839 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000feff40000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
["Argument missing"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
["Argument negative with by-blockheight txin.nSequence=0"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "-1 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
["Argument negative with by-blocktime txin.nSequence=CTxIn::SEQUENCE_LOCKTIME_TYPE_FLAG"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "-1 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000000040000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
["Argument/tx height/time mismatch, both versions"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "0 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000000040000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "65535 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000000040000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "4194304 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "4259839 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
["6 byte non-minimally-encoded arguments are invalid even if their contents are valid"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "0x06 0x000000000000 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000ffff00000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
["Failure due to failing CHECKSEQUENCEVERIFY in scriptSig"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "1"]],
"02000000010001000000000000000000000000000000000000000000000000000000000000000000000251b2000000000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
["Failure due to failing CHECKSEQUENCEVERIFY in redeemScript"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "HASH160 0x14 0x7c17aff532f22beb54069942f9bf567a66133eaf EQUAL"]],
"0200000001000100000000000000000000000000000000000000000000000000000000000000000000030251b2000000000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
["Failure due to insufficient tx.nVersion (<2)"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "0 NOP3 1"]],
"010000000100010000000000000000000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "4194304 NOP3 1"]],
"010000000100010000000000000000000000000000000000000000000000000000000000000000000000000040000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
["Make diffs cleaner by leaving a comment here without comma at the end"]
]

View File

@ -233,5 +233,89 @@
[[["b1dbc81696c8a9c0fccd0693ab66d7c368dbc38c0def4e800685560ddd1b2132", 0, "DUP HASH160 0x14 0x4b3bd7eba3bc0284fd3007be7f3be275e94f5826 EQUALVERIFY CHECKSIG"]],
"010000000132211bdd0d568506804eef0d8cc3db68c3d766ab9306cdfcc0a9c89616c8dbb1000000006c493045022100c7bb0faea0522e74ff220c20c022d2cb6033f8d167fb89e75a50e237a35fd6d202203064713491b1f8ad5f79e623d0219ad32510bfaa1009ab30cbee77b59317d6e30001210237af13eb2d84e4545af287b919c2282019c9691cc509e78e196a9d8274ed1be0ffffffff0100000000000000001976a914f1b3ed2eda9a2ebe5a9374f692877cdf87c0f95b88ac00000000", "P2SH"],
["CHECKSEQUENCEVERIFY tests"],
["By-height locks, with argument == 0 and == txin.nSequence"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "0 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "65535 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000ffff00000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "65535 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000ffffbf7f0100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "0 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000ffffbf7f0100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
["By-time locks, with argument == 0 and == txin.nSequence"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "4194304 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000000040000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "4259839 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000ffff40000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "4259839 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000ffffff7f0100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "4194304 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000ffffff7f0100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
["Upper sequence with upper sequence is fine"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "2147483648 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000000000800100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "4294967295 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000000000800100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "2147483648 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000feffffff0100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "4294967295 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000feffffff0100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "2147483648 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000ffffffff0100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "4294967295 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000ffffffff0100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
["Argument 2^31 with various nSequence"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "2147483648 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000ffffbf7f0100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "2147483648 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000ffffff7f0100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "2147483648 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000ffffffff0100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
["Argument 2^32-1 with various nSequence"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "4294967295 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000ffffbf7f0100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "4294967295 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000ffffff7f0100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "4294967295 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000ffffffff0100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
["Argument 3<<31 with various nSequence"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "6442450944 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000ffffbf7f0100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "6442450944 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000ffffff7f0100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "6442450944 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000ffffffff0100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
["5 byte non-minimally-encoded operands are valid"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "0x05 0x0000000000 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
["The argument can be calculated rather than created directly by a PUSHDATA"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "4194303 1ADD NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000000040000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "4194304 1SUB NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000ffff00000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
["An ADD producing a 5-byte result that sets CTxIn::SEQUENCE_LOCKTIME_DISABLE_FLAG"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "2147483647 65536 NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "2147483647 4259840 ADD NOP3 1"]],
"020000000100010000000000000000000000000000000000000000000000000000000000000000000000000040000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
["Valid CHECKSEQUENCEVERIFY in scriptSig"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "1"]],
"02000000010001000000000000000000000000000000000000000000000000000000000000000000000251b2010000000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
["Valid CHECKSEQUENCEVERIFY in redeemScript"],
[[["0000000000000000000000000000000000000000000000000000000000000100", 0, "HASH160 0x14 0x7c17aff532f22beb54069942f9bf567a66133eaf EQUAL"]],
"0200000001000100000000000000000000000000000000000000000000000000000000000000000000030251b2010000000100000000000000000000000000", "P2SH,CHECKSEQUENCEVERIFY"],
["Make diffs cleaner by leaving a comment here without comma at the end"]
]

View File

@ -58,6 +58,20 @@ struct {
{0, 0x07665a0f}, {0, 0x07741214},
};
CBlockIndex CreateBlockIndex(int nHeight)
{
CBlockIndex index;
index.nHeight = nHeight;
index.pprev = chainActive.Tip();
return index;
}
bool TestSequenceLocks(const CTransaction &tx, int flags)
{
LOCK(mempool.cs);
return CheckSequenceLocks(tx, flags);
}
// NOTE: These tests rely on CreateNewBlock doing its own self-validation!
BOOST_AUTO_TEST_CASE(CreateNewBlock_validity)
{
@ -83,6 +97,7 @@ BOOST_AUTO_TEST_CASE(CreateNewBlock_validity)
// We can't make transactions until we have inputs
// Therefore, load 100 blocks :)
int baseheight = 0;
std::vector<CTransaction*>txFirst;
for (unsigned int i = 0; i < sizeof(blockinfo)/sizeof(*blockinfo); ++i)
{
@ -96,7 +111,9 @@ BOOST_AUTO_TEST_CASE(CreateNewBlock_validity)
txCoinbase.vin[0].scriptSig.push_back(chainActive.Height());
txCoinbase.vout[0].scriptPubKey = CScript();
pblock->vtx[0] = CTransaction(txCoinbase);
if (txFirst.size() < 2)
if (txFirst.size() == 0)
baseheight = chainActive.Height();
if (txFirst.size() < 4)
txFirst.push_back(new CTransaction(pblock->vtx[0]));
pblock->hashMerkleRoot = BlockMerkleRoot(*pblock);
pblock->nNonce = blockinfo[i].nonce;
@ -234,59 +251,133 @@ BOOST_AUTO_TEST_CASE(CreateNewBlock_validity)
// subsidy changing
// int nHeight = chainActive.Height();
// chainActive.Tip()->nHeight = 209999;
// // Create an actual 209999-long block chain (without valid blocks).
// while (chainActive.Tip()->nHeight < 209999) {
// CBlockIndex* prev = chainActive.Tip();
// CBlockIndex* next = new CBlockIndex();
// next->phashBlock = new uint256(GetRandHash());
// pcoinsTip->SetBestBlock(next->GetBlockHash());
// next->pprev = prev;
// next->nHeight = prev->nHeight + 1;
// next->BuildSkip();
// chainActive.SetTip(next);
// }
// BOOST_CHECK(pblocktemplate = CreateNewBlock(chainparams, scriptPubKey));
// delete pblocktemplate;
// chainActive.Tip()->nHeight = 210000;
// // Extend to a 210000-long block chain.
// while (chainActive.Tip()->nHeight < 210000) {
// CBlockIndex* prev = chainActive.Tip();
// CBlockIndex* next = new CBlockIndex();
// next->phashBlock = new uint256(GetRandHash());
// pcoinsTip->SetBestBlock(next->GetBlockHash());
// next->pprev = prev;
// next->nHeight = prev->nHeight + 1;
// next->BuildSkip();
// chainActive.SetTip(next);
// }
// BOOST_CHECK(pblocktemplate = CreateNewBlock(chainparams, scriptPubKey));
// delete pblocktemplate;
// chainActive.Tip()->nHeight = nHeight;
// // Delete the dummy blocks again.
// while (chainActive.Tip()->nHeight > nHeight) {
// CBlockIndex* del = chainActive.Tip();
// chainActive.SetTip(del->pprev);
// pcoinsTip->SetBestBlock(del->pprev->GetBlockHash());
// delete del->phashBlock;
// delete del;
// }
// non-final txs in mempool
SetMockTime(chainActive.Tip()->GetMedianTimePast()+1);
int flags = LOCKTIME_VERIFY_SEQUENCE|LOCKTIME_MEDIAN_TIME_PAST;
// height map
std::vector<int> prevheights;
// height locked
tx.vin[0].prevout.hash = txFirst[0]->GetHash();
// relative height locked
tx.nVersion = 2;
tx.vin.resize(1);
prevheights.resize(1);
tx.vin[0].prevout.hash = txFirst[0]->GetHash(); // only 1 transaction
tx.vin[0].prevout.n = 0;
tx.vin[0].scriptSig = CScript() << OP_1;
tx.vin[0].nSequence = 0;
tx.vin[0].nSequence = chainActive.Tip()->nHeight + 1; // txFirst[0] is the 2nd block
prevheights[0] = baseheight + 1;
tx.vout.resize(1);
tx.vout[0].nValue = 49000000000LL;
tx.vout[0].scriptPubKey = CScript() << OP_1;
tx.nLockTime = chainActive.Tip()->nHeight+1;
tx.nLockTime = 0;
hash = tx.GetHash();
mempool.addUnchecked(hash, entry.Fee(1000000000L).Time(GetTime()).SpendsCoinbase(true).FromTx(tx));
BOOST_CHECK(!CheckFinalTx(tx, LOCKTIME_MEDIAN_TIME_PAST));
BOOST_CHECK(CheckFinalTx(tx, flags)); // Locktime passes
BOOST_CHECK(!TestSequenceLocks(tx, flags)); // Sequence locks fail
BOOST_CHECK(SequenceLocks(tx, flags, &prevheights, CreateBlockIndex(chainActive.Tip()->nHeight + 2))); // Sequence locks pass on 2nd block
// time locked
tx2.vin.resize(1);
tx2.vin[0].prevout.hash = txFirst[1]->GetHash();
tx2.vin[0].prevout.n = 0;
tx2.vin[0].scriptSig = CScript() << OP_1;
tx2.vin[0].nSequence = 0;
tx2.vout.resize(1);
tx2.vout[0].nValue = 49000000000LL;
tx2.vout[0].scriptPubKey = CScript() << OP_1;
tx2.nLockTime = chainActive.Tip()->GetMedianTimePast()+1;
hash = tx2.GetHash();
mempool.addUnchecked(hash, entry.Fee(1000000000L).Time(GetTime()).SpendsCoinbase(true).FromTx(tx2));
BOOST_CHECK(!CheckFinalTx(tx2, LOCKTIME_MEDIAN_TIME_PAST));
// relative time locked
tx.vin[0].prevout.hash = txFirst[1]->GetHash();
tx.vin[0].nSequence = CTxIn::SEQUENCE_LOCKTIME_TYPE_FLAG | (((chainActive.Tip()->GetMedianTimePast()+1-chainActive[1]->GetMedianTimePast()) >> CTxIn::SEQUENCE_LOCKTIME_GRANULARITY) + 1); // txFirst[1] is the 3rd block
prevheights[0] = baseheight + 2;
hash = tx.GetHash();
mempool.addUnchecked(hash, entry.Time(GetTime()).FromTx(tx));
BOOST_CHECK(CheckFinalTx(tx, flags)); // Locktime passes
BOOST_CHECK(!TestSequenceLocks(tx, flags)); // Sequence locks fail
for (int i = 0; i < CBlockIndex::nMedianTimeSpan; i++)
chainActive.Tip()->GetAncestor(chainActive.Tip()->nHeight - i)->nTime += 512; //Trick the MedianTimePast
BOOST_CHECK(SequenceLocks(tx, flags, &prevheights, CreateBlockIndex(chainActive.Tip()->nHeight + 1))); // Sequence locks pass 512 seconds later
for (int i = 0; i < CBlockIndex::nMedianTimeSpan; i++)
chainActive.Tip()->GetAncestor(chainActive.Tip()->nHeight - i)->nTime -= 512; //undo tricked MTP
// absolute height locked
tx.vin[0].prevout.hash = txFirst[2]->GetHash();
tx.vin[0].nSequence = CTxIn::SEQUENCE_FINAL - 1;
prevheights[0] = baseheight + 3;
tx.nLockTime = chainActive.Tip()->nHeight + 1;
hash = tx.GetHash();
mempool.addUnchecked(hash, entry.Time(GetTime()).FromTx(tx));
BOOST_CHECK(!CheckFinalTx(tx, flags)); // Locktime fails
BOOST_CHECK(TestSequenceLocks(tx, flags)); // Sequence locks pass
BOOST_CHECK(IsFinalTx(tx, chainActive.Tip()->nHeight + 2, chainActive.Tip()->GetMedianTimePast())); // Locktime passes on 2nd block
// absolute time locked
tx.vin[0].prevout.hash = txFirst[3]->GetHash();
tx.nLockTime = chainActive.Tip()->GetMedianTimePast();
prevheights.resize(1);
prevheights[0] = baseheight + 4;
hash = tx.GetHash();
mempool.addUnchecked(hash, entry.Time(GetTime()).FromTx(tx));
BOOST_CHECK(!CheckFinalTx(tx, flags)); // Locktime fails
BOOST_CHECK(TestSequenceLocks(tx, flags)); // Sequence locks pass
BOOST_CHECK(IsFinalTx(tx, chainActive.Tip()->nHeight + 2, chainActive.Tip()->GetMedianTimePast() + 1)); // Locktime passes 1 second later
// mempool-dependent transactions (not added)
tx.vin[0].prevout.hash = hash;
prevheights[0] = chainActive.Tip()->nHeight + 1;
tx.nLockTime = 0;
tx.vin[0].nSequence = 0;
BOOST_CHECK(CheckFinalTx(tx, flags)); // Locktime passes
BOOST_CHECK(TestSequenceLocks(tx, flags)); // Sequence locks pass
tx.vin[0].nSequence = 1;
BOOST_CHECK(!TestSequenceLocks(tx, flags)); // Sequence locks fail
tx.vin[0].nSequence = CTxIn::SEQUENCE_LOCKTIME_TYPE_FLAG;
BOOST_CHECK(TestSequenceLocks(tx, flags)); // Sequence locks pass
tx.vin[0].nSequence = CTxIn::SEQUENCE_LOCKTIME_TYPE_FLAG | 1;
BOOST_CHECK(!TestSequenceLocks(tx, flags)); // Sequence locks fail
BOOST_CHECK(pblocktemplate = CreateNewBlock(chainparams, scriptPubKey));
// Neither tx should have made it into the template.
BOOST_CHECK_EQUAL(pblocktemplate->block.vtx.size(), 1);
// None of the absolute height/time locked tx should have made
// it into the template because we still check IsFinalTx in CreateNewBlock,
// but relative-locked txs will if inconsistently added to the mempool.
// For now these will still generate a valid template until the BIP68 soft fork activates
BOOST_CHECK_EQUAL(pblocktemplate->block.vtx.size(), 3);
delete pblocktemplate;
// However if we advance height and time by one, both will.
// However if we advance height by 1 and time by 512, all of them should be mined
for (int i = 0; i < CBlockIndex::nMedianTimeSpan; i++)
chainActive.Tip()->GetAncestor(chainActive.Tip()->nHeight - i)->nTime += 512; //Trick the MedianTimePast
chainActive.Tip()->nHeight++;
SetMockTime(chainActive.Tip()->GetMedianTimePast()+2);
// FIXME: we should *actually* create a new block so the following test
// works; CheckFinalTx() isn't fooled by monkey-patching nHeight.
//BOOST_CHECK(CheckFinalTx(tx));
//BOOST_CHECK(CheckFinalTx(tx2));
SetMockTime(chainActive.Tip()->GetMedianTimePast() + 1);
BOOST_CHECK(pblocktemplate = CreateNewBlock(chainparams, scriptPubKey));
BOOST_CHECK_EQUAL(pblocktemplate->block.vtx.size(), 2);
BOOST_CHECK_EQUAL(pblocktemplate->block.vtx.size(), 5);
delete pblocktemplate;
chainActive.Tip()->nHeight--;

View File

@ -63,7 +63,7 @@ CMutableTransaction BuildCreditingTransaction(const CScript& scriptPubKey)
txCredit.vout.resize(1);
txCredit.vin[0].prevout.SetNull();
txCredit.vin[0].scriptSig = CScript() << CScriptNum(0) << CScriptNum(0);
txCredit.vin[0].nSequence = std::numeric_limits<unsigned int>::max();
txCredit.vin[0].nSequence = CTxIn::SEQUENCE_FINAL;
txCredit.vout[0].scriptPubKey = scriptPubKey;
txCredit.vout[0].nValue = 0;
@ -80,7 +80,7 @@ CMutableTransaction BuildSpendingTransaction(const CScript& scriptSig, const CMu
txSpend.vin[0].prevout.hash = txCredit.GetHash();
txSpend.vin[0].prevout.n = 0;
txSpend.vin[0].scriptSig = scriptSig;
txSpend.vin[0].nSequence = std::numeric_limits<unsigned int>::max();
txSpend.vin[0].nSequence = CTxIn::SEQUENCE_FINAL;
txSpend.vout[0].scriptPubKey = CScript();
txSpend.vout[0].nValue = 0;

View File

@ -150,7 +150,7 @@ CTxMemPoolEntry TestMemPoolEntryHelper::FromTx(CMutableTransaction &tx, CTxMemPo
CAmount inChainValue = hasNoDependencies ? txn.GetValueOut() : 0;
return CTxMemPoolEntry(txn, nFee, nTime, dPriority, nHeight,
hasNoDependencies, inChainValue, spendsCoinbase, sigOpCount);
hasNoDependencies, inChainValue, spendsCoinbase, sigOpCount, lp);
}
void Shutdown(void* parg)

View File

@ -5,6 +5,7 @@
#include "key.h"
#include "pubkey.h"
#include "txdb.h"
#include "txmempool.h"
#include <boost/filesystem.hpp>
#include <boost/thread.hpp>
@ -67,7 +68,8 @@ struct TestMemPoolEntryHelper
bool hadNoDependencies;
bool spendsCoinbase;
unsigned int sigOpCount;
LockPoints lp;
TestMemPoolEntryHelper() :
nFee(0), nTime(0), dPriority(0.0), nHeight(1),
hadNoDependencies(false), spendsCoinbase(false), sigOpCount(1) { }

View File

@ -44,7 +44,8 @@ static std::map<string, unsigned int> mapFlagNames = boost::assign::map_list_of
(string("NULLDUMMY"), (unsigned int)SCRIPT_VERIFY_NULLDUMMY)
(string("DISCOURAGE_UPGRADABLE_NOPS"), (unsigned int)SCRIPT_VERIFY_DISCOURAGE_UPGRADABLE_NOPS)
(string("CLEANSTACK"), (unsigned int)SCRIPT_VERIFY_CLEANSTACK)
(string("CHECKLOCKTIMEVERIFY"), (unsigned int)SCRIPT_VERIFY_CHECKLOCKTIMEVERIFY);
(string("CHECKLOCKTIMEVERIFY"), (unsigned int)SCRIPT_VERIFY_CHECKLOCKTIMEVERIFY)
(string("CHECKSEQUENCEVERIFY"), (unsigned int)SCRIPT_VERIFY_CHECKSEQUENCEVERIFY);
unsigned int ParseScriptFlags(string strFlags)
{

View File

@ -0,0 +1,316 @@
// Copyright (c) 2014-2015 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#include "chain.h"
#include "random.h"
#include "versionbits.h"
#include "test/test_dash.h"
#include "chainparams.h"
#include "main.h"
#include "consensus/params.h"
#include <boost/test/unit_test.hpp>
/* Define a virtual block time, one block per 10 minutes after Nov 14 2014, 0:55:36am */
int32_t TestTime(int nHeight) { return 1415926536 + 600 * nHeight; }
static const Consensus::Params paramsDummy = Consensus::Params();
class TestConditionChecker : public AbstractThresholdConditionChecker
{
private:
mutable ThresholdConditionCache cache;
public:
int64_t BeginTime(const Consensus::Params& params) const { return TestTime(10000); }
int64_t EndTime(const Consensus::Params& params) const { return TestTime(20000); }
int Period(const Consensus::Params& params) const { return 1000; }
int Threshold(const Consensus::Params& params) const { return 900; }
bool Condition(const CBlockIndex* pindex, const Consensus::Params& params) const { return (pindex->nVersion & 0x100); }
ThresholdState GetStateFor(const CBlockIndex* pindexPrev) const { return AbstractThresholdConditionChecker::GetStateFor(pindexPrev, paramsDummy, cache); }
};
#define CHECKERS 6
class VersionBitsTester
{
// A fake blockchain
std::vector<CBlockIndex*> vpblock;
// 6 independent checkers for the same bit.
// The first one performs all checks, the second only 50%, the third only 25%, etc...
// This is to test whether lack of cached information leads to the same results.
TestConditionChecker checker[CHECKERS];
// Test counter (to identify failures)
int num;
public:
VersionBitsTester() : num(0) {}
VersionBitsTester& Reset() {
for (unsigned int i = 0; i < vpblock.size(); i++) {
delete vpblock[i];
}
for (unsigned int i = 0; i < CHECKERS; i++) {
checker[i] = TestConditionChecker();
}
vpblock.clear();
return *this;
}
~VersionBitsTester() {
Reset();
}
VersionBitsTester& Mine(unsigned int height, int32_t nTime, int32_t nVersion) {
while (vpblock.size() < height) {
CBlockIndex* pindex = new CBlockIndex();
pindex->nHeight = vpblock.size();
pindex->pprev = vpblock.size() > 0 ? vpblock.back() : NULL;
pindex->nTime = nTime;
pindex->nVersion = nVersion;
pindex->BuildSkip();
vpblock.push_back(pindex);
}
return *this;
}
VersionBitsTester& TestDefined() {
for (int i = 0; i < CHECKERS; i++) {
if ((insecure_rand() & ((1 << i) - 1)) == 0) {
BOOST_CHECK_MESSAGE(checker[i].GetStateFor(vpblock.empty() ? NULL : vpblock.back()) == THRESHOLD_DEFINED, strprintf("Test %i for DEFINED", num));
}
}
num++;
return *this;
}
VersionBitsTester& TestStarted() {
for (int i = 0; i < CHECKERS; i++) {
if ((insecure_rand() & ((1 << i) - 1)) == 0) {
BOOST_CHECK_MESSAGE(checker[i].GetStateFor(vpblock.empty() ? NULL : vpblock.back()) == THRESHOLD_STARTED, strprintf("Test %i for STARTED", num));
}
}
num++;
return *this;
}
VersionBitsTester& TestLockedIn() {
for (int i = 0; i < CHECKERS; i++) {
if ((insecure_rand() & ((1 << i) - 1)) == 0) {
BOOST_CHECK_MESSAGE(checker[i].GetStateFor(vpblock.empty() ? NULL : vpblock.back()) == THRESHOLD_LOCKED_IN, strprintf("Test %i for LOCKED_IN", num));
}
}
num++;
return *this;
}
VersionBitsTester& TestActive() {
for (int i = 0; i < CHECKERS; i++) {
if ((insecure_rand() & ((1 << i) - 1)) == 0) {
BOOST_CHECK_MESSAGE(checker[i].GetStateFor(vpblock.empty() ? NULL : vpblock.back()) == THRESHOLD_ACTIVE, strprintf("Test %i for ACTIVE", num));
}
}
num++;
return *this;
}
VersionBitsTester& TestFailed() {
for (int i = 0; i < CHECKERS; i++) {
if ((insecure_rand() & ((1 << i) - 1)) == 0) {
BOOST_CHECK_MESSAGE(checker[i].GetStateFor(vpblock.empty() ? NULL : vpblock.back()) == THRESHOLD_FAILED, strprintf("Test %i for FAILED", num));
}
}
num++;
return *this;
}
CBlockIndex * Tip() { return vpblock.size() ? vpblock.back() : NULL; }
};
BOOST_FIXTURE_TEST_SUITE(versionbits_tests, TestingSetup)
BOOST_AUTO_TEST_CASE(versionbits_test)
{
for (int i = 0; i < 64; i++) {
// DEFINED -> FAILED
VersionBitsTester().TestDefined()
.Mine(1, TestTime(1), 0x100).TestDefined()
.Mine(11, TestTime(11), 0x100).TestDefined()
.Mine(989, TestTime(989), 0x100).TestDefined()
.Mine(999, TestTime(20000), 0x100).TestDefined()
.Mine(1000, TestTime(20000), 0x100).TestFailed()
.Mine(1999, TestTime(30001), 0x100).TestFailed()
.Mine(2000, TestTime(30002), 0x100).TestFailed()
.Mine(2001, TestTime(30003), 0x100).TestFailed()
.Mine(2999, TestTime(30004), 0x100).TestFailed()
.Mine(3000, TestTime(30005), 0x100).TestFailed()
// DEFINED -> STARTED -> FAILED
.Reset().TestDefined()
.Mine(1, TestTime(1), 0).TestDefined()
.Mine(1000, TestTime(10000) - 1, 0x100).TestDefined() // One second more and it would be started
.Mine(2000, TestTime(10000), 0x100).TestStarted() // So that's what happens the next period
.Mine(2051, TestTime(10010), 0).TestStarted() // 51 old blocks
.Mine(2950, TestTime(10020), 0x100).TestStarted() // 899 new blocks
.Mine(3000, TestTime(20000), 0).TestFailed() // 50 old blocks (so 899 out of the past 1000)
.Mine(4000, TestTime(20010), 0x100).TestFailed()
// DEFINED -> STARTED -> FAILED while threshold reached
.Reset().TestDefined()
.Mine(1, TestTime(1), 0).TestDefined()
.Mine(1000, TestTime(10000) - 1, 0x101).TestDefined() // One second more and it would be started
.Mine(2000, TestTime(10000), 0x101).TestStarted() // So that's what happens the next period
.Mine(2999, TestTime(30000), 0x100).TestStarted() // 999 new blocks
.Mine(3000, TestTime(30000), 0x100).TestFailed() // 1 new block (so 1000 out of the past 1000 are new)
.Mine(3999, TestTime(30001), 0).TestFailed()
.Mine(4000, TestTime(30002), 0).TestFailed()
.Mine(14333, TestTime(30003), 0).TestFailed()
.Mine(24000, TestTime(40000), 0).TestFailed()
// DEFINED -> STARTED -> LOCKEDIN at the last minute -> ACTIVE
.Reset().TestDefined()
.Mine(1, TestTime(1), 0).TestDefined()
.Mine(1000, TestTime(10000) - 1, 0x101).TestDefined() // One second more and it would be started
.Mine(2000, TestTime(10000), 0x101).TestStarted() // So that's what happens the next period
.Mine(2050, TestTime(10010), 0x200).TestStarted() // 50 old blocks
.Mine(2950, TestTime(10020), 0x100).TestStarted() // 900 new blocks
.Mine(2999, TestTime(19999), 0x200).TestStarted() // 49 old blocks
.Mine(3000, TestTime(29999), 0x200).TestLockedIn() // 1 old block (so 900 out of the past 1000)
.Mine(3999, TestTime(30001), 0).TestLockedIn()
.Mine(4000, TestTime(30002), 0).TestActive()
.Mine(14333, TestTime(30003), 0).TestActive()
.Mine(24000, TestTime(40000), 0).TestActive();
}
// Sanity checks of version bit deployments
const Consensus::Params &mainnetParams = Params(CBaseChainParams::MAIN).GetConsensus();
for (int i=0; i<(int) Consensus::MAX_VERSION_BITS_DEPLOYMENTS; i++) {
uint32_t bitmask = VersionBitsMask(mainnetParams, (Consensus::DeploymentPos)i);
// Make sure that no deployment tries to set an invalid bit.
BOOST_CHECK_EQUAL(bitmask & ~(uint32_t)VERSIONBITS_TOP_MASK, bitmask);
// Verify that the deployment windows of different deployment using the
// same bit are disjoint.
// This test may need modification at such time as a new deployment
// is proposed that reuses the bit of an activated soft fork, before the
// end time of that soft fork. (Alternatively, the end time of that
// activated soft fork could be later changed to be earlier to avoid
// overlap.)
for (int j=i+1; j<(int) Consensus::MAX_VERSION_BITS_DEPLOYMENTS; j++) {
if (VersionBitsMask(mainnetParams, (Consensus::DeploymentPos)j) == bitmask) {
BOOST_CHECK(mainnetParams.vDeployments[j].nStartTime > mainnetParams.vDeployments[i].nTimeout ||
mainnetParams.vDeployments[i].nStartTime > mainnetParams.vDeployments[j].nTimeout);
}
}
}
}
BOOST_AUTO_TEST_CASE(versionbits_computeblockversion)
{
// Check that ComputeBlockVersion will set the appropriate bit correctly
// on mainnet.
const Consensus::Params &mainnetParams = Params(CBaseChainParams::MAIN).GetConsensus();
// Use the TESTDUMMY deployment for testing purposes.
int64_t bit = mainnetParams.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].bit;
int64_t nStartTime = mainnetParams.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].nStartTime;
int64_t nTimeout = mainnetParams.vDeployments[Consensus::DEPLOYMENT_TESTDUMMY].nTimeout;
assert(nStartTime < nTimeout);
// In the first chain, test that the bit is set by CBV until it has failed.
// In the second chain, test the bit is set by CBV while STARTED and
// LOCKED-IN, and then no longer set while ACTIVE.
VersionBitsTester firstChain, secondChain;
// Start generating blocks before nStartTime
int64_t nTime = nStartTime - 1;
// Before MedianTimePast of the chain has crossed nStartTime, the bit
// should not be set.
CBlockIndex *lastBlock = NULL;
lastBlock = firstChain.Mine(2016, nTime, VERSIONBITS_LAST_OLD_BLOCK_VERSION).Tip();
BOOST_CHECK_EQUAL(ComputeBlockVersion(lastBlock, mainnetParams) & (1<<bit), 0);
// Mine 2011 more blocks at the old time, and check that CBV isn't setting the bit yet.
for (int i=1; i<2012; i++) {
lastBlock = firstChain.Mine(2016+i, nTime, VERSIONBITS_LAST_OLD_BLOCK_VERSION).Tip();
// This works because VERSIONBITS_LAST_OLD_BLOCK_VERSION happens
// to be 4, and the bit we're testing happens to be bit 28.
BOOST_CHECK_EQUAL(ComputeBlockVersion(lastBlock, mainnetParams) & (1<<bit), 0);
}
// Now mine 5 more blocks at the start time -- MTP should not have passed yet, so
// CBV should still not yet set the bit.
nTime = nStartTime;
for (int i=2012; i<=2016; i++) {
lastBlock = firstChain.Mine(2016+i, nTime, VERSIONBITS_LAST_OLD_BLOCK_VERSION).Tip();
BOOST_CHECK_EQUAL(ComputeBlockVersion(lastBlock, mainnetParams) & (1<<bit), 0);
}
// Advance to the next period and transition to STARTED,
lastBlock = firstChain.Mine(6048, nTime, VERSIONBITS_LAST_OLD_BLOCK_VERSION).Tip();
// so ComputeBlockVersion should now set the bit,
BOOST_CHECK((ComputeBlockVersion(lastBlock, mainnetParams) & (1<<bit)) != 0);
// and should also be using the VERSIONBITS_TOP_BITS.
BOOST_CHECK_EQUAL(ComputeBlockVersion(lastBlock, mainnetParams) & VERSIONBITS_TOP_MASK, VERSIONBITS_TOP_BITS);
// Check that ComputeBlockVersion will set the bit until nTimeout
nTime += 600;
int blocksToMine = 4032; // test blocks for up to 2 time periods
int nHeight = 6048;
// These blocks are all before nTimeout is reached.
while (nTime < nTimeout && blocksToMine > 0) {
lastBlock = firstChain.Mine(nHeight+1, nTime, VERSIONBITS_LAST_OLD_BLOCK_VERSION).Tip();
BOOST_CHECK((ComputeBlockVersion(lastBlock, mainnetParams) & (1<<bit)) != 0);
BOOST_CHECK_EQUAL(ComputeBlockVersion(lastBlock, mainnetParams) & VERSIONBITS_TOP_MASK, VERSIONBITS_TOP_BITS);
blocksToMine--;
nTime += 600;
nHeight += 1;
};
nTime = nTimeout;
// FAILED is only triggered at the end of a period, so CBV should be setting
// the bit until the period transition.
for (int i=0; i<2015; i++) {
lastBlock = firstChain.Mine(nHeight+1, nTime, VERSIONBITS_LAST_OLD_BLOCK_VERSION).Tip();
BOOST_CHECK((ComputeBlockVersion(lastBlock, mainnetParams) & (1<<bit)) != 0);
nHeight += 1;
}
// The next block should trigger no longer setting the bit.
lastBlock = firstChain.Mine(nHeight+1, nTime, VERSIONBITS_LAST_OLD_BLOCK_VERSION).Tip();
BOOST_CHECK_EQUAL(ComputeBlockVersion(lastBlock, mainnetParams) & (1<<bit), 0);
// On a new chain:
// verify that the bit will be set after lock-in, and then stop being set
// after activation.
nTime = nStartTime;
// Mine one period worth of blocks, and check that the bit will be on for the
// next period.
lastBlock = secondChain.Mine(2016, nStartTime, VERSIONBITS_LAST_OLD_BLOCK_VERSION).Tip();
BOOST_CHECK((ComputeBlockVersion(lastBlock, mainnetParams) & (1<<bit)) != 0);
// Mine another period worth of blocks, signaling the new bit.
lastBlock = secondChain.Mine(4032, nStartTime, VERSIONBITS_TOP_BITS | (1<<bit)).Tip();
// After one period of setting the bit on each block, it should have locked in.
// We keep setting the bit for one more period though, until activation.
BOOST_CHECK((ComputeBlockVersion(lastBlock, mainnetParams) & (1<<bit)) != 0);
// Now check that we keep mining the block until the end of this period, and
// then stop at the beginning of the next period.
lastBlock = secondChain.Mine(6047, nStartTime, VERSIONBITS_LAST_OLD_BLOCK_VERSION).Tip();
BOOST_CHECK((ComputeBlockVersion(lastBlock, mainnetParams) & (1<<bit)) != 0);
lastBlock = secondChain.Mine(6048, nStartTime, VERSIONBITS_LAST_OLD_BLOCK_VERSION).Tip();
BOOST_CHECK_EQUAL(ComputeBlockVersion(lastBlock, mainnetParams) & (1<<bit), 0);
// Finally, verify that after a soft fork has activated, CBV no longer uses
// VERSIONBITS_LAST_OLD_BLOCK_VERSION.
//BOOST_CHECK_EQUAL(ComputeBlockVersion(lastBlock, mainnetParams) & VERSIONBITS_TOP_MASK, VERSIONBITS_TOP_BITS);
}
BOOST_AUTO_TEST_SUITE_END()

View File

@ -22,10 +22,10 @@ using namespace std;
CTxMemPoolEntry::CTxMemPoolEntry(const CTransaction& _tx, const CAmount& _nFee,
int64_t _nTime, double _entryPriority, unsigned int _entryHeight,
bool poolHasNoInputsOf, CAmount _inChainInputValue,
bool _spendsCoinbase, unsigned int _sigOps):
bool _spendsCoinbase, unsigned int _sigOps, LockPoints lp):
tx(_tx), nFee(_nFee), nTime(_nTime), entryPriority(_entryPriority), entryHeight(_entryHeight),
hadNoDependencies(poolHasNoInputsOf), inChainInputValue(_inChainInputValue),
spendsCoinbase(_spendsCoinbase), sigOpCount(_sigOps)
spendsCoinbase(_spendsCoinbase), sigOpCount(_sigOps), lockPoints(lp)
{
nTxSize = ::GetSerializeSize(tx, SER_NETWORK, PROTOCOL_VERSION);
nModSize = tx.CalculateModifiedSize(nTxSize);
@ -61,6 +61,11 @@ void CTxMemPoolEntry::UpdateFeeDelta(int64_t newFeeDelta)
feeDelta = newFeeDelta;
}
void CTxMemPoolEntry::UpdateLockPoints(const LockPoints& lp)
{
lockPoints = lp;
}
// Update the given tx for any in-mempool descendants.
// Assumes that setMemPoolChildren is correct for the given tx and all
// descendants.
@ -506,7 +511,11 @@ void CTxMemPool::removeForReorg(const CCoinsViewCache *pcoins, unsigned int nMem
list<CTransaction> transactionsToRemove;
for (indexed_transaction_set::const_iterator it = mapTx.begin(); it != mapTx.end(); it++) {
const CTransaction& tx = it->GetTx();
if (!CheckFinalTx(tx, flags)) {
LockPoints lp = it->GetLockPoints();
bool validLP = TestLockPointValidity(&lp);
if (!CheckFinalTx(tx, flags) || !CheckSequenceLocks(tx, flags, &lp, validLP)) {
// Note if CheckSequenceLocks fails the LockPoints may still be invalid
// So it's critical that we remove the tx and not depend on the LockPoints.
transactionsToRemove.push_back(tx);
} else if (it->GetSpendsCoinbase()) {
BOOST_FOREACH(const CTxIn& txin, tx.vin) {
@ -521,6 +530,9 @@ void CTxMemPool::removeForReorg(const CCoinsViewCache *pcoins, unsigned int nMem
}
}
}
if (!validLP) {
mapTx.modify(it, update_lock_points(lp));
}
}
BOOST_FOREACH(const CTransaction& tx, transactionsToRemove) {
list<CTransaction> removed;

View File

@ -19,6 +19,7 @@
#include "boost/multi_index/ordered_index.hpp"
class CAutoFile;
class CBlockIndex;
inline double AllowFreeThreshold()
{
@ -36,6 +37,21 @@ inline bool AllowFree(double dPriority)
/** Fake height value used in CCoins to signify they are only in the memory pool (since 0.8) */
static const unsigned int MEMPOOL_HEIGHT = 0x7FFFFFFF;
struct LockPoints
{
// Will be set to the blockchain height and median time past
// values that would be necessary to satisfy all relative locktime
// constraints (BIP68) of this tx given our view of block chain history
int height;
int64_t time;
// As long as the current chain descends from the highest height block
// containing one of the inputs used in the calculation, then the cached
// values are still valid even after a reorg.
CBlockIndex* maxInputBlock;
LockPoints() : height(0), time(0), maxInputBlock(NULL) { }
};
class CTxMemPool;
/** \class CTxMemPoolEntry
@ -71,6 +87,7 @@ private:
bool spendsCoinbase; //! keep track of transactions that spend a coinbase
unsigned int sigOpCount; //! Legacy sig ops plus P2SH sig op count
int64_t feeDelta; //! Used for determining the priority of the transaction for mining in a block
LockPoints lockPoints; //! Track the height and time at which tx was final
// Information about descendants of this transaction that are in the
// mempool; if we remove this transaction we must remove all of these
@ -85,7 +102,7 @@ public:
CTxMemPoolEntry(const CTransaction& _tx, const CAmount& _nFee,
int64_t _nTime, double _entryPriority, unsigned int _entryHeight,
bool poolHasNoInputsOf, CAmount _inChainInputValue, bool spendsCoinbase,
unsigned int nSigOps);
unsigned int nSigOps, LockPoints lp);
CTxMemPoolEntry(const CTxMemPoolEntry& other);
const CTransaction& GetTx() const { return this->tx; }
@ -102,12 +119,15 @@ public:
unsigned int GetSigOpCount() const { return sigOpCount; }
int64_t GetModifiedFee() const { return nFee + feeDelta; }
size_t DynamicMemoryUsage() const { return nUsageSize; }
const LockPoints& GetLockPoints() const { return lockPoints; }
// Adjusts the descendant state, if this entry is not dirty.
void UpdateState(int64_t modifySize, CAmount modifyFee, int64_t modifyCount);
// Updates the fee delta used for mining priority score, and the
// modified fees with descendants.
void UpdateFeeDelta(int64_t feeDelta);
// Update the LockPoints after a reorg
void UpdateLockPoints(const LockPoints& lp);
/** We can set the entry to be dirty if doing the full calculation of in-
* mempool descendants will be too expensive, which can potentially happen
@ -155,6 +175,16 @@ private:
int64_t feeDelta;
};
struct update_lock_points
{
update_lock_points(const LockPoints& _lp) : lp(_lp) { }
void operator() (CTxMemPoolEntry &e) { e.UpdateLockPoints(lp); }
private:
const LockPoints& lp;
};
// extracts a TxMemPoolEntry's transaction hash
struct mempoolentry_txid
{

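Sketch of constructing a mempool entry with the extended signature above (values are placeholders; 'tx' is assumed to be an existing CTransaction, and GetTime/chainActive come from the usual headers). The unit-test helper TestMemPoolEntryHelper wraps the same call.
LockPoints lp;  // default-initialised: height 0, time 0, no maxInputBlock
CTxMemPoolEntry entry(tx, /* nFee */ 1000, GetTime(), /* dPriority */ 0.0,
                      chainActive.Height(), /* poolHasNoInputsOf */ false,
                      /* inChainInputValue */ 0, /* spendsCoinbase */ false,
                      /* nSigOps */ 1, lp);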
src/versionbits.cpp (new file, 133 lines)
View File

@ -0,0 +1,133 @@
// Copyright (c) 2016 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#include "versionbits.h"
ThresholdState AbstractThresholdConditionChecker::GetStateFor(const CBlockIndex* pindexPrev, const Consensus::Params& params, ThresholdConditionCache& cache) const
{
int nPeriod = Period(params);
int nThreshold = Threshold(params);
int64_t nTimeStart = BeginTime(params);
int64_t nTimeTimeout = EndTime(params);
// A block's state is always the same as that of the first of its period, so it is computed based on a pindexPrev whose height equals a multiple of nPeriod - 1.
if (pindexPrev != NULL) {
pindexPrev = pindexPrev->GetAncestor(pindexPrev->nHeight - ((pindexPrev->nHeight + 1) % nPeriod));
}
// Walk backwards in steps of nPeriod to find a pindexPrev whose information is known
std::vector<const CBlockIndex*> vToCompute;
while (cache.count(pindexPrev) == 0) {
if (pindexPrev == NULL) {
// The genesis block is by definition defined.
cache[pindexPrev] = THRESHOLD_DEFINED;
break;
}
if (pindexPrev->GetMedianTimePast() < nTimeStart) {
// Optimization: don't recompute down further, as we know every earlier block will be before the start time
cache[pindexPrev] = THRESHOLD_DEFINED;
break;
}
vToCompute.push_back(pindexPrev);
pindexPrev = pindexPrev->GetAncestor(pindexPrev->nHeight - nPeriod);
}
// At this point, cache[pindexPrev] is known
assert(cache.count(pindexPrev));
ThresholdState state = cache[pindexPrev];
// Now walk forward and compute the state of descendants of pindexPrev
while (!vToCompute.empty()) {
ThresholdState stateNext = state;
pindexPrev = vToCompute.back();
vToCompute.pop_back();
switch (state) {
case THRESHOLD_DEFINED: {
if (pindexPrev->GetMedianTimePast() >= nTimeTimeout) {
stateNext = THRESHOLD_FAILED;
} else if (pindexPrev->GetMedianTimePast() >= nTimeStart) {
stateNext = THRESHOLD_STARTED;
}
break;
}
case THRESHOLD_STARTED: {
if (pindexPrev->GetMedianTimePast() >= nTimeTimeout) {
stateNext = THRESHOLD_FAILED;
break;
}
// We need to count
const CBlockIndex* pindexCount = pindexPrev;
int count = 0;
for (int i = 0; i < nPeriod; i++) {
if (Condition(pindexCount, params)) {
count++;
}
pindexCount = pindexCount->pprev;
}
if (count >= nThreshold) {
stateNext = THRESHOLD_LOCKED_IN;
}
break;
}
case THRESHOLD_LOCKED_IN: {
// Always progresses into ACTIVE.
stateNext = THRESHOLD_ACTIVE;
break;
}
case THRESHOLD_FAILED:
case THRESHOLD_ACTIVE: {
// Nothing happens, these are terminal states.
break;
}
}
cache[pindexPrev] = state = stateNext;
}
return state;
}
namespace
{
/**
* Class to implement versionbits logic.
*/
class VersionBitsConditionChecker : public AbstractThresholdConditionChecker {
private:
const Consensus::DeploymentPos id;
protected:
int64_t BeginTime(const Consensus::Params& params) const { return params.vDeployments[id].nStartTime; }
int64_t EndTime(const Consensus::Params& params) const { return params.vDeployments[id].nTimeout; }
int Period(const Consensus::Params& params) const { return params.nMinerConfirmationWindow; }
int Threshold(const Consensus::Params& params) const { return params.nRuleChangeActivationThreshold; }
bool Condition(const CBlockIndex* pindex, const Consensus::Params& params) const
{
return (((pindex->nVersion & VERSIONBITS_TOP_MASK) == VERSIONBITS_TOP_BITS) && (pindex->nVersion & Mask(params)) != 0);
}
public:
VersionBitsConditionChecker(Consensus::DeploymentPos id_) : id(id_) {}
uint32_t Mask(const Consensus::Params& params) const { return ((uint32_t)1) << params.vDeployments[id].bit; }
};
}
ThresholdState VersionBitsState(const CBlockIndex* pindexPrev, const Consensus::Params& params, Consensus::DeploymentPos pos, VersionBitsCache& cache)
{
return VersionBitsConditionChecker(pos).GetStateFor(pindexPrev, params, cache.caches[pos]);
}
uint32_t VersionBitsMask(const Consensus::Params& params, Consensus::DeploymentPos pos)
{
return VersionBitsConditionChecker(pos).Mask(params);
}
void VersionBitsCache::Clear()
{
for (unsigned int d = 0; d < Consensus::MAX_VERSION_BITS_DEPLOYMENTS; d++) {
caches[d].clear();
}
}

src/versionbits.h (new file, 59 lines)
View File

@ -0,0 +1,59 @@
// Copyright (c) 2016 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#ifndef BITCOIN_CONSENSUS_VERSIONBITS
#define BITCOIN_CONSENSUS_VERSIONBITS
#include "chain.h"
#include <map>
/** What block version to use for new blocks (pre versionbits) */
static const int32_t VERSIONBITS_LAST_OLD_BLOCK_VERSION = 4;
/** What bits to set in version for versionbits blocks */
static const int32_t VERSIONBITS_TOP_BITS = 0x20000000UL;
/** What bitmask determines whether versionbits is in use */
static const int32_t VERSIONBITS_TOP_MASK = 0xE0000000UL;
/** Total bits available for versionbits */
static const int32_t VERSIONBITS_NUM_BITS = 29;
enum ThresholdState {
THRESHOLD_DEFINED,
THRESHOLD_STARTED,
THRESHOLD_LOCKED_IN,
THRESHOLD_ACTIVE,
THRESHOLD_FAILED,
};
// A map that gives the state for blocks whose height is a multiple of Period().
// The map is indexed by the block's parent, however, so all keys in the map
// will either be NULL or a block with (height + 1) % Period() == 0.
typedef std::map<const CBlockIndex*, ThresholdState> ThresholdConditionCache;
/**
* Abstract class that implements BIP9-style threshold logic, and caches results.
*/
class AbstractThresholdConditionChecker {
protected:
virtual bool Condition(const CBlockIndex* pindex, const Consensus::Params& params) const =0;
virtual int64_t BeginTime(const Consensus::Params& params) const =0;
virtual int64_t EndTime(const Consensus::Params& params) const =0;
virtual int Period(const Consensus::Params& params) const =0;
virtual int Threshold(const Consensus::Params& params) const =0;
public:
// Note that the function below takes a pindexPrev as input: it computes information for block B based on its parent.
ThresholdState GetStateFor(const CBlockIndex* pindexPrev, const Consensus::Params& params, ThresholdConditionCache& cache) const;
};
struct VersionBitsCache
{
ThresholdConditionCache caches[Consensus::MAX_VERSION_BITS_DEPLOYMENTS];
void Clear();
};
ThresholdState VersionBitsState(const CBlockIndex* pindexPrev, const Consensus::Params& params, Consensus::DeploymentPos pos, VersionBitsCache& cache);
uint32_t VersionBitsMask(const Consensus::Params& params, Consensus::DeploymentPos pos);
#endif

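Illustrative usage of the new interface (not part of this diff): querying the state a deployment would have for the block built on the current tip. A throwaway cache is used for clarity; production code would reuse a shared cache guarded by cs_main.
VersionBitsCache cache;
const Consensus::Params& consensusParams = Params().GetConsensus();
// GetStateFor/VersionBitsState take the previous block, so passing the tip
// yields the state that applies to the next block.
ThresholdState state = VersionBitsState(chainActive.Tip(), consensusParams, Consensus::DEPLOYMENT_CSV, cache);
if (state == THRESHOLD_LOCKED_IN) {
    // Activation is guaranteed one retarget period later.
}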
View File

@ -1440,6 +1440,7 @@ void ListTransactions(const CWalletTx& wtx, const string& strAccount, int nMinDe
entry.push_back(Pair("fee", ValueFromAmount(-nFee)));
if (fLong)
WalletTxToJSON(wtx, entry);
entry.push_back(Pair("abandoned", wtx.isAbandoned()));
ret.push_back(entry);
}
}

View File

@ -40,9 +40,7 @@
using namespace std;
/**
* Settings
*/
/** Transaction fee set by the user */
CFeeRate payTxFee(DEFAULT_TRANSACTION_FEE);
CAmount maxTxFee = DEFAULT_TRANSACTION_MAXFEE;
unsigned int nTxConfirmTarget = DEFAULT_TX_CONFIRM_TARGET;
@ -2042,7 +2040,7 @@ CAmount CWallet::GetUnconfirmedBalance() const
for (map<uint256, CWalletTx>::const_iterator it = mapWallet.begin(); it != mapWallet.end(); ++it)
{
const CWalletTx* pcoin = &(*it).second;
if (!CheckFinalTx(*pcoin) || (!pcoin->IsTrusted() && pcoin->GetDepthInMainChain() == 0))
if (!pcoin->IsTrusted() && pcoin->GetDepthInMainChain() == 0 && pcoin->InMempool())
nTotal += pcoin->GetAvailableCredit();
}
}
@ -2087,7 +2085,7 @@ CAmount CWallet::GetUnconfirmedWatchOnlyBalance() const
for (map<uint256, CWalletTx>::const_iterator it = mapWallet.begin(); it != mapWallet.end(); ++it)
{
const CWalletTx* pcoin = &(*it).second;
if (!CheckFinalTx(*pcoin) || (!pcoin->IsTrusted() && pcoin->GetDepthInMainChain() == 0))
if (!pcoin->IsTrusted() && pcoin->GetDepthInMainChain() == 0 && pcoin->InMempool())
nTotal += pcoin->GetAvailableWatchOnlyCredit();
}
}
@ -2133,6 +2131,11 @@ void CWallet::AvailableCoins(vector<COutput>& vCoins, bool fOnlyConfirmed, const
if (useIX && nDepth < 6)
continue;
// We should not consider coins which aren't at least in our mempool
// It's possible for these to be conflicted via ancestors which we may never be able to detect
if (nDepth == 0 && !pcoin->InMempool())
continue;
for (unsigned int i = 0; i < pcoin->vout.size(); i++) {
bool found = false;
if(coin_type == ONLY_DENOMINATED) {