- Removed the coinbase cache in favor of using only the masternode payments voting system
- Syncing masternode payments now supports syncing up to the entire payment list
We don't want to erase orphans that still have missing inputs; they should still be tracked as orphans. Also, the transaction that's being accepted can't be an orphan (otherwise it would have previously been accepted), so it doesn't need to be added to the erase queue.
Github-Pull: #5985
Rebased-From: 14d4eef799
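A minimal sketch of that erase-queue decision, with illustrative flag names for the outcome of the mempool acceptance attempt (not the exact handler code):
```
// Hedged sketch: decide whether a processed orphan should be queued for
// erasure from the orphan pool.
bool ShouldEraseOrphan(bool fAccepted, bool fMissingInputs)
{
    if (fAccepted) return true;        // accepted into the mempool: no longer an orphan
    if (fMissingInputs) return false;  // inputs still missing: keep tracking it as an orphan
    return true;                       // rejected for another reason: drop it
}
```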
Make it possible to opt-out of the centralized alert system by providing
an option `-noalerts` or `-alerts=0`. The default remains unchanged.
This is a gentler form of #6260, in which I went a bit overboard by
removing the alert system completely.
I intend to add this to the GUI options in another pull after this.
Conflicts:
src/init.cpp
src/main.cpp
Github-Pull: #6274
Rebased-From: 02a6702a82
- Do not rely on local lastTimeSeen and fRequested anymore. Use the last known (signed) ping instead and base all logic on that. Should reduce masternode list differences between nodes.
- Rework CActiveMasternode accordingly along with states, errorMessages, rpc etc.
- Clean some related code, move parts from public to private
- drop c_str() in LogPrintf calls related to this functionality (todo: drop it for LogPrintf everywhere else)
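A minimal sketch of the ping-based liveness check that replaces lastTimeSeen; the function and parameter names are illustrative:
```
// Hedged sketch: base masternode liveness purely on the last known signed ping.
bool IsPingedWithin(int64_t nPingSigTime, int nSeconds, int64_t nNow)
{
    return (nNow - nPingSigTime) < nSeconds;
}
```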
Submissions to the network now require a fee to be paid to the network (mining fee) using a special transaction that carries OP_RETURN plus the ProposalHash in one of its outputs. This allows the network to filter spam quickly, while still allowing anyone to submit a proposal to the network.
To implement these changes we've introduced a few new commands:
mnbudget prepare PROPOSAL-NAME URL PAYMENT_COUNT BLOCK_START DASH_ADDRESS DASH_AMOUNT YES|NO|ABSTAIN [USE_IX(TRUE|FALSE)]
- To create the special transaction
mnbudget submit PROPOSAL-NAME URL PAYMENT_COUNT BLOCK_START DASH_ADDRESS DASH_AMOUNT YES|NO|ABSTAIN FEE_TX
- After the transaction is accepted by the network and has 3 confirmations, submit the proposal to the network with this command
mnbudget show
- Get the proposal hash from here
mnbudget vote PROPOSAL-HASH YES|NO|ABSTAIN
- You can now simply vote by hash using this command
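A minimal sketch of how the fee output committing to the proposal hash could be built, assuming the codebase's CScript/CTxOut/uint256 types; the helper name is illustrative:
```
// Hedged sketch: an unspendable OP_RETURN output that commits to the proposal hash.
CTxOut MakeProposalFeeOutput(const uint256& hashProposal, CAmount nFeeAmount)
{
    CScript scriptCollateral;
    scriptCollateral << OP_RETURN << ToByteVector(hashProposal);
    return CTxOut(nFeeAmount, scriptCollateral);
}
```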
-Syncing now happens in stages: masternodes and sporks first, then masternode winners, then proposals. Some of these stages require masternode signatures; otherwise there are race conditions within the syncing process itself.
-Re-signing - When a proposal is initially sent to the network it's signed by a masternode; if that masternode goes inactive, the proposal becomes invalid. Re-signing allows other masternodes to update the proposal and keep it valid as masternodes come and go.
-Re-signing compatibility - non-masternodes will scan and flag proposals as invalid in order to accept updated owners.
-Invalid votes are now actively removed from the proposals when they go inactive
- Remove budgets with negative votes of more than 10% of the network
- Only allow proposals into the budget that have more than 10% of network support (see the sketch after this list)
- Faster removal of inactive masternodes
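A minimal sketch of the 10%-of-network rules above, assuming net (yes minus no) votes; the function names and vote counting are illustrative:
```
// Hedged sketch of the 10% support thresholds described above.
bool HasEnoughSupport(int nYesVotes, int nNoVotes, int nMasternodeCount)
{
    // only proposals with net support above 10% of the masternode network get into the budget
    return (nYesVotes - nNoVotes) > nMasternodeCount / 10;
}

bool ShouldRemoveProposal(int nYesVotes, int nNoVotes, int nMasternodeCount)
{
    // proposals with net negative votes of more than 10% of the network are removed
    return (nNoVotes - nYesVotes) > nMasternodeCount / 10;
}
```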
4f40716 test: Move reindex test to standard tests (Wladimir J. van der Laan)
36c97b4 Bugfix: Don't check the genesis block header before accepting it (Jorge Timón)
- Implemented spork for only paying new nodes after a period of time on mainnet
- protocol bump
- fixed a few issues with sporks: `spork show` now shows all sporks instead of only the changed ones, and IsSporkActive now treats sporks set to 0 as on (see the sketch after this list)
- With nodes coming and going on the network, the network could come to different opinions about who should get paid next in line because some nodes were flagged as failing a PoSe check. This will have to be fixed by introducing a blockchain-based PoSe system, but that's out of the scope of this release. To fix the issues in the interim, I'm removing PoSe checks for the time being.
- Masternode nLastPaid is removed and replaced by a new caching system that keeps the last 30 days of coinbase payees
- To deal with some significant attack vectors, the masternode donation feature was removed. The donation feature was added to support development anyway, so it will be replaced by the budgeting code.
- This code should allow the network to come to consensus about who should be paid pretty effectively
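A minimal sketch of an IsSporkActive-style check where a spork value of 0 counts as on; illustrative, not the exact spork code:
```
// Hedged sketch: sporks store the epoch time at which they become active,
// so a value of 0 means "active immediately".
bool IsSporkActive(int64_t nSporkValue, int64_t nNow)
{
    return nSporkValue < nNow;
}
```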
When responding to a getblocks message, only return inv's as
long as we HAVE_DATA for blocks in the chain, and only for blocks
that we aren't likely to delete in the near future.
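A minimal sketch of that guard as a predicate, assuming the codebase's CBlockIndex/BLOCK_HAVE_DATA names and a hypothetical nPrunedBlocksLikelyToHave margin:
```
// Hedged sketch: should we keep returning invs for this block in "getblocks"?
bool ShouldAnnounceInGetblocks(const CBlockIndex* pindex, const CBlockIndex* pindexTip,
                               bool fPruneMode, int nPrunedBlocksLikelyToHave)
{
    bool fHaveData = (pindex->nStatus & BLOCK_HAVE_DATA) != 0;
    bool fMayPruneSoon = fPruneMode &&
        pindex->nHeight <= pindexTip->nHeight - nPrunedBlocksLikelyToHave;
    return fHaveData && !fMayPruneSoon;   // stop the inv loop as soon as this is false
}
```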
The partition checking code was using chainActive timestamps
to detect partitioning; with headers-first syncing, it should use
(and with this pull request, does use) pIndexBestHeader timestamps.
Fixes issue #6251
This prevents an edge case where a block downloaded and pruned
in-between successive calls to FindNextBlocksToDownload could
cause the block to be unnecessarily re-requested.
In some corner cases, it may be possible for recent blocks to end up in
the same block file as much older blocks. Previously, the pruning code
would stop looking for files to remove upon first encountering a file
containing a block that cannot be pruned, now it will keep looking for
candidate files until the target is met and all other criteria are
satisfied.
This can result in a noncontiguous set of block files (by number) on
disk, which is fine except for during some reindex corner cases, so
make reindex preparation smarter such that we keep the data we can
actually use and throw away the rest. This allows pruning to work
correctly while downloading any blocks needed during the reindex.
- Added FindProposal and FindFinalBudget to budgeting class
- Added 2 new sporks for Proposals and Budget payment enforcement. This is outside of the decentralized code so we can turn it off if there's a problem.
- Detect budget blocks and pay correct amounts in super blocks
- All budgeting code seems to be rather stable now. Serialization/caching is working rather well.
- Fixed some ambiguous variable names within the budgeting system that were causing the file caching to not work all of the time
Previously due to an off-by-one error the wallet ignored
nLockTime-by-height transactions that would be valid in the next block
even though they are accepted into the mempool. The transactions
wouldn't show up until confirmed, nor would they be included in the
unconfirmed balance. Similar to the mempool behavior fix in 665bdd3b,
the wallet code was calling IsFinalTx() directly without taking into
account the fact that doing so tells you if the transaction could have
been mined in the *current* block, rather than the next block.
To fix this we strip IsFinalTx() of non-consensus-critical
functionality, removing the default arguments, and add CheckFinalTx() to
check if a transaction will be final in the next block.
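A minimal sketch of the resulting split, with CheckFinalTx() wrapping the consensus-only IsFinalTx() and targeting the next block (illustrative):
```
// Hedged sketch: evaluate finality against the *next* block rather than the
// current one.
bool CheckFinalTx(const CTransaction& tx)
{
    LOCK(cs_main);
    return IsFinalTx(tx, chainActive.Height() + 1, GetAdjustedTime());
}
```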
Previously this was cleared only after UnlinkPrunedFiles, but it should really be cleared after FindFilesToPrune, regardless of whether there are any files to be pruned.
86a5f4b Relocate calls to CheckDiskSpace (Alex Morcos)
67708ac Write block index more frequently than cache flushes (Pieter Wuille)
b3ed423 Cache tweak and logging improvements (Pieter Wuille)
fc684ad Use accurate memory for flushing decisions (Pieter Wuille)
046392d Keep track of memory usage in CCoinsViewCache (Pieter Wuille)
540629c Add memusage.h (Pieter Wuille)
- Added commands for using budgets: "mnbudget" and "mnfinalbudget"
- Supports 100% decentralized budget control and a view-only site with a JSON metadata object
Create a monitoring task that counts how many blocks have been found in the last four hours.
If very few or too many have been found, an alert is triggered.
"Very few" and "too many" are set based on a false positive rate of once every fifty years of constant running with constant hashing power, which works out to getting 5 or fewer or 48 or more blocks in four hours (instead of the average of 24).
Only one alert per day is triggered, so if you get disconnected from the network (or are being Sybil'ed) -alertnotify will be triggered after 3.5 hours but you won't get another -alertnotify for 24 hours.
Tested with a new unit test and by running on the main network with -debug=partitioncheck
Run test/test_bitcoin --log_level=message to see the alert messages:
WARNING: check your network connection, 3 blocks received in the last 4 hours (24 expected)
WARNING: abnormally high number of blocks generated, 60 blocks received in the last 4 hours (24 expected)
The -debug=partitioncheck debug.log messages look like:
ThreadPartitionCheck : Found 22 blocks in the last 4 hours
ThreadPartitionCheck : likelihood: 0.0777702
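A minimal, self-contained sketch of the "likelihood" number in those log lines: the Poisson probability of seeing exactly that many blocks when 24 are expected (the real code uses a library distribution; this is illustrative):
```
// Hedged sketch: Poisson probability of observing exactly nBlocks in a window
// whose expected count is nExpected (e.g. 24 blocks per 4 hours).
#include <cmath>

double PoissonLikelihood(int nBlocks, double nExpected)
{
    return std::exp(-nExpected) * std::pow(nExpected, nBlocks) / std::tgamma(nBlocks + 1.0);
}
// PoissonLikelihood(22, 24.0) is about 0.0778, matching the log line above.
```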
Instead of only checking height to decide whether to disable script checks,
actually check whether a block is an ancestor of a checkpoint, up to which
headers have been validated. This means that we don't have to prevent
accepting a side branch anymore - it will be safe, just less fast to
do.
We still need to prevent being fed a multitude of low-difficulty headers
filling up our memory. The mechanism for that is unchanged for now: once
a checkpoint is reached with headers, no headers chain branching off before
that point is allowed anymore.
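A minimal sketch of the ancestor test that replaces the height comparison, assuming the codebase's CBlockIndex::GetAncestor() helper; the function name is illustrative:
```
// Hedged sketch: script checks may only be skipped for blocks that are
// ancestors of a checkpointed block whose headers we have already validated.
bool CanSkipScriptChecks(const CBlockIndex* pindex, const CBlockIndex* pindexLastCheckpoint)
{
    return pindexLastCheckpoint != NULL &&
           pindexLastCheckpoint->GetAncestor(pindex->nHeight) == pindex;
}
```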
This class groups transactions that have been confirmed in blocks into buckets, based on either their fee or their priority. Then for each bucket, the class calculates what percentage of the transactions were confirmed within various numbers of blocks. It does this by keeping an exponentially decaying moving history for each bucket and confirm block count of the percentage of transactions in that bucket that were confirmed within that number of blocks.
-Eliminate txs which didn't have all inputs available at entry from fee/pri calcs
-Add dynamic breakpoints and tracking of confirmation delays in mempool transactions
-Remove old CMinerPolicyEstimator and CBlockAverage code
-New smartfees.py
-Pass a flag to the estimation code, using IsInitialBlockDownload as a proxy for when we are still catching up and we shouldn't be counting how many blocks it takes for transactions to be included.
-Add a policyestimator unit test
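A minimal, self-contained sketch of the exponentially decaying per-bucket history described above; the names and decay constant are illustrative, not the real estimator internals:
```
// Hedged sketch: one fee (or priority) bucket tracking, with exponential decay,
// how many of its transactions confirmed within the target number of blocks.
struct FeeBucket {
    double decayedTotal = 0.0;       // decayed count of txs that landed in this bucket
    double decayedConfirmed = 0.0;   // decayed count confirmed within the target

    // called once per new block; decay is slightly below 1.0 (e.g. 0.998)
    void UpdateForBlock(double decay, int nNewTxs, int nNewConfirmed)
    {
        decayedTotal = decayedTotal * decay + nNewTxs;
        decayedConfirmed = decayedConfirmed * decay + nNewConfirmed;
    }

    double ConfirmedFraction() const
    {
        return decayedTotal > 0.0 ? decayedConfirmed / decayedTotal : 0.0;
    }
};
```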
a8cdaf5 checkpoints: move the checkpoints enable boolean into main (Cory Fields)
11982d3 checkpoints: Decouple checkpoints from Params (Cory Fields)
6996823 checkpoints: make checkpoints a member of CChainParams (Cory Fields)
9f13a10 checkpoints: store mapCheckpoints in CCheckpointData rather than a pointer (Cory Fields)
Use a probabilistic bloom filter to keep track of which addresses
we think we have given our peers, instead of a list.
This uses much less memory, at the cost of sometimes failing to
relay an address to a peer-- worst case if the bloom filter happens
to be as full as it gets, 1-in-1,000.
Measured memory usage of a full mruset setAddrKnown: 650Kbytes
Constant memory usage of CRollingBloomFilter addrKnown: 37Kbytes.
This will also help heap fragmentation, because the 37K of storage
is allocated when a CNode is created (when a connection to a peer
is established) and then there is no per-item-remembered memory
allocation.
I plan on testing by restarting a full node with an empty peers.dat,
running a while with -debug=addrman and -debug=net, and making sure
that the 'addr' message traffic out is reasonable.
(suggestions for better tests welcome)
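A minimal sketch of the replacement described above, with a per-peer CRollingBloomFilter standing in for the mruset; the parameters and helper are illustrative:
```
// Hedged sketch: remember addresses already relayed to a peer in a rolling
// bloom filter (small, fixed memory) instead of an exact set. addrKnown would
// live on the CNode, e.g. CRollingBloomFilter addrKnown(5000, 0.001).
void MaybeRelayAddr(CNode* pto, CRollingBloomFilter& addrKnown, const CAddress& addr)
{
    if (addrKnown.contains(addr.GetKey()))
        return;                         // (probably) already relayed to this peer
    addrKnown.insert(addr.GetKey());
    pto->PushAddress(addr);
}
```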
- Removed the reference node and replaced it with decentralized quorums that pick the masternodes who get paid each block.
- Made a budgeting system where masternodes can vote on individual budgets, with the data stored permanently on each client's computer
This adds a -prune=N option to bitcoind, which if set to N>0 will enable block
file pruning. When pruning is enabled, block and undo files will be deleted to
try to keep total space used by those files to below the prune target (N, in
MB) specified by the user, subject to some constraints:
- The last 288 blocks on the main chain are always kept (MIN_BLOCKS_TO_KEEP),
- N must be at least 550MB (chosen as a value for the target that could
reasonably be met, with some assumptions about block sizes, orphan rates,
etc; see comment in main.h),
- No blocks are pruned until chainActive is at least 100,000 blocks long (on
mainnet; defined separately for mainnet, testnet, and regtest in chainparams
as nPruneAfterHeight).
This unsets NODE_NETWORK if pruning is enabled.
Also included is an RPC test for pruning (pruning.py).
Thanks to @rdponticelli for earlier work on this feature; this is based in
part off that work.
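A minimal sketch of the constraints listed above, restating the constants from the text; the function names are illustrative:
```
// Hedged sketch of the -prune constraints: target at least 550 MB, keep the
// last 288 blocks, and don't prune until the chain is past nPruneAfterHeight.
#include <cstdint>

bool IsValidPruneTarget(uint64_t nPruneTargetMB)
{
    return nPruneTargetMB >= 550;
}

bool CanPruneBlock(int nHeight, int nTipHeight, int nPruneAfterHeight)
{
    const int MIN_BLOCKS_TO_KEEP = 288;
    return nTipHeight > nPruneAfterHeight &&
           nHeight <= nTipHeight - MIN_BLOCKS_TO_KEEP;
}
```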
Some tests in CheckBlockIndex require chainActive.Tip(), but when reindexing, chainActive has not been set on the first call to CheckBlockIndex.
reindex.py starts a node, mines 3 blocks, stops, and reindexes with CheckBlockIndex enabled.
Rebased-From: 0421c18f3a
Github-Pull: #6012
Compare the block download timeout to what the timeout would be if calculated
based on current time and current value of nQueuedValidatedHeaders, but
ignoring other in-flight blocks from the same peer. If the calculation based on
present conditions is shorter, then set that to be the time after which we
disconnect the peer for not delivering this block.
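A minimal sketch of that comparison; the helper and timeout constants are illustrative, not the real values:
```
// Hedged sketch: recompute what the timeout would be if the block were
// requested now, ignoring this peer's other in-flight blocks, and keep the
// earlier of the two deadlines.
#include <algorithm>
#include <cstdint>

int64_t BlockTimeout(int64_t nFrom, int nValidatedQueuedBefore)
{
    const int64_t BASE = 10 * 60 * 1000000LL;      // illustrative base, microseconds
    const int64_t PER_HEADER = 5 * 60 * 1000000LL; // illustrative per queued validated header
    return nFrom + BASE + PER_HEADER * nValidatedQueuedBefore;
}

int64_t EffectiveDisconnectTime(int64_t nOriginalDeadline, int64_t nNow,
                                int nQueuedValidatedHeaders, int nInFlightFromSamePeer)
{
    int64_t nIfRequestedNow = BlockTimeout(nNow, nQueuedValidatedHeaders - nInFlightFromSamePeer);
    return std::min(nOriginalDeadline, nIfRequestedNow);
}
```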
SendMessages will now call getheaders on both inbound and outbound peers,
once the headers chain is close to synced. It will also try downloading
blocks from inbound peers once we're out of initial block download (so
inbound peers will participate in parallel block fetching for the last day
or two of blocks being downloaded).
This adds more tests to CheckBlockIndex:
- HAVE_DATA is true iff nTx > 0
- BLOCK_VALID_TRANSACTIONS is true iff nTx > 0
- BLOCK_VALID_TRANSACTIONS is true for a block and all parents iff
nChainTx > 0
This adds a -checkblockindex (defaulting to true for regtest), which occasionally
does a full consistency check for mapBlockIndex, setBlockIndexCandidates, chainActive, and
mapBlocksUnlinked.
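A minimal sketch of the first two invariants as CheckBlockIndex-style asserts over a single entry, assuming the codebase's block-index flags; the real check walks the whole block tree:
```
// Hedged sketch of the per-entry invariants listed above.
#include <cassert>

void CheckIndexEntry(const CBlockIndex* pindex)
{
    // HAVE_DATA is true iff nTx > 0
    assert(!(pindex->nStatus & BLOCK_HAVE_DATA) == (pindex->nTx == 0));
    // BLOCK_VALID_TRANSACTIONS is true iff nTx > 0
    assert(((pindex->nStatus & BLOCK_VALID_MASK) >= BLOCK_VALID_TRANSACTIONS) == (pindex->nTx > 0));
    // the nChainTx > 0 condition additionally requires checking all ancestors
}
```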
Adds a regression test for the wallet's ResendWalletTransactions function, which uses a new, hidden RPC command "resendwallettransactions."
I refactored main's Broadcast signal so it is passed the best-block time, which let me remove a global variable shared between main.cpp and the wallet (nTimeBestReceived).
I also manually tested the "rebroadcast unconfirmed every half hour or so" functionality by:
1. Running bitcoind -connect=0.0.0.0:8333
2. Creating a couple of send-to-self transactions
3. Connect to a peer using -addnode
4. Waited a while, monitoring debug.log, until I see:
```2015-03-23 18:48:10 ResendWalletTransactions: rebroadcast 2 unconfirmed transactions```
One last change: don't bother putting ResendWalletTransactions messages in debug.log unless unconfirmed transactions were actually rebroadcast.
- Ensures ports remain open and clients are responsive to IX requests.
- Completely 100% decentralized. This farms out the work of checking the masternode network to the masternode network itself: 1% of the network is deterministically selected to check another 1% of the network each block. It takes six separate checks to deactivate a node, making the system tamper-proof (see the sketch after this list).
- Nodes that fail enough PoSe checks to be deactivated are kept in the masternode list and will continue to be checked until the operator fixes them. However, they will not be paid while they're failing checks.
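A minimal sketch of the selection and deactivation rules described above; the names and rank source are illustrative, not the exact PoSe code:
```
// Hedged sketch: a node joins this block's checking quorum if its deterministic
// rank is within the lowest 1% of the list, and it is deactivated only after
// six separate failed checks.
#include <algorithm>

bool InCheckingQuorum(int nRank, int nMasternodeCount)
{
    return nRank >= 0 && nRank < std::max(1, nMasternodeCount / 100);
}

bool ShouldDeactivate(int nFailedChecks)
{
    return nFailedChecks >= 6;
}
```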
When re-indexing, there are a few cases where garbage data may be skipped in
the block files. In these cases, the indices are correctly written to the index
db, however the pointer to the next position for writing in the current block
file is calculated by adding the sizes of the valid blocks found.
As a result, when the re-index is finished, the index db is correct for all
existing blocks, but the next block will be written to an incorrect offset,
likely overwriting existing blocks.
Rather than using the sum of all valid blocks to determine the next write
position, use the end of the last block written to the file. Don't assume that
the current block is the last one in the file, since they may be read
out-of-order.
Rebased-From: bb6acff079
Github-Pull: #5864
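A minimal, self-contained sketch of the corrected offset calculation: take the end of the furthest block actually written rather than the sum of valid block sizes (illustrative):
```
// Hedged sketch: next write position = end of the last (furthest) block in the
// file, which is robust to gaps and out-of-order reads during reindex.
#include <algorithm>
#include <utility>
#include <vector>

unsigned int NextWritePos(const std::vector<std::pair<unsigned int, unsigned int> >& vBlocks /* (offset, size) */)
{
    unsigned int nEnd = 0;
    for (size_t i = 0; i < vBlocks.size(); ++i)
        nEnd = std::max(nEnd, vBlocks[i].first + vBlocks[i].second);
    return nEnd;
}
```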
The only time a client sends a "getaddr" message is when it establishes an
Outbound connection (see ProcessMessage() in src/main.cpp). Another bitcoin
client is expected to receive a "getaddr" message only on an Inbound
connection. Ignoring "getaddr" requests on Outbound connections resolves
potential privacy issues (and, as noted, such requests normally do not happen
anyway).
Rebased-From: dca799e1db
Github-Pull: #5442
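A minimal sketch of that check as a predicate; the function name is illustrative, fInbound is the codebase's own CNode flag:
```
// Hedged sketch: only honour "getaddr" on inbound connections; we never ask
// for addresses this way on connections we initiated ourselves.
bool ShouldAnswerGetAddr(const CNode* pfrom)
{
    return pfrom->fInbound;
}
```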
With headers-first we can compare against the best header timestamp, rather
than using checkpoints which require code updates to maintain.
Rebased-From: 85da07a5a0
Github-Pull: #5820
- call ProcessMessageMasternodePayments on ProcessBlock (lost after moving mnodeman functionality)
- do not process extra functionality messages (DS, IX, spork) on initial download / reindex
Normally Bitcoin Core does not display any network-originated strings without
sanitizing or hex-encoding them. This wasn't done for strCommand in many places.
This could be used to play havoc with a terminal displaying the logs,
especially with printtoconsole in use.
Thanks to Evil-Knievel for reporting this issue.
Conflicts:
src/main.cpp
Normally Bitcoin Core does not display any network-originated strings without
sanitizing or hex-encoding them. This wasn't done for strCommand in many places.
This could be used to play havoc with a terminal displaying the logs,
especially with printtoconsole in use.
Thanks to Evil-Knievel for reporting this issue.
Conflicts:
src/main.cpp
src/net.cpp
src/rpcserver.cpp
Rebased-From: 28d4cff0ed
Github-Pull: #5770
- Scan IX locks on new blocks to make sure no conflicting transactions are present
- Upon completion of an IX lock, check for conflicts and remove blocks if needed
Instead, create a separate function that applies the undo operation of a
CTxInUndo object onto a CCoinsViewCache. This method is used from
DisconnectBlock.
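A simplified sketch of such a helper, restoring one spent output from its undo data onto the cache; it follows the description above but omits the real function's consistency checks:
```
// Hedged sketch: re-create the output spent by a txin, using its undo data.
bool ApplyTxInUndo(const CTxInUndo& undo, CCoinsViewCache& view, const COutPoint& out)
{
    CCoinsModifier coins = view.ModifyCoins(out.hash);
    if (undo.nHeight != 0) {
        // undo data for the last spend of a tx also carries its metadata
        coins->fCoinBase = undo.fCoinBase;
        coins->nHeight = undo.nHeight;
        coins->nVersion = undo.nVersion;
    }
    if (coins->vout.size() <= out.n)
        coins->vout.resize(out.n + 1);
    coins->vout[out.n] = undo.txout;   // restore the spent output itself
    return true;
}
```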
This harmonizes the block fetch timeout with the existing ping timeout
and eliminates a guaranteed eventual failure from congestion collapse
for a network operating right at its limit.
It's unlikely that we wouldn't suffer other failures if we were really
anywhere near the network's limit, and a complete avoidance of congestion
collapse risk requires (I think) an exponential back-off. So this isn't
a major concern, but I think it's also useful for reducing the complexity
of understanding our timeouts.
Github-Pull: #5647
Rebased-From: 3ff735c99a
Note that this will also require translation changes in Transifex for the key
"A fee higher than %1 is considered an insanely high fee." which is now
"A fee higher than %1 is considered an absurdly high fee."
Signed-off-by: Daira Hopwood <daira@jacaranda.org>