dash/src/txmempool.h

// Copyright (c) 2009-2010 Satoshi Nakamoto
// Copyright (c) 2009-2013 The Bitcoin developers
// Distributed under the MIT/X11 software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.
#ifndef BITCOIN_TXMEMPOOL_H
#define BITCOIN_TXMEMPOOL_H
#include <list>
#include "coins.h"
#include "core.h"
#include "sync.h"
inline bool AllowFree(double dPriority)
{
    // Large (in bytes) low-priority (new, small-coin) transactions
    // need a fee.
    return dPriority > COIN * 144 / 250;
}
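
// Illustrative sketch, not part of the original header: callers compute
// dPriority as sum(input value in satoshis * input age in blocks) divided by
// the transaction size in bytes, so the COIN * 144 / 250 threshold corresponds
// to a one-coin input aged one day (144 blocks) in a 250-byte transaction.
// The per-input values below (nInputValue, nInputAge) are hypothetical:
//
//   double dPriority = 0;
//   BOOST_FOREACH(const CTxIn& txin, tx.vin)
//       dPriority += (double)nInputValue * nInputAge;
//   dPriority /= ::GetSerializeSize(tx, SER_NETWORK, PROTOCOL_VERSION);
//   bool fFree = AllowFree(dPriority);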
/** Fake height value used in CCoins to signify they are only in the memory pool (since 0.8) */
static const unsigned int MEMPOOL_HEIGHT = 0x7FFFFFFF;
/*
* CTxMemPool stores these:
*/
class CTxMemPoolEntry
{
private:
    CTransaction tx;
    int64_t nFee;         // Cached to avoid expensive parent-transaction lookups
    size_t nTxSize;       // ... and avoid recomputing tx size
    int64_t nTime;        // Local time when entering the mempool
    double dPriority;     // Priority when entering the mempool
    unsigned int nHeight; // Chain height when entering the mempool

public:
    CTxMemPoolEntry(const CTransaction& _tx, int64_t _nFee,
                    int64_t _nTime, double _dPriority, unsigned int _nHeight);
    CTxMemPoolEntry();
    CTxMemPoolEntry(const CTxMemPoolEntry& other);

    const CTransaction& GetTx() const { return this->tx; }
    double GetPriority(unsigned int currentHeight) const;
    int64_t GetFee() const { return nFee; }
    size_t GetTxSize() const { return nTxSize; }
    int64_t GetTime() const { return nTime; }
    unsigned int GetHeight() const { return nHeight; }
};
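
// Illustrative usage sketch, not part of the original header. Assuming a
// validated transaction `tx`, its fee `nFees`, and its starting priority
// `dPriority` are available in the caller's context:
//
//   CTxMemPoolEntry entry(tx, nFees, GetTime(), dPriority, chainActive.Height());
//   mempool.addUnchecked(tx.GetHash(), entry);
//
// GetPriority(nHeight) returns the cached priority adjusted for the coin age
// accumulated up to nHeight, so AllowFree() can be re-evaluated as the chain
// advances.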
class CMinerPolicyEstimator;
/*
* CTxMemPool stores valid-according-to-the-current-best-chain
* transactions that may be included in the next block.
*
* Transactions are added when they are seen on the network
* (or created by the local node), but not all transactions seen
* are added to the pool: if a new transaction double-spends
* an input of a transaction in the pool, it is dropped,
* as are non-standard transactions.
*/
class CTxMemPool
{
private:
    bool fSanityCheck; // Normally false, true if -checkmempool or -regtest
    unsigned int nTransactionsUpdated;
    CMinerPolicyEstimator* minerPolicyEstimator;
    CFeeRate minRelayFee; // Passed to constructor to avoid dependency on main
    uint64_t totalTxSize; // sum of all mempool tx' byte sizes
public:
    mutable CCriticalSection cs;
    std::map<uint256, CTxMemPoolEntry> mapTx;
    std::map<COutPoint, CInPoint> mapNextTx;
    std::map<uint256, std::pair<double, int64_t> > mapDeltas;
    CTxMemPool(const CFeeRate& _minRelayFee);
    ~CTxMemPool();

    /*
     * If sanity-checking is turned on, check makes sure the pool is
     * consistent (does not contain two transactions that spend the same inputs,
     * all inputs are in the mapNextTx array). If sanity-checking is turned off,
     * check does nothing.
     */
    void check(CCoinsViewCache *pcoins) const;
    void setSanityCheck(bool _fSanityCheck) { fSanityCheck = _fSanityCheck; }

    bool addUnchecked(const uint256& hash, const CTxMemPoolEntry &entry);
    void remove(const CTransaction &tx, std::list<CTransaction>& removed, bool fRecursive = false);
    void removeConflicts(const CTransaction &tx, std::list<CTransaction>& removed);
    void removeForBlock(const std::vector<CTransaction>& vtx, unsigned int nBlockHeight,
                        std::list<CTransaction>& conflicts);
    void clear();
    void queryHashes(std::vector<uint256>& vtxid);
    void pruneSpent(const uint256& hash, CCoins &coins);
    unsigned int GetTransactionsUpdated() const;
    void AddTransactionsUpdated(unsigned int n);

    /** Affect CreateNewBlock prioritisation of transactions */
    void PrioritiseTransaction(const uint256 hash, const std::string strHash, double dPriorityDelta, int64_t nFeeDelta);
    void ApplyDeltas(const uint256 hash, double &dPriorityDelta, int64_t &nFeeDelta);
    void ClearPrioritisation(const uint256 hash);

    unsigned long size()
    {
        LOCK(cs);
        return mapTx.size();
    }

    uint64_t GetTotalTxSize()
    {
        LOCK(cs);
        return totalTxSize;
    }

    bool exists(uint256 hash)
    {
        LOCK(cs);
        return (mapTx.count(hash) != 0);
    }

    bool lookup(uint256 hash, CTransaction& result) const;

    // Fee and priority estimates are gathered by the CMinerPolicyEstimator
    // from transactions as they confirm, and are saved to / restored from disk.

    // Estimate the fee rate a transaction needs to be likely to confirm
    // within the next nBlocks blocks.
    CFeeRate estimateFee(int nBlocks) const;

    // Estimate the priority a transaction needs to be likely to confirm
    // within the next nBlocks blocks.
    double estimatePriority(int nBlocks) const;

    // Write/Read fee estimates to/from disk
    bool WriteFeeEstimates(CAutoFile& fileout) const;
    bool ReadFeeEstimates(CAutoFile& filein);
};
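
// Illustrative usage sketch, not part of the original header: typical read
// access, e.g. from RPC code. `mempool` (the global pool) and `hash` (a txid)
// are assumed to exist in the caller's context:
//
//   CTransaction tx;
//   if (mempool.lookup(hash, tx)) {
//       // tx now holds a copy of the in-pool transaction
//   }
//   CFeeRate feeRate = mempool.estimateFee(2); // fee rate to confirm within ~2 blocks
//
// The inline helpers (size, exists, ...) take cs themselves; direct access to
// mapTx, mapNextTx or mapDeltas must be wrapped in LOCK(mempool.cs).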
/** CCoinsView that brings transactions from the memory pool into view.
    It does not check whether coins are spent by other memory pool transactions. */
class CCoinsViewMemPool : public CCoinsViewBacked
{
protected:
    CTxMemPool &mempool;

public:
    CCoinsViewMemPool(CCoinsView &baseIn, CTxMemPool &mempoolIn);
    bool GetCoins(const uint256 &txid, CCoins &coins) const;
    bool HaveCoins(const uint256 &txid) const;
};
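
// Illustrative usage sketch, not part of the original header: layering the
// mempool over the chainstate so that outputs created by unconfirmed parents
// are visible while validating a child transaction. `pcoinsTip`, `mempool`,
// and a prevout hash `hashPrev` are assumed to exist in the caller's context:
//
//   CCoinsViewMemPool viewMemPool(*pcoinsTip, mempool);
//   CCoinsViewCache view(viewMemPool);
//   bool fHaveInput = view.HaveCoins(hashPrev);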
#endif /* BITCOIN_TXMEMPOOL_H */