64c0800 Use logging in individual tests (John Newbery)
38ad281 Use logging in test_framework/comptool.py (John Newbery)
ff19073 Use logging in test_framework/blockstore.py (John Newbery)
2a9c7c7 Use logging in test_framework/util.py (John Newbery)
b0dec4a Remove manual debug settings in qa tests. (John Newbery)
af1363c Always enable debug log and microsecond logging for test nodes. (John Newbery)
6d0e325 Use logging in mininode.py (John Newbery)
553a976 Add logging to p2p-segwit.py (John Newbery)
0e6d23d Add logging to test_framework.py (John Newbery)
Tree-SHA512: 42ee2acbf444ec32d796f930f9f6e272da03c75e93d974a126d4ea9b2dbaa77cc57ab5e63ce3fd33d609049d884eb8d9f65272c08922d10f8db69d4a60ad05a3
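As a rough illustration of the direction these logging commits take, here is a minimal Python sketch of per-test logging; the logger name, format string, and messages are illustrative assumptions, not the framework's actual configuration:

    import logging

    # Hypothetical logger for an individual qa test script; the real framework
    # wires up its own handlers and also enables -debug and microsecond
    # timestamps on the test nodes themselves.
    log = logging.getLogger("TestFramework")
    log.setLevel(logging.DEBUG)

    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(
        fmt="%(asctime)s %(levelname)s %(message)s"))
    log.addHandler(handler)

    log.info("Mining 101 blocks on node 0")
    log.debug("Best block hash: %s", "00" * 32)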
This class groups transactions that have been confirmed in blocks into buckets, based on either their fee or their priority. Then, for each bucket, the class calculates what percentage of the transactions were confirmed within various numbers of blocks. It does this by keeping an exponentially decaying moving history, for each bucket and confirmation block count, of the percentage of transactions in that bucket that were confirmed within that number of blocks.
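A minimal Python sketch of that bookkeeping, assuming per-bucket decayed counters; the class name, field names, and the decay constant are illustrative assumptions, not the actual C++ implementation:

    DECAY = 0.998          # per-block decay factor (assumed value)
    MAX_CONFIRMS = 25      # track confirmation counts from 1 to 25 blocks

    class ConfirmStats:
        def __init__(self, bucket_boundaries):
            self.buckets = bucket_boundaries   # ascending fee-rate (or priority) boundaries
            n = len(bucket_boundaries)
            # conf_avg[b][c]: decayed count of bucket-b txs confirmed within c+1 blocks
            self.conf_avg = [[0.0] * MAX_CONFIRMS for _ in range(n)]
            # tx_avg[b]: decayed count of all txs that landed in bucket b
            self.tx_avg = [0.0] * n

        def bucket_index(self, feerate):
            for i, bound in enumerate(self.buckets):
                if feerate <= bound:
                    return i
            return len(self.buckets) - 1

        def record(self, feerate, blocks_to_confirm):
            # Called once per confirmed transaction as its block is processed.
            b = self.bucket_index(feerate)
            self.tx_avg[b] += 1
            for c in range(blocks_to_confirm - 1, MAX_CONFIRMS):
                self.conf_avg[b][c] += 1

        def decay(self):
            # Called once per new block so older data gradually loses weight.
            for b in range(len(self.buckets)):
                self.tx_avg[b] *= DECAY
                for c in range(MAX_CONFIRMS):
                    self.conf_avg[b][c] *= DECAY

        def pct_confirmed_within(self, b, blocks):
            # Fraction of bucket-b transactions confirmed within `blocks` blocks.
            if self.tx_avg[b] == 0:
                return 0.0
            return self.conf_avg[b][blocks - 1] / self.tx_avg[b]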
-Eliminate txs which didn't have all inputs available at entry from fee/pri calcs
-Add dynamic breakpoints and tracking of confirmation delays in mempool transactions
-Remove old CMinerPolicyEstimator and CBlockAverage code
-New smartfees.py
-Pass a flag to the estimation code, using IsInitialBlockDownload as a proxy for when we are still catching up, during which we shouldn't count how many blocks it takes for transactions to be included.
-Add a policyestimator unit test
New RPC methods: return an estimate of the fee (or priority) a
transaction needs to be likely to confirm in a given number of
blocks.
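For illustration, a hedged usage sketch from Python: the method names shown (estimatefee, estimatepriority) are the RPCs these estimates are exposed under, but the connection URL, credentials, and the printed interpretation are placeholders rather than documented output:

    from test_framework.authproxy import AuthServiceProxy  # qa framework RPC client

    # Placeholder connection details; a real qa test gets its nodes from the framework.
    node = AuthServiceProxy("http://user:pass@127.0.0.1:18332")

    feerate = node.estimatefee(3)        # fee per kB to likely confirm within 3 blocks
    priority = node.estimatepriority(3)  # priority to likely confirm within 3 blocks

    if feerate == -1:
        print("not enough data collected yet for an estimate")
    else:
        print("pay about %.8f BTC/kB to confirm within ~3 blocks" % feerate)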
Mike Hearn created the first version of this method for estimating fees.
It works as follows:
For transactions that took 1 to N (I picked N=25) blocks to confirm,
keep N buckets with at most 100 entries in each recording the
fees-per-kilobyte paid by those transactions.
(Separate buckets are kept for transactions that confirmed because
they are high-priority.)
The buckets are filled as blocks are found, and are saved/restored
in a new fee_estimates.dat file in the data directory.
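A minimal Python sketch of that bucket bookkeeping; the constants come from the description above, while the function names and the pickle-based save/load are illustrative stand-ins (the real fee_estimates.dat uses the node's own serialization format):

    import pickle
    from collections import deque

    N = 25                 # track confirmation times of 1..25 blocks
    MAX_PER_BUCKET = 100   # each bucket keeps at most 100 fee-per-kB samples

    # buckets[i] holds fee/kB samples for txs that took i+1 blocks to confirm.
    buckets = [deque(maxlen=MAX_PER_BUCKET) for _ in range(N)]

    def record_confirmed_tx(fee_per_kb, blocks_to_confirm):
        # Called for each confirmed transaction as new blocks are found.
        if 1 <= blocks_to_confirm <= N:
            buckets[blocks_to_confirm - 1].append(fee_per_kb)

    def save(path="fee_estimates.dat"):
        # Illustrative only: the real file is a custom serialization, not a pickle.
        with open(path, "wb") as f:
            pickle.dump([list(b) for b in buckets], f)

    def load(path="fee_estimates.dat"):
        global buckets
        with open(path, "rb") as f:
            buckets = [deque(b, maxlen=MAX_PER_BUCKET) for b in pickle.load(f)]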
A few variations on Mike's initial scheme:
To estimate the fee needed for a transaction to confirm in X blocks,
all of the samples in all of the buckets are used and a median of
all of the data is used to make the estimate. For example, imagine
25 buckets each containing the full 100 entries. Those 2,500 samples
are sorted, and the estimate of the fee needed to confirm in the very
next block is the 50'th-highest-fee-entry in that sorted list; the
estimate of the fee needed to confirm in the next two blocks is the
150'th-highest-fee-entry, etc.
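Continuing the sketch above, the pooled-median estimate could look like this; it reproduces the 50th/150th-entry arithmetic from the worked example and returns -1 when there is no data (the generalization to partially filled buckets is an assumption):

    def estimate_fee(confirm_target, buckets):
        # Pool every sample from every bucket and sort from highest fee to lowest.
        samples = sorted((fee for bucket in buckets for fee in bucket), reverse=True)
        if not samples:
            return -1  # no data to estimate from
        # With 25 buckets of 100 entries each (2,500 samples), the 1-block
        # estimate is the 50th-highest fee, the 2-block estimate the
        # 150th-highest, and so on: rank = 100*target - 50. Written as a
        # fraction of all samples so partially filled buckets still work.
        rank = int(len(samples) * (confirm_target - 0.5) / len(buckets)) - 1
        rank = max(0, min(rank, len(samples) - 1))
        return samples[rank]

With 25 full buckets, estimate_fee(1, buckets) returns the 50th-highest fee in the pooled list and estimate_fee(2, buckets) the 150th-highest, matching the example.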
That algorithm has the nice property that estimates of how much fee
you need to pay to get confirmed in block N will always be greater
than or equal to the estimate for block N+1. It would clearly be wrong
to say "pay 11 uBTC and you'll get confirmed in 3 blocks, but pay
12 uBTC and it will take LONGER".
A single block will not contribute more than 10 entries to any one
bucket, so a single miner and a large block cannot overwhelm
the estimates.
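A small extension of the earlier bucket sketch showing such a cap; where the counter gets reset is an assumption:

    MAX_PER_BLOCK = 10  # cap on entries any single block may add to one bucket

    # added_this_block[i] counts samples bucket i received from the block
    # currently being processed; reset it to zeros before each new block.
    added_this_block = [0] * N

    def record_confirmed_tx_capped(fee_per_kb, blocks_to_confirm):
        if not (1 <= blocks_to_confirm <= N):
            return
        i = blocks_to_confirm - 1
        if added_this_block[i] >= MAX_PER_BLOCK:
            return  # one miner with one large block cannot overwhelm the estimates
        added_this_block[i] += 1
        buckets[i].append(fee_per_kb)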