There are only a few uses of `insecure_random` outside the tests.
This PR replaces uses of insecure_random (and its accompanying global
state) in the core code with a FastRandomContext that is automatically
seeded on creation.
This is meant to be used for inner loops. The FastRandomContext
can live in the outer scope, or in the class itself, and rand32() is then
used inside the loop. Useful e.g. for pushing addresses in CNode, for fee
rounding, or for randomization in coin selection.
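For illustration, a minimal sketch of this pattern (the shuffle helper is
hypothetical, not code from this PR):

    #include <utility>
    #include <vector>
    #include "random.h" // FastRandomContext (Bitcoin Core header)

    void ShuffleSketch(std::vector<int>& items)
    {
        FastRandomContext rng; // automatically seeded on creation
        for (size_t i = items.size(); i > 1; --i) {
            // cheap per-iteration randomness; modulo bias is acceptable here
            size_t j = rng.rand32() % i;
            std::swap(items[i - 1], items[j]);
        }
    }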
As a context is created per purpose, this gets rid of the cross-thread
unprotected shared usage of a single set of globals, and should also
eliminate the potential race conditions.
- I'd say CTxMemPool::check is not called often enough to warrant using a
special fast random context, so this is switched to GetRand() (open for
discussion...)
- The use of `insecure_rand` in ConnectThroughProxy has been replaced by
an atomic integer counter. The only goal here is to have a different
credentials pair for each connection to go on a different Tor circuit,
it does not need to be random nor unpredictable.
- To avoid having a FastRandomContext on every CNode, the context is
passed into PushAddress as appropriate.
There remains an insecure_random for test usage in `test_random.h`.
Saves about 10% of application memory usage once the mempool warms up. Since the
mempool is DynamicUsage-regulated, this will translate to a larger mempool in
the same amount of space.
Map value type: eliminate the vin index; no users of the map need to know which
input of the transaction is spending the prevout.
Map key type: replace the COutPoint with a pointer to a COutPoint. A COutPoint
is 36 bytes, but each COutPoint is accessible from the same map entry's value.
A trivial DereferencingComparator functor allows indirect map keys, but the
resulting syntax is misleading: `map.find(&outpoint)`. Implement an indirectmap
that acts as a wrapper to a map that uses a DereferencingComparator, supporting
a syntax that accurately reflects the container's semantics: inserts and
iterators use pointers since they store pointers and need them to remain
constant and dereferenceable, but lookup functions take const references.
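A simplified sketch of the idea (not the PR's exact code):

    #include <map>
    #include <utility>

    // Compare the pointed-to keys, not the pointers themselves.
    template <class T>
    struct DereferencingComparator {
        bool operator()(const T a, const T b) const { return *a < *b; }
    };

    // Wrapper around a pointer-keyed map: inserts take pointers (the caller
    // keeps the pointed-to key alive and unchanged), lookups take references.
    template <class K, class V>
    class indirectmap_sketch {
        typedef std::map<const K*, V, DereferencingComparator<const K*> > base;
        base m;
    public:
        typedef typename base::iterator iterator;
        std::pair<iterator, bool> insert(const std::pair<const K*, V>& e) { return m.insert(e); }
        iterator find(const K& key) { return m.find(&key); }
        size_t count(const K& key) const { return m.count(&key); }
        size_t erase(const K& key) { return m.erase(&key); }
    };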
The ability to GETDATA a transaction which has not (yet) been relayed
is a privacy loss vector.
The use of the mempool for this was added as part of the mempool p2p
message and is only needed to fetch transactions returned by it.
b559914 Move bloom and feerate filtering to just prior to tx sending. (Gregory Maxwell)
4578215 Return mempool queries in dependency order (Pieter Wuille)
ed70683 Handle mempool requests in send loop, subject to trickle (Pieter Wuille)
dc13dcd Split up and optimize transaction and block inv queues (Pieter Wuille)
f2d3ba7 Eliminate TX trickle bypass, sort TX invs for privacy and priority. (Gregory Maxwell)
Previously Bitcoin would send 1/4 of transactions out to all peers
instantly. This causes high overhead because it makes >80% of
INVs size 1. Doing so harms privacy, because it limits the
amount of source obscurity a transaction can receive.
These randomized broadcasts also disobeyed transaction dependencies
and required use of the orphan pool. Because the orphan pool is
so small, this led to poor propagation for dependent transactions.
When the bypass wasn't in effect, transactions were sent in the
order they were received. This avoided creating orphans but
undermined privacy fairly significantly.
This commit:
Eliminates the bypass. The bypass is replaced by halving the
average delay for outbound peers.
Sorts candidate transactions for INV by their topological
depth, then by their feerate (then hash), removing the
information leakage and providing priority service to
higher fee transactions (see the comparator sketch after this list).
Limits the number of transactions sent in a single INV to
7 tx/sec (and twice that for outbound); this limits the
harm of low fee transaction floods and gives faster relay
service to higher fee transactions. The 7 sounds lower
than it really is because received advertisements need
not be sent, and because the aggregate rate is multiplied
by the number of peers.
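A rough sketch of that sort order (types and field names are illustrative;
the real code keys off mempool entry data):

    #include <stdint.h>
    #include <string>

    struct InvCandidate {
        uint64_t depth;      // topological depth: ancestors sort before descendants
        double feerate;      // fee per kB
        std::string hashkey; // tie-breaker
    };

    bool CompareInvCandidates(const InvCandidate& a, const InvCandidate& b)
    {
        if (a.depth != b.depth) return a.depth < b.depth;         // parents first
        if (a.feerate != b.feerate) return a.feerate > b.feerate; // higher feerate first
        return a.hashkey < b.hashkey;                             // stable tie-break
    }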
The "feefilter" p2p message is used to inform other nodes of your mempool min fee, which is the feerate that any new transaction must meet to be accepted into your mempool. This will allow them to filter invs to you according to this feerate.
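As a hedged sketch of how a peer's filter would be applied (names are
illustrative):

    #include <stddef.h>
    #include <stdint.h>

    struct PeerFeeFilter {
        int64_t min_fee_per_kb; // last value received from the peer via "feefilter" (satoshis/kB)
    };

    // Skip announcing a transaction whose feerate is below the peer's filter.
    bool ShouldAnnounceTx(const PeerFeeFilter& peer, int64_t tx_fee, size_t tx_size)
    {
        if (tx_size == 0) return false;
        int64_t feerate_per_kb = (tx_fee * 1000) / (int64_t)tx_size;
        return feerate_per_kb >= peer.min_fee_per_kb;
    }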
This implements caching of ancestor state to each mempool entry, similar to
descendant tracking, but also including caching sigops-with-ancestors (as that
metric will be helpful to future code that implements better transaction
selection in CreateNewBlock).
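Roughly, each entry carries cached totals like the following (field names are
illustrative):

    #include <stdint.h>

    // Totals over the entry itself plus all of its in-mempool ancestors,
    // updated incrementally as transactions enter and leave the mempool.
    struct AncestorStateSketch {
        uint64_t count_with_ancestors;   // number of transactions, including this one
        uint64_t size_with_ancestors;    // summed transaction sizes in bytes
        int64_t  fees_with_ancestors;    // summed fees in satoshis
        unsigned sigops_with_ancestors;  // summed legacy+P2SH sigop count
    };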
The work limit served to prevent the descendant walking algorithm from doing
too much work by marking the parent transaction as dirty. However to implement
ancestor tracking, it's not possible to similarly mark those descendant
transactions as dirty without having to calculate them to begin with.
This commit removes the work limit altogether. With appropriate
chain limits (-limitdescendantcount) the concern about doing too much
work inside this function should be mitigated.
SequenceLocks functions are used to evaluate sequence lock times or heights per BIP 68.
The majority of this code is copied from maaku in #6312
Further credit: btcdrak, sipa, NicolasDorier
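For reference, the nSequence encoding these functions evaluate (constants per
BIP 68; the decoding helper itself is only a sketch):

    #include <stdint.h>

    static const uint32_t SEQUENCE_LOCKTIME_DISABLE_FLAG = (1u << 31); // bit 31 set: no relative lock
    static const uint32_t SEQUENCE_LOCKTIME_TYPE_FLAG    = (1u << 22); // bit 22 set: time-based, else height-based
    static const uint32_t SEQUENCE_LOCKTIME_MASK         = 0x0000ffff; // low 16 bits hold the lock value
    static const int      SEQUENCE_LOCKTIME_GRANULARITY  = 9;          // time locks count 512-second (2^9) units

    // Returns false if the input carries no relative lock; otherwise reports
    // whether the lock is time-based and its value (seconds or blocks).
    bool DecodeSequenceSketch(uint32_t nSequence, bool& by_time, int64_t& value)
    {
        if (nSequence & SEQUENCE_LOCKTIME_DISABLE_FLAG) return false;
        by_time = (nSequence & SEQUENCE_LOCKTIME_TYPE_FLAG) != 0;
        value = nSequence & SEQUENCE_LOCKTIME_MASK;
        if (by_time) value <<= SEQUENCE_LOCKTIME_GRANULARITY;
        return true;
    }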
Redo the feerate index to be based on mining score, rather than fee.
Update mempool_packages.py to test prioritisetransaction's effect on
package scores.
The score index is meant to represent the order of priority for being included in a block for miners. Initially this is set to the transaction's modified (by any feeDelta) fee rate. Index improvements and unit tests by sdaftuar.
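A minimal sketch of the ordering key (illustrative; the real index is
maintained inside CTxMemPool):

    #include <stddef.h>
    #include <stdint.h>

    // Mining score: the fee after any prioritisetransaction feeDelta, per size.
    double MiningScoreSketch(int64_t fee, int64_t fee_delta, size_t tx_size)
    {
        return tx_size ? double(fee + fee_delta) / tx_size : 0.0;
    }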
Store sum of legacy and P2SH sig op counts. This is calculated in AcceptToMemoryPool and storing it saves redoing the expensive calculation in block template creation.
Compute the value of inputs that already are in the chain at time of mempool entry and only increase priority due to aging for those inputs. This effectively changes the CTxMemPoolEntry's GetPriority calculation from an upper bound to a lower bound.
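A sketch of the resulting lower-bound calculation (names are illustrative):

    #include <stddef.h>
    #include <stdint.h>

    // Only the value of inputs that were already in the chain when the
    // transaction entered the mempool keeps earning priority with age.
    double AgedPrioritySketch(double entry_priority,
                              unsigned entry_height, unsigned current_height,
                              int64_t in_chain_input_value, size_t mod_tx_size)
    {
        if (mod_tx_size == 0) return entry_priority;
        double delta = double(current_height - entry_height) * in_chain_input_value / mod_tx_size;
        return entry_priority + delta;
    }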
These are more useful fee and priority estimation functions. If there is no fee/pri high enough for the target you are aiming for, it will give you the estimate for the lowest target that you can reliably obtain. This is better than defaulting to the minimum. It will also pass back the target for which it returned an answer.
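A hedged usage sketch (the exact signature and header locations may differ):

    #include "amount.h"     // CFeeRate (assumed)
    #include "txmempool.h"  // CTxMemPool (assumed)

    // Ask for a fee to confirm within `target` blocks; if no estimate is
    // reliable at that target, the estimator answers for the lowest target
    // it can serve and reports that target back.
    CFeeRate FeeForTargetSketch(const CTxMemPool& pool, int target, int& answered_at)
    {
        answered_at = target;
        CFeeRate fee = pool.estimateSmartFee(target, &answered_at);
        // CFeeRate(0) means no reliable estimate was available at any target
        return fee;
    }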
After each transaction which is added to the mempool, we first call
Expire() to remove old transactions, then throw away the
lowest-feerate transactions.
After throwing away transactions by feerate, we set the minimum
relay fee to the maximum fee transaction-and-dependent-set we
removed, plus the default minimum relay fee.
After the next block is received, the minimum relay fee is allowed
to decrease exponentially. Its halflife defaults to 12 hours, but
is decreased to 6 hours if the mempool is smaller than half its
maximum size, and 3 hours if the mempool is smaller than a quarter
its maximum size.
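A sketch of that decay schedule (names are illustrative):

    #include <math.h>
    #include <stddef.h>
    #include <stdint.h>

    // The effective halflife shrinks when the mempool has plenty of free space.
    double DecayedMinFeeSketch(double min_fee_per_k, int64_t seconds_since_update,
                               size_t mempool_usage, size_t max_mempool)
    {
        double halflife = 12 * 60 * 60;                          // 12 hours by default
        if (mempool_usage < max_mempool / 4) halflife /= 4;      // 3 hours when under 1/4 full
        else if (mempool_usage < max_mempool / 2) halflife /= 2; // 6 hours when under half full
        return min_fee_per_k / pow(2.0, seconds_since_update / halflife);
    }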
The minimum -maxmempool size is 40*-limitdescendantsize, as it is
easy for an attacker to play games with the cheapest
-limitdescendantsize transactions. -maxmempool defaults to 300MB.
This disables high-priority transaction relay when the min relay
fee adjustment is >0 (i.e. when the mempool is full). When the relay
fee adjustment drops below the default minimum relay fee / 2 it is
set to 0 (re-enabling priority-based free relay).
(note the 9x multiplier on (void*)'s for CTxMemPool::DynamicMemoryUsage
was accidentally introduced in 5add7a7 but should have waited for this
commit which adds the extra index)
CalculateMemPoolAncestors was always looping over a transaction's inputs
to find in-mempool parents. When adding a new transaction, this is the
correct behavior, but when removing a transaction, we want to use the
ancestor set that would be calculated by walking mapLinks (which should
in general be the same set, except during a reorg when the mempool is
in an inconsistent state, and the mapLinks-based calculation would be the
correct one).
Associate with each CTxMemPoolEntry all the size/fees of descendant
mempool transactions. Sort mempool by max(feerate of entry, feerate
of descendants). Update statistics on-the-fly as transactions enter
or leave the mempool.
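The sort key is roughly the following (a sketch; the real comparator works on
CTxMemPoolEntry fields):

    #include <algorithm>
    #include <stddef.h>
    #include <stdint.h>

    // An entry's score is the larger of its own feerate and the combined
    // feerate of the entry plus all of its in-mempool descendants.
    double DescendantScoreSketch(int64_t fee, size_t size,
                                 int64_t fees_with_descendants, size_t size_with_descendants)
    {
        double own = size ? double(fee) / size : 0.0;
        double pkg = size_with_descendants ? double(fees_with_descendants) / size_with_descendants : 0.0;
        return std::max(own, pkg);
    }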
Also add ancestor and descendant limiting, so that transactions can
be rejected if the number or size of unconfirmed ancestors exceeds
a target, or if adding a transaction would cause some other mempool
entry to have too many (or too large) unconfirmed in-mempool
descendants.
This class groups transactions that have been confirmed in blocks into buckets, based on either their fee or their priority. Then for each bucket, the class calculates what percentage of the transactions were confirmed within various numbers of blocks. It does this by keeping an exponentially decaying moving history for each bucket and confirm block count of the percentage of transactions in that bucket that were confirmed within that number of blocks.
-Eliminate txs which didn't have all inputs available at entry from fee/pri calcs
-Add dynamic breakpoints and tracking of confirmation delays in mempool transactions
-Remove old CMinerPolicyEstimator and CBlockAverage code
-New smartfees.py
-Pass a flag to the estimation code, using IsInitialBlockDownload as a proxy for when we are still catching up and we shouldn't be counting how many blocks it takes for transactions to be included.
-Add a policyestimator unit test
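A sketch of the decaying-history update described above (names and the decay
constant are illustrative):

    #include <stddef.h>
    #include <vector>

    struct BucketHistorySketch {
        std::vector<double> confirmed_within; // index i: txs confirmed within i+1 blocks (decayed)
        double total;                         // all txs ever assigned to this bucket (decayed)
    };

    // Called once per block: scale the old counters down, then add the new data.
    void UpdateBucketSketch(BucketHistorySketch& b, const std::vector<double>& new_confirms,
                            double new_total, double decay /* e.g. 0.998 */)
    {
        for (size_t i = 0; i < b.confirmed_within.size() && i < new_confirms.size(); ++i)
            b.confirmed_within[i] = b.confirmed_within[i] * decay + new_confirms[i];
        b.total = b.total * decay + new_total;
    }

The fraction of a bucket confirmed within N blocks is then
confirmed_within[N-1] / total.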
This fixes a subtle bug involving block re-orgs and non-standard transactions.
Start with a block containing a non-standard transaction, and
one or more transactions spending it in the memory pool.
Then re-org away from that block to another chain that does
not contain the non-standard transaction.
Result before this fix: the dependent transactions get stuck
in the mempool without their parent, putting the mempool
in an inconsistent state.
Tested with a new unit test.
'Sane' was already defined by this code as:
fee.GetFeePerK() <= minRelayFee.GetFeePerK() * 10000
But sanity was only enforced for data loaded from disk.
Note that this is a pretty expansive definition of 'sane': a 10 BTC
fee still passes the test if it's on a 100kb transaction.
This prevents a single insane fee on the network from making us reject
our stored fee data at start. We still may reject valid saved fee
state if minRelayFee is changed between executions.
This also reduces the risk and limits the damage from a cascading
failure where one party pays a bunch of insane fees which causes
others to pay insane fees.
7c70438 Get rid of the dummy CCoinsViewCache constructor arg (Pieter Wuille)
ed27e53 Add coins_tests with a large randomized CCoinViewCache test. (Pieter Wuille)
058b08c Do not keep fully spent but unwritten CCoins entries cached. (Pieter Wuille)
c9d1a81 Get rid of CCoinsView's SetCoins and SetBestBlock. (Pieter Wuille)
f28aec0 Use ModifyCoins instead of mutable GetCoins. (Pieter Wuille)