We used to have a trickle node, a node which was chosen in each iteration of
the send loop that was privileged and allowed to send out queued up non-time
critical messages. Since the removal of the fixed sleeps in the network code,
this meant such broadcasts were sent out immediately and predictably, making
them attackable. This pull request replaces the 3 remaining trickle use cases
with random delays:
* Local address broadcast (while also removing the wiping of the seen filter)
* Address relay
* Inv relay (for transactions; blocks are always relayed immediately)
The code is based on older commits by Patrick Strateman.
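To make the mechanism concrete: instead of privileging one trickle node per
send-loop iteration, each queued broadcast is flushed after a randomly drawn
delay. A minimal sketch of the idea, assuming exponential sampling around a
target average interval (names and constants here are illustrative, not the
codebase's):

    #include <chrono>
    #include <random>

    // Draw the delay until the next flush of queued announcements from an
    // exponential distribution. The resulting send times form a Poisson
    // process: the average interval is preserved, but individual send
    // times are unpredictable to an attacker.
    std::chrono::microseconds NextSendDelay(double average_interval_seconds)
    {
        static std::mt19937_64 rng{std::random_device{}()};
        std::exponential_distribution<double> dist(1.0 / average_interval_seconds);
        return std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::duration<double>(dist(rng)));
    }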
Github-Pull: #7125
Rebased-From: 5400ef6bcb
ebb25f4 Limit setAskFor and retire requested entries only when a getdata returns. (Gregory Maxwell)
5029698 prevent peer flooding request queue for an inv (kazcw)
This replaces the use of inv messages to announce new blocks when a peer has
requested (via the new "sendheaders" message) that blocks be announced with
headers instead of invs.
Since headers-first was introduced, peers send getheaders messages in response
to an inv, which requires generating a block locator that is large compared to
the size of the header being requested, and requires an extra round-trip before
a reorg can be relayed. Save time by tracking headers that a peer is likely to
know about, and send a headers chain that would connect to a peer's known
headers, unless the chain would be too big, in which case we revert to sending
an inv instead.
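A simplified sketch of the resulting announcement decision, using hypothetical
types and an assumed cap on how many headers get pushed directly (both are
illustrative, not the codebase's):

    #include <cstddef>
    #include <string>
    #include <vector>

    // Hypothetical, simplified types for illustration only.
    struct BlockHeader { std::string hash; std::string prev_hash; };

    struct Peer {
        bool prefers_headers = false;   // set when the peer sends "sendheaders"
        std::string best_known_header;  // best header hash we believe the peer has
    };

    // Assumed cap on a direct headers announcement; beyond this the chain
    // is "too big" and we revert to an inv.
    constexpr std::size_t MAX_BLOCKS_TO_ANNOUNCE = 8;

    // True: push the headers directly (no getheaders round-trip needed).
    // False: fall back to announcing the tip via inv.
    bool ShouldAnnounceWithHeaders(const Peer& peer,
                                   const std::vector<BlockHeader>& to_announce)
    {
        return peer.prefers_headers &&
               !to_announce.empty() &&
               to_announce.size() <= MAX_BLOCKS_TO_ANNOUNCE &&
               to_announce.front().prev_hash == peer.best_known_header;
    }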
Based off of @sipa's commit to announce all blocks in a reorg via inv,
which has been squashed into this commit.
Rebased-by: Pieter Wuille
The setAskFor duplicate elimination was too eager and removed entries
when we still had no getdata response, allowing the peer to keep
INVing and not responding.
mapAlreadyAskedFor does not keep track of which peer has a request queued for a
particular tx. As a result, a peer can blind a node to a tx indefinitely by
sending many invs for the same tx, and then never replying to getdatas for it.
Each inv received will be placed 2 minutes farther back in mapAlreadyAskedFor,
so a short message containing 10 invs would render that tx unavailable for 20
minutes.
This is fixed by disallowing a peer from having more than one entry for a
particular inv in mapAlreadyAskedFor at a time.
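A minimal sketch of the combined behavior, with hypothetical simplified state
(a std::string txid stands in for the real inv types): the duplicate guard
from 5029698, plus retiring entries only when the response arrives, per
ebb25f4:

    #include <chrono>
    #include <map>
    #include <set>
    #include <string>

    using Clock = std::chrono::steady_clock;

    // Hypothetical per-peer request-tracking state.
    struct Peer {
        std::set<std::string> set_ask_for;              // txids already queued
        std::multimap<Clock::time_point, std::string>
            map_ask_for;                                // when to send getdata
    };

    // Queue a getdata for txid. A repeated inv for the same txid from the
    // same peer is now a no-op, so it can no longer push the request time
    // further and further into the future.
    void AskFor(Peer& peer, const std::string& txid, Clock::time_point when)
    {
        if (!peer.set_ask_for.insert(txid).second) return;  // duplicate inv
        peer.map_ask_for.emplace(when, txid);
    }

    // Retire the entry only once the getdata is actually answered.
    void OnTxReceived(Peer& peer, const std::string& txid)
    {
        peer.set_ask_for.erase(txid);
    }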
- Force AUTHCOOKIE size to be 32 bytes: This provides protection against
an attack where a process pretends to be Tor and uses the cookie
authentication method to nab arbitrary files such as the wallet (a sketch
of this check follows the list)
- torcontrol logging
- fix cookie auth
- add HASHEDPASSWORD auth, fix fd leak when fwrite() fails
- better error reporting when cookie file is not ok
- better init/shutdown flow
- stop advertising service when disconnected from tor control port
- COOKIE->SAFECOOKIE auth
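For the AUTHCOOKIE size check in particular, a minimal sketch (a hypothetical
helper, not the codebase's function): refusing any cookie file that is not
exactly 32 bytes means a fake Tor control port cannot point cookie
authentication at a large secret file and have its contents echoed back.

    #include <cstddef>
    #include <fstream>
    #include <iterator>
    #include <string>
    #include <vector>

    // Tor control-port cookies are exactly 32 bytes; anything else is
    // treated as an attack or misconfiguration and rejected.
    constexpr std::size_t TOR_COOKIE_SIZE = 32;

    bool ReadTorCookie(const std::string& path, std::vector<unsigned char>& out)
    {
        std::ifstream f(path, std::ios::binary);
        if (!f) return false;
        std::vector<unsigned char> data((std::istreambuf_iterator<char>(f)),
                                        std::istreambuf_iterator<char>());
        if (data.size() != TOR_COOKIE_SIZE) return false;  // wrong size: refuse
        out = std::move(data);
        return true;
    }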
* -maxuploadtarget can be set in MiB
* if <limit> - ( time-left-in-24h-cycle / 600 * MAX_BLOCK_SIZE ) has been reached, stop serving blocks older than one week and filtered blocks (see the sketch below)
* no further action once the limit has been reached; there is no guarantee that the target will not be surpassed
* add outbound limit information to the getnettotals RPC
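The check in the second bullet can be illustrated with a hypothetical
simplified function. For example, with a 5000 MiB target and 12 hours left in
the cycle, 43200 / 600 * 1 MB = 72 MB stays reserved for relaying new blocks:

    #include <cstdint>

    // 1 MB, the consensus block size limit at the time of 0.12.
    constexpr uint64_t MAX_BLOCK_SIZE = 1000000;

    // Keep enough of the daily budget in reserve that serving one new block
    // per ~600s for the rest of the 24h cycle cannot blow the target; once
    // the remaining buffer is used up, stop serving historical and filtered
    // blocks.
    bool HistoricalBlockServingLimitReached(uint64_t target_bytes,
                                            uint64_t sent_in_cycle_bytes,
                                            uint64_t seconds_left_in_cycle)
    {
        const uint64_t reserve = seconds_left_in_cycle / 600 * MAX_BLOCK_SIZE;
        const uint64_t buffer = target_bytes > reserve ? target_bytes - reserve : 0;
        return sent_in_cycle_bytes >= buffer;
    }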
7b79cbd limit total length of user agent comments (Pavol Rusnak)
557f8ea implement uacomment config parameter which can add comments to user agent as per BIP-0014 (Pavol Rusnak)
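For illustration, a hypothetical re-creation of the BIP-0014 formatting (the
real function's signature differs): comments are joined with "; " and appended
in parentheses, and the total length of the result is what 7b79cbd caps.

    #include <cstddef>
    #include <sstream>
    #include <string>
    #include <vector>

    // Hypothetical sketch: build a BIP-0014 user agent string such as
    // "/Satoshi:0.12.0(comment1; comment2)/".
    std::string FormatUserAgent(const std::string& name,
                                const std::string& version,
                                const std::vector<std::string>& comments)
    {
        std::ostringstream ss;
        ss << "/" << name << ":" << version;
        if (!comments.empty()) {
            ss << "(";
            for (std::size_t i = 0; i < comments.size(); ++i)
                ss << (i ? "; " : "") << comments[i];
            ss << ")";
        }
        ss << "/";
        return ss.str();
    }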
This sets aside a number of connection slots for whitelisted peers,
useful for ensuring your local users and miners can always get in,
even if your limit on inbound connections has already been reached.
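A sketch of the accept decision under such a scheme, with hypothetical names
and parameters (the actual option and accounting may differ):

    #include <cstddef>

    // Reserve `reserved_slots` of the inbound capacity for whitelisted
    // peers, so they can still connect when the node is otherwise full.
    bool MayAcceptInbound(bool peer_is_whitelisted,
                          std::size_t inbound_now,
                          std::size_t max_inbound,
                          std::size_t reserved_slots)
    {
        if (peer_is_whitelisted)
            return inbound_now < max_inbound;
        // Non-whitelisted peers may not eat into the reserved slots.
        return inbound_now + reserved_slots < max_inbound;
    }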
Use a probabilistic bloom filter to keep track of which addresses
we think we have given our peers, instead of a list.
This uses much less memory, at the cost of sometimes failing to
relay an address to a peer; in the worst case, when the bloom filter
is as full as it gets, the chance of a miss is 1-in-1,000.
Measured memory usage of a full mruset setAddrKnown: 650 Kbytes.
Constant memory usage of CRollingBloomFilter addrKnown: 37 Kbytes.
This will also help reduce heap fragmentation, because the 37K of
storage is allocated when a CNode is created (when a connection to a
peer is established), after which there is no further per-item memory
allocation.
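To make the rolling behavior concrete, here is a toy two-generation rolling
Bloom filter (CRollingBloomFilter itself is sized from an element count and
false-positive rate, but the forgetting works the same way): once enough new
entries arrive, the oldest generation is dropped wholesale, so memory use
stays constant no matter how many addresses pass through.

    #include <bitset>
    #include <cstddef>
    #include <functional>
    #include <string>

    class RollingBloom {
        static constexpr std::size_t BITS = 1 << 16;
        std::bitset<BITS> gen_[2];   // two generations of filter bits
        int cur_ = 0;                // generation receiving new inserts
        std::size_t inserted_ = 0;   // inserts into the current generation
        std::size_t capacity_;       // inserts per generation before rolling

        static std::size_t Hash(const std::string& key, std::size_t salt) {
            return std::hash<std::string>{}(key + char('a' + salt)) % BITS;
        }

    public:
        explicit RollingBloom(std::size_t capacity) : capacity_(capacity) {}

        void insert(const std::string& key) {
            if (++inserted_ > capacity_) {  // current generation is full:
                cur_ ^= 1;                  // recycle the older one
                gen_[cur_].reset();
                inserted_ = 1;
            }
            for (std::size_t s = 0; s < 3; ++s) gen_[cur_].set(Hash(key, s));
        }

        // May return a false positive (an address never inserted), which
        // corresponds to the rare failure to relay described above.
        bool contains(const std::string& key) const {
            for (int g = 0; g < 2; ++g) {
                bool all = true;
                for (std::size_t s = 0; s < 3; ++s)
                    all = all && gen_[g][Hash(key, s)];
                if (all) return true;
            }
            return false;
        }
    };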
I plan on testing by restarting a full node with an empty peers.dat,
running a while with -debug=addrman and -debug=net, and making sure
that the 'addr' message traffic out is reasonable.
(suggestions for better tests welcome)