1) "optimistic write": Push each message to kernel socket buffer immediately.
2) If there is write data at select time, that implies send() blocked
during optimistic write. Drain write queue, before receiving
any more messages.
This avoids needlessly queueing received data, if the remote peer
is not themselves receiving data.
Result: write buffer (and thus memory usage) is kept small, DoS
potential is slightly lower, and TCP flow control signalling is
properly utilized.
The kernel will queue data into the socket buffer, then signal the
remote peer to stop sending data, until we resume reading again.
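A minimal sketch of the pattern using plain POSIX sockets (Peer, PushMessage
and PrepareSelect are hypothetical stand-ins, not the actual CNode/net.cpp
code):

    // Hedged sketch of optimistic write + drain-before-read; illustrative only.
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <deque>
    #include <string>

    struct Peer {
        int fd;
        std::deque<std::string> sendQueue; // data the kernel did not accept immediately
    };

    // Optimistic write: try to push the message straight into the kernel buffer;
    // queue whatever did not fit.
    void PushMessage(Peer& peer, const std::string& msg)
    {
        ssize_t sent = 0;
        if (peer.sendQueue.empty())
            sent = send(peer.fd, msg.data(), msg.size(), MSG_DONTWAIT);
        if (sent < 0)
            sent = 0;
        if ((size_t)sent < msg.size())
            peer.sendQueue.push_back(msg.substr(sent)); // leftover goes to our write queue
    }

    // At select time: only ask for readability while our write queue is empty,
    // so we stop pulling in data while the remote side is backed up.
    void PrepareSelect(const Peer& peer, fd_set& readfds, fd_set& writefds)
    {
        if (peer.sendQueue.empty())
            FD_SET(peer.fd, &readfds);   // safe to receive more
        else
            FD_SET(peer.fd, &writefds);  // drain the backlog first
    }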
Replaces the CNode::vRecv buffer with a vector of CNetMessage objects. This simplifies
ProcessMessages() and eliminates several redundant data copies.
Overview:
* socket thread now parses incoming message datastream into
header/data components, as encapsulated by CNetMessage
* socket thread adds each CNetMessage to a vector inside CNode
* message thread (ProcessMessages) iterates through CNode's CNetMessage vector
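Roughly, the header/data split carried by each parsed message looks like this
(a simplified sketch with stand-in types; the real CNetMessage wraps
CDataStream and CMessageHeader):

    // Simplified stand-in for CNetMessage; field names are illustrative.
    #include <vector>

    struct MessageHeaderSketch {
        char pchMessageStart[4];   // network magic, must match exactly
        char pchCommand[12];       // e.g. "version", "block", "tx"
        unsigned int nMessageSize; // payload size, checked against MAX_SIZE
        unsigned int nChecksum;
    };

    struct NetMessageSketch {
        MessageHeaderSketch hdr;
        bool in_data = false;             // header fully received and deserialized?
        std::vector<unsigned char> vRecv; // payload buffer, resized once to nMessageSize
        unsigned int nDataPos = 0;        // bytes of payload received so far

        bool complete() const { return in_data && nDataPos == hdr.nMessageSize; }
    };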
Message parsing is made more strict:
* Socket is disconnected if a message is larger than MAX_SIZE
or if CMessageHeader deserialization fails (the latter may well be impossible).
Previously, the code would simply eat garbage data all day long.
* Socket is disconnected if we fail to find pchMessageStart.
We do not search through garbage to find pchMessageStart; each
message must begin precisely after the last message ends.
ProcessMessages() always processes a complete message, and is more efficient:
* buffer is always precisely sized, using CDataStream::resize(),
rather than progressively sized in 64k chunks. More efficient
for large messages like "block".
* whole-buffer memory copy eliminated (vRecv -> vMsg)
* other buffer-shifting memory copies eliminated (vRecv.insert, vRecv.erase)
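The message-thread side then reduces to walking the vector of complete
messages, roughly as follows (simplified sketch using the NetMessageSketch
stand-in from above; ProcessOneMessage is hypothetical):

    // Hand only complete messages to the handler; payloads are consumed in
    // place, with no whole-buffer copy into a separate vMsg.
    #include <vector>

    bool ProcessOneMessage(const NetMessageSketch& msg); // hypothetical handler

    void ProcessAllMessages(std::vector<NetMessageSketch>& vRecvMsg)
    {
        auto it = vRecvMsg.begin();
        while (it != vRecvMsg.end() && it->complete()) {
            ProcessOneMessage(*it);
            ++it;
        }
        vRecvMsg.erase(vRecvMsg.begin(), it); // drop everything that was handled
    }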
Note that the default value for fRelayTxes is false, meaning we
no longer relay tx inv messages before receiving the remote
peer's version message.
Client (SPV) mode was never fully implemented, and whatever part was already
working has likely not been tested (or even executed at all) for the past two
years. This removes it entirely.
If we want an SPV implementation, I think we should first get the block chain
data structures encapsulated in a class implementing a standard interface,
and then write an alternate implementation with SPV semantics.
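For illustration only, the kind of interface meant here might look like the
following (hypothetical; no such class exists in the source today):

    // Hypothetical interface both a full-validation and an SPV backend could implement.
    #include "uint256.h"

    class CBlockChainView
    {
    public:
        virtual ~CBlockChainView() {}
        virtual int GetHeight() const = 0;
        virtual bool HaveBlock(const uint256& hash) const = 0;
        // A full node would validate transactions here; an SPV node would
        // only track headers and proof-of-work.
        virtual bool ConnectBestBlock() = 0;
    };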
- "ThreadIRCSeed started" was not displayed, even if the thread ran
(although only for a short time as the "do we want this thread?"-checks
happen IN ThreadIRCSeed2())
- the patch ensures we always get that message
- add a "ThreadIRCSeed trying to connect..." message
- add missing "ThreadDumpAddress started" message
- fix #1560 by properly locking proxy-related data structures
- update GetProxy() and introduce GetNameProxy() to allow using a
thread-safe local copy of proxyInfo and nameproxyInfo
- update usage of GetProxy() all over the source to match the new
behaviour, as it now fills a full proxyType object
- rename GetNameProxy() into HaveNameProxy() to be clearer
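The locking pattern, sketched with std::mutex and stand-in types for brevity
(the real code uses proxyType with CCriticalSection and the LOCK macro):

    // Copy-under-lock sketch; illustrative only, not the netbase.cpp code.
    #include <mutex>
    #include <string>

    struct ProxySketch {
        std::string addr;
        bool IsValid() const { return !addr.empty(); }
    };

    static ProxySketch proxyInfo;
    static ProxySketch nameproxyInfo;
    static std::mutex cs_proxyInfos;

    // Callers receive a private copy and never touch the shared object unlocked.
    bool GetProxySketch(ProxySketch& out)
    {
        std::lock_guard<std::mutex> lock(cs_proxyInfos);
        if (!proxyInfo.IsValid())
            return false;
        out = proxyInfo; // thread-safe local copy
        return true;
    }

    bool HaveNameProxySketch()
    {
        std::lock_guard<std::mutex> lock(cs_proxyInfos);
        return nameproxyInfo.IsValid();
    }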
These commands are a leftover from send-to-IP transactions, which were
removed a long time ago.
Also removes CNode::mapRequests and CNode::PushRequests, as these were
only used for the mentioned commands.
- I checked every occurrence of strprintf() in the code and used %u where
unsigned vars are used
- the change to GetByte() was made, as ip is an unsigned char
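A small illustration of why the specifier matters (not code from the tree):

    // %d reinterprets large unsigned values as negative; %u prints them as-is.
    #include <cstdio>

    int main()
    {
        unsigned int n = 3000000000u;  // larger than INT_MAX
        std::printf("%d\n", n);        // wrong specifier: typically prints -1294967296
        std::printf("%u\n", n);        // correct: prints 3000000000
        unsigned char byte = 0xFF;     // the GetByte() case: ip bytes are unsigned char
        std::printf("%u\n", byte);     // prints 255
        return 0;
    }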
- ensure warnings always start with "Warning:" and that the first
character after ":" is uppercase
- ensure the first sentence in warnings ends with an "!"
- remove unneeded spaces from warning strings
- add missing warning string translation
- remove a "\n" and replace it with the untranslatable "<br><br>"
* Fix wrong thread name for wallet *relocking* thread
- Was named the unlocking thread
* Use consistent naming
Signed-off-by: Giel van Schijndel <me@mortis.eu>
NOTE: These thread names are visible in gdb when using 'info threads'.
Additionally both 'top' and 'ps' show these names *unless* told to
display the command-line instead of task name.
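On Linux the renaming boils down to a prctl() call, roughly as below (sketch;
other platforms need different calls, and names are truncated to 15
characters):

    // Sketch: let the current thread name itself so gdb/top/ps can show it.
    #include <sys/prctl.h>

    void RenameThreadSketch(const char* name)
    {
        prctl(PR_SET_NAME, name, 0, 0, 0);
    }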
Signed-off-by: Giel van Schijndel <me@mortis.eu>
Because new nodes pull from the first connected node, the load
balancing of the first connection is more important than it should
be. This change puts Pieter's seed first, because it's probably
the best maintained right now.
Bitcoin will not make an outbound connection to a network group
(/16 for IPv4) that it is already connected to. This means that
if an attacker wants good odds of capturing all of a node's outbound
connections he must have hosts in a large number of distinct
groups.
Previously both inbound and outbound connections were used to
feed this exclusion. The use of inbound connections, which can be
controlled by the attacker, actually has the potential to make
Sybil attacks _easier_: an attacker can start up hosts in groups
which house many honest nodes and make outbound connections to
the victim to exclude big swaths of honest nodes. Because the
attacker chooses to make the outbound connection he can always
beat out honest nodes for the consumption of inbound slots.
At _best_ the old behavior increases attacker costs by a single
group (e.g. one distinct group to use to fill up all your inbound
slots), but at worst it allows the attacker to select whole
networks you won't connect to.
This commit makes nodes use only outbound links to exclude
network groups for outbound connections. Fancier things could
be done, like weaker exclusion for inbound groups... but
simplicity is good and I don't believe more complexity is
currently needed.
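Sketched, the group exclusion check becomes something like this (simplified;
the real code works on CNode and per-address network groups, and the names
below are stand-ins):

    // Only groups reached via our own outbound connections are excluded when
    // picking a new outbound peer; inbound peers no longer influence the choice.
    #include <set>
    #include <string>
    #include <vector>

    struct PeerSketch {
        bool fInbound;
        std::string netGroup; // e.g. the /16 prefix for IPv4
    };

    bool IsCandidateGroupAllowed(const std::vector<PeerSketch>& connectedPeers,
                                 const std::string& candidateGroup)
    {
        std::set<std::string> excluded;
        for (const PeerSketch& peer : connectedPeers) {
            if (!peer.fInbound) // the old behavior also inserted inbound groups here
                excluded.insert(peer.netGroup);
        }
        return excluded.count(candidateGroup) == 0;
    }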