The new class is accessed via the Params() method and holds
most things that vary between main, test and regtest networks.
The regtest mode has two purposes. One is to run the
bitcoind/bitcoinj comparison tool, which compares two separate
implementations of the Bitcoin protocol looking for divergence.
The other is that, when run, you get a local node which can mine
a single block instantly, which is highly convenient for testing
apps during development, as there is no need to wait 10 minutes for
a block on the testnet.
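For illustration, a minimal sketch of the shape such a chain-parameters accessor might take; the ChainParams fields, the SelectParams() helper and the values below are simplified stand-ins, not the actual class:

    // Sketch only: simplified stand-in for a chain-parameters singleton.
    #include <string>

    enum Network { NETWORK_MAIN, NETWORK_TESTNET, NETWORK_REGTEST };

    struct ChainParams {
        Network network;
        unsigned short nDefaultPort;   // P2P port for this network
        std::string strDataDir;        // e.g. "", "testnet3", "regtest"
    };

    static const ChainParams mainParams    = { NETWORK_MAIN,    8333,  ""         };
    static const ChainParams testNetParams = { NETWORK_TESTNET, 18333, "testnet3" };
    static const ChainParams regTestParams = { NETWORK_REGTEST, 18444, "regtest"  };

    static const ChainParams* pCurrentParams = &mainParams;

    // Select the active network once at startup...
    void SelectParams(Network net) {
        if (net == NETWORK_TESTNET)      pCurrentParams = &testNetParams;
        else if (net == NETWORK_REGTEST) pCurrentParams = &regTestParams;
        else                             pCurrentParams = &mainParams;
    }

    // ...and read the per-network values through Params() everywhere else.
    const ChainParams& Params() { return *pCurrentParams; }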
- removes our NewThread() function and replaces remaining calls with
boost::thread using our TraceThread template (sketched below)
- removes the ExitThread() function
- fixes THREAD_PRIORITY_ABOVE_NORMAL for non-Windows OSes
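A rough sketch of what a TraceThread-style wrapper could look like; the logging calls and names here are illustrative, not the actual util code:

    #include <boost/thread.hpp>
    #include <cstdio>

    // Sketch: wrap a thread body so it logs start/exit and handles interruption.
    template <typename Callable>
    void TraceThread(const char* name, Callable func)
    {
        try {
            std::printf("%s thread start\n", name);
            func();
            std::printf("%s thread exit\n", name);
        } catch (const boost::thread_interrupted&) {
            std::printf("%s thread interrupted\n", name);
            throw;   // let boost::thread end the interrupted thread cleanly
        }
    }

    // Usage (illustrative):
    // boost::thread t(&TraceThread<void (*)()>, "dumpaddr", &ThreadDumpAddress);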
Added explicit include of main.h in init.cpp, changed include of init.h to include of main.h in net.cpp.
Added function registration for net.cpp in init.cpp's network initialization.
Removed protocol.cpp's dependency on main.h.
TODO: Remove main.h include in net.cpp.
Instead of killing a connection when the receive buffer overflows,
just temporarily halt receiving before that happens. Also, no
matter what, always allow at least one full message in the receive
buffer (otherwise blocks larger than the configured buffer size
would pause indefinitely).
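As a hedged illustration of that rule (not the actual select-loop code), the pause condition could look like this, where nQueuedBytes counts only complete, unprocessed messages:

    #include <cstddef>

    // Sketch: should we temporarily stop reading from this peer's socket?
    // nQueuedBytes - bytes of complete but unprocessed messages already buffered
    // nFloodLimit  - configured receive buffer limit
    bool ShouldPauseReceive(std::size_t nQueuedBytes, std::size_t nFloodLimit)
    {
        // Always allow at least one full message to be buffered; otherwise a
        // block larger than the limit could never be delivered and would stall.
        if (nQueuedBytes == 0)
            return false;
        return nQueuedBytes > nFloodLimit;
    }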
This introduces the concept of the 'sync node', which is the peer we
ask for missing blocks. In case the sync node goes away, a new one
will be selected.
For now, the heuristic is very simple, but it can easily be extended
later to add better policies.
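To make the idea concrete, a sketch of a similarly simple selection policy; the PeerInfo fields and the rule itself are illustrative placeholders, not the CNode interface or the actual heuristic:

    #include <cstddef>
    #include <vector>

    struct PeerInfo {
        bool fDisconnect;   // peer is being dropped
        bool fOneShot;      // short-lived connection (addr fetch only)
        bool fClient;       // peer does not serve full blocks
        int  id;
    };

    // Pick the first usable peer as the new sync node; the policy can be
    // refined later without changing the callers.
    int SelectSyncNode(const std::vector<PeerInfo>& peers)
    {
        for (std::size_t i = 0; i < peers.size(); ++i) {
            const PeerInfo& p = peers[i];
            if (!p.fDisconnect && !p.fOneShot && !p.fClient)
                return p.id;
        }
        return -1;          // no suitable peer available right now
    }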
It seems there were two mechanisms for assessing whether a CNode
was still in use: a refcount and a release timestamp. The latter
seems to have been there for a long time, as a safety mechanism.
However, this timer also keeps CNode objects alive for far longer
than necessary after disconnects, potentially opening up a DoS
window.
This commit removes the timestamp-based mechanism and replaces
it with an assert(nRefCount >= 0), to verify that the refcounting
is indeed working correctly.
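A minimal sketch (simplified, not the actual CNode code) of the refcount-only lifetime check the assert enforces:

    #include <cassert>

    // Sketch: node lifetime is tracked by a refcount alone, with no grace timer.
    class Node {
        int nRefCount;
    public:
        Node() : nRefCount(0) {}

        Node* AddRef()  { nRefCount++; return this; }
        void  Release() { nRefCount--; }

        // A node may only be deleted once nobody holds a reference; the assert
        // surfaces refcounting bugs instead of hiding them behind a timer.
        bool ReadyToDelete() const {
            assert(nRefCount >= 0);
            return nRefCount == 0;
        }
    };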
Two reasons for this change:
1. Need to always use boost::thread's sleep, even on Windows, so the
sleeps can be interrupted (prior code used Windows' built-in Sleep).
2. I always forgot what units the old Sleep took.
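The first point implies a single interruptible sleep wrapper with the unit in its name; a sketch of what that could look like (name illustrative):

    #include <boost/thread.hpp>
    #include <boost/date_time/posix_time/posix_time_types.hpp>

    // Sketch: one sleep for all platforms, denominated in milliseconds.
    // boost::this_thread::sleep is an interruption point, so a thread blocked
    // here wakes immediately when its boost::thread is interrupted.
    static inline void MilliSleep(long n)
    {
        boost::this_thread::sleep(boost::posix_time::milliseconds(n));
    }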
Create a boost::thread_group object at the qt/bitcoind main-loop level
that will hold pointers to all the main-loop threads.
This will replace the vnThreadsRunning[] array.
For testing, ported the BitcoinMiner threads to use their
own boost::thread_group.
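A sketch of the main-loop pattern this implies (the worker below is a placeholder, not a real thread function): one boost::thread_group owns every long-running thread, and shutdown becomes interrupt_all()/join_all() instead of polling counters:

    #include <boost/thread.hpp>
    #include <boost/date_time/posix_time/posix_time_types.hpp>

    // Placeholder worker; real code would pass ThreadSocketHandler, miners, etc.
    static void ThreadPlaceholder()
    {
        for (;;) {
            // interruption point: the loop ends when interrupt_all() is called
            boost::this_thread::sleep(boost::posix_time::milliseconds(100));
        }
    }

    int main()
    {
        boost::thread_group threadGroup;               // owns all main-loop threads
        threadGroup.create_thread(&ThreadPlaceholder); // start as many as needed
        threadGroup.create_thread(&ThreadPlaceholder);

        // ... run the application ...

        threadGroup.interrupt_all();                   // ask every thread to stop
        threadGroup.join_all();                        // wait for them all
        return 0;
    }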
This will result in re-requesting invs if we are under heavy inv
load; however, as long as we get no more than 16,000 invs in two
minutes, this should have no effect on runtime behavior.
* Change CNode::vRecvMsg to be a deque instead of a vector (less copying)
* Make sure to acquire cs_vRecvMsg in CNode::CloseSocketDisconnect (as it
may be called without that lock).
1) "optimistic write": Push each message to kernel socket buffer immediately.
2) If there is write data at select time, that implies send() blocked
during optimistic write. Drain write queue, before receiving
any more messages.
This avoids needlessly queueing received data, if the remote peer
is not themselves receiving data.
Result: write buffer (and thus memory usage) is kept small, DoS
potential is slightly lower, and TCP flow control signalling is
properly utilized.
The kernel will queue data into the socket buffer, then signal the
remote peer to stop sending data, until we resume reading again.
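A hedged sketch of that send policy on POSIX-style sockets (names and structure illustrative, not the actual ThreadSocketHandler code): push data at the kernel immediately, keep whatever did not fit in a per-peer queue, and while that queue is non-empty select the socket for writing instead of reading:

    #include <sys/socket.h>
    #include <deque>
    #include <vector>

    typedef std::vector<unsigned char> Bytes;

    // 1) Optimistic write: hand the message straight to the kernel buffer.
    //    Assumes the socket was opened non-blocking, so send() never blocks here.
    //    Anything the kernel does not accept is queued in sendQueue.
    void PushMessage(int sock, const Bytes& msg, std::deque<Bytes>& sendQueue)
    {
        if (msg.empty())
            return;
        if (sendQueue.empty()) {
            ssize_t n = send(sock, &msg[0], msg.size(), 0);
            if (n == (ssize_t)msg.size())
                return;                        // fully handed to the kernel
            size_t sent = n > 0 ? (size_t)n : 0;
            sendQueue.push_back(Bytes(msg.begin() + sent, msg.end()));
        } else {
            sendQueue.push_back(msg);          // earlier data is still pending
        }
    }

    // 2) At select time: drain pending writes before reading anything new, so
    //    the write buffer stays small and TCP flow control throttles the peer.
    bool WantWrite(const std::deque<Bytes>& sendQueue) { return !sendQueue.empty(); }
    bool WantRead (const std::deque<Bytes>& sendQueue) { return  sendQueue.empty(); }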
Replaces the CNode::vRecv buffer with a vector of CNetMessage objects. This
simplifies ProcessMessages() and eliminates several redundant data copies.
Overview:
* socket thread now parses incoming message datastream into
header/data components, as encapsulated by CNetMessage
* socket thread adds each CNetMessage to a vector inside CNode
* message thread (ProcessMessages) iterates through CNode's CNetMessage vector
Message parsing is made more strict:
* Socket is disconnected if the message is larger than MAX_SIZE
or if CMessageHeader deserialization fails (the latter is probably impossible?).
Previously, the code would simply eat garbage data all day long.
* Socket is disconnected if we fail to find pchMessageStart.
We do not search through garbage to find pchMessageStart; each
message must begin precisely where the last message ends.
ProcessMessages() always processes a complete message, and is more efficient:
* buffer is always precisely sized, using CDataStream::resize(),
rather than progressively sized in 64k chunks. More efficient
for large messages like "block".
* whole-buffer memory copy eliminated (vRecv -> vMsg)
* other buffer-shifting memory copies eliminated (vRecv.insert, vRecv.erase)
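As a rough illustration of the header/data split (simplified; the real CNetMessage holds CDataStream members and a deserialized CMessageHeader), each incoming message could be accumulated like this:

    #include <algorithm>
    #include <cstring>
    #include <vector>

    // Sketch: one incoming message, parsed as a fixed-size header followed by
    // a payload whose exact size comes from that header.
    struct NetMessage {
        std::vector<unsigned char> header;  // raw header bytes
        std::vector<unsigned char> data;    // payload, reserved to the exact size
        size_t nHeaderSize;
        size_t nDataSize;
        bool   fHeaderComplete;

        NetMessage(size_t headerSize)
            : nHeaderSize(headerSize), nDataSize(0), fHeaderComplete(false) {}

        bool Complete() const { return fHeaderComplete && data.size() == nDataSize; }

        // Consume bytes from the socket stream; returns how many were used, so
        // the socket thread can start the next NetMessage right after this one.
        size_t Read(const unsigned char* pch, size_t nBytes)
        {
            size_t used = 0;
            if (!fHeaderComplete) {
                size_t want = std::min(nBytes, nHeaderSize - header.size());
                header.insert(header.end(), pch, pch + want);
                used += want;
                if (header.size() == nHeaderSize) {
                    fHeaderComplete = true;
                    // Illustrative only: take the payload length from the raw
                    // header bytes; the real code deserializes a CMessageHeader
                    // and rejects anything larger than MAX_SIZE.
                    unsigned int nLen = 0;
                    memcpy(&nLen, &header[0], sizeof(nLen));
                    nDataSize = nLen;
                    data.reserve(nDataSize);  // sized once, not grown in 64k chunks
                }
            }
            if (fHeaderComplete && used < nBytes) {
                size_t want = std::min(nBytes - used, nDataSize - data.size());
                data.insert(data.end(), pch + used, pch + used + want);
                used += want;
            }
            return used;
        }
    };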
Note that the default value for fRelayTxes is false, meaning we
now no longer relay tx inv messages before receiving the remote
peer's version message.
Client (SPV) mode was never implemented entirely, and whatever part was already
working has likely not been tested (or even executed at all) for the past two
years. This removes it entirely.
If we want an SPV implementation, I think we should first get the block chain
data structures encapsulated in a class implementing a standard interface,
and then write an alternate implementation with SPV semantics.
- "ThreadIRCSeed started" was not displayed, even if the thread ran
(although only for a short time as the "do we want this thread?"-checks
happen IN ThreadIRCSeed2())
- the patch ensures we always get that message
- add a "ThreadIRCSeed trying to connect..." message
- add missing "ThreadDumpAddress started" message
- fix #1560 by properly locking proxy-related data structures
- update GetProxy() and introduce GetNameProxy() to be able to use a
thread-safe local copy of proxyInfo and nameproxyInfo (see the sketch below)
- update usage of GetProxy() all over the source to match the new
behaviour, as it now fills a full proxyType object
- rename GetNameProxy() to HaveNameProxy() to be clearer
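A sketch of the thread-safe accessor pattern described above, using a plain boost::mutex in place of the actual CCriticalSection and illustrative field names; the caller always works on a private copy taken under the lock:

    #include <boost/thread/mutex.hpp>
    #include <string>

    struct proxyType {
        std::string host;
        unsigned short port;
        bool fSet;
        proxyType() : port(0), fSet(false) {}
    };

    static boost::mutex cs_proxyInfo;
    static proxyType proxyInfo;       // shared proxy settings
    static proxyType nameProxyInfo;   // proxy used for name lookups

    // Fill a full proxyType object for the caller under the lock, so callers
    // only ever touch their own thread-safe local copy.
    bool GetProxy(proxyType& proxyInfoOut)
    {
        boost::mutex::scoped_lock lock(cs_proxyInfo);
        if (!proxyInfo.fSet)
            return false;
        proxyInfoOut = proxyInfo;
        return true;
    }

    // Renamed from GetNameProxy(): only reports whether a name proxy is set.
    bool HaveNameProxy()
    {
        boost::mutex::scoped_lock lock(cs_proxyInfo);
        return nameProxyInfo.fSet;
    }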