This was a leftover from the time when
peers.dat depended on BDB.
Other functions in db.cpp still depend on BerkeleyDB;
to be able to compile without BDB, this (small)
functionality needs to be moved to another file.
Use misc methods of avoiding unnecessary header includes.
Replace int typedefs with int##_t from stdint.h.
Replace PRI64[xdu] with PRI[xdu]64 from inttypes.h.
Normalize QT_VERSION ifs where possible.
Resolve some indirect dependencies as direct ones.
Remove extern declarations from .cpp files.
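As a rough illustration of the stdint.h/inttypes.h changes listed above (the variable names here are made up):

    /* Before (hypothetical): project-local typedefs and PRI64x-style macros.
       typedef long long int64;
       printf("height %"PRI64d"\n", nHeight);                               */

    // After: standard fixed-width types and standard format macros.
    #include <stdint.h>
    #define __STDC_FORMAT_MACROS
    #include <inttypes.h>
    #include <stdio.h>

    int main()
    {
        int64_t nHeight = 210000;   // was: int64 nHeight
        uint64_t nServices = 1;     // was: uint64 nServices
        printf("height %" PRId64 ", services %" PRIu64 "\n", nHeight, nServices);
        return 0;
    }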
INIT_PROTO_VERSION is the initial version; after a successful version/verack it is increased to the negotiated version.
MIN_PEER_PROTO_VERSION can be set to a different value to disconnect from peers older than a specified version.
The existing CNode::addrLocal member is exposed to the user
as an address string, similar to the existing "addr" field.
Instead of showing garbage or an empty string,
the field simply does not appear in the output if the local address is not yet known.
New RPC "ping" command to request ping.
Implemented "pong" message handler.
New "pingtime" field in getpeerinfo, to provide results to user.
New "pingwait" field, to show pings still in flight, to better see newly lagging peers.
This reduces a peer's ability to attack network resources by
using a full bloom filter, without reducing the usability
of bloom filters. It sets a default match-everything filter
for peers and generalizes a prior optimization to
cover more cases.
Added explicit include of main.h in init.cpp, changed include of init.h to include of main.h in net.cpp.
Added function registration for net.cpp in init.cpp's network initialization.
Removed protocol.cpp's dependency on main.h.
TODO: Remove main.h include in net.cpp.
This introduces the concept of the 'sync node', which is the node we
ask for missing blocks. If the sync node goes away, a new one
will be selected.
For now, the heuristic is very simple, but it can easily be extended
later to add better policies.
It seems there were two mechanisms for assessing whether a CNode
was still in use: a refcount and a release timestamp. The latter
seems to have been there for a long time, as a safety mechanism.
However, this timer also keeps CNode objects alive for far longer
than necessary after disconnects, potentially opening up a DoS
window.
This commit removes the timestamp-based mechanism and replaces
it with an assert(nRefCount >= 0), to verify that the refcounting
is indeed working correctly.
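A reduced sketch of what the refcount-only lifetime check looks like; the class here is a simplified stand-in for CNode:

    #include <cassert>

    // Simplified stand-in for the per-peer object; only the refcounting survives.
    class CNodeStub {
        int nRefCount;
    public:
        CNodeStub() : nRefCount(0) {}
        CNodeStub* AddRef() { nRefCount++; return this; }
        void Release()      { nRefCount--; }

        // With the release-timestamp mechanism gone, "still in use" reduces to
        // the refcount alone; the assert catches any over-release immediately.
        bool IsInUse() const { assert(nRefCount >= 0); return nRefCount > 0; }
    };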
Create a boost::thread_group object at the qt/bitcoind main-loop level
that will hold pointers to all the main-loop threads.
This will replace the vnThreadsRunning[] array.
For testing, ported the BitcoinMiner threads to use their
own boost::thread_group.
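A minimal sketch of the boost::thread_group pattern, with a placeholder worker standing in for the real main-loop threads:

    #include <boost/thread.hpp>

    // Placeholder worker; the real threads are the socket handler, message
    // handler, miner threads, etc.
    static void ThreadWorker()
    {
        while (true) {
            boost::this_thread::interruption_point();  // honor interrupt_all()
            boost::this_thread::sleep(boost::posix_time::milliseconds(100));
        }
    }

    int main()
    {
        boost::thread_group threadGroup;           // owns all main-loop threads
        threadGroup.create_thread(&ThreadWorker);  // replaces vnThreadsRunning[] bookkeeping
        threadGroup.create_thread(&ThreadWorker);

        // On shutdown: request interruption and wait for every thread to exit.
        threadGroup.interrupt_all();
        threadGroup.join_all();
        return 0;
    }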
This will result in re-requesting invs if we are under heavy inv
load; however, as long as we get no more than 16,000 invs in two
minutes, this should have no effect on runtime behavior.
There exists a per-message-processed send buffer overflow protection,
where processing is halted when the send buffer is larger than the
allowed maximum.
This protection does not apply to individual items, however, and
getdata has the potential for causing large amounts of data to be
sent. If several hundred blocks are requested in one getdata,
the send buffer can easily grow 50 megabytes above the send buffer
limit.
This commit breaks up the processing of getdata requests, remembering
them inside a CNode when too many are requested at once.
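A rough sketch of that deferral: the unanswered getdata items are queued on the peer, and the loop stops once the send buffer exceeds its limit, picking the remainder up on the next pass (names are modeled on the real code but simplified):

    #include <cstddef>
    #include <deque>

    // Simplified stand-ins; the real types are CInv and CNode.
    struct InvItem { int type; /* hash omitted */ };

    struct PeerStub {
        std::deque<InvItem> vRecvGetData;  // getdata items not yet answered
        size_t nSendSize;                  // current send buffer usage
        PeerStub() : nSendSize(0) {}
    };

    static const size_t SEND_BUFFER_LIMIT = 10 * 1000 * 1000;  // illustrative limit

    // Placeholder: serialize the requested block/tx into the peer's send buffer.
    static void PushItem(PeerStub& peer, const InvItem&) { peer.nSendSize += 1000000; }

    // Called from the message-processing loop. Instead of answering an entire
    // getdata at once, stop as soon as the send buffer is full; leftover items
    // stay in vRecvGetData and are retried on the next iteration.
    void ProcessGetData(PeerStub& peer)
    {
        while (!peer.vRecvGetData.empty() && peer.nSendSize < SEND_BUFFER_LIMIT) {
            PushItem(peer, peer.vRecvGetData.front());
            peer.vRecvGetData.pop_front();
        }
    }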
* Change CNode::vRecvMsg to be a deque instead of a vector (less copying)
* Make sure to acquire cs_vRecvMsg in CNode::CloseSocketDisconnect (as it
may be called without that lock).
1) "optimistic write": Push each message to kernel socket buffer immediately.
2) If there is write data at select time, that implies send() blocked
during the optimistic write. Drain the write queue before receiving
any more messages.
This avoids needlessly queueing received data if the remote peer
is not itself receiving data.
Result: write buffer (and thus memory usage) is kept small, DoS
potential is slightly lower, and TCP flow control signalling is
properly utilized.
The kernel will queue data into the socket buffer, then signal the
remote peer to stop sending data, until we resume reading again.
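A condensed, stand-alone sketch of the optimistic-write control flow under POSIX sockets; the real code operates on CDataStream buffers under cs_vSend, so this only shows the shape of the logic:

    #include <cerrno>
    #include <deque>
    #include <string>
    #include <sys/socket.h>

    // Simplified per-peer send queue of already-serialized messages.
    struct SendQueue {
        std::deque<std::string> messages;
        size_t nSendOffset;                // progress within messages.front()
        SendQueue() : nSendOffset(0) {}
    };

    // "Optimistic write": push queued data straight into the kernel socket
    // buffer. Returns true if the queue was fully drained. false means send()
    // could not take everything, so the caller should add the socket to the
    // select() write set and drain the queue before reading any more messages.
    bool TrySend(int sock, SendQueue& q)
    {
        while (!q.messages.empty()) {
            const std::string& msg = q.messages.front();
            ssize_t n = send(sock, msg.data() + q.nSendOffset,
                             msg.size() - q.nSendOffset, MSG_NOSIGNAL | MSG_DONTWAIT);
            if (n <= 0)
                return false;              // would block (EAGAIN) or error: stop for now
            q.nSendOffset += (size_t)n;
            if (q.nSendOffset < msg.size())
                return false;              // partial write: kernel buffer is full
            q.messages.pop_front();
            q.nSendOffset = 0;
        }
        return true;
    }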
Replaces CNode::vRecv buffer with a vector of CNetMessage's. This simplifies
ProcessMessages() and eliminates several redundant data copies.
Overview:
* socket thread now parses incoming message datastream into
header/data components, as encapsulated by CNetMessage
* socket thread adds each CNetMessage to a vector inside CNode
* message thread (ProcessMessages) iterates through CNode's CNetMessage vector
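A reduced sketch of the CNetMessage idea: the socket thread fills a fixed-size header first, then a payload buffer resized once to the announced length, and hands the message over only when complete() returns true. The field names follow the real class loosely; the offsets and constants are the standard protocol values:

    #include <algorithm>
    #include <cstring>
    #include <stdint.h>
    #include <vector>

    static const unsigned int HEADER_SIZE = 24;       // magic + command + length + checksum
    static const unsigned int MAX_SIZE = 0x02000000;  // maximum allowed message size

    // Reduced stand-in for CNetMessage: one parsed header, one exactly-sized payload.
    struct NetMessageStub {
        std::vector<unsigned char> hdr;   // raw header bytes
        std::vector<unsigned char> data;  // payload, resized once to the announced length
        unsigned int nDataPos;            // how much of the payload has arrived
        bool in_data;                     // header fully read, now reading payload
        NetMessageStub() : nDataPos(0), in_data(false) {}

        bool complete() const { return in_data && nDataPos == data.size(); }

        // Feed raw bytes from the socket; returns how many were consumed,
        // or -1 to signal that the peer should be disconnected.
        int Read(const unsigned char* pch, unsigned int nBytes)
        {
            if (!in_data) {
                unsigned int nCopy = std::min<unsigned int>(nBytes, HEADER_SIZE - hdr.size());
                hdr.insert(hdr.end(), pch, pch + nCopy);
                if (hdr.size() == HEADER_SIZE) {
                    uint32_t nLen;
                    std::memcpy(&nLen, &hdr[16], sizeof(nLen)); // payload length field
                    if (nLen > MAX_SIZE)
                        return -1;          // oversized message: disconnect
                    data.resize(nLen);      // buffer sized precisely, once
                    in_data = true;
                }
                return (int)nCopy;
            }
            unsigned int nCopy = std::min<unsigned int>(nBytes, data.size() - nDataPos);
            if (nCopy > 0)
                std::memcpy(&data[nDataPos], pch, nCopy);
            nDataPos += nCopy;
            return (int)nCopy;
        }
    };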
Message parsing is made more strict:
* Socket is disconnected if a message is larger than MAX_SIZE
or if CMessageHeader deserialization fails (the latter is impossible?).
Previously, the code would simply eat garbage data all day long.
* Socket is disconnected if we fail to find pchMessageStart.
We do not search through garbage to find pchMessageStart. Each
message must begin precisely after the last message ends.
ProcessMessages() always processes a complete message, and is more efficient:
* buffer is always precisely sized, using CDataStream::resize(),
rather than progressively sized in 64k chunks. More efficient
for large messages like "block".
* whole-buffer memory copy eliminated (vRecv -> vMsg)
* other buffer-shifting memory copies eliminated (vRecv.insert, vRecv.erase)
Note that the default value for fRelayTxes is false, meaning we
now no longer relay tx inv messages before receiving the remote
peer's version message.
Client (SPV) mode was never implemented entirely, and whatever part was already
working has likely not been tested (or even executed at all) for the past two
years. This removes it entirely.
If we want an SPV implementation, I think we should first get the block chain
data structures to be encapsulated in a class implementing a standard interface,
and then write an alternate implementation with SPV semantics.
* During block verification (when parallelism is requested), script
check actions are stored instead of being executed immediately.
* After every processed transaction, its signature actions are
pushed to a CScriptCheckQueue, which maintains a queue and some
synchronization mechanism.
* Two or more threads (if enabled) start processing elements from
this queue.
* When the block connection code is finished processing transactions,
it joins the worker pool until the queue is empty.
As cs_main is held the entire time, and all verification must be
finished before the block continues processing, this does not reach
the best possible performance. It is a less drastic change than
some more advanced mechanisms (like doing verification out-of-band
entirely, and rolling back blocks when a failure is detected).
The -par=N flag controls the number of threads (1-16). 0 means auto,
and is the default.
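A condensed sketch of the pattern under discussion, using standard threads in place of the real CScriptCheckQueue machinery: signature checks are collected instead of run inline, then split over the configured number of worker threads, with the block-connection thread joining in until everything is verified. The real implementation streams checks into the queue while transactions are still being processed; this simplified variant only runs them at the end.

    #include <atomic>
    #include <functional>
    #include <thread>
    #include <vector>

    // Deferred script checks collected during block connection; each returns
    // true if its signature verifies (stand-in for a vector of CScriptCheck).
    typedef std::function<bool()> ScriptCheck;

    bool RunChecksParallel(const std::vector<ScriptCheck>& checks, unsigned int nThreads)
    {
        std::atomic<bool> fAllOk(true);
        std::atomic<size_t> nNext(0);

        // Each worker repeatedly claims the next unchecked element.
        auto worker = [&]() {
            size_t i;
            while ((i = nNext.fetch_add(1)) < checks.size()) {
                if (!checks[i]())
                    fAllOk = false;   // remember the failure, keep draining
            }
        };

        std::vector<std::thread> threads;
        for (unsigned int t = 1; t < nThreads; ++t)
            threads.push_back(std::thread(worker));
        worker();                     // the block-connection thread joins the work too
        for (size_t t = 0; t < threads.size(); ++t)
            threads[t].join();
        return fAllOk;
    }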
These commands are a leftover from send-to-IP transactions, which were
removed a long time ago.
Also removes CNode::mapRequests and CNode::PushRequests, as these were
only used for the mentioned commands.
Prior to this change, each TX typically generated 3+ debug messages:
askfor tx 8644cc97480ba1537214 0
sending getdata: tx 8644cc97480ba1537214
askfor tx 8644cc97480ba1537214 1339640761000000
askfor tx 8644cc97480ba1537214 1339640881000000
CTxMemPool::accept() : accepted 8644cc9748 (poolsz 6857)
After this change, there is only one message for each valid TX received:
CTxMemPool::accept() : accepted 22a73c5d8c (poolsz 42)
and two messages for each orphan tx received:
ERROR: FetchInputs() : 673dc195aa mempool Tx prev not found 1e439346fc
stored orphan tx 673dc195aa (mapsz 19)
The -debugnet option, or its superset -debug, will restore the full debug
output.
Introduce a boolean variable for each "network" (ipv4, ipv6, tor, i2p),
and track whether we are likely to be able to connect to it. Addresses in
"addr" messages outside of our network get limited relaying and are not
stored in addrman.
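A minimal sketch of those per-network flags; the enum values mirror the networks listed above, while the helper names are illustrative:

    // One reachability flag per "network" class.
    enum Network {
        NET_UNROUTABLE = 0,
        NET_IPV4,
        NET_IPV6,
        NET_TOR,
        NET_I2P,
        NET_MAX
    };

    static bool vfReachable[NET_MAX] = { false };

    // Mark a network as reachable, e.g. once a local address in it is discovered.
    void SetReachable(Network net) { vfReachable[net] = true; }

    // Addresses from "addr" messages in unreachable networks get limited
    // relaying and are not stored in addrman.
    bool IsReachable(Network net)  { return vfReachable[net]; }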
Change internal HTTP JSON-RPC server from single-threaded to
thread-per-connection model. The IP filter list is applied prior to starting
the thread, which then processes the RPC.
A mutex covers the entire RPC operation, because not all RPC operations are
thread-safe.
[minor modifications by jgarzik, to make change upstream-ready]
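A skeletal sketch of the thread-per-connection shape with a single lock serializing the RPC work itself; the connection type and helpers here are placeholders, not the real HTTP handling:

    #include <boost/thread.hpp>

    static boost::mutex cs_rpc;  // not all RPC operations are thread-safe yet

    // Placeholders standing in for the real HTTP connection handling.
    struct Connection { /* socket, peer address, ... */ };
    static Connection* AcceptConnection()        { return 0; }     // accept() loop
    static bool ClientAllowed(const Connection&) { return true; }  // IP filter list
    static void ServiceRPC(Connection*)          {}                // parse + dispatch + reply

    static void ThreadRPCConnection(Connection* conn)
    {
        // One thread per connection, but the RPC itself runs under a global lock.
        boost::mutex::scoped_lock lock(cs_rpc);
        ServiceRPC(conn);
        delete conn;
    }

    void ThreadRPCServer()
    {
        for (;;) {
            Connection* conn = AcceptConnection();
            if (!conn)
                break;
            if (!ClientAllowed(*conn)) {   // filter applied before spawning the thread
                delete conn;
                continue;
            }
            boost::thread(ThreadRPCConnection, conn);  // detaches when the temporary dies
        }
    }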
-externalip=<ip> can be used to explicitly set the public IP address
of your node. -discover=0 can be used to disable the automatic public
IP discovery system.
This commit removes the dependency of serialize.h on PROTOCOL_VERSION,
and makes this parameter required instead of implicit. This is much saner,
as it makes it obvious which places are affected when a version number
changes.
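A toy illustration of the difference: the serialization version is an explicit argument at every call site rather than an implicit PROTOCOL_VERSION default (the type, field, and version cutoff here are made up):

    #include <stdint.h>
    #include <vector>

    // Made-up record with one field that only exists from a certain version on.
    struct Record {
        uint32_t nValue;
        uint32_t nExtra;   // only serialized for nVersion >= 70000 (illustrative cutoff)
    };

    // Before, nVersion could default to an implicit PROTOCOL_VERSION, hiding
    // which call sites are affected when that constant changes; now every
    // caller states the version it is serializing for.
    void SerializeRecord(std::vector<unsigned char>& out, const Record& rec, int nVersion)
    {
        const unsigned char* p = (const unsigned char*)&rec.nValue;
        out.insert(out.end(), p, p + sizeof(rec.nValue));
        if (nVersion >= 70000) {
            p = (const unsigned char*)&rec.nExtra;
            out.insert(out.end(), p, p + sizeof(rec.nExtra));
        }
    }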
Design goals:
* Only keep a limited number of addresses around, so that addr.dat does not grow without bound.
* Keep the address tables in-memory, and occasionally write the table to addr.dat.
* Make sure no (localized) attacker can fill the entire table with his nodes/addresses.
See comments in addrman.h for more detailed information.