Remove nType and nVersion as parameters to all serialization methods
and functions. There is only one place where they are read and have an impact
(in CAddress), and even there they do not impact any of the recursively
invoked serializers.
Instead, the few places that need nType or nVersion are changed to read
it directly from the stream object, through GetType() and GetVersion()
methods which are added to all stream classes.
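As a rough sketch of the new pattern (the member names and version threshold below are illustrative, not taken from the actual diff), a serializer that used to receive nType/nVersion as arguments now asks the stream:

template <typename Stream>
void Serialize(Stream& s) const
{
    // Before: void Serialize(Stream& s, int nType, int nVersion) const
    // Now the stream object carries its own type and version.
    if (s.GetType() & SER_DISK)            // serializing to disk?
        ::Serialize(s, nTime);             // nTime: illustrative member
    if (s.GetVersion() >= nMinVersion)     // nMinVersion: hypothetical threshold
        ::Serialize(s, nExtraField);       // nExtraField: illustrative member
}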
Given that the default GetSerializeSize implementations created by
ADD_SERIALIZE_METHODS already use CSizeComputer(), get rid
of the specialized GetSerializeSize methods everywhere, and just use
CSizeComputer. This removes a lot of code which isn't actually used
anywhere.
For CCompactSize and CVarInt this actually removes a more efficient
size computing algorithm, which is brought back in a later commit.
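To give an idea of the remaining mechanism (a sketch, assuming the CSizeComputer interface of the time: a (nType, nVersion) constructor, operator<< and size()):

template <typename T>
size_t GetSerializeSize(const T& obj, int nType, int nVersion)
{
    // CSizeComputer's write() only adds to a byte counter, so "serializing"
    // into it computes the serialized size without producing any output.
    CSizeComputer sc(nType, nVersion);
    sc << obj;
    return sc.size();
}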
The stream implementations had two cascading layers (the upper one
with operator<< and operator>>, and a lower one with read and write).
The lower layer's functions are never cascaded (nor should they be, as
they should only be used from the higher layer), so make them return
void instead.
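As a sketch of the resulting signatures (CDataStream shown as an example; other stream classes change the same way):

// Before (sketch): the return value existed only for cascading, and was never used.
//   CDataStream& read(char* pch, size_t nSize);
//   CDataStream& write(const char* pch, size_t nSize);

// After: cascading is offered only by the upper layer (operator>> / operator<<).
void read(char* pch, size_t nSize);
void write(const char* pch, size_t nSize);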
Implement `begin_ptr` and `end_ptr` in terms of C++11 code,
and add a comment that they are deprecated.
Follow-up to developer notes update in 654a211622.
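Roughly (a sketch; the const overloads are omitted), "in terms of C++11" means delegating to std::vector::data(), which is well-defined even for empty vectors:

/** Deprecated: use v.data() and v.data() + v.size() directly instead. */
template <typename V>
inline typename V::value_type* begin_ptr(V& v) { return v.data(); }
template <typename V>
inline typename V::value_type* end_ptr(V& v)   { return v.data() + v.size(); }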
There's only one case where a vector containing a fundamental type is
serialized all at once: unsigned char. Anything else would lead to
strange results.
Use a dummy argument to dispatch to the right overload in that case.
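The technique looks roughly like this (a sketch of the pattern; the helper names are close to, but not guaranteed to match, the ones in serialize.h):

// Byte vectors: the dummy third argument is an exact match for unsigned char,
// so this overload wins and the vector is written in a single call.
template <typename Stream, typename T, typename A>
void Serialize_impl(Stream& os, const std::vector<T, A>& v, const unsigned char&)
{
    WriteCompactSize(os, v.size());
    if (!v.empty())
        os.write((char*)&v[0], v.size());
}

// Everything else falls through to element-by-element serialization.
template <typename Stream, typename T, typename A, typename V>
void Serialize_impl(Stream& os, const std::vector<T, A>& v, const V&)
{
    WriteCompactSize(os, v.size());
    for (typename std::vector<T, A>::const_iterator it = v.begin(); it != v.end(); ++it)
        ::Serialize(os, *it);
}

template <typename Stream, typename T, typename A>
inline void Serialize(Stream& os, const std::vector<T, A>& v)
{
    Serialize_impl(os, v, T());    // T() is the dummy argument used for dispatch
}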
- it now takes over the passed file descriptor and closes it in the
destructor
- this fixes a leak in LoadExternalBlockFile(), where an exception could
cause the file not to be closed
- disallow copies (like recently added for CAutoFile)
- make nType and nVersion private
One might assume that CAutoFile would be ref-counted so that a copied object
would delay closing the underlying file until all copies have gone out of
scope. Since that's not the case with CAutoFile, explicitly disable copying.
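A sketch of how the copies can be disabled (pre-C++11 style shown; "= delete" achieves the same):

class CAutoFile
{
    // ...
private:
    // Disallow copies: two CAutoFile objects owning the same FILE* would
    // otherwise fclose() it twice.
    CAutoFile(const CAutoFile&);
    CAutoFile& operator=(const CAutoFile&);
};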
Thanks to Pieter Wuille for most of the work on this commit.
I did not fixup the overhaul commit, because a rebase conflicted
with "remove fields of ser_streamplaceholder".
I prefer not to risk making a mistake while resolving it.
The nType and nVersion fields of stream objects are never accessed
from outside the class (and perhaps not from the inside either, I haven't checked).
Thus there is no need to have them in a placeholder, whose only purpose is to
fill the "Stream" template parameter in the serialization implementation.
The implementation of each class' serialization/deserialization is no longer
passed within a macro. The implementation now lies within a template of the form:
template <typename T, typename Stream, typename Operation>
inline static size_t SerializationOp(T thisPtr, Stream& s, Operation ser_action, int nType, int nVersion) {
    size_t nSerSize = 0;
    /* CODE */
    return nSerSize;
}
In cases where the codepath should depend on whether or not we are just deserializing
(the old fGetSize, fWrite, fRead flags), an additional clause can be used:
bool fRead = boost::is_same<Operation, CSerActionUnserialize>();
The IMPLEMENT_SERIALIZE macro will now be a freestanding clause added within
the class body (similar to Qt's Q_OBJECT) to implement GetSerializeSize,
Serialize and Unserialize. These are now wrappers around
the "SerializationOp" template.
- ensure consistent usage in header files
- also add a blank line after the copyright header where missing
- also remove orphan newlines at the end of some files
Thus the read(...) and write(...) methods of all stream classes now have identical parameter lists.
This will bring these classes one step closer to a common interface.
Remove the 'state' and 'exceptmask' from serialize.h's stream implementations,
as well as related methods.
As exceptmask always included 'failbit', and setstate was always called with
bits = failbit, all it did was immediately raise an exception. Get rid of
those variables, and replace the setstate with direct exception throwing
(which also removes some dead code).
As a result, good() is never reached after a failure (there are only 2
calls, one of which is in tests), and can just be replaced by !eof().
fail(), clear(n) and exceptions() are just never called. Delete them.
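The shape of the change inside a read() implementation looks roughly like this (a simplified sketch, not the exact body; the message string is paraphrased):

// Before (sketch): exceptmask always contained failbit, so this setstate()
// immediately threw anyway:
//   if (nReadPos + nSize > vch.size())
//       setstate(std::ios::failbit, "CDataStream::read() : end of data");

// After: throw directly, with no state/exceptmask bookkeeping.
if (nReadPos + nSize > vch.size())
    throw std::ios_base::failure("CDataStream::read() : end of data");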
`&vch[vch.size()]` and even `&vch[0]` on vectors can cause assertion
errors with VC in debug mode. This is the problem mentioned in #4239.
The deeper problem with this is that we rely on undefined behavior.
- Add `begin_ptr` and `end_ptr` functions that get the beginning and end
pointer of vector in a reliable way that copes with empty vectors and
doesn't reference outside the vector
(see https://stackoverflow.com/questions/1339470/how-to-get-the-address-of-the-stdvector-buffer-start-most-elegantly/1339767#1339767).
- Add a convenience constructor to CFlatData that wraps a vector.
I added `begin_ptr` and `end_ptr` as separate functions as I imagine
they will be useful in more places.
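For reference, the helpers look roughly like this (non-const versions shown; close to the added code but paraphrased from memory):

/** Get begin pointer of vector (non-const version).
 *  Returns NULL for an empty vector instead of taking the address of a
 *  non-existent element 0. */
template <typename V>
inline typename V::value_type* begin_ptr(V& v)
{
    return v.empty() ? NULL : &v[0];
}

/** Get end pointer of vector (non-const version). */
template <typename V>
inline typename V::value_type* end_ptr(V& v)
{
    return v.empty() ? NULL : (&v[0] + v.size());
}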
Use miscellaneous methods of avoiding unnecessary header includes.
Replace int typedefs with int##_t from stdint.h.
Replace PRI64[xdu] with PRI[xdu]64 from inttypes.h.
Normalize QT_VERSION ifs where possible.
Resolve some indirect dependencies as direct ones.
Remove extern declarations from .cpp files.
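A couple of illustrative before/after lines for the mechanical parts (illustrative only, not quoted from the diff):

#include <inttypes.h>   // PRId64, PRIx64, ...
#include <stdint.h>     // int64_t, uint32_t, ...
#include <stdio.h>

int main()
{
    // Before: "typedef long long int64;" plus printf("%"PRI64d"\n", n);
    int64_t n = 42;                  // after: standard fixed-width type
    printf("%" PRId64 "\n", n);      // after: standard format macro
    return 0;
}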
Changed CDataStream::GetAndClear() to use the most obvious
get-and-clear approach instead of a tricky swap().
Added a unit test for CDataStream insert/erase/GetAndClear.
Note: GetAndClear() is not performance critical, it is used only
by the send-a-message-to-the-network code. Bug was not noticed
before now because the send-a-message code never erased from the
stream.
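The straightforward version is roughly (a sketch of the shape of the fix, not the exact code):

void GetAndClear(CSerializeData& d)
{
    // Append everything we hold to d, then empty ourselves; no swap() trickery.
    d.insert(d.end(), begin(), end());
    clear();
}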
This seems to cause problems on recent clang, and looks totally
redundant and unused.
The const_iterator version is identical to the vector::const_iterator
one (which is a typedef thereof). Marking it private (instead of
removing it) compiles fine, so this version is effectively unused anyway.
The lengths of vectors, maps, sets, etc. are serialized using
Write/ReadCompactSize -- which, unfortunately, do not use a
unique encoding.
So deserializing and then re-serializing a transaction (for example)
can give you different bits than you started with. That doesn't
cause any problems that we are aware of, but it is exactly the type
of subtle mismatch that can lead to exploits.
With this pull, reading a non-canonical CompactSize throws an
exception, which means nodes will ignore 'tx' or 'block' or
other messages that are not properly encoded.
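The shape of the added check is roughly the following (a sketch: read_u8/16/32/64 stand in for whatever little-endian readers the stream provides; the thresholds are the canonical bounds for each encoding form):

uint64_t nSizeRet = 0;
unsigned char chSize = read_u8(is);
if (chSize < 253) {
    nSizeRet = chSize;                       // 1-byte form
} else if (chSize == 253) {
    nSizeRet = read_u16(is);                 // 3-byte form
    if (nSizeRet < 253)
        throw std::ios_base::failure("non-canonical ReadCompactSize()");
} else if (chSize == 254) {
    nSizeRet = read_u32(is);                 // 5-byte form
    if (nSizeRet < 0x10000u)
        throw std::ios_base::failure("non-canonical ReadCompactSize()");
} else {
    nSizeRet = read_u64(is);                 // 9-byte form
    if (nSizeRet < 0x100000000ULL)
        throw std::ios_base::failure("non-canonical ReadCompactSize()");
}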
Please check my logic... but this change is safe with respect to
causing a network split. Old clients that receive
non-canonically-encoded transactions or blocks deserialize
them into CTransaction/CBlock structures in memory, and then
re-serialize them before relaying them to peers.
And please check my logic with respect to causing a blockchain
split: there are no CompactSize fields in the block header, so
the block hash is always canonical. The merkle root in the block
header is computed on a vector<CTransaction>, so
any non-canonical encoding of the transactions in 'tx' or 'block'
messages is erased as they are read into memory by old clients,
and does not affect the block hash. And, as noted above, old
clients re-serialize (with canonical encoding) 'tx' and 'block'
messages before relaying to peers.