Note that the default value for fRelayTxes is false, meaning we
now no longer relay tx inv messages before receiving the remote
peer's version message.
Fixes issue #2178: an attacker could penny-flood with invalid-signature
transactions to deduce which addresses belonged to your node.
I'm committing this early for code review; I still need to write up
a test plan.
Executive summary of fix: check all transactions received from the network
for penny-flood rate-limiting before adding to the memory pool. But do NOT
rate-limit transactions added to the memory pool:
- because of blockchain reorgs
- stored in the wallet and added at startup
- sent from the GUI or one of the send* RPC commands (CWallet::CommitTransaction)
The limit-free-transactions code really should be a method on CNode, with
counters per-peer. But that is a bigger change for another day.
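For illustration, the kind of gate described above could look roughly like the
following sketch (function and parameter names such as RateLimitFreeTx and
dLimitKB are made up here, not taken from the actual patch): low-fee
transactions from the network feed an exponentially decaying byte counter and
are rejected once it exceeds the configured rate, while exempt callers skip
the check entirely.

    #include <cmath>
    #include <cstddef>
    #include <cstdint>
    #include <ctime>
    #include <mutex>

    // Sketch only: rate-limit free transactions received from the network.
    bool RateLimitFreeTx(bool fLimitFree, bool fIsFromMe, size_t nTxSize,
                         double dLimitKB /* kilobytes of free tx per minute */)
    {
        if (!fLimitFree || fIsFromMe)
            return true;                      // reorg/wallet/RPC-sent txes are exempt

        static std::mutex cs;
        static double dFreeCount = 0.0;       // decaying count of free bytes accepted
        static int64_t nLastTime = 0;

        std::lock_guard<std::mutex> lock(cs);
        const int64_t nNow = std::time(nullptr);
        // exponential decay with a ~10-minute time constant
        dFreeCount *= std::pow(1.0 - 1.0 / 600.0, double(nNow - nLastTime));
        nLastTime = nNow;

        if (dFreeCount > dLimitKB * 10 * 1000)
            return false;                     // penny-flood: reject instead of pooling
        dFreeCount += double(nTxSize);
        return true;
    }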
- it was bad that quite a few messages just talked about "a database";
  a user should know whether we are talking about the wallet DB or the
  block/coin DB
- also adds a new init message for "Verifying block database integrity..."
- this pull adds an InitMessage() function to noui.cpp, which outputs init
  messages to debug.log (this allows removing some printf() calls from
  init.cpp)
- change InitMessage() in bitcoin.cpp to also write init messages to
  debug.log to ensure nothing is missing in the log because of the
  removal of printf() calls in init.cpp
Client (SPV) mode was never implemented entirely, and whatever part was
already working has likely not been tested (or even executed at all) for the
past two years. This removes it entirely.
If we want an SPV implementation, I think we should first get the block chain
data structures encapsulated in a class implementing a standard interface,
and then write an alternate implementation with SPV semantics.
- add qSort() for cachedAddressTable, as qLowerBound() and qUpperBound()
require the list to be in ascending order (see
http://harmattan-dev.nokia.com/docs/library/html/qt4/qtalgorithms.html#qLowerBound)
- add a new check in AddressTableModel::setData() to just return when no
  changes were made to a label or an address (prevents an entry duplication
  issue)
- remove "rec->label = value.toString();" from
AddressTableModel::setData() as the label gets updated by
AddressTablePriv::updateEntry() anyway (seems @sipa added this line via
1025440184 (L6R225))
- add another new check in AddressTableModel::setData() to just return if
  a duplicate address was found (prevents an address overwrite)
- add a new check to EditAddressDialog::setModel() to prevent setting an
invalid model
- re-work the switch-case statement in AddressTableModel::accept() to
  always break (as return gets called anyway) and order the list to match
  the enum definition
- make accept() in editaddressdialog.h a public slot, which it should be
- misc small coding style changes
Previously when a transaction was set to lock at a specific block the
calculation was reversed, returning a negative number. This broke the UI
and caused it to display %n in place of the actual number.
In addition the previous calculation would display "Open for 0 blocks"
when the block height was such that the next block created would
finalize the transaction. Inserted the word "more" and changed the
calculation so that the last message would be "Open for 1 more block" to
better match user expectations.
Since block validation happens in parallel, multiple threads may be
accessing the signature cache simultaneously. To prevent contention:
* Turn the signature cache lock into a shared mutex
* Make reading from the cache only acquire a shared lock
* Let block validations not store their results in the cache
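A minimal sketch of that reader/writer split (the int key is a stand-in for
the real (hash, signature, pubkey) entries; this is not the actual sigcache
code):

    #include <boost/thread/locks.hpp>
    #include <boost/thread/shared_mutex.hpp>
    #include <set>

    static std::set<int> setCache;              // stand-in for cached signature checks
    static boost::shared_mutex cs_sigcache;

    bool InSigCache(int entry)
    {
        boost::shared_lock<boost::shared_mutex> lock(cs_sigcache);  // shared: many readers
        return setCache.count(entry) != 0;
    }

    void AddToSigCache(int entry)
    {
        boost::unique_lock<boost::shared_mutex> lock(cs_sigcache);  // exclusive: one writer
        setCache.insert(entry);
    }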
* During block verification (when parallelism is requested), script
check actions are stored instead of being executed immediately.
* After every processed transaction, its signature actions are
  pushed to a CScriptCheckQueue, which maintains a queue and some
  synchronization mechanism.
* Two or more threads (if enabled) start processing elements from
  this queue.
* When the block connection code is finished processing transactions,
it joins the worker pool until the queue is empty.
As cs_main is held the entire time, and all verification must be
finished before the block continues processing, this does not reach
the best possible performance. It is a less drastic change than
some more advanced mechanisms (like doing verification out-of-band
entirely, and rolling back blocks when a failure is detected).
The -par=N flag controls the number of threads (1-16). 0 means auto,
and is the default.
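The queue/worker pattern described above can be sketched as follows (the real
CCheckQueue/CScriptCheck classes batch work and handle shutdown differently;
names here are illustrative):

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>

    class CCheckQueueSketch {
    public:
        // Block connection code queues one closure per script check.
        void Add(std::function<bool()> check) {
            std::unique_lock<std::mutex> lock(m);
            ++nPending;
            checks.push(std::move(check));
            cv.notify_one();
        }

        // Each -par worker thread runs this loop.
        void WorkerLoop() {
            for (;;) {
                std::function<bool()> check;
                {
                    std::unique_lock<std::mutex> lock(m);
                    cv.wait(lock, [&] { return fQuit || !checks.empty(); });
                    if (fQuit && checks.empty()) return;
                    check = std::move(checks.front());
                    checks.pop();
                }
                const bool ok = check();
                std::unique_lock<std::mutex> lock(m);
                if (!ok) fAllOk = false;
                if (--nPending == 0) cvDone.notify_all();
            }
        }

        // The master thread joins in until every queued check has completed.
        bool Wait() {
            for (;;) {
                std::function<bool()> check;
                {
                    std::unique_lock<std::mutex> lock(m);
                    if (checks.empty()) break;
                    check = std::move(checks.front());
                    checks.pop();
                }
                const bool ok = check();
                std::unique_lock<std::mutex> lock(m);
                if (!ok) fAllOk = false;
                --nPending;
            }
            std::unique_lock<std::mutex> lock(m);
            cvDone.wait(lock, [&] { return nPending == 0; });
            return fAllOk;            // (resetting state for the next block omitted)
        }

        void Quit() {
            std::unique_lock<std::mutex> lock(m);
            fQuit = true;
            cv.notify_all();
        }

    private:
        std::mutex m;
        std::condition_variable cv, cvDone;
        std::queue<std::function<bool()>> checks;
        int nPending = 0;
        bool fAllOk = true;
        bool fQuit = false;
    };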
- ensure we use strCaption for printf and fprintf, as before it could
  happen that an error message in debug.log had no "Error" (or whatever)
  prefix
- this prevents an interference with the IPC message queue (which is used
for URI processing) when running a testnet and mainnet instance in
parallel
- to check for testnet, I had to move the ParseParameters() call in
  main() to the topmost position
- a click on "Reset Options" sets all options to the default values by
removing all stored settings (QSettings), loading the defaults and
saving them as the new settings
- before the reset is executed the user is presented a confirmation dialog
- special casing was needed for StartAtStartup
- some users reported it as weird that the estimated block count could be
  lower than our own node's block number (which is indeed true and not good)
- this pull adds a new default behaviour, which displays our own block
  number as the estimated block number, if own >= est. block count
- the pull raises the space for peers' block counts in cPeerBlockCounts to 8
  to be more accurate
- also removes a redundant setNumBlocks() call in RPCConsole and moves the
  initialisation of numBlocksAtStartup into ClientModel, where it belongs
-checklevel gets a new meaning:
0: verify blocks can be read from disk (like before)
1: verify (contextless) block validity (like before)
2: verify undo files can be read and have good checksums
3: verify coin database is consistent with the last few blocks
(close to level 6 before)
4: verify all validity rules of the last few blocks
Level 3 is the new default, as it's reasonably fast. As level 3 and
4 are implemented using an in-memory rollback of the database, they
are limited to as many blocks as possible without exceeding the
limits set by -dbcache. The default of -dbcache=25 allows for some
150-200 blocks to be rolled back.
In case an error is found, the application quits with a message
instructing the user to restart with -reindex. Better instructions,
and automatic recovery (when possible) or automatic reindexing are
left as future work.
Initialize the OutputDebugStringF mutex and file pointer using
boost::call_once, to be thread-safe.
Make the return value of OutputDebugStringF really be the number of
characters written (*printf() semantics).
Declare the fReopenDebugLog flag volatile, since it is changed from
a signal handler.
And don't declare OutputDebugStringF() as inline.
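For reference, the call_once pattern for the lazy, thread-safe setup looks
roughly like this (names like DebugLogPrintf are placeholders; path handling
and the reopen logic are omitted):

    #include <boost/thread/mutex.hpp>
    #include <boost/thread/once.hpp>
    #include <cstdarg>
    #include <cstdio>

    static FILE* fileout = NULL;
    static boost::mutex* mutexDebugLog = NULL;
    static boost::once_flag debugLogInitFlag = BOOST_ONCE_INIT;

    static void DebugLogInit()
    {
        fileout = fopen("debug.log", "a");    // the real code builds the datadir path
        if (fileout) setbuf(fileout, NULL);   // unbuffered
        mutexDebugLog = new boost::mutex();
    }

    int DebugLogPrintf(const char* pszFormat, ...)
    {
        boost::call_once(debugLogInitFlag, DebugLogInit);
        if (!fileout) return 0;

        boost::mutex::scoped_lock lock(*mutexDebugLog);
        va_list arg_ptr;
        va_start(arg_ptr, pszFormat);
        int ret = vfprintf(fileout, pszFormat, arg_ptr);  // *printf() semantics:
        va_end(arg_ptr);                                  // return characters written
        return ret;
    }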
If the user was really after the fastest possible confirmation times
they would be manually setting a fee. In cases where the wallet builds
a transaction with a priority that is too low to qualify as free until
the next block, go ahead without a fee. Confirmation frequently takes
multiple blocks even when a minimum fee is provided.
This avoids a potential crash when trying to read the scriptPubKeys on
transactions where the first input IsMine but some of the rest are not,
when running listaddressgroupings.
When the coin database is out of date with the block database, the
best block in the block database is automatically switched to. This reconnection
process can take time, so allow it to be interrupted.
This also stops block connection as soon as shutdown is requested,
leading to a faster shutdown.
This problem is like earth (mostly harmless). After/during a
-reindex, it means the statistics about the last block file
reported in debug.log are always of blk00000.dat instead of the
last file. Apart from that, it means a few more database entries
need to be read when finding a file to append to the first time.
- even if we are allowed to fail pre-allocating, it's better to check
  for sufficient space before calling AllocateFileRange(), and if we
  are out of disk space, return with error()
- the above change allows us to remove the CheckDiskSpace() check
in CBlock::AcceptBlock()
- use it for displaying URI parsing warnings
- use it for displaying error and information in backup wallet function
(the information display is new and the error was a warning before)
- cleanup BitcoinGUI::incomingTransaction()
-- use message() + the information icon from message
-- comment out an unused parameter in the function definition and
declaration
-- move all pre-checks to the beginning of the function
In case a reorganisation fails, the internal state could become
inconsistent (memory only). Previously, a cache per block connect
or disconnect action was used, so blocks could not be applied in
a partial way. Extend this to a cache for the entire reorganisation,
making it entirely atomic. This also simplifies the code a bit.
- this allows setting up the tray icon before we have (and want) a tray icon
  menu
- should be of great use when we remove that splash screen
- fixes a small bug with the toggleHideAction icon, which is not only used
  with the tray icon but also with the Mac dock
- fix ThreadSafeMessageBox always displaying the error icon
- allow specifying MSG_ERROR / MSG_WARNING or MSG_INFORMATION without a
  custom caption / title
- allow specifying CClientUIInterface::ICON_ERROR / ICON_WARNING and
  ICON_INFORMATION (which is the default) as the message box icon
- remove CClientUIInterface::OK from ThreadSafeMessageBox calls, as
  the OK button will be set as the default if none is specified
- prepend "Bitcoin - " to used captions
- rename BitcoinGUI::error() -> BitcoinGUI::message() and add function
documentation
- change all style parameters and enum flags to unsigned
- update code to use that new API
- update Client- and WalletModel to use the new BitcoinGUI::message() and
  rename the classes' error() method into message()
- include the possibility to supply the wanted icon for messages from
Client- and WalletModel via "style" parameter
When a transaction A is in the memory pool, while a transaction B
(which shares an input with A) gets accepted into a block, A was
kept forever in the memory pool.
This commit adds a CTxMemPool::removeConflicts method, which
removes transactions that conflict with a given transaction, and
all their children.
This results in fewer transactions in the memory pool, and faster
construction of new blocks.
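Conceptually (simplified stand-in types, not the actual CTxMemPool code), the
conflict removal works like this:

    #include <cstdint>
    #include <map>
    #include <string>
    #include <tuple>
    #include <vector>

    struct OutPoint {                      // stand-in for COutPoint
        std::string txid;
        uint32_t n;
        bool operator<(const OutPoint& o) const {
            return std::tie(txid, n) < std::tie(o.txid, o.n);
        }
    };
    struct Tx {                            // stand-in for CTransaction
        std::string txid;
        std::vector<OutPoint> vin;         // outpoints being spent
        uint32_t nOutputs;                 // number of outputs created
    };

    struct MemPoolSketch {
        std::map<std::string, Tx> mapTx;            // txid -> tx
        std::map<OutPoint, std::string> mapNextTx;  // spent outpoint -> spending txid

        // Remove a tx and, recursively, every mempool descendant spending its outputs.
        void Remove(const std::string& txid) {
            auto it = mapTx.find(txid);
            if (it == mapTx.end()) return;
            const Tx tx = it->second;
            for (uint32_t n = 0; n < tx.nOutputs; n++) {
                auto spender = mapNextTx.find(OutPoint{txid, n});
                if (spender != mapNextTx.end()) Remove(spender->second);
            }
            for (const OutPoint& in : tx.vin) mapNextTx.erase(in);
            mapTx.erase(txid);
        }

        // When `confirmed` makes it into a block: evict any mempool tx spending
        // one of the same inputs, together with all of its children.
        void RemoveConflicts(const Tx& confirmed) {
            for (const OutPoint& in : confirmed.vin) {
                auto it = mapNextTx.find(in);
                if (it != mapNextTx.end() && it->second != confirmed.txid)
                    Remove(it->second);
            }
        }
    };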
- can be triggered by just adding -proxy=crashme with 0.7.1
- the crash occurred when AppInit2() was left with "return false;" after the
  first call to bitdb.open() (Step 6 in init)
- this is caused by GetDataDir() or .string() in CDBEnv::EnvShutdown()
called via the bitdb global destructor
- init fDbEnvInit and fMockDb to false in CDBEnv::CDBEnv()
These flags select features to be enabled/disabled during script
evaluation/checking, instead of several booleans passed along.
Currently these flags are defined:
* SCRIPT_VERIFY_P2SH: enable BIP16-style subscript evaluation
* SCRIPT_VERIFY_STRICTENC: enforce strict adherence to pubkey/sig encoding standards.
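Roughly, such a flag set replaces separate bool parameters like this (values
here illustrate the idea; see script.h for the authoritative definitions):

    enum {
        SCRIPT_VERIFY_NONE      = 0,
        SCRIPT_VERIFY_P2SH      = (1U << 0),  // evaluate BIP16 subscripts
        SCRIPT_VERIFY_STRICTENC = (1U << 1),  // require canonical pubkey/sig encodings
    };

    // before: VerifyScript(..., fValidatePayToScriptHash, fStrictEncodings, nHashType)
    // after:  VerifyScript(..., SCRIPT_VERIFY_P2SH | SCRIPT_VERIFY_STRICTENC, nHashType)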
o Remove unused Leave and GetLock functions
o Make Enter and TryEnter private.
o Simplify Enter and TryEnter.
boost::unique_lock doesn't really know whether the
mutex it wraps is locked or not when the defer_lock
option is used.
The boost::recursive_mutex does not expose this
information, so unique_lock only infers this
knowledge. When taking the lock is deferred, it
(randomly) assumes that the lock is not taken.
boost::unique_lock has the following definition:

    unique_lock(Mutex& m_, defer_lock_t):
        m(&m_), is_locked(false)
    {}

    bool owns_lock() const
    {
        return is_locked;
    }
Thus it is a mistake to check owns_lock() in Enter
and TryEnter - they will always return false.
- remove an unwanted ";" at the end of the ~CCoinsView() destructor
- in FindBlockPos() and FindUndoPos() only call fclose() if the file is open
- fix an error string in the CBlockUndo class
Add locking annotation macros that hook into the thread-safety analysis
feature in clang. These macros should primarily be used to
document which locks protect a given piece of data. Secondarily, they
can be used to document the set of held and excluded locks when
entering a function.
- remove pathEnv from CDBEnv, as this attribute is not needed
- change the path parameter in ::Open() to a reference
- make the nDbCache variable an unsigned integer
- remove a misplaced ";" behind ::IsMock()
- this allows the client to listen on addresses specified via -bind
  (e.g. 127.0.0.1), even when a network (IPv4 in that case) was blocked
  via e.g. -onlynet="Tor"
- introduce enum BindFlags to avoid passing multiple bools to Bind()
- make -bind help text clear we ALWAYS listen on the specified address
- remove an unused variable
- remove 2 unneeded IsLimited() checks before calling Bind(), which does
these checks anyway
- use case: specify -bind=127.0.0.1 -onlynet="Tor" to allow incoming
connections to a Tor hidden service, but still don't allow other IPv4
nodes to connect / get connected
As memset() can be optimized out by a compiler, it should not be used in
privacy/security relevant code parts. OpenSSL provides the safe
OPENSSL_cleanse() function in crypto.h, which does the job of cleansing
and overwriting data properly.
For details see: http://www.viva64.com/en/b/0178/
- change memset() to OPENSSL_cleanse() where appropriate
- change a hard-coded number from netbase.cpp into a sizeof()
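Example of the substitution (OPENSSL_cleanse() is declared in openssl/crypto.h;
the struct here is just a placeholder):

    #include <openssl/crypto.h>

    struct KeyMaterial {
        unsigned char vchSecret[32];
    };

    void WipeKey(KeyMaterial& key)
    {
        // memset(key.vchSecret, 0, sizeof(key.vchSecret));    // may be optimized away
        OPENSSL_cleanse(key.vchSecret, sizeof(key.vchSecret)); // guaranteed overwrite
    }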
Flushes the blktree/ and coins/ databases, and reindexes the
block chain files, as if their contents were loaded via -loadblock.
Based on earlier work by Jeff Garzik.
- ensure header inclusion guard is named after the header file
- add missing comments at the end of some inclusion guards
- add a small Qt5 compatibility fix in macdockiconhandler.h
Parse the HTTP request line in server mode with a dedicated function,
rather than reusing ReadHTTPStatus() from the client mode.
The following additional HTTP request validations are added, both in line with
existing HTTP client practice:
1) HTTP method must be GET or POST. Most clients use POST, some
use GET. Either way, this continues to work.
2) HTTP URI must start with "/" character.
Normal URI is "/" (a 1-char string), so this is fine.
ReadHTTPStatus() is currently overloaded: In client mode, it properly parses
and receives an HTTP status line. In server mode, it incorrectly parses the
HTTP request line as an HTTP status line.
This server mode bug has never mattered, because the RPC server never
cared about the URI (path) provided in the HTTP request. That will change in
the future, so go ahead and begin fixing the problem.
This patch is cosmetic, and should result in NO behavior changes.
Further renames:
ReadHTTPHeader -> ReadHTTPHeaders
ReadHTTP -> ReadHTTPMessage
The original test (checking whether the transaction occurs in the
txindex) is not usable anymore, as it will miss anything already
fully spent. However, as merkle transactions (and by extension,
wallet transactions) track which block they were last seen being
included in, we can use that to determine the need for
rebroadcasting.
- add setStatusTip() in addition to setTooltip() where it makes sense
- add only setStatusTip() if GUI element is only used in main- or tray menu
- add an event filter on our BitcoinGUI object to prevent garbled text
  on the status bar, which happens when we use it for e.g. displaying
  block-sync state and then a QEvent::StatusTip wants to write its own text to it
- remove a double translation of "Bitcoin client"
This is to support the signrawtransaction API call; given the public
keys involved in a multisig transaction, this gives back the redeemScript
needed to sign it.
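For a 2-of-3 multisig, the returned redeemScript is essentially the following
(hypothetical helper using the project's CScript; the key arguments are
serialized public keys):

    #include "script.h"   // CScript and opcodes (project header)
    #include <vector>

    CScript Make2of3RedeemScript(const std::vector<unsigned char>& key1,
                                 const std::vector<unsigned char>& key2,
                                 const std::vector<unsigned char>& key3)
    {
        CScript redeemScript;
        redeemScript << OP_2 << key1 << key2 << key3 << OP_3 << OP_CHECKMULTISIG;
        return redeemScript;   // the P2SH address commits to Hash160 of this script
    }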
signrawtransaction was unable to sign pay-to-script-hash inputs
when given the list of private keys to use. With this commit
you can provide the p2sh redemption script in the list of
inputs.
As the coin set data refers to the best block stored in the block
tree, flushing the coin set first can cause inconsistencies if
the process gets killed in between.
- "ThreadIRCSeed started" was not displayed, even if the thread ran
(although only for a short time as the "do we want this thread?"-checks
happen IN ThreadIRCSeed2())
- the patch ensures we always get that message
- add a "ThreadIRCSeed trying to connect..." message
- add missing "ThreadDumpAddress started" message
- instead of "return false;" use "return QDialog::eventFilter(object,
event);" to harmonize this event filter with our default behaviour
- remove orphan spaces found while editing the files
Implements #1948
- Add macro `CLIENT_VERSION_IS_RELEASE` to clientversion.h
- When running a prerelease (the above macro is `false`):
- In UI, show an orange warning bar at the top. This will be used for other
warnings (and alerts) as well, instead of the status bar.
- For `bitcoind`, show the warning in the "errors" field in `getinfo`
response.
CreateNewBlock was reading pindexBest at the start, before taking the lock,
so it was possible to have the block content not match the prev header,
and this can also trigger a newly added assert in ConnectBlock.
I noticed this during a code review after twobitcoins reported that ab91bf39
(BIP30 for all blocks) could cause a null dereference on a modified node
that mined during the IBD, or on testnet when it reached heights 91842 and
91880 due to CreateNewBlock calling ConnectBlock with pindex->phashBlock NULL.
- remove uiInterface.InitMessage() calls from ThreadImport(), as Qt
  doesn't like them getting called outside of its main thread and because
  the thread will continue to run after the GUI has loaded
With a change of libs, and specifying NATIVE_WINDOWS as TARGET_OS, it should
compile libleveldb.a and libmemenv.a just fine; it did for me and Diapolo when
testing.
Split off CBlockTreeDB and CCoinsViewDB into txdb-*.{cpp,h} files,
implemented by either LevelDB or BDB.
Based on code from earlier commits by Mike Hearn in his leveldb
branch.
Given that the block tree database (chain.dat) and the active chain
database (coins.dat) are entirely separate now, it becomes legal to
swap one with another instance without affecting the other.
This commit introduces a check in the startup code that detects the
presence of a better chain in chain.dat that has not been activated
yet, and does so efficiently (in batch, while reusing the blk???.dat
files).
To prevent excessive copying of CCoins in and out of the CCoinsView
implementations, introduce a GetCoins() function in CCoinsViewCache
which returns a direct reference. The block validation and connection
logic is updated to require caching CCoinsViews, and exploits the
GetCoins() function heavily.
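In other words (signatures approximate, using the project's uint256/CCoins
types):

    class CCoinsViewCache /* : public CCoinsView */ {
    public:
        // base class (copies out):  bool GetCoins(const uint256& txid, CCoins& coins);
        // cache-only overload: fetch into the cache if needed, return a direct
        // reference so callers can read and modify the entry without copying.
        CCoins& GetCoins(const uint256& txid);
    };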
Use CBlock's vMerkleTree to cache transaction hashes, and pass them
along as an argument in more function calls. During initial block download,
this results in every transaction's hash being computed only once.
During the initial block download (or -loadblock), delay connection
of new blocks a bit, and perform them in a single action. This reduces
the load on the database engine, as subsequent blocks often update an
earlier block's transaction already.
This switches bitcoin's transaction/block verification logic to use a
"coin database", which contains all unredeemed transaction output scripts,
amounts and heights.
The name ultraprune comes from the fact that instead of a full transaction
index, we only (need to) keep an index with unspent outputs. For now, the
blocks themselves are kept as usual, although they are only necessary for
serving, rescanning and reorganizing.
The basic data structures are CCoins (representing the coins of a single
transaction), and CCoinsView (representing a state of the coins database).
There are several implementations for CCoinsView. A dummy, one backed by
the coins database (coins.dat), one backed by the memory pool, and one
that adds a cache on top of it. FetchInputs, ConnectInputs, ConnectBlock,
DisconnectBlock, ... now operate on a generic CCoinsView.
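The rough shape of that abstraction (method list abridged; the real interface
also covers the best block pointer and batch writes, and uses the project's
uint256/CCoins types):

    class CCoinsView {
    public:
        virtual bool GetCoins(const uint256& txid, CCoins& coins) = 0;
        virtual bool SetCoins(const uint256& txid, const CCoins& coins) = 0;
        virtual bool HaveCoins(const uint256& txid) = 0;
        virtual ~CCoinsView() {}
    };

    // Implementations, as listed above: a dummy view, one backed by coins.dat,
    // one backed by the memory pool, and a cache layered on top of another view.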
The block switching logic now builds a single cached CCoinsView with
changes to be committed to the database before any changes are made.
This means no uncommitted changes are ever read from the database, and
should ease the transition to another database layer which does not
support transactions (but does support atomic writes), like LevelDB.
For the getrawtransaction() RPC call, access to a txid-to-disk index
would be preferable. As this index is not necessary or even useful
for any other part of the implementation, it is not provided. Instead,
getrawtransaction() uses the coin database to find the block height,
and then scans that block to find the requested transaction. This is
slow, but should suffice for debug purposes.
Introduce an AllocateFileRange() function in util, which wipes or
at least allocates a given range of a file. It can be overridden
by more efficient OS-dependent versions if necessary.
Block and undo files are now allocated in chunks of 16 and 1 MiB,
respectively.
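The generic fallback amounts to overwriting the requested range with zeroed
chunks (sketch; OS-specific versions can instead ask the kernel to pre-allocate):

    #include <cstdio>

    void AllocateFileRangeSketch(FILE* file, unsigned int offset, unsigned int length)
    {
        static const char buf[65536] = {};
        fseek(file, offset, SEEK_SET);
        while (length > 0) {
            unsigned int now = length < sizeof(buf) ? length : (unsigned int)sizeof(buf);
            fwrite(buf, 1, now, file);
            length -= now;
        }
    }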
Change the block storage layer again, this time storing blocks across
multiple files, tracked by txindex.dat database entries. The file
format is exactly the same as the earlier blk00001.dat, but with
smaller files (128 MiB for now).
The database entries track how many bytes each block file already
uses, how many blocks are in it, which range of heights is present
and which range of dates.
The CTxUndo class encapsulates data necessary to undo the effects of
a transaction on the txout set, namely the previous outputs consumed
by it (script + amount), and potentially transaction meta-data when
it is spent entirely.
The CCoins class represents a pruned set of transaction outputs from
a given transaction. It only retains information about its height in
the block chain, whether it was a coinbase transaction, and its
unspent outputs (script + amount).
It has a custom serializer that has very low redundancy.
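At field level, a CCoins entry retains approximately the following (the custom
serializer additionally bit-packs which outputs are still unspent; CTxOut is
the project's output type):

    class CCoinsSketch {
    public:
        bool fCoinBase;            // whether the creating transaction was a coinbase
        std::vector<CTxOut> vout;  // outputs; spent ones are stored as empty
        int nHeight;               // height of the block containing the transaction
        int nVersion;              // transaction version
    };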
Special serializer/deserializer for amount values. It is optimized for
values which have few non-zero digits in decimal representation. Most
amounts currently in the txout set take only 1 or 2 bytes to
represent.
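One way such a compressor can work: strip trailing decimal zeroes into an
exponent and store the last non-zero digit separately (sketch of the scheme;
amounts are in satoshi):

    #include <cstdint>

    uint64_t CompressAmountSketch(uint64_t n)
    {
        if (n == 0)
            return 0;
        int e = 0;
        while ((n % 10) == 0 && e < 9) {   // factor out powers of ten
            n /= 10;
            e++;
        }
        if (e < 9) {
            int d = n % 10;                // last non-zero digit (1..9)
            n /= 10;
            return 1 + (n * 9 + d - 1) * 10 + e;
        }
        return 1 + (n - 1) * 10 + 9;
    }

    // e.g. 50 BTC = 5,000,000,000 satoshi compresses to 50, which the
    // variable-length integer encoder below stores in a single byte.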
Special serializers for script which detect common cases and encode
them much more efficiently. 3 special cases are defined:
* Pay to pubkey hash (encoded as 21 bytes)
* Pay to script hash (encoded as 21 bytes)
* Pay to pubkey starting with 0x02, 0x03 or 0x04 (encoded as 33 bytes)
Other scripts up to 121 bytes require 1 byte + script length. Above
that, scripts up to 16505 bytes require 2 bytes + script length.
Variable-length integers: bytes are a MSB base-128 encoding of the number.
The high bit in each byte signifies whether another digit follows. To make
the encoding one-to-one, one is subtracted from all but the last digit.
Thus, the byte sequence a[] with length len, where all but the last byte
has bit 128 set, encodes the number:
(a[len-1] & 0x7F) + sum(i=1..len-1, 128^i*((a[len-i-1] & 0x7F)+1))
Properties:
* Very small (0-127: 1 byte, 128-16511: 2 bytes, 16512-2113663: 3 bytes)
* Every integer has exactly one encoding
* Encoding does not depend on size of original integer type
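A sketch of an encoder/decoder matching this description (writing into a byte
vector; the real serializer is templated over streams):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    void WriteVarIntSketch(std::vector<unsigned char>& out, uint64_t n)
    {
        unsigned char tmp[10];
        int len = 0;
        while (true) {
            // low 7 bits of the current digit; bit 128 set on all but the last byte
            tmp[len] = (n & 0x7F) | (len ? 0x80 : 0x00);
            if (n <= 0x7F)
                break;
            n = (n >> 7) - 1;        // the "-1" keeps the encoding one-to-one
            len++;
        }
        do {
            out.push_back(tmp[len]); // emit most significant digit first
        } while (len--);
    }

    uint64_t ReadVarIntSketch(const std::vector<unsigned char>& in, size_t& pos)
    {
        uint64_t n = 0;
        while (true) {
            unsigned char ch = in.at(pos++);
            n = (n << 7) | (ch & 0x7F);
            if (ch & 0x80)
                n++;                 // undo the "-1" applied to non-final digits
            else
                return n;
        }
    }

    // 0..127 encode as one byte, 128..16511 as two bytes, matching the table above.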
This reverts commit 199d88cf90, reversing
changes made to 65bc1573e7.
License is worse instead of better. Will only accept public domain and
MIT-licensed icons from now on.
Corrupt wallets used to cause a DB_RUNRECOVERY uncaught exception and a
crash. This commit does three things:
1) Runs a BDB verify early in the startup process, and if there is a
low-level problem with the database:
+ Moves the bad wallet.dat to wallet.timestamp.bak
+ Runs a 'salvage' operation to get key/value pairs, and
writes them to a new wallet.dat
+ Continues with startup.
2) Much more tolerant of serialization errors. All errors in deserialization
are reported but tolerated, EXCEPT for errors related to reading keypairs
or master key records-- those are reported and then we shut down, so the user
can get help (or recover from a backup).
3) Adds a new -salvagewallet option, which:
+ Moves the wallet.dat to wallet.timestamp.bak
+ extracts ONLY keypairs and master keys into a new wallet.dat
+ soft-sets -rescan, to recreate transaction history
This was tested by randomly corrupting testnet wallets using a little
python script I wrote (https://gist.github.com/3812689)
Before, opening a -datadir that was created with a new
version of Berkeley DB would result in an un-caught DB_RUNRECOVERY
exception.
After these changes, the error is caught and the user is told
that there is a problem and is told how to try to recover from
it.
- don't rely on the QSettings for cases ProxyUse and ProxySocksVersion and
query the real values via the GetProxy() call
- add a missing "successful =" for case ProxyUse in ::setData()
I2P apparently needs 256 bits to store a fully routable address. Garlicat
requires a centralized lookup service to map the 80-bit addresses to fully
routable ones (as far as I understood), so that's not really usable in our
situation.
To support I2P routing and peer exchange for it, another solution is needed.
This will most likely imply a network protocol change, and extension of the
'addr' message.
- fix #1560 by properly locking proxy-related data structures
- update GetProxy() and introduce GetNameProxy() to be able to use a
thread-safe local copy from proxyInfo and nameproxyInfo
- update usage of GetProxy() all over the source to match the new
behaviour, as it now fills a full proxyType object
- rename GetNameProxy() into HaveNameProxy() to be more clear
This allows fun stuff such as `bitcoin --help | less`, and easier
piping to files.
Looking at other tools such as bash and gcc, they all send their help
text to stdout.
These commands are a leftover from send-to-IP transactions, which have been
removed a long time ago.
Also removes CNode::mapRequests and CNode::PushRequests, as these were
only used for the mentioned commands.
As the code was before, toHTML added empty elements to mapValue when checking for their existence. Now it first checks for their existence and then for their non-emptiness.
Removed a duplicated identical if
There were two equal ifs, one inside the other; if the first one is true, then the second one is also true.
Due to a bug in the implementation of MakeSameSize(), using OP_AND, OP_OR, or OP_XOR with signed values of unequal size will result in the sign-value becoming part of the smaller integer, with nonsensical results. This patch documents the unexpected behavior and provides the basis of a solution should a decision be made to fix the bug in the future.
Adds a -detachdb option. Useful for upgrading, for example. Lets you use fast
stops usually, but force a detach when needed. Also, allows you to do a fast
stop in a system normally configured for slow stops.
- I checked every occurrence of strprintf() in the code and used %u, where
unsigned vars are used
- the change to GetByte() was made, as ip is an unsigned char
Matt pointed out some time ago that there existed a minor DOS
attack where a node in its initial block download could be wedged
by an overwrite attack in a fork created between checkpoints before
a time where BIP30 was enforced. Now that the BIP30 timestamp
is irreversibly past, the check can be more aggressive and apply to
all blocks except the two historic violations.
- Paging using PageUp / PageDown now works when entry widget has focus
- Typing or pasting while the messages widget has focus auto-selects entry widget
We're in a wholly different world now, C++-compiler-wise.
Current std::stringstream implementations don't have the stated problem anymore,
and are just as fast as CDataStream.
The #ifdef'd block does not even compile anymore; the CDataStream constructor
changed, and it is missing some std::. Also, timing in whole seconds is way too
granular to say anything sensible in such microbenchmarks. Just remove it;
it can always be found again in the git history.