Remove the (relay/mempool) rule that all outputs of free transactions
must be greater than 0.01 XBT. Dust spam is now taken care of by making
dusty outputs non-standard.
This changes the priority calculation to exclude per-txin data, including
up to 110 bytes of scriptSig, so that transactions which sweep up
extra UTXO don't lose priority relative to ones that don't.
I'd toyed with some other variations, but it seems like any formulation
which results in an incentive stronger than making them not count will
sometimes create incentives to add extra outputs so that you have
extra inputs to consume later. The maximum credit is limited so that
users don't lose the disincentive to stuff random data in their
transactions; the limit of 110 bytes is based on the size of a P2SH
redemption with a compressed public key.
This shouldn't need a staged deployment because the priority is not
used as a relay criterion, only a mining criterion.
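Roughly, the adjusted priority computation looks like the sketch below
(illustrative names, not the exact patch; the 41-byte fixed per-input
size is an assumption of this sketch):

    // Sketch only: priority = sum(input value * input age) / adjusted size,
    // where the adjusted size does not count per-txin overhead (41 bytes)
    // plus up to 110 bytes of scriptSig per input.
    #include <algorithm>
    #include <vector>

    struct TxInInfo {
        unsigned int nScriptSigSize; // serialized scriptSig size in bytes
    };

    double ComputeAdjustedPriority(double dPriorityInputs,      // sum of value*age over inputs
                                   unsigned int nTxSize,        // full serialized tx size
                                   const std::vector<TxInInfo>& vin)
    {
        for (size_t i = 0; i < vin.size(); i++) {
            // 41 bytes of fixed per-input data, plus up to 110 bytes of scriptSig
            // (enough for a P2SH redemption with a compressed public key).
            unsigned int offset = 41u + std::min(110u, vin[i].nScriptSigSize);
            if (nTxSize > offset)
                nTxSize -= offset;
        }
        if (nTxSize == 0) return 0.0;
        return dPriorityInputs / nTxSize;
    }
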
Correctly use the purpose of addresses that are added after the client
has started. Addresses with purpose "refund" or "change" should not
be visible in the GUI; this is now handled correctly.
Add support for a Payment Protocol to Bitcoin-Qt.
Payment messages are protocol-buffer encoded and communicated over
http(s), so this adds a dependency on the Google protocol buffer
library, and requires Qt with OpenSSL support.
Straight refactor, so mapAddressBook stores a CAddressBookData
(which just contains a std::string) instead of a std::string.
Preparation for payment protocol work, which will add the notion
of refund addresses to the address book.
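A rough sketch of the shape this takes (the field and comment here are
illustrative, not the literal class definition):

    // Illustrative only: the address book value type after the refactor.
    #include <map>
    #include <string>

    class CAddressBookData
    {
    public:
        std::string name; // the label previously stored directly in mapAddressBook
        // Later payment-protocol work can add fields such as a "purpose"
        // ("send", "receive", "refund", ...) without touching callers again.
    };

    // was: std::map<CTxDestination, std::string>       mapAddressBook;
    // now: std::map<CTxDestination, CAddressBookData>  mapAddressBook;
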
Compute safe lower bounds on the birth times of all wallet keys. For
pool keys or keys with metadata, the stored birth time is used
directly. For all others, the birth times are inferred from the wallet
transactions.
Refactor keytime:
* Key metadata is kept in CWallet::mapKeyMetadata (std::map<CKeyID,CKeyMetadata>).
* When generating a new key, its creation time is put in that map before the new key is written.
* AddKeyPubKey and AddCryptedKey do not take a creation time argument, but instead
pull it from that map, if it exists there.
Bugfix:
* AddKeyPubKey and AddCryptedKey in CWallet didn't override the CKeyStore
definitions anymore. This is fixed, as they no longer need the nCreationTime
argument.
Also a few other related changes:
* Metadata can be overwritten.
* Only GenerateNewKey calls GetTime(), as it's the only place where we know for
sure a key was not constructed earlier.
* When the nTimeFirstKey is known to be inaccurate, it is set to the value 1
(instead of 0, which would mean unknown).
* Use CPubKey instead of std::vector<unsigned char> where possible.
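A condensed sketch of that flow (simplified stand-ins for the wallet
types and function names; not the actual code):

    // Sketch of the keytime flow: only key generation records a creation time,
    // and key-adding code looks it up instead of taking it as an argument.
    #include <ctime>
    #include <map>
    #include <stdint.h>
    #include <string>

    struct CKeyMetadata {
        int64_t nCreateTime;                 // 0 = unknown, 1 = known-inaccurate lower bound
        CKeyMetadata() : nCreateTime(0) {}
        explicit CKeyMetadata(int64_t t) : nCreateTime(t) {}
    };

    typedef std::string KeyId;               // stand-in for CKeyID in this sketch
    static std::map<KeyId, CKeyMetadata> mapKeyMetadata;
    static int64_t nTimeFirstKey = 0;

    // GenerateNewKey is the only place that knows a key is brand new, so it is
    // the only place that reads the clock and records metadata for the key.
    void RecordNewKeyTime(const KeyId& keyid)
    {
        int64_t nNow = time(NULL);
        mapKeyMetadata[keyid] = CKeyMetadata(nNow);
        if (!nTimeFirstKey || nNow < nTimeFirstKey)
            nTimeFirstKey = nNow;
    }

    // AddKeyPubKey/AddCryptedKey no longer take a creation time; they look it up.
    int64_t LookupKeyCreationTime(const KeyId& keyid)
    {
        std::map<KeyId, CKeyMetadata>::const_iterator it = mapKeyMetadata.find(keyid);
        return it != mapKeyMetadata.end() ? it->second.nCreateTime : 0;
    }
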
This (nearly) doesn't change fee rules at all:
* To make it into the fee transaction area, the dPriority comparison
changed from < to <=
* We now just ignore transactions > MAX_BLOCK_SIZE/4 instead of
doing some calculations to require increasingly large fees as
size increases.
Removed AreInputsStandard from CTransaction, made it a regular function in main.
Moved CTransaction::GetOutputFor to CCoinsViewCache.
Moved GetLegacySigOpCount and GetP2SHSigOpCount out of CTransaction into regular functions in main.
Moved GetValueIn and HaveInputs from CTransaction into CCoinsViewCache.
Moved AllowFree, ClientCheckInputs, CheckInputs, UpdateCoins, and CheckTransaction out of CTransaction and into main.
Moved IsStandard and IsFinal out of CTransaction and put them in main as IsStandardTx and IsFinalTx.
Moved GetValueOut out of CTransaction into main.
Moved CTxIn, CTxOut, and CTransaction into core.
Added minimum fee parameter to CTxOut::IsDust() temporarily until CTransaction is moved to core.h so that CTxOut needn't know about CTransaction.
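A sketch of one reasonable formulation of such a dust test with the fee
passed in explicitly (the 1/3-of-value rule, byte sizes, and helper name
below are assumptions of this sketch, not quoted from the change):

    // Illustrative dust test: an output is "dust" if, at the given minimum
    // relay fee (satoshis per 1000 bytes), you'd pay more than 1/3 of its
    // value in fees just to spend it. Sizes below are rough, not exact.
    #include <stdint.h>

    bool IsDustOutput(int64_t nValue,            // output value in satoshis
                      unsigned int nTxOutSize,   // serialized size of the txout (~34 bytes)
                      int64_t nMinRelayTxFee)    // fee rate in satoshis per 1000 bytes
    {
        // Spending a typical txout needs a txin of at least ~148 bytes.
        unsigned int nBytesToSpend = nTxOutSize + 148;
        // Dust if one third of the value is less than the fee for those bytes.
        return nValue * 1000 / (3 * (int64_t)nBytesToSpend) < nMinRelayTxFee;
    }
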
Remove the pnext pointer in CBlockIndex, and replace it with a
vBlockIndexByHeight vector (no effect on memory usage). The pnext
lookup becomes vBlockIndexByHeight[nHeight+1], and
FindBlockByHeight becomes constant-time.
This also means the entire mapBlockIndex structure and the block
index entries in it become purely blocktree-related data, and
independent from the currently active chain, potentially allowing
them to be protected by separate mutexes in the future.
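A minimal sketch of the lookups this enables (stand-in types,
error handling simplified):

    // Sketch: with the active chain stored as a vector indexed by height,
    // both "next block" and "block at height N" become O(1) lookups.
    #include <cstddef>
    #include <vector>

    struct CBlockIndexLike { int nHeight; };            // stand-in for CBlockIndex
    std::vector<CBlockIndexLike*> vBlockIndexByHeight;  // active chain only

    CBlockIndexLike* FindBlockByHeight(int nHeight)
    {
        if (nHeight < 0 || (size_t)nHeight >= vBlockIndexByHeight.size())
            return NULL;
        return vBlockIndexByHeight[nHeight];            // constant time
    }

    CBlockIndexLike* NextInMainChain(const CBlockIndexLike* pindex)
    {
        return FindBlockByHeight(pindex->nHeight + 1);  // replaces pindex->pnext
    }
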
New method in bitcoinrpc: RunLater, which uses a map of deadline
timers to run a function later.
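Roughly what such a helper can look like with boost::asio (a sketch
under assumptions: it is not the exact bitcoinrpc code, it presumes an
io_service being run elsewhere, and locking is omitted):

    // Sketch of a RunLater helper: one named deadline timer per task;
    // scheduling the same name again cancels the previous wait, so the
    // most recent call wins.
    #include <boost/asio.hpp>
    #include <boost/bind.hpp>
    #include <boost/function.hpp>
    #include <boost/shared_ptr.hpp>
    #include <map>
    #include <stdint.h>
    #include <string>

    typedef boost::shared_ptr<boost::asio::deadline_timer> TimerPtr;
    static std::map<std::string, TimerPtr> deadlineTimers;

    static void TimerHandler(const boost::system::error_code& err,
                             boost::function<void()> func)
    {
        // A cancelled wait (timer rescheduled) reports operation_aborted; skip it.
        if (err != boost::asio::error::operation_aborted)
            func();
    }

    void RunLater(const std::string& name, boost::function<void()> func,
                  boost::asio::io_service& io, int64_t nSeconds)
    {
        if (deadlineTimers.count(name) == 0)
            deadlineTimers[name] = TimerPtr(new boost::asio::deadline_timer(io));
        // Resetting the expiry cancels any pending wait, so the last call wins.
        deadlineTimers[name]->expires_from_now(boost::posix_time::seconds(nSeconds));
        deadlineTimers[name]->async_wait(
            boost::bind(TimerHandler, boost::asio::placeholders::error, func));
    }
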
Behavior of walletpassphrase is changed; before, calling
walletpassphrase again before the lock timeout passed
would result in: Error: Wallet is already unlocked.
You would have to call walletlock before walletpassphrase.
Now the last walletpassphrase with the correct password
wins, and overrides any previous timeout.
Fixes issue #1961, which was caused by spawning too many threads.
Test plan:
Start with encrypted wallet, password 'foo'
NOTE:
python -c 'import time; print("%d"%time.time())'
... will tell you the current Unix timestamp.
Try:
walletpassphrase foo 600
getinfo
EXPECT: unlocked_until is about 10 minutes in the future
walletpassphrase foo 1
sleep 2
sendtoaddress mun74Bvba3B1PF2YkrF4NsgcJwHXXh12LF 11
EXPECT: Error: Please enter the wallet passphrase with walletpassphrase first.
walletpassphrase foo 600
walletpassphrase foo 0
getinfo
EXPECT: wallet is locked (unlocked_until is 0)
walletpassphrase foo 10
walletpassphrase foo 600
getinfo
EXPECT: wallet is unlocked until 10 minutes in future
walletpassphrase foo 60
walletpassphrase bar 600
EXPECT: Error, incorrect passphrase
getinfo
EXPECT: wallet still scheduled to lock 60 seconds from first (successful) walletpassphrase
When debugging another issue, I found a hang-during-startup race condition due to
LoadWallet calling SetMinVersion (via LoadCryptedKey).
Writing to the file that you're in the process of reading is a bad idea.
This fixes test_bitcoin failures on openbsd reported by dhill on IRC.
On some systems rand() is a simple LCG over 2^31 and so it produces
an even-odd sequence. ApproximateBestSubset was only using the least
significant bit and so every run of the iterative solver would be the
same for some inputs, resulting in some pretty dumb decisions.
Using something other than the least significant bit would paper over
the issue but who knows what other way a system's rand() might get us
here. Instead we use an internal RNG with a period of something like
2^60 which is well behaved. This also makes it possible to make the
selection deterministic for the tests, if we wanted to implement that.
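One well-known generator with a period in that ballpark is Marsaglia's
multiply-with-carry pair; a sketch follows (seeding and thread safety
omitted, and not necessarily the exact generator used):

    // Sketch of a small non-cryptographic RNG (multiply-with-carry pair).
    // Good enough for coin-selection randomization; NOT for anything
    // security-related.
    #include <stdint.h>

    static uint32_t nRz = 11;
    static uint32_t nRw = 11;

    static inline uint32_t InsecureRand32()
    {
        nRz = 36969 * (nRz & 0xFFFF) + (nRz >> 16);
        nRw = 18000 * (nRw & 0xFFFF) + (nRw >> 16);
        return (nRw << 16) + nRz;
    }

    // Unlike a power-of-two LCG, the low bit of this generator does not
    // simply alternate between even and odd.
    static inline bool InsecureRandBool() { return InsecureRand32() & 1; }
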
Extremely large transactions with lots of inputs can cost the network
almost as much to process as they cost the sender in fees.
We would never create transactions larger than 100K; this change
makes transactions larger than 100K non-standard, so they are not
relayed/mined by default. This is most important for miners that might
create blocks larger than 250K, who could be vulnerable to a
make-your-blocks-so-expensive-to-verify-they-get-orphaned attack.
getbalance with no arguments and 'getbalance "*" 0' could return different results.
Two changes:
Use IsConfirmed() instead of IsFinal(), so 'getbalance "*" 0' uses the same
'is this output spendable' criterion as 'getbalance'. Fixes issue #172.
And a tiny refactor to CWallet::GetBalance() (redundant call to IsFinal -- IsConfirmed
calls IsFinal).
Fixes issue #2178: an attacker could penny-flood with invalid-signature
transactions to deduce which addresses belonged to your node.
I'm committing this early for code review; I still need to write up
a test plan.
Executive summary of fix: check all transactions received from the network
for penny-flood rate-limiting before adding them to the memory pool. But do NOT
rate-limit transactions added to the memory pool:
- because of blockchain reorgs
- stored in the wallet and added at startup
- sent from the GUI or one of the send* RPC commands (CWallet::CommitTransaction)
The limit-free-transactions code really should be a method on CNode, with
counters per-peer. But that is a bigger change for another day.
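The limiter itself is essentially an exponentially decaying byte
counter; a sketch under assumed names (locking and the surrounding
mempool-accept plumbing omitted):

    // Sketch of an exponentially decaying free-transaction rate limiter.
    // Only applied to transactions arriving from the network (fLimitFree);
    // wallet, RPC-sent, and reorg transactions bypass it.
    #include <cmath>
    #include <ctime>
    #include <stdint.h>

    bool AllowFreeTransaction(bool fLimitFree, unsigned int nTxBytes,
                              double dMaxKBPerMinute /* e.g. 15 */)
    {
        if (!fLimitFree)
            return true;

        static double dFreeCount = 0.0;   // decaying count of free bytes accepted
        static int64_t nLastTime = 0;
        int64_t nNow = time(NULL);

        // Exponential decay with a ~10-minute (600 s) time constant.
        dFreeCount *= std::pow(1.0 - 1.0 / 600.0, (double)(nNow - nLastTime));
        nLastTime = nNow;

        // Budget is expressed in thousand-bytes per minute, spread over the window.
        if (dFreeCount >= dMaxKBPerMinute * 10 * 1000)
            return false; // rate limited: require a fee or drop

        dFreeCount += nTxBytes;
        return true;
    }
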
If the user were really after the fastest possible confirmation time,
they would be setting a fee manually. In cases where the wallet builds
a transaction whose priority is too low to qualify as free until
the next block, go ahead without a fee. Confirmation frequently takes
multiple blocks even when a minimum fee is provided.
This avoids a potential crash in listaddressgroupings when trying to read
the scriptPubKeys on transactions where the first input IsMine but some
of the rest are not.
The original test (checking whether the transaction occurs in the
txindex) is not usable anymore, as it will miss anything already
fully spent. However, as merkle transactions (and by extension,
wallet transactions) track which block they were last seen being
included in, we can use that to determine the need for
rebroadcasting.
Use CBlock's vMerkleTree to cache transaction hashes, and pass them
along as an argument in more function calls. During initial block download,
this results in every transaction's hash being computed only once.
During the initial block download (or -loadblock), delay the connection
of new blocks a bit and perform the connections in a single action. This
reduces the load on the database engine, as subsequent blocks often update
transactions from an earlier block in the same batch.
This switches bitcoin's transaction/block verification logic to use a
"coin database", which contains all unredeemed transaction output scripts,
amounts and heights.
The name ultraprune comes from the fact that instead of a full transaction
index, we only (need to) keep an index with unspent outputs. For now, the
blocks themselves are kept as usual, although they are only necessary for
serving, rescanning and reorganizing.
The basic datastructures are CCoins (representing the coins of a single
transaction), and CCoinsView (representing a state of the coins database).
There are several implementations of CCoinsView: a dummy, one backed by
the coins database (coins.dat), one backed by the memory pool, and one
that adds a cache on top of another view. FetchInputs, ConnectInputs,
ConnectBlock, DisconnectBlock, ... now operate on a generic CCoinsView.
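A stripped-down sketch of the layering (placeholder types and far fewer
methods than the real interface):

    // Sketch of the CCoinsView layering: an abstract view of the
    // unspent-output state, and a cache that batches writes until Flush().
    #include <map>
    #include <string>

    typedef std::string TxId;                 // stand-in for uint256
    struct Coins { int nHeight; };            // stand-in for CCoins (outputs, value, ...)

    class CoinsView {
    public:
        virtual bool GetCoins(const TxId& txid, Coins& coins) const = 0;
        virtual bool SetCoins(const TxId& txid, const Coins& coins) = 0;
        virtual bool HaveCoins(const TxId& txid) const = 0;
        virtual ~CoinsView() {}
    };

    // Cache layer: reads fall through to the base view, writes stay local until
    // Flush() pushes the whole batch down at once (eases atomic-write backends).
    class CoinsViewCache : public CoinsView {
        CoinsView& base;
        std::map<TxId, Coins> cache;
    public:
        explicit CoinsViewCache(CoinsView& baseIn) : base(baseIn) {}
        bool GetCoins(const TxId& txid, Coins& coins) const {
            std::map<TxId, Coins>::const_iterator it = cache.find(txid);
            if (it != cache.end()) { coins = it->second; return true; }
            return base.GetCoins(txid, coins);
        }
        bool SetCoins(const TxId& txid, const Coins& coins) {
            cache[txid] = coins;
            return true;
        }
        bool HaveCoins(const TxId& txid) const {
            Coins dummy;
            return GetCoins(txid, dummy);
        }
        bool Flush() {
            for (std::map<TxId, Coins>::const_iterator it = cache.begin(); it != cache.end(); ++it)
                base.SetCoins(it->first, it->second);
            cache.clear();
            return true;
        }
    };
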
The block switching logic now builds a single cached CCoinsView with
changes to be committed to the database before any changes are made.
This means no uncommitted changes are ever read from the database, and
should ease the transition to another database layer which does not
support transactions (but does support atomic writes), like LevelDB.
For the getrawtransaction() RPC call, access to a txid-to-disk index
would be preferable. As this index is not necessary or even useful
for any other part of the implementation, it is not provided. Instead,
getrawtransaction() uses the coin database to find the block height,
and then scans that block to find the requested transaction. This is
slow, but should suffice for debug purposes.
Corrupt wallets used to cause a DB_RUNRECOVERY uncaught exception and a
crash. This commit does three things:
1) Runs a BDB verify early in the startup process, and if there is a
low-level problem with the database:
+ Moves the bad wallet.dat to wallet.timestamp.bak
+ Runs a 'salvage' operation to get key/value pairs, and
writes them to a new wallet.dat
+ Continues with startup.
2) Much more tolerant of serialization errors. All errors in deserialization
are reported but tolerated, EXCEPT for errors related to reading keypairs
or master key records -- those are reported and then the client shuts down,
so the user can get help (or recover from a backup).
3) Adds a new -salvagewallet option, which:
+ Moves the wallet.dat to wallet.timestamp.bak
+ extracts ONLY keypairs and master keys into a new wallet.dat
+ soft-sets -rescan, to recreate transaction history
This was tested by randomly corrupting testnet wallets using a little
python script I wrote (https://gist.github.com/3812689)
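For reference, the low-level salvage step is essentially Berkeley DB's
verify-in-salvage mode; a sketch follows (error handling, environment
setup, and rewriting the dump into a fresh wallet are omitted, and the
helper name is made up):

    // Sketch: use Berkeley DB's salvage mode to dump whatever key/value pairs
    // can still be read from a damaged wallet.dat.
    #include <db_cxx.h>
    #include <sstream>
    #include <string>

    bool SalvageWalletFile(DbEnv& dbenv, const std::string& strFile, std::string& strDumpOut)
    {
        std::ostringstream strDump;
        Db db(&dbenv, 0);
        // DB_SALVAGE extracts readable key/value pairs; DB_AGGRESSIVE tries
        // harder on badly damaged files (and may emit entries needing filtering).
        int result = db.verify(strFile.c_str(), NULL, &strDump, DB_SALVAGE | DB_AGGRESSIVE);
        strDumpOut = strDump.str();
        return result == 0; // DB_VERIFY_BAD means data was recovered but with errors
    }
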
OrderedTxItems returns a multimap of pointers, but needs a place to store the actual CAccountingEntries it points to.
It had been using a stack item, which was clobbered as soon as it returned, resulting in undefined behaviour.
This fixes at least bug #1768.
- Show address receiving the generation, and include it in the correct "account"
- Multiple entries in listtransactions output if the coinbase has multiple outputs to us
Logic:
- If sending a transaction, assign its timestamp to the current time.
- If receiving a transaction outside a block, assign its timestamp to the current time.
- If receiving a block with a future timestamp, assign all its (not already known) transactions' timestamps to the current time.
- If receiving a block with a past timestamp, before the most recent known transaction (that we care about), assign all its (not already known) transactions' timestamps to the same timestamp as that most-recent-known transaction.
- If receiving a block with a past timestamp, but after the most recent known transaction, assign all its (not already known) transactions' timestamps to the block time.
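A compact sketch of those rules (illustrative, not the wallet's exact
code):

    // Sketch of the timestamp rules above.
    #include <stdint.h>

    int64_t SmartTxTime(bool fInBlock,          // was the tx received in a block?
                        int64_t nBlockTime,     // that block's timestamp (if any)
                        int64_t nLatestKnownTx, // time of most recent wallet tx we care about
                        int64_t nNow)           // current time
    {
        if (!fInBlock)
            return nNow;                        // sent by us, or received outside a block
        if (nBlockTime > nNow)
            return nNow;                        // block claims a future time
        if (nBlockTime < nLatestKnownTx)
            return nLatestKnownTx;              // block is older than newest known tx
        return nBlockTime;                      // otherwise trust the block time
    }
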
For backward compatibility, new accounting data is stored after a \0 in the comment string.
This way, old versions and third-party software should load and store it untouched, while all actual use (listtransactions, for example) ignores it.