To protect privacy, do not use UPnP when a proxy is set. The user may
still specify -listen=1 to listen locally (for a hidden service), so
don't rely on this happening through -listen.
Fixes #2927.
On a busy or slow system, the CScheduler unit test could fail because it
assumed all threads would be done after a couple of milliseconds.
Replace the hard-coded sleep with the CScheduler stop() method, which
cleanly exits the servicing threads once all tasks are completely
finished.
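As a rough illustration of that pattern (a generic sketch, not the actual CScheduler code; the class and member names below are made up), a drain-style stop lets worker threads finish every queued task before exiting, instead of sleeping for a fixed time and hoping they are done:
```
// Generic sketch of a drain-style stop (illustrative only, not CScheduler):
// Stop() tells the service loop to exit once the task queue is empty.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <utility>

class TaskQueue
{
public:
    void Push(std::function<void()> task)
    {
        std::lock_guard<std::mutex> lock(m);
        tasks.push(std::move(task));
        cv.notify_one();
    }

    // Service loop run by each worker thread.
    void ServiceQueue()
    {
        std::unique_lock<std::mutex> lock(m);
        for (;;) {
            cv.wait(lock, [&] { return stopping || !tasks.empty(); });
            if (tasks.empty()) break;          // stop requested and fully drained
            std::function<void()> task = std::move(tasks.front());
            tasks.pop();
            lock.unlock();
            task();                            // run the task outside the lock
            lock.lock();
        }
    }

    // Drain-style stop: workers exit only after every queued task has run.
    void Stop()
    {
        {
            std::lock_guard<std::mutex> lock(m);
            stopping = true;
        }
        cv.notify_all();
    }

private:
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::function<void()>> tasks;
    bool stopping = false;
};
```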
Previously this was cleared only after UnlinkPrunedFiles, but it should really be cleared after FindFilesToPrune, regardless of whether there are any files to be pruned.
86a5f4b Relocate calls to CheckDiskSpace (Alex Morcos)
67708ac Write block index more frequently than cache flushes (Pieter Wuille)
b3ed423 Cache tweak and logging improvements (Pieter Wuille)
fc684ad Use accurate memory for flushing decisions (Pieter Wuille)
046392d Keep track of memory usage in CCoinsViewCache (Pieter Wuille)
540629c Add memusage.h (Pieter Wuille)
8f0947b Increase timeouts in pruning.py and modify warning language. (Alex Morcos)
b89f307 Fix incorrect variable name in FindFilesToPrune (Suhas Daftuar)
This assertion will trigger any time the client quits without
shutting down properly due to an error condition. As the user will
then report this assertion instead of the error that was the root cause, it is
better to remove it.
Create a monitoring task that counts how many blocks have been found in the last four hours.
If very few or too many have been found, an alert is triggered.
"Very few" and "too many" are set based on a false positive rate of once every fifty years of constant running with constant hashing power, which works out to getting 5 or fewer or 48 or more blocks in four hours (instead of the average of 24).
Only one alert per day is triggered, so if you get disconnected from the network (or are being Sybil'ed) -alertnotify will be triggered after 3.5 hours but you won't get another -alertnotify for 24 hours.
Tested with a new unit test and by running on the main network with -debug=partitioncheck.
Run test/test_bitcoin --log_level=message to see the alert messages:
WARNING: check your network connection, 3 blocks received in the last 4 hours (24 expected)
WARNING: abnormally high number of blocks generated, 60 blocks received in the last 4 hours (24 expected)
The -debug=partitioncheck debug.log messages look like:
ThreadPartitionCheck : Found 22 blocks in the last 4 hours
ThreadPartitionCheck : likelihood: 0.0777702
ca5f688 [QT] don't colorize icons on win and mac (Jonas Schnelli)
7247d10 [QT] use alert icon with tooltip insted of "(out of sync)" text (Jonas Schnelli)
51c7c70 [QT] remove frame to avoid double-frame situation in sendcoinsentry.ui (Jonas Schnelli)
2a6b844 [QT] change transaction amount and height in overview page (Jonas Schnelli)
Instead of only checking height to decide whether to disable script checks,
actually check whether a block is an ancestor of a checkpoint, up to which
headers have been validated. This means that we don't have to prevent
accepting a side branch anymore - it will be safe, just slower to
validate.
We still need to prevent being fed a multitude of low-difficulty headers
filling up our memory. The mechanism for that is unchanged for now: once
a checkpoint is reached with headers, no header chains branching off before
that point are allowed anymore.
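A simplified sketch of the ancestor test described above (illustrative only; the real code works on CBlockIndex entries, not this toy struct): a block may skip full script checks only if walking back from the validated checkpoint reaches exactly that block.
```
struct BlockIndex {
    int height;
    const BlockIndex* parent;
};

// True if `block` lies on the chain leading up to `checkpoint`.
bool IsAncestorOfCheckpoint(const BlockIndex* block, const BlockIndex* checkpoint)
{
    const BlockIndex* walk = checkpoint;
    while (walk && walk->height > block->height)
        walk = walk->parent;
    return walk == block;
}
```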
This class groups transactions that have been confirmed in blocks into buckets, based on either their fee or their priority. Then for each bucket, the class calculates what percentage of the transactions were confirmed within various numbers of blocks. It does this by keeping an exponentially decaying moving history for each bucket and confirm block count of the percentage of transactions in that bucket that were confirmed within that number of blocks.
- Eliminate txs which didn't have all inputs available at entry from fee/pri calcs
- Add dynamic breakpoints and tracking of confirmation delays in mempool transactions
- Remove old CMinerPolicyEstimator and CBlockAverage code
- New smartfees.py
- Pass a flag to the estimation code, using IsInitialBlockDownload as a proxy for when we are still catching up and we shouldn't be counting how many blocks it takes for transactions to be included.
- Add a policyestimator unit test
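A minimal sketch of the bucketed, exponentially decaying statistics this class keeps (illustrative only, not the actual estimator code; all names below are made up):
```
#include <cstddef>
#include <vector>

class DecayingBucketStats
{
public:
    DecayingBucketStats(size_t nBuckets, int maxConfirms, double decay)
        : decay(decay),
          txTotal(nBuckets, 0.0),
          confirmedWithin(maxConfirms, std::vector<double>(nBuckets, 0.0)) {}

    // Called once per new block: age all history by the decay factor.
    void DecayAll()
    {
        for (double& t : txTotal) t *= decay;
        for (std::vector<double>& row : confirmedWithin)
            for (double& c : row) c *= decay;
    }

    // Record a transaction from fee/priority bucket `bucket` that confirmed
    // after `blocksToConfirm` blocks (1 = included in the very next block).
    void Record(size_t bucket, int blocksToConfirm)
    {
        txTotal[bucket] += 1.0;
        for (size_t i = blocksToConfirm - 1; i < confirmedWithin.size(); ++i)
            confirmedWithin[i][bucket] += 1.0;
    }

    // Fraction of this bucket's transactions confirmed within `target` blocks.
    double SuccessRate(size_t bucket, int target) const
    {
        return txTotal[bucket] > 0.0
            ? confirmedWithin[target - 1][bucket] / txTotal[bucket]
            : 0.0;
    }

private:
    double decay;                                     // e.g. 0.998 per block
    std::vector<double> txTotal;                      // decayed tx count per bucket
    std::vector<std::vector<double>> confirmedWithin; // [confirms-1][bucket]
};
```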
We don't want to erase orphans that still have missing inputs; they should still be tracked as orphans. Also, the transaction that is being accepted can't be an orphan, otherwise it would have been accepted previously, so it doesn't need to be added to the erase queue.
When the internal miner is enabled at the start of a new node, there
is a near-instant assert in TestBlockValidity because it is attempting
to mine a block before the top checkpoint.
Also avoids a data race around vNodes.
03c5687 appropriate response when trying to get a block in pruned mode (Jonas Schnelli)
1b2e555 add autoprune information to RPC "getblockchaininfo" (Jonas Schnelli)
a8cdaf5 checkpoints: move the checkpoints enable boolean into main (Cory Fields)
11982d3 checkpoints: Decouple checkpoints from Params (Cory Fields)
6996823 checkpoints: make checkpoints a member of CChainParams (Cory Fields)
9f13a10 checkpoints: store mapCheckpoints in CCheckpointData rather than a pointer (Cory Fields)
a71ab10 QA: add RPC tests for error reporting of "signrawtransaction" (dexX7)
8ac2a4e RPC: show script verification errors in "signrawtransaction" result (dexX7)
If there are any script verification errors when using "signrawtransaction", they are shown in the RPC result:
```
// ...
Result:
{
  "hex" : "value",             (string) The hex-encoded raw transaction with signature(s)
  "complete" : true|false,     (boolean) If the transaction has a complete set of signatures
  "errors" : [                 (json array of objects) Script verification errors (if there are any)
    {
      "txid" : "hash",           (string) The hash of the referenced, previous transaction
      "vout" : n,                (numeric) The index of the output to spent and used as input
      "scriptSig" : "hex",       (string) The hex-encoded signature script
      "sequence" : n,            (numeric) Script sequence number
      "error" : "text"           (string) Verification or signing error related to the input
    }
    ,...
  ]
}
```
Connecting the chain can take quite a while.
All the while it is still showing `Loading wallet...`.
Add an init message to inform the user what is happening.
libsecp256k1's API changed, so update key.cpp to use it.
Libsecp256k1 now has explicit context objects, which makes it completely thread-safe.
In turn, keep an explicit context object in key.cpp, which is explicitly initialized
and destroyed. This is not really pretty now, but it's more efficient than the statically
initialized object in key.cpp (which made, for example, bitcoin-tx slow, as for most of
its calls libsecp256k1 wasn't actually needed).
This also brings in the new blinding support in libsecp256k1. By passing in a random
seed, temporary variables during the elliptic curve computations are altered, in such
a way that if an attacker does not know the blind, observing the internal operations
leaks less information about the keys used. This was implemented by Greg Maxwell.
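A sketch of the resulting pattern (illustrative only: the names follow the public libsecp256k1 API, which may differ slightly between versions, and passing the seed in as a parameter is a simplification of how key.cpp actually gathers its randomness):
```
#include <secp256k1.h>
#include <cassert>
#include <cstddef>

static secp256k1_context* secp256k1_ctx = NULL;

void ECC_Start(const unsigned char seed32[32])
{
    assert(secp256k1_ctx == NULL);
    secp256k1_ctx = secp256k1_context_create(SECP256K1_CONTEXT_SIGN | SECP256K1_CONTEXT_VERIFY);

    // Blinding: 32 bytes of randomness perturb internal temporaries so that
    // an observer of the internal operations learns less about the keys used.
    int ret = secp256k1_context_randomize(secp256k1_ctx, seed32);
    assert(ret);
}

void ECC_Stop()
{
    if (secp256k1_ctx) {
        secp256k1_context_destroy(secp256k1_ctx);
        secp256k1_ctx = NULL;
    }
}
```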
bba2216 RPC test for "#5418 Report missing inputs in sendrawtransaction" (Jonas Schnelli)
de8e801 Report missing inputs in sendrawtransaction (Pieter Wuille)
Use a probabilistic bloom filter to keep track of which addresses
we think we have given our peers, instead of a list.
This uses much less memory, at the cost of sometimes failing to
relay an address to a peer; in the worst case, when the bloom filter
happens to be as full as it gets, that is about 1-in-1,000.
Measured memory usage of a full mruset setAddrKnown: 650Kbytes
Constant memory usage of CRollingBloomFilter addrKnown: 37Kbytes.
This will also help heap fragmentation, because the 37K of storage
is allocated when a CNode is created (when a connection to a peer
is established) and then there is no per-item-remembered memory
allocation.
I plan on testing by restarting a full node with an empty peers.dat,
running a while with -debug=addrman and -debug=net, and making sure
that the 'addr' message traffic out is reasonable.
(suggestions for better tests welcome)
For when you need to keep track of the last N items
you've seen, and can tolerate some false-positives.
Rebased-by: Pieter Wuille <pieter.wuille@gmail.com>
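A minimal sketch of the rolling idea (not the actual CRollingBloomFilter implementation; the sizes and hash choices below are arbitrary and not tuned for the 1-in-1,000 false-positive rate mentioned above): two fixed-size bloom filter generations, with the older one dropped once enough items have been inserted into the newer one.
```
#include <bitset>
#include <cstddef>
#include <functional>
#include <string>

class RollingBloom
{
    static const size_t BITS = 1 << 16; // bits per generation
    static const size_t HASHES = 4;     // hash functions per item

public:
    explicit RollingBloom(size_t itemsPerGeneration)
        : nMax(itemsPerGeneration), nInserted(0) {}

    void insert(const std::string& item)
    {
        if (nInserted >= nMax) {  // roll: forget the oldest generation
            older = current;
            current.reset();
            nInserted = 0;
        }
        for (size_t i = 0; i < HASHES; ++i)
            current.set(Hash(item, i));
        ++nInserted;
    }

    // May return false positives; items older than two generations are forgotten.
    bool contains(const std::string& item) const
    {
        bool inCurrent = true, inOlder = true;
        for (size_t i = 0; i < HASHES; ++i) {
            size_t bit = Hash(item, i);
            inCurrent = inCurrent && current.test(bit);
            inOlder = inOlder && older.test(bit);
        }
        return inCurrent || inOlder;
    }

private:
    static size_t Hash(const std::string& item, size_t n)
    {
        // Cheap stand-in for the salted hashes a real bloom filter would use.
        return std::hash<std::string>()(item + char('A' + n)) % BITS;
    }

    size_t nMax, nInserted;
    std::bitset<BITS> current, older;
};
```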
1ec900a Remove broken+useless lock/unlock log prints (Matt Corallo)
352ed22 Add merkle blocks test (Matt Corallo)
59ed61b Add RPC call to generate and verify merkle blocks (Matt Corallo)
30da90d Add CMerkleBlock constructor for tx set + block and an empty one (Matt Corallo)
This commit adds several tests to the script_invalid.json data which
exercise some edge conditions that are not currently being tested.
These are mainly being added to cover several cases that a branch coverage
analysis of btcd showed are not already covered, but given that more
tests of edge conditions are always a good thing, I'm contributing
them upstream.
The test which is intended to prove that the script engine is properly
rejecting non-minimally encoded PUSHDATA4 data is using the wrong
opcode and value. The test is using 0x4f, which is OP_1NEGATE instead
of the desired 0x4e, which is OP_PUSHDATA4. Further, the push of data
is intended to be 256 bytes, but the length the test encodes is
0x00100000 (4096 when read as little-endian), instead of the desired
0x00010000 (256).
This commit fixes both issues.
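For reference, the intended byte layout looks like this (illustrative only, not the literal test vector):
```
// An OP_PUSHDATA4 push of 256 bytes. The 4-byte length that follows the opcode
// is little-endian, so 256 is encoded as 00 01 00 00. Pushing only 256 bytes
// with OP_PUSHDATA4 is non-minimal, which is exactly what the corrected test
// expects the script engine to reject.
const unsigned char nonMinimalPush[] = {
    0x4e,                   // OP_PUSHDATA4 (not 0x4f, which is OP_1NEGATE)
    0x00, 0x01, 0x00, 0x00  // length = 0x00000100 = 256, little-endian
    /* ...followed by the 256 data bytes... */
};
```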
This was found while examining the branch coverage in btcd against only
these tests to help find missing branch coverage.
This adds a -prune=N option to bitcoind, which if set to N>0 will enable block
file pruning. When pruning is enabled, block and undo files will be deleted to
try to keep total space used by those files to below the prune target (N, in
MB) specified by the user, subject to some constraints:
- The last 288 blocks on the main chain are always kept (MIN_BLOCKS_TO_KEEP),
- N must be at least 550MB (chosen as a value for the target that could
reasonably be met, with some assumptions about block sizes, orphan rates,
etc; see comment in main.h),
- No blocks are pruned until chainActive is at least 100,000 blocks long (on
mainnet; defined separately for mainnet, testnet, and regtest in chainparams
as nPruneAfterHeight).
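For example, pruning with roughly the minimum allowed target can be enabled by starting the node as follows (an illustrative invocation, not taken from the PR):
```
bitcoind -prune=550
```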
This unsets NODE_NETWORK if pruning is enabled.
Also included is an RPC test for pruning (pruning.py).
Thanks to @rdponticelli for earlier work on this feature; this is based in
part on that work.
Includes parts of @mhearn's #4351:
* allows querying the utxos over REST
* same binary input and outputs as mentioned in BIP64
* input format = output format
* various rpc/rest regtests
Ever since #5957 there has been the problem that older RPC test cases
(plenty of which can be found in open pulls) use setgenerate() on regtest,
assuming a different interpretation of the arguments. Directly
generating a number of blocks has been split off into a new method
`generate`; however, using `setgenerate` with the previous arguments will
spawn an unreasonable number of threads and simply not work as
expected, with no clear indication of the error.
Add an error to point the user at the right method.
It's reasonable that automatic coin selection will not pick a zero-value
txout, but such outputs are actually spendable, and you should know
if you have them. Listing them also makes them available to tools like
dust-b-gone.
- write "Bitcoins" uppercase
- replace secure/insecure for payment requests with
authenticated/unauthenticated
- change a translatable string for payment request expiry to match another
existing string, so that only ONE resulting string needs to be translated
On hosts that had spent some time with a failed internet connection, the
nAttempts penalty was going through the roof (e.g. thousands for all peers),
and as a result the connect search was pegging the CPU and failing to get
more than 4 connections after days of running (because it was taking so
long per try).
According to Tor's extensions to the SOCKS protocol
(https://gitweb.torproject.org/torspec.git/tree/socks-extensions.txt)
it is possible to perform stream isolation by providing authentication
to the proxy. Each set of credentials will create a new circuit,
which makes it harder to correlate connections.
This patch adds an option, `-proxyrandomize` (on by default), that randomizes
credentials for every outgoing connection, thus creating a new circuit.
2015-03-16 15:29:59 SOCKS5 Sending proxy authentication 3842137544:3256031132
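A minimal sketch of the credential randomization (illustrative only, not the actual -proxyrandomize implementation): generate a fresh random username/password pair per outgoing connection, which Tor treats as its own isolation group and serves over a separate circuit.
```
#include <cstdint>
#include <random>
#include <string>

struct ProxyCredentials {
    std::string username;
    std::string password;
};

ProxyCredentials RandomizedCredentials()
{
    static std::random_device rd;
    std::uniform_int_distribution<uint32_t> dist;
    // Matches the shape of the debug.log line above: two random 32-bit numbers.
    return ProxyCredentials{ std::to_string(dist(rd)), std::to_string(dist(rd)) };
}
```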
6171e49 [Qt] Use identical strings for expired payment request message (Philip Kaufmann)
06087bd [Qt] minor comment updates in PaymentServer (Philip Kaufmann)
35d1595 [Qt] constify first parameter of processPaymentRequest() (Philip Kaufmann)
9b14aef [Qt] take care of a missing typecast in PaymentRequestPlus::getMerchant() (Philip Kaufmann)
d19ae3c [Qt] remove unused PaymentRequestPlus::getPKIType function (Philip Kaufmann)
6e17a74 [Qt] paymentserver: better logging of invalid certs (Philip Kaufmann)
5a53d7c [Qt] paymentserver: do not log NULL certificates (Philip Kaufmann)
Compare the block download timeout to what the timeout would be if calculated
based on current time and current value of nQueuedValidatedHeaders, but
ignoring other in-flight blocks from the same peer. If the calculation based on
present conditions is shorter, then set that to be the time after which we
disconnect the peer for not delivering this block.
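Conceptually, the adjustment only ever moves the timeout earlier, never later (a sketch under assumed names; the exact timeout formula in the networking code differs):
```
#include <algorithm>
#include <cstdint>

int64_t MaybeShortenTimeout(int64_t nCurrentTimeout, int64_t nNow,
                            int64_t nBaseTimeout, int64_t nPerHeaderTimeout,
                            int nQueuedValidatedHeaders)
{
    // Timeout as it would be computed right now, counting only queued validated
    // headers and ignoring other blocks in flight from the same peer.
    int64_t nRecalculated = nNow + nBaseTimeout + nPerHeaderTimeout * nQueuedValidatedHeaders;
    return std::min(nCurrentTimeout, nRecalculated);
}
```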
Before and after output, tested on Windows:
before:
GUI: ReportInvalidCertificate : Payment server found
an invalid certificate: ("Microsoft Authenticode(tm) Root Authority")
GUI: ReportInvalidCertificate : Payment server found
an invalid certificate: ()
GUI: ReportInvalidCertificate : Payment server found
an invalid certificate: ()
GUI: ReportInvalidCertificate : Payment server found
an invalid certificate: ()
after:
GUI: ReportInvalidCertificate: Payment server found an
invalid certificate: "01" ("Microsoft Authenticode(tm) Root Authority")
() ()
GUI: ReportInvalidCertificate: Payment server found an
invalid certificate: "01" () () ("Copyright (c) 1997 Microsoft Corp.",
"Microsoft Time Stamping Service Root", "Microsoft Corporation")
GUI: ReportInvalidCertificate: Payment server found an
invalid certificate: "4a:19:d2:38:8c:82:59:1c:a5:5d:73:5f:15:5d:dc:a3" ()
() ("NO LIABILITY ACCEPTED, (c)97 VeriSign, Inc.", "VeriSign Time Stamping
Service Root", "VeriSign, Inc.")
GUI: ReportInvalidCertificate: Payment server found an
invalid certificate: "e4:9e:fd:f3:3a:e8:0e:cf:a5:11:3e:19:a4:24:02:32" ()
() ("Class 3 Public Primary Certification Authority")
Some tests in CheckBlockIndex require chainActive.Tip(), but when reindexing, chainActive has not been set on the first call to CheckBlockIndex.
reindex.py starts a node, mines 3 blocks, stops, and reindexes with CheckBlockIndex enabled.
437ada3 Switch test case signing to RFC6979 extra entropy (Pieter Wuille)
9d09322 Squashed 'src/secp256k1/' changes from 50cc6ab..1897b8e (Pieter Wuille)