Merge pull request #5825 from knst/bp-v21-p11

backport: bitcoin#19215, #19347, #19371, #19453, #19494, #19495, #19514, #19517, #21346, partial bitcoin-core/gui#30
PastaPastaPasta 2024-01-23 22:15:23 -06:00 committed by GitHub
commit ac60d636b3
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
22 changed files with 126 additions and 147 deletions

View File

@ -14,11 +14,8 @@ services:
- docker
cache:
ccache: true
directories:
- $BASE_BUILD_DIR/releases/$HOST
before_cache:
- if [ "${TRAVIS_OS_NAME}" = "osx" ]; then brew cleanup; fi
env:
global:
# DOCKER_HUB_USER
@ -70,7 +67,7 @@ before_script:
- for i in {1..4}; do echo "$(sleep 500)" ; done &
- set -o errexit; source ./ci/test/05_before_script.sh &> "/dev/null"
script:
- if [ $SECONDS -gt 1200 ]; then set +o errexit; echo "Travis early exit to cache current state"; false; else set -o errexit; source .travis/test_06_script.sh; fi
- if [[ $SECONDS -gt 1200 ]]; then set +o errexit; echo "Travis early exit to cache current state"; false; else set -o errexit; source .travis/test_06_script.sh; fi
after_script:
- echo $TRAVIS_COMMIT_RANGE
jobs:
@ -257,22 +254,12 @@ after_success:
# Xcode 11.3.1, macOS 10.14, SDK 10.15
# https://docs.travis-ci.com/user/reference/osx/#macos-version
osx_image: xcode11.3
cache:
directories:
- $TRAVIS_BUILD_DIR/ci/scratch/.ccache
- $TRAVIS_BUILD_DIR/releases/$HOST
- $HOME/Library/Caches/Homebrew
- /usr/local/Homebrew
addons:
homebrew:
packages:
- libtool
- berkeley-db4
- boost
- miniupnpc
- qt
- qrencode
- python3
- ccache
- zeromq
env: >-

View File

@ -6,23 +6,24 @@ welcome to contribute towards development in the form of peer review, testing
and patches. This document explains the practical process and guidelines for
contributing.
Firstly in terms of structure, there is no particular concept of "Core
First, in terms of structure, there is no particular concept of "Dash Core
developers" in the sense of privileged people. Open source often naturally
revolves around meritocracy where longer term contributors gain more trust from
the developer community. However, some hierarchy is necessary for practical
purposes. As such there are repository "maintainers" who are responsible for
merging pull requests as well as a "lead maintainer" who is responsible for the
release cycle, overall merging, moderation and appointment of maintainers.
revolves around a meritocracy where contributors earn trust from the developer
community over time. Nevertheless, some hierarchy is necessary for practical
purposes. As such, there are repository "maintainers" who are responsible for
merging pull requests, as well as a "lead maintainer" who is responsible for the
release cycle as well as overall merging, moderation and appointment of
maintainers.
Getting Started
---------------
New contributors are very welcome and needed.
Reviewing and testing is the most effective way you can contribute as a new
contributor, and it also will teach you much more about the code and process
than opening PRs. Please refer to the section [peer review](#peer-review) later
in this document.
Reviewing and testing is highly valued and the most effective way you can contribute
as a new contributor. It also will teach you much more about the code and
process than opening pull requests. Please refer to the [peer review](#peer-review)
section below.
Before you start contributing, familiarize yourself with the Dash Core build
system and tests. Refer to the documentation in the repository on how to build
@ -48,12 +49,19 @@ you are encouraged to leave a comment if you are planning to work on it. This
will help other contributors monitor which issues are actively being addressed
and is also an effective way to request assistance if and when you need it.
Communication Channels
----------------------
Most communication about Dash Core development happens on the Discord server.
Discussion about codebase improvements happens in GitHub issues and pull
requests.
Contributor Workflow
--------------------
The codebase is maintained using the "contributor workflow" where everyone
without exception contributes patch proposals using "pull requests". This
without exception contributes patch proposals using "pull requests" (PRs). This
facilitates social contribution, easy testing and peer review.
To contribute a patch, the workflow is as follows:
@ -71,6 +79,9 @@ In general, [commits should be atomic](https://en.wikipedia.org/wiki/Atomic_comm
and diffs should be easy to read. For this reason, do not mix any formatting
fixes or code moves with actual code changes.
Make sure each individual commit is hygienic: that it builds successfully on its
own without warnings, errors, regressions, or test failures.
Commit messages should be verbose by default consisting of a short subject line
(50 chars max), a blank line and detailed explanatory text as separate
paragraph(s), unless the title alone is self-explanatory (like "Correct typo
@ -82,7 +93,7 @@ If a particular commit references another issue, please add the reference. For
example: `refs #1234` or `fixes #4321`. Using the `fixes` or `closes` keywords
will cause the corresponding issue to be closed when the pull request is merged.
Commit messages should never contain any `@` mentions.
Commit messages should never contain any `@` mentions (usernames prefixed with "@").
Please refer to the [Git manual](https://git-scm.com/doc) for more information
about Git.
@ -129,10 +140,16 @@ Examples:
fix(log): fix typo in log message
feat(rpc)!: modify gettransaction parameter type
The body of the pull request should contain enough description about what the
patch does together with any justification/reasoning. You should include
references to any discussions (for example other tickets or mailing list
discussions).
The body of the pull request should contain sufficient description of *what* the
patch does, and even more importantly, *why*, with justification and reasoning.
You should include references to any discussions (for example, other issues or
mailing list discussions).
The description for a new pull request should not contain any `@` mentions. The
PR description will be included in the commit message when the PR is merged and
any users mentioned in the description will be annoyingly notified each time a
fork of Dash Core copies the merge. Instead, make any username mentions in a
subsequent comment to the PR.
### Translation changes
@ -168,13 +185,13 @@ before it will be reviewed. The basic squashing workflow is shown below.
# Save and quit.
git push -f # (force push to GitHub)
Please update the resulting commit message if needed. It should read as a
coherent message. In most cases, this means that you should not just list the
interim commits.
Please update the resulting commit message, if needed. It should read as a
coherent message. In most cases, this means not just listing the interim
commits.
If you have problems with squashing (or other workflows with `git`), you can
alternatively enable "Allow edits from maintainers" in the right GitHub
sidebar and ask for help in the pull request.
If you have problems with squashing or other git workflows, you can enable
"Allow edits from maintainers" in the right-hand sidebar of the GitHub web
interface and ask for help in the pull request.
Please refrain from creating several pull requests for the same change.
Use the pull request that is already open (or was created earlier) to amend
@ -256,8 +273,8 @@ In general, all pull requests must:
- Have a clear use case, fix a demonstrable bug or serve the greater good of
the project (for example refactoring for modularisation);
- Be well peer reviewed;
- Have unit tests and functional tests where appropriate;
- Be well peer-reviewed;
- Have unit tests, functional tests, and fuzz tests, where appropriate;
- Follow code style guidelines ([C++](doc/developer-notes.md), [functional tests](test/functional/README.md));
- Not break the existing test suite;
- Where bugs are fixed, where possible, there should be unit tests
@ -284,41 +301,40 @@ spread out over GitHub, mailing list and IRC discussions).
#### Conceptual Review
A review can be a conceptual review, where the reviewer leaves a comment
* `Concept (N)ACK`, meaning "I do (not) agree in the general goal of this pull
* `Concept (N)ACK`, meaning "I do (not) agree with the general goal of this pull
request",
* `Approach (N)ACK`, meaning `Concept ACK`, but "I do (not) agree with the
approach of this change".
#### Code Review
After conceptual agreement on the change, code review can be provided. It is
starting with `ACK BRANCH_COMMIT`, where `BRANCH_COMMIT` is the top of the
topic branch. The review is followed by a description of how the reviewer did
the review. The following
language is used within pull-request comments:
After conceptual agreement on the change, code review can be provided. A review
begins with `ACK BRANCH_COMMIT`, where `BRANCH_COMMIT` is the top of the PR
branch, followed by a description of how the reviewer did the review. The
following language is used within pull request comments:
- (t)ACK means "I have tested the code and I agree it should be merged", involving
change-specific manual testing in addition to running the unit and functional
tests, and in case it is not obvious how the manual testing was done, it should
be described;
- (t)ACK means "I have tested the code and I agree it should be merged",
involving change-specific manual testing in
addition to running the unit, functional, or fuzz tests, and in case it is
not obvious how the manual testing was done, it should be described;
- NACK means "I disagree this should be merged", and must be accompanied by
sound technical justification (or in certain cases of copyright/patent/licensing
issues, legal justification). NACKs without accompanying reasoning may be
disregarded;
- utACK means "I have not tested the code, but I have reviewed it and it looks
OK, I agree it can be merged";
- Nit refers to trivial, often non-blocking issues.
- A "nit" refers to a trivial, often non-blocking issue.
Project maintainers reserve the right to weigh the opinions of peer reviewers
using common sense judgement and also may weight based on meritocracy: Those
that have demonstrated a deeper commitment and understanding towards the project
(over time) or have clear domain expertise may naturally have more weight, as
one would expect in all walks of life.
using common sense judgement and may also weigh based on merit. Reviewers that
have demonstrated a deeper commitment and understanding of the project over time
or who have clear domain expertise may naturally have more weight, as one would
expect in all walks of life.
Where a patch set affects consensus critical code, the bar will be set much
Where a patch set affects consensus-critical code, the bar will be much
higher in terms of discussion and peer review requirements, keeping in mind that
mistakes could be very costly to the wider community. This includes refactoring
of consensus critical code.
of consensus-critical code.
Where a patch set proposes to change the Dash consensus, it must have been
discussed extensively on the mailing list and IRC, be accompanied by a widely
@ -353,7 +369,7 @@ of reasons for this, some of which you can do something about:
- It may be because of a feature freeze due to an upcoming release. During this time,
only bug fixes are taken into consideration. If your pull request is a new feature,
it will not be prioritized until the release is over. Wait for release.
it will not be prioritized until after the release. Wait for the release.
- It may be because the changes you are suggesting do not appeal to people. Rather than
nits and critique, which require effort and means they care enough to spend time on your
contribution, thundering silence is a good sign of widespread (mild) dislike of a given change
@ -363,16 +379,18 @@ of reasons for this, some of which you can do something about:
[developer notes](doc/developer-notes.md), is dangerous or insecure, is messily written, etc.
Identify and address any of the issues you find. Then ask e.g. on the forum or on a community
discord if someone could give their opinion on the concept itself.
- It may be because your code is too complex for all but a few people. And those people
- It may be because your code is too complex for all but a few people, and those people
may not have realized your pull request even exists. A great way to find people who
are qualified and care about the code you are touching is the
[Git Blame feature](https://help.github.com/articles/tracing-changes-in-a-file/). Simply
find the person touching the code you are touching before you and see if you can find
them and give them a nudge. Don't be incessant about the nudging though.
look up who last modified the code you are changing and see if you can find
them and give them a nudge. Don't be incessant about the nudging, though.
- Finally, if all else fails, ask on discord or elsewhere for someone to give your pull request
a look. If you think you've been waiting an unreasonably long amount of time (month+) for
no particular reason (few lines changed, etc), this is totally fine. Try to return the favor
when someone else is asking for feedback on their code, and universe balances out.
a look. If you think you've been waiting for an unreasonably long time (say,
more than a month) for no particular reason (a few lines changed, etc.),
this is totally fine. Try to return the favor when someone else is asking
for feedback on their code, and the universe balances out.
- Remember that the best thing you can do while waiting is give review to others!
Backporting
@ -381,11 +399,11 @@ Backporting
Security and bug fixes can be backported from `master` to release
branches.
If the backport is non-trivial, it may be appropriate to open an
additional PR, to backport the change, only after the original PR
additional PR to backport the change, but only after the original PR
has been merged.
Otherwise, backports will be done in batches and
the maintainers will use the proper `Needs backport (...)` labels
when needed (the original author does not need to worry).
when needed (the original author does not need to worry about it).
A backport should contain the following metadata in the commit body:

View File

@ -36,6 +36,7 @@ export BASE_SCRATCH_DIR=${BASE_SCRATCH_DIR:-$BASE_ROOT_DIR/ci/scratch}
export HOST=${HOST:-$("$BASE_ROOT_DIR/depends/config.guess")}
# Whether to prefer BusyBox over GNU utilities
export USE_BUSY_BOX=${USE_BUSY_BOX:-false}
export RUN_UNIT_TESTS=${RUN_UNIT_TESTS:-true}
export RUN_INTEGRATION_TESTS=${RUN_INTEGRATION_TESTS:-true}
export RUN_SECURITY_TESTS=${RUN_SECURITY_TESTS:-false}
@ -46,6 +47,8 @@ export TEST_RUNNER_TIMEOUT_FACTOR=${TEST_RUNNER_TIMEOUT_FACTOR:-4}
export TEST_RUNNER_ENV=${TEST_RUNNER_ENV:-}
export RUN_FUZZ_TESTS=${RUN_FUZZ_TESTS:-false}
export RUN_SYMBOL_TESTS=${RUN_SYMBOL_TESTS:-true}
export EXPECTED_TESTS_DURATION_IN_SECONDS=${EXPECTED_TESTS_DURATION_IN_SECONDS:-1000}
export CONTAINER_NAME=${CONTAINER_NAME:-ci_unnamed}
export DOCKER_NAME_TAG=${DOCKER_NAME_TAG:-ubuntu:focal}
# Randomize test order.

View File

@ -14,4 +14,4 @@ export XCODE_BUILD_ID=12B45b
export RUN_UNIT_TESTS=false
export RUN_INTEGRATION_TESTS=false
export GOAL="all deploy"
export BITCOIN_CONFIG="--enable-gui --enable-reduce-exports --disable-miner --enable-werror"
export BITCOIN_CONFIG="--with-gui --enable-reduce-exports --disable-miner --enable-werror"

View File

@ -9,9 +9,14 @@ export LC_ALL=C.UTF-8
export CONTAINER_NAME=ci_macos
export HOST=x86_64-apple-darwin
export PIP_PACKAGES="zmq lief"
export RUN_SECURITY_TESTS="true"
export GOAL="install"
export BITCOIN_CONFIG="--enable-gui --enable-reduce-exports --disable-miner --enable-werror"
# Run without depends
export BITCOIN_CONFIG="--with-gui --enable-reduce-exports --disable-miner --enable-werror"
export NO_DEPENDS=1
export OSX_SDK=""
export CCACHE_SIZE=300M
export RUN_SECURITY_TESTS="true"
if [ "$TRAVIS_REPO_SLUG" != "dashpay/dash" ]; then
export RUN_FUNCTIONAL_TESTS="false"
export EXPECTED_TESTS_DURATION_IN_SECONDS=200
fi

View File

@ -8,6 +8,7 @@ export LC_ALL=C.UTF-8
export PACKAGES="clang llvm python3-zmq qtbase5-dev qttools5-dev-tools libevent-dev bsdmainutils libboost-filesystem-dev libboost-test-dev libdb5.3++-dev libminiupnpc-dev libzmq3-dev libqrencode-dev"
export NO_DEPENDS=1
export TEST_RUNNER_EXTRA="--timeout-factor=4" # Increase timeout because sanitizers slow down
export FUNCTIONAL_TESTS_CONFIG="--exclude wallet_multiwallet.py" # Temporarily suppress ASan heap-use-after-free (see issue #14163)
export RUN_BENCH=true
export GOAL="install"

View File

@ -10,6 +10,7 @@ export CONTAINER_NAME=ci_native_tsan
export PACKAGES="clang-15 llvm-15 python3-zmq qtbase5-dev qttools5-dev-tools libevent-dev bsdmainutils libboost-filesystem-dev libboost-test-dev libboost-thread-dev libdb5.3++-dev libminiupnpc-dev libzmq3-dev libqrencode-dev"
export DEP_OPTS="NO_UPNP=1 DEBUG=1"
export TEST_RUNNER_EXTRA="--extended --exclude feature_pruning,feature_dbcrash,wallet_multiwallet.py" # Temporarily suppress ASan heap-use-after-free (see issue #14163)
export TEST_RUNNER_EXTRA="${TEST_RUNNER_EXTRA} --timeout-factor=4" # Increase timeout because sanitizers slow down
export GOAL="install"
export BITCOIN_CONFIG="--enable-zmq --enable-reduce-exports --enable-crash-hooks --with-sanitizers=thread"
export BITCOIN_CONFIG="${BITCOIN_CONFIG} CC=clang-15 CXX=clang++-15 CXXFLAGS=-Werror=thread-safety"

View File

@ -9,6 +9,6 @@ export LC_ALL=C.UTF-8
export PACKAGES="valgrind clang llvm python3-zmq libevent-dev bsdmainutils libboost-system-dev libboost-filesystem-dev libboost-test-dev libdb5.3++-dev libminiupnpc-dev libzmq3-dev"
export USE_VALGRIND=1
export NO_DEPENDS=1
export TEST_RUNNER_EXTRA="--exclude rpc_bind" # Excluded for now, see https://github.com/bitcoin/bitcoin/issues/17765#issuecomment-602068547
export TEST_RUNNER_EXTRA="--exclude rpc_bind --timeout-factor=4" # Excluded for now, see https://github.com/bitcoin/bitcoin/issues/17765#issuecomment-602068547
export GOAL="install"
export BITCOIN_CONFIG="--enable-zmq --with-incompatible-bdb --with-gui=no --enable-suppress-external-warnings CC=clang-15 CXX=clang++-15" # TODO enable GUI

View File

@ -14,7 +14,6 @@ if [[ $QEMU_USER_CMD == qemu-s390* ]]; then
fi
if [ "$TRAVIS_OS_NAME" == "osx" ]; then
export PATH="/usr/local/opt/ccache/libexec:$PATH"
${CI_RETRY_EXE} pip3 install $PIP_PACKAGES
fi

View File

@ -19,7 +19,7 @@ Then install [Homebrew](https://brew.sh).
## Dependencies
```shell
brew install automake libtool boost gmp miniupnpc pkg-config python qt libevent libnatpmp qrencode
brew install automake libtool boost gmp miniupnpc pkg-config python qt@5 libevent libnatpmp qrencode
```
If you run into issues, check [Homebrew's troubleshooting page](https://docs.brew.sh/Troubleshooting).

View File

@ -763,8 +763,8 @@ static void ParseError(const UniValue& error, std::string& strPrint, int& nRet)
*/
static void GetWalletBalances(UniValue& result)
{
std::unique_ptr<BaseRequestHandler> rh{std::make_unique<DefaultRequestHandler>()};
const UniValue listwallets = ConnectAndCallRPC(rh.get(), "listwallets", /* args=*/{});
DefaultRequestHandler rh;
const UniValue listwallets = ConnectAndCallRPC(&rh, "listwallets", /* args=*/{});
if (!find_value(listwallets, "error").isNull()) return;
const UniValue& wallets = find_value(listwallets, "result");
if (wallets.size() <= 1) return;
@ -772,7 +772,7 @@ static void GetWalletBalances(UniValue& result)
UniValue balances(UniValue::VOBJ);
for (const UniValue& wallet : wallets.getValues()) {
const std::string wallet_name = wallet.get_str();
const UniValue getbalances = ConnectAndCallRPC(rh.get(), "getbalances", /* args=*/{}, wallet_name);
const UniValue getbalances = ConnectAndCallRPC(&rh, "getbalances", /* args=*/{}, wallet_name);
const UniValue& balance = find_value(getbalances, "result")["mine"]["trusted"];
balances.pushKV(wallet_name, balance);
}
@ -787,8 +787,8 @@ static UniValue GetNewAddress()
{
std::optional<std::string> wallet_name{};
if (gArgs.IsArgSet("-rpcwallet")) wallet_name = gArgs.GetArg("-rpcwallet", "");
std::unique_ptr<BaseRequestHandler> rh{std::make_unique<DefaultRequestHandler>()};
return ConnectAndCallRPC(rh.get(), "getnewaddress", /* args=*/{}, wallet_name);
DefaultRequestHandler rh;
return ConnectAndCallRPC(&rh, "getnewaddress", /* args=*/{}, wallet_name);
}
/**
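
The two hunks above swap a heap-allocated `std::unique_ptr<BaseRequestHandler>` for a plain automatic `DefaultRequestHandler`, since the handler never outlives the function that creates it. A minimal sketch of the same pattern, using hypothetical `Handler`/`Call` stand-ins rather than the real bitcoin-cli types:

```cpp
#include <iostream>
#include <memory>
#include <string>

// Hypothetical stand-in for BaseRequestHandler/DefaultRequestHandler.
struct Handler {
    std::string Prepare(const std::string& method) const { return "call:" + method; }
};

// The callee only borrows the handler, so a raw pointer (or reference) is enough.
std::string Call(const Handler* handler, const std::string& method)
{
    return handler->Prepare(method);
}

int main()
{
    // Before: a heap allocation for a short-lived helper.
    std::unique_ptr<Handler> rh_heap{std::make_unique<Handler>()};
    std::cout << Call(rh_heap.get(), "listwallets") << '\n';

    // After: an automatic (stack) object with identical behaviour.
    Handler rh;
    std::cout << Call(&rh, "getbalances") << '\n';
}
```

Either form works; the stack object simply avoids an allocation and a level of indirection for a helper that is used and discarded within one function.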

View File

@ -1239,7 +1239,7 @@ void CConnman::CreateNodeFromAcceptedSocket(SOCKET hSocket,
SetSocketNoDelay(hSocket);
// Don't accept connections from banned peers.
bool banned = m_banman->IsBanned(addr);
bool banned = m_banman && m_banman->IsBanned(addr);
if (!NetPermissions::HasFlag(permissionFlags, NetPermissionFlags::PF_NOBAN) && banned)
{
LogPrint(BCLog::NET, "%s (banned)\n", strDropped);
@ -1248,7 +1248,7 @@ void CConnman::CreateNodeFromAcceptedSocket(SOCKET hSocket,
}
// Only accept connections from discouraged peers if our inbound slots aren't (almost) full.
bool discouraged = m_banman->IsDiscouraged(addr);
bool discouraged = m_banman && m_banman->IsDiscouraged(addr);
if (!NetPermissions::HasFlag(permissionFlags, NetPermissionFlags::PF_NOBAN) && nInbound + 1 >= nMaxInbound && discouraged)
{
LogPrint(BCLog::NET, "connection from %s dropped (discouraged)\n", addr.ToString());
@ -1400,12 +1400,9 @@ void CConnman::DisconnectNodes()
bool fDelete = false;
if (pnode->GetRefCount() <= 0) {
{
TRY_LOCK(pnode->cs_inventory, lockInv);
if (lockInv) {
TRY_LOCK(pnode->cs_vSend, lockSend);
if (lockSend) {
fDelete = true;
}
TRY_LOCK(pnode->cs_vSend, lockSend);
if (lockSend) {
fDelete = true;
}
}
if (fDelete) {
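
This hunk drops the nested try-lock on `cs_inventory` in the disconnect path, so only `cs_vSend` is probed before a node is deleted. For readers unfamiliar with the `TRY_LOCK` macro, the idea is roughly "take the lock only if it is free"; a standard-C++ illustration of that behaviour (not the project's actual macro):

```cpp
#include <iostream>
#include <mutex>

std::mutex cs_vSend;  // Stand-in for the per-node send lock.

bool TryDelete()
{
    // In spirit like TRY_LOCK(cs_vSend, lockSend): lock only if nobody holds it.
    std::unique_lock<std::mutex> lockSend(cs_vSend, std::try_to_lock);
    if (!lockSend.owns_lock()) return false;  // Still in use; try again next pass.
    // Safe to tear the node down while holding the lock.
    return true;
}

int main()
{
    std::cout << (TryDelete() ? "deleted" : "deferred") << '\n';
}
```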

View File

@ -755,6 +755,7 @@ private:
bool m_use_addrman_outgoing;
CClientUIInterface* clientInterface;
NetEventsInterface* m_msgproc;
/** Pointer to this node's banman. May be nullptr - check existence before dereferencing. */
BanMan* m_banman;
/**
@ -1283,7 +1284,7 @@ public:
// There is no final sorting before sending, as they are always sent immediately
// and in the order requested.
std::vector<uint256> vInventoryBlockToSend GUARDED_BY(cs_inventory);
RecursiveMutex cs_inventory;
Mutex cs_inventory;
/** UNIX epoch time of the last block received from this peer that we had
* not yet seen (e.g. not already received from another peer), that passed
* preliminary validity checks and was saved to disk, even if we don't
@ -1490,12 +1491,12 @@ public:
void PushInventory(const CInv& inv)
{
ASSERT_IF_DEBUG(inv.type != MSG_BLOCK);
if (inv.type == MSG_BLOCK) {
LogPrint(BCLog::NET, "%s -- adding new inv: %s peer=%d\n", __func__, inv.ToString(), id);
LOCK(cs_inventory);
vInventoryBlockToSend.push_back(inv.hash);
LogPrintf("%s -- WARNING: using PushInventory for BLOCK inv, peer=%d\n", __func__, id);
return;
}
LOCK(m_tx_relay->cs_tx_inventory);
if (m_tx_relay->filterInventoryKnown.contains(inv.hash)) {
LogPrint(BCLog::NET, "%s -- skipping known inv: %s peer=%d\n", __func__, inv.ToString(), id);
@ -1509,12 +1510,6 @@ public:
m_tx_relay->vInventoryOtherToSend.push_back(inv);
}
void PushBlockHash(const uint256 &hash)
{
LOCK(cs_inventory);
vBlockHashesToAnnounce.push_back(hash);
}
void CloseSocketDisconnect(CConnman* connman);
void copyStats(CNodeStats &stats, const std::vector<bool> &m_asmap);
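
Here `cs_inventory` becomes a plain `Mutex` and the `PushBlockHash` helper disappears; callers now take the lock and append to `vInventoryBlockToSend` themselves, as the `net_processing.cpp` hunks below show. A minimal sketch of that "lock at the call site and push directly" shape, using standard types as stand-ins:

```cpp
#include <cstdint>
#include <iostream>
#include <mutex>
#include <vector>

struct Node {
    // A non-recursive mutex is enough once no code path re-locks it.
    std::mutex cs_inventory;
    std::vector<uint64_t> vInventoryBlockToSend;  // Guarded by cs_inventory.
};

// Caller-side pattern replacing a PushBlockHash()-style helper.
void AnnounceBlock(Node& node, uint64_t hash)
{
    std::lock_guard<std::mutex> lock(node.cs_inventory);
    node.vInventoryBlockToSend.push_back(hash);
}

int main()
{
    Node node;
    AnnounceBlock(node, 0xdeadbeef);
    std::cout << node.vInventoryBlockToSend.size() << " block inv queued\n";
}
```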

View File

@ -1744,9 +1744,10 @@ void PeerManagerImpl::UpdatedBlockTip(const CBlockIndex *pindexNew, const CBlock
if (!pnode->CanRelay()) {
return;
}
LOCK(pnode->cs_inventory);
if (m_best_height > (pnode->nStartingHeight != -1 ? pnode->nStartingHeight - 2000 : 0)) {
for (const uint256& hash : reverse_iterate(vHashes)) {
pnode->PushBlockHash(hash);
pnode->vBlockHashesToAnnounce.push_back(hash);
}
}
});
@ -2069,7 +2070,7 @@ void PeerManagerImpl::ProcessGetBlockData(CNode& pfrom, const CChainParams& chai
// Trigger the peer node to send a getblocks request for the next batch of inventory
if (inv.hash == pfrom.hashContinue)
{
// Bypass PushInventory, this must send even if redundant,
// Send immediately. This must send even if redundant,
// and we want it right after the last block so they don't
// wait for other stuff first.
std::vector<CInv> vInv;
@ -3169,8 +3170,10 @@ void PeerManagerImpl::ProcessMessage(
if (addr.nTime <= 100000000 || addr.nTime > nNow + 10 * 60)
addr.nTime = nNow - 5 * 24 * 60 * 60;
pfrom.AddAddressKnown(addr);
if (m_banman->IsDiscouraged(addr)) continue; // Do not process banned/discouraged addresses beyond remembering we received them
if (m_banman->IsBanned(addr)) continue;
if (m_banman && (m_banman->IsDiscouraged(addr) || m_banman->IsBanned(addr))) {
// Do not process banned/discouraged addresses beyond remembering we received them
continue;
}
bool fReachable = IsReachable(addr);
if (addr.nTime > nSince && !pfrom.fGetAddr && vAddr.size() <= 10 && addr.IsRoutable())
{
@ -3402,7 +3405,7 @@ void PeerManagerImpl::ProcessMessage(
break;
}
if (pfrom.CanRelay()) {
pfrom.PushInventory(CInv(MSG_BLOCK, pindex->GetBlockHash()));
WITH_LOCK(pfrom.cs_inventory, pfrom.vInventoryBlockToSend.push_back(pindex->GetBlockHash()));
}
if (--nLimit <= 0)
{
@ -4924,7 +4927,7 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
// If the peer's chain has this block, don't inv it back.
if (!PeerHasHeader(&state, pindex)) {
pto->PushInventory(CInv(MSG_BLOCK, hashToAnnounce));
pto->vInventoryBlockToSend.push_back(hashToAnnounce);
LogPrint(BCLog::NET, "%s: sending inv peer=%d hash=%s\n", __func__,
pto->GetId(), hashToAnnounce.ToString());
}
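
The `getblocks` hunk above uses `WITH_LOCK(pfrom.cs_inventory, ...)` to run a single statement under the lock. A rough standard-C++ sketch of that idiom, written as a generic helper for illustration rather than the project's actual macro:

```cpp
#include <iostream>
#include <mutex>
#include <vector>

// Lock a mutex, run the callable, return its result: the WITH_LOCK idea in plain C++.
template <typename Mutex, typename Fn>
auto WithLock(Mutex& mutex, Fn&& fn) -> decltype(fn())
{
    std::lock_guard<Mutex> lock(mutex);
    return fn();
}

int main()
{
    std::mutex cs_inventory;
    std::vector<int> vInventoryBlockToSend;  // Guarded by cs_inventory.

    // One-liner equivalent of: WITH_LOCK(cs_inventory, v.push_back(1001));
    WithLock(cs_inventory, [&] { vInventoryBlockToSend.push_back(1001); });

    std::cout << vInventoryBlockToSend.size() << '\n';
}
```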

View File

@ -42,14 +42,6 @@ bool PartiallySignedTransaction::Merge(const PartiallySignedTransaction& psbt)
return true;
}
bool PartiallySignedTransaction::IsSane() const
{
for (PSBTInput input : inputs) {
if (!input.IsSane()) return false;
}
return true;
}
bool PartiallySignedTransaction::AddInput(const CTxIn& txin, PSBTInput& psbtin)
{
if (std::find(tx->vin.begin(), tx->vin.end(), txin) != tx->vin.end()) {
@ -142,11 +134,6 @@ void PSBTInput::Merge(const PSBTInput& input)
if (final_script_sig.empty() && !input.final_script_sig.empty()) final_script_sig = input.final_script_sig;
}
bool PSBTInput::IsSane() const
{
return true;
}
void PSBTOutput::FillSignatureData(SignatureData& sigdata) const
{
if (!redeem_script.empty()) {
@ -220,11 +207,6 @@ bool SignPSBTInput(const SigningProvider& provider, PartiallySignedTransaction&
// Get UTXO
CTxOut utxo;
// Verify input sanity, which checks that at most one of witness or non-witness utxos is provided.
if (!input.IsSane()) {
return false;
}
if (input.non_witness_utxo) {
// If we're taking our information from a non-witness UTXO, verify that it matches the prevout.
COutPoint prevout = tx.vin[index].prevout;
@ -297,10 +279,6 @@ TransactionError CombinePSBTs(PartiallySignedTransaction& out, const std::vector
return TransactionError::PSBT_MISMATCH;
}
}
if (!out.IsSane()) {
return TransactionError::INVALID_PSBT;
}
return TransactionError::OK;
}

View File

@ -56,7 +56,6 @@ struct PSBTInput
void FillSignatureData(SignatureData& sigdata) const;
void FromSignatureData(const SignatureData& sigdata);
void Merge(const PSBTInput& input);
bool IsSane() const;
PSBTInput() {}
template <typename Stream>
@ -229,7 +228,6 @@ struct PSBTOutput
void FillSignatureData(SignatureData& sigdata) const;
void FromSignatureData(const SignatureData& sigdata);
void Merge(const PSBTOutput& output);
bool IsSane() const;
PSBTOutput() {}
template <typename Stream>
@ -330,7 +328,6 @@ struct PartiallySignedTransaction
/** Merge psbt into this. The two psbts must have the same underlying CTransaction (i.e. the
* same actual Bitcoin transaction.) Returns true if the merge succeeded, false otherwise. */
[[nodiscard]] bool Merge(const PartiallySignedTransaction& psbt);
bool IsSane() const;
bool AddInput(const CTxIn& txin, PSBTInput& psbtin);
bool AddOutput(const CTxOut& txout, const PSBTOutput& psbtout);
PartiallySignedTransaction() {}
@ -478,10 +475,6 @@ struct PartiallySignedTransaction
if (outputs.size() != tx->vout.size()) {
throw std::ios_base::failure("Outputs provided does not match the number of outputs in transaction.");
}
// Sanity check
if (!IsSane()) {
throw std::ios_base::failure("PSBT is not sane.");
}
}
template <typename Stream>

View File

@ -25,17 +25,18 @@ public:
explicit ModalOverlay(bool enable_wallet, QWidget *parent);
~ModalOverlay();
public Q_SLOTS:
void tipUpdate(int count, const QDateTime& blockDate, double nVerificationProgress);
void setKnownBestHeight(int count, const QDateTime& blockDate);
void toggleVisibility();
// will show or hide the modal layer
void showHide(bool hide = false, bool userRequested = false);
void closeClicked();
void hideForever();
bool isLayerVisible() const { return layerIsVisible; }
public Q_SLOTS:
void toggleVisibility();
void closeClicked();
protected:
bool eventFilter(QObject * obj, QEvent * ev) override;
bool event(QEvent* ev) override;

View File

@ -1348,13 +1348,20 @@ UniValue decodepsbt(const JSONRPCRequest& request)
const PSBTInput& input = psbtx.inputs[i];
UniValue in(UniValue::VOBJ);
// UTXOs
bool have_a_utxo = false;
CTxOut txout;
if (input.non_witness_utxo) {
txout = input.non_witness_utxo->vout[psbtx.tx->vin[i].prevout.n];
UniValue non_wit(UniValue::VOBJ);
TxToUniv(*input.non_witness_utxo, uint256(), non_wit, false);
in.pushKV("non_witness_utxo", non_wit);
CAmount utxo_val = input.non_witness_utxo->vout[psbtx.tx->vin[i].prevout.n].nValue;
if (MoneyRange(utxo_val) && MoneyRange(total_in + utxo_val)) {
total_in += utxo_val;
have_a_utxo = true;
}
if (have_a_utxo) {
if (MoneyRange(txout.nValue) && MoneyRange(total_in + txout.nValue)) {
total_in += txout.nValue;
} else {
// Hack to just not show fee later
have_all_utxos = false;
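
The reworked `decodepsbt` hunk looks the UTXO up once into `txout` and only adds its value to `total_in` when both the value and the running sum pass `MoneyRange`; otherwise fee reporting is switched off via `have_all_utxos`. A compact sketch of that "sum only if every amount is known and in range" accumulation, with an assumed `MAX_MONEY` bound:

```cpp
#include <cstdint>
#include <iostream>
#include <optional>
#include <vector>

using CAmount = int64_t;
static constexpr CAmount MAX_MONEY = 21000000LL * 100000000LL;  // Assumed cap for the sketch.
static bool MoneyRange(CAmount n) { return n >= 0 && n <= MAX_MONEY; }

// Returns the total input value, or std::nullopt if any amount is unknown or
// out of range (in which case a decoder would simply omit the "fee" field).
std::optional<CAmount> TotalIn(const std::vector<std::optional<CAmount>>& input_values)
{
    CAmount total_in = 0;
    for (const auto& value : input_values) {
        if (!value || !MoneyRange(*value) || !MoneyRange(total_in + *value)) {
            return std::nullopt;  // Unknown or implausible amount: don't report a fee.
        }
        total_in += *value;
    }
    return total_in;
}

int main()
{
    std::cout << TotalIn({{5000}, {7000}}).value_or(-1) << '\n';        // 12000
    std::cout << TotalIn({{5000}, std::nullopt}).value_or(-1) << '\n';  // -1 (no fee shown)
}
```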

View File

@ -33,7 +33,6 @@ FUZZ_TARGET(psbt)
}
(void)psbt.IsNull();
(void)psbt.IsSane();
std::optional<CMutableTransaction> tx = psbt.tx;
if (tx) {
@ -44,7 +43,6 @@ FUZZ_TARGET(psbt)
for (const PSBTInput& input : psbt.inputs) {
(void)PSBTInputSigned(input);
(void)input.IsNull();
(void)input.IsSane();
}
for (const PSBTOutput& output : psbt.outputs) {

View File

@ -710,11 +710,6 @@ TransactionError LegacyScriptPubKeyMan::FillPSBT(PartiallySignedTransaction& psb
continue;
}
// Verify input looks sane. This will check that we have at most one uxto.
if (!input.IsSane()) {
return TransactionError::INVALID_PSBT;
}
// Get the Sighash type
if (sign && input.sighash_type > 0 && input.sighash_type != sighash_type) {
return TransactionError::SIGHASH_MISMATCH;

View File

@ -3003,11 +3003,6 @@ TransactionError CWallet::FillPSBT(PartiallySignedTransaction& psbtx, bool& comp
continue;
}
// Verify input looks sane. This will check that we have at most one uxto, witness or non-witness.
if (!input.IsSane()) {
return TransactionError::INVALID_PSBT;
}
// If we have no utxo, grab it from the wallet.
if (!input.non_witness_utxo) {
const uint256& txhash = txin.prevout.hash;

View File

@ -77,6 +77,9 @@ class PSBTTest(BitcoinTestFramework):
# spend single key from node 1
rawtx = self.nodes[1].walletcreatefundedpsbt([{"txid":txid,"vout":p2pkh_pos}], {self.nodes[1].getnewaddress():9.99})['psbt']
walletprocesspsbt_out = self.nodes[1].walletprocesspsbt(rawtx)
# Make sure it has both types of UTXOs
decoded = self.nodes[1].decodepsbt(walletprocesspsbt_out['psbt'])
assert 'non_witness_utxo' in decoded['inputs'][0]
assert_equal(walletprocesspsbt_out['complete'], True)
self.nodes[1].sendrawtransaction(self.nodes[1].finalizepsbt(walletprocesspsbt_out['psbt'])['hex'])