1bba72d824224f8a2625f529963d8982a00dfe14 Clarify in -maxtimeadjustment that only outbound peers influence time data (Jon Atack)
Pull request description:
#23631 changed our adjusted time to only take into account time from outbound peers.
Update `-maxtimeadjustment` to clarify this for users.
ACKs for top commit:
MarcoFalke:
cr ACK 1bba72d824224f8a2625f529963d8982a00dfe14
mzumsande:
code Review ACK 1bba72d824224f8a2625f529963d8982a00dfe14
brunoerg:
crACK 1bba72d824224f8a2625f529963d8982a00dfe14
Tree-SHA512: ad610ab3038fb83134e21d31cca952ef9ac926e88992ff93023b7010f2499f9a4d952e8e98a0ec56f8949872d966e5ffdd01a81e6b6115768f1992bd81be7a56
0c85dc30e6b628f7538a67776c7eefcb84ef4f82 p2p: Don't use timestamps from inbound peers (Martin Zumsande)
Pull request description:
`GetAdjustedTime()` (used e.g. in validation and addrman) returns a time with an offset that is influenced by timestamps that our peers have sent us in their version message.
Currently, timestamps from all peers are used for this.
However, I think that it would make sense to ignore the timedata samples from inbound peers, making it much harder for others to influence the Adjusted Time in a targeted way.
With the extra feeler connections (every 2 minutes on average) and extra block-relay-only connections (every 5 minutes on average) there are also now plenty of opportunities to gather a meaningful number of timedata samples from outbound peers.
There are some measures in place to prevent abuse: the `-maxtimeadjustment` parameter with a default of 70 minutes, warnings in cases of large deviations, and only using the first 200 samples ([explanation](383d350bd5/src/timedata.cpp (L57-L72))). But I think that only using samples from outbound connections in the first place would be an additional safety measure that makes sense.
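To illustrate the mechanism being tightened here, a condensed sketch of the timedata logic with the outbound-only rule applied (modeled loosely on `src/timedata.cpp`; the constants match the description above, everything else is simplified):
```cpp
#include <algorithm>
#include <cstdint>
#include <cstdlib>
#include <vector>

static constexpr int64_t DEFAULT_MAX_TIME_ADJUSTMENT{70 * 60}; // seconds, -maxtimeadjustment

static std::vector<int64_t> g_offsets;   // only the first 200 samples are kept
static int64_t g_time_offset{0};         // the offset applied by GetAdjustedTime()

void AddTimeData(bool is_inbound, int64_t sample_offset)
{
    if (is_inbound) return;              // the change: inbound peers no longer contribute
    if (g_offsets.size() >= 200) return; // ignore samples after the first 200
    g_offsets.push_back(sample_offset);

    // Like the real code, recompute only on odd sample counts >= 5.
    if (g_offsets.size() >= 5 && g_offsets.size() % 2 == 1) {
        std::vector<int64_t> sorted{g_offsets};
        std::sort(sorted.begin(), sorted.end());
        const int64_t median{sorted[sorted.size() / 2]};
        // Only apply the offset while it stays within -maxtimeadjustment.
        g_time_offset = std::abs(median) < DEFAULT_MAX_TIME_ADJUSTMENT ? median : 0;
    }
}
```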
See also issue #4521 for further context and links: There have been several discussions in the past about replacing or abolishing the existing timedata system.
ACKs for top commit:
jnewbery:
Concept and code review ACK 0c85dc30e6b628f7538a67776c7eefcb84ef4f82
naumenkogs:
ACK 0c85dc30e6b628f7538a67776c7eefcb84ef4f82
vasild:
ACK 0c85dc30e6b628f7538a67776c7eefcb84ef4f82
Tree-SHA512: 2d6375305bcae034d68b58b7a07777b40ac430dfed554c88e681a048c527536691e1b7d08c0ef995247d356f8e81aa0a4b983bf2674faf6a416264e5f1af0a96
fad81548fa03861c244397201d6b6e6cbf883c38 test: Avoid testing negative block heights (MarcoFalke)
Pull request description:
A negative chain height is only used to denote an empty chain, not the height of any block.
So stop testing that and remove a suppression.
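For reference, the convention being relied on, mirroring `CChain::Height()` in Bitcoin Core's `src/chain.h`: a negative height means an empty chain, never a block.
```cpp
#include <vector>

class CBlockIndex;

// Condensed from Bitcoin Core's CChain (src/chain.h): Height() is -1 only
// for an empty chain; every block that actually exists has height >= 0.
class CChain
{
    std::vector<CBlockIndex*> vChain;
public:
    int Height() const { return int(vChain.size()) - 1; }
};
```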
ACKs for top commit:
brunoerg:
crACK fad81548fa03861c244397201d6b6e6cbf883c38
Tree-SHA512: 0f9e91617dfb6ceda99831e6cf4b4bf0d951054957c159b1a05a178ab6090798fae7368edefe12800da24585bcdf7299ec3534f4d3bbf5ce6a6eca74dd3bb766
1621696a6f log: restore `LogPrintLevel` messages from prior backports (Kittywhiskers Van Gogh)
52a1263989 merge bitcoin#25614: Severity-based logging, step 2 (Kittywhiskers Van Gogh)
21470fdeb3 merge bitcoin#25292: Add LogPrintLevel to lint-format-strings, drop LogPrint-vs-LogPrintf section in dev notes (Kittywhiskers Van Gogh)
026409e4ff merge bitcoin#25217: update lint-logs.py to detect LogPrintLevel, mention WalletLogPrintf (Kittywhiskers Van Gogh)
b046e091c9 merge bitcoin#25202: Use severity-based logging for leveldb/libevent messages, reverse LogPrintLevel order (Kittywhiskers Van Gogh)
7697b73257 revert dash#2794: Disable logging of libevent debug messages (Kittywhiskers Van Gogh)
ff6304f5f3 merge bitcoin#24757: add `DEBUG_LOCKCONTENTION` to `--enable-debug` and CI (Kittywhiskers Van Gogh)
88592f30a3 merge bitcoin#24464: Add severity level to logs (Kittywhiskers Van Gogh)
d3e837ad22 merge bitcoin#24830: Allow -proxy="" setting values (Kittywhiskers Van Gogh)
0e01d5b5f3 partial bitcoin#22766: Clarify and disable unused ArgsManager flags (Kittywhiskers Van Gogh)
a9cfbd1048 fix: don't use non-existent `PrintLockContention` in `SharedEnter` (Kittywhiskers Van Gogh)
f331cbe8c8 merge bitcoin#24770: Put lock logging behind DEBUG_LOCKCONTENTION preprocessor directive (Kittywhiskers Van Gogh)
d9cc2ea178 merge bitcoin#23104: Avoid breaking single log lines over multiple lines in the log file (Kittywhiskers Van Gogh)
479ae82ecc merge bitcoin#23235: Reduce unnecessary default logging (Kittywhiskers Van Gogh)
Pull request description:
## Additional Information
* This pull request's primary purpose is to restore the `LogPrintLevel` calls from backports in [dash#6333](https://github.com/dashpay/dash/pull/6333) that had been downgraded to `LogPrint` because they were backported before `LogPrintLevel` itself was (see the sketch after this list).
* ~~`clang-format` suggestions for `LogPrintLevel` have to be ignored in order to prevent the linter from tripping due to a "missing newline" ([build](https://gitlab.com/dashpay/dash/-/jobs/8398818860#L54)).~~ Resolved by applying diff ([source](https://github.com/dashpay/dash/pull/6399#issuecomment-2488992710)).
* `SharedLock` was introduced in [dash#5961](https://github.com/dashpay/dash/pull/5961) and `PrintLockContention` was removed in [dash#6046](https://github.com/dashpay/dash/pull/6046) but the changes in the latter were not extended to the former. This has been corrected as part of this pull request.
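For illustration, a hypothetical call site showing the severity-based form these backports restore (argument order follows bitcoin#25202, i.e. category before level; the surrounding function is made up):
```cpp
#include <logging.h>

void ExampleLogging()
{
    LogPrintf("always logged, no category or severity\n");
    LogPrint(BCLog::NET, "category-gated debug message\n"); // pre-#24464 style
    // Severity-based logging (bitcoin#24464 / #25614): category plus level.
    LogPrintLevel(BCLog::NET, BCLog::Level::Debug, "peer=%d: handshake done\n", 7);
}
```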
## Breaking Changes
None expected.
## Checklist
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas **(note: N/A)**
- [x] I have added or updated relevant unit/integration/functional/e2e tests
- [x] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_
Top commit has no ACKs.
Tree-SHA512: f2d0ef8ce5cb1091c714a2169e89deb33fa71ff174ce4e6147b3ad421f57a84183d2a9e76736c0b064b2cc70fb3f2e545c42b8562cf36fdce18c3fb61307c364
87c31ad67a Update doc/release-process.md (UdjinM6)
55d74630b4 docs: mention building for some HOSTs only in `release-process.md` (UdjinM6)
Pull request description:
## Issue being fixed or feature implemented
https://github.com/dashpay/guix.sigs/pull/73#6390 follow-up
## What was done?
## How Has This Been Tested?
## Breaking Changes
## Checklist:
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [ ] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_
Top commit has no ACKs.
Tree-SHA512: b4a2cadf5899a8aea6612b4ff9c0e9f9c530a9e2344eb090967fbcf9a2ab219aff02f11f86434e4082f84c401d578cf2d033b6838c94705f532beca4ab604986
dafa7363a3 fix: respect SENDDSQUEUE message, move DSQ relay into net processing / peerman (pasta)
Pull request description:
## Issue being fixed or feature implemented
In #6148, I broke the functionality where a peer must opt in to / opt out of DSQUEUE messages. This was mostly harmless and not immediately detected: with this bug, everyone would simply receive DSQ messages over inventory (old proto versions were not affected). But it still wasted quite a bit of bandwidth for peers that may not care about DSQ at all.
## What was done?
This commit restores the prior functionality: a node must send the SENDDSQUEUE message if it wishes to receive DSQs. Once it has sent that, depending on its protocol version, it will either have the messages pushed to it as they become available or, on modern protocols, thereafter receive DSQs over the inventory system.
NOTE: I also refactor the code in this commit, moving some network processing into.... wait for it... net_processing.cpp! This allowed us to remove some dependencies in coinjoin.h. DSQ messages are now relayed to peers by calling peer_manager.RelayDSQ.
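A condensed, hypothetical sketch of the restored opt-in flow (the real logic lives in `net_processing.cpp`; the names `m_wants_dsq` and `DSQ_INV_VERSION` are illustrative stand-ins, and the push calls are left as placeholder comments):
```cpp
#include <cstdint>
#include <vector>

struct Peer {
    int64_t version{0};
    bool m_wants_dsq{false}; // set when the peer sends SENDDSQUEUE
};

constexpr int64_t DSQ_INV_VERSION{70234}; // assumed proto threshold, illustrative

struct CCoinJoinQueue { /* dsq payload */ };

void HandleSendDsQueue(Peer& peer) { peer.m_wants_dsq = true; }

void RelayDSQ(const CCoinJoinQueue& dsq, std::vector<Peer*>& peers)
{
    for (Peer* peer : peers) {
        if (!peer->m_wants_dsq) continue;   // never opted in: send nothing
        if (peer->version < DSQ_INV_VERSION) {
            // old protocol: push the full DSQUEUE message directly
            // PushMessage(peer, NetMsgType::DSQUEUE, dsq);
        } else {
            // modern protocol: announce via inventory, peer requests it
            // PushInv(peer, CInv{MSG_DSQ, dsq.GetHash()});
        }
    }
}
```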
## How Has This Been Tested?
I have not yet mixed on testnet with this; we should include it in rc.2 and test
## Breaking Changes
Slightly breaking for v22.0.x (so rc.1), as those nodes could in theory be relying on the new logic of always receiving the DSQ inv. But I don't think anyone besides Core is using this new protocol.
## Checklist:
_Go over all the following points, and put an `x` in all the boxes that apply._
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone
ACKs for top commit:
UdjinM6:
light ACK dafa7363a3
kwvg:
utACK dafa7363a3
Tree-SHA512: 18f9b0dfe05cde19db451653db9bb9a00352efd1bc37adffd83f74958010475f2782b1111b1c0d2dd967e7a851c3c4795fa55033b4bd0cc810aa293e754ce314
b658d7d5c5339739dc19bf961d84186469a818d5 test: update assert_fee_amount() in test_framework/util.py (Jon Atack)
Pull request description:
Follow-up to 42e1b5d979 (#12486).
- replace the call to `round()` with our utility function `satoshi_round()` to avoid intermittent test failures
- rename `fee_per_kB` to `feerate_BTC_kvB` for precision
- store division result in `feerate_BTC_vB`
Possibly resolves #19418.
ACKs for top commit:
meshcollider:
utACK b658d7d5c5339739dc19bf961d84186469a818d5
Tree-SHA512: f124ded98c913f98782dc047a85a05d3fdf5f0585041fa81129be562138f6261ec1bd9ee2af89729028277e75b591b0a7ad50244016c2b2fa935c6e400523183
86e92c376a refactor: drop unused CConnman from CSigningManager (Konstantin Akimov)
4668db60a2 refactor: create helper function RelayRecoveredSig inside peerman (pasta)
Pull request description:
## Issue being fixed or feature implemented
High m_nodes_mutex lock contention during high load
## What was done?
This commit should have a few benefits (see the sketch below):
1. The previous logic using ForEachNode locks m_nodes_mutex, a highly contended RecursiveMutex, AND m_peer_mutex (in GetPeerRef).
2. It also called `.find` over m_peer_map for each node, so the old logic was (probably) O(n log n); the new logic acquires m_peer_mutex once and loops over the list of peers, (probably) O(n).
3. It moves networking logic out of llmq/ and into actual net_processing.cpp.
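To make the locking difference concrete, a simplified sketch with condensed stand-in types (in the real code, `m_peer_map` is guarded by `m_peer_mutex` inside `PeerManagerImpl`):
```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <mutex>

struct Peer { void PushRecSigInv() {} }; // stand-in for queueing the recsig inv
using PeerRef = std::shared_ptr<Peer>;

std::recursive_mutex m_nodes_mutex; // highly contended, no longer touched here
std::mutex m_peer_mutex;
std::map<int64_t, PeerRef> m_peer_map;

// Old: ForEachNode locks m_nodes_mutex, then each GetPeerRef(id) locks
// m_peer_mutex and does an O(log n) find -- O(n log n) overall, two mutexes.
// New: lock m_peer_mutex once and walk the map directly -- O(n), one mutex.
void RelayRecoveredSig()
{
    std::lock_guard<std::mutex> lock(m_peer_mutex);
    for (auto& [id, peer] : m_peer_map) {
        peer->PushRecSigInv();
    }
}
```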
## How Has This Been Tested?
Hasn't really yet; it builds, but I need to run tests / maybe deploy to testnet mn
## Breaking Changes
## Checklist:
_Go over all the following points, and put an `x` in all the boxes that apply._
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_
ACKs for top commit:
knst:
utACK 86e92c376a
UdjinM6:
utACK 86e92c376a
Tree-SHA512: ca9d6ac22f8b72b117188147044c499ae62722283c6291633067b99726e6a6abc52e5c8cf3bdcd0d8fed0ad8d9086b000f628c9a932dfe89153e912b563eda5a
3ba602672c refactor: use self.wait_until in all the dash specific "wait_until_x" logic in order to actually apply the timeout scaling settings (pasta)
Pull request description:
## Issue being fixed or feature implemented
Currently we use the raw helper, but that means the timeout scaling isn't applied. I think this may be a cause of many of the functional test failures we see in tsan / ubsan.
## What was done?
## How Has This Been Tested?
hasn't; wait for CI
## Breaking Changes
## Checklist:
_Go over all the following points, and put an `x` in all the boxes that apply._
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_
ACKs for top commit:
knst:
utACK 3ba602672c
UdjinM6:
utACK 3ba602672c
Tree-SHA512: 935498f4b296b1abcac8be686cce396b61b654ef62da46de9a23a0f24ad31254f4938a581a6a4e2533576db0e0120861fd690bd9019e893b30990f21d1e48168
**This does change the logic!** We no longer prioritize asking MNs. This is probably fine? I don't specifically recall why we wanted to ask MNs, besides potentially that they may be higher-performing or better connected. We can potentially restore this logic once we bring masternode connection logic into Peer.
It also changes logic by short-circuiting once peersToAsk is full (illustrated below).
This commit has the added benefit of reducing contention on m_nodes_mutex, since connman.ForEachNode is no longer called at all (previously it was called not once but twice).
This may slightly increase contention on m_peer_mutex, but that should be an acceptable tradeoff for not only removing dependencies but also reducing contention on a much more contested RecursiveMutex.
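A condensed illustration of the short-circuit, with hypothetical names (`SelectPeersToAsk` and `IsGoodCandidate` are stand-ins, not the real functions):
```cpp
#include <cstddef>
#include <vector>

struct Peer { bool IsGoodCandidate() const { return true; } };

// Collect up to max_asks peers and stop early, instead of scanning the
// whole node list (twice, in the old ForEachNode-based code).
std::vector<Peer*> SelectPeersToAsk(std::vector<Peer*>& peers, size_t max_asks)
{
    std::vector<Peer*> to_ask;
    for (Peer* p : peers) {
        if (!p->IsGoodCandidate()) continue;
        to_ask.push_back(p);
        if (to_ask.size() == max_asks) break; // short-circuit once full
    }
    return to_ask;
}
```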
3d67771f89 refactor: add in quorumBaseBlockIndexCache to reduce cs_main contention (pasta)
Pull request description:
## Issue being fixed or feature implemented
Subset of https://github.com/dashpay/dash/pull/6418; this only includes the new quorumBaseBlockIndexCache and doesn't include the caching of the chain tip, as that introduced regressions I'm still debugging.
## What was done?
Introduce an LRU cache for quorumHash -> const CBlockIndex*; this should significantly reduce cs_main contention during high transaction load.
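A stand-alone sketch of the pattern, with hypothetical names (Dash's actual implementation uses its `unordered_lru_cache` helper, and quorum hashes are uint256, not strings): cache hits avoid `cs_main` entirely; only misses fall back to a lookup under `cs_main`, followed by `Insert()`.
```cpp
#include <cstddef>
#include <list>
#include <mutex>
#include <string>
#include <unordered_map>
#include <utility>

class CBlockIndex; // opaque here; only pointers are stored

class QuorumBaseBlockIndexCache
{
    static constexpr size_t MAX_SIZE{128};
    std::mutex m_mutex;
    std::list<std::string> m_order; // most recently used at the front
    std::unordered_map<std::string,
        std::pair<const CBlockIndex*, std::list<std::string>::iterator>> m_map;

public:
    // Hit: no cs_main needed. Miss: the caller resolves the quorum hash
    // under cs_main (e.g. via LookupBlockIndex) and then calls Insert().
    const CBlockIndex* Get(const std::string& quorum_hash)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        auto it = m_map.find(quorum_hash);
        if (it == m_map.end()) return nullptr;
        m_order.splice(m_order.begin(), m_order, it->second.second); // refresh
        return it->second.first;
    }

    void Insert(const std::string& quorum_hash, const CBlockIndex* pindex)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_map.count(quorum_hash)) return;
        m_order.push_front(quorum_hash);
        m_map.emplace(quorum_hash, std::make_pair(pindex, m_order.begin()));
        if (m_map.size() > MAX_SIZE) { // evict the least recently used entry
            m_map.erase(m_order.back());
            m_order.pop_back();
        }
    }
};
```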
## How Has This Been Tested?
Ran tests locally; let's see CI happy, and I also intend to run this on a testnet MN first and see the level of contention reduction
## Breaking Changes
None
## Checklist:
_Go over all the following points, and put an `x` in all the boxes that apply._
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_
ACKs for top commit:
UdjinM6:
utACK 3d67771f89
knst:
utACK 3d67771f89
Tree-SHA512: dbb4bdafed095397ca0e12dbd8bba25c108d199538387c71b1ff4285af821f9d9ad0ad4426407a015528270f3c163fa66ce91755efb1c8a7a90fd7cb70a918bc