87c31ad67a Update doc/release-process.md (UdjinM6)
55d74630b4 docs: mention building for some HOSTs only in `release-process.md` (UdjinM6)
Pull request description:
## Issue being fixed or feature implemented
https://github.com/dashpay/guix.sigs/pull/73#6390 follow-up
## What was done?
## How Has This Been Tested?
## Breaking Changes
## Checklist:
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [ ] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_
Top commit has no ACKs.
Tree-SHA512: b4a2cadf5899a8aea6612b4ff9c0e9f9c530a9e2344eb090967fbcf9a2ab219aff02f11f86434e4082f84c401d578cf2d033b6838c94705f532beca4ab604986
dafa7363a3 fix: respect SENDDSQUEUE message, move DSQ relay into net processing / peerman (pasta)
Pull request description:
## Issue being fixed or feature implemented
in #6148, I broke the functionality where a peer must opt in to / out of DSQUEUE messages. This was mostly ok and not immediately detected because, with this bug, everyone would simply receive DSQ messages over inventory (old protocol versions were not affected). But it still wasted quite a bit of bandwidth for peers that may not care about DSQ at all.
## What was done?
This commit should restore the prior functionality, where a node must send the SENDDSQUEUE message if it wishes to receive DSQs. Once it has sent that, depending on its protocol version, it will either have the messages pushed to it as they become available or, on modern protocols, thereafter receive DSQs over the inventory system.
NOTE: I also refactor the code in this commit, moving some network processing into... wait for it... net_processing.cpp! This allowed us to remove some dependencies in coinjoin.h. DSQ messages are now relayed to peers by calling peer_manager.RelayDSQ; a sketch of the opt-in relay decision follows below.
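For illustration, a minimal sketch of that opt-in relay decision, assuming hypothetical type names and a hypothetical version cutoff (the real logic lives in net_processing.cpp and uses Dash Core's Peer/CConnman machinery):
```cpp
// Sketch only: the Peer struct, version cutoff, and commented-out send calls
// are illustrative stand-ins, not the actual Dash Core API.
#include <vector>

struct Peer {
    int protocol_version{0};
    bool wants_dsq{false}; // flipped when the peer sends SENDDSQUEUE
};

constexpr int DSQ_INV_PROTO_VERSION = 70234; // hypothetical version cutoff

// Relay a DSQ only to peers that opted in via SENDDSQUEUE. Legacy peers get
// the full message pushed; modern peers get an inventory announcement.
void RelayDSQ(std::vector<Peer>& peers)
{
    for (Peer& peer : peers) {
        if (!peer.wants_dsq) continue; // respect the opt-in: skip everyone else
        if (peer.protocol_version < DSQ_INV_PROTO_VERSION) {
            // PushMessage(peer, "dsq", ...);  // legacy: push the full message
        } else {
            // PushInventory(peer, ...);       // modern: announce via inv
        }
    }
}
```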
## How Has This Been Tested?
I have not yet mixed on testnet with this; we should include it in rc.2 and test
## Breaking Changes
Slightly breaking for v22.0.x (so rc.1), as they in theory could be relying on this new logic of always receiving the DSQ inv. But I don't think anyone besides core is using this new protocol.
## Checklist:
_Go over all the following points, and put an `x` in all the boxes that apply._
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone
ACKs for top commit:
UdjinM6:
light ACK dafa7363a3
kwvg:
utACK dafa7363a3
Tree-SHA512: 18f9b0dfe05cde19db451653db9bb9a00352efd1bc37adffd83f74958010475f2782b1111b1c0d2dd967e7a851c3c4795fa55033b4bd0cc810aa293e754ce314
5078baea2b fix(test): wait for chainlock before mining a block we expect to include said chainlock (pasta)
f39c1e6f4c fix: guard m_can_tx_relay behind m_tx_relay_mutex; make it private; add additional annotations (pasta)
Pull request description:
## Issue being fixed or feature implemented
See each commit; fixes two bugs, both discovered while running feature_llmq_chainlocks.py with tsan / debug:
one is a data race in net_processing.cpp, and the other was in the test I was using to verify the fix, feature_llmq_chainlocks.py.
## What was done?
### net_processing.cpp
You can see the datarace here: https://gist.github.com/PastaPastaPasta/c966a9f805758b34524085e3d52ea7f8
We simply guard it with an existing mutex that is always locked in close proximity.
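For illustration, a minimal sketch of the pattern, assuming plain std::mutex and the Clang guarded_by attribute (the real code uses Bitcoin's annotated Mutex type and LOCK macros around the existing m_tx_relay_mutex):
```cpp
// Sketch of guarding a flag behind an existing mutex; names simplified from
// the real net_processing code.
#include <mutex>

#if defined(__clang__)
#define GUARDED_BY(x) __attribute__((guarded_by(x)))
#else
#define GUARDED_BY(x)
#endif

class PeerState
{
public:
    void SetCanTxRelay(bool v)
    {
        std::lock_guard<std::mutex> lock(m_tx_relay_mutex);
        m_can_tx_relay = v;
    }
    bool CanTxRelay()
    {
        std::lock_guard<std::mutex> lock(m_tx_relay_mutex);
        return m_can_tx_relay;
    }

private:
    std::mutex m_tx_relay_mutex;
    // Private and only touched under m_tx_relay_mutex; with an annotated
    // mutex type (as in Bitcoin's Mutex), -Wthread-safety can then flag
    // any unguarded access at compile time.
    bool m_can_tx_relay GUARDED_BY(m_tx_relay_mutex){false};
};
```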
### feature_llmq_chainlocks.py
Most of the time, while generating the cycle quorum, there is sufficient time to generate a chainlock; however, this is racy, and I've observed locally that the block can get generated before a chainlock is present, in which case `test_coinbase_best_cl` fails. We should instead wait for the chainlock first and then mine the block. This way we can ensure the mined block will include that chainlock.
This was observed locally in roughly 1 in 10 runs.
## How Has This Been Tested?
ran feature_llmq_chainlocks.py ~40 times locally with tsan / debug
## Breaking Changes
None
## Checklist:
_Go over all the following points, and put an `x` in all the boxes that apply._
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_
ACKs for top commit:
kwvg:
utACK 5078baea2b
knst:
utACK 5078baea2b
UdjinM6:
utACK 5078baea2b
Tree-SHA512: b346fc60809df72d0161f625073dce7062bd2641d35e4f80160fac9afeec63707de552e2856940ac2604875908ae3b98a225d352de36bfbfc6ee3fbe1e1538ff
3f2e064b18 refactor: set `const auto& cbTx` to avoid using optional throughout method (pasta)
0c0d91e491 fix: functional test feature_llmq_chainlocks.py should activate MN_RR instead of v20 (Konstantin Akimov)
af93e877f2 refactor: removed pre-MN_RR logic of validation of CL (Konstantin Akimov)
Pull request description:
## Issue being fixed or feature implemented
The MN_RR fork has been active on testnet and mainnet for a while now, so the legacy check is no longer needed.
## What was done?
Removes the legacy version of the CL validation checks and adjusts the related functional tests.
## How Has This Been Tested?
Run unit / functional test - DONE
Reindex testnet - DONE
Reindex mainnet - DONE
## Breaking Changes
Removed the pre-MN_RR version of the CL validation checks.
They are no longer relevant on mainnet and testnet.
This affects behavior on new devnets and regtest for blocks before MN_RR activation.
## Checklist:
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have added or updated relevant unit/integration/functional/e2e tests
- [x] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone
ACKs for top commit:
UdjinM6:
utACK 3f2e064b18
Tree-SHA512: 8b4b3a20a54602f4df9d98e17c79004214493b15c0bce9c08c68d667a5cba86b817947f008d646c48ef9f2f86676c02085c7d0ed36e83548ef5425b64faffb89
9604d87af1 ci: cache depends/built like gitlab (pasta)
Pull request description:
## Issue being fixed or feature implemented
Depends build didn't seem to be caching properly
## What was done?
changed `depends/${{ matrix.host }}` to `depends/built`, as GitLab does
## How Has This Been Tested?
Depends "build" now takes only ~1 minute instead of ~15 minutes in CI: https://github.com/PastaPastaPasta/dash/actions/runs/11899038167
## Breaking Changes
None
## Checklist:
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_
ACKs for top commit:
UdjinM6:
utACK 9604d87af1
kwvg:
utACK 9604d87af1
Tree-SHA512: 63d2321f41b284be6cc2f0b2d53294cf220b1623af464d411225c0e43ec14268e1c3a701e23973e5c641925b6ea28dcb92062d8cefb9e6baed6ac5bb619ce1a1
2f18c1af30 ci: deduplicate depends building (pasta)
Pull request description:
## Issue being fixed or feature implemented
Currently we build the same host / options multiple times. Don't do this!
## What was done?
Now building depends only twice
## How Has This Been Tested?
Local CI appears to be working
## Breaking Changes
None
## Checklist:
_Go over all the following points, and put an `x` in all the boxes that apply._
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_
ACKs for top commit:
kwvg:
utACK 2f18c1af30
UdjinM6:
utACK 2f18c1af30
Tree-SHA512: 67460508a2e9458152f7c8bb5f4a1a786aedcfded0e5c54fb03d85010ba8bb87362b66a0c322b51aeba75752e36418fc235d8dc4197ef10695e234ccc5a00a39
c5d5b14205 ci: add powerpc64 to GH Guix job matrix (UdjinM6)
Pull request description:
## Issue being fixed or feature implemented
I guess we should make sure that binaries for every possible platform can be built via Guix with no issues even if we do not want to provide "official" binaries for them.
## What was done?
## How Has This Been Tested?
## Breaking Changes
## Checklist:
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [ ] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_
Top commit has no ACKs.
Tree-SHA512: 6464b68a47e680bb379c82842599200d6917adb8f1493fa75ac80ddc7a6fea92a9190215cfa3f32b0bd88e2dd0425d4eb13846ea2d7e19e144afff9c1171ff13
b658d7d5c5339739dc19bf961d84186469a818d5 test: update assert_fee_amount() in test_framework/util.py (Jon Atack)
Pull request description:
Follow-up to 42e1b5d979 (#12486).
- update call to `round()` with our utility function `satoshi_round()` to avoid intermittent test failures
- rename `fee_per_kB` to `feerate_BTC_kvB` for precision
- store division result in `feerate_BTC_vB`
Possibly resolves #19418.
ACKs for top commit:
meshcollider:
utACK b658d7d5c5339739dc19bf961d84186469a818d5
Tree-SHA512: f124ded98c913f98782dc047a85a05d3fdf5f0585041fa81129be562138f6261ec1bd9ee2af89729028277e75b591b0a7ad50244016c2b2fa935c6e400523183
86e92c376a refactor: drop unused CConnman from CSigningManager (Konstantin Akimov)
4668db60a2 refactor: create helper function RelayRecoveredSig inside peerman (pasta)
Pull request description:
## Issue being fixed or feature implemented
High m_nodes_mutex lock contention during high load
## What was done?
this commit should have a few benefits (see the sketch after this list):
1. The previous logic using ForEachNode locked m_nodes_mutex, a highly contended RecursiveMutex, AND m_peer_mutex (in GetPeerRef).
2. The prior code also called .find on m_peer_map for each node, so the old logic was (probably) O(n log n); the new logic acquires m_peer_mutex once and loops over the list of peers, (probably) O(n).
3. Moves networking logic out of llmq/ and into actual net_processing.cpp.
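A rough sketch of the "after" shape, with simplified stand-ins for the CConnman/PeerManager internals (the real inventory push is elided):
```cpp
// Simplified stand-ins; not the real Dash Core types.
#include <array>
#include <map>
#include <memory>
#include <mutex>

using uint256_like = std::array<unsigned char, 32>; // stand-in for uint256
struct Peer { int id{0}; };
using PeerRef = std::shared_ptr<Peer>;

std::mutex m_peer_mutex;
std::map<int, PeerRef> m_peer_map; // guarded by m_peer_mutex

// Before (conceptually): ForEachNode locked m_nodes_mutex, and for every node
// GetPeerRef re-locked m_peer_mutex and did a map .find() -- O(n log n) total.
// After: take m_peer_mutex once and walk the peer map directly -- O(n).
void RelayRecoveredSig(const uint256_like& sig_hash)
{
    std::lock_guard<std::mutex> lock(m_peer_mutex);
    for (auto& [id, peer] : m_peer_map) {
        (void)id; (void)peer; (void)sig_hash;
        // PushInventory(*peer, {MSG_QUORUM_RECOVERED_SIG, sig_hash});
    }
}
```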
## How Has This Been Tested?
Hasn't really been yet; it builds, but I need to run tests / maybe deploy to a testnet MN.
## Breaking Changes
## Checklist:
_Go over all the following points, and put an `x` in all the boxes that apply._
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_
ACKs for top commit:
knst:
utACK 86e92c376a
UdjinM6:
utACK 86e92c376a
Tree-SHA512: ca9d6ac22f8b72b117188147044c499ae62722283c6291633067b99726e6a6abc52e5c8cf3bdcd0d8fed0ad8d9086b000f628c9a932dfe89153e912b563eda5a
3ba602672c refactor: use self.wait_until in all the dash specific "wait_until_x" logic in order to actually apply the timeout scaling settings (pasta)
Pull request description:
## Issue being fixed or feature implemented
Currently we use the raw helper, but that means that timeout scaling isn't applied. I think this may be a cause of many of the functional test failures that we see in tsan / ubsan.
## What was done?
## How Has This Been Tested?
hasn't; wait for CI
## Breaking Changes
## Checklist:
_Go over all the following points, and put an `x` in all the boxes that apply._
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_
ACKs for top commit:
knst:
utACK 3ba602672c
UdjinM6:
utACK 3ba602672c
Tree-SHA512: 935498f4b296b1abcac8be686cce396b61b654ef62da46de9a23a0f24ad31254f4938a581a6a4e2533576db0e0120861fd690bd9019e893b30990f21d1e48168
**This does change the logic!** We no longer prioritize asking MNs. This is probably fine? I don't specifically recall why we wanted to ask MNs, besides that they may be higher performing or better connected. We can potentially restore this logic once we bring masternode connection logic into Peer.
It also changes logic by short-circuiting once peersToAsk is full (a sketch of that shape follows below).
This commit has the added benefit of reducing contention on m_nodes_mutex by no longer calling connman.ForEachNode twice.
This may slightly increase contention on m_peer_mutex, but that should be an acceptable tradeoff for not only removing dependencies but also reducing contention on a much more contested RecursiveMutex.
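A sketch of that short-circuit, under hypothetical names and a hypothetical cap (the real selection applies additional filtering to connected peers):
```cpp
// Illustrative only: NodeId and the cap are hypothetical stand-ins.
#include <cstddef>
#include <vector>

using NodeId = int;
constexpr std::size_t MAX_PEERS_TO_ASK = 4; // hypothetical cap

// Stop scanning as soon as the list is full instead of walking the entire
// connection list (the old code iterated everything, twice).
std::vector<NodeId> SelectPeersToAsk(const std::vector<NodeId>& candidates)
{
    std::vector<NodeId> peersToAsk;
    for (NodeId node : candidates) {
        peersToAsk.push_back(node);
        if (peersToAsk.size() >= MAX_PEERS_TO_ASK) break; // short-circuit when full
    }
    return peersToAsk;
}
```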
3d67771f89 refactor: add in quorumBaseBlockIndexCache to reduce cs_main contention (pasta)
Pull request description:
## Issue being fixed or feature implemented
A subset of https://github.com/dashpay/dash/pull/6418; it only includes the new quorumBaseBlockIndexCache and doesn't include the caching of the chain tip, as that introduced regressions I'm still debugging.
## What was done?
introduce an LRU cache for quorumHash -> const CBlockIndex*; this should significantly reduce cs_main contention during high transaction load.
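A minimal illustrative version of such a cache (Dash Core has its own unordered_lru_cache container; this sketch uses a plain map plus a recency list, with a string key standing in for the quorumHash uint256):
```cpp
// Illustrative LRU cache mapping quorumHash -> const CBlockIndex*, so repeated
// lookups can skip the cs_main-locked path. Not the actual Dash container.
#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

struct CBlockIndex {};

class QuorumBaseBlockIndexCache
{
public:
    explicit QuorumBaseBlockIndexCache(std::size_t max_size) : m_max_size(max_size) {}

    // On a miss the caller falls back to the slow lookup under cs_main,
    // then calls Insert() with the result.
    bool Get(const std::string& quorumHash, const CBlockIndex*& out)
    {
        auto it = m_map.find(quorumHash);
        if (it == m_map.end()) return false;
        m_order.splice(m_order.begin(), m_order, it->second.second); // mark as recently used
        out = it->second.first;
        return true;
    }

    void Insert(const std::string& quorumHash, const CBlockIndex* pindex)
    {
        if (m_map.count(quorumHash) != 0) return;
        m_order.push_front(quorumHash);
        m_map[quorumHash] = {pindex, m_order.begin()};
        if (m_map.size() > m_max_size) { // evict the least recently used entry
            m_map.erase(m_order.back());
            m_order.pop_back();
        }
    }

private:
    std::size_t m_max_size;
    std::list<std::string> m_order; // front = most recently used
    std::unordered_map<std::string,
                       std::pair<const CBlockIndex*, std::list<std::string>::iterator>> m_map;
};
```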
## How Has This Been Tested?
Ran tests locally; let's see CI happy, and I also intend to run this on a testnet MN first and see the level of contention reduction
## Breaking Changes
None
## Checklist:
_Go over all the following points, and put an `x` in all the boxes that apply._
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_
ACKs for top commit:
UdjinM6:
utACK 3d67771f89
knst:
utACK 3d67771f89
Tree-SHA512: dbb4bdafed095397ca0e12dbd8bba25c108d199538387c71b1ff4285af821f9d9ad0ad4426407a015528270f3c163fa66ce91755efb1c8a7a90fd7cb70a918bc
b65f0bab7f refactor: move expensive CInv initialization out of hot loop (pasta)
Pull request description:
## Issue being fixed or feature implemented
Not sure how we introduced this one, but it appears we are calling a CInv construction (pretty cheap) and a GetDSTX ((relatively) EXPENSIVE) for each peer we are connected to in RelayTransaction... I can hope and pray that the compiler was somehow magically optimizing this for us, but I really doubt it.
## What was done?
Move the initialization out of the loop
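For illustration, the shape of the fix under simplified stand-in helpers (the CInv construction and the GetDSTX lookup are hoisted so they run once, not once per peer):
```cpp
// Illustrative only: the helpers below stand in for the real CInv/GetDSTX calls.
#include <vector>

struct CInv { int type{0}; /* hash elided */ };
struct Peer {};

static bool LookupDSTX() { return false; } // stand-in for the (relatively) expensive GetDSTX
static CInv BuildInv(bool is_dstx) { return CInv{is_dstx ? 2 : 1}; } // stand-in for CInv{...}

// Before: both calls ran once per connected peer inside the loop.
// After: compute them once and reuse the results in the hot loop.
void RelayTransaction(std::vector<Peer>& peers)
{
    const bool is_dstx = LookupDSTX();  // hoisted out of the loop
    const CInv inv = BuildInv(is_dstx); // hoisted out of the loop
    for (Peer& peer : peers) {
        (void)peer; (void)inv;
        // PushInventory(peer, inv);
    }
}
```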
## How Has This Been Tested?
## Breaking Changes
N/A
## Checklist:
- [x] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_
ACKs for top commit:
knst:
utACK b65f0bab7f
UdjinM6:
utACK b65f0bab7f
kwvg:
utACK b65f0bab7f
Tree-SHA512: f556789042ab9265d5c4d87c3ba2910138ce43ffa69c90ed208c9a3bcd861343f201ce2f00aeb541f345c9ca686dac7227df8d4833cf7fbdf61c36260f627864
31243ca313 chore: update nMinimumChainWork, defaultAssumeValid, checkpointData, chainTxData for mainnet and testnet (pasta)
Pull request description:
## Issue being fixed or feature implemented
Bump chainparams in prep for v22
## What was done?
## How Has This Been Tested?
## Breaking Changes
## Checklist:
_Go over all the following points, and put an `x` in all the boxes that apply._
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_
ACKs for top commit:
UdjinM6:
utACK 31243ca313
Tree-SHA512: 6a01bbbaefb69437e053340b968e0ce68e2bd9e5e5fd2900864d88ffc21c5cd3f9c2a50d8f5b19f16515cd4c62a4a8fc2d981dd6d0456e86ef96f40e350e786a
a34caecdff refactor: replace infinite-loop-with-break with a while loop in forcemninfo (Konstantin Akimov)
4ec385d020 refactor: simplify loop of node connection in test_framework (Konstantin Akimov)
fa4ba4d169 refactor: moved functions do_connect, remove_masternode closer to their usages (Konstantin Akimov)
8eb5d852c7 refactor: drop dead constant SPECIALTX_TYPE from quorum commitment (Konstantin Akimov)
Pull request description:
## What was done?
see each commit
## How Has This Been Tested?
Run unit and functional tests
## Breaking Changes
N/A
## Checklist:
- [x] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone
ACKs for top commit:
UdjinM6:
utACK a34caecdff
Tree-SHA512: d998839c992ab1ff36c88ef10236b2a7811b2edb81a9211a659065bf06bebc2f9174ef3268b1508b489a7b9b29a3f3e0b9fc096ff055cce6c4cacbfaf5877cd2
397a157e8d refactor: introduce cs_pendingSigns (pasta)
Pull request description:
## Issue being fixed or feature implemented
A much more minimal version of https://github.com/dashpay/dash/pull/6410; data from testnet while MNs were doing lots of InstantSend locking showed minimal contention over the signing_shares cs; however, most of the contention that did exist was a result of AsyncSign conflicting with something else. This should resolve the vast majority of contention in signing_shares while minimizing the complication / risk of introducing deadlocks compared to 6410.
## What was done?
introduce cs_pendingSigns
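A minimal sketch of the idea under simplified types (the real CSigSharesManager state and locking are more involved):
```cpp
// Illustrative: a narrow mutex just for the pending-signs queue, so AsyncSign
// stops contending with the rest of the signing-shares state.
#include <array>
#include <mutex>
#include <utility>
#include <vector>

using uint256 = std::array<unsigned char, 32>; // stand-in

class SigSharesManagerSketch
{
public:
    // Hot path, called from many threads: only touches the small queue.
    void AsyncSign(const uint256& id, const uint256& msgHash)
    {
        std::lock_guard<std::mutex> lock(cs_pendingSigns);
        pendingSigns.emplace_back(id, msgHash);
    }

    // Worker thread: swap the queue out under the narrow lock, then do the
    // expensive signing work without holding any contended lock.
    void ProcessPendingSigns()
    {
        std::vector<std::pair<uint256, uint256>> work;
        {
            std::lock_guard<std::mutex> lock(cs_pendingSigns);
            work.swap(pendingSigns);
        }
        for (const auto& entry : work) {
            (void)entry;
            // ... sign entry.first / entry.second ...
        }
    }

private:
    std::mutex cs_pendingSigns;
    std::vector<std::pair<uint256, uint256>> pendingSigns; // guarded by cs_pendingSigns
};
```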
## How Has This Been Tested?
Built locally
## Breaking Changes
None
## Checklist:
_Go over all the following points, and put an `x` in all the boxes that apply._
- [x] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_
ACKs for top commit:
knst:
utACK 397a157e8d
UdjinM6:
utACK 397a157e8d
Tree-SHA512: aac741c274449cf9c22625ac95e964d275dd1454647eaa2fc064c90b44a215e5509c9f9b8aceccbd49002edc21cbea883900f202204aa2138c104f4989ae5e26
c48efdac83 fix(qt): avoid potential precision loss in amounts on Governance Tab (UdjinM6)
Pull request description:
## Issue being fixed or feature implemented
https://github.com/dashpay/dash/pull/6403#discussion_r1847834174
## What was done?
## How Has This Been Tested?
## Breaking Changes
## Checklist:
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [ ] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_
ACKs for top commit:
knst:
utACK c48efdac83
PastaPastaPasta:
utACK c48efdac83
Tree-SHA512: 705bc154e10150e32e3ff04d4a90de14e29ffd195cc9f94a753dd1b4588f2730d6526cd66d851005b956e0d074d39250d142baa946bea8df95dbc92718931762
5741d5da28 chore: update seeds (Kittywhiskers Van Gogh)
2d732fc66e chore: drop defunct onion seeds, add new existing onion hosts as seeds (pasta)
Pull request description:
## Issue being fixed or feature implemented
Update onion seeds for upcoming version
## What was done?
Update onion seeds to reflect latest status
## How Has This Been Tested?
As of today, all of these can be connected to on port 9999. I have not actually connected to all of them and verified they're running the latest Core or anything like that, but my node knows about their addresses and can trivially connect to them.
## Breaking Changes
## Checklist:
- [x] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone _(for repository code-owners and collaborators only)_
ACKs for top commit:
kwvg:
utACK 5741d5da28
UdjinM6:
utACK 5741d5da28
Tree-SHA512: 541bcc510b2ebf6de08ac91f2b7e5b1c910536dca9bab0b38930d6e5cfa1cd05e8c014ba4c74c14c43ed211cda3275fffb5baaf1489bbd1221567130d804b0ec