## Issue being fixed or feature implemented
Remove dash-qt from docker images; save ~41MB
## What was done?
Removed the `dash-qt` binary from the Docker images.
## How Has This Been Tested?
It hasn't been.
## Breaking Changes
I guess in theory someone could've been relying on dash-qt from docker 🤷
## Checklist:
_Go over all the following points, and put an `x` in all the boxes that
apply._
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e
tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone _(for repository
code-owners and collaborators only)_
## Issue being fixed or feature implemented
1. `scanQuorumsCache` is a special case and we use it incorrectly.
2. Platform doesn't really use anything that calls `ScanQuorums()`
directly; its RPCs specify the exact quorum hash, so it's
`GetQuorum()` that is used instead. The only place `ScanQuorums()` is
used for Platform-related things is `StartCleanupOldQuorumDataThread()`,
because we want to preserve quorum data used by `GetQuorum()`. But this
can be optimised with its own, much more compact cache (see the sketch
below).
3. RPCs that use `ScanQuorums()` should in most cases be fine with a
smaller cache; for other use cases there is a note in the help text now.
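For illustration only (the real implementation is C++, and the class and
method names here are hypothetical), the compact cache from point 2 could
track just which quorums `GetQuorum()` has recently served, so the cleanup
thread knows what to keep without holding full `ScanQuorums()` results:

```python
import time

class QuorumUsageCache:
    """Hypothetical sketch: remember (llmq_type, quorum_hash) keys with a
    last-use timestamp instead of caching full ScanQuorums() results."""

    def __init__(self, max_age_secs):
        self.max_age_secs = max_age_secs
        self.last_used = {}  # (llmq_type, quorum_hash) -> timestamp

    def touch(self, llmq_type, quorum_hash):
        # Called whenever GetQuorum() serves a quorum to Platform.
        self.last_used[(llmq_type, quorum_hash)] = time.time()

    def is_in_use(self, llmq_type, quorum_hash):
        # Called by the cleanup thread: keep data for recently used quorums.
        ts = self.last_used.get((llmq_type, quorum_hash))
        return ts is not None and time.time() - ts < self.max_age_secs
```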
## What was done?
Please see the individual commits.
## How Has This Been Tested?
Ran tests, ran a node (~~in progress~~ looks stable).
## Breaking Changes
n/a
## Checklist:
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e
tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone
## Issue being fixed or feature implemented
Functional test scripts were not executable (file mode `100644` instead
of `100755`), so they couldn't be run directly.
## What was done?
`chmod +x test/functional/*.py`
## How Has This Been Tested?
These tests can now be run directly, e.g. `./test/functional/rpc_quorum.py`.
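For context, direct execution works because each functional test already
begins with a shebang; the `chmod` only adds the missing executable bit:

```python
#!/usr/bin/env python3
# First line of every functional test. With the executable bit set, the
# kernel uses this interpreter when ./test/functional/rpc_quorum.py is
# invoked directly instead of via `python3 test/functional/rpc_quorum.py`.
```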
## Breaking Changes
n/a
## Checklist:
- [x] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e
tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone _(for repository
code-owners and collaborators only)_
## Issue being fixed or feature implemented
`instantlock` and `chainlock` are broken in `getspecialtxes`
kudos to @thephez for finding the issue
## What was done?
Pass the correct hash, and rename the variable to be self-describing.
## How Has This Been Tested?
Ran `getspecialtxes` on a node with and without the patch.
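For reference, a quick manual check from the functional test framework
could look like this (argument values are illustrative; per the RPC help
the arguments are blockhash, type, count, skip, verbosity):

```python
# Illustrative check, not the committed test: with verbosity=2 each
# returned tx object includes instantlock/chainlock fields, which should
# now reflect the real lock status instead of always being false.
node = self.nodes[0]  # inside a functional test
block_hash = node.getbestblockhash()
for tx in node.getspecialtxes(block_hash, -1, 10, 0, 2):
    print(tx['txid'], tx['instantlock'], tx['chainlock'])
```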
## Breaking Changes
`instantlock` and `chainlock` now show actual values instead of always
being `false` (not sure if that qualifies as "breaking" though).
## Checklist:
- [x] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e
tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone _(for repository
code-owners and collaborators only)_
## Issue being fixed or feature implemented
Once Platform is live, there could be an edge case where a CL arrives
at an EvoNode faster through the Platform quorum than through regular
P2P propagation.
## What was done?
This PR introduces a new RPC `submitchainlock` with the following three
mandatory parameters: `blockHash`, `signature` and `height`.
Besides some basic checks, the RPC behaves as follows:
- If the block is unknown, the RPC returns an error (this could happen
if the node is stuck).
- If the signature does not verify, the RPC returns an error.
- If the node already has this CL, the RPC returns true.
- If the node doesn't have this CL, it inserts it, broadcasts it through
the inv system and returns true.
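A minimal invocation sketch (hex values elided; the call shape follows
the parameter list above):

```python
# Hypothetical call from a functional test: submit a ChainLock received
# out-of-band. Returns true whether the CL was already known or was just
# inserted; raises an RPC error for unknown blocks or bad signatures.
ok = node.submitchainlock(block_hash, cl_signature, cl_height)
assert ok
```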
## How Has This Been Tested?
`feature_llmq_chainlocks.py` was modified with the following scenario
(sketched below):
1. node0 is isolated from the rest of the network.
2. node1 mines a new block and waits for a CL.
3. Make sure node0 doesn't know the new block/CL (by checking
`getbestchainlock()`).
4. The CL is submitted via the new RPC on node0.
5. Check `getbestchainlock()` and make sure the CL was processed and
`known_block` is false.
6. Reconnect node0.
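A condensed sketch of that scenario in the functional-test style (helper
and field names follow the framework, but the exact calls here are
illustrative, not the committed test):

```python
self.isolate_node(0)                                    # step 1
self.nodes[1].generate(1)                               # step 2
self.wait_for_chainlocked_block(self.nodes[1],
                                self.nodes[1].getbestblockhash())

cl = self.nodes[1].getbestchainlock()
assert self.nodes[0].getbestblockhash() != cl['blockhash']  # step 3

# step 4: hand the CL to the isolated node via the new RPC
assert self.nodes[0].submitchainlock(
    cl['blockhash'], cl['signature'], cl['height'])

cl0 = self.nodes[0].getbestchainlock()                  # step 5
assert cl0['blockhash'] == cl['blockhash']
assert not cl0['known_block']  # node0 still lacks the block itself

self.reconnect_isolated_node(0, 1)                      # step 6
```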
## Breaking Changes
no
## Checklist:
- [x] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [x] I have added or updated relevant unit/integration/functional/e2e
tests
- [ ] I have made corresponding changes to the documentation
- [x] I have assigned this pull request to a milestone _(for repository
code-owners and collaborators only)_
---------
Co-authored-by: UdjinM6 <UdjinM6@users.noreply.github.com>
Co-authored-by: thephez <thephez@users.noreply.github.com>
1ea11e10acd60807a06adea5ecf553974a1b0346 doc: link to managing-wallets from doc readme (fanquake)
Pull request description:
This was forgotten in #22523.
ACKs for top commit:
achow101:
ACK 1ea11e10acd60807a06adea5ecf553974a1b0346
jarolrod:
ACK 1ea11e10acd60807a06adea5ecf553974a1b0346
Tree-SHA512: b82664b282cc0fe733b752c011621593df0f846d2188f12dbc5fedb7ffed2bd161293ce2a369ca973926030795b5f7acde7a1cbf5e337042a6f665906069c656