## Issue being fixed or feature implemented
We had forgotten to harden dip20 and dip24 activation
## What was done?
Hardened dip20 and dip24 activation
## How Has This Been Tested?
Hasn't been tested yet; an `-assumevalid=0` reindex should be performed
## Breaking Changes
Hopefully none
## Checklist:
_Go over all the following points, and put an `x` in all the boxes that
apply._
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have added or updated relevant unit/integration/functional/e2e
tests
- [ ] I have made corresponding changes to the documentation
- [ ] I have assigned this pull request to a milestone _(for repository
code-owners and collaborators only)_
---------
Co-authored-by: Konstantin Akimov <knstqq@gmail.com>
Co-authored-by: UdjinM6 <UdjinM6@users.noreply.github.com>
e4f4ea47ebf7774fb6f445adde7bf7ea71fa05a1 lint: Catch use of [] or {} as default parameter values in Python functions (practicalswift)
25dd86715039586d92176eee16e9c6644d2547f0 Avoid using mutable default parameter values (practicalswift)
Pull request description:
Avoid a common Python default-parameter gotcha where mutable `dict`/`list` objects are used as default parameter values.
Examples of this gotcha caught during review:
* https://github.com/bitcoin/bitcoin/pull/16673#discussion_r317415261
* https://github.com/bitcoin/bitcoin/pull/14565#discussion_r241942304
Perhaps surprisingly, this is how mutable list and dictionary default parameter values behave in Python:
```
>>> def f(i, j=[], k={}):
... j.append(i)
... k[i] = True
... return j, k
...
>>> f(1)
([1], {1: True})
>>> f(1)
([1, 1], {1: True})
>>> f(2)
([1, 1, 2], {1: True, 2: True})
```
In contrast to:
```
>>> def f(i, j=None, k=None):
... if j is None:
... j = []
... if k is None:
... k = {}
... j.append(i)
... k[i] = True
... return j, k
...
>>> f(1)
([1], {1: True})
>>> f(1)
([1], {1: True})
>>> f(2)
([2], {2: True})
```
The latter is typically the intended behaviour.
This PR fixes two instances of this and adds a check guarding against this gotcha going forward :-)
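For illustration, a minimal standalone sketch of what such a check could look like, assuming a line-based regex scan (the merged check is a separate lint script; this approximation only catches single-line `def` signatures):
```
import re
import sys

# Illustrative approximation only: flags `def` lines that use [] or {}
# as a default parameter value. Multi-line signatures are not handled.
MUTABLE_DEFAULT = re.compile(r"^\s*def\s+\w+\s*\(.*=\s*(\[\]|\{\})")

def lint_file(path):
    """Yield (line number, line) for each offending function definition."""
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            if MUTABLE_DEFAULT.search(line):
                yield lineno, line.rstrip()

if __name__ == "__main__":
    found = False
    for path in sys.argv[1:]:
        for lineno, line in lint_file(path):
            print(f"{path}:{lineno}: mutable default parameter value: {line}")
            found = True
    sys.exit(1 if found else 0)
```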
ACKs for top commit:
Sjors:
Oh Python... ACK e4f4ea47ebf7774fb6f445adde7bf7ea71fa05a1. Testing tip: swap the two commits.
Tree-SHA512: 56e14d24fc866211a20185c9fdb274ed046c3aed2dc0e07699e58b6f9fa3b79f6d0c880fb02d72b7fe5cc5eb7c0ff6da0ead33123344e1a872209370c2e49e3f
## Issue being fixed or feature implemented
This should speed up the `feature_llmq_data_recovery.py` test by ~30% (from
500+ seconds to ~350 seconds running locally), as well as all other tests
that use any quorum, such as `feature_llmq_simplepose.py`.
CI run time also decreased noticeably: 168 min -> 131 min.
21 jobs for
[pr-5091/knst/dash/functional-tests-delays](https://gitlab.com/dashpay/dash/-/commits/pr-5091/knst/dash/functional-tests-delays)
in 131 minutes and 20 seconds (queued for 4 seconds)
vs some other pull request:
23 jobs for
[pr-5100/UdjinM6/dash/fix_GetStateFor_perf](https://gitlab.com/dashpay/dash/-/commits/pr-5100/UdjinM6/dash/fix_GetStateFor_perf)
in 195 minutes and 33 seconds (queued for 28 minutes and 13 seconds)
## What was done?
Decreased delays in functional tests.
## How Has This Been Tested?
I ran the tests several times locally and ran CI/CD to verify they don't fail.
## Breaking Changes
No breaking changes.
## Checklist:
<!--- Go over all the following points, and put an `x` in all the boxes
that apply. -->
- [x] I have performed a self-review of my own code
**For repository code-owners and collaborators only**
- [x] I have assigned this pull request to a milestone
* tests: extend "age" period in `feature_llmq_connections.py`
see `NOTE`
* tests: sleep more in `wait_until` by default
Avoid overloading the RPC interface with 20+ requests per second; 2 per second should be enough (see the polling sketch after this commit list).
* tests: various fixes in `activate_dip0024`
- lower batch size
- no fast mode
- disable spork17 while mining
- bump mocktime on every generate call
* tests: bump mocktime on generate in `activate_dip8`
* tests: fix `reindex` option in `restart_mn`
Make sure nodes actually finished reindexing before moving any further.
* tests: trigger recovery threads and wait on mn restarts
* tests: sync blocks in `wait_for_quorum_data`
* tests: bump disconnect timeouts in `p2p_invalid_messages.py`
A timeout of 1 second is too low for busy nodes.
* tests: Wait for addrv2 processing before bumping mocktime in p2p_addrv2_relay.py
* tests: use `timeout_scale` option in `get_recovered_sig` and `isolate_node`
* tests: fix `wait_for...`s
* tests: fix `close_mn_port` banning test
* Bump MASTERNODE_SYNC_RESET_SECONDS to 900
This helps to avoid issues with 10m+ bump_mocktime on isolated nodes in feature_llmq_is_retroactive.py and feature_llmq_simplepose.py.
* style: fix extra whitespace
Co-authored-by: PastaPastaPasta <6443210+PastaPastaPasta@users.noreply.github.com>
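A minimal sketch of the `wait_until` polling pattern tuned above, assuming a simplified signature (the framework's actual helper differs; the 0.5s default here yields the ~2 requests per second mentioned in the commit):
```
import time

def wait_until(predicate, *, timeout=60, sleep=0.5):
    """Poll `predicate` until it returns True or `timeout` expires.

    Sketch of the polling loop being tuned: with sleep=0.5 a busy node
    sees ~2 predicate-driven RPC calls per second instead of the 20+
    that a 0.05s default would produce. Simplified assumption, not the
    framework's exact signature.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return
        time.sleep(sleep)
    raise AssertionError(f"predicate not satisfied within {timeout}s")
```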
* llmq|init|test: Add "mode" to -llmq-qvvec-sync parameter
This changes the parameter from `-llmq-qvvec-sync=<quorum_name>` to `-llmq-qvvec-sync=<quorum_name:mode>`
With the following definitions:
- `quorum_name`: Internal name of the quorum type
- `mode=0` - Always sync from all quorums of the type defined by `quorum_name`
- `mode=1` - Sync only if the node itself is a member of any quorum of the type defined by `quorum_name`
`-llmq-qvvec-sync=llmq_100_67:0` always requests qvvecs from all `LLMQ_100_67` quorums.
`-llmq-qvvec-sync=llmq_100_67:1` requests them only if the node is a member of a quorum of that type (see the parsing sketch after this commit list).
This means that if Platform enables this on all MNs with `mode=0`, all nodes
will ask each new quorum for its verification vector instead
of only `24*100` at max.
* llmq: Adjust GetQuorumRecoveryStartOffset to use all MNs
* Turn `QvvecSyncMode` into `enum class`
Co-authored-by: UdjinM6 <UdjinM6@users.noreply.github.com>
* llmq: Implement automated DKG recovery threads
* llmq: Implement quorum verification vector sync
* init: Validate quorum data recovery related command line parameter
* test: Add quorum_data_request_timeout_seconds in DashTestFramework
* test: Test quorum data recovery in feature_llmq_data_recovery.py
* test: Add feature_llmq_data_recovery.py to BASE_SCRIPTS
* test: Fix quorum_data_request_expiration_timeout in wait_for_quorum_data
* test: Always test the existence of secretKeyShare in test_mn_quorum_data
With this change it also validates that "secretKeyShare" is not in `quorum_info` if it's not expected to be there. Before, this was basically not tested.
* llmq|test: Use bool as argument type for -llmq-data-recovery
* llmq: Always set nTimeLastSuccess to 0
* test: Set -llmq-data-recovery=0 in p2p_quorum_data.py
* test: Simplify test_mns
Co-authored-by: UdjinM6 <UdjinM6@users.noreply.github.com>
* refactor: pass CQuorumCPtr to StartQuorumDataRecoveryThread
* test: Fix thread name in comment
Co-authored-by: UdjinM6 <UdjinM6@users.noreply.github.com>
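For reference, a hedged Python sketch of parsing the `-llmq-qvvec-sync=<quorum_name:mode>` format described above; the function name is hypothetical, and the real validation is done in the C++ init code:
```
def parse_qvvec_sync_entry(entry):
    """Parse one -llmq-qvvec-sync=<quorum_name:mode> value.

    Hypothetical helper that mirrors the documented format only;
    the actual validation lives in the C++ init code.
    """
    name, sep, mode_str = entry.partition(":")
    if not sep or not name or not mode_str:
        raise ValueError(f"expected <quorum_name:mode>, got {entry!r}")
    if mode_str not in ("0", "1"):
        raise ValueError(f"mode must be 0 (always) or 1 (only-if-member): {entry!r}")
    # mode 0: always sync from all quorums of this type
    # mode 1: sync only if this node is a member of a quorum of this type
    return name, int(mode_str)

# Example: parse_qvvec_sync_entry("llmq_100_67:0") -> ("llmq_100_67", 0)
```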