## Issue being fixed or feature implemented
This should speed up the `feature_llmq_data_recovery.py` test by roughly 30%, from
500+ seconds to ~350 seconds (running locally), as well as all other tests that
use quorums, such as `feature_llmq_simplepose.py`.
CI run time also decreases noticeably: 168 min -> 131 min.
For example:
- 21 jobs for [pr-5091/knst/dash/functional-tests-delays](https://gitlab.com/dashpay/dash/-/commits/pr-5091/knst/dash/functional-tests-delays) in 131 minutes and 20 seconds (queued for 4 seconds)
- vs. 23 jobs for another pull request, [pr-5100/UdjinM6/dash/fix_GetStateFor_perf](https://gitlab.com/dashpay/dash/-/commits/pr-5100/UdjinM6/dash/fix_GetStateFor_perf), in 195 minutes and 33 seconds (queued for 28 minutes and 13 seconds)
## What was done?
Decreased delays in functional tests.
## How Has This Been Tested?
Ran the affected tests several times locally and ran CI/CD to confirm they don't fail.
## Breaking Changes
No breaking changes.
## Checklist:
<!--- Go over all the following points, and put an `x` in all the boxes
that apply. -->
- [x] I have performed a self-review of my own code
**For repository code-owners and collaborators only**
- [x] I have assigned this pull request to a milestone
* tests: extend "age" period in `feature_llmq_connections.py`
see `NOTE`
* tests: sleep more in `wait_until` by default
Avoid overloading RPC with 20+ requests per second; 2 per second should be enough.
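A minimal sketch of the idea, assuming a Bitcoin-style `wait_until` helper; the names and the exact default are illustrative, only the "about two polls per second" target comes from this PR:
```python
import time

def wait_until(predicate, *, timeout=60, sleep=0.5):
    """Poll `predicate` until it returns True or `timeout` seconds pass.

    A 0.5 s default sleep means roughly two RPC polls per second
    instead of 20+ with a very small sleep value.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return
        time.sleep(sleep)
    raise AssertionError("wait_until() timed out")
```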
* tests: various fixes in `activate_dip0024` (see the sketch after this list)
- lower batch size
- no fast mode
- disable spork17 while mining
- bump mocktime on every generate call
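A rough sketch of the adjusted mining loop, assuming DashTestFramework helpers such as `bump_mocktime` and a `softfork_active`-style check; the batch size, spork RPC name, and spork values here are illustrative, not the exact implementation:
```python
from test_framework.blocktools import softfork_active  # assumed helper

def activate_dip0024_sketch(self):
    # Keep DKG quiet while mining: the PR disables spork17
    # (SPORK_17_QUORUM_DKG_ENABLED) during this phase (illustrative call).
    self.nodes[0].sporkupdate("SPORK_17_QUORUM_DKG_ENABLED", 4070908800)

    batch = 10  # smaller batches instead of one huge generate call
    while not softfork_active(self.nodes[0], "dip0024"):
        # Bump mocktime on every generate call so node timers keep
        # moving together with the chain.
        self.bump_mocktime(batch)
        self.nodes[0].generate(batch)
        self.sync_blocks()

    # Re-enable DKG once activation is done (illustrative call).
    self.nodes[0].sporkupdate("SPORK_17_QUORUM_DKG_ENABLED", 0)
```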
* tests: bump mocktime on generate in `activate_dip8`
* tests: fix `reindex` option in `restart_mn`
Make sure nodes actually finished reindexing before moving any further.
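A minimal sketch of the reindex part, assuming standard test framework helpers; the masternode handle (`mn.node`) and the block-count check against node 0 are illustrative:
```python
def restart_mn_sketch(self, mn, reindex=False):
    args = ["-reindex"] if reindex else []
    self.restart_node(mn.node.index, extra_args=args)
    if reindex:
        # Make sure the node actually finished reindexing before the
        # test moves on, instead of racing against the reindex thread.
        expected_height = self.nodes[0].getblockcount()
        self.wait_until(lambda: mn.node.getblockcount() == expected_height)
```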
* tests: trigger recovery threads and wait on mn restarts
* tests: sync blocks in `wait_for_quorum_data`
* tests: bump disconnect timeouts in `p2p_invalid_messages.py`
A 1-second timeout is too low for busy nodes.
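For example (the value is illustrative; the point is simply that one second is too tight):
```python
# Give a busy node more than one second to actually drop the connection.
conn.wait_for_disconnect(timeout=10)
```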
* tests: Wait for addrv2 processing before bumping mocktime in `p2p_addrv2_relay.py`
* tests: use `timeout_scale` option in `get_recovered_sig` and `isolate_node`
* tests: fix `wait_for...`s
* tests: fix `close_mn_port` banning test
* Bump MASTERNODE_SYNC_RESET_SECONDS to 900
This helps avoid issues with 10m+ `bump_mocktime` jumps on isolated nodes in `feature_llmq_is_retroactive.py` and `feature_llmq_simplepose.py`.
* style: fix extra whitespace
Co-authored-by: PastaPastaPasta <6443210+PastaPastaPasta@users.noreply.github.com>
* llmq|init|test: Add "mode" to -llmq-qvvec-sync parameter
This changes the parameter from `-llmq-qvvec-sync=<quorum_name>` to `-llmq-qvvec-sync=<quorum_name:mode>`, with the following definitions:
- `quorum_name`: internal name of the quorum type
- `mode=0`: always sync from all quorums of the type defined by `quorum_name`
- `mode=1`: sync only if this node is a member of any other quorum of the type defined by `quorum_name`
Examples:
- `-llmq-qvvec-sync=llmq_100_67:0` always requests qvvecs from all `LLMQ_100_67` quorums.
- `-llmq-qvvec-sync=llmq_100_67:1` requests them only if the node is a member of an `LLMQ_100_67` quorum.
This means that if Platform enables this on all MNs with `mode=0`, all nodes will ask a new quorum for its verification vector, instead of only `24*100` nodes at most.
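A hypothetical usage from a functional test, combining the new parameter with the `-llmq-data-recovery` switch mentioned below; the node indices and quorum type are illustrative:
```python
# Mode 0: always request verification vectors of all LLMQ_100_67 quorums.
self.restart_node(0, extra_args=["-llmq-data-recovery=1",
                                 "-llmq-qvvec-sync=llmq_100_67:0"])

# Mode 1: request them only when this masternode is itself a member of
# another LLMQ_100_67 quorum.
self.restart_node(1, extra_args=["-llmq-data-recovery=1",
                                 "-llmq-qvvec-sync=llmq_100_67:1"])
```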
* llmq: Adjust GetQuorumRecoveryStartOffset to use all MNs
* Turn `QvvecSyncMode` into `enum class`
Co-authored-by: UdjinM6 <UdjinM6@users.noreply.github.com>
* llmq: Implement automated DKG recovery threads
* llmq: Implement quorum verification vector sync
* init: Validate quorum data recovery related command line parameters
* test: Add quorum_data_request_timeout_seconds in DashTestFramework
* test: Test quorum data recovery in feature_llmq_data_recovery.py
* test: Add feature_llmq_data_recovery.py to BASE_SCRIPTS
* test: Fix quorum_data_request_expiration_timeout in wait_for_quorum_data
* test: Always test the existence of secretKeyShare in test_mn_quorum_data
With this change it also validates that "secretKeyShare" is not in `quorum_info` if it's not expected to be there; previously this side was essentially not tested.
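A minimal sketch of the intended check, assuming `quorum_info` is the result of the `quorum info` RPC and `expect_secret_key_share` is computed by the test; the names are illustrative:
```python
if expect_secret_key_share:
    assert "secretKeyShare" in quorum_info
else:
    # Explicitly assert the negative case too; previously this side
    # was effectively untested.
    assert "secretKeyShare" not in quorum_info
```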
* llmq|test: Use bool as argument type for -llmq-data-recovery
* llmq: Always set nTimeLastSuccess to 0
* test: Set -llmq-data-recovery=0 in p2p_quorum_data.py
* test: Simplify test_mns
Co-authored-by: UdjinM6 <UdjinM6@users.noreply.github.com>
* refactor: pass CQuorumCPtr to StartQuorumDataRecoveryThread
* test: Fix thread name in comment
Co-authored-by: UdjinM6 <UdjinM6@users.noreply.github.com>