Instead of propagating all sig shares to all LLMQ members, each member now
sends its individual sig share to a single member, which is then responsible
for recovering and propagating the recovered signature. Every second, all
members repeat this process for another target/recovering member, until a
recovered signature appears.
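Conceptually, every node evaluates the same deterministic function so that all
nodes agree on who the current recovery member is. Below is a minimal Python
sketch of that rotation; `select_recovery_member` and the exact hash inputs
(session id, attempt counter, member id) are illustrative assumptions, not the
actual C++ implementation:

```python
import hashlib

def select_recovery_member(members, sign_id, attempt):
    """Pick the member that should receive everyone's sig share during
    the given one-second attempt window.

    members: iterable of member ids (e.g. proTxHashes as bytes)
    sign_id: identifier of the signing session (bytes)
    attempt: counter incremented every second until a recovered
             signature appears
    """
    def score(member_id):
        # Same inputs on every node, so every node picks the same target.
        h = hashlib.sha256()
        h.update(sign_id)
        h.update(attempt.to_bytes(4, "little"))
        h.update(member_id)
        return h.digest()

    # Lowest hash wins; incrementing the attempt counter rotates the winner.
    return min(members, key=score)
```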
* [tests] Tidy up mininode.py module
Mostly a move-only change; adds a few extra comments.
* #11648 [tests] Move test_framework Bitcoin primitives into separate module
I manually recreated this commit, since we have a lot of conflicts in mininode. However, since it is primarily just a move, it was easy to recreate.
Signed-off-by: Pasta <pasta@dashboost.org>
* add import to messages.py
Signed-off-by: Pasta <pasta@dashboost.org>
* move import from mininode.py to messages.py
Signed-off-by: Pasta <pasta@dashboost.org>
* fix test failure
Signed-off-by: Pasta <pasta@dashboost.org>
* remove empty line at top of messages.py
Signed-off-by: pasta <pasta@dashboost.org>
* alphabetize MESSAGEMAP, separated by whether a message is Dash-specific or not
Signed-off-by: pasta <pasta@dashboost.org>
* remove accidentally added feefilter message
Signed-off-by: pasta <pasta@dashboost.org>
* Add missing getmnlistd/mnlistdiff messages to MESSAGEMAP
Co-authored-by: John Newbery <john@johnnewbery.com>
Co-authored-by: UdjinM6 <UdjinM6@users.noreply.github.com>
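For orientation, here is a hedged excerpt of the resulting MESSAGEMAP layout
after the two changes above (the map lives in the test framework and maps wire
command bytes to message classes; only a few entries are shown, and the
Dash-specific class names are assumed from Dash's test framework):

```python
from test_framework.messages import (
    msg_addr,
    msg_block,
    msg_getmnlistd,
    msg_mnlistdiff,
)

# Bitcoin messages first, then Dash-specific ones, each group alphabetized.
MESSAGEMAP = {
    b"addr": msg_addr,
    b"block": msg_block,
    # ... remaining Bitcoin messages ...
    b"getmnlistd": msg_getmnlistd,
    b"mnlistdiff": msg_mnlistdiff,
    # ... remaining Dash-specific messages ...
}
```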
When using the proTxHash naively, we might end up with a few unlucky MNs
which always have to perform most of the outbound connections, while other
unlucky MNs would always have to wait for inbound connections. Hashing
the proTxHash together with the quorum hash makes this more random.
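A minimal sketch of the idea, assuming a hypothetical `quorum_connection_order`
helper and plain SHA256 over the concatenated hashes (the actual C++ code may
combine them differently):

```python
import hashlib

def quorum_connection_order(pro_tx_hashes, quorum_hash):
    """Order quorum members for deciding who connects to whom.

    Sorting by sha256(proTxHash || quorumHash) instead of by the raw
    proTxHash gives each MN a different position in every quorum, so no
    single MN keeps ending up with most of the outbound connections.
    """
    def key(pro_tx_hash):
        return hashlib.sha256(pro_tx_hash + quorum_hash).digest()

    return sorted(pro_tx_hashes, key=key)
```

Each member could then, for example, open outbound connections to the members
that follow its own position in this order, so the outbound burden shifts from
quorum to quorum instead of always landing on the same nodes.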
* Remove unused jenkins stuff
* Install all dependencies in builder image
Instead of only target-specific dependencies.
* Use docker builder image for builds
* Optimize apt installations
* Move building of dependencies into separate stage
The build-depends-xxx jobs will create artifacts (depends/$HOST) which are
then pulled in by the build jobs with the help of "needs".
* Remove use of caches from develop branch
* Use GitLab-specific extends instead of YAML anchors
* Move before_script of build_template into base_template
* Add hack for parallel installation of i686 and arm cross compilation
* Install python3-setuptools in builder image
* Remove unnecessary change-dir
* Use variables to pass BUILD_TARGET instead of relying on the job name
* Move integration tests into separate stage
* Don't use --quiet for integration tests on GitLab
This causes retries of LLMQ connections, which is required in cases
where two MNs tried to connect to each other and, due to bad timing,
then disconnected each other.
This is especially important when waiting for phase 1 (initialization),
as we might have already skipped a whole DKG session while the async DKG
session handler is still in the init phase (but for the old/skipped LLMQ).
Only sleep 100ms when we previously tried to connect to a MN. The back-off
logic in ThreadOpenMasternodeConnections prevents too many unsuccessful
connects to offline/bad nodes.
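Putting the last two paragraphs together, a hedged Python sketch of the
scheduling (the real loop is the C++ ThreadOpenMasternodeConnections;
`get_pending_mn` and `try_connect` are hypothetical callbacks):

```python
import time

def masternode_connect_loop(get_pending_mn, try_connect):
    """Keep attempting masternode connections without busy-waiting.

    get_pending_mn: returns the next MN due for a connection attempt,
        honoring per-MN back-off, or None if nothing is due
    try_connect: attempts the actual connection (failures feed the
        back-off so offline/bad nodes are retried less often)
    """
    while True:
        mn = get_pending_mn()
        if mn is None:
            # Nothing due for an attempt: wait a while before rechecking.
            time.sleep(1.0)
            continue
        try_connect(mn)
        # We just tried a connect, so only sleep 100ms before looking
        # for the next candidate; the back-off (inside get_pending_mn)
        # prevents hammering unreachable MNs.
        time.sleep(0.1)
```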