#!/usr/bin/env python3
# Copyright (c) 2010 ArtForz -- public domain half-a-node
# Copyright (c) 2012 Jeff Garzik
# Copyright (c) 2010-2020 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Bitcoin test framework primitive and message structures
CBlock, CTransaction, CBlockHeader, CTxIn, CTxOut, etc....:
data structures that should map to corresponding structures in
bitcoin/primitives
msg_block, msg_tx, msg_headers, etc.:
data structures that represent network messages
ser_*, deser_*: functions that handle serialization/deserialization.
Classes use __slots__ to ensure extraneous attributes aren't accidentally added
by tests, compromising their intended effect.
"""
from base64 import b32decode, b32encode
import copy
from collections import namedtuple
import hashlib
from io import BytesIO
import random
import socket
import struct
import time
from test_framework.crypto.siphash import siphash256
from test_framework.util import assert_equal
import dash_hash
MIN_VERSION_SUPPORTED = 60001
MY_VERSION = 70231 # NO_LEGACY_ISLOCK_PROTO_VERSION
MY_SUBVERSION = "/python-p2p-tester:0.0.3%s/"
MY_RELAY = 1 # from version 70001 onwards, fRelay should be appended to version messages (BIP37)
MAX_LOCATOR_SZ = 101
MAX_BLOCK_SIZE = 1000000
MAX_BLOOM_FILTER_SIZE = 36000
MAX_BLOOM_HASH_FUNCS = 50
COIN = 100000000 # 1 DASH in duffs (the Dash equivalent of satoshis)
MAX_MONEY = 21000000 * COIN
BIP125_SEQUENCE_NUMBER = 0xfffffffd # Sequence number that is BIP 125 opt-in and BIP 68 opt-out
MAX_PROTOCOL_MESSAGE_LENGTH = 3 * 1024 * 1024 # Maximum length of incoming protocol messages
MAX_HEADERS_RESULTS = 2000 # Number of headers sent in one getheaders result
MAX_INV_SIZE = 50000 # Maximum number of entries in an 'inv' protocol message
NODE_NETWORK = (1 << 0)
NODE_BLOOM = (1 << 2)
NODE_COMPACT_FILTERS = (1 << 6)
NODE_NETWORK_LIMITED = (1 << 10)
NODE_HEADERS_COMPRESSED = (1 << 11)
MSG_TX = 1
MSG_BLOCK = 2
MSG_FILTERED_BLOCK = 3
MSG_CMPCT_BLOCK = 20
MSG_TYPE_MASK = 0xffffffff >> 2
FILTER_TYPE_BASIC = 0
# Serialization/deserialization tools
def sha256(s):
return hashlib.new('sha256', s).digest()
def hash256(s):
return sha256(sha256(s))
def dashhash(s):
return dash_hash.getPoWHash(s)
def ser_compact_size(l):
r = b""
if l < 253:
r = struct.pack("B", l)
elif l < 0x10000:
r = struct.pack("<BH", 253, l)
elif l < 0x100000000:
r = struct.pack("<BI", 254, l)
else:
r = struct.pack("<BQ", 255, l)
return r
def deser_compact_size(f):
nit = struct.unpack("<B", f.read(1))[0]
if nit == 253:
nit = struct.unpack("<H", f.read(2))[0]
elif nit == 254:
nit = struct.unpack("<I", f.read(4))[0]
elif nit == 255:
nit = struct.unpack("<Q", f.read(8))[0]
return nit
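# A quick sanity sketch of the CompactSize rules implemented above (values
# follow directly from the encoding, not from a live node):
#   ser_compact_size(252)     == b'\xfc'                  # single byte
#   ser_compact_size(253)     == b'\xfd\xfd\x00'          # 0xfd marker + uint16
#   ser_compact_size(0x10000) == b'\xfe\x00\x00\x01\x00'  # 0xfe marker + uint32
#   deser_compact_size(BytesIO(ser_compact_size(5000))) == 5000  # round-trips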
def deser_string(f):
nit = deser_compact_size(f)
return f.read(nit)
def ser_string(s):
return ser_compact_size(len(s)) + s
def deser_uint256(f):
r = 0
for i in range(8):
t = struct.unpack("<I", f.read(4))[0]
r += t << (i * 32)
return r
def ser_uint256(u):
rs = b""
for _ in range(8):
rs += struct.pack("<I", u & 0xFFFFFFFF)
u >>= 32
return rs
def uint256_from_str(s):
r = 0
t = struct.unpack("<IIIIIIII", s[:32])
for i in range(8):
r += t[i] << (i * 32)
return r
def uint256_to_string(uint256):
return '%064x' % uint256
def uint256_from_compact(c):
nbytes = (c >> 24) & 0xFF
v = (c & 0xFFFFFF) << (8 * (nbytes - 3))
return v
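# Example: the familiar minimum-difficulty compact target 0x1d00ffff has
# nbytes = 0x1d (29), so the 0x00ffff mantissa is shifted left 8 * (29 - 3) bits:
#   uint256_to_string(uint256_from_compact(0x1d00ffff))
#   == '00000000ffff0000000000000000000000000000000000000000000000000000'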
# deser_function_name: Allow for an alternate deserialization function on the
# entries in the vector.
def deser_vector(f, c, deser_function_name=None):
nit = deser_compact_size(f)
r = []
for _ in range(nit):
t = c()
if deser_function_name:
getattr(t, deser_function_name)(f)
else:
t.deserialize(f)
r.append(t)
return r
# ser_function_name: Allow for an alternate serialization function on the
# entries in the vector (we use this for serializing addrv2 messages).
def ser_vector(l, ser_function_name=None):
r = ser_compact_size(len(l))
for i in l:
if ser_function_name:
r += getattr(i, ser_function_name)()
else:
r += i.serialize()
return r
def deser_uint256_vector(f):
nit = deser_compact_size(f)
r = []
for _ in range(nit):
t = deser_uint256(f)
r.append(t)
return r
def ser_uint256_vector(l):
r = ser_compact_size(len(l))
for i in l:
r += ser_uint256(i)
return r
def deser_dyn_bitset(f, bytes_based):
if bytes_based:
nb = deser_compact_size(f)
n = nb * 8
else:
n = deser_compact_size(f)
nb = (n + 7) // 8
b = f.read(nb)
r = []
for i in range(n):
r.append((b[i // 8] & (1 << (i % 8))) != 0)
return r
def ser_dyn_bitset(l, bytes_based):
n = len(l)
nb = (n + 7) // 8
r = [0] * nb
for i in range(n):
r[i // 8] |= (1 if l[i] else 0) << (i % 8)
if bytes_based:
r = ser_compact_size(nb) + bytes(r)
else:
r = ser_compact_size(n) + bytes(r)
return r
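# Bit i of the list lands in byte i // 8 at bit position i % 8, so for example
# (a worked sketch, not exercised by tests here):
#   ser_dyn_bitset([True, False, True], bytes_based=False) == b'\x03\x05'
#   deser_dyn_bitset(BytesIO(b'\x03\x05'), False) == [True, False, True]
# With bytes_based=True the prefix counts bytes, so a round-trip pads the
# result out to a multiple of 8 bits.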
def from_hex(obj, hex_string):
"""Deserialize from a hex string representation (e.g. from RPC)
Note that there is no complementary helper like e.g. `to_hex` for the
inverse operation. To serialize a message object to a hex string, simply
use obj.serialize().hex()"""
obj.deserialize(BytesIO(bytes.fromhex(hex_string)))
return obj
def tx_from_hex(hex_string):
"""Deserialize from hex string to a transaction object"""
return from_hex(CTransaction(), hex_string)
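# Typical round-trip usage, where hex_string stands in for RPC output such as
# getrawtransaction (illustrative only):
#   tx = tx_from_hex(hex_string)
#   assert tx.serialize().hex() == hex_string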
# Objects that map to dashd objects, which can be serialized/deserialized
class CService:
__slots__ = ("ip", "port")
def __init__(self):
self.ip = ""
self.port = 0
def deserialize(self, f):
self.ip = socket.inet_ntop(socket.AF_INET6, f.read(16))
self.port = struct.unpack(">H", f.read(2))[0]
def serialize(self):
r = b""
r += socket.inet_pton(socket.AF_INET6, self.ip)
r += struct.pack(">H", self.port)
return r
def __repr__(self):
return "CService(ip=%s port=%i)" % (self.ip, self.port)
class CAddress:
__slots__ = ("net", "ip", "nServices", "port", "time")
# see https://github.com/bitcoin/bips/blob/master/bip-0155.mediawiki
NET_IPV4 = 1
NET_I2P = 5
ADDRV2_NET_NAME = {
NET_IPV4: "IPv4",
NET_I2P: "I2P"
}
ADDRV2_ADDRESS_LENGTH = {
NET_IPV4: 4,
NET_I2P: 32
}
I2P_PAD = "===="
def __init__(self):
self.time = 0
self.nServices = 1
self.net = self.NET_IPV4
self.ip = "0.0.0.0"
self.port = 0
def __eq__(self, other):
return self.net == other.net and self.ip == other.ip and self.nServices == other.nServices and self.port == other.port and self.time == other.time
def deserialize(self, f, *, with_time=True):
"""Deserialize from addrv1 format (pre-BIP155)"""
if with_time:
# VERSION messages serialize CAddress objects without time
self.time = struct.unpack("<I", f.read(4))[0]
self.nServices = struct.unpack("<Q", f.read(8))[0]
# We only support IPv4, so skip the 12-byte prefix and read the next 4 bytes as the IPv4 address.
f.read(12)
self.net = self.NET_IPV4
self.ip = socket.inet_ntoa(f.read(4))
self.port = struct.unpack(">H", f.read(2))[0]
def serialize(self, *, with_time=True):
"""Serialize in addrv1 format (pre-BIP155)"""
assert self.net == self.NET_IPV4
r = b""
if with_time:
# VERSION messages serialize CAddress objects without time
r += struct.pack("<I", self.time)
r += struct.pack("<Q", self.nServices)
r += b"\x00" * 10 + b"\xff" * 2
r += socket.inet_aton(self.ip)
r += struct.pack(">H", self.port)
return r
def deserialize_v2(self, f):
"""Deserialize from addrv2 format (BIP155)"""
self.time = struct.unpack("<I", f.read(4))[0]
self.nServices = deser_compact_size(f)
self.net = struct.unpack("B", f.read(1))[0]
assert self.net in (self.NET_IPV4, self.NET_I2P)
address_length = deser_compact_size(f)
assert address_length == self.ADDRV2_ADDRESS_LENGTH[self.net]
addr_bytes = f.read(address_length)
if self.net == self.NET_IPV4:
self.ip = socket.inet_ntoa(addr_bytes)
else:
self.ip = b32encode(addr_bytes)[0:-len(self.I2P_PAD)].decode("ascii").lower() + ".b32.i2p"
self.port = struct.unpack(">H", f.read(2))[0]
def serialize_v2(self):
"""Serialize in addrv2 format (BIP155)"""
assert self.net in (self.NET_IPV4, self.NET_I2P)
r = b""
r += struct.pack("<I", self.time)
r += ser_compact_size(self.nServices)
r += struct.pack("B", self.net)
r += ser_compact_size(self.ADDRV2_ADDRESS_LENGTH[self.net])
if self.net == self.NET_IPV4:
r += socket.inet_aton(self.ip)
else:
sfx = ".b32.i2p"
assert self.ip.endswith(sfx)
r += b32decode(self.ip[0:-len(sfx)] + self.I2P_PAD, True)
r += struct.pack(">H", self.port)
return r
def __repr__(self):
return ("CAddress(nServices=%i net=%s addr=%s port=%i)"
% (self.nServices, self.ADDRV2_NET_NAME[self.net], self.ip, self.port))
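# Minimal usage sketch (the address/port values below are illustrative):
#   addr = CAddress()
#   addr.time, addr.nServices = 1700000000, NODE_NETWORK
#   addr.ip, addr.port = "10.0.0.1", 9999
#   copy = CAddress()
#   copy.deserialize_v2(BytesIO(addr.serialize_v2()))
#   assert copy == addr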
class CInv:
__slots__ = ("hash", "type")
typemap = {
0: "Error",
MSG_TX: "TX",
MSG_BLOCK: "Block",
MSG_FILTERED_BLOCK: "filtered Block",
MSG_CMPCT_BLOCK: "CompactBlock",
}
def __init__(self, t=0, h=0):
self.type = t
self.hash = h
def deserialize(self, f):
self.type = struct.unpack("<I", f.read(4))[0]
self.hash = deser_uint256(f)
def serialize(self):
r = b""
r += struct.pack("<I", self.type)
r += ser_uint256(self.hash)
return r
def __repr__(self):
return "CInv(type=%s hash=%064x)" \
% (self.typemap.get(self.type, "%d" % self.type), self.hash)
def __eq__(self, other):
return isinstance(other, CInv) and self.hash == other.hash and self.type == other.type
class CBlockLocator:
__slots__ = ("nVersion", "vHave")
def __init__(self):
self.nVersion = MY_VERSION
self.vHave = []
def deserialize(self, f):
self.nVersion = struct.unpack("<i", f.read(4))[0]
self.vHave = deser_uint256_vector(f)
def serialize(self):
r = b""
r += struct.pack("<i", self.nVersion)
r += ser_uint256_vector(self.vHave)
return r
def __repr__(self):
return "CBlockLocator(nVersion=%i vHave=%s)" \
% (self.nVersion, repr(self.vHave))
class COutPoint:
__slots__ = ("hash", "n")
def __init__(self, hash=0, n=0xFFFFFFFF):
self.hash = hash
self.n = n
def deserialize(self, f):
self.hash = deser_uint256(f)
self.n = struct.unpack("<I", f.read(4))[0]
def serialize(self):
r = b""
r += ser_uint256(self.hash)
r += struct.pack("<I", self.n)
return r
def __repr__(self):
return "COutPoint(hash=%064x n=%i)" % (self.hash, self.n)
class CTxIn:
__slots__ = ("nSequence", "prevout", "scriptSig")
def __init__(self, outpoint=None, scriptSig=b"", nSequence=0):
if outpoint is None:
self.prevout = COutPoint()
else:
self.prevout = outpoint
self.scriptSig = scriptSig
self.nSequence = nSequence
def deserialize(self, f):
self.prevout = COutPoint()
self.prevout.deserialize(f)
self.scriptSig = deser_string(f)
self.nSequence = struct.unpack("<I", f.read(4))[0]
def serialize(self):
r = b""
r += self.prevout.serialize()
r += ser_string(self.scriptSig)
r += struct.pack("<I", self.nSequence)
return r
def __repr__(self):
return "CTxIn(prevout=%s scriptSig=%s nSequence=%i)" \
% (repr(self.prevout), self.scriptSig.hex(),
self.nSequence)
class CTxOut:
__slots__ = ("nValue", "scriptPubKey")
def __init__(self, nValue=0, scriptPubKey=b""):
self.nValue = nValue
self.scriptPubKey = scriptPubKey
def deserialize(self, f):
self.nValue = struct.unpack("<q", f.read(8))[0]
self.scriptPubKey = deser_string(f)
def serialize(self):
r = b""
r += struct.pack("<q", self.nValue)
r += ser_string(self.scriptPubKey)
return r
def __repr__(self):
return "CTxOut(nValue=%i.%08i scriptPubKey=%s)" \
% (self.nValue // COIN, self.nValue % COIN,
self.scriptPubKey.hex())
class CTransaction:
__slots__ = ("hash", "nLockTime", "nVersion", "sha256", "vin", "vout",
"nType", "vExtraPayload")
def __init__(self, tx=None):
if tx is None:
self.nVersion = 1
self.nType = 0
self.vin = []
self.vout = []
self.nLockTime = 0
self.vExtraPayload = None
self.sha256 = None
self.hash = None
else:
self.nVersion = tx.nVersion
self.nType = tx.nType
self.vin = copy.deepcopy(tx.vin)
self.vout = copy.deepcopy(tx.vout)
self.nLockTime = tx.nLockTime
self.vExtraPayload = tx.vExtraPayload
self.sha256 = tx.sha256
self.hash = tx.hash
def deserialize(self, f):
ver32bit = struct.unpack("<i", f.read(4))[0]
self.nVersion = ver32bit & 0xffff
self.nType = (ver32bit >> 16) & 0xffff
self.vin = deser_vector(f, CTxIn)
self.vout = deser_vector(f, CTxOut)
self.nLockTime = struct.unpack("<I", f.read(4))[0]
if self.nType != 0:
self.vExtraPayload = deser_string(f)
self.sha256 = None
self.hash = None
def serialize(self):
r = b""
ver32bit = int(self.nVersion | (self.nType << 16))
r += struct.pack("<i", ver32bit)
r += ser_vector(self.vin)
r += ser_vector(self.vout)
r += struct.pack("<I", self.nLockTime)
if self.nType != 0:
r += ser_string(self.vExtraPayload)
return r
def rehash(self):
self.sha256 = None
self.calc_sha256()
return self.hash
def calc_sha256(self):
if self.sha256 is None:
self.sha256 = uint256_from_str(hash256(self.serialize()))
self.hash = hash256(self.serialize())[::-1].hex()
def is_valid(self):
self.calc_sha256()
for tout in self.vout:
if tout.nValue < 0 or tout.nValue > MAX_MONEY:
return False
return True
# Calculate the virtual transaction size using
# serialization size (does NOT use sigops).
def get_vsize(self):
return len(self.serialize())
def __repr__(self):
return "CTransaction(nVersion=%i vin=%s vout=%s nLockTime=%i)" \
% (self.nVersion, repr(self.vin), repr(self.vout), self.nLockTime)
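# Sketch of building a bare transaction and computing its txid (the outpoint
# here is a placeholder, e.g. a coinbase-style null prevout):
#   tx = CTransaction()
#   tx.vin.append(CTxIn(COutPoint(0, 0xffffffff)))
#   tx.vout.append(CTxOut(1 * COIN, b'\x51'))  # scriptPubKey = OP_TRUE
#   txid = tx.rehash()  # hex string; also caches tx.sha256 as an integer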
class CBlockHeader:
__slots__ = ("hash", "hashMerkleRoot", "hashPrevBlock", "nBits", "nNonce",
"nTime", "nVersion", "sha256")
def __init__(self, header=None):
if header is None:
self.set_null()
else:
self.nVersion = header.nVersion
self.hashPrevBlock = header.hashPrevBlock
self.hashMerkleRoot = header.hashMerkleRoot
self.nTime = header.nTime
self.nBits = header.nBits
self.nNonce = header.nNonce
self.sha256 = header.sha256
self.hash = header.hash
self.calc_sha256()
def set_null(self):
self.nVersion = 1
self.hashPrevBlock = 0
self.hashMerkleRoot = 0
self.nTime = 0
self.nBits = 0
self.nNonce = 0
self.sha256 = None
self.hash = None
def deserialize(self, f):
self.nVersion = struct.unpack("<i", f.read(4))[0]
self.hashPrevBlock = deser_uint256(f)
self.hashMerkleRoot = deser_uint256(f)
self.nTime = struct.unpack("<I", f.read(4))[0]
self.nBits = struct.unpack("<I", f.read(4))[0]
self.nNonce = struct.unpack("<I", f.read(4))[0]
self.sha256 = None
self.hash = None
def serialize(self):
r = b""
r += struct.pack("<i", self.nVersion)
r += ser_uint256(self.hashPrevBlock)
r += ser_uint256(self.hashMerkleRoot)
r += struct.pack("<I", self.nTime)
r += struct.pack("<I", self.nBits)
r += struct.pack("<I", self.nNonce)
return r
def calc_sha256(self):
if self.sha256 is None:
r = b""
r += struct.pack("<i", self.nVersion)
r += ser_uint256(self.hashPrevBlock)
r += ser_uint256(self.hashMerkleRoot)
r += struct.pack("<I", self.nTime)
r += struct.pack("<I", self.nBits)
r += struct.pack("<I", self.nNonce)
self.sha256 = uint256_from_str(dashhash(r))
self.hash = dashhash(r)[::-1].hex()
def rehash(self):
self.sha256 = None
self.calc_sha256()
return self.sha256
def __repr__(self):
return "CBlockHeader(nVersion=%i hashPrevBlock=%064x hashMerkleRoot=%064x nTime=%s nBits=%08x nNonce=%08x)" \
% (self.nVersion, self.hashPrevBlock, self.hashMerkleRoot,
time.ctime(self.nTime), self.nBits, self.nNonce)
BLOCK_HEADER_SIZE = len(CBlockHeader().serialize())
assert_equal(BLOCK_HEADER_SIZE, 80)
class CBlock(CBlockHeader):
__slots__ = ("vtx",)
def __init__(self, header=None):
super().__init__(header)
self.vtx = []
def deserialize(self, f):
super().deserialize(f)
self.vtx = deser_vector(f, CTransaction)
def serialize(self):
r = b""
r += super().serialize()
r += ser_vector(self.vtx)
return r
# Calculate the merkle root given a vector of transaction hashes
@staticmethod
def get_merkle_root(hashes):
while len(hashes) > 1:
newhashes = []
for i in range(0, len(hashes), 2):
i2 = min(i+1, len(hashes)-1)
newhashes.append(hash256(hashes[i] + hashes[i2]))
hashes = newhashes
return uint256_from_str(hashes[0])
def calc_merkle_root(self):
hashes = []
for tx in self.vtx:
tx.calc_sha256()
hashes.append(ser_uint256(tx.sha256))
return self.get_merkle_root(hashes)
def is_valid(self):
self.calc_sha256()
target = uint256_from_compact(self.nBits)
if self.sha256 > target:
return False
for tx in self.vtx:
if not tx.is_valid():
return False
if self.calc_merkle_root() != self.hashMerkleRoot:
return False
return True
def solve(self):
self.rehash()
target = uint256_from_compact(self.nBits)
while self.sha256 > target:
self.nNonce += 1
self.rehash()
def __repr__(self):
return "CBlock(nVersion=%i hashPrevBlock=%064x hashMerkleRoot=%064x nTime=%s nBits=%08x nNonce=%08x vtx=%s)" \
% (self.nVersion, self.hashPrevBlock, self.hashMerkleRoot,
time.ctime(self.nTime), self.nBits, self.nNonce, repr(self.vtx))
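# Sketch of assembling and solving a block on top of a known tip (tip_hash,
# tip_time and coinbase_tx are assumed to come from the test; 0x207fffff is
# the usual regtest compact target):
#   block = CBlock()
#   block.hashPrevBlock, block.nTime = tip_hash, tip_time + 1
#   block.nBits = 0x207fffff
#   block.vtx = [coinbase_tx]
#   block.hashMerkleRoot = block.calc_merkle_root()
#   block.solve()  # grinds nNonce until dashhash(header) <= target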
class CompressibleBlockHeader:
__slots__ = ("bitfield", "timeOffset", "nVersion", "hashPrevBlock", "hashMerkleRoot", "nTime", "nBits", "nNonce",
"hash", "sha256")
FLAG_VERSION_BIT_0 = 1 << 0
FLAG_VERSION_BIT_1 = 1 << 1
FLAG_VERSION_BIT_2 = 1 << 2
FLAG_PREV_BLOCK_HASH = 1 << 3
FLAG_TIMESTAMP = 1 << 4
FLAG_NBITS = 1 << 5
BITMASK_VERSION = FLAG_VERSION_BIT_0 | FLAG_VERSION_BIT_1 | FLAG_VERSION_BIT_2
def __init__(self, header=None):
if header is None:
self.set_null()
else:
self.bitfield = 0
self.timeOffset = 0
self.nVersion = header.nVersion
self.hashPrevBlock = header.hashPrevBlock
self.hashMerkleRoot = header.hashMerkleRoot
self.nTime = header.nTime
self.nBits = header.nBits
self.nNonce = header.nNonce
self.hash = None
self.sha256 = None
self.calc_sha256()
def set_null(self):
self.bitfield = 0
self.timeOffset = 0
self.nVersion = 0
self.hashPrevBlock = 0
self.hashMerkleRoot = 0
self.nTime = 0
self.nBits = 0
self.nNonce = 0
self.hash = None
self.sha256 = None
def deserialize(self, f):
self.bitfield = struct.unpack("<B", f.read(1))[0]
if self.bitfield & self.BITMASK_VERSION == 0:
self.nVersion = struct.unpack("<i", f.read(4))[0]
if self.bitfield & self.FLAG_PREV_BLOCK_HASH:
self.hashPrevBlock = deser_uint256(f)
self.hashMerkleRoot = deser_uint256(f)
if self.bitfield & self.FLAG_TIMESTAMP:
self.nTime = struct.unpack("<I", f.read(4))[0]
else:
self.timeOffset = struct.unpack("<h", f.read(2))[0]
if self.bitfield & self.FLAG_NBITS:
self.nBits = struct.unpack("<I", f.read(4))[0]
self.nNonce = struct.unpack("<I", f.read(4))[0]
self.rehash()
def serialize(self):
r = b""
r += struct.pack("<B", self.bitfield)
if not self.bitfield & self.BITMASK_VERSION:
r += struct.pack("<i", self.nVersion)
if self.bitfield & self.FLAG_PREV_BLOCK_HASH:
r += ser_uint256(self.hashPrevBlock)
r += ser_uint256(self.hashMerkleRoot)
r += struct.pack("<I", self.nTime) if self.bitfield & self.FLAG_TIMESTAMP else struct.pack("<h", self.timeOffset)
if self.bitfield & self.FLAG_NBITS:
r += struct.pack("<I", self.nBits)
r += struct.pack("<I", self.nNonce)
return r
def calc_sha256(self):
if self.sha256 is None:
r = b""
r += struct.pack("<i", self.nVersion)
r += ser_uint256(self.hashPrevBlock)
r += ser_uint256(self.hashMerkleRoot)
r += struct.pack("<I", self.nTime)
r += struct.pack("<I", self.nBits)
r += struct.pack("<I", self.nNonce)
self.sha256 = uint256_from_str(dashhash(r))
self.hash = int(dashhash(r)[::-1].hex(), 16)
def rehash(self):
self.sha256 = None
self.calc_sha256()
return self.sha256
def __repr__(self):
return "BlockHeaderCompressed(bitfield=%064x, nVersion=%i hashPrevBlock=%064x hashMerkleRoot=%064x nTime=%s " \
"nBits=%08x nNonce=%08x timeOffset=%i)" % \
(self.bitfield, self.nVersion, self.hashPrevBlock, self.hashMerkleRoot, time.ctime(self.nTime), self.nBits, self.nNonce, self.timeOffset)
def __save_version_as_most_recent(self, last_unique_versions):
last_unique_versions.insert(0, self.nVersion)
# Evict the oldest version
if len(last_unique_versions) > 7:
last_unique_versions.pop()
@staticmethod
def __mark_version_as_most_recent(last_unique_versions, version_idx):
# Move version to the front of the list
last_unique_versions.insert(0, last_unique_versions.pop(version_idx))
def compress(self, last_blocks, last_unique_versions):
if not last_blocks:
# First block, everything must be uncompressed
self.bitfield &= (~CompressibleBlockHeader.BITMASK_VERSION)
self.bitfield |= CompressibleBlockHeader.FLAG_PREV_BLOCK_HASH
self.bitfield |= CompressibleBlockHeader.FLAG_TIMESTAMP
self.bitfield |= CompressibleBlockHeader.FLAG_NBITS
self.__save_version_as_most_recent(last_unique_versions)
return
# Compress version
try:
version_idx = last_unique_versions.index(self.nVersion)
version_offset = len(last_unique_versions) - version_idx
self.bitfield &= (~CompressibleBlockHeader.BITMASK_VERSION)
self.bitfield |= (version_offset & CompressibleBlockHeader.BITMASK_VERSION)
self.__mark_version_as_most_recent(last_unique_versions, version_idx)
except ValueError:
self.__save_version_as_most_recent(last_unique_versions)
# We have the previous block
last_block = last_blocks[-1]
# Compress time
self.timeOffset = self.nTime - last_block.nTime
if self.timeOffset > 32767 or self.timeOffset < -32768:
# Time diff overflows, we have to send it as 4 bytes (uncompressed)
self.bitfield |= CompressibleBlockHeader.FLAG_TIMESTAMP
# If nBits doesn't match previous block, we have to send it
if self.nBits != last_block.nBits:
self.bitfield |= CompressibleBlockHeader.FLAG_NBITS
def uncompress(self, last_compressed_blocks, last_unique_versions):
if not last_compressed_blocks:
# First block header is always uncompressed
self.__save_version_as_most_recent(last_unique_versions)
return
previous_block = last_compressed_blocks[-1]
# Uncompress version
version_idx = self.bitfield & self.BITMASK_VERSION
if version_idx != 0:
if version_idx <= len(last_unique_versions):
self.nVersion = last_unique_versions[version_idx - 1]
self.__mark_version_as_most_recent(last_unique_versions, version_idx - 1)
else:
self.__save_version_as_most_recent(last_unique_versions)
# Uncompress prev block hash
if not self.bitfield & self.FLAG_PREV_BLOCK_HASH:
self.hashPrevBlock = previous_block.hash
# Uncompress time
if not self.bitfield & self.FLAG_TIMESTAMP:
self.nTime = previous_block.nTime + self.timeOffset
# Uncompress time bits
if not self.bitfield & self.FLAG_NBITS:
self.nBits = previous_block.nBits
self.rehash()
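# Compression is stateful: callers thread the same mutable lists through
# consecutive headers, mirroring the peer's view (a sketch):
#   last_blocks, last_versions = [], []
#   for h in headers:  # CompressibleBlockHeader objects, oldest first
#       h.compress(last_blocks, last_versions)
#       last_blocks.append(h)
# The receiving side runs h.uncompress(last_blocks, last_versions) in the
# same order to reconstruct the full headers.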
class PrefilledTransaction:
__slots__ = ("index", "tx")
def __init__(self, index=0, tx=None):
self.index = index
self.tx = tx
def deserialize(self, f):
self.index = deser_compact_size(f)
self.tx = CTransaction()
self.tx.deserialize(f)
def serialize(self):
r = b""
r += ser_compact_size(self.index)
r += self.tx.serialize()
return r
def __repr__(self):
return "PrefilledTransaction(index=%d, tx=%s)" % (self.index, repr(self.tx))
# This is what we send on the wire, in a cmpctblock message.
class P2PHeaderAndShortIDs:
__slots__ = ("header", "nonce", "prefilled_txn", "prefilled_txn_length",
"shortids", "shortids_length")
def __init__(self):
self.header = CBlockHeader()
self.nonce = 0
self.shortids_length = 0
self.shortids = []
self.prefilled_txn_length = 0
self.prefilled_txn = []
def deserialize(self, f):
self.header.deserialize(f)
self.nonce = struct.unpack("<Q", f.read(8))[0]
self.shortids_length = deser_compact_size(f)
for _ in range(self.shortids_length):
# shortids are defined to be 6 bytes in the spec, so append
# two zero bytes and read it in as an 8-byte number
self.shortids.append(struct.unpack("<Q", f.read(6) + b'\x00\x00')[0])
self.prefilled_txn = deser_vector(f, PrefilledTransaction)
self.prefilled_txn_length = len(self.prefilled_txn)
def serialize(self):
r = b""
r += self.header.serialize()
r += struct.pack("<Q", self.nonce)
r += ser_compact_size(self.shortids_length)
for x in self.shortids:
# We only want the first 6 bytes
r += struct.pack("<Q", x)[0:6]
r += ser_vector(self.prefilled_txn)
return r
def __repr__(self):
return "P2PHeaderAndShortIDs(header=%s, nonce=%d, shortids_length=%d, shortids=%s, prefilled_txn_length=%d, prefilledtxn=%s" % (repr(self.header), self.nonce, self.shortids_length, repr(self.shortids), self.prefilled_txn_length, repr(self.prefilled_txn))
# Calculate the BIP 152-compact blocks shortid for a given transaction hash
def calculate_shortid(k0, k1, tx_hash):
expected_shortid = siphash256(k0, k1, tx_hash)
expected_shortid &= 0x0000ffffffffffff
return expected_shortid
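# A shortid is the low 48 bits of SipHash-2-4 over the transaction hash, keyed
# from the block header and nonce (see HeaderAndShortIDs.get_siphash_keys
# below), e.g.:
#   [k0, k1] = header_and_shortids.get_siphash_keys()
#   shortid = calculate_shortid(k0, k1, tx.sha256)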
# This version gets rid of the array lengths, and reinterprets the differential
# encoding into indices that can be used for lookup.
class HeaderAndShortIDs:
__slots__ = ("header", "nonce", "prefilled_txn", "shortids")
def __init__(self, p2pheaders_and_shortids=None):
self.header = CBlockHeader()
self.nonce = 0
self.shortids = []
self.prefilled_txn = []
if p2pheaders_and_shortids is not None:
self.header = p2pheaders_and_shortids.header
self.nonce = p2pheaders_and_shortids.nonce
self.shortids = p2pheaders_and_shortids.shortids
last_index = -1
for x in p2pheaders_and_shortids.prefilled_txn:
self.prefilled_txn.append(PrefilledTransaction(x.index + last_index + 1, x.tx))
last_index = self.prefilled_txn[-1].index
def to_p2p(self):
ret = P2PHeaderAndShortIDs()
ret.header = self.header
ret.nonce = self.nonce
ret.shortids_length = len(self.shortids)
ret.shortids = self.shortids
ret.prefilled_txn_length = len(self.prefilled_txn)
ret.prefilled_txn = []
last_index = -1
for x in self.prefilled_txn:
ret.prefilled_txn.append(PrefilledTransaction(x.index - last_index - 1, x.tx))
last_index = x.index
return ret
def get_siphash_keys(self):
header_nonce = self.header.serialize()
header_nonce += struct.pack("<Q", self.nonce)
hash_header_nonce_as_str = sha256(header_nonce)
key0 = struct.unpack("<Q", hash_header_nonce_as_str[0:8])[0]
key1 = struct.unpack("<Q", hash_header_nonce_as_str[8:16])[0]
return [ key0, key1 ]
def initialize_from_block(self, block, nonce=0, prefill_list=None):
if prefill_list is None:
prefill_list = [0]
self.header = CBlockHeader(block)
self.nonce = nonce
self.prefilled_txn = [ PrefilledTransaction(i, block.vtx[i]) for i in prefill_list ]
self.shortids = []
[k0, k1] = self.get_siphash_keys()
for i in range(len(block.vtx)):
if i not in prefill_list:
self.shortids.append(calculate_shortid(k0, k1, block.vtx[i].sha256))
def __repr__(self):
return "HeaderAndShortIDs(header=%s, nonce=%d, shortids=%s, prefilledtxn=%s" % (repr(self.header), self.nonce, repr(self.shortids), repr(self.prefilled_txn))
class BlockTransactionsRequest:
__slots__ = ("blockhash", "indexes")
def __init__(self, blockhash=0, indexes=None):
self.blockhash = blockhash
self.indexes = indexes if indexes is not None else []
def deserialize(self, f):
self.blockhash = deser_uint256(f)
indexes_length = deser_compact_size(f)
for _ in range(indexes_length):
self.indexes.append(deser_compact_size(f))
def serialize(self):
r = b""
r += ser_uint256(self.blockhash)
r += ser_compact_size(len(self.indexes))
for x in self.indexes:
r += ser_compact_size(x)
return r
# helper to set the differentially encoded indexes from absolute ones
def from_absolute(self, absolute_indexes):
self.indexes = []
last_index = -1
for x in absolute_indexes:
self.indexes.append(x-last_index-1)
last_index = x
def to_absolute(self):
absolute_indexes = []
last_index = -1
for x in self.indexes:
absolute_indexes.append(x+last_index+1)
last_index = absolute_indexes[-1]
return absolute_indexes
def __repr__(self):
return "BlockTransactionsRequest(hash=%064x indexes=%s)" % (self.blockhash, repr(self.indexes))
class BlockTransactions:
__slots__ = ("blockhash", "transactions")
def __init__(self, blockhash=0, transactions=None):
self.blockhash = blockhash
self.transactions = transactions if transactions is not None else []
def deserialize(self, f):
self.blockhash = deser_uint256(f)
self.transactions = deser_vector(f, CTransaction)
def serialize(self):
r = b""
r += ser_uint256(self.blockhash)
r += ser_vector(self.transactions)
return r
def __repr__(self):
return "BlockTransactions(hash=%064x transactions=%s)" % (self.blockhash, repr(self.transactions))
class CPartialMerkleTree:
__slots__ = ("nTransactions", "vBits", "vHash")
def __init__(self):
self.nTransactions = 0
self.vBits = []
self.vHash = []
def deserialize(self, f):
self.nTransactions = struct.unpack("<I", f.read(4))[0]
self.vHash = deser_uint256_vector(f)
self.vBits = deser_dyn_bitset(f, True)
def serialize(self):
r = b""
r += struct.pack("<I", self.nTransactions)
r += ser_uint256_vector(self.vHash)
r += ser_dyn_bitset(self.vBits, True)
return r
def __repr__(self):
return "CPartialMerkleTree(nTransactions=%d vBits.size=%d vHash.size=%d)" % (self.nTransactions, len(self.vBits), len(self.vHash))
class CMerkleBlock:
__slots__ = ("header", "txn")
def __init__(self, header=None, txn=None):
self.header = header if header is not None else CBlockHeader()
self.txn = txn if txn is not None else CPartialMerkleTree()
def deserialize(self, f):
self.header.deserialize(f)
self.txn.deserialize(f)
def serialize(self):
r = b""
r += self.header.serialize()
r += self.txn.serialize()
return r
def __repr__(self):
return "CMerkleBlock(header=%s txn=%s)" % (repr(self.header), repr(self.txn))
class CCbTx:
__slots__ = ("version", "height", "merkleRootMNList", "merkleRootQuorums", "bestCLHeightDiff", "bestCLSignature", "lockedAmount")
def __init__(self, version=None, height=None, merkleRootMNList=None, merkleRootQuorums=None, bestCLHeightDiff=None, bestCLSignature=None, lockedAmount=None):
self.set_null()
if version is not None:
self.version = version
if height is not None:
self.height = height
if merkleRootMNList is not None:
self.merkleRootMNList = merkleRootMNList
if merkleRootQuorums is not None:
self.merkleRootQuorums = merkleRootQuorums
if bestCLHeightDiff is not None:
self.bestCLHeightDiff = bestCLHeightDiff
if bestCLSignature is not None:
self.bestCLSignature = bestCLSignature
if lockedAmount is not None:
self.lockedAmount = lockedAmount
def set_null(self):
self.version = 0
self.height = 0
self.merkleRootMNList = None
self.merkleRootQuorums = None
self.bestCLHeightDiff = 0
self.bestCLSignature = b'\x00' * 96
self.lockedAmount = 0
def deserialize(self, f):
self.version = struct.unpack("<H", f.read(2))[0]
self.height = struct.unpack("<i", f.read(4))[0]
self.merkleRootMNList = deser_uint256(f)
if self.version >= 2:
self.merkleRootQuorums = deser_uint256(f)
if self.version >= 3:
self.bestCLHeightDiff = deser_compact_size(f)
self.bestCLSignature = f.read(96)
self.lockedAmount = struct.unpack("<q", f.read(8))[0]
def serialize(self):
r = b""
r += struct.pack("<H", self.version)
r += struct.pack("<i", self.height)
r += ser_uint256(self.merkleRootMNList)
if self.version >= 2:
r += ser_uint256(self.merkleRootQuorums)
if self.version >= 3:
r += ser_compact_size(self.bestCLHeightDiff)
r += self.bestCLSignature
r += struct.pack("<q", self.lockedAmount)
return r
class CAssetLockTx:
__slots__ = ("version", "creditOutputs")
def __init__(self, version=None, creditOutputs=None):
self.set_null()
if version is not None:
self.version = version
self.creditOutputs = creditOutputs if creditOutputs is not None else []
def set_null(self):
self.version = 0
self.creditOutputs = None
def deserialize(self, f):
self.version = struct.unpack("<B", f.read(1))[0]
self.creditOutputs = deser_vector(f, CTxOut)
def serialize(self):
r = b""
r += struct.pack("<B", self.version)
r += ser_vector(self.creditOutputs)
return r
def __repr__(self):
return "CAssetLockTx(version={} creditOutputs={}" \
.format(self.version, repr(self.creditOutputs))
class CAssetUnlockTx:
__slots__ = ("version", "index", "fee", "requestedHeight", "quorumHash", "quorumSig")
def __init__(self, version=None, index=None, fee=None, requestedHeight=None, quorumHash=0, quorumSig=None):
self.set_null()
if version is not None:
self.version = version
if index is not None:
self.index = index
if fee is not None:
self.fee = fee
if requestedHeight is not None:
self.requestedHeight = requestedHeight
if quorumHash is not None:
self.quorumHash = quorumHash
if quorumSig is not None:
self.quorumSig = quorumSig
def set_null(self):
self.version = 0
self.index = 0
self.fee = None
self.requestedHeight = 0
self.quorumHash = 0
self.quorumSig = b'\x00' * 96
def deserialize(self, f):
self.version = struct.unpack("<B", f.read(1))[0]
self.index = struct.unpack("<Q", f.read(8))[0]
self.fee = struct.unpack("<I", f.read(4))[0]
self.requestedHeight = struct.unpack("<I", f.read(4))[0]
self.quorumHash = deser_uint256(f)
self.quorumSig = f.read(96)
def serialize(self):
r = b""
r += struct.pack("<B", self.version)
r += struct.pack("<Q", self.index)
r += struct.pack("<I", self.fee)
r += struct.pack("<I", self.requestedHeight)
r += ser_uint256(self.quorumHash)
r += self.quorumSig
return r
def __repr__(self):
return "CAssetUnlockTx(version={} index={} fee={} requestedHeight={} quorumHash={:x} quorumSig={}" \
.format(self.version, self.index, self.fee, self.requestedHeight, self.quorumHash, self.quorumSig.hex())
class CMnEhf:
__slots__ = ("version", "versionBit", "quorumHash", "quorumSig")
def __init__(self, version=None, versionBit=None, quorumHash=0, quorumSig=None):
self.set_null()
if version is not None:
self.version = version
if versionBit is not None:
self.versionBit = versionBit
if quorumHash is not None:
self.quorumHash = quorumHash
if quorumSig is not None:
self.quorumSig = quorumSig
def set_null(self):
self.version = 0
self.versionBit = 0
self.quorumHash = 0
self.quorumSig = b'\x00' * 96
def deserialize(self, f):
self.version = struct.unpack("<B", f.read(1))[0]
self.versionBit = struct.unpack("<B", f.read(1))[0]
self.quorumHash = deser_uint256(f)
self.quorumSig = f.read(96)
def serialize(self):
r = b""
r += struct.pack("<B", self.version)
r += struct.pack("<B", self.versionBit)
r += ser_uint256(self.quorumHash)
r += self.quorumSig
return r
def __repr__(self):
return "CMnEhf(version={} versionBit={} quorumHash={:x} quorumSig={}" \
.format(self.version, self.versionBit, self.quorumHash, self.quorumSig.hex())
class CSimplifiedMNListEntry:
__slots__ = ("proRegTxHash", "confirmedHash", "service", "pubKeyOperator", "keyIDVoting", "isValid", "nVersion", "type", "platformHTTPPort", "platformNodeID")
def __init__(self):
self.set_null()
def set_null(self):
self.proRegTxHash = 0
self.confirmedHash = 0
self.service = CService()
self.pubKeyOperator = b'\x00' * 48
self.keyIDVoting = 0
self.isValid = False
self.nVersion = 0
self.type = 0
self.platformHTTPPort = 0
self.platformNodeID = b'\x00' * 20
def deserialize(self, f):
self.nVersion = struct.unpack("<H", f.read(2))[0]
self.proRegTxHash = deser_uint256(f)
self.confirmedHash = deser_uint256(f)
self.service.deserialize(f)
self.pubKeyOperator = f.read(48)
self.keyIDVoting = f.read(20)
self.isValid = struct.unpack("<?", f.read(1))[0]
if self.nVersion == 2:
self.type = struct.unpack("<H", f.read(2))[0]
if self.type == 1:
self.platformHTTPPort = struct.unpack("<H", f.read(2))[0]
self.platformNodeID = f.read(20)
    def serialize(self, with_version=True):
r = b""
if with_version:
r += struct.pack("<H", self.nVersion)
r += ser_uint256(self.proRegTxHash)
r += ser_uint256(self.confirmedHash)
r += self.service.serialize()
r += self.pubKeyOperator
r += self.keyIDVoting
r += struct.pack("<?", self.isValid)
if self.nVersion == 2:
r += struct.pack("<H", self.type)
if self.type == 1:
r += struct.pack("<H", self.platformHTTPPort)
r += self.platformNodeID
return r
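
# Illustrative sketch (not part of the framework API): round-trip a
# CSimplifiedMNListEntry and show what the with_version flag controls. Only
# the leading uint16 nVersion is gated by it, so the versionless payload is
# exactly two bytes shorter. The helper name is hypothetical.
def _example_mnlist_entry_roundtrip(raw_bytes):
    entry = CSimplifiedMNListEntry()
    entry.deserialize(BytesIO(raw_bytes))
    # Re-serializing with the version prefix reproduces the wire bytes
    assert entry.serialize(with_version=True) == raw_bytes
    # Dropping the prefix removes just the two version bytes
    assert len(entry.serialize(with_version=False)) == len(raw_bytes) - 2
    return entry
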
class CFinalCommitment:
__slots__ = ("nVersion", "llmqType", "quorumHash", "quorumIndex", "signers", "validMembers", "quorumPublicKey",
"quorumVvecHash", "quorumSig", "membersSig")
def __init__(self):
self.set_null()
def set_null(self):
self.nVersion = 0
self.llmqType = 0
self.quorumHash = 0
self.quorumIndex = 0
self.signers = []
self.validMembers = []
self.quorumPublicKey = b'\x00' * 48
self.quorumVvecHash = 0
self.quorumSig = b'\x00' * 96
self.membersSig = b'\x00' * 96
def deserialize(self, f):
self.nVersion = struct.unpack("<H", f.read(2))[0]
self.llmqType = struct.unpack("<B", f.read(1))[0]
self.quorumHash = deser_uint256(f)
if self.nVersion == 2 or self.nVersion == 4:
self.quorumIndex = struct.unpack("<H", f.read(2))[0]
self.signers = deser_dyn_bitset(f, False)
self.validMembers = deser_dyn_bitset(f, False)
self.quorumPublicKey = f.read(48)
self.quorumVvecHash = deser_uint256(f)
self.quorumSig = f.read(96)
self.membersSig = f.read(96)
def serialize(self):
r = b""
r += struct.pack("<H", self.nVersion)
r += struct.pack("<B", self.llmqType)
r += ser_uint256(self.quorumHash)
if self.nVersion == 2 or self.nVersion == 4:
r += struct.pack("<H", self.quorumIndex)
r += ser_dyn_bitset(self.signers, False)
r += ser_dyn_bitset(self.validMembers, False)
r += self.quorumPublicKey
r += ser_uint256(self.quorumVvecHash)
r += self.quorumSig
r += self.membersSig
return r
def __repr__(self):
return "CFinalCommitment(nVersion={} llmqType={} quorumHash={:x} quorumIndex={} signers={}" \
" validMembers={} quorumPublicKey={} quorumVvecHash={:x}) quorumSig={} membersSig={})" \
.format(self.nVersion, self.llmqType, self.quorumHash, self.quorumIndex, repr(self.signers),
repr(self.validMembers), self.quorumPublicKey.hex(), self.quorumVvecHash, self.quorumSig.hex(), self.membersSig.hex())
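
# Illustrative sketch: quorumIndex is only serialized for the rotation-enabled
# commitment versions (nVersion 2 or 4), so re-encoding the same null
# commitment under nVersion 2 adds exactly the two bytes of the uint16 index.
# The helper name is hypothetical.
def _example_commitment_index_gating():
    fc = CFinalCommitment()
    fc.nVersion = 1
    size_without_index = len(fc.serialize())
    fc.nVersion = 2
    size_with_index = len(fc.serialize())
    assert size_with_index == size_without_index + 2
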
class CGovernanceObject:
__slots__ = ("nHashParent", "nRevision", "nTime", "nCollateralHash", "vchData", "nObjectType",
"masternodeOutpoint", "vchSig")
    def __init__(self):
        self.nHashParent = 0
        self.nRevision = 0
        self.nTime = 0
        self.nCollateralHash = 0
        self.vchData = b""  # bytes, not a list, so serialize() can concatenate
        self.nObjectType = 0
        self.masternodeOutpoint = COutPoint()
        self.vchSig = b""
def deserialize(self, f):
self.nHashParent = deser_uint256(f)
self.nRevision = struct.unpack("<i", f.read(4))[0]
self.nTime = struct.unpack("<q", f.read(8))[0]
self.nCollateralHash = deser_uint256(f)
size = deser_compact_size(f)
if size > 0:
self.vchData = f.read(size)
self.nObjectType = struct.unpack("<i", f.read(4))[0]
self.masternodeOutpoint.deserialize(f)
size = deser_compact_size(f)
if size > 0:
self.vchSig = f.read(size)
    def serialize(self):
        r = b""
        r += ser_uint256(self.nHashParent)
        r += struct.pack("<i", self.nRevision)
        r += struct.pack("<q", self.nTime)
        r += ser_uint256(self.nCollateralHash)
        r += ser_compact_size(len(self.vchData))
        r += self.vchData
        r += struct.pack("<i", self.nObjectType)
        r += self.masternodeOutpoint.serialize()
        r += ser_compact_size(len(self.vchSig))
        r += self.vchSig
        return r
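
# Illustrative sketch: a serialize()/deserialize() round trip for
# CGovernanceObject. vchData and vchSig travel as compact-size-prefixed blobs,
# so any byte payload survives the trip. The helper name and payload are
# hypothetical.
def _example_governance_object_roundtrip():
    obj = CGovernanceObject()
    obj.vchData = b'{"object": "example"}'  # arbitrary payload bytes
    raw = obj.serialize()
    obj2 = CGovernanceObject()
    obj2.deserialize(BytesIO(raw))
    assert obj2.serialize() == raw
    return obj2
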
class CGovernanceVote:
__slots__ = ("masternodeOutpoint", "nParentHash", "nVoteOutcome", "nVoteSignal", "nTime", "vchSig")
def __init__(self):
self.masternodeOutpoint = COutPoint()
self.nParentHash = 0
self.nVoteOutcome = 0
self.nVoteSignal = 0
self.nTime = 0
        self.vchSig = b""  # bytes, so an unsigned vote still serializes
def deserialize(self, f):
self.masternodeOutpoint.deserialize(f)
self.nParentHash = deser_uint256(f)
self.nVoteOutcome = struct.unpack("<i", f.read(4))[0]
self.nVoteSignal = struct.unpack("<i", f.read(4))[0]
self.nTime = struct.unpack("<q", f.read(8))[0]
size = deser_compact_size(f)
if size > 0:
self.vchSig = f.read(size)
def serialize(self):
r = b""
r += self.masternodeOutpoint.serialize()
r += ser_uint256(self.nParentHash)
r += struct.pack("<i", self.nVoteOutcome)
r += struct.pack("<i", self.nVoteSignal)
r += struct.pack("<q", self.nTime)
r += ser_compact_size(len(self.vchSig))
r += self.vchSig
return r
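
# Illustrative sketch: an unsigned vote serializes its empty vchSig as a
# single zero compact-size byte, giving a fixed overall length. The helper
# name is hypothetical and the outcome value is an assumption.
def _example_unsigned_vote_length():
    vote = CGovernanceVote()
    vote.nVoteOutcome = 1  # assumed to mean "yes" in dashd
    raw = vote.serialize()
    # outpoint(36) + parent hash(32) + outcome(4) + signal(4) + time(8) + sig size(1)
    assert len(raw) == 36 + 32 + 4 + 4 + 8 + 1
    return raw
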
class CRecoveredSig:
__slots__ = ("llmqType", "quorumHash", "id", "msgHash", "sig")
def __init__(self):
self.llmqType = 0
self.quorumHash = 0
self.id = 0
self.msgHash = 0
self.sig = b'\x00' * 96
def deserialize(self, f):
self.llmqType = struct.unpack("<B", f.read(1))[0]
self.quorumHash = deser_uint256(f)
self.id = deser_uint256(f)
self.msgHash = deser_uint256(f)
self.sig = f.read(96)
def serialize(self):
r = b""
r += struct.pack("<B", self.llmqType)
r += ser_uint256(self.quorumHash)
r += ser_uint256(self.id)
r += ser_uint256(self.msgHash)
r += self.sig
return r
class CSigShare:
__slots__ = ("llmqType", "quorumHash", "quorumMember", "id", "msgHash", "sigShare")
def __init__(self):
self.llmqType = 0
self.quorumHash = 0
self.quorumMember = 0
self.id = 0
self.msgHash = 0
self.sigShare = b'\x00' * 96
def deserialize(self, f):
self.llmqType = struct.unpack("<B", f.read(1))[0]
self.quorumHash = deser_uint256(f)
self.quorumMember = struct.unpack("<H", f.read(2))[0]
self.id = deser_uint256(f)
self.msgHash = deser_uint256(f)
self.sigShare = f.read(96)
def serialize(self):
r = b""
r += struct.pack("<B", self.llmqType)
r += ser_uint256(self.quorumHash)
r += struct.pack("<H", self.quorumMember)
r += ser_uint256(self.id)
r += ser_uint256(self.msgHash)
r += self.sigShare
return r
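
# Illustrative sketch: CSigShare has the same fixed-width layout as
# CRecoveredSig plus a uint16 quorumMember, so their default serialized sizes
# differ by exactly two bytes. The helper name is hypothetical.
def _example_sig_message_sizes():
    assert len(CSigShare().serialize()) == len(CRecoveredSig().serialize()) + 2
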
class CBLSPublicKey:
__slots__ = ("data")
def __init__(self):
self.data = b'\x00' * 48
def deserialize(self, f):
self.data = f.read(48)
def serialize(self):
r = b""
r += self.data
return r
class CBLSIESEncryptedSecretKey:
__slots__ = ("ephemeral_pubKey", "iv", "data")
def __init__(self):
self.ephemeral_pubKey = b'\x00' * 48
self.iv = b'\x00' * 32
self.data = b'\x00' * 32
def deserialize(self, f):
self.ephemeral_pubKey = f.read(48)
self.iv = f.read(32)
data_size = deser_compact_size(f)
self.data = f.read(data_size)
def serialize(self):
r = b""
r += self.ephemeral_pubKey
r += self.iv
r += ser_compact_size(len(self.data))
r += self.data
return r
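
# Illustrative sketch: unlike the fixed 48-byte CBLSPublicKey above, the IES
# ciphertext is compact-size-prefixed, so payloads of any length round-trip.
# The helper name is hypothetical.
def _example_bls_ies_roundtrip():
    enc = CBLSIESEncryptedSecretKey()
    enc.data = b'\xaa' * 64  # arbitrary ciphertext length
    raw = enc.serialize()
    enc2 = CBLSIESEncryptedSecretKey()
    enc2.deserialize(BytesIO(raw))
    assert enc2.data == enc.data and enc2.serialize() == raw
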
# Objects that correspond to messages on the wire
class msg_version:
__slots__ = ("addrFrom", "addrTo", "nNonce", "nRelay", "nServices",
"nStartingHeight", "nTime", "nVersion", "strSubVer")
msgtype = b"version"
def __init__(self):
self.nVersion = MY_VERSION
self.nServices = 1
self.nTime = int(time.time())
self.addrTo = CAddress()
self.addrFrom = CAddress()
self.nNonce = random.getrandbits(64)
self.strSubVer = MY_SUBVERSION % ""
self.nStartingHeight = -1
self.nRelay = MY_RELAY
def deserialize(self, f):
self.nVersion = struct.unpack("<i", f.read(4))[0]
self.nServices = struct.unpack("<Q", f.read(8))[0]
self.nTime = struct.unpack("<q", f.read(8))[0]
self.addrTo = CAddress()
self.addrTo.deserialize(f, with_time=False)
self.addrFrom = CAddress()
self.addrFrom.deserialize(f, with_time=False)
self.nNonce = struct.unpack("<Q", f.read(8))[0]
self.strSubVer = deser_string(f).decode('utf-8')
self.nStartingHeight = struct.unpack("<i", f.read(4))[0]
        # The relay field is optional from version 70001 onwards,
        # but we check it unconditionally to match behaviour in bitcoind
try:
self.nRelay = struct.unpack("<b", f.read(1))[0]
except struct.error:
self.nRelay = 0
def serialize(self):
r = b""
r += struct.pack("<i", self.nVersion)
r += struct.pack("<Q", self.nServices)
r += struct.pack("<q", self.nTime)
r += self.addrTo.serialize(with_time=False)
r += self.addrFrom.serialize(with_time=False)
r += struct.pack("<Q", self.nNonce)
r += ser_string(self.strSubVer.encode('utf-8'))
r += struct.pack("<i", self.nStartingHeight)
r += struct.pack("<b", self.nRelay)
return r
def __repr__(self):
return 'msg_version(nVersion=%i nServices=%i nTime=%s addrTo=%s addrFrom=%s nNonce=0x%016X strSubVer=%s nStartingHeight=%i nRelay=%i)' \
% (self.nVersion, self.nServices, time.ctime(self.nTime),
repr(self.addrTo), repr(self.addrFrom), self.nNonce,
self.strSubVer, self.nStartingHeight, self.nRelay)
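
# Illustrative sketch: building the version handshake payload. strSubVer is
# kept as a str and only UTF-8 encoded inside serialize(), so callers work
# with plain strings. The helper name is hypothetical.
def _example_build_version(start_height, uacomment=""):
    ver = msg_version()
    ver.strSubVer = MY_SUBVERSION % uacomment
    ver.nStartingHeight = start_height
    return ver.serialize()
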
class msg_verack:
__slots__ = ()
msgtype = b"verack"
def __init__(self):
pass
def deserialize(self, f):
pass
def serialize(self):
return b""
def __repr__(self):
return "msg_verack()"
class msg_addr:
__slots__ = ("addrs",)
msgtype = b"addr"
def __init__(self):
self.addrs = []
def deserialize(self, f):
self.addrs = deser_vector(f, CAddress)
def serialize(self):
return ser_vector(self.addrs)
def __repr__(self):
return "msg_addr(addrs=%s)" % (repr(self.addrs))
class msg_addrv2:
__slots__ = ("addrs",)
# msgtype = b"addrv2"
msgtype = b"addrv2"
def __init__(self):
self.addrs = []
def deserialize(self, f):
self.addrs = deser_vector(f, CAddress, "deserialize_v2")
def serialize(self):
return ser_vector(self.addrs, "serialize_v2")
def __repr__(self):
return "msg_addrv2(addrs=%s)" % (repr(self.addrs))
class msg_sendaddrv2:
__slots__ = ()
# msgtype = b"sendaddrv2"
msgtype = b"sendaddrv2"
def __init__(self):
pass
def deserialize(self, f):
pass
def serialize(self):
return b""
def __repr__(self):
return "msg_sendaddrv2()"
class msg_inv:
__slots__ = ("inv",)
msgtype = b"inv"
def __init__(self, inv=None):
if inv is None:
self.inv = []
else:
self.inv = inv
def deserialize(self, f):
self.inv = deser_vector(f, CInv)
def serialize(self):
return ser_vector(self.inv)
def __repr__(self):
return "msg_inv(inv=%s)" % (repr(self.inv))
class msg_getdata:
__slots__ = ("inv",)
msgtype = b"getdata"
def __init__(self, inv=None):
self.inv = inv if inv is not None else []
def deserialize(self, f):
self.inv = deser_vector(f, CInv)
def serialize(self):
return ser_vector(self.inv)
def __repr__(self):
return "msg_getdata(inv=%s)" % (repr(self.inv))
class msg_getblocks:
__slots__ = ("locator", "hashstop")
msgtype = b"getblocks"
def __init__(self):
self.locator = CBlockLocator()
self.hashstop = 0
def deserialize(self, f):
self.locator = CBlockLocator()
self.locator.deserialize(f)
self.hashstop = deser_uint256(f)
def serialize(self):
r = b""
r += self.locator.serialize()
r += ser_uint256(self.hashstop)
return r
def __repr__(self):
return "msg_getblocks(locator=%s hashstop=%064x)" \
% (repr(self.locator), self.hashstop)
class msg_tx:
__slots__ = ("tx",)
msgtype = b"tx"
def __init__(self, tx=CTransaction()):
self.tx = tx
def deserialize(self, f):
self.tx.deserialize(f)
def serialize(self):
return self.tx.serialize()
def __repr__(self):
return "msg_tx(tx=%s)" % (repr(self.tx))
class msg_block:
__slots__ = ("block",)
msgtype = b"block"
def __init__(self, block=None):
if block is None:
self.block = CBlock()
else:
self.block = block
def deserialize(self, f):
self.block.deserialize(f)
def serialize(self):
return self.block.serialize()
def __repr__(self):
return "msg_block(block=%s)" % (repr(self.block))
# for cases where a user needs tighter control over what is sent over the wire
# note that the user must supply the name of the msgtype, and the data
class msg_generic:
__slots__ = ("data")
def __init__(self, msgtype, data=None):
self.msgtype = msgtype
self.data = data
def serialize(self):
return self.data
def __repr__(self):
return "msg_generic()"
class msg_getaddr:
__slots__ = ()
msgtype = b"getaddr"
def __init__(self):
pass
def deserialize(self, f):
pass
def serialize(self):
return b""
def __repr__(self):
return "msg_getaddr()"
class msg_ping:
__slots__ = ("nonce",)
msgtype = b"ping"
def __init__(self, nonce=0):
self.nonce = nonce
def deserialize(self, f):
self.nonce = struct.unpack("<Q", f.read(8))[0]
def serialize(self):
r = b""
r += struct.pack("<Q", self.nonce)
return r
def __repr__(self):
return "msg_ping(nonce=%08x)" % self.nonce
class msg_pong:
__slots__ = ("nonce",)
msgtype = b"pong"
def __init__(self, nonce=0):
self.nonce = nonce
def deserialize(self, f):
self.nonce = struct.unpack("<Q", f.read(8))[0]
def serialize(self):
r = b""
r += struct.pack("<Q", self.nonce)
return r
def __repr__(self):
return "msg_pong(nonce=%08x)" % self.nonce
class msg_mempool:
__slots__ = ()
msgtype = b"mempool"
def __init__(self):
pass
def deserialize(self, f):
pass
def serialize(self):
return b""
def __repr__(self):
return "msg_mempool()"
class msg_notfound:
__slots__ = ("vec", )
msgtype = b"notfound"
def __init__(self, vec=None):
self.vec = vec or []
def deserialize(self, f):
self.vec = deser_vector(f, CInv)
def serialize(self):
return ser_vector(self.vec)
def __repr__(self):
return "msg_notfound(vec=%s)" % (repr(self.vec))
class msg_sendheaders:
__slots__ = ()
msgtype = b"sendheaders"
def __init__(self):
pass
def deserialize(self, f):
pass
def serialize(self):
return b""
def __repr__(self):
return "msg_sendheaders()"
class msg_sendheaders2:
__slots__ = ()
msgtype = b"sendheaders2"
def __init__(self):
pass
def deserialize(self, f):
pass
def serialize(self):
return b""
def __repr__(self):
return "msg_sendheaders2()"
# getheaders message has
# number of entries
# vector of hashes
# hash_stop (hash of last desired block header, 0 to get as many as possible)
class msg_getheaders:
__slots__ = ("hashstop", "locator",)
msgtype = b"getheaders"
def __init__(self):
self.locator = CBlockLocator()
self.hashstop = 0
def deserialize(self, f):
self.locator = CBlockLocator()
self.locator.deserialize(f)
self.hashstop = deser_uint256(f)
def serialize(self):
r = b""
r += self.locator.serialize()
r += ser_uint256(self.hashstop)
return r
def __repr__(self):
return "msg_getheaders(locator=%s, stop=%064x)" \
% (repr(self.locator), self.hashstop)
# same as msg_getheaders, but to request the headers compressed
class msg_getheaders2:
__slots__ = ("hashstop", "locator",)
msgtype = b"getheaders2"
def __init__(self):
self.locator = CBlockLocator()
self.hashstop = 0
def deserialize(self, f):
self.locator = CBlockLocator()
self.locator.deserialize(f)
self.hashstop = deser_uint256(f)
def serialize(self):
r = b""
r += self.locator.serialize()
r += ser_uint256(self.hashstop)
return r
def __repr__(self):
return "msg_getheaders2(locator=%s, stop=%064x)" \
% (repr(self.locator), self.hashstop)
# headers message has
# <count> <vector of block headers>
class msg_headers:
__slots__ = ("headers",)
msgtype = b"headers"
def __init__(self, headers=None):
self.headers = headers if headers is not None else []
def deserialize(self, f):
# comment in dashd indicates these should be deserialized as blocks
blocks = deser_vector(f, CBlock)
for x in blocks:
self.headers.append(CBlockHeader(x))
def serialize(self):
blocks = [CBlock(x) for x in self.headers]
return ser_vector(blocks)
def __repr__(self):
return "msg_headers(headers=%s)" % repr(self.headers)
# headers2 message has
# <count> <vector of compressed block headers>
class msg_headers2:
__slots__ = ("headers",)
msgtype = b"headers2"
def __init__(self, headers=None):
self.headers = headers if headers is not None else []
def deserialize(self, f):
self.headers = deser_vector(f, CompressibleBlockHeader)
last_unique_versions = []
for idx in range(len(self.headers)):
self.headers[idx].uncompress(self.headers[:idx], last_unique_versions)
def serialize(self):
last_unique_versions = []
for idx in range(len(self.headers)):
self.headers[idx].compress(self.headers[:idx], last_unique_versions)
return ser_vector(self.headers)
def __repr__(self):
return "msg_headers2(headers=%s)" % repr(self.headers)
class msg_merkleblock:
__slots__ = ("merkleblock",)
msgtype = b"merkleblock"
def __init__(self, merkleblock=None):
if merkleblock is None:
self.merkleblock = CMerkleBlock()
else:
self.merkleblock = merkleblock
def deserialize(self, f):
self.merkleblock.deserialize(f)
def serialize(self):
return self.merkleblock.serialize()
def __repr__(self):
return "msg_merkleblock(merkleblock=%s)" % (repr(self.merkleblock))
class msg_filterload:
__slots__ = ("data", "nHashFuncs", "nTweak", "nFlags")
msgtype = b"filterload"
def __init__(self, data=b'00', nHashFuncs=0, nTweak=0, nFlags=0):
self.data = data
self.nHashFuncs = nHashFuncs
self.nTweak = nTweak
self.nFlags = nFlags
def deserialize(self, f):
self.data = deser_string(f)
self.nHashFuncs = struct.unpack("<I", f.read(4))[0]
self.nTweak = struct.unpack("<I", f.read(4))[0]
self.nFlags = struct.unpack("<B", f.read(1))[0]
def serialize(self):
r = b""
r += ser_string(self.data)
r += struct.pack("<I", self.nHashFuncs)
r += struct.pack("<I", self.nTweak)
r += struct.pack("<B", self.nFlags)
return r
def __repr__(self):
return "msg_filterload(data={}, nHashFuncs={}, nTweak={}, nFlags={})".format(
self.data, self.nHashFuncs, self.nTweak, self.nFlags)
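
# Illustrative sketch of a BIP37 filter: the data field is a byte-aligned bit
# field (at most 36,000 bytes per BIP37) followed by the hash-function count,
# tweak and flags. The values here are arbitrary.
def _example_tiny_filterload():
    return msg_filterload(data=b'\x00\x0f', nHashFuncs=1, nTweak=0, nFlags=1)
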
class msg_filteradd:
__slots__ = ("data")
msgtype = b"filteradd"
def __init__(self, data):
self.data = data
def deserialize(self, f):
self.data = deser_string(f)
def serialize(self):
r = b""
r += ser_string(self.data)
return r
def __repr__(self):
return "msg_filteradd(data={})".format(self.data)
class msg_filterclear:
__slots__ = ()
msgtype = b"filterclear"
def __init__(self):
pass
def deserialize(self, f):
pass
def serialize(self):
return b""
def __repr__(self):
return "msg_filterclear()"
class msg_sendcmpct:
__slots__ = ("announce", "version")
msgtype = b"sendcmpct"
def __init__(self, announce=False, version=1):
self.announce = announce
self.version = version
def deserialize(self, f):
self.announce = struct.unpack("<?", f.read(1))[0]
self.version = struct.unpack("<Q", f.read(8))[0]
def serialize(self):
r = b""
r += struct.pack("<?", self.announce)
r += struct.pack("<Q", self.version)
return r
def __repr__(self):
return "msg_sendcmpct(announce=%s, version=%lu)" % (self.announce, self.version)
class msg_cmpctblock:
__slots__ = ("header_and_shortids",)
msgtype = b"cmpctblock"
def __init__(self, header_and_shortids = None):
self.header_and_shortids = header_and_shortids
def deserialize(self, f):
self.header_and_shortids = P2PHeaderAndShortIDs()
self.header_and_shortids.deserialize(f)
def serialize(self):
r = b""
r += self.header_and_shortids.serialize()
return r
def __repr__(self):
return "msg_cmpctblock(HeaderAndShortIDs=%s)" % repr(self.header_and_shortids)
class msg_getblocktxn:
__slots__ = ("block_txn_request",)
msgtype = b"getblocktxn"
def __init__(self):
self.block_txn_request = None
def deserialize(self, f):
self.block_txn_request = BlockTransactionsRequest()
self.block_txn_request.deserialize(f)
def serialize(self):
r = b""
r += self.block_txn_request.serialize()
return r
def __repr__(self):
return "msg_getblocktxn(block_txn_request=%s)" % (repr(self.block_txn_request))
class msg_blocktxn:
__slots__ = ("block_transactions",)
msgtype = b"blocktxn"
def __init__(self):
self.block_transactions = BlockTransactions()
def deserialize(self, f):
self.block_transactions.deserialize(f)
def serialize(self):
r = b""
r += self.block_transactions.serialize()
return r
def __repr__(self):
return "msg_blocktxn(block_transactions=%s)" % (repr(self.block_transactions))
class msg_getmnlistd:
__slots__ = ("baseBlockHash", "blockHash",)
msgtype = b"getmnlistd"
def __init__(self, baseBlockHash=0, blockHash=0):
self.baseBlockHash = baseBlockHash
self.blockHash = blockHash
def deserialize(self, f):
self.baseBlockHash = deser_uint256(f)
self.blockHash = deser_uint256(f)
def serialize(self):
r = b""
r += ser_uint256(self.baseBlockHash)
r += ser_uint256(self.blockHash)
return r
def __repr__(self):
return "msg_getmnlistd(baseBlockHash=%064x, blockHash=%064x)" % (self.baseBlockHash, self.blockHash)
QuorumId = namedtuple('QuorumId', ['llmqType', 'quorumHash'])
class msg_mnlistdiff:
__slots__ = ("baseBlockHash", "blockHash", "merkleProof", "cbTx", "nVersion", "deletedMNs", "mnList", "deletedQuorums", "newQuorums", "quorumsCLSigs")
msgtype = b"mnlistdiff"
def __init__(self):
self.baseBlockHash = 0
self.blockHash = 0
self.merkleProof = CPartialMerkleTree()
self.cbTx = None
self.nVersion = 0
self.deletedMNs = []
self.mnList = []
self.deletedQuorums = []
self.newQuorums = []
self.quorumsCLSigs = {}
def deserialize(self, f):
self.nVersion = struct.unpack("<H", f.read(2))[0]
self.baseBlockHash = deser_uint256(f)
self.blockHash = deser_uint256(f)
self.merkleProof.deserialize(f)
self.cbTx = CTransaction()
self.cbTx.deserialize(f)
self.cbTx.rehash()
self.deletedMNs = deser_uint256_vector(f)
self.mnList = []
for _ in range(deser_compact_size(f)):
e = CSimplifiedMNListEntry()
e.deserialize(f)
self.mnList.append(e)
self.deletedQuorums = []
for _ in range(deser_compact_size(f)):
llmqType = struct.unpack("<B", f.read(1))[0]
quorumHash = deser_uint256(f)
self.deletedQuorums.append(QuorumId(llmqType, quorumHash))
self.newQuorums = []
for _ in range(deser_compact_size(f)):
qc = CFinalCommitment()
qc.deserialize(f)
self.newQuorums.append(qc)
self.quorumsCLSigs = {}
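        # Each quorumsCLSigs entry on the wire is a 96-byte BLS chainlock
        # signature followed by a compact-size-prefixed list of uint16
        # indexes into self.newQuorums that share that signature.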
for _ in range(deser_compact_size(f)):
signature = f.read(96)
idx_set = set()
for _ in range(deser_compact_size(f)):
                set_element = struct.unpack("<H", f.read(2))[0]
idx_set.add(set_element)
self.quorumsCLSigs[signature] = idx_set
def __repr__(self):
return "msg_mnlistdiff(baseBlockHash=%064x, blockHash=%064x)" % (self.baseBlockHash, self.blockHash)
class msg_clsig:
__slots__ = ("height", "blockHash", "sig",)
msgtype = b"clsig"
def __init__(self, height=0, blockHash=0, sig=b'\x00' * 96):
self.height = height
self.blockHash = blockHash
self.sig = sig
def deserialize(self, f):
self.height = struct.unpack('<i', f.read(4))[0]
self.blockHash = deser_uint256(f)
self.sig = f.read(96)
def serialize(self):
r = b""
r += struct.pack('<i', self.height)
r += ser_uint256(self.blockHash)
r += self.sig
return r
def __repr__(self):
return "msg_clsig(height=%d, blockHash=%064x)" % (self.height, self.blockHash)
class msg_isdlock:
__slots__ = ("nVersion", "inputs", "txid", "cycleHash", "sig")
msgtype = b"isdlock"
def __init__(self, nVersion=1, inputs=None, txid=0, cycleHash=0, sig=b'\x00' * 96):
self.nVersion = nVersion
self.inputs = inputs if inputs is not None else []
self.txid = txid
self.cycleHash = cycleHash
self.sig = sig
def deserialize(self, f):
self.nVersion = struct.unpack("<B", f.read(1))[0]
self.inputs = deser_vector(f, COutPoint)
self.txid = deser_uint256(f)
self.cycleHash = deser_uint256(f)
self.sig = f.read(96)
def serialize(self):
r = b""
r += struct.pack("<B", self.nVersion)
r += ser_vector(self.inputs)
r += ser_uint256(self.txid)
r += ser_uint256(self.cycleHash)
r += self.sig
return r
def __repr__(self):
return "msg_isdlock(nVersion=%d, inputs=%s, txid=%064x, cycleHash=%064x)" % \
(self.nVersion, repr(self.inputs), self.txid, self.cycleHash)
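# Illustrative sketch: an InstantSend deterministic lock over two inputs,
# using placeholder hashes. COutPoint is defined earlier in this file.
#
#   lock = msg_isdlock(nVersion=1,
#                      inputs=[COutPoint(0xaa, 0), COutPoint(0xbb, 1)],
#                      txid=0xcc, cycleHash=0xdd, sig=b"\x00" * 96)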
class msg_qsigshare:
__slots__ = ("sig_shares",)
msgtype = b"qsigshare"
def __init__(self, sig_shares=None):
self.sig_shares = sig_shares if sig_shares is not None else []
def deserialize(self, f):
self.sig_shares = deser_vector(f, CSigShare)
def serialize(self):
r = b""
r += ser_vector(self.sig_shares)
return r
def __repr__(self):
return "msg_qsigshare(sigShares=%d)" % (len(self.sig_shares))
class msg_qwatch:
__slots__ = ()
msgtype = b"qwatch"
def __init__(self):
pass
def deserialize(self, f):
pass
def serialize(self):
return b""
def __repr__(self):
return "msg_qwatch()"
class msg_qgetdata:
__slots__ = ("quorum_hash", "quorum_type", "data_mask", "protx_hash")
msgtype = b"qgetdata"
def __init__(self, quorum_hash=0, quorum_type=-1, data_mask=0, protx_hash=0):
self.quorum_hash = quorum_hash
self.quorum_type = quorum_type
self.data_mask = data_mask
self.protx_hash = protx_hash
def deserialize(self, f):
self.quorum_type = struct.unpack("<B", f.read(1))[0]
self.quorum_hash = deser_uint256(f)
self.data_mask = struct.unpack("<H", f.read(2))[0]
self.protx_hash = deser_uint256(f)
def serialize(self):
r = b""
r += struct.pack("<B", self.quorum_type)
r += ser_uint256(self.quorum_hash)
r += struct.pack("<H", self.data_mask)
r += ser_uint256(self.protx_hash)
return r
def __repr__(self):
return "msg_qgetdata(quorum_hash=%064x, quorum_type=%d, data_mask=%d, protx_hash=%064x)" % (
self.quorum_hash,
self.quorum_type,
self.data_mask,
self.protx_hash)
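# Illustrative sketch: the data_mask bits select what QDATA should return
# (mirroring msg_qdata below): 0x01 requests the quorum verification vector,
# 0x02 the encrypted secret key contributions. Hash values and the llmqType
# are placeholders.
#
#   req = msg_qgetdata(quorum_hash=0x99, quorum_type=100,
#                      data_mask=0x01 | 0x02, protx_hash=0x42)
#   peer.send_message(req)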
class msg_qdata:
__slots__ = ("quorum_hash", "quorum_type", "data_mask", "protx_hash", "error", "quorum_vvec", "enc_contributions",)
msgtype = b"qdata"
def __init__(self):
self.quorum_type = 0
self.quorum_hash = 0
self.data_mask = 0
self.protx_hash = 0
self.error = 0
self.quorum_vvec = list()
self.enc_contributions = list()
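    # Payload layout (matching deserialize/serialize below): the fixed header
    # (quorum_type, quorum_hash, data_mask, protx_hash, error) is always
    # present; quorum_vvec and enc_contributions follow only when error == 0
    # and the corresponding data_mask bit (0x01 / 0x02) is set.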
def deserialize(self, f):
self.quorum_type = struct.unpack("<B", f.read(1))[0]
self.quorum_hash = deser_uint256(f)
self.data_mask = struct.unpack("<H", f.read(2))[0]
self.protx_hash = deser_uint256(f)
self.error = struct.unpack("<B", f.read(1))[0]
if self.error == 0:
if self.data_mask & 0x01:
self.quorum_vvec = deser_vector(f, CBLSPublicKey)
if self.data_mask & 0x02:
self.enc_contributions = deser_vector(f, CBLSIESEncryptedSecretKey)
def serialize(self):
r = b""
r += struct.pack("<B", self.quorum_type)
r += ser_uint256(self.quorum_hash)
r += struct.pack("<H", self.data_mask)
r += ser_uint256(self.protx_hash)
r += struct.pack("<B", self.error)
if self.error == 0:
if self.data_mask & 0x01:
r += ser_vector(self.quorum_vvec)
if self.data_mask & 0x02:
r += ser_vector(self.enc_contributions)
return r
def __repr__(self):
return "msg_qdata(error=%d, quorum_vvec=%d, enc_contributions=%d)" % (self.error, len(self.quorum_vvec),
len(self.enc_contributions))
class msg_getcfilters:
__slots__ = ("filter_type", "start_height", "stop_hash")
msgtype = b"getcfilters"
def __init__(self, filter_type=None, start_height=None, stop_hash=None):
self.filter_type = filter_type
self.start_height = start_height
self.stop_hash = stop_hash
def deserialize(self, f):
self.filter_type = struct.unpack("<B", f.read(1))[0]
self.start_height = struct.unpack("<I", f.read(4))[0]
self.stop_hash = deser_uint256(f)
def serialize(self):
r = b""
r += struct.pack("<B", self.filter_type)
r += struct.pack("<I", self.start_height)
r += ser_uint256(self.stop_hash)
return r
def __repr__(self):
return "msg_getcfilters(filter_type={:#x}, start_height={}, stop_hash={:x})".format(
self.filter_type, self.start_height, self.stop_hash)
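# Illustrative sketch: a BIP157 request for basic (filter_type 0) compact
# block filters over a height range; stop_hash is a placeholder.
#
#   peer.send_message(msg_getcfilters(filter_type=0, start_height=1,
#                                     stop_hash=0x5f))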
class msg_cfilter:
__slots__ = ("filter_type", "block_hash", "filter_data")
msgtype = b"cfilter"
def __init__(self, filter_type=None, block_hash=None, filter_data=None):
self.filter_type = filter_type
self.block_hash = block_hash
self.filter_data = filter_data
def deserialize(self, f):
self.filter_type = struct.unpack("<B", f.read(1))[0]
self.block_hash = deser_uint256(f)
self.filter_data = deser_string(f)
def serialize(self):
r = b""
r += struct.pack("<B", self.filter_type)
r += ser_uint256(self.block_hash)
r += ser_string(self.filter_data)
return r
def __repr__(self):
return "msg_cfilter(filter_type={:#x}, block_hash={:x})".format(
self.filter_type, self.block_hash)
class msg_getcfheaders:
__slots__ = ("filter_type", "start_height", "stop_hash")
msgtype = b"getcfheaders"
    def __init__(self, filter_type=None, start_height=None, stop_hash=None):
        self.filter_type = filter_type
        self.start_height = start_height
        self.stop_hash = stop_hash

    def deserialize(self, f):
        self.filter_type = struct.unpack("<B", f.read(1))[0]
        self.start_height = struct.unpack("<I", f.read(4))[0]
        self.stop_hash = deser_uint256(f)

    def serialize(self):
        r = b""
        r += struct.pack("<B", self.filter_type)
        r += struct.pack("<I", self.start_height)
        r += ser_uint256(self.stop_hash)
        return r

    def __repr__(self):
        return "msg_getcfheaders(filter_type={:#x}, start_height={}, stop_hash={:x})".format(
            self.filter_type, self.start_height, self.stop_hash)
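
# BIP157: cfheaders is the response to getcfheaders; it carries the filter
# header immediately preceding the requested range (prev_header) and the
# filter hashes for each block in the range, from which the filter headers
# can be derived.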
class msg_cfheaders:
    __slots__ = ("filter_type", "stop_hash", "prev_header", "hashes")
    msgtype = b"cfheaders"

    def __init__(self, filter_type=None, stop_hash=None, prev_header=None, hashes=None):
        self.filter_type = filter_type
        self.stop_hash = stop_hash
        self.prev_header = prev_header
        self.hashes = hashes

    def deserialize(self, f):
        self.filter_type = struct.unpack("<B", f.read(1))[0]
        self.stop_hash = deser_uint256(f)
        self.prev_header = deser_uint256(f)
        self.hashes = deser_uint256_vector(f)

    def serialize(self):
        r = b""
        r += struct.pack("<B", self.filter_type)
        r += ser_uint256(self.stop_hash)
        r += ser_uint256(self.prev_header)
        r += ser_uint256_vector(self.hashes)
        return r

    def __repr__(self):
        return "msg_cfheaders(filter_type={:#x}, stop_hash={:x})".format(
            self.filter_type, self.stop_hash)
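
# BIP157: getcfcheckpt requests evenly spaced filter header checkpoints on
# the chain ending at stop_hash.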
class msg_getcfcheckpt:
    __slots__ = ("filter_type", "stop_hash")
    msgtype = b"getcfcheckpt"

    def __init__(self, filter_type=None, stop_hash=None):
        self.filter_type = filter_type
        self.stop_hash = stop_hash

    def deserialize(self, f):
        self.filter_type = struct.unpack("<B", f.read(1))[0]
        self.stop_hash = deser_uint256(f)

    def serialize(self):
        r = b""
        r += struct.pack("<B", self.filter_type)
        r += ser_uint256(self.stop_hash)
        return r

    def __repr__(self):
        return "msg_getcfcheckpt(filter_type={:#x}, stop_hash={:x})".format(
            self.filter_type, self.stop_hash)
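
# BIP157: cfcheckpt is the response to getcfcheckpt; it carries filter
# headers at 1000-block intervals.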
class msg_cfcheckpt:
    __slots__ = ("filter_type", "stop_hash", "headers")
    msgtype = b"cfcheckpt"

    def __init__(self, filter_type=None, stop_hash=None, headers=None):
        self.filter_type = filter_type
        self.stop_hash = stop_hash
        self.headers = headers

    def deserialize(self, f):
        self.filter_type = struct.unpack("<B", f.read(1))[0]
        self.stop_hash = deser_uint256(f)
        self.headers = deser_uint256_vector(f)

    def serialize(self):
        r = b""
        r += struct.pack("<B", self.filter_type)
        r += ser_uint256(self.stop_hash)
        r += ser_uint256_vector(self.headers)
        return r

    def __repr__(self):
        return "msg_cfcheckpt(filter_type={:#x}, stop_hash={:x})".format(
            self.filter_type, self.stop_hash)
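
# Illustrative round-trip sketch for the BIP157 messages above (not executed
# by the framework; the field values are made up). serialize() and
# deserialize() are inverses, e.g. for msg_getcfcheckpt:
#
#   msg = msg_getcfcheckpt(filter_type=0, stop_hash=0x0f0f)
#   parsed = msg_getcfcheckpt()
#   parsed.deserialize(BytesIO(msg.serialize()))
#   assert (parsed.filter_type, parsed.stop_hash) == (0, 0x0f0f)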